Antonis
On my side, I have seen two challenges when trying to apply this framework:
- Do not overload the team: implementation moves fast, in parallel with our discovery efforts, and we certainly don't want to hurt team velocity. Initially, we held too many meetings for our engineers, but along the way we optimized the time we dedicate so that we stay both productive and engaged.
- Educate the stakeholders: changing your team's mindset and way of prioritization is a big change. People who were used to bringing in requests and asking for a delivery date have to be educated towards the new prioritization model. This takes time and needs to be explained in detail, but it seems that if the framework's foundation is solid, it is easy to understand and adopt.
Evelyn
Finding the right balance between usual day-to-day tasks and discovery activities:
A real challenge for all team members is tackling the tickets of the running sprint while interrupting this usual flow of day-to-day work to jump into discovery activities such as group discovery sessions, live design sessions, interview calls or usability testing sessions. Sometimes we overcommitted on what could be delivered; other times the discovery activities took longer than expected. Getting used to the idea of ‘continuous product discovery’ with a ‘quick and dirty’ mindset teaches us how to find the right balance. One important thing here is knowing when to delegate certain activities to other members of the team.
Marianthi
- Educate the team to avoid jumping to solutions or chasing after competitors, and instead focus on the challenges we have to overcome: understanding user needs and proposing ideas that will solve our users' pain points.
- Find a way to move quickly from opportunities to solutions and experiments, and get feedback from stakeholders and customers: the design iteration phase can be tricky and time-consuming. Knowing when is the right time to test a prototype with users, and how to gather insights during discovery sessions and translate them into design solutions, was undoubtedly challenging.
For us, these "feature" requests are important and helpful for the product. Ideation can be company-wide; the more you hear, the better. We just want to shape the way requests arise and the way they are prioritized, and for that we focus on two main points:
- Bring us problems, not solutions: anything that comes to the product team should be based on a problem we want to solve. Random ideation rarely proves helpful and can mislead a team. For that reason, every time a relevant request comes in, we try to uncover the actual need behind it before adding it to our backlog.
- Educate our stakeholders about the way features are prioritized. We have established a solid prioritization flow: an initial exploration of topics close to our business drivers, followed by ranking according to their score (value brought vs. effort required). We keep showcasing this process to our stakeholders, so that every incoming request doesn't need to be paired with a specific delivery deadline.
The first part of any problem we would like to solve is to find an idea and then set the context together with the team: what is the problem we are solving, what do we want to achieve, and what does success mean for us. This context is the foundation for the PM, who takes the success this feature could bring and translates it into monetary value. So, if we are tackling business efficiency, the PM will look at how much time we currently spend on the part we are automating, check how many times it is triggered, and calculate the time saved on a yearly basis. This time is then translated, based on our average department salary, into monetary value and a score for the feature. The same process is followed for Revenue, NPS or any other driver we may have.
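To make the arithmetic concrete, here is a minimal sketch of that business-efficiency calculation. All names and figures below (the working-hours constant, the salary, the trigger count, the effort estimate) are hypothetical placeholders for illustration, not our actual numbers:

```python
# Minimal sketch of the business-efficiency scoring described above.
# All constants and figures are hypothetical placeholders.

WORK_HOURS_PER_YEAR = 8 * 220  # assumed working hours per year

def yearly_value_saved(minutes_per_run: float,
                       runs_per_year: int,
                       avg_yearly_salary: float) -> float:
    """Translate the time an automation saves into yearly monetary value."""
    hours_saved = minutes_per_run * runs_per_year / 60
    hourly_rate = avg_yearly_salary / WORK_HOURS_PER_YEAR
    return hours_saved * hourly_rate

def priority_score(value: float, effort_person_days: float) -> float:
    """Value brought vs. effort required, as in the prioritization flow."""
    return value / effort_person_days

# Example: a 15-minute task triggered 1,000 times a year in a department
# with an average yearly salary of 50,000, estimated at 20 days of effort.
value = yearly_value_saved(15, 1000, 50_000)   # ~7,100 per year
print(priority_score(value, effort_person_days=20))
```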
Regarding post-release value measurement, it actually depends on the feature. We want to make sure that before any launch we have clearly defined the metrics we need, in order to be able to measure the feature's success. After the release, we keep an eye on these metrics and check how they evolve, until they are mature enough to be compared against our pre-release metrics and tell us how accurate our evaluation was.
Every single week we run 1-to-1 interviews with customers to get a deeper understanding of their current experience with online booking via our website. This is our generative research, which helps us recognise patterns and the pain points many of our customers face with the current experience. This is when we discover new opportunities and understand where we need to shift our focus next. This part of the process therefore helps us prioritise the elements of the booking process that need more detailed research, as well as the new problems to explore in the following quarter.
When it comes to evaluative research, the prioritisation of what to research next depends on the initial problems we’ve set. Those are problems we define at the beginning of a quarter (from the insights we get during our generative research).
After completing group discovery activities such as lightning demos for each of the problems, we move to live design sessions for some of them. This is the point where we normally realise that we may need further user input for certain problem areas, either because we need more information or because we have come up with a brand new idea that we need to test to ensure it meets our customers' needs.
During the develop process, we run group live design sessions and produce initial recommendations; sometimes we even come up with more than one idea for a certain concept. There are times when we want to test these initial ideas, so we move on with rapid lo-fi prototypes that we can put in front of our customers to get initial feedback.
These prototypes normally test customers' understanding of a new concept, section or copy change we have introduced, as well as their expectations around a new experience. More specifically, we want to run tests that allow us to understand things such as whether our customers will notice and interact with a new element we've added to the experience, how clear the copy is for a new concept we are presenting, or how they find a new layout we've recommended for a certain section of a page.
A prototype can be tested during an interview session. We'd normally spend the last 15 minutes of an interview sharing a link to a prototype and asking our customers to perform a simple task. We then discuss further with them, asking additional questions to uncover more insights around the idea we're testing.
When we are at a stage of the discovery process where we are happy with a design recommendation but want further user input, we also create hi-fi prototypes and run dedicated 45-minute usability testing sessions with customers (normally 6-8 people).
These prototypes cover bigger parts of the end-to-end booking experience, including the new recommendations we want to test. We provide a realistic scenario to our customers and ask them to perform one or two tasks, observing them while they interact with the prototype and think aloud. The purpose is to test a certain part of the booking experience in the context of a realistic scenario and confirm our assumptions about a certain design recommendation.
Having your own business drivers has proved really valuable to us for multiple reasons:
- You can set drivers that are tailored to your business and have actual data with which to measure them.
- You are free to update or replace them along the way if you see there is a better fit.
- Most important: the whole company speaks the same language. All teams are engaged with these drivers, as they are not product-specific, and we can effectively communicate the value we bring to the business.
This framework started as an internal exercise between the Product Manager, Product Designer and Software Engineers. During discovery sessions, in an effort to determine whether potential solutions could improve our online booking flow, we realised we lacked stakeholders' ideas and input to start designing appropriate solutions before testing them with users. Therefore, we decided to invite them to our discovery sessions, right before the ideation phase, in order to collect more insights around business objectives and the problems we want to solve.
Moreover, what we considered a challenge rather than a limitation was the fact that during live design sessions many ideas pop up, and deciding which of them to test was quite tricky. We'd normally choose the 2-3 concepts we consider most impactful, proceed with rapid lo-fi prototypes, get initial feedback from customers and validate whether our assumptions were correct.
We run 1-to-1 interviews to explore the unique experience of one customer at a time. The purpose is to uncover everything that a single customer had to face during their online experience via our website.
Running multiple 1-to-1 interviews brings us to a stage where we start uncovering similar patterns in the ways people think or act. Through our insights analysis, we focus our attention on the patterns that might bring us gains or solve pains.
The recruiting process we have set up plays a key role in making sure we talk to the right segment of users. Our interviews cover questions around the end-to-end booking experience via the Blueground website. The people who sign up for these interview sessions are customers who have recently booked an apartment via the website, so the booking process is fresh in their minds. They can be either new or existing customers: people from different backgrounds and nationalities, with different stories to share.
This framework has served us very well in this regard, and it is one of the main reasons we implemented it. Our prioritization score is based on the value each feature brings to specific business drivers, taking into account the company's direction too (the drivers the company wants to focus on most). So if, for example, we are focusing on Revenue, our NPS features will rank really low for the next quarter. Now say the business changes its focus and we want to improve our NPS, so it becomes the most important driver. Our backlog evaluation is automatically updated and our NPS features rank much higher. As a result, we are able to quickly shift and pick the most valuable features at all times.
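As an illustration of that automatic re-ranking, here is a small sketch of a driver-weighted backlog score. The feature names, per-driver values and weights are invented for the example; they are not our actual scoring data:

```python
# Illustrative sketch: each feature is scored per business driver, and
# company-level weights decide which driver matters most right now.
# Feature names, driver values and weights are invented examples.

features = {
    "automated invoicing":   {"revenue": 8, "nps": 1, "efficiency": 6},
    "booking flow redesign": {"revenue": 2, "nps": 9, "efficiency": 1},
}

def rank(features, driver_weights):
    """Rank features by the weighted sum of their per-driver values."""
    scored = {
        name: sum(driver_weights.get(driver, 0) * value
                  for driver, value in values.items())
        for name, values in features.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Company focus on Revenue: the invoicing feature ranks first.
print(rank(features, {"revenue": 1.0, "nps": 0.2, "efficiency": 0.5}))
# Focus shifts to NPS: the backlog re-ranks automatically.
print(rank(features, {"revenue": 0.2, "nps": 1.0, "efficiency": 0.5}))
```

Changing only the weights re-orders the backlog, which mirrors how a shift in company focus immediately changes which features surface as most valuable.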
We normally work on potential opportunities rather than "features". The concept of this framework is that opportunities with high user and business impact drive our roadmap, whether they are content updates, removals or updates of existing flows, etc. The main goal is to optimize our online booking flow, not necessarily to create something from scratch. Based on the feedback we receive, and taking business drivers into consideration, we prioritize ideas and proceed with potential solutions and experiments.