Kick-start your continuous user research in 5 steps
uxdesign.cc
Identify and map your key user feedback sources to streamline your research process and maximize impact!

It's been a while since I last wrote an article, and I must admit I've missed sharing what I learn daily with the design community. For this one, I want to walk you through how my team and I set up a Continuous Research process at OpenClassrooms, and how it helped us deliver better insights to feed the roadmaps and ultimately improve our users' experience. By the end of this article, you should be able to do the same for your product!

Let's dive in!

But wait! Before starting: why should we do this?

You're right to ask that question! :)

One of my core beliefs is that User Research should have a strong impact within a company while remaining efficient and time-effective. Unfortunately, this is not always the case: in some companies, user research can slow down decision-making or lead to poor strategic choices.

The main objectives of Continuous Research are:

- Regularly and easily gathering user feedback on strategic parts of the product
- Consolidating knowledge about users
- Enabling user-centric decision-making

Collecting user feedback continuously helps prioritize the roadmap, improve the user experience, and speed up iteration cycles.

Step 1: Identify Existing Feedback Sources

Have you ever noticed that User Research is very similar to data analysis? Just like with data, structuring and cleaning your sources before extracting valuable and actionable insights is crucial.

You can collect feedback from various sources (Trustpilot, App Store, Zendesk, surveys)

To start your Continuous Research process, here are the questions you need to ask yourself:

- Who in the company already has regular contact with users? (e.g., Sales, Customer Success)
- Do they already collect regular feedback, and how? (e.g., surveys, CSAT, calls)
- How do these sources perform? (e.g., number of monthly responses)
- Is this feedback regularly analyzed, and by whom?
- What is the purpose of these feedback sources?
- What kind of insights do they generate?

From these sources, identify the ones relevant for Continuous Research, eliminate duplicates, and remove those that don't provide real value. Favor sources that will bring relevant insights over the long term, and clean up redundant or low-value feedback sources that lack clear ownership or objectives.

Step 2: Map the Feedback Sources

Now that you have identified the most relevant feedback sources, you can start mapping them onto every stage of the User Journey to get a good overall view of what's available. You might also need to create new feedback sources at strategic touchpoints for business objectives that are not yet covered. Either way, I really recommend starting small and completing your mapping one step at a time.

At OpenClassrooms, we categorized the different types of sources feeding Continuous Research:

Different types of user feedback sources

Step Experience Surveys: feedback on a User Journey step

These are the primary feedback sources for squads' Continuous Research. We have one survey at the end of each step, composed of three main parts:

1. CSAT (Customer Satisfaction). It measures satisfaction with specific parts of the user experience. The question is "How satisfied are you with the [Step Name] process?"

2. CES (Customer Effort Score). It measures the effort required for specific actions. The question is "Completing this [Action or Step] seemed:" rated from "very easy" to "very difficult".

3. Qualitative feedback. This open question provides context for the ratings, helping us understand the why behind user scores and gather improvement ideas.

We add a question at the end to collect emails from users interested in improving the product, creating a user pool for testing and interviews!

Recently, we tested adding a question about step clarity to assess whether the available information is clear and useful. This will help us measure the impact of Content Design more precisely.

NPS surveys: feedback on the global experience

Net Promoter Score (NPS) measures customer loyalty toward the brand. The qualitative feedback informs the company about users' main concerns and expectations regarding their global experience with the product or service. The question is "Would you recommend [Name of the company]?"

NPS is widely debated, as it provides broad feedback that is not always directly actionable. However, categorizing responses into themes allows us to monitor changes in user satisfaction and dissatisfaction over time and better understand why.
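If you like working with the raw numbers, here is a minimal sketch (in Python) of how these three scores can be computed from survey answers. The scales and thresholds (1-5 for CSAT and CES, 0-10 for NPS, "satisfied" meaning a 4 or 5, promoters at 9-10 and detractors at 0-6) are common conventions and my own assumptions for the example, not something specific to our surveys, so adapt them to your own survey tool.

```python
# Minimal sketch: turning raw survey answers into the scores discussed above.
# Scale choices (CSAT/CES on 1-5, NPS on 0-10) are assumptions; adapt as needed.

def csat(ratings: list[int]) -> float:
    """Percentage of respondents who answered 4 or 5 ("satisfied"/"very satisfied") on a 1-5 scale."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return 100 * satisfied / len(ratings)

def ces(ratings: list[int]) -> float:
    """Average effort on a 1-5 scale, where 1 = "very easy" and 5 = "very difficult"."""
    return sum(ratings) / len(ratings)

def nps(ratings: list[int]) -> float:
    """Percentage of promoters (9-10) minus percentage of detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Example: answers to "How satisfied are you with the [Step Name] process?"
print(round(csat([5, 4, 3, 5, 2, 4]), 1))  # -> 66.7 (% satisfied)
```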
Temporary sources: feedback pre- or post-release

These are temporary sources, launched for a specific purpose and with a lifetime of 1 to 6 months. They include surveys, user tests, and interviews that help gather feedback on a particular topic.

Example of a specific survey for the Matching step. Illustration by Fabien Gouby

External sources

These are third-party sources like App Store reviews, Google ratings, or Trustpilot. While you don't control them, analyzing them helps validate insights and spot trends that align (or not) with your own insights.

Example of an external feedback source: Trustpilot

Get more information on Continuous Research in this article.

Step 3: Document Feedback Sources

Now that you've started mapping your sources, you can take it a step further by documenting each one more precisely:

- Objective: What do we want to measure? How does this feed into our strategy and inform decisions?
- Source: Where is it stored? In which tool?
- Format: How is the feedback collected? (e.g., in-product, email, call)
- Trigger: At what stage is feedback requested? Is there a specific trigger?
- Filter: Are specific user segments targeted?
- Number: How many responses are received monthly? Is the source performing well?

Card template to map your feedback sources

This makes it easy for everyone on the team to understand what sources exist and what they are used for. Clarifying this information helps teams understand existing sources and their purposes while ensuring easy access to raw data if needed. It also prevents redundant surveys that ask similar questions. And your users will thank you for that! :)

I strongly encourage you to do this in collaboration with other teams (e.g., marketing, support, sales). This mapping should also belong to them, and they should be able to help you update and improve it.

Step 4: Analyze Feedback Regularly

You now have feedback sources that are relevant, mapped, and documented: well done! But unanalyzed feedback has no value, agreed? Now, you need to ensure this material is analyzed regularly without overwhelming teams. At OpenClassrooms, we review feedback monthly, typically at the end of each month.

All feedback is automatically stored in a dedicated folder in our Research Repository:

List of continuous research projects

We can import multiple sources into the same folder, allowing us to cross-analyze user interviews, Step Experience surveys, and external sources. This enhances the reliability of insights by merging different perspectives.

Product Designers analyze feedback within their scope, tagging and categorizing it appropriately:

Feedback analysis in our Research Repository

This ensures Designers and Product Managers gain a deeper understanding of users' concerns while keeping the process efficient.

But this process of tagging content can be time-consuming. To optimize it further, we're experimenting with AI to assist in analyzing and automatically categorizing feedback. If you're interested in our first tests, check out this article: AI prompt - Analyse user feedback | Notion
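To give a feel for what that categorization step produces, here is a deliberately simplified, rule-based sketch. The theme names and keywords are purely illustrative (they are not our actual taxonomy), and a keyword match is only a rough stand-in for a Designer's judgment or an AI-assisted pass, but the output, a count of how often each theme appears in a month of feedback, is the same kind of artifact.

```python
# Simplified, rule-based stand-in for the tagging step described above.
# Themes and keywords are illustrative; a real taxonomy (or an LLM prompt, as we are
# experimenting with) would be tuned to your own product vocabulary.
from collections import Counter

THEMES = {
    "pricing": ["price", "expensive", "cost", "refund"],
    "navigation": ["find", "menu", "confusing", "lost"],
    "content_clarity": ["unclear", "understand", "explanation", "instructions"],
}

def tag_feedback(comment: str) -> list[str]:
    """Return every theme whose keywords appear in a piece of qualitative feedback."""
    text = comment.lower()
    tags = [theme for theme, keywords in THEMES.items()
            if any(word in text for word in keywords)]
    return tags or ["untagged"]  # flag comments that still need a manual pass

# Count how often each theme shows up in a month of feedback (sample comments)
monthly_feedback = [
    "The instructions for the application step were unclear",
    "Too expensive for what it offers",
]
counts = Counter(tag for comment in monthly_feedback for tag in tag_feedback(comment))
print(counts)  # e.g. Counter({'content_clarity': 1, 'pricing': 1})
```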
Step 5: Present Insights and Drive Decisions

These regular analyses help us generate valuable insights into user pain points and closely monitor satisfaction and effort over time.

Each month, Product Designers present their findings, combining them with Product Managers' insights and data. This forms what we call the Squads' Monthly Reports, which help prioritize roadmap topics based on criticality and business objectives. Findings are widely shared with the squads, stakeholders, and leadership members.

Depending on the objectives, we can present results in different ways:

Categorized feedback by user

Split of categories regarding our Alumni feedback

This allows teams to quickly identify patterns and trends. By comparing the proportion of different feedback themes, we can prioritize the most critical user concerns and ensure that decision-making is data-driven. It can impact the roadmap of all teams across the company, not just the Product team (e.g., Learning team, Mentorship team, Student Success team).

Top three pain points for a given step

3 main user pain points for the Application step

This can directly help prioritize the squad roadmap and identify key pain points that require deeper analysis or further exploration in the next product discovery phase.

Evolution of Satisfaction and Effort over time

This allows us to correlate user satisfaction with business objectives. For example, if application rates drop, we can examine the Satisfaction and Effort scores for this step and analyze the qualitative feedback to identify potential causes.

It also helps us assess the impact of new releases. For example, whenever we make a change to the funnel, we analyze our Application Experience KPI to determine whether it reduces effort and enhances satisfaction.

We can now track this evolution across all squads and have begun setting team goals based on those scores:

Example of Satisfaction evolution over time for 3 squads
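For readers who want to build this kind of trend view themselves, here is a small sketch using pandas. The data shape and column names are assumptions for the example; in practice the responses would come from an export of your survey tool or research repository.

```python
# Sketch: tracking Satisfaction (CSAT) and Effort (CES) per squad over time.
# Data shape and column names are assumptions for illustration only.
import pandas as pd

responses = pd.DataFrame({
    "month": ["2024-01", "2024-01", "2024-02", "2024-02"],
    "squad": ["Application", "Application", "Application", "Application"],
    "csat":  [4, 5, 3, 4],   # 1-5 satisfaction rating
    "ces":   [2, 1, 3, 3],   # 1-5 effort rating (lower = easier)
})

trend = (responses
         .groupby(["squad", "month"])
         .agg(satisfaction=("csat", "mean"), effort=("ces", "mean"), n=("csat", "size"))
         .reset_index())

print(trend)
# A drop in `satisfaction` or a rise in `effort` after a release is the cue
# to dig into the qualitative comments for that month.
```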
What's next?

When launching your first surveys, you'll likely find some ineffective. Some may not receive enough responses, while others may provide non-actionable feedback. This is normal: you won't get everything right on the first try! :)

You will need to regularly iterate on your feedback sources: add new ones, remove ineffective ones, and adjust placement, channels, or questions as needed. At OpenClassrooms, we continuously monitor feedback source performance and refine our approach accordingly.

Impact of Continuous Research on Product Strategy

- Time-saving: Collecting and analyzing feedback becomes easier. Teams no longer have to start from scratch.
- Deeper user insights: Ongoing analysis enhances user understanding and empathy.
- Cross-team impact: Insights influence not only product roadmaps but also Customer Success, Sales, and Marketing.
- Proactive issue detection: Monitoring scores helps identify trends and take corrective action before problems escalate.

Key numbers

- 10+ Continuous Research folders created
- 900+ feedback pieces analyzed monthly
- 20+ actionable insights generated each month, directly influencing product strategy

Want to go further? Check out my article on Research Repositories: How to start a UX Research Repository

Kick-start your continuous user research in 5 steps was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.