
Opening our minds to AI-moderated research
uxdesign.cc
How to confidently find yourself in the new research method.

I was talking to a former colleague about AI moderation for research sessions, where an AI moderator conducts qualitative research with a human respondent. Her reaction was equal parts disgusted and demoralized: "Uggggggh, crap."

I recognize that reaction: it's the unsettling feeling of AI threatening to take away something you do, something you love.

For qualitative researchers, this hits especially hard. We believe in our work, and we take pride in it. Look inside any qualitative researcher and you'll find a trophy cabinet of moments when they cracked the mysterious human code and uncovered an insight no one else saw.

AI moderation seems to devalue or jeopardize all of that. But I'd like to offer a different perspective.

Finding craft in the new method

I started experimenting with AI moderation about a year ago, at first to prove it wrong. But eventually, I found myself building my own AI moderator, DeepNeed, to prove it right.

I've trained many practitioners over the years. I even taught qualitative methods at MIT for two semesters. My thinking was: how would we feel about this technology if it respected our rules for the dance?

And in doing so, something in me shifted. I moved from a place of self-protection and defensiveness to one of gratitude for what this method affords us.

Why? Because I've seen it open doors: more opportunities for qual, more credibility, and more strategic impact.

Perhaps more importantly, I found that it still felt like our work. The craft was still there. And I still liked my part in it.

The methodological barriers

The first step in opening our minds is to address the three most common critiques of AI moderation:

1. AI moderators can't probe to uncover the deep why.
2. AI moderators miss emotional subtext and environmental context.
3. AI moderators can't build rapport.

Critique 1: AI moderators can't probe to uncover the deep why

There's something almost magical about a seasoned interviewer fluidly probing to uncover deeper motivations. When I was first learning, I remember watching veteran interviewers and thinking they had superpowers.

The general consensus is that AI can't do this well. As a recent Ipsos report put it:

"An AI moderator bot often behaves like a novice moderator that is constantly looking down at the discussion guide and, as a result, takes its eyes off the prize. It misses out on fertile opportunities to probe."

Of all the critiques, this is the one I disagree with most.

If an AI moderator fails to probe effectively, it's a design flaw, not a fundamental limitation of the technology. Like a human, AI has to be trained to move beyond the discussion guide and recognize rich moments worth exploring.

At DeepNeed, we use an agentic workflow, where a coding agent and an interview agent work together to determine where to probe. Here's an example from a recent interview:

[Image: "Artificial Probing," an example exchange from a recent interview]

It won't win a Pulitzer. But does it uncover the deeper why? Yes. And more often than not, I find myself thinking: that's exactly what I would have asked.
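To make the idea concrete, here is a minimal sketch of what such a two-agent loop could look like. Everything in it, the agent prompts, the `chat()` helper, and the control flow, is an illustrative assumption on my part rather than DeepNeed's actual implementation:

```python
# Hypothetical sketch of a two-agent probing loop (not DeepNeed's actual code).
# A "coding agent" tags each answer for rich moments; an "interview agent"
# either probes deeper or moves on to the next discussion-guide question.

def chat(system: str, user: str) -> str:
    """Placeholder for a call to any LLM chat API."""
    raise NotImplementedError

CODER_PROMPT = (
    "You are a qualitative coding agent. Given a respondent's answer, "
    "reply PROBE if it hints at an unexplored motivation, emotion, or "
    "contradiction worth digging into; otherwise reply MOVE_ON."
)

INTERVIEWER_PROMPT = (
    "You are an interview agent. Ask one open, non-leading follow-up "
    "question that uncovers the 'why' behind the respondent's last answer."
)

def next_question(guide: list[str], idx: int, last_answer: str) -> tuple[str, int]:
    decision = chat(CODER_PROMPT, last_answer).strip()
    if decision.startswith("PROBE"):
        # Stay on the current topic and dig deeper.
        return chat(INTERVIEWER_PROMPT, last_answer), idx
    # Nothing rich to chase: advance through the discussion guide.
    return guide[idx], idx + 1
```

The design choice worth noting is the separation of concerns: one agent judges whether an answer deserves a follow-up, so the other can focus entirely on asking a good one.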
Critique 2: AI moderators miss emotional subtext and environmental context

The second critique is that AI moderators lack the ability to interpret body language and use subtle cues to probe deeper. As Ipsos states:

"Seasoned moderators read between the lines, noticing hesitations, excitement, or discomfort and following up on those signals. Without those cues, there's a risk the research team misses subtext."

I agree that AI can miss these latent signals, but I don't see it as a dealbreaker.

When n = 10, picking up on these cues is critical to maintaining data integrity. But when n = 400, individual moments of hesitation or nuance tend to average out, revealing broader patterns at scale.

On top of that, many AI moderation platforms, such as Listen Labs, already incorporate video functionality, allowing human researchers to analyze subtext. Technologies like Affectiva's Emotion AI claim to detect complex emotional states, further bridging the gap.

The more significant limitation, in my view, is the lack of environmental context. Research interviews are often a blend of conversation and observation, something AI still struggles to replicate.

That said, the pandemic proved that in-person research isn't always necessary to capture context. Platforms like Dscout and WatchMeThink have pioneered virtual observation, allowing researchers to gather rich, real-world data remotely.

And this is just the beginning. AI companies are actively developing vision functionality. Ethan Mollick's demo of OpenAI's Live Mode suggests that LLMs will grow their ability to analyze real-time video, giving AI moderators the ability to interpret context visually.

If you're looking for a foolproof reason why AI moderation will never work, I wouldn't bet on this one.

Critique 3: AI moderators can't build rapport

The final argument is that AI can't create the trust needed for respondents to fully open up. As researchers, we take pride in our ability to mirror emotions, validate experiences, and create a safe space for honest reflection.

I don't disagree. In fact, at DeepNeed, we've worked hard to design ways for the AI to validate responses throughout an interview.

But I also question the premise. We may be underestimating how intimidating or invasive human-led interviews can feel, especially when compensation is involved.

Early research suggests that respondents often prefer AI moderators precisely because they perceive AI as non-judgmental. A London School of Economics study on French voters found that:

- 50% preferred an AI interviewer
- Only 15% preferred a human interviewer
- 35% were indifferent

According to the study, participants felt that:

"The AI is a non-judgmental entity; they could freely share their thoughts without fear of being judged."

This aligns with established psychology research showing that people disclose more honest and sensitive information to computer-based agents when they believe no human is observing them.

The very thing we assume is AI's weakness, its lack of human warmth, may actually be a strength when it comes to encouraging candid responses.
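For what it's worth, much of that validation work can live in prompt design. The fragment below is a hypothetical sketch of how validation behaviors might be encoded in an interview agent's system prompt; it is not DeepNeed's actual prompt:

```python
# Illustrative sketch only: one way validation behaviors might be encoded
# in an interview agent's system prompt. Not DeepNeed's actual prompt.
INTERVIEW_SYSTEM_PROMPT = """
You are a qualitative research interviewer. For every respondent turn:
1. Acknowledge: reflect back, in one short sentence, what they just shared.
2. Normalize: signal that their experience is valid ("That makes sense...").
3. Stay neutral: never evaluate, correct, or express surprise at an answer.
4. If the respondent hedges or apologizes, remind them there are no right
   or wrong answers before asking your next question.
"""
```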
The value barrier

I don't think the biggest barriers to AI moderation stem from these methodological critiques. The real issue is that we don't yet fully grasp its value. And how could we? The technology is still in its infancy.

As practitioners, we often treat traditional qualitative research like a Michelin-star dining experience: deep, intentional, and crafted with care. Table 3? You should have seen how their faces lit up at the ceviche!

At the other end of the spectrum, quantitative research is like Chipotle: quick, efficient, and mass-produced. It serves a purpose, but no one's walking away talking about how life-changing their burrito was.

[Image: Quant vs. Qual]

I believe the real value of AI moderation lies somewhere in between, perhaps like a local café or deli. Like a local café, AI moderation serves a unique purpose and has a reason to exist. It's not just about "fast, fast, fast!"; it's about leveraging scale and speed to create a distinct method in its own right, not just a poor imitation of traditional approaches.

Speed provides strategic bandwidth

Qualitative research is incredibly resource-intensive. It's easy to get so caught up in coding, debriefing, and insight formation that we lose sight of the core question we're meant to answer. Metaphorically, our stakeholders are asking for a fully cooked meal, but we deliver an organized report of perfectly chopped carrots.

On a recent project, we had just three weeks to deliver results before a board presentation, an impossibly tight turnaround for a traditional study. But with AI, we collected the data in two days, giving us ample time to think, refine, and iterate. We ended up reworking the core framework and story three times before delivering it, ensuring a much better outcome.

Speed provides more opportunities for research

This one is perhaps the most obvious: when research becomes faster (and more affordable), it becomes more accessible. Stakeholders stop seeing qualitative research as a slow, resource-intensive process and start to view it as a nimble tool they can deploy more frequently and more strategically.

We can try to force stakeholders to dine at our Michelin-star restaurant, but we need to realize they're increasingly choosing to skip the restaurant altogether.

Aaron Cannon, co-founder of Outset.ai, makes this point by drawing a parallel to the cost of data storage. As the price of computer memory and storage plummeted over the last 50 years, it didn't just make existing computing cheaper; it enabled entirely new innovations, like smartphones.

[Image: Outset AI]

Similarly, as the cost and time investment of qualitative research decreases, it doesn't just make research easier; it expands what's possible.

Scale provides a fuller picture

There's a common perception that AI moderation produces worse data, but that's not necessarily true. Scale doesn't just accelerate research; it uncovers insights and opportunities we might have missed in a smaller sample, or overlooked because they didn't immediately resonate with us.

For instance, in a recent study, we conducted 400 interviews with patients. We captured 300 hours of audio and identified 3,580 deep customer needs, which we clustered into 29 aggregate categories, organized by frequency.

A single AI-moderated interview might not feel as nuanced. But across hundreds of interviews, the sheer scale builds a richer, more complete picture.
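As a rough illustration of how thousands of coded needs could be rolled up into aggregate categories, here is one possible pipeline using sentence embeddings and k-means. The libraries, the fixed cluster count, and the `needs` placeholder are all my assumptions; the article doesn't describe the actual analysis stack:

```python
# Rough sketch: clustering thousands of short "need" statements into
# aggregate categories, then ranking the categories by frequency.
# This is an assumed pipeline, not the one described in the article.
from collections import Counter

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

needs = [...]  # e.g., 3,580 short need statements extracted from interviews

# Embed each need statement as a vector, then group similar needs.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(needs)
kmeans = KMeans(n_clusters=29, random_state=0).fit(embeddings)

# Rank categories by how many needs fall into each one.
sizes = Counter(kmeans.labels_)
for cluster_id, count in sizes.most_common():
    print(f"Category {cluster_id}: {count} needs")
```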
Scale provides credibility

I don't know about you, but I'm tired of fighting the sample-size war. Every company seems to have key stakeholders who don't trust or believe in qualitative work. But something shifts when we can back up deep insights with quantitative-level scale.

There's a big difference between presenting an insight that emerged from four conversations versus 165. The sheer volume doesn't just strengthen credibility; it also allows us to capture the nuances within broader patterns, giving us a richer, more defensible story.

The last barrier: ourselves

The final challenge isn't technological; it's personal. This work is deeply meaningful to us. Research isn't just a job; it shapes our identities. We dedicate our creative talents to understanding others, people entirely different from ourselves, and their stories stay with us.

Something clicked when I did my first AI-moderated study. I found myself:

- Excitedly digging into what people said, hunting for golden nuggets
- Noticing interesting word choices that summed up key insights
- Building a story I knew would help debunk the team's biggest myths

And I realized: this is still the work I love.

AI moderation isn't just a tool; it's a new method. And like any method, it still needs human expertise at the helm. The key is to design our place within these new models, with confidence.

So I encourage you: try it. Experience the value firsthand. And confidently find yourself in the new method.

References

Ipsos. (2024). AI Moderation in Qualitative Research: Opportunities and Limitations. [Referenced for critique of AI moderation capabilities]

Ratcliff, M. (2023). The State of AI in Qualitative Research. Murmur Research. [Referenced regarding AI qualitative research being open-ended quant]

London School of Economics. (2023). AI vs. Human Interviewers: Respondent Preferences in Political Research. [Study finding 50% of respondents preferred AI interviews]

Mollick, E. (2024). Demonstration of OpenAI's Live Mode. [Referenced regarding multimodal vision capabilities]