The effort paradox in AI design
uxdesign.cc
Why making things too easy can backfire

In our rush to automate daily tasks with AI, we risk reliving product design mistakes from long ago. We can learn a lot from Betty Crocker and IKEA.

Product design history lessons

Betty Crocker's stumble and recovery

When General Mills introduced the world's first instant cake mixes in the 1940s, they seemed perfect: add water to a Betty Crocker mix, bake, and enjoy a delicious cake. But sales disappointed, and market research revealed a surprising truth: target audiences overwhelmingly found the process too easy. Betty Crocker's customers didn't invest enough effort to feel proud. They didn't feel like they were properly caring for their families, and the product made them feel undervalued.

The solution? The recipe was modified to require people to add an egg.

This tiny, targeted addition of more work made all the difference. Ads highlighted the egg step, sales soared, and Betty Crocker cake mix was famous.

IKEA, Legos, and origami

A set of 2011 Harvard Business School studies tested consumers' value perceptions of products they assembled versus products that experts assembled. The effect was the same for utilitarian products (IKEA boxes) and fun products (Legos and origami animals): participants who constructed products valued them more highly than preconstructed versions of the same products.

Modern designers and businesses often focus on minimizing effort and saving customers time. And that often works. But in many contexts and moments, subtracting effort backfires. Cognitive scientists call this the effort paradox.

The AI parallel

Today's AI product designers face a similar challenge. In our enthusiasm to automate everything, we risk leaving users feeling disconnected and unrewarded.
Just as 1940s homemakers wanted to feel invested in their baking, today's users often want to feel, and be, meaningfully involved in AI-assisted work. That's good, because strong collaboration between humans and AI assistants often yields better outcomes than tasks performed by AI models that go it alone. (Soon we'll dive into how you can leverage that, in an upcoming Mindful AI Design article.)

But outcomes aside, removing too much customer effort from AI product interactions can rob customers of psychological ownership and satisfaction.

Real-world AI examples

AI writing assistants

In a 2024 University of Waterloo study, participants wrote short or long text prompts, which were fed into an AI service that generated stories. People who created more detailed and extensive prompts reported notably stronger feelings of psychological ownership over the final stories, compared to those who provided shorter, simpler prompts. (To a point: the beneficial effects plateaued as the input neared the output story length, around 150 words.) Notably, the perceived quality of the resulting stories didn't vary based on added effort.

Reimagining the car owner's manual

The principle played into my work on Smart Manual, an AI-powered conversational car manual and repair assistant. Early concept testing revealed a consistent theme: while respondents appreciated faster answers to car problems, they were uncomfortable relying solely on the AI's advice and instructions. Two users explicitly requested the ability to cross-reference the original car owner's manual to verify the AI's troubleshooting reasoning.

Early sketches of the Smart Manual concept. In interviews, drivers expressed reservations and distrust about this version of the AI concept.
They wanted more direct views into actual owner's manuals.

We updated the Smart Manual interface to surface relevant diagrams and excerpts from the car manual where appropriate, and to always link clearly and directly from AI summaries to the original manual material. That encourages drivers to verify the agent's information. And it helps them dig deeper and learn more.

Feedback drove us to integrate content from the original car owner's manuals directly and frequently throughout Smart Manual interactions. This makes it easier for drivers to double-check the AI's accuracy, and it helps them feel involved in maintaining, repairing, and learning about their cars.

This approach requires a bit more user effort: a few more taps, more scrolling, more thinking, and more deciding. But it builds trust and human involvement, and we hope it will boost the tool's credibility. We hope to amplify drivers' capabilities by framing the AI as an aid to making informed decisions, rather than as a magical cure-all.

Finding the Sweet Spot

As these examples show, setting the stage for meaningful human engagement in AI design requires the right balance between over-automation (excluding people from work where they bring, and derive, value) and human drudgery (bogging people down with work they don't enjoy and that AI is well suited to deliver). It's about identifying those places where a bit of extra effort can boost users' sense of accomplishment, control, and investment, and foster a sense of ownership and mastery.

That sweet spot will vary for different users and contexts. Some will call for more hands-on involvement, while others will need more automation.

Guidelines for AI Product Designers

1. Find the optimal human touchpoints. Distinguish between genuine friction and moments of meaningful effort, where human involvement boosts value and enjoyment.
Consider:

- What parts of this process give people a sense of accomplishment?
- Where does human judgment add genuine value?
- When is the effort the point, when people want to grapple with a problem or topic?
- How can we augment, rather than replace, human capabilities?

2. Preserve human agency. Give people clear control over key decisions.

3. Show the work. Make AI processes transparent enough to keep people informed and involved, and design clear means to verify AI outputs. (This requires a nuanced balance: too much explanation can bog down users, and AI tools, too.)

4. Help people learn, and help them refine explorations based on their learnings. That often beats flat, one-shot answers.

5. Consider customization. For more advanced users and contexts, controls that let people adjust levels of automation can be appropriate.

Remember: effort isn't our enemy.

WALL-E's world of over-automation. It was a beautiful movie. Let's not make it a reality.

Looking Ahead

There's a lot more to dig into here with research around human effort, perceptions of effort, and how this can impact the design of AI products. Studies already suggest that:

- Human perceptions of effort aren't fixed and can be shaped dramatically by learning and experience.
- Different people judge levels of effort differently.
- Rewarding mental labor now can boost people's willingness to expend effort in the future.

As AI capabilities expand, the temptation to over-automate will grow stronger. That's not an AI thing; it's a human thing. We repeat that story with every new wave of technology. (Remember The Tragic Life of Clippy?) It's our job as mindful designers to steer past that tendency.

It's time to set aside the false binary of manual vs. automated and ask more nuanced questions. This will be key to designing AI products that respect human agency while amplifying human potential.

What do you think?

Have you felt unsatisfied with an AI tool that made something too easy? Have you encountered examples of the effort paradox in customers' reactions to an AI product or service you're working on? Have you had any success finding the sweet spot between over-automation and too much hassle, or finding those optimal human touchpoints and moments when injecting a bit of human effort can boost enjoyment or engagement?

Please tell us about it in the comments.

"Satisfaction lies in the effort, not in the attainment." (Mahatma Gandhi)

Part of the Mindful AI Design series. Also see:

- Do mosquitoes bite leeches? Keys to calibrating trust in AI product design
- Black Mirror: Override. Dystopian storytelling for humane AI design

Related

- The Effort Paradox: Effort Is Both Costly and Valued (PMC)
- Writing with AI Lowers Psychological Ownership, but Longer Prompts Can Help (AI Research Paper Details)
- The IKEA Effect: When Labor Leads to Love (Michael I. Norton, Daniel Mochon, Dan Ariely)
- Rewarding cognitive effort increases the intrinsic value of mental labor (PNAS)
- On the specifics of valuing effort: a developmental and a formalized perspective on preferences for cognitive and physical effort
- 15 Times to Use AI, and 5 Not To, by Ethan Mollick

The effort paradox in AI design was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.