Shaping minds: how first impressions drive AI adoption
uxdesign.cc
Make-or-break moments. The first interaction with an AI system, whether it's a website, landing page, or demo, shapes the mental model of the system. This, in turn, determines whether it will be adopted or not. Are these decisions driven by emotion or logic? Here's how Technology Adoption Theory unpacks the mechanisms of technology acceptance, with insights applied to AI systems.

By Katie Metz, source

It takes only 10 seconds for someone to decide whether a website is worth their time. And while there is a wealth of resources on designing user-centered products, far fewer focus on how to communicate their value during those pivotal first moments: the initial touchpoints that shape a user's mental model of the system. This challenge is especially acute for AI systems, which are often technically complex and hard to simplify. I've seen talented teams build extraordinary, user-focused solutions, only to struggle with conveying their true value in a way that's clear, engaging, and instantly meaningful.

This article delves into the psychological barriers to accepting and adopting new technologies, offering insights on how to highlight your product's value and transform first impressions into lasting connections.

New tech, old habits

Have you heard of Khanmigo? It's an AI-driven teaching assistant from Khan Academy, designed to guide students through their learning journey with engaging, conversational interactions. It's empathetic, engaging, and patient. Make a mistake? No problem. It'll gently explain what went wrong and how to fix it, creating a learning experience that feels less like being corrected and more like growing together. It's a glimpse into how AI can reinvent old patterns, making interactions more personal, more flexible, and, dare I say, more human.

Source: Khanmigo

Of course, kids are a relatively easy audience for Khanmigo, as they are naturally open to such innovations.
They don't carry years of learning fatigue, forged by sitting through endless lectures and associating study time with boredom. AI meets them where they are, unspoiled and eager.

Now imagine a different scenario: a car equipped with AI that tracks your facial expressions and eyelid movements to detect when you're too tired to drive safely. It suggests, perhaps with a subtle alarm, that you pull over for a rest. Tell that to my grandpa, though, and he'd probably chuckle at the idea that a camera could know better than he does when he needs a break. There will always be early adopters, those eager to embrace the new and exciting, and those who resist, for reasons that may be logical or deeply personal. For instance, some might worry that AI will take their job, while others may mistrust the technology purely because it feels unfamiliar or intrusive. Understanding and addressing these perspectives is the first step towards designing AI systems that can bridge the gap between skepticism and acceptance.

The good news? This isn't a new challenge. Humanity has faced it during every industrial revolution, each time adapting its thinking to a new normal. While I won't delve into all of these transformative eras, or the ongoing Fourth Industrial Revolution, I'd like to focus on the most recent completed one. Let's rewind to the Third Industrial Revolution, the dawn of the computer and internet age in the late 20th century, and explore its key ideas for facilitating system adoption.

When computers met humanity

The 1980s marked a significant turning point in the study of technology adoption, spurred by the rapid rise of personal computers and the challenge of integrating these new tools into everyday life. Researchers quickly recognized the need to focus on factors like user involvement in the design and implementation of information systems.
This emphasis acknowledged a simple truth: technology is only as effective as its ability to meet the needs of the people who use it.

1983, source

On the practical side, industry practitioners concentrated on developing and refining system designs, aiming to make them more user-friendly and effective. My favorite example is the research at Xerox PARC (Palo Alto Research Center), where researchers closely observed office workers' behaviors and workflows. Their insights led to the creation of the desktop metaphor, introducing familiar concepts like files, folders, and a workspace that mirrored physical desks. This innovation revolutionized graphical user interfaces (GUIs), laying the foundation for systems like Apple's Macintosh and Microsoft Windows. The Dream Machine by M. Mitchell Waldrop and Dealers of Lightning by Michael Hiltzik share more details about the history and impact of Xerox PARC.

These parallel efforts, academic research and hands-on development, led to the creation of numerous theories and frameworks to better understand and guide technology adoption. Among these frameworks, the Technology Acceptance Model (TAM) stands out as one of the most influential.

Technology Acceptance Model

Back in 1986, Fred Davis created it to answer a simple but pivotal question: why do some people adopt new technology while others resist? TAM was designed to measure this adoption process by focusing on customer attitudes, specifically, whether the technology feels useful and easy to use. These two factors form the foundation of the model, offering a lens to understand how people decide to embrace (or avoid) new tools and systems.

The first factor, perceived usefulness, is how much a user believes the technology will improve their performance or productivity.
It's outcome-oriented, zeroing in on whether the tool helps users achieve their goals, complete tasks faster, or deliver better results.

The second factor of TAM is perceived ease of use: the belief that using the technology will be simple and free of unnecessary effort. While usefulness might get a user's attention, ease of use determines whether they'll stick with it. If a system feels complicated, clunky, or overly technical, even its benefits might not be enough to win users over. People naturally gravitate toward tools that feel intuitive.

Adapted from the Technology Acceptance Model (Davis, 1986), source

In 2000, Venkatesh and Davis expanded the original TAM model to dig deeper into what shapes perceived usefulness and people's intentions to use technology. They introduced two key influences: social influence, how the opinions of others and societal norms impact adoption, and cognitive instrumental processes, which focus on how users mentally evaluate and connect with a system. Let's unpack these factors and explore how they can help shape a mental model of an AI system that fosters adoption.

Perceived Usefulness

Perceived usefulness doesn't exist in a vacuum. One of the social factors is subjective norm, or the pressure we feel from others to use (or not use) a particular technology. This ties closely to image, the way adopting a tool might enhance someone's status or reputation: think of design influencers after attending Config, dissecting the latest features and showcasing their expertise.

But subjective norm doesn't impact everyone the same way. Experience can dull its influence. For those just starting with a new system, social pressure often holds more weight; unsure of their footing, they look to others for guidance. As they grow more comfortable, though, external opinions start to matter less, and their own evaluation takes over. Voluntariness also changes the game. When adoption is a choice, users are less swayed by others' opinions.
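A brief aside on what "measure" means here: TAM is operationalized as a questionnaire, not an algorithm. Respondents rate their agreement with statements like "Using this tool would improve my performance" (perceived usefulness) or "Learning to use this tool would be easy for me" (ease of use), and the ratings are aggregated and related to intention statistically. The sketch below, in Python, shows only the aggregation step; the item scores and the 0.6/0.4 weights are invented for illustration, since real TAM studies estimate these relationships with regression on survey data.

```python
# Hypothetical sketch of TAM-style score aggregation.
# Item responses are 1-7 Likert ratings; the weights are illustrative
# assumptions, not values from the TAM literature.

def mean(xs):
    return sum(xs) / len(xs)

def tam_scores(pu_items, peou_items, w_pu=0.6, w_peou=0.4):
    """Aggregate Likert items into perceived usefulness (PU),
    perceived ease of use (PEOU), and a naive intention estimate."""
    pu = mean(pu_items)
    peou = mean(peou_items)
    intention = w_pu * pu + w_peou * peou
    return pu, peou, intention

# A respondent who finds the tool useful but somewhat hard to use:
pu, peou, intention = tam_scores([6, 7, 6], [3, 4, 3])
print(f"PU={pu:.2f}, PEOU={peou:.2f}, intention={intention:.2f}")
# PU=6.33, PEOU=3.33, intention=5.13
```

Even this toy version makes the pattern visible: a high usefulness score can be dragged down by a poor ease-of-use score, which is exactly where first impressions do their damage.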
But when it's required, whether by a workplace mandate or social obligation, subjective norm has a much stronger pull.

On the cognitive side, job relevance plays a big role. Users ask, "Does this technology actually help me in my specific role?" If the answer is no, it's unlikely they'll see it as useful. Similarly, output quality, whether the system delivers results that meet or exceed expectations, reinforces its value. Finally, there's result demonstrability, or how clearly the benefits of the technology can be observed and communicated. The easier it is to see and measure the impact, the more likely users are to view it as useful.

Adapted from the Technology Acceptance Model (TAM 2) by Venkatesh and Davis, 2000. source

While product design can't directly influence subjective norm, it often plays a role in shaping image: how people perceive themselves or imagine others will see them when they adopt the technology. It's not so much about the product itself, but what using it says about the individual. By focusing on the right narrative from the very first touchpoint, some applications make it easy for users to see how adopting the tool reflects positively on them.

Take folk.app, for instance. Instead of just listing features, it focuses on solving specific pain points, framing the app as a tool for staying organized and professional. The messaging feels personal and practical. For example, a section title like "Sales research, done for you" suggests that without any additional effort, users will have valuable insights at their fingertips. It's not just about solving a problem; it's about positioning the user as more prepared, professional, and efficient.

Folk.app, source

Braintrust takes a different angle. They highlight glowing media endorsements, signaling that the platform is widely recognized. It's not just about saying that the app works; it's about creating a sense that using it puts you on the cutting edge, part of a forward-thinking community.
This builds image, making users feel like adopting the technology aligns with innovation and success.

Braintrust, source

Perceived Ease of Use

If perceived usefulness answers the question, "Will this technology help me?", then perceived ease of use asks an equally important question: "Will it be easy to figure out?" Research shows that this perception is influenced by two main groups of factors: anchors and adjustments.

Anchors serve as the starting point for a user's judgment of ease. They include internal traits and predispositions, such as computer self-efficacy, a user's confidence in their ability to use technology, and perceptions of external control, the belief that support and resources are available if needed. Another anchor is computer playfulness, which reflects a user's natural tendency to explore and experiment with technology. This sense of curiosity can make systems feel more approachable, even when they're complex. On the flip side, computer anxiety, or a fear of engaging with technology, can act as a barrier, making systems seem more difficult than they really are. When applying these principles to AI systems, we see a new form of apprehension emerging: AI anxiety.

Once users begin interacting with a system, adjustments come into play. Unlike anchors, which are deeply rooted in a user's pre-existing traits and beliefs, adjustments are dynamic: they refine or reshape initial perceptions of ease of use based on real-world experience with the system.

One key adjustment is perceived enjoyment, which asks whether the act of using the system is inherently satisfying or even delightful. This concept is closely tied to user delight, where interactions go beyond pure functionality to create moments of joy or surprise. Have you ever searched for "cat" in Google and noticed a yellow button with a paw? That's delight.
It's unexpected, playful, and entirely unnecessary for functionality, but it sticks with you.

Another adjustment is objective usability: the system's actual performance as observed during use. Before interacting with the system, a user might assume it will be complex or difficult. But as they engage with the AI, accurate and intuitive responses can shift this perception, reinforcing the idea that the system is not only functional but easy to use.

Adapted from the Technology Acceptance Model (TAM 3) by Venkatesh and Bala, 2008. source

Computer self-efficacy, a user's confidence in their ability to use technology, can't be controlled directly, but it can definitely be nudged in the right direction. The secret lies in making the application feel approachable, so users believe they're capable of mastering it.

One way to do this is by showcasing the experiences of others. Highlighting user reviews or testimonials isn't just about marketing: it taps into the core idea of Bandura's Social Cognitive Theory. When people see others successfully using a tool, they start to think, "If they can handle it, why can't I?" It's not just about proof; it's about planting the seed of possibility.

Contra, source

Another approach is helping users form a mental map of how the technology works. GitBook, for example, pairs feature descriptions with skeleton-state interface snippets: clean, minimalist snapshots that give users just enough information to understand the basics without overwhelming them. Animations guide their focus, while interactive elements bring in a subtle gamification layer, making learning feel less like a chore and more like discovery. It's user-centric design done right: a confidence boost, one step at a time.

GitBook, source

Slite provides an example of how the job relevance factor can make a product introduction resonate right from the first page. One of the challenges in introducing a knowledge base is resistance to sharing information.
Studies reveal that 60% of employees struggle to obtain critical information from colleagues, often due to a phenomenon known as knowledge hiding: the deliberate withholding or concealing of information. This behavior stems from fears like losing status or job security, creating barriers to collaboration and productivity.

Slite tackles this challenge head-on with a playful, relatable touch, wrapping it in humor: "The knowledge base even [Name] from [one of 6 target industries] wants to use." This subtle nod to targeted pain points highlights its key differentiators: beautiful documentation, hassle-free adoption, and AI-powered search from day one, emphasizing perceived enjoyment. After all, who doesn't love beautiful, effortless solutions? It's not just about functionality; it's about creating a product so intuitive and engaging that it minimizes resistance and inspires adoption, transforming apprehension into enthusiasm.

Slite, source

Final thoughts

The Technology Acceptance Model, while valuable, is not a universal solution but rather a framework: a lens through which we can examine and interpret the dynamics of technology adoption. Since its introduction over a quarter-century ago, it has illuminated patterns in how users perceive and engage with technology. However, it can also risk being overly general, glossing over the nuanced and context-specific factors that shape user behavior. Rooted in the psychological theories of reasoned action and planned behavior, TAM serves as a navigator, helping us better understand and adapt to the complexities of human affective reasoning.
By recognizing its strengths and limitations, it can be used as a guide to create technology experiences that truly resonate with the people they are designed to serve.

Additional resources:

Get Your Product Used: Adoption and Appropriation is a course from IxDF by Alan Dix, one of the authors of my work Bible, Human-Computer Interaction.

How To Measure Product Adoption (Metrics & Tools) provides a solid overview of metrics that can help grasp the current state of product adoption.

Increasing the Adoption of UX and the Products You Design (Parts 1 and 2) are articles by Chris Kiess that provide a breakdown of the Diffusion of Innovations theory, Cooper's Model of User Distribution, and the relevance of Jakob Nielsen's 5 components of usability.

Have ideas, thoughts, or experiences to share? Leave your insights in the comments!

Shaping minds: how first impressions drive AI adoption was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.