20+ GenAI UX patterns, examples and implementation tactics
A shared language for product teams to build usable, intelligent and safe GenAI experiences beyond just the model.

Generative AI introduces a new way for humans to interact with systems by focusing on intent-based outcome specification. It also introduces novel challenges: its outputs are probabilistic, and working with it requires an understanding of variability, memory, errors, hallucinations and malicious use. This creates an essential need for principles and design patterns, as described by IBM. Moreover, any AI product is a layered system in which the LLM is just one ingredient; memory, orchestration, tool extensions, UX and agentic user flows build the real magic.

This article is my research and documentation of evolving GenAI design patterns that provide a shared language for product managers, data scientists and interaction designers to create products that are human-centred, trustworthy and safe. By applying these patterns, we can bridge the gap between user needs, technical capabilities and the product development process.

Here are the 21 GenAI UX patterns:

1. GenAI or no GenAI
2. Convert user needs to data needs
3. Augment vs automate
4. Define level of automation
5. Progressive GenAI adoption
6. Leverage mental models
7. Convey product limits
8. Display chain of thought
9. Leverage multiple outputs
10. Provide data sources
11. Convey model confidence
12. Design for memory and recall
13. Provide contextual input parameters
14. Design for co-pilot, co-editing or partial automation
15. Design user controls for automation
16. Design for user input error states
17. Design for AI system error states
18. Design to capture user feedback
19. Design for model evaluation
20. Design for AI guardrails
21. Communicate data privacy and controls

1. GenAI or no GenAI

Evaluate whether GenAI improves the UX or introduces complexity. Often, heuristic-based solutions are easier to build and maintain.

Scenarios where GenAI is beneficial:
- Tasks that are open-ended or creative and that augment the user, e.g. writing prompts, summarizing notes, drafting replies.
- Creating or transforming complex outputs, e.g. converting a sketch into website code.
- Situations where structured UX fails to capture user intent.

Scenarios where GenAI should be avoided:
- Outcomes that must be precise, auditable or deterministic, e.g. tax forms or legal contracts.
- Contexts where users expect clear, consistent information, e.g. open-source software documentation.

How to use this pattern
- Determine the friction points in the customer journey.
- Assess technology feasibility: determine whether AI can address the friction point by evaluating scale, dataset availability, error risk and economic ROI.
- Validate user expectations: determine whether the AI solution erodes user expectations by evaluating whether the system augments human effort or replaces it entirely (pattern 3, Augment vs automate), and whether it conflicts with existing mental models (pattern 6).

2. Convert user needs to data needs

This pattern ensures GenAI development begins with user intent and the data model required to serve it. GenAI systems are only as good as the data they’re trained on, but real users don’t speak in rows and columns; they express goals, frustrations and behaviours.
If teams fail to translate those user needs into structured, model-ready inputs, the resulting product may optimise for the wrong outcomes and drive user churn.

How to use this pattern
- Collaborate as a cross-functional team of PMs, product designers and data scientists, and align on user problems worth solving.
- Define user needs using triangulated research (qualitative + quantitative + emergent), synthesising insights with the JTBD framework, an Empathy Map to visualise user emotions and perspectives, and a Value Proposition Canvas to align user gains and pains with features.
- Define data needs and documentation by selecting a suitable data model, performing a gap analysis and iteratively refining the data model as needed. Once you understand the why, translate it into the what for the model: which features, labels, examples and contexts will your AI model need to learn this behaviour? Use structured collaboration to figure it out.

3. Augment vs automate

One of the critical decisions in GenAI apps is whether to fully automate a task or to augment human capability. Use this pattern to align user intent and control preferences with the technology.

Automation is best for tasks users prefer to delegate, especially when they are tedious, time-consuming or unsafe. E.g., Intercom Fin AI automatically summarizes long email threads into internal notes, saving time on repetitive, low-value tasks.

Augmentation enhances tasks users want to remain involved in by increasing efficiency, creativity and control. E.g., Magenta Studio in Ableton provides creative controls to manipulate and create new music.

How to use this pattern
- To select the best approach, evaluate user needs and expectations using research synthesis tools like the empathy map and value proposition canvas.
- Test and validate whether the approach erodes the user experience or enhances it.

4. Define level of automation

In AI systems, automation refers to how much control is delegated to the AI versus the user. This is a strategic UX pattern for deciding the degree of automation based on the user pain point, context scenarios and expectations of the product.

Levels of automation
- No automation: the AI system provides assistance and suggestions but requires the user to make all the decisions. E.g., Grammarly highlights grammar issues, but the user accepts or rejects corrections.
- Partial automation (co-pilot / co-editor): the AI initiates actions or generates content, but the user reviews or intervenes as needed. E.g., GitHub Copilot suggests code that developers can accept, modify or ignore.
- Full automation: the AI system performs tasks without user intervention, often based on predefined rules, tools and triggers. Fully automated GenAI systems are often referred to as agentic systems. E.g., Ema can autonomously plan and execute multi-step tasks like researching competitors, generating a report and emailing it, without user prompts or intervention at each step.

How to use this pattern
- Evaluate the user pain point to be automated and the risk involved: automating tasks is most effective when the associated risk is low and failure has no severe consequences. Low-risk tasks such as sending automated reminders or promotional emails, filtering spam or processing routine customer queries can be automated with minimal downside while saving time and resources. High-risk tasks such as making medical diagnoses, sending business-critical emails or executing financial trades require careful oversight due to the potential for significant harm if errors occur.
- Evaluate and design for the appropriate automation level: decide whether the pain point calls for no automation, partial automation or full automation based on user expectations and goals (see the sketch after this list).
- Define user controls for automation (pattern 15).
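To make the risk-based gating concrete, here is a minimal sketch of how the decision might be encoded. The level names, thresholds and the task_risk/reversible inputs are illustrative assumptions, not part of any specific product:

```python
from enum import Enum

class AutomationLevel(Enum):
    NONE = "no_automation"      # AI suggests, user makes every decision
    PARTIAL = "co_pilot"        # AI drafts or acts, user reviews
    FULL = "agentic"            # AI acts without per-step intervention

def select_automation_level(task_risk: float, reversible: bool) -> AutomationLevel:
    """Gate the degree of autonomy on task risk (hypothetical 0-1 score)
    and on whether the action can be undone."""
    if task_risk < 0.2 and reversible:
        return AutomationLevel.FULL     # e.g. spam filtering, reminders
    if task_risk < 0.6:
        return AutomationLevel.PARTIAL  # e.g. drafting replies for review
    return AutomationLevel.NONE         # e.g. medical or financial decisions

print(select_automation_level(0.1, reversible=True))    # AutomationLevel.FULL
print(select_automation_level(0.8, reversible=False))   # AutomationLevel.NONE
```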
5. Progressive GenAI adoption

When users first encounter a product built on a new technology, they often wonder what the system can and can’t do, how it works and how they should interact with it. This pattern offers a multi-dimensional strategy to onboard users to an AI product or feature, mitigate errors and align with user readiness to deliver an informed, human-centred UX.

How to use this pattern
This pattern is a culmination of many other patterns.
- Focus on communicating benefits from the start: avoid diving into details about the technology and highlight how the AI brings new value.
- Simplify the onboarding experience: let users experience the system’s value before asking for data-sharing preferences, giving instant access to basic AI features first. Encourage users to sign up later to unlock advanced AI features or share more details. E.g., Adobe Firefly progressively onboards users from basic to advanced AI features.
- Define the level of automation (pattern 4) and gradually increase autonomy or complexity.
- Provide explainability and trust by designing for errors.
- Communicate data privacy and controls (pattern 21) to clearly convey how user data is collected, stored, processed and protected.

6. Leverage mental models

Mental models help users predict how a system will work and therefore influence how they interact with an interface. When a product aligns with a user’s existing mental models, it feels intuitive and easy to adopt. When it clashes, it can cause frustration, confusion or abandonment.

E.g., GitHub Copilot builds upon developers’ mental models from traditional code autocomplete, easing the transition to AI-powered code suggestions.
E.g., Adobe Photoshop builds upon the familiar approach of extending an image using rectangular controls by integrating its Generative Fill feature, which intelligently fills the newly created space.

How to use this pattern
Identify and build upon existing mental models by asking:
- What is the user journey, and what is the user trying to do?
- What mental models might already be in place?
- Does this product break any intuitive patterns of cause and effect?
- Are you breaking an existing mental model? If yes, clearly explain how and why; good onboarding, microcopy and visual cues can help bridge the gap.

7. Convey product limits

This pattern involves clearly conveying what an AI model can and cannot do, including its knowledge boundaries, capabilities and limitations. It builds user trust, sets appropriate expectations, prevents misuse and reduces frustration when the model fails or behaves unexpectedly.

How to use this pattern
- Explicitly state model limitations: show contextual cues for outdated knowledge or lack of real-time data. E.g., Claude states its knowledge cutoff when a question falls outside its knowledge domain.
- Provide fallbacks or escalation options when the model cannot produce a suitable output. E.g., Amazon Rufus, when asked about something unrelated to shopping, says it doesn’t have access to factual information and can only assist with shopping-related questions and requests.
- Make limitations visible in product marketing, onboarding, tooltips or response disclaimers.

8. Display chain of thought

In AI systems, the chain-of-thought (CoT) prompting technique enhances the model’s ability to solve complex problems by mimicking a more structured, step-by-step thought process like that of a human. Displaying the chain of thought is a UX pattern that improves transparency by revealing how the AI arrived at its conclusions. This fosters user trust, supports interpretability and opens up space for user feedback, especially in high-stakes or ambiguous scenarios.

E.g., Perplexity enhances transparency by displaying its processing steps, helping users understand the process behind its answers.
E.g., Khanmigo, an AI tutoring system, guides students step by step through problems, mimicking human reasoning to enhance understanding and learning.

How to use this pattern
- Show statuses like “researching” and “reasoning” to communicate progress, reduce user uncertainty and make wait times feel shorter.
- Use progressive disclosure: start with a high-level summary and allow users to expand details as needed (see the sketch after this list).
- Provide AI tooling transparency: clearly display the external tools and data sources the AI uses to generate recommendations.
- Show confidence and uncertainty: indicate AI confidence levels and highlight uncertainties when relevant.
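A minimal sketch of the summary-first, progressive-disclosure idea. The ReasoningTrace structure and render helper are hypothetical names for illustration; a real product would stream the statuses and steps from the model as they happen:

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningTrace:
    """Summary-first payload: show the answer, let users expand the steps."""
    answer: str
    steps: list = field(default_factory=list)       # full chain of thought
    status_log: list = field(default_factory=list)  # "researching", "reasoning", ...

def render(trace: ReasoningTrace, expanded: bool = False) -> str:
    """Collapsed by default; reasoning steps appear only on request."""
    lines = [trace.answer]
    if expanded:
        lines += [f"  step {i + 1}: {s}" for i, s in enumerate(trace.steps)]
    else:
        lines.append(f"  [show {len(trace.steps)} reasoning steps]")
    return "\n".join(lines)

trace = ReasoningTrace(
    answer="Paris is the best fit for a three-day, art-focused trip.",
    steps=["Compared museum density across candidate cities",
           "Checked travel time from the user's stated location"],
    status_log=["researching", "reasoning"],
)
print(render(trace))                 # high-level summary only
print(render(trace, expanded=True))  # expanded chain of thought
```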
9. Leverage multiple outputs

GenAI can produce varied responses to the same input due to its probabilistic nature. This pattern exploits that variability by presenting multiple outputs side by side. Showing diverse options helps users creatively explore, compare, refine and make the decision that best aligns with their intent. E.g., Google Gemini provides multiple drafts to help users explore, refine and make better decisions.

How to use this pattern
- Explain the purpose of variation: help users understand that differences across outputs are intentional and meant to offer choice.
- Enable edits: let users rate, select, remix or edit outputs seamlessly to shape outcomes and provide feedback. E.g., Midjourney lets users adjust the prompt and guide variations and edits using Remix.

10. Provide data sources

Articulating data sources in a GenAI application is essential for transparency, credibility and user trust. Clearly indicating where the AI derives its knowledge helps users assess the reliability of responses and avoid misinformation. This is especially important in high-stakes factual domains like healthcare, finance or legal guidance, where decisions must be based on verified data.

How to use this pattern
- Cite credible sources inline: display sources as footnotes, tooltips or collapsible links. E.g., NotebookLM adds citations to its answers and links each answer directly to the relevant part of the user’s uploaded documents.
- Disclose training data scope clearly: for generative tools, offer a simple explanation of what data the model was trained on and what wasn’t included. E.g., Adobe Firefly discloses that its Generative Fill feature is trained on stock imagery, openly licensed work and public-domain content where the copyright has expired.
- Provide source-level confidence: where multiple sources contribute, visually differentiate higher-confidence or more authoritative sources (see the sketch after this list).
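As a sketch of how source attribution might travel with an answer, consider a response object that carries its citations and renders authority-ranked footnotes. Source, authority and the footnote format below are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    authority: float  # hypothetical 0-1 score used to rank citations

@dataclass
class CitedAnswer:
    text: str
    sources: list

def format_with_citations(answer: CitedAnswer) -> str:
    """Render the answer with footnote markers and a ranked source list."""
    ranked = sorted(answer.sources, key=lambda s: s.authority, reverse=True)
    markers = "".join(f" [{i + 1}]" for i in range(len(ranked)))
    refs = "\n".join(f"[{i + 1}] {s.title} ({s.url})" for i, s in enumerate(ranked))
    return f"{answer.text}{markers}\n{refs}"

answer = CitedAnswer(
    text="Vitamin D supports calcium absorption.",
    sources=[Source("NIH fact sheet", "https://ods.od.nih.gov", 0.95),
             Source("Blog summary", "https://example.com", 0.4)],
)
print(format_with_citations(answer))
```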
11. Convey model confidence

AI-generated outputs are probabilistic and can vary in accuracy. Showing confidence scores communicates how certain the model is about its output, helping users assess reliability and make better-informed decisions.

How to use this pattern
- Assess context and decision stakes: whether to show model confidence depends on the context and its impact on user decision-making. In high-stakes scenarios like healthcare, finance or legal advice, displaying confidence scores is crucial. In low-stakes scenarios like AI-generated art or storytelling, confidence may not add much value and could even introduce unnecessary confusion.
- Choose the right visualization: if design research shows that displaying model confidence aids decision-making, select the right visualization method. Percentages, progress bars or verbal qualifiers can communicate confidence effectively; the apt method depends on the application’s use case and user familiarity. E.g., Grammarly attaches verbal qualifiers like “likely” to the content it generates along with the user.
- Guide user action in low-confidence scenarios: offer paths forward, such as asking clarifying questions or offering alternative options (see the sketch after this list).
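As a sketch, the mapping from a raw score to a verbal qualifier can be a simple threshold table. The thresholds and copy below are illustrative assumptions that each product would calibrate through its own research:

```python
def verbal_qualifier(confidence: float) -> str:
    """Map a raw model probability to hedged language for the UI."""
    if confidence >= 0.9:
        return "Very likely"
    if confidence >= 0.7:
        return "Likely"
    if confidence >= 0.4:
        return "Possibly"
    return "Uncertain"

def respond(answer: str, confidence: float) -> str:
    """Low confidence triggers a clarifying path instead of a bare answer."""
    if confidence < 0.4:
        return "I'm not confident about this. Could you add more detail?"
    return f"{verbal_qualifier(confidence)}: {answer}"

print(respond("this clause limits liability to direct damages", 0.85))
print(respond("this clause limits liability to direct damages", 0.35))
```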
12. Design for memory and recall

Memory and recall is a design pattern that enables an AI product to store and reuse information from past interactions, such as user preferences, feedback, goals or task history, to improve continuity and context awareness. It:
- Enhances personalization by remembering past choices or preferences.
- Reduces user burden by avoiding repeated input requests, especially in multi-step or long-form tasks.
- Supports complex, longitudinal workflows, like project planning or learning journeys, by referencing and building on past progress.

Memory can be ephemeral or persistent and may include conversational context, behavioural signals or explicit inputs.

How to use this pattern
- Define the user context and choose the memory type: choose ephemeral, persistent or both based on the use case. A shopping assistant might track interactions in real time without persisting data for future sessions, whereas a personal assistant needs long-term memory for personalization (see the sketch after this list).
- Use memory intelligently in user interactions: build base prompts that let the LLM recall and communicate information contextually.
- Communicate transparency and provide controls: clearly communicate what is being saved, and let users view, edit or delete stored memory. Make “delete memories” an accessible action. E.g., ChatGPT offers extensive controls across its platform to view, update or delete memories at any time.
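A sketch of the ephemeral-versus-persistent distinction, with the view and delete controls the pattern calls for. MemoryStore and its method names are hypothetical; a production system would back this with session state, a database and the privacy controls in pattern 21:

```python
class MemoryStore:
    """Sketch of session-scoped (ephemeral) vs cross-session (persistent)
    memory with user-facing view and delete controls."""

    def __init__(self):
        self.ephemeral = {}    # cleared when the session ends
        self.persistent = {}   # survives across sessions

    def remember(self, key: str, value: str, persist: bool = False) -> None:
        (self.persistent if persist else self.ephemeral)[key] = value

    def recall(self, key: str):
        # Session context wins; fall back to long-term preferences.
        return self.ephemeral.get(key) or self.persistent.get(key)

    def view_all(self) -> dict:
        # Transparency: users can inspect everything that is stored.
        return {"session": dict(self.ephemeral), "saved": dict(self.persistent)}

    def delete(self, key: str) -> None:
        # "Delete memories" as an accessible, user-initiated action.
        self.ephemeral.pop(key, None)
        self.persistent.pop(key, None)

store = MemoryStore()
store.remember("cart_focus", "running shoes")    # ephemeral, this session only
store.remember("tone", "concise", persist=True)  # long-term preference
print(store.recall("tone"))                      # pre-fill future inputs (pattern 13)
print(store.view_all())
```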
13. Provide contextual input parameters

Contextual input parameters enhance the user experience by streamlining interactions and getting users to their goal faster. By leveraging user-specific data, preferences, past interactions or even data from other users with similar preferences, a GenAI system can tailor inputs and functionality to better serve user intent and decision-making.

How to use this pattern
- Leverage prior interactions: pre-fill inputs based on what the user has previously entered (see pattern 12, Design for memory and recall).
- Use autocomplete or smart defaults: as users type, offer intelligent, real-time suggestions derived from personal and global usage patterns. E.g., Perplexity offers smart next-query suggestions based on your current query thread.
- Suggest interactive UI widgets: based on system predictions, provide tailored input widgets like toasts, sliders and checkboxes to enhance user input. E.g., ElevenLabs lets users fine-tune voice generation settings by surfacing presets or defaults.

14. Design for co-pilot, co-editing or partial automation

Co-pilot is an augmentation pattern where the AI acts as a collaborative assistant, offering contextual, data-driven input while the user remains in control. This design pattern is essential in domains like strategy, ideation, writing, design and coding, where outcomes are subjective, users have unique preferences or creative input from the user is critical. Co-pilots speed up workflows, enhance creativity and reduce cognitive load, while the human retains authorship and final decision-making.

How to use this pattern
- Embed inline assistance: place AI suggestions contextually so users can easily accept, reject or modify them. E.g., Notion AI helps you draft, summarise and edit content while you control the final version.
- Preserve user intent and creative direction: let users guide the AI with inputs like goals, tone or examples, maintaining authorship and creative direction. E.g., Jasper AI allows users to set brand voice and tone guidelines, helping structure AI output to better match the user’s intent.

15. Design user controls for automation

Build UI-level mechanisms that let users manage or override automation based on their goals, context scenarios or system failure states. No system can anticipate every user context; controls give users agency and keep trust intact even when the AI gets it wrong.

How to use this pattern
- Use progressive disclosure: start with minimal automation and allow users to opt into more complex or autonomous features over time. E.g., Canva Magic Studio starts with simple AI suggestions like text or image generation, then gradually reveals advanced tools like Magic Write, AI video scenes and brand voice customisation.
- Give users automation controls: provide UI controls like toggles, sliders or rule-based settings so users choose when and how automation applies. E.g., Gmail lets users disable Smart Compose.
- Design for automation error recovery: give users a path to correct the AI when it fails. Add manual override, undo or escalation to human support. E.g., GitHub Copilot suggests code inline, but developers can easily reject, modify or undo suggestions when the output is off.

16. Design for user input error states

GenAI systems often rely on interpreting human input. When users provide ambiguous, incomplete or erroneous information, the AI may misunderstand their intent or produce low-quality outputs. Input errors often reflect a mismatch between user expectations and system understanding; addressing them gracefully is essential to maintain trust and ensure smooth interaction.

How to use this pattern
- Handle typos with grace: use spell-checking or fuzzy matching to auto-correct common input errors when confidence is high, and subtly surface the corrections (see the sketch after this list).
- Ask clarifying questions: when input is too vague or has multiple interpretations, prompt the user to provide the missing context. In conversation design, these errors occur when the intent is defined but the entity is not clear. E.g., given a low-context prompt like “What’s the capital?”, ChatGPT asks a follow-up question rather than guessing.
- Support quick correction: make it easy for users to edit or override your interpretation. E.g., ChatGPT displays an edit button beside submitted prompts, enabling users to revise their input.
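The sketch below illustrates the correct-with-confidence versus ask-to-clarify split using simple fuzzy matching. The intent list, thresholds and copy are illustrative assumptions; a production system would rely on its own NLU confidence scores:

```python
import difflib

KNOWN_INTENTS = ["track order", "cancel order", "return item", "contact support"]

def interpret(user_input: str) -> str:
    """Auto-correct when confidence is high; clarify when it's ambiguous."""
    scored = sorted(
        ((difflib.SequenceMatcher(None, user_input.lower(), intent).ratio(), intent)
         for intent in KNOWN_INTENTS),
        reverse=True,
    )
    best_score, best_intent = scored[0]
    if best_score >= 0.8:   # high confidence: correct silently but visibly
        return f"Showing results for '{best_intent}' (edit if that's wrong)"
    if best_score >= 0.5:   # plausible but uncertain: ask, don't guess
        candidates = " or ".join(i for s, i in scored[:2] if s >= 0.5)
        return f"Did you mean {candidates}?"
    return "I didn't catch that. Could you describe what you'd like to do?"

print(interpret("trak order"))  # typo: corrected with a visible, reversible fix
print(interpret("order"))       # ambiguous: clarifying question instead of a guess
```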
17. Design for AI system error states

GenAI outputs are inherently probabilistic and subject to errors ranging from hallucinations and bias to contextual misalignment. Unlike traditional systems, GenAI error states are hard to predict. Designing for them requires transparency, recovery mechanisms and user agency; a well-designed error state helps users understand the AI system’s boundaries and regain control.

A confusion matrix helps analyse AI system errors and shows how well the model is performing by counting:
- True positives
- False positives
- True negatives
- False negatives

Scenarios of AI errors and failure states
- System failures: false positives or false negatives that occur due to poor data, biases or model hallucinations. E.g., Citibank’s financial fraud system displays the message “Unusual transaction. Your card is blocked. If it was you, please verify your identity.”
- System limitation errors: true negatives that occur due to untrained use cases or gaps in knowledge. E.g., an open-domain question answering (ODQA) system given input outside its trained dataset throws the error “Sorry, we don’t have enough information. Please try a different query!”
- Contextual errors: true positives that confuse users due to poor explanations or conflicts with user expectations. E.g., a user who logs in from a new device gets locked out, and the AI responds: “Your login attempt was flagged for suspicious activity.”

How to use this pattern
- Communicate AI errors across scenarios: use phrases like “This may not be accurate” or “This seems like…”, or surface confidence levels to help calibrate trust. Use pattern 11, Convey model confidence, for low-confidence outputs.
- Offer error recovery: in case of system failures or contextual errors, provide clear paths to override, retry or escalate the issue, such as “Try a different query”, “Let me refine that” or “Contact support”.
- Enable user feedback: make it easy to report hallucinations or incorrect outputs (see pattern 18, Design to capture user feedback).

18. Design to capture user feedback

Real-world alignment needs direct user feedback to improve the model and, in turn, the product. As people interact with AI systems, their behaviour shapes the outputs they receive in the future, creating a continuous feedback loop in which both the system and user behaviour adapt over time. E.g., ChatGPT uses reaction buttons and comment boxes to collect user feedback.

How to use this pattern
- Account for implicit feedback: capture user actions such as skips, dismissals, edits or interaction frequency. These passive signals provide valuable behavioural cues that can tune recommendations or surface patterns of disinterest.
- Ask for explicit feedback: collect direct user input through thumbs up/down, NPS rating widgets or quick surveys after actions, and use it to improve both model behaviour and product fit.
- Communicate how feedback is used: let users know how their feedback shapes future experiences. This increases trust and encourages ongoing contribution.

19. Design for model evaluation

Robust GenAI products require continuous evaluation, during training as well as post-deployment. Evaluation ensures the model performs as intended, surfaces errors and hallucinations, and stays aligned with user goals, especially in high-stakes domains.

How to use this pattern
There are three key evaluation methods for improving ML systems.
- LLM-based evaluations: a separate language model acts as an automated judge. It can grade responses, explain its reasoning and assign labels like helpful/harmful or correct/incorrect. E.g., Amazon Bedrock uses the LLM-as-a-judge approach to evaluate AI model outputs: a separate trusted LLM, like Claude 3 or Amazon Titan, automatically reviews and rates responses based on helpfulness, accuracy, relevance and safety. For instance, two AI-generated replies to the same prompt are compared, and the judge model selects the better one. According to Amazon, this automation reduces evaluation costs by up to 98% and speeds up model selection without relying on slow, expensive human reviews (see the sketch after this list).
- Code-based evaluations: for structured tasks, use test suites or known outputs to validate model performance, especially for data processing, generation or retrieval.
- Human evaluation: integrate real-time UI mechanisms for users to label outputs as helpful, harmful, incorrect or unclear (see pattern 18, Design to capture user feedback).

A hybrid approach that combines LLM-as-a-judge with human evaluation can boost accuracy further, to a claimed 99%.
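A minimal sketch of a pairwise LLM-as-a-judge harness. call_llm stands in for whatever client your judge model exposes (it is an assumption, not a real API), and the JSON contract is illustrative:

```python
import json

JUDGE_PROMPT = """You are an impartial evaluator. Given a user prompt and two
candidate responses, pick the more helpful, accurate and safe one.
Reply only with JSON: {{"winner": "A" or "B", "reason": "..."}}

Prompt: {prompt}
Response A: {a}
Response B: {b}"""

def judge_pair(call_llm, prompt: str, response_a: str, response_b: str) -> dict:
    """Ask a trusted judge model to compare two candidate responses.

    `call_llm` is a hypothetical text-in, text-out function for the judge model.
    """
    raw = call_llm(JUDGE_PROMPT.format(prompt=prompt, a=response_a, b=response_b))
    verdict = json.loads(raw)
    if verdict.get("winner") not in ("A", "B"):
        raise ValueError(f"Malformed judge verdict: {raw!r}")
    return verdict

# Usage with a stubbed judge; a hybrid setup would route a sample of these
# verdicts to human reviewers to calibrate the judge itself.
fake_judge = lambda _prompt: '{"winner": "A", "reason": "More accurate."}'
print(judge_pair(fake_judge, "Summarise our refund policy", "draft A", "draft B"))
```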
20. Design for AI guardrails

Designing for AI guardrails means building practices and principles into GenAI systems that minimise harm, misinformation, toxic behaviour and bias. It is a critical consideration to:
- Protect users, including children, from harmful language, made-up facts, biases or false information.
- Build trust and adoption: when users know the system avoids hate speech and misinformation, they feel safer and are more willing to use it often.
- Maintain ethical compliance: new rules like the EU AI Act demand safe AI design, and teams must meet these standards to stay legal and socially responsible.

How to use this pattern
- Analyse and guide user inputs: if a prompt could lead to unsafe or sensitive content, guide users towards safer interactions. E.g., when the Miko robot encounters profanity, it answers “I am not allowed to entertain such language.”
- Filter outputs and moderate content: use real-time moderation to detect and filter potentially harmful AI outputs, blocking or reframing them before they are shown to the user. E.g., show a note like “This response was modified to follow our safety guidelines.”
- Use proactive warnings: subtly notify users when they approach sensitive or high-stakes information. E.g., “This is informational advice and not a substitute for medical guidance.”
- Create strong user feedback loops: make it easy for users to report unsafe, biased or hallucinated outputs to improve the AI over time through active learning. E.g., Instagram provides an in-app option for users to report harm, bias or misinformation.
- Cross-validate critical information: for high-stakes domains, back AI-generated outputs with trusted databases to catch hallucinations (see pattern 10, Provide data sources).

21. Communicate data privacy and controls

This pattern ensures GenAI applications clearly convey how user data is collected, stored, processed and protected. GenAI systems often rely on sensitive, contextual or behavioural data, and mishandling it can lead to user distrust, legal risk or unintended misuse. Clear communication around privacy safeguards helps users feel safe, respected and in control. E.g., Slack AI clearly communicates that customer data remains owned and controlled by the customer and is not used to train Slack’s or any third party’s AI models.

How to use this pattern
- Show transparency: when a GenAI feature accesses user data, display an explanation of what is being accessed and why.
- Design opt-in and opt-out flows: allow users to easily toggle data-sharing preferences.
- Enable data review and deletion: allow users to view, download or delete their data history, giving them ongoing control.

Conclusion

These GenAI UX patterns are a starting point. They represent the outcome of months of research, shaped directly and indirectly by insights from designers, researchers and technologists across leading tech companies and the broader AI communities on Medium and LinkedIn. I have done my best to cite and acknowledge contributors along the way, but I’m sure I’ve missed many; if you see something that should be credited or expanded, please reach out.

Moreover, these patterns are meant to grow and evolve as we learn more about creating AI that’s trustworthy and puts people first. If you’re a designer, researcher or builder working with AI, take these patterns, challenge them, remix them and contribute your own. Please share your suggestions in the comments, and if you would like to collaborate on refining this further, please reach out.

20+ GenAI UX patterns, examples and implementation tactics was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.