• The best tool for converting text into natural-sounding speech with AI! 🎙️🔥 ElevenLabs

    WWW.YOUTUBE.COM
  • ElevenLabs debuts Conversational AI 2.0 voice assistants that understand when to pause, speak, and take turns talking

    AI is advancing at a rapid clip for businesses, and that’s especially true of speech and voice AI models.
    Case in point: Today, ElevenLabs, the well-funded voice and AI sound effects startup founded by former Palantir engineers, debuted Conversational AI 2.0, a significant upgrade to its platform for building advanced voice agents for enterprise use cases, such as customer support, call centers, and outbound sales and marketing.
    This update introduces a host of new features designed to create more natural, intelligent, and secure interactions, making it well-suited for enterprise-level applications.
    The launch comes just four months after the debut of the original platform, reflecting ElevenLabs’ commitment to rapid development, and a day after rival voice AI startup Hume launched its own new, turn-based voice AI model, EVI 3.
    It also comes after new open source AI voice models hit the scene, prompting some AI influencers to declare ElevenLabs dead. It seems those declarations were, naturally, premature.
    According to Jozef Marko from ElevenLabs’ engineering team, Conversational AI 2.0 is substantially better than its predecessor, setting a new standard for voice-driven experiences.
    Enhancing naturalistic speech
    A key highlight of Conversational AI 2.0 is its state-of-the-art turn-taking model.
    This technology is designed to handle the nuances of human conversation, eliminating awkward pauses or interruptions that can occur in traditional voice systems.
    By analyzing conversational cues like hesitations and filler words in real time, the agent can understand when to speak and when to listen.
    This feature is particularly relevant for applications such as customer service, where agents must balance quick responses with the natural rhythms of a conversation.
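    ElevenLabs has not published the internals of this model, but the decision it has to make can be pictured with a deliberately simplified heuristic. The sketch below is illustrative only (the filler-word list and millisecond thresholds are invented for the example), not the company's actual turn-taking logic:

```python
# Toy turn-taking heuristic (illustrative only; not ElevenLabs' model).
# Given the latest transcribed words and the current silence length,
# decide whether the agent should start speaking.
FILLER_WORDS = {"um", "uh", "er", "hmm", "like"}

def should_agent_speak(last_words: list[str], silence_ms: int) -> bool:
    """Return True when the user appears to have yielded the turn."""
    if not last_words:
        # Nothing said yet: only break a long opening silence.
        return silence_ms > 1500
    trailing = last_words[-1].lower().strip(",.?!")
    if trailing in FILLER_WORDS:
        # A trailing filler word signals the user is still formulating
        # a thought, so hold back even through a fairly long pause.
        return silence_ms > 2000
    # A completed utterance plus a modest pause suggests the user
    # has finished and expects a response.
    return silence_ms > 700

print(should_agent_speak(["I", "need", "help", "with", "um"], 900))  # False: keep listening
print(should_agent_speak(["my", "order", "never", "arrived"], 900))  # True: take the turn
```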
    Multilingual support
    Conversational AI 2.0 also introduces integrated language detection, enabling seamless multilingual discussions without the need for manual configuration.
    This capability ensures that the agent can recognize the language spoken by the user and respond accordingly within the same interaction.
    The feature caters to global enterprises seeking consistent service for diverse customer bases, removing language barriers and fostering more inclusive experiences.
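    ElevenLabs has not detailed how its integrated detection works. As a rough illustration of the pattern (detect the language of each utterance automatically, then answer in kind, with no manual configuration), here is a sketch built on the open source langdetect package; the canned responses are placeholders:

```python
# Per-utterance language detection sketch (not ElevenLabs' code).
# Requires the open source langdetect package: pip install langdetect
from langdetect import detect

RESPONSES = {
    "en": "How can I help you today?",
    "es": "¿Cómo puedo ayudarle hoy?",
    "fr": "Comment puis-je vous aider aujourd'hui ?",
}

def reply_in_user_language(user_text: str) -> str:
    """Detect the language of one utterance and answer in that language."""
    try:
        lang = detect(user_text)  # returns an ISO 639-1 code, e.g. "es"
    except Exception:
        lang = "en"               # fall back when detection fails
    return RESPONSES.get(lang, RESPONSES["en"])

print(reply_in_user_language("Hola, tengo un problema con mi pedido."))
# -> "¿Cómo puedo ayudarle hoy?"
```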
    Enterprise-grade
    One of the more powerful additions is the built-in Retrieval-Augmented Generation (RAG) system. This feature allows the AI to access external knowledge bases and retrieve relevant information instantly, while maintaining minimal latency and strong privacy protections.
    For example, in healthcare settings, this means a medical assistant agent can pull up treatment guidelines directly from an institution’s database without delay. In customer support, agents can access up-to-date product details from internal documentation to assist users more effectively.
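    RAG itself is a standard technique: index a knowledge base, retrieve the passages most relevant to the user's query, and hand them to the model alongside the prompt so answers are grounded rather than guessed. The toy sketch below shows only the retrieval step, with bag-of-words cosine scoring standing in for real embeddings; it is not ElevenLabs' implementation:

```python
# Bare-bones retrieval step of a RAG pipeline (toy scoring; illustrative only).
from collections import Counter
import math

KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days of approval.",
    "Standard shipping takes 3-7 days; express shipping takes 1-2 days.",
    "Treatment guideline: confirm patient identity before any disclosure.",
]

def _vector(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k knowledge-base passages most similar to the query."""
    q = _vector(query)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda doc: _cosine(q, _vector(doc)), reverse=True)
    return ranked[:k]

# The retrieved passage would be prepended to the agent's prompt.
print(retrieve("how long do refunds take"))
```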
    Multimodality and alternate personas
    In addition to these core features, ElevenLabs’ new platform supports multimodality, meaning agents can communicate via voice, text, or a combination of both. This flexibility reduces the engineering burden on developers, as agents only need to be defined once to operate across different communication channels.
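    One way to picture "defined once" is to keep the agent definition separate from the transport around it. The sketch below is hypothetical (the helper functions are stubs, and this is not the actual ElevenLabs SDK), but it shows a single definition serving both text and voice channels:

```python
# Hypothetical "define once, run on any channel" pattern (not the real SDK).
# The three helpers are stand-ins for real STT, LLM, and TTS calls.
from dataclasses import dataclass

def run_llm(system_prompt: str, message: str) -> str:
    return f"(reply shaped by '{system_prompt}') {message}"  # stub

def transcribe(audio: bytes) -> str:
    return audio.decode()  # stub: pretend the audio is its transcript

def synthesize(text: str, voice_id: str) -> bytes:
    return text.encode()  # stub: pretend the bytes are rendered speech

@dataclass
class AgentDefinition:
    name: str
    system_prompt: str
    voice_id: str  # consumed only by voice channels

agent = AgentDefinition("support", "You are a concise support agent.", "VOICE_PLACEHOLDER")

def handle_text(agent: AgentDefinition, message: str) -> str:
    # Text channel: the definition is used as-is.
    return run_llm(agent.system_prompt, message)

def handle_voice(agent: AgentDefinition, audio: bytes) -> bytes:
    # Voice channel: the same definition, wrapped in STT and TTS.
    return synthesize(run_llm(agent.system_prompt, transcribe(audio)), agent.voice_id)
```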
    Further enhancing agent expressiveness, Conversational AI 2.0 allows multi-character mode, enabling a single agent to switch between different personas. This capability could be valuable in scenarios such as creative content development, training simulations, or customer engagement campaigns.
    Batch outbound calling
    For enterprises looking to automate large-scale outreach, the platform now supports batch calls.
    Organizations can initiate multiple outbound calls simultaneously using Conversational AI agents, an approach well-suited for surveys, alerts, and personalized messages.
    This feature aims to increase both reach and operational efficiency, offering a more scalable alternative to manual outbound efforts.
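    Mechanically, batch calling amounts to fanning out many outbound calls while staying under a concurrency cap, such as the per-plan limits listed further down. The sketch below uses a placeholder place_call function rather than the real endpoint; consult ElevenLabs' documentation for the actual batch-calling API:

```python
# Fan-out with bounded concurrency (placeholder call function; illustrative only).
import asyncio

async def place_call(number: str, agent_id: str) -> str:
    # Stand-in for an HTTP request to a batch-calling endpoint.
    await asyncio.sleep(0.1)  # simulate network latency
    return f"queued call to {number} with agent {agent_id}"

async def run_batch(numbers: list[str], agent_id: str, max_concurrent: int = 20) -> list[str]:
    sem = asyncio.Semaphore(max_concurrent)  # respect the plan's concurrency limit

    async def guarded(number: str) -> str:
        async with sem:
            return await place_call(number, agent_id)

    return await asyncio.gather(*(guarded(n) for n in numbers))

print(asyncio.run(run_batch(["+15550100", "+15550101"], agent_id="survey-agent")))
```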
    Enterprise-grade standards and pricing plans
    Beyond the features that enhance communication and engagement, Conversational AI 2.0 places a strong emphasis on trust and compliance. The platform is fully HIPAA-compliant, a critical requirement for healthcare applications that demand strict privacy and data protection. It also supports optional EU data residency, aligning with data sovereignty requirements in Europe.
    ElevenLabs reinforces these compliance-focused features with enterprise-grade security and reliability. Designed for high availability and integration with third-party systems, Conversational AI 2.0 is positioned as a secure and dependable choice for businesses operating in sensitive or regulated environments.
    As far as pricing is concerned, here are the available subscription plans that include Conversational AI currently listed on ElevenLabs’ website:

    Free: $0/month, includes 15 minutes, 4 concurrency limit, requires attribution and no commercial licensing.
    Starter: $5/month, includes 50 minutes, 6 concurrency limit.
    Creator: $11/month (discounted from $22), includes 250 minutes, 6 concurrency limit, ~$0.12 per additional minute.
    Pro: $99/month, includes 1,100 minutes, 10 concurrency limit, ~$0.11 per additional minute.
    Scale: $330/month, includes 3,600 minutes, 20 concurrency limit, ~$0.10 per additional minute.
    Business: $1,320/month, includes 13,750 minutes, 30 concurrency limit, ~$0.096 per additional minute.

    A new chapter in realistic, naturalistic AI voice interactions
    As stated in the company’s video introducing the new release, “The potential of conversational AI has never been greater. The time to build is now.”
    With Conversational AI 2.0, ElevenLabs aims to provide the tools and infrastructure for enterprises to create truly intelligent, context-aware voice agents that elevate the standard of digital interactions.
    For those interested in learning more, ElevenLabs encourages developers and organizations to explore its documentation, visit the developer portal, or reach out to the sales team to see how Conversational AI 2.0 can enhance their customer experiences.

    VENTUREBEAT.COM
  • Melania Trump welcomes you into the AI audiobook era with memoir Melania

    First lady Melania Trump signs the 'Take It Down' Act.
    Credit: Chip Somodevilla/Getty Images

    Melania Trump announced on Friday that she is releasing an AI audiobook version of her memoir, Melania. In an X post, the first lady welcomed followers into "a new era in publishing" and announced that an audiobook featuring an AI-generated version of her voice will be released in the ElevenReader app. "I am honored to bring you Melania — The AI Audiobook — narrated entirely using artificial intelligence in my own voice. Let the future of publishing begin."

    The AI audiobook of the First Lady's book, Melania, was released on May 22 through the ElevenReader app, a text-to-voice AI app that also lets authors create audiobooks from their work. Variety reports that the AI audiobook will soon be available in additional languages, from Spanish to Hindi. "Writing this memoir has been a deeply personal and reflective journey for me," Trump says in a trailer for the book. "As a private person who has often been the subject of public scrutiny and misrepresentation, I feel a responsibility to clarify the facts. I believe it is important to share my perspective, the truth."

    AI is set to transform the audiobook industry
    Audiobooks featuring AI-generated voices are a hot topic in the publishing world. On May 13, Amazon announced that it was working with publishers and authors to expand Audible's catalog of AI-narrated audiobooks. In a blog post, the company said it was "committed to working closely with authors, narrators, and publishers to ensure these technologies meet their creative and business needs while maintaining the quality standards our listeners expect."
    Using artificial intelligence tools, it's now possible to recreate someone's voice, and in virtually every language. So, rather than Trump going into a recording studio and reading her book line by line, AI tools can generate an entire audiobook recording based on samples of her voice. When done correctly, it's nearly impossible to tell the difference between the real voice and the AI version. Thus, companies can save significant amounts of time and money, and not just for audiobooks. Just this week, The New York Times reported that AI is already being used to reduce the costs of producing animation by up to 90 percent.
    Many artists — including actors and voice actors — see the use of artificial intelligence as a direct threat to their livelihoods. When the video game Fortnite recently introduced a Darth Vader character with an AI-generated voice based on actor James Earl Jones, the SAG-AFTRA union filed an unfair labor practice charge with the National Labor Relations Board. Mashable has also reported on the backlash to the use of artificial intelligence to recreate the likeness of Agatha Christie and to generate material for movies like The Brutalist and Late Night With the Devil.
    ElevenLabs, the company that makes the ElevenReader app, recently received a $3.3 billion valuation. Variety also reports that the ElevenReader app features audiobooks with the voices of deceased celebrities, including Judy Garland and Jerry Garcia. These types of AI resurrection projects are becoming more common and more controversial.
    You can order the new AI audiobook at the First Lady's website.

    MASHABLE.COM
  • At TechCrunch Sessions: AI, Artemis Seaford and Ion Stoica confront the ethical crisis — when AI crosses the line

    As generative AI becomes faster, cheaper, and more convincing, the ethical stakes are no longer theoretical. What happens when the tools to deceive become widely accessible? And how do we build systems that are powerful — but safe enough to trust?
    At TechCrunch Sessions: AI, taking place June 5 at UC Berkeley’s Zellerbach Hall, Artemis Seaford, Head of AI Safety at ElevenLabs, and Ion Stoica, co-founder of Databricks and professor at UC Berkeley, will take the main stage to unpack the ethical challenges of today’s AI. Their conversation will cut to the heart of one of the most urgent questions in tech: What are we unleashing — and can we still steer it?

    How Seaford and Stoica are tackling AI’s toughest ethical questions
    Artemis brings a rare blend of academic depth and frontline experience. At ElevenLabs, she leads AI safety efforts focused on media authenticity and abuse prevention. Her background spans OpenAI, Meta, and global risk management at the intersection of law, policy, and geopolitics. Expect a grounded, clear-eyed take on how deepfakes are evolving, what new risks are emerging, and which interventions are actually working.
    Ion, meanwhile, brings a systems-level view. He’s not just a leader in AI research—he’s helped build the infrastructure behind it. From Spark to Ray to ChatBot Arena, his open source projects power many of today’s most advanced AI deployments. As executive chairman of Databricks and a founder of several AI-driven companies, Ion knows what it takes to scale responsibly — and where today’s tools still fall short.
    Together, they’ll unpack the ethical blind spots in today’s development cycles, explore how safety can be embedded into core architectures, and examine the roles that industry, academia, and regulation must play in the years ahead.
    Join the front lines of AI — insight, access, and $600+ in ticket savings
    This session is part of a day-long exploration at the front lines of artificial intelligence, featuring speakers from OpenAI, Google Cloud, Anthropic, Cohere, and more. Expect tactical insight, candid dialogue, and a rare cross-section of technologists, researchers, and founders — all in one room. Plus, you’ll have the chance to engage with these top minds through breakouts and top-tier networking.
    Grab your ticket now and save big — over $300 off, plus 50% off a second ticket. Whether you’re building the future or just trying to keep up with it, you and your plus-one should be in the room for this.

    The tools are moving fast. The ethics need to catch up. Don’t get left behind — learn how to stay compliant.
    TECHCRUNCH.COM
  • Fortnite's Foul-Mouthed AI Darth Vader Sparks Major Controversy

    In case you haven't heard, Fortnite — the megahit video game from Epic Games that's stuffed with characters from every media franchise imaginable, not to mention real celebrities — has become a cause célèbre after it introduced Darth Vader as an in-game boss. This was no ordinary homage to the "Star Wars" villain. It uses "conversational AI" to recreate the iconic voice of the late actor James Earl Jones, allowing gamers to chat with the Sith Lord and ask him pretty much any question they want. Though it's resulted in plenty of light-hearted fun, gamers, being gamers, immediately set to work tricking the AI into swearing and saying slurs.
    But that's only the beginning of the controversy, if you can believe it. On Monday, the Screen Actors Guild blasted Epic Games for its AI Vader stunt and filed an unfair labor complaint against the developer with the National Labor Relations Board, arguing that Epic's use of AI violated their agreement by replacing human performers without notice.
    "Fortnite's signatory company, Llama Productions, chose to replace the work of human performers with AI technology," SAG-AFTRA said in a statement. "Unfortunately, they did so without providing any notice of their intent to do this and without bargaining with us over appropriate terms." SAG-AFTRA is still on strike against the video game industry, though actors are still allowed to work on Fortnite and some other exempted projects. Voice actors, in general, have struggled to win the same protections against AI as performers in other fields. It's easier and far cheaper to fake someone's voice and pass it off as real than it is to mimic a visual performance.
    For this stunt, Epic used Google's Gemini 2.0 model to generate the wording of Vader's responses, and ElevenLabs' Flash v2.5 model for the audio.
    Whatever your thoughts on the ethics of resurrecting a dead actor's voice with AI, no theft is involved with Epic's AI Vader — just, if SAG is to be believed, dubious labor practices. It was created in collaboration with Jones' estate, according to an Epic press release featuring a statement from the family. Jones, shortly before he passed away, signed a contract with Disney allowing the AI startup Respeecher to clone his voice. That's all fine with SAG-AFTRA. It doesn't necessarily have a problem with actors — or their estates — licensing AI replicas of themselves. "However, we must protect our right to bargain terms and conditions around uses of voice that replace the work of our members," the union wrote, "including those who previously did the work of matching Darth Vader's iconic rhythm and tone in video games."
    We'll have to see what the labor board and Epic make of SAG-AFTRA's claims. In the meantime, it's pretty jarring to see an AI version of Jones' legendary Vader performance out in the wild and answering silly questions in a video game.
    More on AI: Even Audiobooks Aren't Safe From AI Slop
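    The two-model pipeline described above, an LLM for the wording and a TTS model for the audio, is straightforward to wire together. The sketch below is a generic illustration rather than Epic's actual integration; the voice ID is a placeholder, and the ElevenLabs model name is an assumption for the Flash v2.5 tier:

```python
# Generic LLM-to-TTS pipeline sketch (not Epic's integration).
# Assumes the google-generativeai and requests packages; the voice ID is a
# placeholder and eleven_flash_v2_5 is an assumed model identifier.
import os
import requests
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
llm = genai.GenerativeModel("gemini-2.0-flash")

def character_reply(player_question: str) -> bytes:
    # 1) Generate the character's wording with the LLM.
    prompt = f"Answer briefly, in character as Darth Vader: {player_question}"
    text = llm.generate_content(prompt).text

    # 2) Synthesize the line with ElevenLabs' text-to-speech endpoint.
    voice_id = "PLACEHOLDER_VOICE_ID"  # a licensed voice in practice
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
        json={"text": text, "model_id": "eleven_flash_v2_5"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.content  # raw audio bytes
```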
    FUTURISM.COM
  • Voice actor union strikes back after Fortnite debuts controversial AI-powered Darth Vader

    Chris Kerr, Senior Editor, News | May 20, 2025 | 2 Min Read | Image via Epic Games
    Voice performer union SAG-AFTRA has filed an unfair labor practice charge against Epic Games subsidiary Llama Productions for deploying a controversial AI-powered Darth Vader chatbot in Fortnite.
    The AI-powered version of the iconic villain arrived in Fortnite last week. It imitated the performance of deceased Darth Vader voice actor James Earl Jones by leaning on two conversational AI models in the form of Google Gemini 2.0 Flash and ElevenLabs Flash v2.5.
    Epic described the feature as "experimental" in a blog post but still implored players to speak with the character. Naturally, they obliged, and soon had Darth Vader dishing out all manner of statements ranging from bizarre to downright offensive. Epic recognized the issue and quickly deployed a hotfix, but not before clips of the AI character's vocal misdemeanours had flooded social media.
    Now, the company has also caught the attention of SAG-AFTRA, which is currently striking against major game studios to secure better AI protections for its union members.
    In a statement, SAG-AFTRA acknowledged the rights of its members and their estates to control the use of their digital replicas, but said it must also "protect our right to bargain terms and conditions around uses of voice that replace the work of our members."
    That, it noted, includes those performers who previously helped bring Darth Vader to life in video games.
    SAG-AFTRA accuses Fortnite maker Epic of replacing human performers with AI technology
    SAG-AFTRA claimed that opportunity has now been taken away by Llama Productions, which it said chose to "replace the work of human performers with AI technology."
    "Unfortunately, [they] did so without providing any notice of their intent to do this and without bargaining with us over appropriate terms. As such, we have filed an unfair labor practice charge with the NLRB against Llama Productions," continued SAG-AFTRA.
    The full unfair labor practice charge states that Llama "failed and refused to bargain in good faith with the union by making unilateral changes to terms and conditions of employment, without providing notice to the union or the opportunity to bargain, by utilizing AI-generated voices to replace bargaining unit work on the Interactive Program Fortnite."
    In short: SAG-AFTRA feels Llama and parent company Epic refused to open a constructive dialogue before deploying an AI-generated character that might have instead been voiced by a human actor.
    SAG-AFTRA recently detailed the AI protections it is seeking in order to ratify a new Interactive Media Agreement with a bargaining group containing studios like Take 2 Productions, EA, Activision, and WB Games. Prior to that, the union suggested there were "alarming loopholes" in the AI proposals being tabled by those studios.
  • 20+ GenAI UX patterns, examples and implementation tactics

    A shared language for product teams to build usable, intelligent and safe GenAI experiences beyond just the model.

    Generative AI introduces a new way for humans to interact with systems by focusing on intent-based outcome specification. GenAI introduces novel challenges because its outputs are probabilistic, requiring an understanding of variability, memory, errors, hallucinations and malicious use, which creates an essential need for principles and design patterns, as described by IBM. Moreover, any AI product is a layered system where the LLM is just one ingredient; memory, orchestration, tool extensions, UX and agentic user flows build the real magic!

    This article is my research and documentation of evolving GenAI design patterns that provide a shared language for product managers, data scientists, and interaction designers to create products that are human-centred, trustworthy and safe. By applying these patterns, we can bridge the gap between user needs, technical capabilities and the product development process.

    Here are 21 GenAI UX patterns:
    1. GenAI or no GenAI
    2. Convert user needs to data needs
    3. Augment or automate
    4. Define level of automation
    5. Progressive AI adoption
    6. Leverage mental models
    7. Convey product limits
    8. Display chain of thought (CoT)
    9. Leverage multiple outputs
    10. Provide data sources
    11. Convey model confidence
    12. Design for memory and recall
    13. Provide contextual input parameters
    14. Design for co-pilot, co-editing or partial automation
    15. Define user controls for automation
    16. Design for user input error states
    17. Design for AI system error states
    18. Design to capture user feedback
    19. Design for model evaluation
    20. Design for AI safety guardrails
    21. Communicate data privacy and controls

    1. GenAI or no GenAI

    Evaluate whether GenAI improves UX or introduces complexity. Often, heuristic-based solutions are easier to build and maintain.

    Scenarios when GenAI is beneficial:
    - Tasks that are open-ended, creative and augment the user. E.g., writing prompts, summarizing notes, drafting replies.
    - Creating or transforming complex outputs. E.g., converting a sketch into website code.
    - Where structured UX fails to capture user intent.

    Scenarios when GenAI should be avoided:
    - Outcomes that must be precise, auditable or deterministic. E.g., tax forms or legal contracts.
    - Users expect clear and consistent information. E.g., open source software documentation.

    How to use this pattern:
    - Determine the friction points in the customer journey.
    - Assess technology feasibility: Determine if AI can address the friction point. Evaluate scale, dataset availability, error risk and economic ROI.
    - Validate user expectations: Determine if the AI solution erodes user expectations by evaluating whether the system augments human effort or replaces it entirely, as outlined in pattern 3, Augment vs. automate, and whether it erodes existing mental models (pattern 6).

    2. Convert user needs to data needs

    This pattern ensures GenAI development begins with user intent and the data model required to achieve it. GenAI systems are only as good as the data they're trained on. But real users don't speak in rows and columns; they express goals, frustrations, and behaviours. If teams fail to translate user needs into structured, model-ready inputs, the resulting system or product may optimise for the wrong outcomes and thus cause user churn.

    How to use this pattern:
    - Collaborate as a cross-functional team of PMs, product designers and data scientists, and align on user problems worth solving.
    - Define user needs using triangulated research (qualitative + quantitative + emergent), synthesising user insights with the JTBD framework, an Empathy Map to visualise user emotions and perspectives, and a Value Proposition Canvas to align user gains and pains with features.
    - Define data needs and documentation by selecting a suitable data model, performing gap analysis and iteratively refining the data model as needed. Once you understand the why, translate it into the what for the model: what features, labels, examples, and contexts will your AI model need to learn this behaviour? Use structured collaboration to figure this out (sketched below).
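    The sketch below is one hypothetical way to make that translation concrete in code: a typed mapping from a JTBD-style user need to the features, labels and example sources a model team can act on. The interfaces and field names are illustrative assumptions, not a standard schema.

```typescript
// A minimal sketch (hypothetical names) of translating a JTBD-style user
// need into data requirements a model team can act on.

interface UserNeed {
  job: string;            // what the user is trying to get done
  pains: string[];        // frustrations observed in research
  desiredOutcome: string; // how the user judges success
}

interface DataRequirement {
  feature: string;  // signal the model needs as input
  label: string;    // target the model should learn to predict
  examples: string; // where training examples will come from
}

// Each user need should map to at least one concrete data requirement;
// a need that maps to none is a signal the problem isn't ready for GenAI.
function toDataNeeds(need: UserNeed): DataRequirement[] {
  return need.pains.map((pain) => ({
    feature: `context capturing: ${pain}`,
    label: `output that resolves: ${need.desiredOutcome}`,
    examples: `logged sessions where users attempted: ${need.job}`,
  }));
}

const need: UserNeed = {
  job: "Summarize a long support thread",
  pains: ["threads are too long to skim"],
  desiredOutcome: "a 3-sentence summary an agent can trust",
};
console.log(toDataNeeds(need));
```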
    3. Augment vs automate

    One of the critical decisions in GenAI apps is whether to fully automate a task or to augment human capability. Use this pattern to align user intent and control preferences with the technology.

    - Automation is best for tasks users prefer to delegate, especially when they are tedious, time-consuming or unsafe. E.g., Intercom Fin AI automatically summarizes long email threads into internal notes, saving time on repetitive, low-value tasks.
    - Augmentation enhances tasks users want to remain involved in by increasing efficiency, creativity and control. E.g., Magenta Studio in Ableton supports creative controls to manipulate and create new music.

    How to use this pattern:
    - To select the best approach, evaluate user needs and expectations using research synthesis tools like the empathy map and value proposition canvas.
    - Test and validate whether the approach erodes the user experience or enhances it.

    4. Define level of automation

    In AI systems, automation refers to how much control is delegated to the AI vs. the user. This is a strategic UX pattern for deciding the degree of automation based upon user pain points, context scenarios and expectations from the product.

    Levels of automation:
    - No automation: The AI system provides assistance and suggestions to the user but requires the user to make all the decisions. E.g., Grammarly highlights grammar issues but the user accepts or rejects corrections.
    - Partial automation / co-pilot / co-editor: The AI initiates actions or generates content, but the user reviews or intervenes as needed. E.g., GitHub Copilot suggests code that developers can accept, modify, or ignore.
    - Full automation: The AI system performs tasks without user intervention, often based on predefined rules, tools and triggers. Full automation in GenAI is often referred to as an agentic system. E.g., Ema can autonomously plan and execute multi-step tasks like researching competitors, generating a report and emailing it without user prompts or intervention at each step.

    How to use this pattern:
    - Evaluate the user pain point to be automated and the risk involved: Automating tasks is most effective when the associated risk is low, without severe consequences in case of failure. Low-risk tasks such as sending automated reminders, promotional emails, filtering spam or processing routine customer queries can be automated with minimal downside while saving time and resources. High-risk tasks such as making medical diagnoses, sending business-critical emails, or executing financial trades require careful oversight due to the potential for significant harm if errors occur.
    - Evaluate and design for a particular automation level: Evaluate whether the user pain point should fall under No Automation, Partial Automation or Full Automation based upon user expectations and goals (see the sketch after this list).
    - Define user controls for automation (pattern 15).
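    As a concrete illustration of the risk-based choice above, here is a minimal sketch that gates the automation level by task risk. The three-level model, the risk tiers and the task names are assumptions for illustration, not a prescribed taxonomy.

```typescript
// A sketch of choosing an automation level from task risk and user
// involvement preferences; thresholds and examples are illustrative.

type AutomationLevel = "none" | "partial" | "full";

interface Task {
  name: string;
  riskOfFailure: "low" | "medium" | "high";
  userWantsInvolvement: boolean;
}

function chooseAutomationLevel(task: Task): AutomationLevel {
  if (task.riskOfFailure === "high") return "none"; // human makes all decisions
  if (task.riskOfFailure === "medium" || task.userWantsInvolvement) {
    return "partial"; // co-pilot: AI proposes, user approves
  }
  return "full"; // agentic: AI acts on predefined triggers
}

console.log(chooseAutomationLevel({
  name: "spam filtering", riskOfFailure: "low", userWantsInvolvement: false,
})); // "full"
console.log(chooseAutomationLevel({
  name: "medical triage", riskOfFailure: "high", userWantsInvolvement: true,
})); // "none"
```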
    5. Progressive GenAI adoption

    When users first encounter a product built on new technology, they often wonder what the system can and can't do, how it works and how they should interact with it. This pattern offers a multi-dimensional strategy to help users onboard an AI product or feature, mitigate errors, and align with user readiness to deliver an informed and human-centered UX.

    How to use this pattern (a culmination of many other patterns):
    - Focus on communicating benefits from the start: Avoid diving into details about the technology and highlight how the AI brings new value.
    - Simplify the onboarding experience: Let users experience the system's value before asking for data-sharing preferences; give instant access to basic AI features first. Encourage users to sign up later to unlock advanced AI features or share more details. E.g., Adobe Firefly progressively onboards users from basic to advanced AI features.
    - Define the level of automation (pattern 4) and gradually increase autonomy or complexity.
    - Provide explainability and trust by designing for errors.
    - Communicate data privacy and controls (pattern 21) to clearly convey how user data is collected, stored, processed and protected.

    6. Leverage mental models

    Mental models help users predict how a system will work and, therefore, influence how they interact with an interface. When a product aligns with a user's existing mental models, it feels intuitive and easy to adopt. When it clashes, it can cause frustration, confusion, or abandonment.

    E.g., GitHub Copilot builds upon developers' mental models from traditional code autocomplete, easing the transition to AI-powered code suggestions. Adobe Photoshop builds upon the familiar approach of extending an image using rectangular controls by integrating its Generative Fill feature, which intelligently fills the newly created space.

    How to use this pattern:
    - Identify and build upon existing mental models by asking: What is the user journey, and what is the user trying to do? What mental models might already be in place? Does this product break any intuitive patterns of cause and effect?
    - Are you breaking an existing mental model? If yes, clearly explain how and why. Good onboarding, microcopy, and visual cues can help bridge the gap.

    7. Convey product limits

    This pattern involves clearly conveying what an AI model can and cannot do, including its knowledge boundaries, capabilities and limitations. It builds user trust, sets appropriate expectations, prevents misuse, and reduces frustration when the model fails or behaves unexpectedly.

    How to use this pattern:
    - Explicitly state model limitations: Show contextual cues for outdated knowledge or lack of real-time data. E.g., Claude states its knowledge cutoff when the question falls outside its knowledge domain.
    - Provide fallbacks or escalation options when the model cannot provide a suitable output. E.g., Amazon Rufus, when asked about something unrelated to shopping, says it doesn't have access to factual information and can only assist with shopping-related questions and requests (see the sketch below).
    - Make limitations visible in product marketing, onboarding, tooltips or response disclaimers.
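    In the spirit of the Rufus example, here is a minimal sketch of a scope check with an explicit fallback message. The domain classifier, supported-domain list and copy are all hypothetical stand-ins for a real intent-classification step.

```typescript
// A sketch of conveying product limits: detect out-of-scope queries and
// answer with a clear boundary plus a way forward, instead of guessing.

const SUPPORTED_DOMAINS = ["shopping", "orders", "returns"];

function classifyDomain(query: string): string {
  // Stand-in for a real intent classifier.
  if (/order|return|buy|price/i.test(query)) return "shopping";
  return "out_of_scope";
}

function respond(query: string): string {
  const domain = classifyDomain(query);
  if (!SUPPORTED_DOMAINS.includes(domain)) {
    // State the boundary explicitly and offer an escalation path.
    return "I can only help with shopping-related questions. Try asking about an order or a product.";
  }
  return `Routing to the ${domain} assistant...`;
}

console.log(respond("What's the capital of France?")); // fallback message
console.log(respond("Where is my order?"));            // in scope
```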
    8. Display chain of thought (CoT)

    In AI systems, the chain-of-thought (CoT) prompting technique enhances the model's ability to solve complex problems by mimicking a more structured, step-by-step thought process like that of a human. CoT display is a UX pattern that improves transparency by revealing how the AI arrived at its conclusions. This fosters user trust, supports interpretability, and opens up space for user feedback, especially in high-stakes or ambiguous scenarios.

    E.g., Perplexity enhances transparency by displaying its processing steps, helping users understand the thoughtful process behind the answers. Khanmigo, an AI tutoring system, guides students step-by-step through problems, mimicking human reasoning to enhance understanding and learning.

    How to use this pattern:
    - Show status like "researching" and "reasoning" to communicate progress, reduce user uncertainty and make wait times feel shorter.
    - Use progressive disclosure: Start with a high-level summary, and allow users to expand details as needed.
    - Provide AI tooling transparency: Clearly display external tools and data sources the AI uses to generate recommendations.
    - Show confidence and uncertainty: Indicate AI confidence levels and highlight uncertainties when relevant.

    9. Leverage multiple outputs

    GenAI can produce varied responses to the same input due to its probabilistic nature. This pattern exploits that variability by presenting multiple outputs side by side. Showing diverse options helps users creatively explore, compare, refine or make better decisions that best align with their intent. E.g., Google Gemini provides multiple options to help users explore, refine and make better decisions.

    How to use this pattern:
    - Explain the purpose of variation: Help users understand that differences across outputs are intentional and meant to offer choice.
    - Enable edits: Let users rate, select, remix, or edit outputs seamlessly to shape outcomes and provide feedback. E.g., Midjourney lets users adjust the prompt and guide variations and edits using Remix.

    10. Provide data sources

    Articulating data sources in a GenAI application is essential for transparency, credibility and user trust. Clearly indicating where the AI derives its knowledge helps users assess the reliability of responses and avoid misinformation. This is especially important in high-stakes factual domains like healthcare, finance or legal guidance, where decisions must be based on verified data.

    How to use this pattern:
    - Cite credible sources inline: Display sources as footnotes, tooltips, or collapsible links. E.g., NotebookLM adds citations to its answers and links each answer directly to the relevant part of the user's uploaded documents.
    - Disclose training data scope clearly: For generative tools, offer a simple explanation of what data the model was trained on and what wasn't included. E.g., Adobe Firefly discloses that its Generative Fill feature is trained on stock imagery, openly licensed work and public domain content where the copyright has expired.
    - Provide source-level confidence: In cases where multiple sources contribute, visually differentiate higher-confidence or more authoritative sources.

    11. Convey model confidence

    AI-generated outputs are probabilistic and can vary in accuracy. Showing confidence scores communicates how certain the model is about its output. This helps users assess reliability and make better-informed decisions.

    How to use this pattern:
    - Assess context and decision stakes: Whether to show model confidence depends on the context and its impact on user decision-making. In high-stakes scenarios like healthcare, finance or legal advice, displaying confidence scores is crucial. However, in low-stakes scenarios like AI-generated art or storytelling, confidence may not add much value and could even introduce unnecessary confusion.
    - Choose the right visualization: If design research shows that displaying model confidence aids decision-making, the next step is to select the right visualization method. Percentages, progress bars or verbal qualifiers can communicate confidence effectively; the apt method depends on the application's use case and user familiarity. E.g., Grammarly attaches verbal qualifiers like "likely" to the content it generates.
    - Guide user action during low-confidence scenarios: Offer paths forward such as asking clarifying questions or offering alternative options (see the sketch below).
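    A minimal sketch of the verbal-qualifier approach, assuming reasonably calibrated confidence scores; the band boundaries and wording are illustrative choices, not a standard.

```typescript
// A sketch mapping a raw model probability to a verbal qualifier, plus a
// low-confidence path that asks for clarification instead of guessing.

function toQualifier(confidence: number): string {
  if (confidence >= 0.9) return "almost certainly";
  if (confidence >= 0.7) return "likely";
  if (confidence >= 0.5) return "possibly";
  return "uncertain";
}

function renderAnswer(answer: string, confidence: number): string {
  // Low-confidence outputs get a path forward rather than a bare guess.
  if (confidence < 0.5) {
    return "I'm not sure about this one. Could you clarify the question?";
  }
  return `${answer} (${toQualifier(confidence)})`;
}

console.log(renderAnswer("The invoice total is $420.", 0.93)); // qualified answer
console.log(renderAnswer("The invoice total is $420.", 0.42)); // clarifying fallback
```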
    12. Design for memory and recall

    Memory and recall is an important concept and design pattern that enables the AI product to store and reuse information from past interactions, such as user preferences, feedback, goals or task history, to improve continuity and context awareness. It:
    - Enhances personalization by remembering past choices or preferences.
    - Reduces user burden by avoiding repeated input requests, especially in multi-step or long-form tasks.
    - Supports complex, longitudinal workflows (project planning, learning journeys) by referencing or building on past progress.

    Memory used to access information can be ephemeral (session-scoped) or persistent (across sessions) and may include conversational context, behavioural signals, or explicit inputs.

    How to use this pattern:
    - Define the user context and choose a memory type: Choose ephemeral or persistent memory, or both, based upon the use case. A shopping assistant might track interactions in real time without needing to persist data for future sessions, whereas personal assistants need long-term memory for personalization.
    - Use memory intelligently in user interactions: Build base prompts for the LLM to recall and communicate information contextually.
    - Communicate transparency and provide controls: Clearly communicate what's being saved and let users view, edit or delete stored memory. Make "delete memories" an accessible action (see the sketch following pattern 13). E.g., ChatGPT offers extensive controls across its platform to view, update, or delete memories anytime.

    13. Provide contextual input parameters

    Contextual input parameters enhance the user experience by streamlining user interactions and getting users to their goal faster. By leveraging user-specific data, preferences, past interactions or even data from other users with similar preferences, a GenAI system can tailor inputs and functionalities to better meet user intent and decision making.

    How to use this pattern:
    - Leverage prior interactions: Pre-fill inputs based on what the user has previously entered. Refer to pattern 12, Memory and recall.
    - Use autocomplete or smart defaults: As users type, offer intelligent, real-time suggestions derived from personal and global usage patterns. E.g., Perplexity offers smart next-query suggestions based on your current query thread.
    - Suggest interactive UI widgets: Based upon system prediction, provide tailored input widgets like toasts, sliders and checkboxes to enhance user input. E.g., ElevenLabs allows users to fine-tune voice generation settings by surfacing presets or defaults.
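    Patterns 12 and 13 both lean on stored context. Below is a minimal sketch of pattern 12's guidance: an in-memory store that separates ephemeral from persistent entries and exposes the view and delete controls the pattern calls for. The class and method names are hypothetical; a real product would back this with durable storage.

```typescript
// A sketch of ephemeral vs. persistent agent memory with user-facing
// transparency ("list") and control ("forget") actions.

interface MemoryItem {
  key: string;
  value: string;
  persistent: boolean; // survives across sessions when true
}

class AgentMemory {
  private items = new Map<string, MemoryItem>();

  remember(key: string, value: string, persistent = false): void {
    this.items.set(key, { key, value, persistent });
  }

  recall(key: string): string | undefined {
    return this.items.get(key)?.value;
  }

  // Transparency: users can see everything the agent has stored.
  list(): MemoryItem[] {
    return [...this.items.values()];
  }

  // Control: "delete memories" is a first-class action.
  forget(key: string): boolean {
    return this.items.delete(key);
  }

  endSession(): void {
    // Ephemeral entries are dropped when the session ends.
    for (const [key, item] of this.items) {
      if (!item.persistent) this.items.delete(key);
    }
  }
}

const memory = new AgentMemory();
memory.remember("cart", "running shoes", false);    // ephemeral
memory.remember("preferred_tone", "concise", true); // persistent
memory.endSession();
console.log(memory.list()); // only the persistent preference remains
```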
    14. Design for co-pilot / co-editing / partial automation

    Co-pilot is an augmentation pattern where the AI acts as a collaborative assistant, offering contextual and data-driven insights while the user remains in control. This design pattern is essential in domains like strategy, ideation, writing, design or coding, where outcomes are subjective, users have unique preferences or creative input from the user is critical. Co-pilots speed up workflows, enhance creativity and reduce cognitive load, but the human retains authorship and final decision-making.

    How to use this pattern:
    - Embed inline assistance: Place AI suggestions contextually so users can easily accept, reject or modify them. E.g., Notion AI helps you draft, summarise and edit content while you control the final version.
    - Support user intent and creative direction: Let users guide the AI with input like goals, tone, or examples, maintaining authorship and creative direction. E.g., Jasper AI allows users to set brand voice and tone guidelines, helping structure AI output to better match the user's intent.

    15. Design user controls for automation

    Build UI-level mechanisms that let users manage or override automation based upon user goals, context scenarios or system failure states. No system can anticipate all user contexts; controls give users agency and keep trust intact even when the AI gets it wrong.

    How to use this pattern:
    - Use progressive disclosure: Start with minimal automation and allow users to opt into more complex or autonomous features over time. E.g., Canva Magic Studio starts with simple AI suggestions like text or image generation, then gradually reveals advanced tools like Magic Write, AI video scenes and brand voice customisation.
    - Give users automation controls: Provide UI controls like toggles, sliders, or rule-based settings to let users choose when and how automation applies. E.g., Gmail lets users disable Smart Compose.
    - Design for automation error recovery: Give users correction paths when AI fails. Add manual override, undo, or escalation options to human support. E.g., GitHub Copilot suggests code inline, but developers can easily reject, modify or undo suggestions when output is off.

    16. Design for user input error states

    GenAI systems often rely on interpreting human input. When users provide ambiguous, incomplete or erroneous information, the AI may misunderstand their intent or produce low-quality outputs. Input errors often reflect a mismatch between user expectations and system understanding; addressing them gracefully is essential to maintain trust and ensure smooth interaction.

    How to use this pattern:
    - Handle typos with grace: Use spell-checking or fuzzy matching to auto-correct common input errors when confidence is high, and subtly surface corrections.
    - Ask clarifying questions: When input is too vague or has multiple interpretations, prompt the user to provide missing context. In conversation design, these errors occur when the intent is defined but the entity is not clear. E.g., when given a low-context prompt like "What's the capital?", ChatGPT asks follow-up questions rather than guessing (see the sketch below).
    - Support quick correction: Make it easy for users to edit or override your interpretation. E.g., ChatGPT displays an edit button beside submitted prompts, enabling users to revise their input.
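    A minimal sketch of the clarifying-question tactic, using the intent/entity split from conversation design. The regex "parser" is a toy stand-in for a real NLU step, and all names here are illustrative.

```typescript
// A sketch of handling ambiguous input: when intent is clear but the
// entity is missing, ask a clarifying question instead of guessing.

interface ParsedInput {
  intent: string | null;
  entity: string | null;
}

function parse(userInput: string): ParsedInput {
  // Stand-in for a real natural-language-understanding step.
  if (/capital/i.test(userInput)) {
    const match = userInput.match(/capital of (\w+)/i);
    return { intent: "lookup_capital", entity: match ? match[1] : null };
  }
  return { intent: null, entity: null };
}

function reply(userInput: string): string {
  const { intent, entity } = parse(userInput);
  if (intent && !entity) {
    // Intent defined, entity unclear: ask, don't guess.
    return "Which country's capital do you mean?";
  }
  if (!intent) {
    return "I didn't quite get that. Could you rephrase?";
  }
  return `Looking up the capital of ${entity}...`;
}

console.log(reply("What's the capital?"));           // clarifying question
console.log(reply("What's the capital of France?")); // proceeds
```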
    17. Design for AI system error states

    GenAI outputs are inherently probabilistic and subject to errors ranging from hallucinations and bias to contextual misalignments. Unlike traditional systems, GenAI error states are hard to predict. Designing for these states requires transparency, recovery mechanisms and user agency; a well-designed error state can help users understand AI system boundaries and regain control.

    A confusion matrix helps analyse AI system errors and provides insight into how well the model is performing by showing the counts of:
    - True positives
    - False positives
    - True negatives
    - False negatives

    Scenarios of AI errors and failure states:
    - System failure: False positives or false negatives occur due to poor data, biases or model hallucinations. E.g., a Citibank financial fraud system displays the message "Unusual transaction. Your card is blocked. If it was you, please verify your identity."
    - System limitation errors: True negatives occur due to untrained use cases or gaps in knowledge. E.g., when an ODQA system is given a user input outside the trained dataset, it throws the error "Sorry, we don't have enough information. Please try a different query!"
    - Contextual errors: True positives that confuse users due to poor explanations or conflicts with user expectations. E.g., a user logging in from a new device gets locked out, and the AI responds: "Your login attempt was flagged for suspicious activity."

    How to use this pattern:
    - Communicate AI errors for various scenarios: Use phrases like "This may not be accurate" or "This seems like…", or surface confidence levels to help calibrate trust. Use pattern 11, Convey model confidence, for low-confidence outputs.
    - Offer error recovery: In case of system failure or contextual errors, provide clear paths to override, retry or escalate the issue. E.g., offer ways forward like "Try a different query," "Let me refine that," or "Contact support."
    - Enable user feedback: Make it easy to report hallucinations or incorrect outputs. See pattern 18, Design to capture user feedback.

    18. Design to capture user feedback

    Real-world alignment needs direct user feedback to improve the model and thus the product. As people interact with AI systems, their behaviours shape and influence the outputs they receive in the future, creating a continuous feedback loop where both the system and user behaviour adapt over time. E.g., ChatGPT uses reaction buttons and comment boxes to collect user feedback.

    How to use this pattern:
    - Account for implicit feedback: Capture user actions such as skips, dismissals, edits, or interaction frequency. These passive signals provide valuable behavioral cues that can tune recommendations or surface patterns of disinterest.
    - Ask for explicit feedback: Collect direct user input through thumbs-up/down, NPS rating widgets or quick surveys after actions. Use this to improve both model behavior and product fit.
    - Communicate how feedback is used: Let users know how their feedback shapes future experiences. This increases trust and encourages ongoing contribution.

    19. Design for model evaluation

    Robust GenAI models require continuous evaluation during training as well as post-deployment. Evaluation ensures the model performs as intended, identifies errors and hallucinations, and aligns with user goals, especially in high-stakes domains.

    How to use this pattern (three key evaluation methods improve ML systems):
    - LLM-based evaluations: A separate language model acts as an automated judge. It can grade responses, explain its reasoning and assign labels like helpful/harmful or correct/incorrect. E.g., Amazon Bedrock uses the LLM-as-a-judge approach to evaluate AI model outputs: a separate trusted LLM, like Claude 3 or Amazon Titan, automatically reviews and rates responses based on helpfulness, accuracy, relevance, and safety. For instance, two AI-generated replies to the same prompt are compared, and the judge model selects the better one. This automation reduces evaluation costs by up to 98% and speeds up model selection without relying on slow, expensive human reviews. A minimal sketch follows this list.
    - Enable code-based evaluations: For structured tasks, use test suites or known outputs to validate model performance, especially for data processing, generation, or retrieval.
    - Capture human evaluation: Integrate real-time UI mechanisms for users to label outputs as helpful, harmful, incorrect, or unclear. See pattern 18, Design to capture user feedback. A hybrid approach of LLM-as-a-judge and human evaluation drastically boosts accuracy to 99%.
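    Here is a minimal sketch of pairwise LLM-as-a-judge evaluation. The judgeModel function is a placeholder for a call to a trusted LLM, and the prompt shape and rubric are illustrative, not any vendor's actual API.

```typescript
// A sketch of pairwise LLM-as-a-judge: two candidate responses to the
// same question are compared and the judge picks the better one.

interface Candidate {
  id: string;
  response: string;
}

// Placeholder: a real implementation would call a trusted judge LLM here.
async function judgeModel(prompt: string): Promise<string> {
  return "A"; // canned verdict so the sketch runs end to end
}

async function pairwiseJudge(
  question: string,
  a: Candidate,
  b: Candidate
): Promise<Candidate> {
  const prompt = [
    "You are an impartial evaluator.",
    `Question: ${question}`,
    `A) ${a.response}`,
    `B) ${b.response}`,
    "Which answer is more helpful, accurate, and safe? Reply A or B.",
  ].join("\n");
  const verdict = await judgeModel(prompt);
  return verdict.trim().startsWith("A") ? a : b;
}

pairwiseJudge(
  "How do I reset my password?",
  { id: "model-1", response: "Go to Settings > Security > Reset password." },
  { id: "model-2", response: "Passwords cannot be reset." }
).then((winner) => console.log(`Preferred: ${winner.id}`));
```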
    20. Design for AI safety guardrails

    Designing for AI guardrails means building practices and principles into GenAI models to minimise harm, misinformation, toxic behaviour and biases. It is a critical consideration to:
    - Protect users and children from harmful language, made-up facts, biases or false information.
    - Build trust and adoption: When users know the system avoids hate speech and misinformation, they feel safer and are more willing to use it often.
    - Maintain ethical compliance: New rules like the EU AI Act demand safe AI design. Teams must meet these standards to stay legal and socially responsible.

    How to use this pattern:
    - Analyse and guide user inputs: If a prompt could lead to unsafe or sensitive content, guide users towards safer interactions. E.g., when the Miko robot comes across profanity, it answers, "I am not allowed to entertain such language."
    - Filter outputs and moderate content: Use real-time moderation to detect and filter potentially harmful AI outputs, blocking or reframing them before they're shown to the user. E.g., show a note like "This response was modified to follow our safety guidelines."
    - Use proactive warnings: Subtly notify users when they approach sensitive or high-stakes information. E.g., "This is informational advice and not a substitute for medical guidance."
    - Create strong user feedback loops: Make it easy for users to report unsafe, biased or hallucinated outputs to directly improve the AI over time through active learning loops. E.g., Instagram provides an in-app option for users to report harm, bias or misinformation.
    - Cross-validate critical information: For high-stakes domains, back up AI-generated outputs with trusted databases to catch hallucinations. Refer to pattern 10, Provide data sources.

    A two-stage sketch of these input and output checks follows.
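    Here is that sketch: screen the prompt on the way in, moderate the output on the way out. The regex blocklists stand in for a real moderation model and are purely illustrative.

```typescript
// A sketch of a two-stage guardrail: input screening, then output
// moderation before anything is shown to the user.

const UNSAFE_INPUT = [/how to make a weapon/i, /self-harm/i];
const UNSAFE_OUTPUT = [/slur-placeholder/i];

function screenInput(prompt: string): { ok: boolean; message?: string } {
  if (UNSAFE_INPUT.some((rule) => rule.test(prompt))) {
    // Redirect toward safer interactions rather than just refusing.
    return { ok: false, message: "I can't help with that, but I'm happy to help with something else." };
  }
  return { ok: true };
}

function moderateOutput(text: string): string {
  if (UNSAFE_OUTPUT.some((rule) => rule.test(text))) {
    // Reframe rather than show the raw output, and say why.
    return "This response was modified to follow our safety guidelines.";
  }
  return text;
}

function guardedRespond(prompt: string, generate: (p: string) => string): string {
  const screened = screenInput(prompt);
  if (!screened.ok) return screened.message!;
  return moderateOutput(generate(prompt));
}

console.log(guardedRespond("hello", (p) => `Echo: ${p}`)); // passes both checks
```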
    21. Communicate data privacy and controls

    This pattern ensures GenAI applications clearly convey how user data is collected, stored, processed and protected. GenAI systems often rely on sensitive, contextual, or behavioral data. Mishandling this data can lead to user distrust, legal risk or unintended misuse. Clear communication around privacy safeguards helps users feel safe, respected and in control. E.g., Slack AI clearly communicates that customer data remains owned and controlled by the customer and is not used to train Slack's or any third-party AI models.

    How to use this pattern:
    - Show transparency: When a GenAI feature accesses user data, display an explanation of what's being accessed and why.
    - Design opt-in and opt-out flows: Allow users to easily toggle data-sharing preferences.
    - Enable data review and deletion: Allow users to view, download or delete their data history, giving them ongoing control.

    Conclusion

    These GenAI UX patterns are a starting point and represent the outcome of months of research, shaped directly and indirectly by insights from notable designers, researchers, and technologists across leading tech companies and the broader AI communities on Medium and LinkedIn. I have done my best to cite and acknowledge contributors along the way, but I'm sure I've missed many. If you see something that should be credited or expanded, please reach out.

    Moreover, these patterns are meant to grow and evolve as we learn more about creating AI that's trustworthy and puts people first. If you're a designer, researcher, or builder working with AI, take these patterns, challenge them, remix them and contribute your own. Also, please let me know in the comments about your suggestions. If you would like to collaborate with me to further refine this, please reach out.

    20+ GenAI UX patterns, examples and implementation tactics was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
    #genai #patterns #examples #implementation #tactics
    20+ GenAI UX patterns, examples and implementation tactics
    A shared language for product teams to build usable, intelligent and safe GenAI experiences beyond just the modelGenerative AI introduces a new way for humans to interact with systems by focusing on intent-based outcome specification. GenAI introduces novel challenges because its outputs are probabilistic, requires understanding of variability, memory, errors, hallucinations and malicious use which brings an essential need to build principles and design patterns as described by IBM.Moreover, any AI product is a layered system where LLM is just one ingredient and memory, orchestration, tool extensions, UX and agentic user-flows builds the real magic!This article is my research and documentation of evolving GenAI design patterns that provide a shared language for product managers, data scientists, and interaction designers to create products that are human-centred, trustworthy and safe. By applying these patterns, we can bridge the gap between user needs, technical capabilities and product development process.Here are 21 GenAI UX patternsGenAI or no GenAIConvert user needs to data needsAugment or automateDefine level of automationProgressive AI adoptionLeverage mental modelsConvey product limitsDisplay chain of thought Leverage multiple outputsProvide data sourcesConvey model confidenceDesign for memory and recallProvide contextual input parametersDesign for coPilot, co-Editing or partial automationDefine user controls for AutomationDesign for user input error statesDesign for AI system error statesDesign to capture user feedbackDesign for model evaluationDesign for AI safety guardrailsCommunicate data privacy and controls1. GenAI or no GenAIEvaluate whether GenAI improves UX or introduces complexity. Often, heuristic-basedsolutions are easier to build and maintain.Scenarios when GenAI is beneficialTasks that are open-ended, creative and augments user.E.g., writing prompts, summarizing notes, drafting replies.Creating or transforming complex outputs.E.g., converting a sketch into website code.Where structured UX fails to capture user intent.Scenarios when GenAI should be avoidedOutcomes that must be precise, auditable or deterministic. E.g., Tax forms or legal contracts.Users expect clear and consistent information.E.g. Open source software documentationHow to use this patternDetermine the friction points in the customer journeyAssess technology feasibility: Determine if AI can address the friction point. Evaluate scale, dataset availability, error risk assessment and economic ROI.Validate user expectations: - Determine if the AI solution erodes user expectations by evaluating whether the system augments human effort or replaces it entirely, as outlined in pattern 3, Augment vs. automate. - Determine if AI solution erodes pattern 6, Mental models2. Convert user needs to data needsThis pattern ensures GenAI development begins with user intent and data model required to achieve that. GenAI systems are only as good as the data they’re trained on. But real users don’t speak in rows and columns, they express goals, frustrations, and behaviours. 
If teams fail to translate user needs into structured, model-ready inputs, the resulting system or product may optimise for the wrong outcomes and thus user churn.How to use this patternCollaborate as a cross-functional team of PMs, Product designers and Data Scientists and align on user problems worth solving.Define user needs by using triangulated research: Qualitative+ Quantitative+ Emergentand synthesising user insights using JTBD framework, Empathy Map to visualise user emotions and perspectives. Value Proposition Canvas to align user gains and pains with featuresDefine data needs and documentation by selecting a suitable data model, perform gap analysis and iteratively refine data model as needed. Once you understand the why, translate it into the what for the model. What features, labels, examples, and contexts will your AI model need to learn this behaviour? Use structured collaboration to figure out.3. Augment vs automateOne of the critical decisions in GenAI apps is whether to fully automate a task or to augment human capability. Use this pattern to to align with user intent and control preferences with the technology.Automation is best for tasks users prefer to delegate especially when they are tedious, time-consuming or unsafe. E.g., Intercom FinAI automatically summarizes long email threads into internal notes, saving time on repetitive, low-value tasks.Augmentation enhances tasks users want to remain involved in by increasing efficiency, increase creativity and control. E.g., Magenta Studio in Abelton support creative controls to manipulate and create new music.How to use this patternTo select the best approach, evaluate user needs and expectations using research synthesis tools like empathy mapand value proposition canvasTest and validate if the approach erodes user experience or enhances it.4. Define level of automationIn AI systems, automation refers to how much control is delegated to the AI vs user. This is a strategic UX pattern to decide degree of automation based upon user pain-point, context scenarios and expectation from the product.Levels of automationNo automationThe AI system provides assistance and suggestions to the user but requires the user to make all the decisions. E.g., Grammarly highlights grammar issues but the user accepts or rejects corrections.Partial automation/ co-pilot/ co-editorThe AI initiates actions or generates content, but the user reviews or intervenes as needed. E.g., GitHub Copilot suggest code that developers can accept, modify, or ignore.Full automationThe AI system performs tasks without user intervention, often based on predefined rules, tools and triggers. Full automation in GenAI are often referred to as Agentic systems. E.g., Ema can autonomously plan and execute multi-step tasks like researching competitors, generating a report and emailing it without user prompts or intervention at each step.How to use this patternEvaluate user pain point to be automated and risk involved: Automating tasks is most effective when the associated risk is low without severe consequences in case of failure. Low-risk tasks such as sending automated reminders, promotional emails, filtering spam emails or processing routine customer queries can be automated with minimal downside while saving time and resources. 
High-risk tasks such as making medical diagnoses, sending business-critical emails, or executing financial trades requires careful oversight due to the potential for significant harm if errors occur.Evaluate and design for particular automation level: Evaluate if user pain point should fall under — No Automation, Partial Automation or Full Automation based upon user expectations and goals.Define user controls for automation5. Progressive GenAI adoptionWhen users first encounter a product built on new technology, they often wonder what the system can and can’t do, how it works and how they should interact with it.This pattern offers multi-dimensional strategy to help user onboard an AI product or feature, mitigate errors, aligns with user readiness to deliver an informed and human-centered UX.How to use this patternThis pattern is a culmination of many other patternsFocus on communicating benefits from the start: Avoid diving into details about the technology and highlight how the AI brings new value.Simplify the onboarding experience Let users experience the system’s value before asking data-sharing preferences, give instant access to basic AI features first. Encourage users to sign up later to unlock advanced AI features or share more details. E.g., Adobe FireFly progressively onboards user with basic to advance AI featuresDefine level of automationand gradually increase autonomy or complexity.Provide explainability and trust by designing for errors.Communicate data privacy and controlsto clearly convey how user data is collected, stored, processed and protected.6. Leverage mental modelsMental models help user predict how a systemwill work and, therefore, influence how they interact with an interface. When a product aligns with a user’s existing mental models, it feels intuitive and easy to adopt. When it clashes, it can cause frustration, confusion, or abandonment​.E.g. Github Copilot builds upon developers’ mental models from traditional code autocomplete, easing the transition to AI-powered code suggestionsE.g. Adobe Photoshop builds upon the familiar approach of extending an image using rectangular controls by integrating its Generative Fill feature, which intelligently fills the newly created space.How to use this patternIdentifying and build upon existing mental models by questioningWhat is the user journey and what is user trying to do?What mental models might already be in place?Does this product break any intuitive patterns of cause and effect?Are you breaking an existing mental model? If yes, clearly explain how and why. Good onboarding, microcopy, and visual cues can help bridge the gap.7. Convey product limitsThis pattern involves clearly conveying what an AI model can and cannot do, including its knowledge boundaries, capabilities and limitations.It is helpful to builds user trust, sets appropriate expectations, prevents misuse, and reduces frustration when the model fails or behaves unexpectedly.How to use this patternExplicitly state model limitations: Show contextual cues for outdated knowledge or lack of real-time data. E.g., Claude states its knowledge cutoff when the question falls outside its knowledge domainProvide fallbacks or escalation options when the model cannot provide a suitable output. 
E.g., Amazon Rufus when asked about something unrelated to shopping, says “it doesn’t have access to factual information and, I can only assists with shopping related questions and requests”Make limitations visible in product marketing, onboarding, tooltips or response disclaimers.8. Display chain of thought In AI systems, chain-of-thoughtprompting technique enhances the model’s ability to solve complex problems by mimicking a more structured, step-by-step thought process like that of a human.CoT display is a UX pattern that improves transparency by revealing how the AI arrived at its conclusions. This fosters user trust, supports interpretability, and opens up space for user feedback especially in high-stakes or ambiguous scenarios.E.g., Perplexity enhances transparency by displaying its processing steps helping users understand the thoughtful process behind the answers.E.g., Khanmigo an AI Tutoring system guides students step-by-step through problems, mimicking human reasoning to enhance understanding and learning.How to use this patternShow status like “researching” and “reasoning to communicate progress, reduce user uncertainty and wait times feel shorter.Use progressive disclosure: Start with a high-level summary, and allow users to expand details as needed.Provide AI tooling transparency: Clearly display external tools and data sources the AI uses to generate recommendations.Show confidence & uncertainty: Indicate AI confidence levels and highlight uncertainties when relevant.9. Leverage multiple outputsGenAI can produce varied responses to the same input due to its probabilistic nature. This pattern exploits variability by presenting multiple outputs side by side. Showing diverse options helps users creatively explore, compare, refine or make better decisions that best aligns with their intent. E.g., Google Gemini provides multiple options to help user explore, refine and make better decisions.How to use this patternExplain the purpose of variation: Help users understand that differences across outputs are intentional and meant to offer choice.Enable edits: Let users rate, select, remix, or edit outputs seamlessly to shape outcomes and provide feedback. E.g., Midjourney helps user adjust prompt and guide your variations and edits using remix10. Provide data sourcesArticulating data sources in a GenAI application is essential for transparency, credibility and user trust. Clearly indicating where the AI derives its knowledge helps users assess the reliability of responses and avoid misinformation.This is especially important in high stakes factual domains like healthcare, finance or legal guidance where decisions must be based on verified data.How to use this patternCite credible sources inline: Display sources as footnotes, tooltips, or collapsible links. E.g., NoteBookLM adds citations to its answers and links each answer directly to the part of user’s uploaded documents.Disclose training data scope clearly: For generative tools, offer a simple explanation of what data the model was trained on and what wasn’t included. E.g., Adobe Firefly discloses that its Generative Fill feature is trained on stock imagery, openly licensed work and public domain content where the copyright has expired.Provide source-level confidence:In cases where multiple sources contribute, visually differentiate higher-confidence or more authoritative sources.11. Convey model confidenceAI-generated outputs are probabilistic and can vary in accuracy. 
Showing confidence scores communicates how certain the model is about its output. This helps users assess reliability and make better-informed decisions.How to use this patternAssess context and decision stakes: Showing model confidence depends on the context and its impact on user decision-making. In high-stakes scenarios like healthcare, finance or legal advice, displaying confidence scores are crucial. However, in low stake scenarios like AI-generated art or storytelling confidence may not add much value and could even introduce unnecessary confusion.Choose the right visualization: If design research shows that displaying model confidence aids decision-making, the next step is to select the right visualization method. Percentages, progress bars or verbal qualifierscan communicate confidence effectively. The apt visualisation method depends on the application’s use-case and user familiarity. E.g., Grammarly uses verbal qualifiers like “likely” to the content it generated along with the userGuide user action during low confidence scenarios: Offer paths forward such as asking clarifying questions or offering alternative options.12. Design for memory and recallMemory and recall is an important concept and design pattern that enables the AI product to store and reuse information from past interactions such as user preferences, feedback, goals or task history to improve continuity and context awareness.Enhances personalization by remembering past choices or preferencesReduces user burden by avoiding repeated input requests especially in multi-step or long-form tasksSupports complex tasks like longitudinal workflows like in project planning, learning journeys by referencing or building on past progress.Memory used to access information can be ephemeralor persistentand may include conversational context, behavioural signals, or explicit inputs.How to use this patternDefine the user context and choose memory typeChoose memory type like ephemeral or persistent or both based upon use case. A shopping assistant might track interactions in real time without needing to persist data for future sessions whereas personal assistants need long-term memory for personalization.Use memory intelligently in user interactionsBuild base prompts for LLM to recall and communicate information contextually.Communicate transparency and provide controlsClearly communicate what’s being saved and let users view, edit or delete stored memory. Make “delete memories” an accessible action. E.g. ChatGPT offers extensive controls across it’s platform to view, update, or delete memories anytime.13. Provide contextual input parametersContextual Input parameters enhance the user experience by streamlining user interactions and gets to user goal faster. By leveraging user-specific data, user preferences or past interactions or even data from other users who have similar preferences, GenAI system can tailor inputs and functionalities to better meet user intent and decision making.How to use this patternLeverage prior interactions: Pre-fill inputs based on what the user has previously entered. Refer pattern 12, Memory and recall.Use auto complete or smart defaults: As users type, offer intelligent, real-time suggestions derived from personal and global usage patterns. E.g., Perplexity offers smart next query suggestions based on your current query thread.Suggest interactive UI widgets: Based upon system prediction, provide tailored input widgets like toasts, sliders, checkboxes to enhance user input. 
E.g., ElevenLabs allows users to fine-tune voice generation settings by surfacing presets or defaults.14. Design for co-pilot / co-editing / partial automationCo-pilot is an augmentation pattern where AI acts as a collaborative assistant, offering contextual and data-driven insights while the user remains in control. This design pattern is essential in domains like strategy, ideating, writing, designing or coding where outcomes are subjective, users have unique preferences or creative input from the user is critical.Co-pilot speed up workflows, enhance creativity and reduce cognitive load but the human retains authorship and final decision-making.How to use this patternEmbed inline assistance: Place AI suggestions contextually so users can easily accept, reject or modify them. E.g., Notion AI helps you draft, summarise and edit content while you control the final version.user intent and creative direction: Let users guide the AI with input like goals, tone, or examples, maintaining authorship and creative direction. E.g., Jasper AI allows users to set brand voice and tone guidelines, helping structure AI output to better match the user’s intent.15. Design user controls for automationBuild UI-level mechanisms that let users manage or override automation based upon user goals, context scenarios or system failure states.No system can anticipate all user contexts. Controls give users agency and keep trust intact even when the AI gets it wrong.How to use this patternUse progressive disclosure: Start with minimal automation and allow users to opt into more complex or autonomous features over time. E.g., Canva Magic Studio starts with simple AI suggestions like text or image generation then gradually reveals advanced tools like Magic Write, AI video scenes and brand voice customisation.Give users automation controls: UI controls like toggles, sliders, or rule-based settings to let users choose when and how automation can be controlled. E.g., Gmail lets users disable Smart Compose.Design for automation error recovery: Give users correction when AI fails. Add manual override, undo, or escalate options to human support. E.g., GitHub Copilot suggests code inline, but developers can easily reject, modify or undo suggestions when output is off.16. Design for user input error statesGenAI systems often rely on interpreting human input. When users provide ambiguous, incomplete or erroneous information, the AI may misunderstand their intent or produce low-quality outputs.Input errors often reflect a mismatch between user expectations and system understanding. Addressing these gracefully is essential to maintain trust and ensure smooth interaction.How to use this patternHandle typos with grace: Use spell-checking or fuzzy matching to auto-correct common input errors when confidence is high, and subtly surface corrections.Ask clarifying questions: When input is too vague or has multiple interpretations, prompt the user to provide missing context. In Conversation Design, these types of errors occur when the intent is defined but the entity is not clear. Know more about entity and intent. E.g., ChatGPT when given low-context prompts like “What’s the capital?”, it asks follow-up questions rather than guessing.Support quick correction: Make it easy for users to edit or override your interpretation. E.g., ChatGPT displays an edit button beside submitted prompts, enabling users to revise their input17. 
Design for AI system error statesGenAI outputs are inherently probabilistic and subject to errors ranging from hallucinations and bias to contextual misalignments.Unlike traditional systems, GenAI error states are hard to predict. Designing for these states requires transparency, recovery mechanisms and user agency. A well-designed error state can help users understand AI system boundaries and regain control.A Confusion matrix helps analyse AI system errors and provides insight into how well the model is performing by showing the counts of - True positives- False positives- True negatives- False negativesScenarios of AI errors and failure statesSystem failureFalse positives or false negatives occur due to poor data, biases or model hallucinations. E.g., Citibank financial fraud system displays a message “Unusual transaction. Your card is blocked. If it was you, please verify your identity”System limitation errorsTrue negatives occur due to untrained use cases or gaps in knowledge. E.g., when an ODQA system is given a user input outside the trained dataset, throws the following error “Sorry, we don’t have enough information. Please try a different query!”Contextual errorsTrue positives that confuse users due to poor explanations or conflicts with user expectations comes under contextual errors. E.g., when user logs in from a new device, gets locked out. AI responds: “Your login attempt was flagged for suspicious activity”How to use this patternCommunicate AI errors for various scenarios: Use phrases like “This may not be accurate”, “This seems like…” or surface confidence levels to help calibrate trust.Use pattern convey model confidence for low confidence outputs.Offer error recovery: Incase of System failure or Contextual errors, provide clear paths to override, retry or escalate the issue. E.g., Use way forwards like “Try a different query,” or “Let me refine that.” or “Contact Support”.Enable user feedback: Make it easy to report hallucinations or incorrect outputs. about pattern 19. Design to capture user feedback.18. Design to capture user feedbackReal-world alignment needs direct user feedback to improve the model and thus the product. As people interact with AI systems, their behaviours shape and influence the outputs they receive in the future. Thus, creating a continuous feedback loop where both the system and user behaviour adapt over time. E.g., ChatGPT uses Reaction buttons and Comment boxes to collect user feedback.How to use this patternAccount for implicit feedback: Capture user actions such as skips, dismissals, edits, or interaction frequency. These passive signals provide valuable behavioral cues that can tune recommendations or surface patterns of disinterest.Ask for explicit feedback: Collect direct user input through thumbs-up/down, NPS rating widgets or quick surveys after actions. Use this to improve both model behavior and product fit.Communicate how feedback is used: Let users know how their feedback shapes future experiences. This increases trust and encourages ongoing contribution.19. Design for model evaluationRobust GenAI models require continuous evaluation during training as well as post-deployment. Evaluation ensures the model performs as intended, identify errors and hallucinations and aligns with user goals especially in high-stakes domains.How to use this patternThere are three key evaluation methods to improve ML systems.LLM based evaluationsA separate language model acts as an automated judge. 
18. Design to capture user feedback
Real-world alignment needs direct user feedback to improve the model and thus the product. As people interact with AI systems, their behaviours shape and influence the outputs they receive in the future, creating a continuous feedback loop where both the system and user behaviour adapt over time. E.g., ChatGPT uses reaction buttons and comment boxes to collect user feedback.
How to use this pattern
- Account for implicit feedback: Capture user actions such as skips, dismissals, edits or interaction frequency. These passive signals provide valuable behavioural cues that can tune recommendations or surface patterns of disinterest.
- Ask for explicit feedback: Collect direct user input through thumbs-up/down, NPS rating widgets or quick surveys after actions. Use this to improve both model behaviour and product fit.
- Communicate how feedback is used: Let users know how their feedback shapes future experiences. This increases trust and encourages ongoing contribution.

19. Design for model evaluation
Robust GenAI models require continuous evaluation during training as well as post-deployment. Evaluation ensures the model performs as intended, identifies errors and hallucinations, and aligns with user goals, especially in high-stakes domains.
How to use this pattern
There are three key evaluation methods to improve ML systems.
- LLM-based evaluations (LLM-as-a-judge): A separate language model acts as an automated judge. It can grade responses, explain its reasoning and assign labels like helpful/harmful or correct/incorrect. E.g., Amazon Bedrock uses the LLM-as-a-judge approach to evaluate AI model outputs: a separate trusted LLM, like Claude 3 or Amazon Titan, automatically reviews and rates responses based on helpfulness, accuracy, relevance and safety. For instance, two AI-generated replies to the same prompt are compared, and the judge model selects the better one. This automation reduces evaluation costs by up to 98% and speeds up model selection without relying on slow, expensive human reviews. A minimal sketch of this flow follows below.
- Enable code-based evaluations: For structured tasks, use test suites or known outputs to validate model performance, especially for data processing, generation or retrieval.
- Capture human evaluation: Integrate real-time UI mechanisms for users to label outputs as helpful, harmful, incorrect or unclear. Read more in pattern 18, Design to capture user feedback.
A hybrid approach of LLM-as-a-judge and human evaluation drastically boosts accuracy to 99%.
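To make the pairwise LLM-as-a-judge flow concrete, here is a minimal sketch. The prompt wording is illustrative and call_judge_model is a hypothetical placeholder for whatever judge-model client a team actually uses; this is not Amazon Bedrock's API.

```python
import json

JUDGE_PROMPT = """You are an impartial evaluator. Compare two AI replies to the
same prompt on helpfulness, accuracy, relevance and safety. Answer in JSON as
{{"winner": "A" | "B", "reasoning": "<one sentence>"}}.

Prompt: {prompt}
Reply A: {reply_a}
Reply B: {reply_b}"""

def call_judge_model(prompt: str) -> str:
    """Placeholder for a call to a separate, trusted judge LLM.
    Swap in whatever model client your stack provides."""
    raise NotImplementedError

def judge_pair(prompt: str, reply_a: str, reply_b: str) -> dict:
    """Ask the judge model which of two candidate replies is better."""
    raw = call_judge_model(
        JUDGE_PROMPT.format(prompt=prompt, reply_a=reply_a, reply_b=reply_b)
    )
    return json.loads(raw)  # e.g. {"winner": "A", "reasoning": "..."}
```

Structuring the verdict as JSON keeps the judge's output machine-readable, so wins can be aggregated across a test set without human parsing.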
20. Design for AI guardrails
Designing for AI guardrails means building practices and principles into GenAI models to minimise harm, misinformation, toxic behaviour and biases. It is a critical consideration to:
- Protect users and children from harmful language, made-up facts, biases or false information.
- Build trust and adoption: When users know the system avoids hate speech and misinformation, they feel safer and are more willing to use it often.
- Ensure ethical compliance: New rules like the EU AI Act demand safe AI design. Teams must meet these standards to stay legal and socially responsible.
How to use this pattern
- Analyse and guide user inputs: If a prompt could lead to unsafe or sensitive content, guide users towards safer interactions. E.g., when the Miko robot comes across profanity, it answers: "I am not allowed to entertain such language."
- Filter outputs and moderate content: Use real-time moderation to detect and filter potentially harmful AI outputs, blocking or reframing them before they're shown to the user. E.g., show a note like: "This response was modified to follow our safety guidelines."
- Use proactive warnings: Subtly notify users when they approach sensitive or high-stakes information. E.g., "This is informational advice and not a substitute for medical guidance."
- Create strong user feedback loops: Make it easy for users to report unsafe, biased or hallucinated outputs to directly improve the AI over time through active learning loops. E.g., Instagram provides an in-app option for users to report harm, bias or misinformation.
- Cross-validate critical information: For high-stakes domains (like healthcare, law or finance), back up AI-generated outputs with trusted databases to catch hallucinations. Refer to pattern 10, Provide data sources.

21. Communicate data privacy and controls
This pattern ensures GenAI applications clearly convey how user data is collected, stored, processed and protected. GenAI systems often rely on sensitive, contextual or behavioural data. Mishandling this data can lead to user distrust, legal risk or unintended misuse. Clear communication around privacy safeguards helps users feel safe, respected and in control. E.g., Slack AI clearly communicates that customer data remains owned and controlled by the customer and is not used to train Slack's or any third-party AI models.
How to use this pattern
- Show transparency: When a GenAI feature accesses user data, display an explanation of what's being accessed and why.
- Design opt-in and opt-out flows: Allow users to easily toggle data-sharing preferences.
- Enable data review and deletion: Allow users to view, download or delete their data history, giving them ongoing control.

Conclusion
These GenAI UX patterns are a starting point and represent the outcome of months of research, shaped directly and indirectly by insights from notable designers, researchers and technologists across leading tech companies and the broader AI communities on Medium and LinkedIn. I have done my best to cite and acknowledge contributors along the way, but I'm sure I've missed many. If you see something that should be credited or expanded, please reach out. Moreover, these patterns are meant to grow and evolve as we learn more about creating AI that's trustworthy and puts people first. If you're a designer, researcher or builder working with AI, take these patterns, challenge them, remix them and contribute your own. Also, please let me know in the comments about your suggestions. If you would like to collaborate with me to further refine this, please reach out.

20+ GenAI UX patterns, examples and implementation tactics was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
    UXDESIGN.CC
    20+ GenAI UX patterns, examples and implementation tactics
  • Labor dispute erupts over AI-voiced Darth Vader in Fortnite

    Pray I don't alter it any further


    SAG-AFTRA claims Epic didn't negotiate video game AI voice replacement terms.

Benj Edwards – May 19, 2025 4:50 pm

David Prowse as Darth Vader and Carrie Fisher as Princess Leia filming the original Star Wars. Credit: Sunset Boulevard/Corbis via Getty Images



    On Monday, SAG-AFTRA filed an unfair labor practice charge with the National Labor Relations Board against Epic subsidiary Llama Productions for implementing an AI-generated Darth Vader voice in Fortnite on Friday without first notifying or bargaining with the union, as their contract requires.
    Llama Productions is the official signatory to SAG-AFTRA's collective bargaining agreement for Fortnite, making it legally responsible for adhering to the union's terms regarding the employment of voice actors and other performers.
    "We celebrate the right of our members and their estates to control the use of their digital replicas and welcome the use of new technologies," SAG-AFTRA stated in a news release. "However, we must protect our right to bargain terms and conditions around uses of voice that replace the work of our members, including those who previously did the work of matching Darth Vader's iconic rhythm and tone in video games."

An official promo image for Darth Vader in Fortnite. Credit: Disney / Starwars.com

    The union's complaint comes just days after the feature sparked a separate controversy when players discovered that they could manipulate the AI into using profanity and inappropriate language until Epic quickly implemented a fix. The AI-controlled in-game character uses Google's Gemini 2.0 to generate dialogue and ElevenLabs' Flash v2.5 AI model trained on the voice of the late James Earl Jones to speak real-time responses to player questions.

For voice actors who previously portrayed Darth Vader in video games, the Fortnite feature starkly illustrates how AI voice synthesis could reshape their profession. While James Earl Jones created the iconic voice for films, at least 54 voice actors have performed as Vader in various games and media over the years when Jones wasn't available—work that could vanish if AI replicas become the industry standard.
    The union strikes back
SAG-AFTRA's labor complaint doesn't focus on the AI feature's technical problems or on permission from the Jones estate, which explicitly authorized the use of a synthesized version of his voice for the character in Fortnite. The late actor, who died in 2024, had signed over his Darth Vader voice rights before his death.
    Instead, the union's grievance centers on labor rights and collective bargaining. In the NLRB filing, SAG-AFTRA alleges that Llama Productions "failed and refused to bargain in good faith with the union by making unilateral changes to terms and conditions of employment, without providing notice to the union or the opportunity to bargain, by utilizing AI-generated voices to replace bargaining unit work on the Interactive Program Fortnite."
    The action comes amid SAG-AFTRA's ongoing interactive media strike, which began in July 2024 after negotiations with video game producers stalled primarily over AI protections. The strike continues, with more than 100 games signing interim agreements, while others, including those from major publishers like Epic, remain in dispute.

    Benj Edwards
    Senior AI Reporter


    Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

    ARSTECHNICA.COM
    Labor dispute erupts over AI-voiced Darth Vader in Fortnite
  • Epic Games debuts voice-interactive Darth Vader in Fortnite — and it's already being tricked into swearing

By Mitch Picasso, Fox News | Published May 17, 2025 8:18am EDT | Updated May 17, 2025 4:32pm EDT

Epic Games released an AI-powered Darth Vader character that players can speak to via microphone. Fortnite and Disney teamed up to unveil an AI character that responds to a player's voice in real time. According to Fortnite, AI Vader can react to players by answering questions and providing strategies.

The tech uses Google's Gemini 2.0 Flash model to generate Vader's responses, according to Epic Games. The voice of James Earl Jones, the late actor who voiced Vader, was generated using ElevenLabs' Flash v2.5 model.

Fortnite and Disney have brought an AI-powered Darth Vader into the game that can interact with players on May 16, 2025. (Photo by OLIVIER CHASSIGNOLE/AFP via Getty Images)

The family of Jones said in a statement to Epic, "James Earl felt that the voice of Darth Vader was inseparable from the story of Star Wars, and he always wanted fans of all ages to continue to experience it. We hope that this collaboration with Fortnite will allow both longtime fans of Darth Vader and newer generations to share in the enjoyment of this iconic character."

The game's site states that audio and transcriptions from players are not stored and are used solely to prompt Darth Vader's responses. Epic also says it does not use players' interactions to train AI models.

Fortnite players are finding ways to trick the new AI character into cussing. (Photo Illustration by Thomas Fuller/SOPA Images/LightRocket via Getty Images)

Players quickly found ways to get Vader to respond with profanity by having him repeat offensive language they used. Australian streamer and gamer Kathleen Belsten, who goes by the handle "Loserfruit," shared a video online in which she prompted the AI Darth Vader to say "f---" during in-game dialogue.

Epic Games uses Gemini and ElevenLabs tech to introduce an AI-powered Darth Vader. (Photo illustration by Jakub Porzycki/NurPhoto via Getty Images)

Another X user, with the handle @GasSpares, was able to get the AI character to repeat a slur. Fortnite offers a feature that allows players to report such events, according to the site's FAQ. The company said it immediately issued a "hot fix" to prevent players from making the character use profanity or slurs.

The site also states that "Players under 13 or their country's age of digital consent, whichever is higher, will need permission to talk with Darth Vader. These players will see an in-game prompt to get parental permission."

Mitch Picasso is a Fox News digital production assistant. You can reach him at @mitch_picasso on Twitter.
    WWW.FOXNEWS.COM
    Epic Games debuts voice-interactive Darth Vader in Fortnite — and it's already being tricked into swearing