Curated stories on user experience, usability, and product design. By @fabriciot and @caioab.
Recent Updates
-
Customer Support is often a way to measure the value of design (uxdesign.cc)
Why reducing customer complaints is something businesses pay attention to. Continue reading on UX Collective.
-
The Digital Lifecycle in UX: How Your Designs Contribute to Waste (uxdesign.cc)
Today, solutions make carbon emissions checks of digital products easily available, so why aren't we dealing with the problem of waste? Continue reading on UX Collective.
-
Why your keyboard layout is stuck in the 1800s (uxdesign.cc)
The enduring design legacy of QWERTY in modern UX. Continue reading on UX Collective.
-
Sidebar is back, Duolingo's strategy, AI/UX frameworks, accessible outcomes (uxdesign.cc)
Weekly curated resources for designers, thinkers, and makers.

The news that Sidebar.io was taking a break felt a bit like a heartbreak. Sidebar has been one of my favorite sources for keeping up with design, with content that would make me a better, smarter, more informed designer. No noise, no endless scrolling, just the good stuff. 5 links a day. That's it. Well, Sidebar is back from its break.

Become an Expert with the Online Master in UI/UX Design at LABASAD [Sponsored]: Learn advanced UI/UX design with a 100% online, practical methodology. In 12 months, master wireframing, user research, interface design, prototyping, design systems, typography, and colour theory using industry-standard tools like Figma. Limited places; starts May 2025!

Editor picks
- The fastest gun in UX: Why your team is telling the wrong story. By Pavel Samsonov
- Duolingo's gamification strategy: The good, the bad, and the ugly. By Tiina Golub
- Why TikTok users are flocking to Xiaohongshu: Part UX, part politics. By Daley Wilhelm
- Meta and Spotify's AI takeover: Is this the end of human-created content? By Angele Lenglemetz
- Human flourishing in the Age of AI: Challenges, strategies, and opportunities. By Josh LaMar

The UX Collective is an independent design publication that elevates unheard design voices and helps designers think more critically about their work.

Subway Stories: an interactive visualization

Make me think
- A journey of craft built on trust, confidence, and focus: "Experiencing imposter syndrome has taught me to accept feedback, both positive and constructive, with openness and to lean on my teammates for perspective and support. It's uncomfortable, but it's also a reminder that the work you're doing matters."
- Consistency means nothing: "Modern software is a paradox. On one hand it's more conformist than ever. On the other hand, most are hilariously inconsistent in execution. The majority of software from large orgs leans towards disorder. This is often due to how large orgs operate."
- Stop trying to schedule a call with me: "Chances are, I signed up to see if your tool can do one specific thing. If it doesn't, I've already mentally moved on and forgotten about it. So, when you email me, I'm either actively evaluating whether to buy your product, or I have no idea why you're reaching out."

Little gems this week
- How first impressions drive AI adoption. By Tetiana Sydorenko
- Designers: always read the comments. By Euphrates Dahout
- The future of design systems is decentralized. By Oscar Gonzalez, WAS

Tools and resources
- Human-centered AI frameworks: Adopting a structured approach to AI initiatives. By Rob Chappell
- Design systems and accessibility: What was top of mind for the international community in 2024? By Matheus Cervo
- Accessible outcomes: To create more accessible outcomes, we need better design tools. By Nik Jeleniauskas

Support the newsletter
If you find our content helpful, here's how you can support us:
- Check out this week's sponsor to support their work too
- Forward this email to a friend and invite them to subscribe
- Sponsor an edition

"Sidebar is back, Duolingo's strategy, AI/UX frameworks, accessible outcomes" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
The UX of drafting in space (uxdesign.cc)
How I escaped the pull of the page UI. Image by the author.

The best writing tool I discovered last year was to stop drafting on a page and use a canvas-based user interface (UI) instead. In this article, I'll share what motivated this change and reflect on the strategies that help me make the most of it.

As a professor and researcher in Human-Computer Interaction, my focus is on scientific writing. Still, I expect these takeaways to apply more broadly to any writing involving iteration and engagement with other sources.

A year of drafting in space
My Miro recap below tells me I've created about 1,500 digital sticky notes. Why so many? Well, I started 2024 off by writing an essay on the fragmentation of writing with AI tools. When a reviewer later asked me to share my process, I ironically noted that I did not myself involve AI tools in drafting it but nevertheless embraced fragmentation via sticky notes on a canvas. Having written off the page in that project, I have not returned to substantially drafting with a page-based UI since.

My Miro year in review, 2024.

My Miro board for the essay even made it into the appendix of the paper, as shown below. I have since received requests and comments on this figure, which have inspired me to write this short reflection.

The canvas I used to develop ideas and arguments and a review of related work for an essay.

Non-linear drafting on a canvas
Here are the key lessons I've learned and the reasons I now prefer a canvas-based approach to drafting.

Noteworthy beginnings
Sticky notes on the canvas look and feel tangibly like objects meant to be worked with. For me, they offer much better affordances for an early draft than bullet points on a page.
They reify unordered thoughts, allowing me to directly manipulate them: I can move them aside, revisit some later, rearrange them, or connect them. I beat the blank page with a canvas, one note at a time.

Zoom-to-frame
As my draft evolves and grows, I can zoom in and focus on a specific aspect, blocking out the rest. Complementarily, I can zoom out to recover the bigger picture. This flexibility allows me to frame and reframe my writing sessions by dynamically adjusting the viewport.

Iterating in space
On a canvas, iteration is seamless. I can duplicate text objects to explore ideas and variations, keeping the original visible. Afterwards, I decide whether to delete it or retain it for comparison or future use. In contrast to this immediacy, copying a text file to "v2" feels disconnected from drafting, and the content drops out of sight. The spatial arrangement of text iterations adds meaning, such as laying out paragraphs vertically to show progression while exploring alternatives horizontally.

Mapping out meaning
More broadly, a canvas offers the freedom to map meaning spatially. This area is related work, while that area gathers study findings. I can cross, connect, and redefine these regions as I go. In contrast, files and ordered page sections twist my drafting hand into unwanted linearisation and prematurely defined boundaries.

Infinite margins
A canvas offers limitless space to engage with related work. I don't need to cram thoughts into page margins or comment bubbles in a PDF viewer. Instead, I can give a reference space by adding its abstract, text snippets, figures, and so on, via copy-paste and screenshots. Crucially, I can add my own thoughts throughout.
By deconstructing and reorganising the presentation of related work, I construct an overview ready to hand, rather than scattering thoughts in margins across document files.

Constructing an overview of related work with screenshots, snippets, and annotations.

Multimodal references
On a canvas, the representation of references can use images, text, even videos, more effectively than in a page view. I don't have to think about page layouts when adding figures from related work for context. I also don't have to worry about images moving on the page as I add text. Related material and my text are not meant to be linearised at the drafting stage. Let me overlap things, draw onto them, add arrows and text on top of and around images, and so on. The canvas supports this.

Rewarding divergence
As a side effect of the above, I find the result of interleaving drafting with research into external sources much more rewarding on a canvas than in a text file. Perhaps this is because the multimodal representation better reflects the invested effort. Beyond its analytic nature, it becomes a constructive process in itself. At the end of the session, I see: today, I've built this overview of related work. Look at that. Amazing!

Mise en place
This ties together the ideas of mapping, focus, and integrating related work: I rarely write just to produce text but rather interleave reading and writing. On a canvas, I can draft (and think) in context, and I can prepare this context material directly on my writing surface ahead of time. Besides related work and results from my own data analyses, I particularly like to prepare and lay out argumentation structures this way.

Gathering and connecting results and argumentation structure, then drafting next to them.

To conclude, I've (re)discovered the benefits of a mix of Human-Computer Interaction concepts for my personal writing workflow, in particular at the drafting stage.
These include zoom-and-pan UIs, focus and context, reification, direct manipulation, and various aspects of interaction design for sensemaking activities and building personal information spaces.

Breaking silos
This is not about Miro specifically, or even just about writing. In a broader view, my use of Miro is a workaround for dealing with application silos. Applications today are not truly interconnected. Navigating between them forces me to interrupt drafting, track where information is stored, and shift my focus between apps to gather the required resources. While drafting a paper, instead of tracking information across apps, I want to focus on argumentation, related work, connections, data, meaning, and perspectives. Dropping everything into one canvas app via copy-paste is far from ideal, but it bootstraps a seamless information space for writing and referencing, and ultimately thinking, that I can use today.

Resources
Some of the canvas screenshots above are taken from the Miro board for this paper. I also wrote about it here on Medium. Finally, here are some suggestions for reading more about the mentioned HCI concepts: direct manipulation, zoomable user interfaces, reification, sensemaking, personal information spaces.
-
Design in the age of AI: the death of PMs & engineers (uxdesign.cc)
AI and economic pressures lead to an inevitable future, one where designers do more while PMs and engineers disappear. Continue reading on UX Collective.
-
The future of social media: 3 predictions for 2025 & beyond (uxdesign.cc)
From hyper-personalized algorithms to smart data investing, a few predictions for the next generation of social media networks. Continue reading on UX Collective.
-
Stop building products for your imaginary users (uxdesign.cc)
Dear Mary-the-Marketer, you don't exist. Signed, every PM. Continue reading on UX Collective.
-
Designers: always read the comments (uxdesign.cc)
How harnessing user feedback made Material 3 the world's most popular design kit.

It's an internet truism that if you want to maintain your sanity, you should never read the comments. More than 1 in 4 Americans have had their day ruined by a mean online comment, and many influencers and brands are shutting off comments entirely. However, research suggests that's not the best approach. A study from the Harvard Business Review found that turning off comments can actually make people see influencers as less likable and sincere, even more so than if they just left the negative comments up.

As a UX Designer and the manager of the Material 3 Design Kit, a Figma library with 3.5 million users and counting, I've had to learn the hard way not to take negative feedback personally. Today, I've fully embraced comments. I believe that reading and considering this feedback is an essential part of making a valuable design resource. Whether positive or negative, user feedback helps me and the Material team get people what they need. Read on for the most helpful comments I've received on our Figma library, and what I learned along the way.

"The switch component is a mess."
This comment alerted me to a major bug with Material's switch component, a small but surprisingly complicated UI element that had 40 possible variants. We dug into the switch and found out that the component was not set up properly, with issues from inconsistent icon logic to incorrect actions and state layers. Once we got the green light, we restructured the switch from scratch, and then realized that this issue was symptomatic of the kit as a whole. I used this comment to help make a case for why my team should take ownership of the M3 Design Kit. We needed a cohesive approach to structuring all the other components in the kit.
We also created a QA process to catch mistakes in component construction and functionality before they're published.

"Please add the keyboard & numpad."
When working on a complex system like Material Design, it's easy to get caught up in our own bubble. This feedback, that keyboard and number pad elements were missing from the kit, let us know that there were gaps in our offering. Since things like keyboards are part of the Android System UI, we didn't think about them as part of Material Design. But this comment helpfully reminded us that Figma users don't care about the nitty-gritty distinctions between internal Google teams. They're just looking for the UI elements they need to make their product! By adding these utility elements to the kit, we could make it more helpful to our users. We've since added other utilities such as avatars, device frames, example screens, and more.

"I don't get updates from Google, so what's the point?"
This comment raised an extremely valid provocation: how do you deliver a design system outside of your own Figma organization? Yes, Figma has community files, but they can only be used by making a copy of them, and those copies don't receive updates when the original file is changed. If I added a new feature to the community file, users had to make a new copy of the file and manually port the update over into their existing designs. It just wasn't working. Material Design is ever-evolving, and we want all makers to be able to use the latest and greatest that Material has to offer. I took this comment to heart, deeply considered the available options, and filed a feature request with Figma. As a result, Figma developed the UI Kits feature, which was unveiled in June of 2024. The Material 3 Kit, Apple's iOS UI kit, and Figma's Simple Design System are all now default kits in new Figma files, and users can inherit updates from the authors.

"The file is corrupted! Everything is blank! Where are the components?"
After the UI Kit feature was rolled out, the prompt on our page changed from "Get a copy" to "Open in Figma". An influx of comments like the above let us know that something was not right with the new experience of getting to the kit. Clicking "Open in Figma" opened a new untitled file with the kit enabled, but users couldn't see that it was enabled, because the file was blank and the assets panel wasn't immediately visible. We worked with Figma to improve the UI Kit flow and wording, and made it so that when you create new files with the UI kit, the assets panel is revealed. I could tell this effort helped when the incoming comments were about other things.

Takeaways
With all this experience in mind, here are tips for anyone making a Figma Community File or UI Kit:
- Listen to your customers. Whether they are helpfully pointing out bugs or angrily remarking that something sucks, it's feedback! Take ego out of it and use the feedback as a tool for improvement.
- Test everything. When you make a new component or add properties and variants, test to make sure they are working properly. As a user, try to pull in an instance of your component and apply it to every option. Is the instance operating as intended when you select different properties? Consistency and quality will always lead to a better experience for your customers, and it's worth investing in.
- Design systems are not "set it and forget it". They take careful tending, like a garden, as things evolve and the seasons change. Don't expect to be able to set up a file and then neglect it.

Building a design system for millions is a constant conversation, one that requires listening, iterating, and embracing the unexpected. But I've learned that when we design with the community, the results can be truly transformative for everyone. A huge thank you to every single person who has left a comment, reported a bug, or requested a feature.
Your feedback fuels the evolution of Material Design.

Euphrates Dahout, UX Designer, Google

Have a question, suggestion, or wild design idea? Let us know: Material 3 Design Kit · @GoogleDesign on Instagram · @GoogleDesign on X

Images by Arthur Ribeiro Vergani.
-
In the future, humans will exclusively create content for AI (uxdesign.cc)
Intelligent machines were meant to enhance human creativity, but in time, human creativity will exist only to sustain them. Continue reading on UX Collective.
-
Design systems and accessibility: a 2024 retrospective (uxdesign.cc)
What were the discussions and suggestions proposed by the international community in 2024?

Image: Cottonbro Studios.

Many designers and developers care about accessibility in digital products, as pointed out by Forrester in the Config 2024 analysis. The desire is there, but it's not always possible to implement accessibility systematically and at scale in daily practice. Why? We lack time and study, and there's an overwhelming amount of content to consume.

In 2024, more than 40 resources on accessibility in Design Systems were published in English, totaling over 12 hours of videos and podcasts. This volume is no coincidence: creating accessible and scalable interfaces is a challenge that goes beyond the conventional digital accessibility debate. Deep questions are being raised, such as: What does scalable accessibility in digital products truly mean? What is Shift Left, and how can an accessible DS evolve? Is it possible to have control within a team and document all accessibility tests in a DS?

Who can absorb so much content without getting lost along the way? The truth is that, in trying to keep up with so much information, we end up consuming only a fraction and fail to get a panoramic view of the key debates. With that in mind, I decided to create this retrospective, analyzing all these resources to identify the main insights and discussions that shaped the past year.

The idea is simple: to offer an overview that helps us understand what's currently on the agenda when it comes to accessibility in Design Systems. What progress has been made? Which questions remain unanswered? Where can we dive deeper into specific topics?

This text is an invitation to that conversation: less about what we already know and more about what we are still discovering as a community.
Feel free to contact me via my LinkedIn for any type of feedback.

Sections
- Methodology
- Why Design Systems for Accessibility?
- What Is Accessibility in Design Systems?
- How to Evolve an Accessible Design System?
- How to Test an Accessible Design System?
- How to Organize Accessibility Tests?
- How to Document Tests?

1. Methodology
Before diving into the discussions, let's take a look at the methodology behind this study to understand what was said in the international community about accessibility and Design Systems in 2024. If you'd rather skip this part, feel free to jump straight to the next section to see the research results. But if you're interested in how the data was gathered, take your time to go through this step-by-step breakdown.

Steps of the 2024 retrospective systematic review.

All data collection was conducted through a systematic review of educational materials published on two intersecting topics: Design Systems and accessibility. Before starting the material collection, every systematic review must establish an objective. The goal of this retrospective was to understand the discussions surrounding the challenges of building accessible digital products, particularly through Design Systems. This is a specific topic within the UX design and development field that generates a large volume of content every year, allowing retrospectives to identify the key debates that took place over an extended period, such as a year.

Any retrospective review needs to be transparent about how its data collection and analysis were conducted, to avoid privileging certain documents over others. Below are the basic inclusion and exclusion criteria for the materials collected in this study:
- The content had to be published in 2024.
- The material needed to be in English, as the goal was to provide an overview of the international scenario as comprehensively as possible.
- The material had to have an educational purpose, to capture the challenges shared with the community.
Educational materials were understood as articles (informal and non-academic), podcasts, or videos created or published by industry experts addressing the topic. Therefore, this article does not review other types of materials, such as accessibility documentation within Design Systems published in 2024. If you're interested in this type of documentation, check out the example of the Carbon Design System. Although essential as reference points, such documentation is often very specific to an organization's guidelines and lacks the broader professional debate found in educational publications.
- Materials had to be free of access restrictions. Due to copyright constraints, paid educational materials, such as course content or subscription-only articles, were not analyzed or cited. Furthermore, it's worth noting that such paid content was not found in abundance during the searches conducted.

Based on these criteria, the search terms used were "Design System" and "Accessibility" in major search tools similar to Google. An exploratory analysis was also conducted with additional terms, such as "Design System" and "Blindness" or "Design System" and "Neurodivergency". However, no significant number of relevant materials from 2024 was found using these terms. As a result, it was deemed unnecessary to search for terms more specific than "Accessibility" to conduct this retrospective discussion.

Finally, for full transparency, you can find all the materials collected in the references section at the end of the article. The most notable ones are cited with links throughout the text. Some materials, while not explicitly mentioned in the article, were analyzed and contributed to the broader arguments found across multiple resources. Is it clear how this review was carefully conducted to ensure we didn't miss anything on this topic? If so, let's move on to the synthesis of the debate.

2. Why Design Systems for Accessibility?
Before diving into the details, it's worth taking a moment to reflect: why are Design Systems increasingly seen as an effective path to improving digital product accessibility? The answer begins with three words: speed, consistency, and technical accuracy.

Quote from Cíntia Romero about the use of accessible components. Source: Supernova, 2024.

In a scenario where digital interfaces are becoming increasingly complex, a well-structured Design System is like a compass that keeps accessibility on the right track. Tyler Hawkins, software engineer at Webflow, summarizes this idea well: each component needs to meet a series of technical criteria to be accessible to different audiences. Without a solid and centralized foundation, this can turn into chaos, especially in companies that manage multiple products simultaneously.

This scalability is also discussed in the context of more customizable platforms. Several materials argue that platforms built with Design Tokens help create specific themes for communities with a given type of disability. One example can be found in Georgi Georgiev's (designer at PROS in Bulgaria) text about creating High Contrast Modes from Design Systems: a dark theme that differs from the common Dark Mode by being intended for users with some kind of visual impairment or photosensitivity (sensitivity to light).

As expected given the debates of the past decades, this scalability is widely defended with reference to the Web Content Accessibility Guidelines (WCAG) as an international guideline. This is because the WCAG provides research-based requirements that must be met through objective and measurable criteria to assess success or failure in creating an accessible digital product.
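The Design Token theming that Georgiev describes can be sketched in a few lines. This is a minimal illustration, not any cited system's actual implementation, and the token names are my own invention: components read semantic tokens by name, so a high-contrast theme only needs to override a small layer of values.

```python
# Base theme: semantic design tokens that components consume by name.
BASE_TOKENS = {
    "color.text": "#3c4043",
    "color.surface": "#ffffff",
    "color.border": "#dadce0",
}

# High Contrast Mode: overrides only the tokens that need stronger values;
# everything else is inherited from the base theme.
HIGH_CONTRAST_OVERRIDES = {
    "color.text": "#000000",
    "color.border": "#000000",
}

def resolve_tokens(high_contrast: bool = False) -> dict[str, str]:
    """Merge theme layers. Components never hardcode hex values, so
    switching themes re-skins every component at once."""
    overrides = HIGH_CONTRAST_OVERRIDES if high_contrast else {}
    return {**BASE_TOKENS, **overrides}

print(resolve_tokens(high_contrast=True)["color.border"])  # #000000
```

On the web, the same layering is typically implemented with CSS custom properties, often switched automatically via the `prefers-contrast` media feature rather than an explicit flag.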
Many professionals highlighted the historical role of the WCAG in creating a common language capable of internationally standardizing legal accessibility requirements for digital products, a reflection of the transition of digital accessibility from a recommendation to an obligation in many countries over the last decade. In 2024, several governmental DSs advanced their compliance processes with the WCAG guidelines, driven by national legal requirements, such as the United States Web Design System (USWDS) and the NL Design System in the Dutch government.

However, a challenge in achieving this standardization is discussed by Amy Cole (Digital Accessibility Lead at USWDS): many designers and developers fear the WCAG because it is written in very technical language. This fear stands out as a major obstacle, fueling a widespread call for more accessible educational materials and dedicated accessibility study groups to foster awareness within organizations. It is no accident that, alongside the WCAG, supplementary tools are frequently mentioned to help with this challenge, such as the IBM Equal Access Toolkit, Microsoft's Inclusive Design Toolkit, The A11y Project, and Deque Systems. A DS therefore also helps with compliance, as the team can create cascading processes that start with specialists and reach professionals who are not specialized in accessibility more smoothly. But all of this depends on what is understood by accessibility and technical compliance.

3. What Is Accessibility in Design Systems?
Daniel Henderson-Ede (Accessibility Specialist at Pinterest) brings an important reflection: accessibility goes beyond ensuring that a component is technically compliant with the WCAG. He explains that, although accessible components are key pieces for starting an accessible interface properly, the complete puzzle only forms when the experience as a whole is considered.
A classic example of this is the focus order in an interface: if the components are not organized logically, keyboard navigation becomes frustrating, even if each individual piece meets the standards.

Diagram on Technical Compliance and Inclusive Design. Source: author of the text, 2024.

For this reason, one of the biggest debates around building an accessible DS concerns what accessibility really is. Cíntia Romero (designer at Pinterest) describes this accessibility tied to the WCAG guidelines as just an initial technical compliance. This does not mean that compliance is not essential, but she points out something important that many professionals raised in 2024: mere adherence to standards does not necessarily mean meeting the real needs of these user groups.

That's why she highlights Inclusive Design, alongside technical compliance, as a key approach within Design Systems, ensuring a more human-centered and holistic perspective in interface testing. Here, accessibility is integrated into user-centered design methodologies, ensuring that multiple usage scenarios for diverse users are thoughtfully considered. Complementing this view, Hidde de Vries (Accessibility Specialist at NLDS in the Netherlands) proposes that, instead of treating disability as a personal limitation to be mitigated (what would be a medical model of thinking), we should adopt the social model of accessibility, which makes us consider the social context around a person with a given condition.

To understand how to approach this social responsibility, Greg Weinstein (designer at CVS Health) helps by noting that an Inclusive Design System is also connected to intersectionality, a concept borrowed from authors such as Kimberlé Crenshaw (1989). This concept is used to show that different characteristics (such as race, class, sexuality, or even different types of coexisting disabilities) overlap and interact, creating particular experiences.
Thus, in this more holistic perspective, it is not just the screen reader (or any other technology) that needs to work well! It may be necessary to think about the elderly person with low vision who also needs simple interfaces because of their age. Or, as another example, it may be necessary to consider the low-income user with hearing loss who faces financial difficulties in purchasing modern devices.

In the end, this conversation shows that a Design System goes beyond ensuring perfect compliance in isolated components. It provides the foundation to achieve this compliance systematically, but its true value lies in allowing it to deepen and expand over time. It is about testing solutions in real contexts, with real people. It is a continuous process of evolution, where technique inevitably meets humanity.

4. How Do You Evolve an Accessible Design System?
You've probably heard the saying that accessibility only truly works when it's considered from the very beginning of the process, an approach known as Shift Left.

Simon Mateljan's analogy comparing cakes and Design Systems. Source: UX Camp Australia, 2024.

Simon Mateljan (Design Manager at Atlassian) makes a simple and effective analogy: building accessibility into a Design System is like adding eggs to a cake recipe. If you forget that ingredient at the beginning, the final result will never turn out as expected. This logic makes sense, as ensuring accessibility from the start helps it permeate all stages of development. But the debates of 2024 showed that this journey is far from linear! There is no perfect starting point; the process is continuous and full of adjustments.

Sophie Beaumont (Design System Team Lead at the BBC) shows how Shift Left can happen when evaluating component reuse across different contexts. She describes a case where the BBC team tried to reuse an existing component to create a content timeline.
Although the Design System already had a component visually similar to what the team had designed, it did not meet accessibility requirements in the new context. After internal discussions and technical evaluations, the team concluded that forcing the use of the old component would harm the experience for users with disabilities. This decision led the BBC to strengthen a work process that prioritizes functionality over appearance when evaluating component usage. Such a process is crucial to prevent a Design System from creating rigidity, making accessibility harder rather than easier. In addition, Feli Bernutz (iOS Developer at Spotify) presents "The Game Plan" (timestamp: 16:55), a work methodology for deciding when to use a ready-made solution and when to think outside the box to create something more customizable.

UX design draft for accessibility assessment. Source: Sophie Beaumont, BBC, 2024.

Furthermore, unforeseen issues can create unexpected needs that go beyond simply creating new components. One example is Pinterest, which allows users to add alternative text (alt text) when creating Pins. This open and collaborative content model presents unique accessibility challenges, as many users either do not write descriptions or produce poor-quality alt text. Because of the platform's design, Pinterest cannot simply impose rigid limitations to ensure alt text consistency. Instead, the team invests in educating users, offering clear instructions through interface components on how to write useful, contextualized descriptions. This process shows that, in certain cases, it is necessary to adapt the WCAG's technical guidelines to the specifics of each digital service.

Accordingly, many statements from 2024 emphasized that mistakes are part of the Design System creation process, especially in contexts where needs vary as the product matures.
Even in Spotify's Encore Design System (a product of huge scale), the approach is iterative: rather than pursuing a perfect solution from the start, the team seeks progressive improvements, climbing one step at a time. At UXCon Vienna, Wendy Fox explains that creating an inaccessible DS at the beginning does not mean the end of the process; it may, in fact, be the beginning of understanding particular challenges.

So, how do you balance all of this? On one hand, Shift Left suggests that accessibility should be a concern from the very beginning. On the other, the continuous discovery process reveals specific needs that require adjustments and adaptations. In the end, building an accessible Design System is less about achieving a perfect state and more about staying open to learn, test, and constantly improve.

5. How to Test an Accessible Design System?

Given everything said so far, it is clear that an accessible Design System is not born ready: it is a continuous construction, refined over time and, mainly, through many tests. But, given the complexity of creating truly inclusive interfaces, what kind of testing needs to be done to achieve this goal?

Diagram showing automated testing (speed) versus manual testing (depth). Source: Author, 2024.

In 2024, the debate about accessibility in digital products highlighted the importance of automated testing (and the possible use of AI), especially in Design Systems that are already implemented at the code level. Tools like Axe, Lighthouse, and other automated solutions are gaining ground as major allies in this process.
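To make concrete the kind of rule these checkers automate, here is a minimal sketch of one classic automated check: finding images that ship without alternative text. This is an illustration only, not Axe's or Lighthouse's actual rule engine (their rulesets are far larger and more nuanced).

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    # Collect <img> tags that lack an alt attribute entirely.
    # Note: alt="" is deliberately allowed here, since an explicitly
    # empty alt is the correct way to mark a decorative image.
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.issues.append(
                    f"img '{attrs.get('src', '?')}' is missing an alt attribute"
                )

checker = AltTextChecker()
checker.feed("""
  <img src="logo.png" alt="Company logo">
  <img src="divider.png" alt="">
  <img src="chart.png">
""")
print(checker.issues)
# → ["img 'chart.png' is missing an alt attribute"]
```

Checks like this are cheap to run on every build, which is exactly why automated tooling scales so well; the trade-off, as discussed below, is that they can only catch rule-expressible failures.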
And, in fact, it would be naive to underestimate their importance: they identify common errors quickly and help teams reach compliance standards.

Moreover, in a Figma live session, Luis Ouriach (Designer Advocate at Figma) and Daniel Henderson-Ede (Accessibility Specialist at Pinterest) emphasized that native apps, such as those developed for iOS and Android, present unique challenges. Unlike the web, where standards like HTML and ARIA provide a solid foundation, native platforms have specific toolkits, such as UIKit on iOS and Jetpack Compose on Android. This fragmentation requires teams to adapt their practices to ensure accessibility across different environments.

Daniel also pointed out that validation tools for native apps are less integrated into the development flow. While solutions like Axe and Lighthouse work directly on the web, mobile apps rely on tools like Xcode's Accessibility Inspector or Android's Accessibility Scanner, which, although useful, have limitations and do not easily connect to the continuous development process. To address these challenges, the recommendation is to incorporate platform-specific tests and train teams on the nuances of each technology, which in turn requires tailored testing and documentation processes within a Design System.

However, there are clear limits to this approach. As a Webflow article reminded us, automated tests can only identify about 30% of accessibility issues. This limitation brings us to a crucial point mentioned by the vast majority of materials from 2024: automation does not replace the human eye.
Manual testing and, especially, interaction with real users are indispensable to understanding the practical experience, something that automated reports cannot capture.

That being said, it is evident why so many materials are dedicated to explaining how to create accessibility tests, and to the challenges of developing proper documentation to accompany and control this continuous process of evolution.

6. How to Organize Accessibility Tests?

Many materials suggest starting with the adoption or creation of checklists. Depending on the content in question, checklists can be filled out after manual or automated tests. Here, the debate is less about the testing technique and more about how to systematize the organization of these tests within a team.

Amy Cole, from the US Web Design System (USWDS), sees checklists as bridges between what is in the manuals (such as the WCAG) and what actually happens when a user navigates a product. These guides serve as scripts that allow even less specialized teams to engage in practical testing with real users. The checklist is therefore discussed not only as a tool for systematizing tests but also as a way to include those who feel disconnected from the WCAG criteria because of their technical language. This is why the USWDS team suggests questions like:

Can the button be activated with the Enter key or the spacebar?
Is the focus clearly visible in all button states?

The suggestion is to create these questions with various teams, including specialists, allowing for a collaborative and multidisciplinary view. Here, the USWDS material highlights the importance of documenting which WCAG criteria are covered by each of the suggested questions.

Wendy Fox, at UXCon Vienna, complements this view by discussing the importance of conducting audits that go beyond generic checklists that would apply to any scenario.
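A checklist of this kind can be kept as a small, reviewable data structure that ties each plain-language question to the WCAG criterion it covers. The sketch below is a hypothetical illustration (the criterion numbers and names are real WCAG 2.x success criteria, but the structure and field names are mine, not USWDS's):

```python
# Hypothetical checklist format: each entry pairs a plain-language
# question with the WCAG success criterion it maps to, so less
# specialized teams can test without reading the spec directly.
BUTTON_CHECKLIST = [
    {"question": "Can the button be activated with Enter or the spacebar?",
     "wcag": "2.1.1 Keyboard", "level": "A"},
    {"question": "Is the focus clearly visible in all button states?",
     "wcag": "2.4.7 Focus Visible", "level": "AA"},
]

def coverage(checklist):
    """Return the sorted set of WCAG criteria a checklist touches,
    useful for spotting gaps during a review."""
    return sorted({item["wcag"] for item in checklist})

print(coverage(BUTTON_CHECKLIST))
# → ['2.1.1 Keyboard', '2.4.7 Focus Visible']
```

Keeping the mapping explicit makes the documentation control USWDS describes almost free: a coverage report per component falls out of the data.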
A link in a dynamic carousel, for example, cannot be evaluated the same way as a static link in body text. For this reason, she advocates personalized checklists that consider the uniqueness of each component: for a button, this might mean checking for visible focus states; for a modal, ensuring that keyboard navigation flows naturally. These criteria not only ensure compliance but also make the experience more fluid and respectful for those who rely on assistive technologies.

Amy Hupe and Geri Reid, from the UK Government Digital Service (GDS), emphasize that these checklists need to consider the tools users actually employ. They suggest different tests, such as: 1) keyboard access; 2) zoom/magnification; 3) screen readers like NVDA and JAWS; 4) eye trackers. These serve as a guide to understanding accessibility beyond a generic, one-size-fits-all concept, potentially broken down by technology type and device. Since there are many different disabilities and contexts to consider, tests can indicate which assistive technologies are well supported by the Design System and which still need to evolve to create a better experience.

Additionally, most materials stress the importance of speaking with real users in qualitative research. Still, I would emphasize that this remains the least discussed topic in all the 2024 materials: much of the focus is still on the individual pieces of the Design System rather than on testing the assembled interface presented to end users.

7. How to Document Tests?

One of the biggest challenges is documenting accessibility tests in a way that allows an accessible Design System to evolve.
It is no accident that some of the 2024 materials go in depth into the challenges of constructing and maintaining this documentation over time.

You can work both with accessibility documentation in design (using Figma as a recording space) and with documentation of general tests carried out on already developed interfaces, which typically lives in Excel spreadsheets, on GitHub, or on Design System websites. There is no consensus on which of these documents is the most relevant; different professionals defend different proposals, considering their different purposes.

For Pinterest's Gestalt, Cintia Romero explained that checklists are integrated directly into Figma as a way to bring designers closer to accessibility practice. According to a case study by Deque, 67% of accessibility issues originate in design prototypes! This data debunks the idea that accessibility tests and documentation should only happen at the development level. For this reason, some platforms keep this documentation within Figma itself, so that product handoff complies with accessibility standards before moving on to code-level tests.

This documentation is often later transferred to the component page on the Design System website, as we can see next in the case of Gestalt.

Accessibility documentation for the Button component from Gestalt, Pinterest's Design System. Source: Supernova, 2024.

This is a more concise documentation proposal. Other projects opt for much more robust and detailed documentation, containing success criteria and the type of test conducted (including which assistive technology is involved). This is the challenge faced by the USWDS team, which organizes test data (manual and automated) for all the components of their Design System.
To do this, the team uses a spreadsheet that contains:

Component name: the name of the component being audited.
WCAG success criterion: the specific criterion being tested, such as 1.4.4 Resize Text or 2.1.1 Keyboard.
Compliance level: the WCAG conformance level (A, AA, or AAA).
Test type: whether the test relates to keyboard, zoom, screen readers, design, or another aspect.
Test status: whether the test passed, failed, or passed with exceptions.
Additional description: details on how the test was conducted and what the developer should observe. Three columns titled "When you", "And", and "This happens" allow each success or failure case to be explained.
Automated test prompt (if applicable): the test prompt, recorded for control over how the testing was conducted.
Audit date: when the test was conducted and revalidated, to track results and possible WCAG updates.
Other columns for notes and common failures: observations on recurring issues found during testing, including contributions reported via the project's GitHub.

Amy Cole (USWDS) displaying accessibility test documentation tables for the US Government's Design System. Timestamp: 25 minutes. Source: NL Design System channel, YouTube, 2024.

In both cases, these materials highlight very important challenges. In the case of the USWDS, Amy Cole explains that components are regularly re-audited to check whether new scenarios need to be considered, such as changes in browsers for web components. Likewise, the assistive technology used by a group of people may change, requiring new tests without losing the old documentation.

Another point of attention is tests that go beyond the components themselves, as Fable notes in an article on the different levels of accessibility testing in a Design System. If we find errors in the relationship between components (even though the components themselves are compliant), where and how do we document these problems?
Or, if a user with a disability offers a perspective beyond what we understand as accessibility through the WCAG, where can we give voice to this audience? I believe these are challenges that, as a community, we are still figuring out.

Did you enjoy this retrospective? Feel free to contact me via LinkedIn with any feedback.

References

APPFORCE. Designing APIs: How to ensure Accessibility in Design System components. AppForce, YouTube.
BEAUMONT, Sophie. Shifting left: how introducing accessibility earlier helps the BBC's design system.
BEDASSE, Kristen. Design System Accessibility UX Case Study: Accessibility improvements to an existing design system.
BHAWALKAR, Gina. My Takeaways From Config 2024: Impacts On Design Systems, Storytelling, And Accessibility. Forrester, 2024.
BIKKANI, Aditya. A guide to accessible design system. AELData, 2024.
CODE AND THEORY. 3 Principles to Build an Engineered Design System that Improves Speed, Consistency, and Accessibility. Medium, 2024.
CODE AND THEORY. How to create an accessible design system in 60 days. Medium, 2024.
CONVEYUX. Greg Weinstein: Inclusive user research to build an accessible design system. ConveyUX, YouTube, 2024.
CUELLO, Javier. Accessible Components. Design Good Practices, 2024.
DEQUE SYSTEMS. Making Pinterest more inclusive through design systems: axe-con 2023. Deque Systems, YouTube, 2024.
DIGITALGOV. Component-based accessibility tests for the U.S. Web Design System. DigitalGov, YouTube, 2024.
FABLE. Power up your design system with accessibility testing. Fable, 2024.
FIGMA. In the file: Design Systems and Accessibility | Figma. Figma, YouTube, 2024.
FRONTEND ENGINEERING & DESIGN SOUTH AFRICA (FEDSA). The NL Design System And Why Accessibility Matters: Hidde de Vries. FEDSA, YouTube, 2024.
GEORGIEV, Georgi. The importance of high contrast mode in a design system. Pros, 2024.
GET STARK. How to use your design system colors to fix accessibility issues with Stark in Figma and the browser. Get Stark, 2024.
HAWKINS, Tyler.
Scaling accessibility at Webflow. Webflow, 2024.
HI INTERACTIVE. UX and design systems in retail: inclusivity, accessibility, and innovation. Hi Talks #10. Hi Interactive, YouTube, 2024.
INTO DESIGN SYSTEMS. Design systems accessibility meetup: Component review. Into Design Systems, YouTube, 2024.
INTO DESIGN SYSTEMS. Design tokens sets for accessibility needs: Marcelo Paiva at Into Design Systems Conference. Into Design Systems, YouTube, 2024.
JOER, Jairus. Develop design systems with accessibility in mind. Aggregata, 2024.
KNAPSACK. Making design systems inclusive with accessibility specialist Daniel Henderson-Ede. Knapsack, YouTube, 2024.
KORNOVSKA, Diyana. Building accessibility into design systems. Resolute Software, 2024.
LAGO, Ernesto. Accessibility Best Practices for Design Systems. LinkedIn, 2024.
LAGO, Ernesto. An Intro to Accessibility in Design Systems. LinkedIn, 2024.
LAMBATE, Fahad. Designing for inclusivity: the Shift Left approach towards accessible design systems (ADS). Barrier Break, 2024.
LYNN, Jamie. Accessible design systems. Jamie Lynn Design, 2024.
MILLER, Lindsay. The importance of accessibility in design systems. Font Awesome, 2024.
NL DESIGN SYSTEM. Using USWDS accessibility tests to improve accessibility: Amy Cole, Design Systems Week 2024. NL Design System, YouTube, 2024.
ROMERO, Cintia. Accessibility in design systems: a comprehensive approach through documentation and assets. Supernova, 2024.
STANKOVIC, Darko. Accessibility in Design Systems. Balkan Bros, 2024.
TESTDEVLAB. QualityForge, Speaker #5: Adrián Bolonio, Design systems and how to use them in an accessible way. TestDevLab, YouTube, 2024.
UNIVERSAL DESIGN THEORY. Creating an accessible design system. Universal Design Theory, 2024.
UXCAMP AUSTRALIA. Simon Mateljan | Baking accessibility into your design system. UXCamp Australia, YouTube, 2024.
UXCON VIENNA. (In)accessible design systems: doing things wrong to get it right. Wendy Fox, uxcon Vienna 2023. UxCon Vienna, YouTube, 2024.
VAUGHAN, Maggie.
Essential principles of accessible design systems. Dubbot, 2024.
WEBAIM. Homer Gaines: Improving accessibility through design systems. WebAIM, YouTube, 2024.
ZEROHEIGHT. Back to school with Amy Hupe & Geri Reid: Accessibility and design systems. Zeroheight, YouTube, 2024.

"Design systems and accessibility: a 2024 retrospective" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
Designing for the AI future (uxdesign.cc)

4 guidelines to design with AI in mind

Photo by Cash Macanaya on Unsplash

I've been working on AI projects for a little while now, but I haven't taken the time to truly reflect on how AI is reshaping my design practice. After some reflection, it has become clear that my practice is changing, and I need to continue learning and evolving. And while I'm doing that, why not share what I learn along the way? So, here are 4 areas to consider as we design for the future.

1. The Designer's Superpower

Being human-focused, asking why, and empathising with customers is more important than ever. The core of effective design, particularly in the age of AI, remains deeply rooted in human understanding and our ability to:

Empathise with customers: uncover their true needs and pain points.
Differentiate the experience: identify unique value propositions for the business and opportunities to delight and enrich customer experiences.
Assess overall impact: determine the broader consequences of design decisions.
Understand user journeys: map the highs and lows of user experiences within their ecosystems.

So, where do you start? I think it's the same place as always: a great problem statement. This ensures we focus on human challenges rather than technological solutions. One of my favourite ways to frame a problem statement is with the template below, which I have adapted from the Lean UX Handbook:

Our intention with [our product/service] is to help [specific user persona or segment] achieve [their goals or desired outcomes].
However, we've identified that [observed behaviour, data insights, or user feedback].
This results in [user impact, business impact, or operational inefficiencies].
How might we address this gap and empower our customers to achieve [desired outcomes], tracked through [success metrics]?
2. Resisting the Tech-First Mindset

Whenever a new technology emerges, there's a tendency to prioritise its application over grounding design in a human need. During the blockchain era, I always heard "Can we put it on the blockchain?" Now, with AI, I hear "How can we integrate AI into this product?"

While AI can be applied to virtually anything, framing challenges solely around technology leads to solutions that lack purpose. True impact comes from identifying genuine human needs and exploring how AI can effectively address them. This means going back to point 1 and focusing on being human-centred and addressing real business and customer needs.

Once you have defined a real need, here are some other resources I have found useful. Google has created an overview of when (and when not) to use AI:

Reference

IBM has centred their AI design framework around defining the intent. This is essentially the "what" of using AI, and it can be used as a guiding principle to ensure alignment. They suggest selecting 1-2 intents and combining them with your problem statement to guide brainstorming.

Reference

3. Defining AI Values

Embedding AI values ensures that the products and technology you create are aligned with the core values of the business. These values should reflect:

Ethical considerations: prioritising fairness, transparency, and accountability.
Human impact: ensuring AI enhances, rather than diminishes, the human experience.
Business alignment: aligning AI initiatives with broader business values, privacy, and security guidelines.

These values should be considered throughout the product lifecycle, from research and scoping to review and iteration. Here is an example of a value-alignment framework for an AI system used in a hospital setting:

Reference

If you want to delve into this more, I recommend reading this Harvard Business Review article, which contains lots of great tips.
4. Diving Deeper into Ethics and Risks

Evaluating risks is essential for any project, and it is considered best practice to assess the potential broader business and societal impacts of AI design. This involves the following steps:

Reflect: take a moment to review the product's ethical considerations.
Capture: note potential negative implications and risks.
Iterate to mitigate: based on your findings, refine your product thinking and mitigate risks.

As a guide, you can use this IBM framework for evaluating primary, secondary, and tertiary effects, which is designed to help teams anticipate potential unintended consequences. Additionally, the Centre for Digital Content Technology has published a good ethics canvas.

Drawing from ethics frameworks by organizations like UNESCO and the NSW government, and from this research paper in Harvard Data, here are some of my guiding questions:

Another resource worth exploring is IDEO's AI Ethics Cards, which provide great activities to prompt ethical discussions during the design process.

Reference

If you take one thing from this article

Don't skip the fundamentals of design thinking. Focus on solving real problems in a thoughtful, human-centred way. I hope my lessons give you fresh ideas, tools, and frameworks for designing ethically and consciously using AI.

"Designing for the AI future" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
Designing relationships with AI: ethics of AI empathy (uxdesign.cc)
How far can AI empathy really go?
Continue reading on UX Collective
-
The good, the bad and the ugly of Duolingo gamification (uxdesign.cc)
Ten design principles rated from best to worst.
Continue reading on UX Collective
-
Shaping minds: how first impressions drive AI adoption (uxdesign.cc)

Make-or-break moments: the first interaction with an AI system, whether it's a website, landing page, or demo, shapes the user's mental model of the system. This, in turn, determines whether it will be adopted or not. Are these decisions driven by emotion or logic? Here's how Technology Adoption Theory unpacks the mechanisms of technology acceptance, with insights applied to AI systems.

By Katie Metz, source

It takes only 10 seconds for someone to decide whether a website is worth their time. And while there is a wealth of resources on designing user-centered products, far fewer focus on how to communicate their value during those pivotal first moments, the initial touchpoints that shape a user's mental model of the system. This challenge is especially acute for AI systems, which are often technically complex and hard to simplify. I've seen talented teams build extraordinary, user-focused solutions, only to struggle to convey their true value in a way that's clear, engaging, and instantly meaningful.

This article delves into the psychological barriers to accepting and adopting new technologies, offering insights on how to highlight your product's value and transform first impressions into lasting connections.

New tech, old habits

Have you heard of Khanmigo? It's an AI-driven teaching assistant from Khan Academy, designed to guide students through their learning journey with engaging, conversational interactions. It's empathetic, engaging, and patient. Make a mistake? No problem. It'll gently explain what went wrong and how to fix it, creating a learning experience that feels less like being corrected and more like growing together. It's a glimpse into how AI can reinvent old patterns, making interactions more personal, more flexible, and, dare I say, more human.

Source: Khanmigo

Of course, kids are a relatively easy audience for Khanmigo, as they are naturally open to such innovations.
They don't carry years of learning fatigue, forged by sitting through endless lectures and associating study time with boredom. AI meets them where they are, unspoiled and eager.

Now imagine a different scenario: a car equipped with AI that tracks your facial expressions and eyelid movements to detect when you're too tired to drive safely. It suggests, perhaps with a subtle alarm, that you pull over for a rest. Tell that to my grandpa, though, and he'd probably chuckle at the idea that a camera could know better than he does when he needs a break. There will always be early adopters, those eager to embrace the new and exciting, and those who resist, for reasons that may be logical or deeply personal. For instance, some might worry that AI will take their job, while others may mistrust the technology purely because it feels unfamiliar or intrusive. Understanding and addressing these perspectives is the first step towards designing AI systems that can bridge the gap between skepticism and acceptance.

The good news? This isn't a new challenge. Humanity has faced it during every industrial revolution, each time adapting its thinking to a new normal. While I won't delve into all of these transformative eras, or the ongoing Fourth Industrial Revolution, I'd like to focus on the most recently completed one. Let's rewind to the Third Industrial Revolution, the dawn of the computer and internet age in the late 20th century, and explore its key ideas about facilitating system adoption.

When computers met humanity

The 1980s marked a significant turning point in the study of technology adoption, spurred by the rapid rise of personal computers and the challenge of integrating these new tools into everyday life. Researchers quickly recognized the need to focus on factors like user involvement in the design and implementation of information systems.
This emphasis acknowledged a simple truth: technology is only as effective as its ability to meet the needs of the people who use it.

1983, source

On the practical side, industry practitioners concentrated on developing and refining system designs, aiming to make them more user-friendly and effective. My favorite example is the research at Xerox PARC (Palo Alto Research Center), where researchers closely observed office workers' behaviors and workflows. Their insights led to the creation of the desktop metaphor, introducing familiar concepts like files, folders, and a workspace that mirrored a physical desk. This innovation revolutionized graphical user interfaces (GUIs), laying the foundation for systems like Apple's Macintosh and Microsoft Windows. The Dream Machine by M. Mitchell Waldrop and Dealers of Lightning by Michael Hiltzik share more details about the history and impact of Xerox PARC.

These parallel efforts, academic research and hands-on development, led to numerous theories and frameworks for understanding and guiding technology adoption. Among these frameworks, the Technology Acceptance Model (TAM) stands out as one of the most influential.

Technology Acceptance Model

Back in 1986, Fred Davis created TAM to answer a simple but pivotal question: why do some people adopt new technology while others resist? The model measures this adoption process by focusing on user attitudes, specifically, whether the technology feels useful and easy to use. These two factors form the foundation of the model, offering a lens for understanding how people decide to embrace (or avoid) new tools and systems.

The first factor, perceived usefulness, is how much a user believes the technology will improve their performance or productivity.
It's outcome-oriented, zeroing in on whether the tool helps users achieve their goals, complete tasks faster, or deliver better results.

The second factor of TAM is perceived ease of use: the belief that using the technology will be simple and free of unnecessary effort. While usefulness might get a user's attention, ease of use determines whether they'll stick with it. If a system feels complicated, clunky, or overly technical, even its benefits might not be enough to win users over. People naturally gravitate toward tools that feel intuitive.

Adapted from the Technology Acceptance Model (Davis, 1986), source

In 2000, Venkatesh and Davis expanded the original TAM to dig deeper into what shapes perceived usefulness and people's intentions to use technology. They introduced two key influences: social influence (how the opinions of others and societal norms impact adoption) and cognitive instrumental processes, which cover how users mentally evaluate and connect with a system. Let's unpack these factors and explore how they can help shape a mental model of an AI system that fosters adoption.

Perceived Usefulness

Perceived usefulness doesn't exist in a vacuum. One of the social factors is subjective norm, the pressure we feel from others to use (or not use) a particular technology. This ties closely to image, the way adopting a tool might enhance someone's status or reputation; think of design influencers after attending Config, dissecting the latest features and showcasing their expertise.

But subjective norm doesn't affect everyone the same way. Experience can dull its influence. For those just starting with a new system, social pressure often carries more weight: unsure of their footing, they look to others for guidance. As they grow more comfortable, external opinions start to matter less and their own evaluation takes over. Voluntariness also changes the game. When adoption is a choice, users are less swayed by others' opinions.
But when it's required, whether by a workplace mandate or social obligation, subjective norm has a much stronger pull.

On the cognitive side, job relevance plays a big role. Users ask, "Does this technology actually help me in my specific role?" If the answer is no, it's unlikely they'll see it as useful. Similarly, output quality, whether the system delivers results that meet or exceed expectations, reinforces its value. Finally, there's result demonstrability, or how clearly the benefits of the technology can be observed and communicated. The easier it is to see and measure the impact, the more likely users are to view the technology as useful.

Adapted from the Technology Acceptance Model (TAM 2) by Venkatesh and Davis, 2000. source

While product design can't directly influence subjective norm, it often plays a role in shaping image: how people perceive themselves, or imagine others will see them, when they adopt the technology. It's not so much about the product itself as about what using it says about the individual. By focusing on the right narrative from the very first touchpoint, some applications make it easy for users to see how adopting the tool reflects positively on them.

Take folk.app, for instance. Instead of just listing features, it focuses on solving specific pain points, framing the app as a tool for staying organized and professional. The messaging feels personal and practical. For example, a section title like "Sales research, done for you" suggests that, without any additional effort, users will have valuable insights at their fingertips. It's not just about solving a problem; it's about positioning the user as more prepared, professional, and efficient.

Folk.app, source

Braintrust takes a different angle. They highlight glowing media endorsements, signaling that the platform is widely recognised. It's not just about saying that the app works; it's about creating a sense that using it puts you on the cutting edge, part of a forward-thinking community.
This builds image, making users feel that adopting the technology aligns them with innovation and success.

Braintrust, source

Perceived Ease of Use

If perceived usefulness answers the question "Will this technology help me?", then perceived ease of use asks an equally important one: "Will it be easy to figure out?" Research shows that this perception is influenced by two main groups of factors: anchors and adjustments.

Anchors serve as the starting point for a user's judgment of ease. They include internal traits and predispositions, such as computer self-efficacy (a user's confidence in their ability to use technology) and perceptions of external control, the belief that support and resources are available if needed. Another anchor is computer playfulness, which reflects a user's natural tendency to explore and experiment with technology. This sense of curiosity can make systems feel more approachable, even when they're complex. On the flip side, computer anxiety, a fear of engaging with technology, can act as a barrier, making systems seem more difficult than they really are. When applying these principles to AI systems, we see a new form of apprehension emerging: AI anxiety.

Once users begin interacting with a system, adjustments come into play. Unlike anchors, which are rooted in a user's pre-existing traits and beliefs, adjustments are dynamic: they refine or reshape initial perceptions of ease of use based on real-world experience with the system.

One key adjustment is perceived enjoyment: is the act of using the system inherently satisfying, or even delightful? This concept is closely tied to user delight, where interactions go beyond pure functionality to create moments of joy or surprise. Have you ever searched for "cat" in Google and noticed a yellow button with a paw? That's delight.
It's unexpected, playful, and entirely unnecessary for functionality, but it sticks with you.

Another adjustment is objective usability: the system's actual performance as observed during use. Before interacting with a system, a user might assume it will be complex or difficult. But as they engage with the AI, accurate and intuitive responses can shift this perception, reinforcing the idea that the system is not only functional but easy to use.

Adapted from the Technology Acceptance Model (TAM 3) by Venkatesh and Bala, 2008.

Computer self-efficacy, a user's confidence in their ability to use technology, can't be controlled directly, but it can definitely be nudged in the right direction. The secret lies in making the application feel approachable, so users believe they're capable of mastering it.

One way to do this is by showcasing the experiences of others. Highlighting user reviews or testimonials isn't just about marketing; it taps into Bandura's Social Cognitive Theory. When people see others successfully using a tool, they start to think, "If they can handle it, why can't I?" It's not just about proof; it's about planting the seed of possibility.

Contra, source

Another approach is helping users form a mental map of how the technology works. GitBook, for example, pairs feature descriptions with skeleton-state interface snippets: clean, minimalist snapshots that give users just enough information to understand the basics without overwhelming them. Animations guide their focus, while interactive elements add a subtle layer of gamification, making learning feel less like a chore and more like discovery. It's user-centric design done right: a confidence boost, one step at a time.

GitBook, source

Slite provides an example of how the job relevance factor can make a product introduction resonate right from the first page. One of the challenges in introducing a knowledge base is resistance to sharing information.
Studies reveal that 60% of employees struggle to obtain critical information from colleagues, often due to a phenomenon known as knowledge hiding: the deliberate withholding or concealing of information. This behavior stems from fears like losing status or job security, creating barriers to collaboration and productivity. Slite tackles this challenge head-on with a playful, relatable touch, wrapping it in humor: "The knowledge base even [Name] from [one of 6 target industries] wants to use." This subtle nod to targeted pain points highlights its key differentiators: beautiful documentation, hassle-free adoption, and AI-powered search from day one, emphasizing perceived enjoyment. After all, who doesn't love beautiful, effortless solutions? It's not just about functionality; it's about creating a product so intuitive and engaging that it minimizes resistance and inspires adoption, transforming apprehension into enthusiasm. (Slite, source)

Final thoughts

The Technology Acceptance Model, while valuable, is not a universal solution but rather a framework: a lens through which we can examine and interpret the dynamics of technology adoption. Since its introduction over a quarter-century ago, it has illuminated patterns in how users perceive and engage with technology. However, it can also risk overgeneralizing, glossing over the nuanced and context-specific factors that shape user behavior. Rooted in the psychological theories of reasoned action and planned behavior, TAM serves as a navigator, helping us better understand and adapt to the complexities of human affective reasoning.
By recognizing its strengths and limitations, we can use it as a guide to create technology experiences that truly resonate with the people they are designed to serve.

Additional resources:

Get Your Product Used: Adoption and Appropriation is a course from IxDF by Alan Dix, one of the authors of my work Bible, Human-Computer Interaction.
How To Measure Product Adoption (Metrics & Tools) provides a solid overview of metrics that can help you grasp the current state of product adoption.
Increasing the Adoption of UX and the Products You Design (Parts 1 and 2) are articles by Chris Kiess that break down the Diffusion of Innovations theory, Cooper's Model of User Distribution, and the relevance of Jakob Nielsen's 5 components of usability.

Have ideas, thoughts, or experiences to share? Leave your insights in the comments!

Shaping minds: how first impressions drive AI adoption was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
Sidebar is back from its break
uxdesign.cc
Of ideas that can't cease to exist.

Sidebar's public announcement, back in June 2024.

The news that Sidebar.io was taking a break felt a bit like a heartbreak. Sidebar has been one of my favorite sources to keep up with design, with content that would make me a better, smarter, more informed designer. No noise, no endless scrolling, just the good stuff. 5 links a day. That's it.

Sacha Greif has been doing an incredible job for the last twelve years of curating and maintaining Sidebar without skipping a beat. That's dedication. It's easy to start something, much harder to stick with it. That's a massive achievement and something that needs to be celebrated. As someone who also dedicates personal time to editing and curating content, I know the grind. You pour your heart into it, hoping it resonates, that it provides value. It's endless work. It feels pretty rewarding, but it is not always rewarded.

When I saw the news, I felt I needed to act. Sidebar, not just the website, but the idea itself, couldn't simply fade away.

Sidebar has always felt different. It has advocated for a healthier web ecosystem and has always prioritized links pointing to small, curated digital gardens around the web. Links that come from the makers and doers out there. Sidebar was a signal boost for the kind of web I think many of us miss. A web built by individuals, not algorithms. It championed the small, the curated, the personal. I've always seen Sidebar as a force of resistance of sorts. Built for people who still believe in the web as a platform for knowledge sharing, long-form writing, and community.

Starting today, I'm taking over the daily curation of Sidebar, as well as its management duties and operational costs. It's a big responsibility but also an honor, and I can't thank Sacha enough for trusting me with this mission. I'm bringing Sidebar back to basics: 5 links a day, published on the website and sent via our email newsletter.
All the other features will be archived for now. As usual, folks can submit their own links. If you know of great websites that often publish great content, please drop me a note so I can add them to my watchlist.

Can I keep this vision alive for a few more years? I don't know. I hope so. I genuinely hope so. What I do know is that certain ideas can't just cease to exist.

How you can help:

Spread the word that Sidebar is back by sharing this post or the site (https://sidebar.io/) with your networks.
Follow our RSS feed, our newsletter, our Twitter, or simply add sidebar.io to your daily browser bookmarks.
If you work at or know any company that might benefit from talking to an audience of designers and makers, reach out to them about sponsoring Sidebar. Or tweet at them. This can help cover the initial costs of revamping the project, including hosting, database, emails, and others.

Sidebar is back from its break was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
Understanding how to prioritize makes you a more effective designer
uxdesign.cc
Everything can't be the top priority, or else nothing is. Continue reading on UX Collective.
-
The future of design systems is decentralized
uxdesign.cc
Lessons from nature and technology

Imagine a design system that evolves organically, free from the constraints of centralized control. A system where updates and patterns emerge naturally from its users, where collaboration isn't just encouraged but woven into its fundamental architecture. This isn't just about democratizing design decisions; it's about creating systems that grow and adapt as naturally as the organizations they serve.

The challenges that design systems aim to solve (cohesion, efficiency, and quality) aren't unique to our field. Other domains, from natural systems like ant colonies to technological innovations like blockchain networks, have tackled similar problems through innovative approaches to decentralization. These systems have demonstrated how to build networks that are transparent, collaborative, and community-driven. Their methods of aligning participants around shared goals offer valuable lessons for reimagining how design systems operate.

While decentralization in design systems isn't a new concept, previous attempts have often fallen short due to flawed implementation and insufficient support. This exploration draws inspiration from decentralized networks to propose practical strategies for building more adaptable, inclusive, and scalable design systems, ones that truly serve the needs of both their users and the broader organization.

The challenge of centralization

Design Systems teams often operate as centralized units tasked with making product teams faster while ensuring quality and cohesion across experiences.
While these responsibilities naturally gravitate towards centralization, and we measure success accordingly, we should ask ourselves: aren't efficiency, quality, and cohesion actually shared responsibilities across all product team members, from designers to PMs to engineers?

The typical Design Systems story might sound familiar: a dedicated team develops foundational elements and primitives (colors, typography, icons) along with core components like buttons, inputs, and modals. They create usage guidelines, and product teams use these building blocks to craft user experiences. This model works initially, but products and user needs aren't static; they continuously evolve. Product designers, being closer to end users than Systems designers, frequently encounter scenarios where existing components or guidelines don't quite fit new requirements. In these moments, designers face three uncomfortable options:

Follow the design system strictly, potentially delivering a suboptimal user experience.
Wait for guidance from the Design Systems team, often delaying project timelines.
Create custom solutions outside the system to meet immediate user needs.

The third option usually wins: it's faster and addresses immediate needs. But this choice, multiplied across teams and projects, creates problems. The centralized governance model, while intended to maintain quality, often slows the system's ability to adapt. Even minor updates require multiple approvals, compete with other priorities, and face intense scrutiny to justify ROI and maintain consistency. This slow pace of change frustrates product teams, dampens innovation, and ultimately discourages system adoption. The result?
Fragmented user experiences and accumulated design debt that typically only gets addressed during major redesigns, essentially a forced reset of the system. Ironically, centralization in large organizations often undermines the core goals of Design Systems: instead of increasing efficiency, quality, and cohesion, it can hinder them.

The decentralization spectrum

In her book Thinking in Systems, Donella Meadows emphasizes that a system's outcomes, good or bad, are primarily shaped by its structural design, not external factors. For Design Systems, this insight suggests we should focus as much on organizational structure and design as we do on defining color tokens or button variants. The balance between centralization and decentralization isn't a binary choice; it's a spectrum that shifts with network scale and complexity. Each organization must find its optimal position along this continuum based on its unique needs and challenges.

In smaller teams, centralization often proves effective. Decision-making is swift, feedback loops remain tight, and teams maintain their agility. However, as networks grow, decentralization becomes not just beneficial but necessary. Large-scale centralized governance inevitably creates bottlenecks, slows decision-making, and struggles with scalability. This challenge becomes particularly acute in organizations with multiple product lines, diverse user needs, and teams spread across different time zones and contexts.

Design Systems can follow a similar evolution. Starting with centralization provides the necessary foundation: establishing core principles, shared vocabulary, and baseline components. But successful growth requires a gradual shift toward decentralization through well-defined roles, clear protocols, and robust collaboration mechanisms. Yet many Design Systems teams overlook this crucial transition, constrained by concerns about maintaining quality and consistency or limited by ineffective funding models.
This hesitation leads to missed opportunities to build more scalable and resilient systems. The key lies not in abandoning centralized control entirely, but in thoughtfully redistributing responsibility and decision-making power across the network.

What makes a great decentralized system?

A decentralized system's strength lies in its resilience. By distributing decision-making and action across a network, it eliminates single points of failure that often plague centralized structures. However, this resilience doesn't emerge automatically; it must be carefully architected through thoughtful rules, clear processes, and aligned incentives.

Both nature and technology offer compelling examples of successful decentralization. Ant colonies make complex decisions, such as choosing a new nest location, through a democratic process of independent exploration and collective consensus-building. Their scouts investigate options, evaluate them against criteria, and use pheromone trails to vote for suitable sites. Without central control, colonies consistently make optimal choices through clear protocols and distributed intelligence.

For design systems to achieve similar success, they need to establish clear foundations for how participants interact, make decisions, and stay accountable. Essentially, they must address three fundamental questions:

How do participants interact? The system needs clear, efficient channels for collaboration and knowledge sharing.
How are decisions made? Effective consensus protocols must balance speed with quality.
How are incentives structured? The system should encourage meaningful contributions while maintaining accountability.

Furthermore, looking at successful decentralized networks, from blockchain projects to natural systems, we can identify four essential characteristics that most of them share:

Clear incentives: contributions must have visible impact and recognition.
In design systems, this might mean highlighting widely adopted patterns or celebrating improvements that measurably enhance user experience.
Distributed expertise: different decisions require different types of knowledge. The system should delegate specialized decisions, like accessibility or motion design, to subject matter experts while maintaining inclusive participation.
Transparent processes: trust grows from transparency, with every decision, discussion, and change being traceable and understood by the community.
Communal ownership: long-term success depends on participants feeling genuine ownership of the system through shared governance and collective decision-making.

The challenges of decentralization

Like any system design, decentralization isn't without its drawbacks. Even when teams successfully establish robust protocols and scalable networks, they face inherent tensions. The most significant is the trilemma between speed, scalability, and decentralization: improving any one aspect often requires compromising another. Some networks have found creative ways to address these challenges. One approach involves breaking the system into smaller, semi-autonomous units where trade-offs can be managed more effectively. Another strategy employs layered architectures, as demonstrated by Ethereum's two-layer model: a foundational layer maintaining core standards and consensus, with a second layer enabling flexibility and scalability for specific needs.

This layered approach offers a particularly relevant blueprint for design systems. Rather than attempting complete decentralization, it suggests thoughtfully distributing control where it makes sense while maintaining strong core principles. The goal isn't to decentralize everything, but to find the right balance between centralized stability and decentralized innovation.

Suggested design

Drawing from the two-layer model, we can reimagine how design systems operate.
Layer 1 serves as the foundation: the core design system with its style guides, UI kits, and usage guidelines. This layer, maintained by a central team, provides the stability and clarity essential for any scalable system. The magic happens in Layer 2, where decentralization takes root. Here, Design Systems team members transform from gatekeepers into delegates embedded within product teams. These delegates don't just relay information; they become bridges, facilitating dialogue between teams and ensuring the core system evolves with real-world needs.

This shift fundamentally changes the role of the Design Systems team. Rather than acting as creators and enforcers, they become facilitators and connectors. They guide the community toward consensus without dictating solutions. Decisions emerge from collective experience and needs, with the Design Systems team orchestrating the process rather than controlling it. Creation and maintenance of assets (components, patterns, tokens, etc.) happen communally.

In this model, the true measure of success shifts. The Design Systems team's focus moves from policing outcomes to nurturing the health of the network itself. Their primary concern becomes the quality of connections and exchanges between teams, ensuring the right voices are heard and the right conversations happen. They maintain the protocols that enable effective collaboration, stepping back from prescribing specific solutions. Like a well-functioning ecosystem, the system becomes self-sustaining, not through rigid control, but through healthy interaction patterns and clear protocols that enable organic growth and adaptation.

Pragmatic realism

As our organizations grow and user needs evolve, we face a choice: continue with centralized models that create bottlenecks, or evolve toward systems that scale more naturally with our teams. The solution isn't radical decentralization, but rather thoughtful evolution towards it. Of course, this is easier said than done.
However, by building strong foundations and clear protocols, we can create systems that maintain quality while enabling teams to move at the speed of user needs. This shift requires us to rethink our role: from gatekeepers to facilitators, from rule-makers to community builders. The future of design systems isn't in perfecting our components or creating amazing documentation. Instead, it lies in creating environments where teams can effectively solve user problems together, guided by shared principles and protocols rather than rigid rules.

The first step is acknowledging that our current approach isn't scaling with our needs. The next is having the courage to evolve it.

Stay safe,
Oscar.

The future of design systems is decentralized was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
Using AI to optimize my snow shoveling
uxdesign.cc
An example of how AI is changing the way I work (and live) as a product designer

Snapshots from the AI-generated app I created. You draw the shape of your driveway and it outputs the optimum route, square footage, and estimated time to walk the distance (all at a static shovel width of 2 ft). There's a lot I would improve, but I went from idea to research to app in less than an hour.

I've been challenging myself to use AI in everyday use cases so that I can get closer to the technology as a product design leader. To that end, I want to share my attempt at using the latest generation of AI tooling to build an experience that helps people shovel snow more efficiently, an often mundane and tedious task.

Some additional context about me:

I used to be a developer in a past life but have since lost touch with most skills associated with being able to build apps.
I work in tech, so I'm probably at the leading edge of most of the available tooling compared to most people.
I have taken an app-forward approach to solving this problem, and it's possible there are alternative approaches using AI to do the same (like deeper research).

TL;DR: I ultimately succeeded in the task and developed a small web app where I can draw the shape of my driveway and calculate the optimum shoveling route. I went from idea to research to app in less than an hour (it took me longer to write this article). Here is my approach and the tools I used to solve this problem.

Research

First things first, I needed to determine if the answer already existed. Was there research that told me the most efficient route to shovel a driveway? To find out, I went to Gemini Deep Research (GDR). Using GDR, I ran a meta-analysis across 38 different websites and papers, triggering the research with the following prompt: "Can you please do research on the topic of what the most efficient way to shovel a driveway is?
My driveway is a long rectangle and I shovel it by hand with a 2 foot wide shovel. I'd like to understand how I can shovel my driveway the most efficient way, what pattern I should use to shovel the fastest and clear the area of my driveway the quickest. If there isn't enough research on snow shoveling, you can use other research that might be applicable in the areas of mathematics, geometry, and physics."

Before generating the report, GDR develops a research plan, showing its proposed steps to conduct the research. Here's an example of its response to my prompt:

The Gemini Deep Research plan describes the steps it will take to conduct the research and lets you correct its plan of action before executing the minutes-long process.

This looked good, so I clicked "Start Research," sat back, and sipped my coffee. In just a few minutes, an exhaustive report was spit out with references clearly cited (complete with links to websites, PDFs, and images). In my case, GDR concluded that there's little evidence of an optimal route for shoveling a driveway like mine. It also gave great insights about different patterns to try, the physics of snow shoveling, and how to shovel in a healthy way to prevent injury or overexertion.

GDR reports cannot be directly shared from Google, so I processed my report with Gemini NotebookLM (another wonderful tool for conducting first-party research). From NotebookLM I converted my findings into a podcast. Feel free to give it a listen here if you'd like.

An excerpt of the Gemini Deep Research report on shoveling snow.

Answering the question (and building an AI-generated app)

Given that the Internet's top physicists and forum writers didn't have an answer for me, I took matters into my own hands. There are several different surfaces I need to shovel, so I thought it'd be neat to be able to draw the shape of those areas and have an app tell me the optimum route to clear them.

What is optimum?
I defined optimum as covering a given area in the least amount of walking time necessary to saturate the space. For creating my app I turned to v0 by Vercel, as I believe it to be at the bleeding edge of generative AI app development. With v0, you can turn text prompts into actual functioning apps, with code and deployment options.

An example refinement to the app: one of just 8 prompts it took me to create the snow shoveling estimator app.

My experience was magical. It took me 8 text prompts to generate what I believe to be a pretty good MVP. Did I build the next Facebook? Absolutely not. But I went from idea, to research, to functioning tool in about an hour. Check out my snow shoveling route estimator here: https://v0.dev/chat/driveway-snow-route-planner-UP6yTAh9Xnp?b=b_yQkw8ddGh2c. It's not perfect, but it's pretty damn cool. At this link you can see my entire chat transcript and the versions I went through to create the app. After your prompt, you can see v0 get to work, modifying the code (in this case React) and eventually pausing to generate the runtime preview. In the app development view, the preview renders to the right of your transcript.

Not an AI-generated conclusion

If I had a full 40-hour work week for this, the potential seems limitless (and it's still early days for this technology). I've been using AI on a weekly basis now, and it's changing the way I approach problem solving. I used ChatGPT's multi-modal inputs (voice and video) to help work on an old snow blower, I researched a family trip to Yellowstone (and sent the podcast to my wife), and I'm using AI almost every day in my work as a product design leader. This is one of the most exciting times to be in technology, and the pace at which these innovations are happening is truly stunning.

Have you been experimenting with AI?
What have you been able to accomplish?

Using AI to optimize my snow shoveling was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
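For the simple rectangular-driveway case the article describes (a fixed 2 ft shovel width, total route length, square footage, and walking time), the arithmetic the app performs can be sketched directly. This is a minimal illustration only, not the author's v0 code: it assumes a plain back-and-forth (boustrophedon) pattern and a hypothetical walking speed of 3 ft/s, and ignores the time spent lifting and throwing snow.

```python
import math

def shovel_estimate(length_ft: float, width_ft: float,
                    shovel_ft: float = 2.0, speed_ft_s: float = 3.0) -> dict:
    """Estimate a back-and-forth shoveling route for a rectangular driveway.

    Each pass runs the driveway's full length; passes are laid side by side
    until the width is covered (the last pass may overlap slightly).
    """
    passes = math.ceil(width_ft / shovel_ft)   # lanes needed to cover the width
    route_ft = passes * length_ft              # total walking distance
    area_sqft = length_ft * width_ft           # square footage to clear
    minutes = route_ft / speed_ft_s / 60       # pure walking time at assumed speed
    return {"passes": passes, "route_ft": route_ft,
            "area_sqft": area_sqft, "minutes": round(minutes, 1)}

# A 60 ft x 10 ft driveway: 5 lanes, 300 ft of walking, 600 sq ft
print(shovel_estimate(60, 10))
```

For a long rectangle this straight-lane pattern is hard to beat on walking distance; the interesting part of the author's app is handling arbitrary drawn shapes, where lane orientation starts to matter.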
-
The fastest gun in UX: Why your team is telling the wrong story
uxdesign.cc
Designers are skipping steps of the process in a rush for faster outputs. But the contest that really matters is the race towards stakeholder alignment. Designers are both uniquely vulnerable to losing this race and uniquely positioned to win it.

For several years, Design has been in survival mode. In a post-ZIRP economy where investment is driven by fear, the question of the ROI of design has returned from the grave with slightly different wording. Today's managers don't just want results; they want results fast, and they want to know how UX is going to help them do that. As the lines between Design and Product continue to blur, one camp of designers has hewed to a line that is familiar to any product manager: that the value of design is not in the race to the fastest outputs, but in the marathon towards valuable outcomes. But Product has been escaping the build trap for nearly a decade, and is no closer to making its way out.

Other designers have accepted the challenge and doubled down on shortening their design process. On the surface, this calculus makes sense: if today value means velocity, then the best way for us to demonstrate value is to push outputs out the door as quickly as requests come in. If "ship to learn" is real, then the faster we can ship, the faster we will learn. But somehow, the learning has failed to materialize. Despite what our analytics and user feedback tell us, those requests from upstream never seem to take that data into account. "Build, measure, learn" inevitably stalls at "build, build, build."

The reason this keeps happening is that the feedback loop is broken.
It's being intercepted at its most critical point by one character, a character we'll call the Fastest Gun in the West. The Fastest Gun in the West is the hero of his own story, and wants to be the hero of everyone else's, too.

The Fastest Gun in the West problem

"Impactful design decisions are typically made well above the level of product teams." (Charles Lambdin)

The name comes from the analogous phenomenon on Stack Overflow: the design of the system artificially inflates the salience of the first answer posted, disproportionate to its quality. A better answer posted late is buried under a worse one that has simply had more time to accrue votes. There are Fastest Guns in product orgs as well. But rather than compete for internet points, they race for control of the narrative that frames how the business defines its priorities. The commonly applied mechanisms of annual and quarterly planning only compound the natural anchoring bias of the first idea on the table. What makes its way down the planning funnel is neither the most achievable output nor the most impactful anticipated outcome, but the minimum viable alignment of the decision-makers involved.

At a glance, this problem resembles the classic Waterfall BRD. But the situations couldn't be more different. In fact, Fastest Guns often use the vocabulary of agility and design thinking to paper over complexity and delay hard decisions. Rather than resolving disagreements, the Fastest Gun covers them with a cloud of deliberate ambiguity. He knows that the cloud can't last forever, but it doesn't need to: it's only there until the idea takes root as "the thing we are committed to doing" and is embedded in the roadmap.

This is where the Fastest Gun in the West becomes a UX-specific problem. When the goal is to rush the idea from concept to backlogs as quickly as possible, user research is not just an unnecessary time sink; it's a serious threat.
The Fastest Gun will eagerly attack the idea of research, cultivating a sense of urgency and claiming that it takes too long and that we can defer learning about the problem until after we ship. When designers accept this framing under the pressure of proving their value, they play right into the Fastest Gun's hands.

Potemkin Design

"Leaders want the payoff of experimentation but without the cost of any dead ends." (Scott Berkun)

The Fastest Gun's path to success relies entirely on creating the perception of a fait accompli: that the scrutiny of refinement is not necessary because it has already been completed. The fastest way to do that is to produce outputs that appear indistinguishable from the outputs of a real design process. Rather than do the work, they simply forge the receipts, populating persona and JTBD templates with their own assumptions or LLM-generated drivel. The one thing they can't do on their own is produce high-fidelity mockups. The Fastest Gun is utterly dependent on designers to provide legitimacy to their vision, and will put tremendous pressure on design orgs to sacrifice every scrap of scrutiny and process at the altar of velocity and skip directly to this stage.

But the deadline UX is rushed towards is not for getting working software into the hands of a user. It's to lock in the Fastest Gun's assumptions, drowning stakeholders in trivial detail to avoid pushback on the flimsy premise underneath. A UX design practice that gives in to this working relationship may have a seat at the table, but will have nothing valuable to say.

Party in the front, business-as-usual in the back

Design positioned in this way also takes on the entire risk when the idea underperforms the Fastest Gun's lofty promises. This is because, without the decision-making feedback loops of the design process, UX becomes entirely a delivery function.
And if the vision is sound (after all, the stakeholders signed off on it), then the problems must be with implementation details. This is where the promise of "ship to learn" falls apart. The delivery team indeed learns something from building the product, but the decisions impacted by those learnings are not actually made in the delivery phase. The relationship between the delivery team and the Fastest Gun is not a feedback loop; choices are made based on horse-trading and internal marketing long before anyone on the ground has a say about them.

Fighting fire with design

"You can be efficient or effective. When it comes to innovation, choose effective." (Christina Wodtke)

This is why design's appeals to quality have fallen on deaf ears: in this influence system, quality is entirely beside the point. The fact that we care about quality and the Fastest Gun in the West does not is precisely what makes them the fastest. The answer is not to try to compete on velocity. No matter how many steps we cut from the design process, we will never be faster than someone who shoots from the hip. Instead, we need to engage stakeholders on a higher level: illuminate the target faster, to show how badly their shots are missing the mark. A rational framing of how what we are doing rolls up to what we want to achieve is critical if we hope to be able to say no to low-quality ideas.

Optimizing time to high-fidelity mockups is the wrong strategy because mockups are not the appropriate tool for this. They are a tool for solutions, and at this point, you have not even framed the problem. To beat the Fastest Gun, designers need to engage stakeholders at the level of the mental model: the desired customer behavior change, the outcomes achieved by meeting their needs, and how they impact the business. A UI mockup doesn't carry any of that information, because the UI is not the product.
It cannot tell you whether or not its premise is valid. The lower the fidelity, the fewer distractions from the core value proposition. But low-fidelity tools can; they are designed for that purpose. The customer goal and problem, as well as the implications of the proposed solution framing, can be captured in a scenario storyboard or PRFAQ at the right level of fidelity to invite refinement rather than avoid it. These artifacts give a stakeholder enough clarity to say "no, this would not be a meaningful impact," and create permission to avoid a dead end and choose another direction. If we have done our jobs right, then by the time the Fastest Gun in the West tries to shoot, stakeholders will be able to see that he has missed.

The fastest gun in UX: Why your team is telling the wrong story was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
Human-centered AI: 5 key frameworks for UX designers
uxdesign.cc

Prioritizing user needs and adopting a structured approach to AI initiatives

Illustration by Rob Chappell

2024 was a remarkable year of progress in artificial intelligence, and it's amazing how swiftly this technology has become embedded into both our professional and personal lives. At home, I've enjoyed watching my three young sons immerse themselves in AI in their own playful way. They've been captivated by the latest wave of AI-powered toys, especially our family's new pet robot, Loona, which charms them with almost sci-fi-like conversations, powered by an integration with GPT-4o that was released in May 2024. Loona has sparked curiosity, imaginative human-robot dialogue, and even a few lively sibling debates about how it thinks. Their excitement is a reminder of how AI is already shaping the next generation's relationship with technology: making it personal, engaging, and even relatable.

KEYi Tech's Loona robot (image source: keyirobot.com)

This sense of wonder carried over into the new year for me as 2025 kicked off with an awe-inspiring NVIDIA keynote presentation at CES, showcasing the progress of humanoid robots. NVIDIA CEO Jensen Huang's demonstration highlighted his own fascination with how far robotics and AI have come, and his strong prediction of the rapid pace of change we can expect in the years ahead.

NVIDIA CEO Jensen Huang's CES 2025 keynote, Jan 6, 2025

For UX designers, a similar curiosity is needed in how we approach working with AI. Embracing AI requires us to rethink our processes, understand the technology's underlying systems, and ensure that human values and user needs remain at the heart of what we create. As artificial intelligence becomes the backbone of digital innovation, our role as designers is evolving. We're not just shaping interfaces; we're crafting experiences that merge human-centered principles with entirely new ways of interacting with technology.
This shift demands that we think like technologists, embrace data-driven systems, and bring a user focus to AI initiatives. To guide this transition, leading tech companies and universities offer actionable strategies for human-centered AI. In this post, I'll share UX frameworks from IBM, Google, Microsoft, and Carnegie Mellon University, providing insights and resources for navigating the rapid evolution of AI technologies and tools.

1. IBM's AI/Human Context Model

IBM's AI/Human Context Model stands at the core of its Design for AI practice. This model provides a structured framework to ensure that AI solutions interact seamlessly with users and evolve with user input, while respecting and enhancing the context in which they operate.

Resource: IBM's Design for AI

IBM's AI/Human Context Model is designed to guide the development of AI systems that align with human needs and values. The model breaks down AI-driven experiences into critical considerations, each essential to creating purposeful, context-aware, and human-centric solutions:

Understanding intent: AI systems must prioritize human-centric goals, considering user intent, emotions, and context. The intent represents the foundational purpose of the AI system, encompassing the goals, wants, needs, and values of both users and businesses. It defines the "why" behind the solution and ensures the system is designed with a clear, user-centered purpose.

Data and policy: This refers to the raw data collected from users and the world, alongside the policies that protect and govern its use. Data forms the backbone of AI decision-making, but its collection and handling must adhere to strict ethical and regulatory standards. Context is key to effective AI interaction: IBM stresses the importance of systems understanding situational and environmental factors that influence user behavior.
For example, contextual data such as location, time, or task urgency can help AI provide more personalized and relevant recommendations.

Machine understanding, reasoning, knowledge, and expression: This refers to the AI system's ability to interpret structured and unstructured data within the context of its domain, apply logic to analyze data and decide the best course of action, ensure knowledge repositories are updated dynamically with new insights, and communicate its responses in a way that aligns with the user's context and expectations.

Human reactions and system improvement loop: This emphasizes that AI systems must be designed to work with humans, not just for humans, ensuring a balance between automation and human agency. The user reaction reflects the genuine feedback, explicit or implicit, that users provide in response to the AI system's expression. Learning is emphasized in how the system continuously improves based on user interactions and feedback, enabling it to evolve and better serve its purpose over time.

Evaluating outcomes: This emphasizes that outcomes measure the real-world impact of the AI system, representing how well it addresses user needs and solves problems effectively and ethically.

2. Google's Explainability Rubric

Google's Explainability Rubric provides a clear framework for creating AI systems that are transparent, fair, and user-focused by highlighting 22 key pieces of information to share with users. As AI continues to influence how we work, how we interact with businesses, and even how we express ourselves, ensuring users can understand and trust these systems is crucial.

Resource: Google's Explainability Rubric

The rubric is divided into three levels of information: General, Feature, and Decision.

General level: Provide a high-level overview of how your product or service works, including the role of AI. Explain the primary purpose and benefits of using AI, the business model, and how AI contributes to value creation.
Highlight steps taken to ensure safety, fairness, and transparency, including engaging with communities, addressing bias, and sharing performance information.

Feature level: Detail specific AI-powered features, including how they operate, when AI is active, and user control options. Explain system limitations, human involvement, and personalization options. Provide information about the data used, including training data, external inputs, and how user data is processed and utilized.

Decision level: Clarify how specific AI-driven decisions are made, the system's confidence in its outputs, and how it identifies errors or low-quality results. After decisions are made, provide channels for user feedback, allow contestability, and offer clear communication about errors and repairs.

3. Microsoft's Human-AI Experiences (HAX) Toolkit

Microsoft's HAX Toolkit is a comprehensive framework designed for teams developing user-facing AI products. It helps conceptualize what an AI system will do and how it should behave, making it a useful tool early in the design process.

Resource: Microsoft's HAX Toolkit

The HAX Toolkit is versatile, allowing teams to mix and match its design tools based on their unique needs, use cases, product category, and goals. Key components of the HAX Toolkit include:

Guidelines for Human-AI Interaction: Best practices for designing AI behavior during user interaction. They guide AI product planning to ensure intuitive and effective experiences.

HAX Design Library: A resource hub that explains the Guidelines for Human-AI Interaction with actionable design patterns and real-world examples.

HAX Workbook: A collaborative tool for teams to prioritize which guidelines to implement, fostering focused and efficient design discussions.

HAX Playbook: Specifically tailored for natural language processing (NLP) applications, this playbook identifies common human-AI interaction failures and offers strategies to mitigate them.

4.
HCI Institute's AI Brainstorming Kit

Created by researchers at Carnegie Mellon University's Human-Computer Interaction (HCI) Institute, the AI Brainstorming Kit is designed to distill AI capabilities and help teams explore what to build with AI. Innovation often falters not because of technology, but because teams choose the wrong projects to pursue. The AI Brainstorming Kit addresses this issue, providing a structured approach to designing AI-driven solutions that are both technologically feasible and user-centered.

Resource: HCI Institute's AI Brainstorming Kit

The kit's structured approach reduces the risk of developing irrelevant or unwanted AI solutions. By focusing on both what AI can do and what users need, the kit empowers teams to innovate thoughtfully and effectively. The kit categorizes AI functions into distinct capabilities such as:

Detecting patterns (e.g., identifying faces in images)
Forecasting trends (e.g., predicting stock prices)
Generating content (e.g., creating synthetic images or text)
Automating actions (e.g., executing workflows across different apps)

It provides an overview of 40 real-world AI product examples spanning diverse domains like healthcare, education, and transportation. The kit also includes tools like ideation prompts, impact-effort matrices, and performance-expertise grids to guide users in selecting high-impact, feasible ideas. To use the kit, start by reviewing AI capabilities and examples to inspire your team. Then run structured brainstorming sessions to explore opportunities, refine concepts, and assess potential solutions. This resource is ideal for workshops, organizational strategy sessions, and innovation labs, ensuring that teams design impactful and user-centered AI products.

5.
Google's People + AI Guidebook

Created by People + AI Research (PAIR), a multidisciplinary team at Google, the People + AI Guidebook offers a comprehensive resource of methods, best practices, case studies, and design patterns tailored to help designers, developers, and product teams create impactful AI-driven solutions.

Resource: Google's People + AI Guidebook

The guidebook introduces more than 20 design patterns offering practical, action-oriented guidance for designing AI products. These patterns focus on addressing key challenges in the product development process and are organized around common questions to help teams find relevant insights.

Getting started with human-centered AI (5 patterns): Includes guidance on determining if AI adds value, setting clear user expectations, and explaining product benefits effectively.

Using AI in products (3 patterns): Emphasizes leveraging AI where it excels, balancing automation with user control, and managing precision and recall tradeoffs.

Onboarding users to AI features (4 patterns): Covers anchoring on familiarity, making exploration safe, and providing clear explanations of new features.

Explaining AI to users (5 patterns): Focuses on explaining AI capabilities for understanding, showing model confidence appropriately, and offering deeper, contextual explanations outside of immediate use cases.

Responsible dataset building (6 patterns): Highlights practices like involving domain experts, designing for data labelers, maintaining datasets, and embracing the messiness of real-world data.

Building and calibrating trust (7 patterns): Guides teams on transparency about privacy settings, error accountability, and enabling user feedback and supervision.

Balancing user control and automation (5 patterns): Offers advice on automating progressively, returning control to users when needed, and ensuring automation is safe.

Supporting users during failures (3 patterns): Encourages planning for error resolution and ensuring users can move forward when the
AI system fails.

These five frameworks provide a foundation for designing AI that fits naturally into our daily lives, whether it's a playful, conversational robot toy or an app that keeps us organized and productive. As UX designers, approaching AI with human-centered frameworks means balancing new technical capabilities with responsibility, questioning the readiness and suitability of AI for each use case, and building systems with user feedback loops that drive continuous improvement.

Human-centered AI: 5 key frameworks for UX designers was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
Why TikTok refugees are flocking to Xiaohongshu
uxdesign.cc

This decision is partly UX, partly political in the wake of the proposed TikTok ban.

Continue reading on UX Collective
-
Meta's decision, 4 pillars of content design, GenAI color harmony, quantitative personas
uxdesign.cc

Weekly curated resources for designers, thinkers, and makers.

Dear Mark,

Unfortunately, when you've claimed the word "meta" from our lexicon, and then you announce that you've done a bad job deciding what's right so you've decided to stop deciding what's right, but you get that decision wrong, we're out of words to describe the irony.

Meta: you can't put the toothpaste back in the tube

New year savings: start your product (UX/UI) design career with UX Design Institute [Sponsored] Launch your career as a certified Product (UX/UI) Designer with UX Design Institute's university credit-rated Product Design Programme. Gain in-demand UX/UI skills, build a professional portfolio, and save big with their early bird offer.

Editor picks

Meta and Spotify's AI takeover: Is this the end of human-created content? By Angele Lenglemetz

The obscure side of Honey: Deceptive tricks turned a savings tool into a trust trap. By Marcus Fleckner

Seeing what nobody else can: Understanding competitive advantage. By Helge Tenn

The UX Collective is an independent design publication that elevates unheard design voices and helps designers think more critically about their work.

Stimulation clicker

Make me think

Whither dashboard design? "Every dashboard is a sunk cost. Every dashboard is an answer to some long-forgotten question. Every dashboard is an invitation to pattern-match the past instead of interrogate the present. Every dashboard gives the illusion of correlation. Every dashboard dampens your thinking."

Automated accessibility testing at Slack: "Automated tools can overlook nuanced accessibility issues that require human judgment, such as screen reader usability. Additionally, these tools can also flag issues that don't align with the product's specific design considerations."

Are we at peak shittiness?
"I switched from an Apple Watch to a mechanical watch for that reason (one less battery to charge!), and bought a simple nightstand alarm clock that doesn't need an app and doesn't have a screen, a Wi-Fi connection, or an unremovable battery."

Little gems this week

Human flourishing in the age of AI. By Josh LaMar (He/Him)

The four pillars model of content design. By Andrew Tipp

What are the big opportunities to make an impact in 2025? By Yaron Cohen

Tools and resources

Quantitative personas with latent class analysis: Facilitating the creation of statistical personas. By Talieh Kazemi

GenAI and the tetrad color harmony: Unanimous consensus among three chatbots. By Theresa-Marie Rhyne

The art of storytelling and persuasion: A designer's guide. By Abby Aker

Support the newsletter

If you find our content helpful, here's how you can support us:

Check out this week's sponsor to support their work too
Forward this email to a friend and invite them to subscribe
Sponsor an edition

Meta's decision, 4 pillars of content design, GenAI color harmony, quantitative personas was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
To create more accessible outcomes, we need better design tools
uxdesign.cc

Visual design tools can bias designers toward less accessible results. So how can these tools help improve these outcomes?

Early in my career as a composer, a teacher once gave me an invaluable piece of advice:

It's essential to learn how a digital tool wants you to think, so that you can free yourself of its bias.

This came up in conversation after I had willingly given up my digital notation tool for pen and paper, and how that choice had transformed my creative output. I think about this piece of advice often as a designer: how the tools we use put pressure on us to deliver a specific result, and how much the defaults of a tool can skew or narrow our process without our awareness.

Design tools, for example, have come a long way in the last 40 years, from revolutionizing the way we work, to enabling millions to become more creative, to improving the craft year over year. But in all that time they *still* primarily focus on visual artifacts. And while understandable, this focus can have an effect on a designer's output.

In the early 1970s, Amos Tversky and Daniel Kahneman published a series of landmark papers exploring judgment under uncertainty. One thing they discovered was how knowledge or information easily available to a person can bias their thoughts and actions toward those things, known as the availability heuristic. Studies related to knowledge and decision-making have also shown that people with less knowledge and fewer cues are more likely to make poorer decisions.
And this raises an important question about digital tools: in the context of a design tool like Figma, how much does the lack of an option influence a designer into producing a certain result?

Enter accessibility.

Knowledge, and the unseen needs of accessibility

When building an accessible product, there are a significant number of details that are inherently non-visual, things modern design tools aren't built to handle:

The name or semantics of something in context.
Meta-information about the page.
Or the many ways people can interact with an element.

When a design tool doesn't help a user manage one of these details, designers need to rely on their own knowledge or a support system (like co-design activities) to ensure those details are covered. And this comes with a fair bit of risk if that knowledge is missing. As Richard Larrick and Daniel Feiler note in Expertise in Decision Making:

domain ignorance [] leaves the decision maker blind to important interactions among factors that may be obvious to an individual experienced in that domain.

And this supports the hypothesis that direct support from a design tool could improve a contributor's output. In fact, they state this even more directly:

Distilling expert knowledge into a decision support system can dramatically improve experts' consistency.

So, let's talk about how we could do that. Before continuing, I should clarify that I am specifically talking about tools designed to help build digital products, like Figma, Sketch, Penpot, and others.

Various ways design tools impede accessibility

There are many different ways that accessibility can be impaired in a visual design tool. The following are a few major categories. The most practical challenge is that non-visual information is often not supported by a tool (or only partially).
This both adds difficulty to making something accessible and may force guesses if the right information, knowledge, or process isn't in place. The nature of a visual communication tool also encourages designers to focus on this aspect, and may blind them to details that fall outside this medium. Other kinds of influence, like a lack of time, can further encourage this narrowing.

Additionally, many tools have made efforts to bridge the gap between design and engineering. But the code that's produced often lacks context or specificity, and as a result isn't usable until it has been edited by a human; and that's not always possible.

There are plenty of tool-specific examples as well. As much as I love Figma's Community, it has allowed Figma to offload what are ostensibly central challenges of a product's experience for users to solve. This has the potential to signal that a topic isn't actually that important, or to make it invisible to a designer because of how it's positioned. And I'm certain Figma is aware of this, as they've done a number of things to close a few gaps, like introducing Dev Mode, or committing to initiatives like their Resource Library or Shortcut.

Ways our current tools can improve the practice

When I began exploring this challenge, there were three questions on my mind about how design tools could help accessibility work:

What decision support systems do we need to provide the most help?
Where is the most impactful place in the user journey to provide these systems?
How do we provide the right breadcrumbs to expose and promote the use of these systems?

But before answering any of them, we need to talk about limitations. There are certain kinds of accessibility challenges that design tools are incapable of measuring: for example, the level of anxiety that an experience might place on someone with ADHD. As a result, this leaves more deterministic challenges for us to consider.
Things like:

what something is called or how it's described,
properties or data about something,
meta relationships,
interactivity and behaviors, or
other details that can easily be measured.

And these challenges directly relate to the first question about decision support systems. A decision support system is a cognitive aid that can be used to help improve the consistency and outcome of a process. A shopping list is a great example of one. In the context of a design tool, the underlying goals for these systems are still the same: they should help designers be reminded of, and build consistency with, an accessible outcome, like making sure that structural landmarks are identified.

But what might these systems look like? And where do we place them to have the most impact? There are many different possibilities depending on which design tool is being considered, so I'd like to simplify the remainder of this topic by focusing on Figma specifically.

Systems to improve accessibility in Figma

Writing can only do so much, so I built a prototype that you can experience yourself to get a better sense of how these new systems might work.

Props, data, and non-visual information

We've already noted that a great deal of information about a product is non-visual. Figma partially supports this challenge with Dev Mode and Annotations. However, one major shortcoming is that the nature of an Annotation reinforces the idea that this information is not an intrinsic part of the product. Given this and other weaknesses that exist, a more effective approach would be to build a dedicated tab in the properties panel (in design mode) that helps users include many different kinds of non-visual data.
And this approach provides a lot of benefits:

Information is highly discoverable as a direct part of the design experience.
Deterministic properties help users in their learning process.
A dedicated tab provides better clarity with groups of related information.
It enables simpler and more robust interactions around managing this information.

More practically, users can leverage these new tools to capture different kinds of important information, such as applying element semantics, giving a layer the proper name, or adding properties to help describe the functionality and behavior of something. There's an even bigger benefit to this approach too. By directly supporting real properties, Figma would be able to do a lot more, such as:

articulate this information directly in code;
provide logic for when properties conflict or include additional effects; and even
help automate certain kinds of decision making (assigning the button element to a component named "button").

And testing this new approach with designers showed that, even with the complexity of this additional data, these changes both helped designers be successful and had the potential to greatly improve collaboration with developers.

Challenges with this approach

As much as I like this approach, it does come with a few big challenges. Shifting the entire focus of Figma so that design mode is more holistic:

increases the risk of this process being left undone, as information could be pulled away from people with more domain expertise (developers);
places more emphasis on the challenge of how to better integrate developers into the design space; and
has the potential to place a lot of pressure on designers to learn how to code.

And testing echoed some of these concerns as well.
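As an aside, the automation idea mentioned above (assigning the button element to a component named "button") could be sketched as a simple name-to-semantics heuristic. This is an illustrative Python sketch only, not a real Figma API: the `Layer` type and the keyword table are my own assumptions, and the role names follow WAI-ARIA.

```python
# Hypothetical sketch: inferring a semantic (ARIA) role from a layer's name.
# The Layer type and keyword table are illustrative assumptions, not Figma's API.
from dataclasses import dataclass
from typing import Optional

# Naive keyword-to-role table; real tooling would need far more care
# (e.g. "tab" also matches "table", so order and word boundaries matter).
ROLE_KEYWORDS = {
    "button": "button",
    "btn": "button",
    "nav": "navigation",
    "header": "banner",
    "footer": "contentinfo",
    "search": "search",
    "checkbox": "checkbox",
}

@dataclass
class Layer:
    name: str
    role: Optional[str] = None  # an explicit role set by the designer wins

def infer_role(layer: Layer) -> Optional[str]:
    """Return the explicit role if present, else guess one from the layer name."""
    if layer.role:
        return layer.role
    lowered = layer.name.lower()
    for keyword, role in ROLE_KEYWORDS.items():
        if keyword in lowered:
            return role
    return None  # no guess: leave the decision to the designer

print(infer_role(Layer("Primary Button / Large")))   # -> button
print(infer_role(Layer("Hero image", role="img")))   # -> img
print(infer_role(Layer("Rectangle 12")))             # -> None
```

The point of the sketch is the fallback order: an explicit, designer-set property always beats the heuristic, which matches the article's emphasis on keeping the human in charge of the decision.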
In particular, this approach does not entirely solve the literacy problem that exists, as many designers are unfamiliar with what a combobox is, or when it's appropriate to apply a landmark role.

Simulating the human experience

Another big area of accessibility work is attending to the different ways people actually experience a product: from their ability to capture stimuli, to processing information, to how they interact with a product. A lot has been done to address these experiences with tools for color contrast, vision impairments, and other needs. But these tools only partially address this challenge and always add some amount of negative friction to this kind of task. So how could Figma better support designers here?

Extending the modality of Figma to support these kinds of variations in the human experience is one way to approach this problem. And a dedicated mode for this has a few important benefits. First, it allows designers to explore many different experiences, such as a person who has a vision impairment like cataracts, or who uses their fingers to interact with the product. As an example, designers could easily simulate a color deficiency like protanopia to see how a person with red-green colorblindness might experience the product.

The other big benefit is that a dedicated mode enables a more robust experience around measurements. Tools could be configured to use specific rules, and these measurements could also facilitate more learning opportunities with the criteria they are using. It would even be possible to measure specific situations, like contrast under a specific type of colorblindness.

Every designer I worked with was excited about this new simulation mode in testing, with sentiments ranging from "really useful" to "simply phenomenal." And this feedback was really encouraging.

Prototyping and better interaction support

There's one last situation I'd like to cover.
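Before moving on: the contrast measurements such a simulation mode would rely on are well defined. As a minimal sketch, the WCAG 2.x contrast-ratio computation (relative luminance of sRGB colors, then a ratio between the lighter and darker value) can be written as follows; the function names are my own.

```python
# A sketch of the WCAG 2.x contrast-ratio computation that measurement
# tooling like this builds on. Formulas follow the WCAG definitions of
# relative luminance and contrast ratio; function names are my own.

def _linearize(channel: float) -> float:
    """Convert an sRGB channel value in [0, 1] to linear light, per WCAG 2.x."""
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    """Relative luminance of an (R, G, B) color with 0-255 channels."""
    r, g, b = (_linearize(c / 255) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """Contrast ratio (1:1 to 21:1), order-independent."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # -> 21.0
```

Black on white yields the maximum ratio of 21:1; WCAG AA's 4.5:1 threshold for normal text is then a simple comparison against this value, which is exactly the kind of deterministic check a design tool could run continuously.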
This last situation is an interesting one, because on the surface the solution doesn't look like an accessibility-focused improvement. One of the biggest challenges in accessibility is designing a great interactive experience for everyone, especially for those using a keyboard. Thankfully, Figma already has a partial solution to this challenge with its prototyping features. And this presents a great opportunity to improve two things at once: creating more accessible outcomes by helping designers build better prototypes. There are some challenges to address, however.

The biggest challenge is with how Figma's prototypes currently work. A few barriers are that:

when keys are selected as a trigger, only a single key can be assigned;
only a small number of trigger options are available; and
the concept of focus does not exist.

In addition, restricting actions to a single trigger adds unnecessary friction to building complex behaviors, and addressing this workflow improvement first would improve the overall experience. The first adjustment lets a designer include multiple keys for an action, as there are many situations where a collection of keys are equally valid triggers. The biggest change, however, is in the types of triggers designers can choose. By changing them to match real events, not only do many more kinds of prototyping behaviors become available, but accessibility needs (for multi-modal experiences) can also be met.

The last change is a small but impactful one. Prototypes in Figma currently support keyboard use with screen readers. And while helpful, in practice this approach creates some awkwardness, as the concept of focus doesn't really exist. That's a crucial piece that's missing. Thankfully, this can easily be remedied by giving designers the ability to mark specific layers as focusable and letting actions pass focus to those layers.
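To make the focus idea concrete, here is a toy model of what focusable layers could give a prototype: Tab and Shift+Tab cycle through the layers a designer has marked focusable, and an action can pass focus explicitly. All names are illustrative assumptions, not Figma's actual behavior or API.

```python
# Toy model of prototype focus management: layers flagged focusable form a
# tab order, and actions can pass focus explicitly. Illustrative only.

class FocusModel:
    def __init__(self, layers):
        # layers is a list of (name, focusable) pairs in document order;
        # only focusable layers participate in the tab order.
        self.order = [name for name, focusable in layers if focusable]
        self.current = self.order[0] if self.order else None

    def tab(self, backwards: bool = False) -> str:
        """Move focus to the next (or previous) focusable layer, wrapping around."""
        i = self.order.index(self.current)
        step = -1 if backwards else 1
        self.current = self.order[(i + step) % len(self.order)]
        return self.current

    def pass_focus(self, name: str) -> str:
        """Let a prototype action hand focus to a specific focusable layer."""
        if name not in self.order:
            raise ValueError(f"{name} is not marked focusable")
        self.current = name
        return self.current

proto = FocusModel([("Search", True), ("Hero image", False), ("Submit", True), ("Cancel", True)])
print(proto.tab())                 # -> Submit (Hero image is skipped)
print(proto.pass_focus("Cancel"))  # -> Cancel
```

Even this tiny model shows why focus matters as a design property: which layers are focusable, and in what order, is information a prototype needs to behave like a real product under keyboard navigation.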
And this would allow prototypes to behave a lot more like a real product. Earlier, when talking about non-visual information, I also included a section for focus that I didn't note. I did so because not only can focus be a real design property affected by other properties, but it also helps set the foundation for essential navigational behaviors.

All in all, these changes greatly improve the kinds of prototypes and accessible behaviors that designers can create. They also help Figma become a much better source of truth by capturing and communicating this information in a measurable way. And that's a big opportunity for Figma to then translate these behaviors into real, accessible code for developers to build from. Many of the designers I worked with echoed these sentiments.

Real value exists with direct support

During this project I worked with a number of designers with different levels of accessibility experience to help build more confidence in this hypothesis. The feedback they gave in testing the prototype I built was very positive overall: from the improved support and capabilities designers would gain, to the collaboration improvements, to how interested they were in using these tools in their work. I need to take a moment to thank them for their time and feedback on this project, as it was invaluable.

I strongly believe that a great accessibility experience is only achievable when it's a natural part of the design process.
And I hope that this exploration has helped persuade you that there's real value in building a more holistic and accessible product tool. If "distilling expert knowledge into a decision support system can dramatically improve experts' consistency," as Larrick and Feiler note, then I'm hopeful about the impact these adjustments could have on accessibility work overall.

Resources

Judgement under Uncertainty, Tversky, Kahneman
Availability: A heuristic for judging frequency and probability, Tversky, Kahneman
Expertise in Decision Making, Larrick, Feiler
The WebAIM Million, WebAIM
94% of the Largest E-Commerce Sites Are Not Accessibility Compliant, Baymard Institute

To create more accessible outcomes, we need better design tools was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
Human flourishing in the Age of AI
uxdesign.cc

TECHNOLOGY & CULTURE

Challenges, strategies, and opportunities.

Credit: Marketoonist

The explosion of AI over the past two years, particularly generative AI and large language models (LLMs), has reshaped much of how we work and think about technology. For user researchers and designers, AI's impact can be grouped into three broad areas:

Generative AI product development: The topics and challenges explored in research and design projects as we seek to best implement AI into new products and services.

Internal processes: Systems and workflows within the workplace leveraging AI to enhance efficiency and insight as we work together to build AI-leveraged products and services.

Personal practices and training: Individual use of AI to augment skills and productivity.

While AI offers immense potential to accelerate and enhance product design, it's critical to approach it with a balanced perspective. Alongside its benefits, AI presents potential harms and negative impacts that demand careful consideration. As this transition happens, many of us are asking ourselves how we can approach AI in a human-centered way.
Some of the key questions to guide responsible AI integration include:

How can organizations advocate for the development of AI technologies that prioritize human well-being in their products and services?
What strategies ensure AI can be leveraged responsibly while centering human needs during research, analysis, design, and the software development process?
How can authentic human experiences be validated within a landscape increasingly shaped by artificial content?
What day-to-day practices can researchers and designers adopt to maintain a balanced, human-centered approach to AI?

And most importantly: how can researchers and designers learn, grow, and adapt while AI technology is evolving faster and faster?

In this article, I will attempt to answer these questions by revisiting foundational concepts of human flourishing, reflecting on organizational values, and synthesizing diverse perspectives from professional communities and academic literature. The goal is to help us all develop best practices for AI use that align with both ethical standards and business goals. And keep the human centered in the loop.

A human-first approach

Adopting a human-first approach provides a strong foundation for ethical and effective AI use. At its core, this philosophy emphasizes serving and empowering people through empathy, authenticity, and a commitment to collective well-being. I think of human-first as humans serving humans. Every decision, every interaction is grounded in empathy, authenticity, and the acknowledgment of our collective humanity. We take care of ourselves and keep the health of others in mind.
We create space when needed.

Taking the core concepts of human flourishing as identified by academics and applied practitioners (see the Appendix below for a complete list of frameworks), practitioners and organizations that embrace a human-first mindset define their values around these core principles:

Purpose and Contribution: Supporting work that feels meaningful and impactful.
Personal Growth and Agency: Encouraging self-determination and skill development.
Holistic Well-Being: Addressing physical, mental, social, and other dimensions of health.
Ethical Living: Ensuring actions align with moral values and promote harmony.

A holistic view of human flourishing considers the individual, structural, systemic, and environmental levels to create sustainable, people-centered solutions. Interdisciplinary and cross-cultural frameworks (e.g., the Social Ecological Framework, The Ecology of Wellbeing, and Measuring Flourishing | Harvard) can be applied to aid us in decision-making.

By rooting AI practices in these values, researchers, designers, and organizations can better navigate challenges to human flourishing in research and design.
This foundation sets the stage for addressing specific AI-related concerns while advancing the shared goal of creating technologies that truly serve humanity. It is in this context that we next identify AI-related challenges to human flourishing in the context of UX research and design.

AI challenges to human flourishing and lessons learned

The more I experiment with AI tools while also conducting research and consulting with companies building generative AI-based tools, the more I can see how AI's limitations can detract from human flourishing. While societal concerns surrounding AI are vast and complex (see AI Risks in the Appendix below), this discussion focuses on the challenges most relevant to day-to-day user experience (UX) and design research work. Through personal (and team) experience and a review of articles and perspectives, we have identified four key characteristics of AI as particularly threatening to human flourishing.

1. Oversimplification

AI's limited ability to detect and adapt to complex, changing contexts can lead to oversimplification of human behavior and realities, culturally or situationally inappropriate insights and output, and perpetuation of systemic inequities and injustices.

Frequent users of AI tools have likely had an experience with AI output that isn't quite what you were looking for. In those moments, it feels like the AI isn't gleaning the intent behind your question. Sometimes even the most well-crafted prompt isn't enough to overcome this barrier.

AI models often struggle to fully understand and adapt to contextual nuances, particularly in complex or dynamic environments where human interpretation is key.
It falls short in fully grasping the subtleties of human communication, including tone, cultural references, and implied meanings, as well as sussing out potential motivations or explanations for human behavior and phenomena (What Are the Limitations of AI in Understanding Context in Text?, Space Coast Daily, and The Context Problem in Artificial Intelligence, Communications of the ACM). Because it relies on predefined rules and training data, it can fail to factor in social, cultural, systemic, and environmental influences outside its purview.

These characteristics limit AI's responsible use in research across contexts, cultures, and novel situations, and in tasks that require high-stakes or creative problem-solving. Research and design often involve making sense of a complex interplay of user and contextual factors that combine to drive behavior or shape an experience, and it seems that AI tools are not yet advanced enough to appropriately capture and process this complexity.

Not all generative AI tools are the same, and awareness of each product's context window ensures we're realistic about the extent to which the tool can help us center humanness in research and design (What is a context window?).

2. Propensity to generalize

AI's tendency to summarize and generalize poses a risk to the representation of diverse human experiences and inclusivity.
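This summarize-and-generalize failure mode is easy to demonstrate with a toy example. The Python sketch below uses invented survey responses to show how a summary built from the most frequent theme silently discards minority perspectives; nothing here is specific to any AI product, it simply makes visible what any frequency-driven summary leaves on the floor:

```python
from collections import Counter

# Toy illustration of the generalization risk: a summary built from the
# most common theme erases minority perspectives. Responses are invented.
responses = [
    "checkout was easy", "checkout was easy", "checkout was easy",
    "checkout was easy", "screen reader could not find the pay button",
]

themes = Counter(responses)
summary = themes.most_common(1)[0][0]          # what a naive summary keeps
dropped = [r for r in themes if r != summary]  # what it silently discards

print("Summary:", summary)
print("Lost perspectives:", dropped)
```

The one response about the screen reader never makes it into the "summary", even though it may be the single most important finding in the dataset.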
While AI excels at processing large datasets to identify common patterns and deliver efficient summaries, this strength can become a limitation when it oversimplifies nuanced perspectives or excludes less common experiences. For example, AI-powered search engines highlight the most popular answers but may exclude context-specific insights, leading to incomplete or biased conclusions (The AI Summarization Dilemma: When Good Enough Isn't Enough, Center for Advancing Safety of Machine Intelligence, Northwestern University, and AI's dark secret: It's rolling back progress on equality, Context).

Similarly, relying solely on AI for research analysis can result in a surface-level understanding of the average. AI analysis may mean the full spectrum of participant responses is never considered, ignoring minority perspectives and creating exclusive products or experiences. There are times in analysis when you want to understand the broad themes, but there are also times when it's important to understand individual nuances.

These limitations are particularly dangerous when researching diverse populations or designing solutions that require high degrees of sensitivity and nuance. Indeed, as we've experimented with AI tools for research and analysis, we have found outputs to be inadequate and potentially misleading, and we find ourselves needing to reintegrate human subtleties and dive deeper into oversimplified insights.

3.
Lack of transparency

AI tools' lack of transparency and data privacy guardrails can infringe on our basic human right to privacy and decrease our sense of agency to choose our relationship with technology.

Despite efforts to improve transparency and develop privacy-centric AI, using AI often still feels like working with a black box, with users still missing a deep understanding of how AI processes their data and clear, succinct explanations of privacy practices (We Must Fix the Lack of Transparency Around the Data Used to Train Foundation Models, Special Issue 5: Grappling With the Generative AI Revolution, and Transparency is sorely lacking amid growing AI interest, ZDNET).

This has implications as we use AI to collect and analyze data from people and as we try to develop AI-powered products that promote user consent, agency, and empowerment. As researchers and designers, we have a duty to protect the personally identifiable information (PII) of research participants and the intellectual property of our clients. We have a responsibility to ensure that our research participants have the power to consent to how their data is used, and to our consumers to create products and experiences that do not lead to privacy breaches and data exploitation. In our experience, when using popular AI tools to their full functionality, we cannot guarantee those protections will be upheld.

4.
AI isn't aware of its own bias

AI's tendency to reproduce bias and generate inaccurate output can exacerbate existing social inequalities and create threats to informed decision-making. For example, the UK passport photo checker showed bias against women and darker-skinned people: https://www.bbc.co.uk/news/technology-54349538.amp

The tendency of AI tools to perpetuate and exacerbate human biases present in their training data is probably the most commonly discussed threat of AI, so we won't discuss this issue in depth here (Battling Bias in AI). Bias can lead to discriminatory experiences for research participants, skewed insights, a narrowed scope of potential design directions, and designs that cater to hegemonic identities and majority user groups.

Hallucinations: Beyond biased output, there is the potential for hallucinations, which produce nonexistent or inaccurate outputs (When AI Gets It Wrong: Addressing AI Hallucinations and Bias, MIT Sloan Teaching & Learning Technologies). This misinformation could affect research and product decisions in major ways.

In another example, Air Canada's chatbot lied to a passenger about bereavement fares, and the customer later won the case:

The passenger claimed to have been misled on the airline's rules for bereavement fares when the chatbot hallucinated an answer inconsistent with airline policy. The Tribunal in Canada's small claims court found the passenger was right and awarded them $812.02 in damages and court fees; the court found Air Canada failed to explain why the passenger should not trust information provided on its website by its chatbot. Source: Forbes

While being aware that generative AI products can produce biased or inaccurate information is a good first step, we feel there is still an unmet need for transparency, diversification of training datasets, and extensive training on critical evaluation of AI output.
AI must be leveraged judiciously and always in service of human-centered needs.

Addressing threats to human flourishing

As we advance our use of AI, we must remain committed to prioritizing the human experience and fostering the well-being of our colleagues, research participants, clients, and the customers who use the products we help create. Technology should contribute to the well-being, growth, and fulfillment of people and their communities. To address the four challenges discussed above, here are four strategies to combat the limitations.

Strategy #1: AI as a complement, not a replacement

AI has proven to be a powerful tool in research, but its greatest potential lies in complementing, not replacing, human expertise. Understanding where AI excels and where humans bring unique value allows us to strike the right balance.

Where AI shines:

Processing large data sets: AI's computational power allows it to analyze vast amounts of data far faster than humans, making it an indispensable tool for pattern recognition and large-scale analysis.
Generating initial ideas: AI is excellent at sparking brainstorming by presenting diverse possibilities, which can help overcome creative blocks.
Recognizing patterns: AI's pattern-recognition capabilities are unmatched for identifying trends and correlations across datasets.

Where humans shine:

Empathy and connection: These are foundational to qualitative research.
Building trust, reading body language, and engaging authentically are uniquely human abilities that technology cannot replicate.
Understanding complex contexts: Humans excel at synthesizing subtle, multifaceted information that may not fit neatly into patterns.
Ethical and contextual judgment: Humans bring cultural and moral considerations into decision-making, ensuring sensitivity and appropriateness.
Unique insights: The creativity and contextual understanding required for truly novel insights remain human strengths.

Striking the right balance

Data collection: AI can enhance efficiency in data collection when used intentionally. For example, it can assist with participant screening during recruitment, but researcher oversight ensures quality and appropriateness. Human moderation remains indispensable for creating connection, fostering empathy, and understanding participants deeply. While AI moderation is effective for executing qualitative research quickly and at scale (Accelerating Research with AI), it cannot replicate the depth of human engagement.

Data analysis: In analysis, AI can be valuable for identifying major themes and aiding qualitative data coding, giving researchers a head start. Transcriptions alone are a great start. When it comes to summaries, most tools I've tried are merely OK, but the future promise is there.

Examples of transcription and summaries from Dovetail. Source: NN/g

However, interpreting participant behavior, understanding nuances in communication, and recognizing diverse perspectives still rely greatly on human expertise. AI serves as a tool for initial synthesis and as a point of comparison, but humans are indispensable in making sense of the human experience.

Data generation: Using AI to generate qualitative data, such as having AI simulate human responses, can jeopardize the integrity of research by misrepresenting authentic experiences. That said, there are cases where AI-generated responses can enhance research outcomes.
For instance, immersive AI avatars have been used effectively in healthcare provider (HCP) market research to elevate engagement and provide richer insights, offering a viable alternative in specific contexts (How we elevated HCP market research engagement and insights using AI avatars for an immersive experience, Research Partnership).

By leveraging AI as a complement to human expertise, we can enhance efficiency and scalability without compromising the depth and integrity of research. The key is intentionality: using AI where it excels while relying on human strengths to truly understand and connect with people.

Strategy #2: Contextually and culturally aware implementation

Human diversity is central to effective cross-cultural research and design, and understanding the differences between individuals, their daily contexts, and broader sociocultural environments is key to generating meaningful insights. This same principle applies to the thoughtful integration of AI into practices and workflows.

AI implementation should be deliberate and context-sensitive, with careful consideration of when and how AI is the right tool for the task. Context plays a pivotal role in determining whether AI enhances or detracts from the goals of a given project. Tailoring AI strategies to align with cultural nuances, environmental factors, and user needs ensures that technology complements, rather than complicates, the work at hand.
For example:

Building rapport: If establishing trust and encouraging participants to open up about sensitive topics is essential, AI may not be the best fit.
Anonymity preferences: In contrast, participants may prefer the perceived neutrality and anonymity of an AI moderator when discussing highly personal or taboo subjects.
Cultural perceptions: In Western Europe, heightened concerns about AI and data privacy influence how AI is received and used, requiring careful consideration of tools and methods (Will the EU AI Act work? Lessons learned from past legislative initiatives, future challenges, IAPP, and How concerned are Europeans about their personal data online?, European Union Agency for Fundamental Rights).
Social dynamics: In Brazil, where authentic social connections are highly valued, human-to-human interaction may be preferred for meaningful engagement (How to Apply Cultural Knowledge in Your Brazilian Localization Strategy).
Research goals: For tactical questions or high-level sentiment analysis, AI can effectively identify trends and major pain points. For deeper explorations of complex motivations or mental models, human-led research is often more appropriate.

When implementing AI, it's essential to stay well informed about each tool's capabilities and limitations, including its context window and potential blind spots. Organizations designing AI products should prioritize localization and enhanced context sensitivity to ensure these tools address diverse human needs effectively. By thoughtfully balancing human expertise with AI-driven methods, it's possible to create solutions that honor cultural uniqueness while leveraging technology to deepen understanding and foster meaningful connections.

Strategy #3: Privacy and consent practices

Effective AI implementation requires balancing innovation with robust privacy and consent practices.
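One concrete safeguard in this spirit is scrubbing transcripts before they ever reach a third-party tool. The Python sketch below is a deliberately minimal first pass; the regex patterns are illustrative assumptions that would miss names, addresses, and context clues, so real anonymization requires far more than this:

```python
import re

# Minimal first-pass PII scrub before a transcript is sent to an external
# AI tool. The patterns are illustrative assumptions, not a complete
# anonymization solution (names and context clues are not handled).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 010-2368."))
```

In practice you would layer named-entity redaction and human review on top, and keep the raw transcript in access-controlled storage.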
Popular AI platforms often retain data input to train their tools, raising concerns about confidentiality and data security. Zoom subtly updated its terms of service in March 2023, leading to a backlash and then backpedaling and clarification in August:

Thread source on X

To address these risks, organizations should establish clear policies to safeguard sensitive information, including personally identifiable information (PII) and proprietary data (Can GPT-4o Be Trusted With Your Private Data?, WIRED). These policies should be shared openly and in advance. Practices like anonymization and secure data storage can help minimize risks from the outset. For organizations seeking greater control, developing proprietary AI models is an option worth exploring.

Transparency is a cornerstone of effective privacy and consent practices. Providing research participants with detailed information about AI use in consent forms and participation materials enables them to make fully informed decisions about how their data is handled. Encouraging team members to share questions or concerns about AI tools fosters a culture of open dialogue and ethical accountability, ensuring that privacy practices stay aligned with both internal values and external expectations.

Additionally, applying user experience (UX) and human-centered design principles to AI technologies can make privacy and security features more transparent, accessible, and empowering. This ensures that consent goes beyond a checkbox to become a meaningful and informed part of the user experience (The AI Consent Conundrum: Do We Truly Understand What We Agree To?
by Neria Sebastien, EdD, Medium). By adopting these strategies, organizations can align their AI practices with both ethical standards and user expectations, creating tools and systems that promote trust and human flourishing.

Strategy #4: Ongoing AI training and discussion

As AI evolves rapidly, staying informed, critically evaluating its capabilities, and understanding its impact are essential for leveraging its full potential. A team-based approach to AI training encourages shared learning and open discussion of its possibilities and limitations. This not only helps refine policies and address concerns but also fosters innovation as the technology progresses.

Effective AI strategies involve tackling key topics such as maintaining non-disclosure and data privacy requirements while using AI, reviewing outputs to identify and mitigate bias or misinformation, and finding ways to enhance efficiency and effectiveness. These conversations are vital for ensuring that AI is used responsibly and productively. A human-first philosophy should guide these efforts.

Organizations should regularly assess AI's impact not only on participants, consumers, and clients but also on internal teams. The aim is to ensure AI supports meaningful work, allowing people to build new skills, refine creative and critical thinking, and stay engaged in tasks that are both purposeful and impactful. AI should empower teams to feel more efficient and effective while safeguarding their sense of purpose (Finding Meaningful Work in the Age of AI, LinkedIn).

AI training and policies must remain flexible and adaptable. As technology evolves or reveals limitations, organizations should be prepared to recalibrate their approach, ensuring that human values remain at the center of innovation.
By embracing this mindset, businesses can harness AI's potential while ensuring it serves people first.

AI and opportunities to promote flourishing

While this article has primarily focused on the ways AI challenges human flourishing and the strategies we, as researchers and designers, use to mitigate these risks, it's equally important to recognize AI's potential to promote flourishing. When developed and applied with the specific aim of enhancing human lives, AI can paradoxically address even those areas where it poses the greatest risks, transforming them into opportunities for growth and well-being. Here are a few ways we're excited about AI contributing to human flourishing:

Inclusive and accessible products: AI has the power to make products more inclusive and accessible by collaborating with diverse users and understanding their needs. When designed thoughtfully, AI can personalize experiences to adapt to individual abilities, preferences, and identities (How Artificial General Intelligence Could Redefine Accessibility). For instance, AI-powered voice assistants can be trained to recognize diverse speech patterns, accents, and variations, breaking down communication barriers and fostering a sense of belonging for all users (Voice-activated Devices: AI's Epic Role in Speech Recognition).

Automating low-level tasks and assisting with complex ones: AI can strategically automate repetitive and unfulfilling tasks, freeing people to focus on creative, meaningful, or strategic activities. By reducing human error and alleviating mental and physical stress, AI helps protect our sense of purpose and enhances productivity (The Ultimate Guide To Using (or Avoiding) AI At Work). Conversely, AI can also act as a creative assistant for more complex, cognitively demanding tasks, such as brainstorming, design, writing, and art creation.
By broadening our thinking and inspiring new possibilities, AI supports higher-level cognitive work and innovation (Creativity was another of ChatGPT's conquests. Here's why it's more computable than we think, by Paul Pallaghy, PhD, Medium).

Insights for positive behavior change: AI-powered analytics can identify patterns in behavior and generate actionable insights to encourage positive changes. For example, these insights can help improve products designed for health and education, empowering individuals to achieve their goals more effectively and efficiently. (How are Machine Learning and Artificial Intelligence Used in Digital Behavior Change Interventions? A Scoping Review, Mayo Clinic Proceedings: Digital Health, and CSRWire: A Bridge to Success: Using AI To Raise the Bar in Special Education.)

Enhanced data privacy and security: AI has the potential to improve data privacy and security through advanced capabilities such as anomaly detection, encryption, and access control management. Technologies like differential privacy and federated learning allow valuable insights to be drawn from data while maintaining safeguards to protect sensitive information. These tools, when implemented conscientiously, can create systems that prioritize the privacy and security of research participants and clients. (Generative AI & Data Security: 5 Ways to Boost Cybersecurity, BigID; Are Data Privacy And Generative AI Mutually Exclusive?; What is federated learning?, IBM Research.)

However, it's important to acknowledge the inherent risks and challenges. The data-hungry nature of AI training often incentivizes excessive data collection, which can conflict with privacy objectives. Additionally, the complexity of AI systems sometimes makes it difficult to ensure that privacy protections are upheld consistently across applications.
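To make the differential-privacy idea above concrete, here is a minimal Python sketch of the Laplace mechanism: a count is published with calibrated noise so that no single participant's answer can be confidently inferred. The epsilon value and survey data are illustrative assumptions, and production systems use vetted libraries rather than hand-rolled noise:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(values, epsilon: float = 1.0) -> float:
    """True count of positive answers plus noise scaled to sensitivity 1.

    Smaller epsilon means stronger privacy but a noisier published count.
    """
    true_count = sum(1 for v in values if v)
    return true_count + laplace_noise(1.0 / epsilon)

answers = [True, False, True, True, False, True]  # hypothetical survey data
print(round(private_count(answers, epsilon=1.0), 2))
```

The design trade-off is exactly the one the surrounding text describes: the noise that protects individuals also blurs the aggregate, so epsilon must be chosen deliberately rather than defaulted.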
As a result, the risks associated with AI's use in privacy-sensitive contexts often outweigh the potential benefits unless organizations approach implementation with exceptional care and transparency. This dual perspective highlights the need for cautious optimism. While AI can enhance privacy in theory, realizing these benefits in practice requires prioritizing ethical design, robust regulation, and a commitment to limiting data use to what is strictly necessary. By balancing these considerations, organizations can mitigate risks and responsibly explore AI's potential for improving data security.

Checking bias: AI can act as a gut check or an additional data point to help illuminate biases or blind spots in human decision-making when it is developed to be inclusive and address bias from the start. When trained on diverse datasets, AI tools can provide thoughtful recommendations, offering value in contexts ranging from product development to broader decision-making processes. (Can the Bias in Algorithms Help Us See Our Own?, The Brink, Boston University, and How AI can end bias, SAP.)

Bridging cultural divides: While AI still has a long way to go in context sensitivity, its capabilities in real-time language translation and diverse content promotion are already helping bridge cultural and community barriers. For example, AI can enable more inclusive international research and create richer digital experiences that celebrate global diversity. By intentionally designing AI to prioritize accessibility, security, and cultural sensitivity, we can harness its immense potential to foster connection, creativity, and well-being, ultimately driving human flourishing in ways that matter most. (Bridging Cultural Divides: AI in Global Content Strategy, by Phan Nython, Medium; The Role of AI in Bridging Cultural Gaps within Remote Teams; Build Cross-Cultural Bridges, Not Barriers, With AI.)

Concluding thoughts & next steps

AI is a moving target, evolving rapidly in ways that challenge and inspire.
As researchers, designers, and technologists, we have a unique responsibility to approach AI critically, assessing how it both promotes and threatens human flourishing. With regulation, governance, and accountability structures still taking shape, our vigilance and ethical commitment are more important than ever. To ensure AI enhances rather than detracts from human flourishing, here are a few actionable steps:

Apply a human-first lens: Continuously evaluate how AI tools align with values like inclusivity, transparency, and ethical responsibility.
Balance AI with human expertise: Leverage AI's strengths while retaining the depth, empathy, and nuance that only humans can bring. I like to think of this as keeping a "human in the loop."
Foster open dialogue: Share learnings and raise concerns within your teams and professional communities to shape better practices collectively.
Explore the resources and appendix: Dig deeper into the resources referenced throughout this article and the extensive Appendix that follows to expand your understanding and spark new ideas.
Advocate for responsible AI: Push for thoughtful regulation and design that centers human well-being at every level.
Engage in conversation: Talk to your colleagues and friends. Talk to your manager. Talk to your clients. You can even talk to me.
Whether you're seeking practical insights, curious about integrating these strategies, or just exploring the topic in a collaborative way, conversing with others will bring these ideas to the forefront and keep us all moving forward in a human-centered way.

As researchers and designers shaping the products billions of people use daily, we hold the power to keep humans at the heart of this technology. By being intentional, we can ensure AI evolves into a force that uplifts and empowers, rather than one that diminishes or divides.

Huge thank you to my colleagues Katie Trocin and LaToya Tufts for the lit review, content development, editing, and discussion that led to the creation of this article.

Josh LaMar is the Co-Founder and CEO of Amplinate, an international agency focusing on cross-cultural research and design, based in the USA, France, Brazil, and India. As the Chief Strategy Officer of JoshLaMar Consult, he helps entrepreneurs grow their businesses through ethical competitive advantage.

Appendix: References and Resources

Human Flourishing Frameworks
Social Ecological Framework
The Ecology of Wellbeing
Measuring Flourishing | Harvard
Authentic Happiness | Penn
Philosophies of Happiness
On the promotion of human flourishing | PNAS
Rethinking flourishing: Critical insights and qualitative perspectives from the U.S.
Midwest | PMC (nih.gov)
Measures of Community Well-Being: a Template (springer.com)
Flourish: A Visionary New Understanding of Happiness and Well-being
Radically Human Technology: Enhancing Connection and Wellbeing (Or Finding your Ikigai Kairos) | by Nichol Bradford | Transformative Technology | Medium
THE 17 GOALS | Sustainable Development (un.org)
Universal Declaration of Human Rights | Amnesty International
Ayurveda's Edge Over Western Psychology (bwwellbeingworld.com)

AI Risks
AI Risks that Could Lead to Catastrophe | CAIS (safe.ai)
The AI Risk Repository (mit.edu)

Limitations of AI
What Are the Limitations of AI in Understanding Context in Text? | Space Coast Daily
The Context Problem in Artificial Intelligence | Communications of the ACM
What is a context window?
The AI Summarization Dilemma: When Good Enough Isn't Enough | Center for Advancing Safety of Machine Intelligence, Northwestern University
AI's dark secret: It's rolling back progress on equality | Context
We Must Fix the Lack of Transparency Around the Data Used to Train Foundation Models | Special Issue 5: Grappling With the Generative AI Revolution
Transparency is sorely lacking amid growing AI interest | ZDNET

Bias in AI
Battling Bias in AI
When AI Gets It Wrong: Addressing AI Hallucinations and Bias | MIT Sloan Teaching & Learning Technologies
There's More to AI Bias Than Biased Data, NIST Report Highlights | NIST
Eliminating Algorithmic Bias Is Just the Beginning of Equitable AI (hbr.org)
Can the Bias in Algorithms Help Us See Our Own? | The Brink | Boston University
How AI can end bias | SAP

Strategy 1: Complement
Accelerating Research with AI | NN/g
How we elevated HCP market research engagement and insights using AI avatars for an immersive experience | Research Partnership

Strategy 2: Contextually Aware
Will the EU AI Act work? Lessons learned from past legislative initiatives, future challenges | IAPP
How concerned are Europeans about their personal data online?
| European Union Agency for Fundamental Rights
How to Apply Cultural Knowledge in Your Brazilian Localization Strategy

Strategy 3: Privacy & Consent
Can GPT-4o Be Trusted With Your Private Data? | WIRED
The AI Consent Conundrum: Do We Truly Understand What We Agree To? | by Neria Sebastien, EdD | Medium

Strategy 4: Ongoing Training
Finding Meaningful Work in the Age of AI | LinkedIn

Opportunities
How Artificial General Intelligence Could Redefine Accessibility
Voice-activated Devices: AI's Epic Role in Speech Recognition
The Ultimate Guide To Using (or Avoiding) AI At Work
Creativity was another of ChatGPT's conquests. Here's why it's more computable than we think. | by Paul Pallaghy, PhD | Medium
How are Machine Learning and Artificial Intelligence Used in Digital Behavior Change Interventions? A Scoping Review | Mayo Clinic Proceedings: Digital Health
CSRWire: A Bridge to Success: Using AI To Raise the Bar in Special Education
Generative AI & Data Security: 5 Ways to Boost Cybersecurity | BigID
Are Data Privacy And Generative AI Mutually Exclusive?
What is federated learning? | IBM Research
Can the Bias in Algorithms Help Us See Our Own? | The Brink | Boston University
How AI can end bias | SAP
Bridging Cultural Divides: AI in Global Content Strategy | by Phan Nython | Medium
The Role of AI in Bridging Cultural Gaps within Remote Teams
Build Cross-Cultural Bridges, Not Barriers, With AI

AI Failures
9 AI fails (and how they could have been prevented)
12 famous AI disasters
16 biggest AI Fails
17 Screenshots Of AI Fails That Range From Hilarious To Mildly Terrifying
r/aifails | Reddit

Human flourishing in the Age of AI was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
Meta and Spotify's AI takeover: is this the end of human-created content?
uxdesign.cc

AI vs. human influencer at Spotify and Meta. Source: Yoga with Adrienne

The proportion of AI-generated content has been increasing on Spotify. Some businesses now specialise in generating low-cost AI songs and playlists under artist profiles that can't be differentiated. You don't know it, but if you are a regular user of Spotify, you have most likely listened to AI-generated music.

These aren't songs crafted by artists going through a breakup and needing to express their emotions; they are the output of an AI that learned to make music using those artists' work. The problem was uncovered by some users after they started to notice that the same song was played under different names and artists. Often these tracks are part of playlists meant to fill the silence, such as "chill" or "focus".

It's interesting because Meta and Spotify once connected humans and supported their creativity by giving them a free, open platform. But now, the increasing focus on revenue and the rise of AI over the last few years mean that this relationship is stumbling. The platforms are not happy with being a simple platform, and want a share of the content-creation pie, either by creating content themselves or finding cheaper content where they can.

This is a big shift, as it questions the relationship that Spotify or Meta will have with artists and influencers in the future. Can the platforms create AI-generated content while remaining fair, transparent, and authentic to their mission? How does it impact the way we connect to people and consume music?

The rise of the creator economy

The creator economy has boomed in recent years.
Between 2020 and 2023, the number of creators monetizing their work online grew by over 30%, and the global creator economy is now valued at over $191 billion. By 2030, it is expected to surpass $525 billion. Platforms like YouTube, Instagram, Spotify, and now TikTok have become the backbone of this economy, connecting creators with their audiences and taking a cut of the revenue in exchange for their services.
Platforms and tools for content creators. Source: https://grin.co/blog/understanding-the-creator-economy/
For years, this model worked well, at least for those who made it on these platforms, and for the platforms themselves. Creators earned income from ad revenue, brand partnerships, and fan support, while platforms thrived by hosting and distributing this content. The more eyeballs the platforms attracted, the more ad revenue they earned, making it a win-win relationship. But the pressure to generate revenue kept increasing, and pushed platforms to want a slice of the content-creation pie.
Enter AI. With the ability to generate content at scale and at minimal cost, platforms saw a way to take a larger share of the revenue by becoming content creators themselves.
Spotify's AI shift: what happens when the platform becomes the creator?
AI-generated music is created using algorithms trained on vast amounts of data from human-made compositions. Here's how it typically works:
How AI-generated music is created
Spotify doesn't label these AI-generated tracks as such, so users often don't realize they're listening to machine-made music.
Spotify doesn't label AI-generated tracks and artists
For instance, a "Focus" playlist might include real songs (made by a human) and AI-generated piano tracks, with no way to differentiate them. While they may sound harmless or even pleasant, they raise significant questions about transparency and artistic integrity.
Why is Spotify leaning into AI music?
Daniel Ek, Spotify's CEO, recently told the BBC that he had no plans to completely ban content created by artificial intelligence
from the music streaming platform. Spotify's embrace of AI-generated music is less about improving the listening experience than it is about cutting costs and boosting profit. Here's why it's such an attractive strategy for the platform:
1. Lower licensing fees: Spotify pays royalties for every stream of human-made music, which adds up quickly. Spotify, Liz Pelly discovered, not only has partnerships with a web of production companies which, as one former employee put it, provide Spotify with "music we benefited from financially," but also a team of employees working to seed these tracks on playlists across the platform. In doing so, they are effectively working to grow the percentage of total streams of music that is cheaper for the platform. The majority of Spotify's revenue is distributed to recording owners, songwriters, and publishers. By producing its own content, however, Spotify retains full control of the revenue, acting as the recording owner, songwriter, and publisher all in one. Source: https://www.hypebot.com/hypebot/2021/11/how-spotify-royalties-actually-work.html
2. Algorithmic synergy: AI music fits perfectly into Spotify's algorithmic playlists. It's tailored to match the moods and themes these playlists aim to evoke, ensuring users stay engaged.
3. Endless content: With AI, Spotify can generate infinite tracks to fill playlists, ensuring there's never a shortage of content, no matter how niche the theme.
While it's easy to see why this is a win for Spotify's bottom line, it's harder to see how it benefits users, or the music industry as a whole.
The UX problem: no transparency, no trust
Spotify's use of AI-generated music wouldn't be a big deal in itself, if the whole thing weren't set up to be profoundly opaque: the platform doesn't flag when a track is AI-generated, nor does it give listeners the option to filter such tracks out. This lack of transparency has several consequences:
1.
Trust: Users don't know they're hearing AI music and can't choose not to listen to it, which is a problem if you value authentic, human-made art and freedom of choice.
2. Undermining artists: By prioritising AI songs, Spotify reduces human artists' exposure and revenue. Over time, AI music composers could crowd out human artists.
3. Passive consumption: As The New Yorker's Kyle Chayka pointed out, Spotify's design and algorithm encourage passive consumption of vanilla content instead of exploration of new music. Over time, users become more dependent on the playlists instead of forming their own musical tastes.
Music has always been more than just relaxing background sound: it's an art, a cultural expression, and a deeply personal experience.
Fixing Spotify: what needs to change
If Spotify wants to regain the trust of its users, it needs to rethink its approach to AI-generated music. Here's what the platform could do:
1. Be transparent: Clearly label AI-generated tracks and inform users when they're listening to machine-made music.
Example of how Spotify could inform users that the artist they are listening to is AI-generated
2. Give users a choice: Allow listeners to change their profile preferences to exclude AI-generated music.
Example of ways Spotify could give users a choice
3. Support their artists: Ensure that human songwriters continue to be at the center of the platform's mission, rather than being sidelined by the prioritising of cheaper AI music.
What's next for Spotify?
Spotify's strategy with AI-generated music is a symptom of a larger issue: a shift from customer-centricity to monetization-first. If the platform goes too far, it risks alienating the very users who made it a global phenomenon.
Meta's pivot: from connecting people to competing with them
Meta has a somewhat similar story. Once the quintessential enabler of social interaction, it built its empire as a platform where billions of users could share their lives through photos, videos, and stories.
For creators, platforms like Facebook and Instagram became essential tools to build an audience, connect with it, and eventually monetize.
Instagram (and now TikTok) are the main platforms for amateur creators. Source: https://influencermarketinghub.com/income-disparity-creator-economy
But with the rise of AI and the continuous pursuit of profitability, Meta, much like Spotify, has also ventured into the dangerous territory of content creation.
How Meta's AI-generated personas work
In 2024, Meta introduced AI-generated profiles. These profiles, like "Liv," a fictitious Black queer mom, created posts, shared images, and even interacted with users in ways meant to mimic human behavior. The personas were meticulously crafted to appeal to diverse audiences, posting cute updates about family time, ice-skating Sundays, charitable events, and so on, all illustrated with AI-generated images.
Source: https://x.com/DramaAlert/status/1875217669089288610?mx=2
The underlying technology combined advanced AI language models and image generators, enabling these profiles to simulate complex identities and narratives.
Zoomed-in image of one of the posts from Liv's profile.
Some progress is still needed when it comes to generating images of feet.
US-based users could chat and interact with these profiles, blurring the line between authentic social interaction and artificial connection.
The fallout: a case study in broken trust
Meta labeled the profiles as "AI-managed," but the reception wasn't great, and it highlighted a few problems. Liv's profile, for example, portrayed a marginalized identity that was entirely fabricated by a team largely composed of white male developers. Some angry users on X labeled the project "digital blackface," highlighting how it trivialized real experiences and diluted the value of genuine representation. Chatting with these AI profiles only made matters worse. When questioned by users, Liv's AI admitted that no Black creators were involved in her design, making me wonder how this got approved by senior leadership. This revelation deepened public mistrust, exposing the lack of diversity and ethical consideration behind the project. Within 24 hours, Meta removed the AI-generated profiles, issuing a statement that they were part of an early experiment.
Why Meta is pushing into content creation
The motivations behind Meta's foray into AI-generated personas are clear:
1. Revenue retention: By generating its own content, Meta no longer needs to share ad revenue with content creators, allowing the company to retain full control over monetization.
Influencer marketing and ad-revenue sharing are some of the main sources of income for influencers. Source: https://influencermarketinghub.com/income-disparity-creator-economy/
2. Engagement optimization: AI-generated content can be optimised to increase user interaction, keeping people on the platform longer and boosting ad impressions and, therefore, revenue.
3.
Infinite content creation: AI can generate infinite content at scale, ensuring a constant stream of new posts without the cost of paying creators.
While these strategies align with Meta's monetisation objectives, they erode its vision and mission as a social network, which is to connect real people.
Meta's AI personas: when fake profiles spark real problems
As with Spotify, Meta's experiment revealed deeper issues that go beyond public backlash:
1. Erosion of authenticity: By introducing AI-generated personas built for connection, Meta forgets the importance of trust. The blurred line between real and fake interactions creates a dystopian sense of disconnection.
2. Ethical oversights: Once again, the lack of diversity in tech undermines its ability to create unbiased content ethically, and risks further alienating the very communities these profiles tried to represent.
3. Competition with creators: By generating its own content, Meta competes with the human creators who rely on the platform for visibility and income.
Meta's pivot toward creating content underscores its desire to dominate every aspect of its ecosystem, from hosting content to generating and monetizing it.
The bigger picture: platforms vs. creators
The shift from enabling creators to competing with them is a risky gamble for platforms like Meta and Spotify. On one hand, creating content in-house offers monetisation opportunities and a way to fill gaps in their ecosystems. On the other, it undermines the trust and loyalty of creators and users, the lifeblood of these platforms.
Key impacts
For creators: As platforms generate their own content, creators face increased competition for visibility and revenue.
This will drive smaller creators away or force them to find other platforms where they can better create, grow, and retain their audience.
For users: The lack of transparency around AI-generated content erodes trust among users who come looking for authentic connections and content.
For platforms: While this strategy may boost short-term profits, it risks weakening their network effects by driving users and content creators to other platforms.
Content clash: platforms vs. creators in the AI age
Meta's and Spotify's pivot from empowering creators to competing with them reveals a troubling truth about tech today: profitability beats everything. By turning to AI-generated content, these platforms are chasing short-term gains at the cost of their users and creators. For creators, it's a wake-up call to diversify and reclaim control. For users, it's a stark reminder to demand transparency and authenticity in an increasingly synthetic world. The future belongs to platforms that can innovate without sacrificing their communities, and they will need to strike a balance between making money and giving creators control over their audiences.
Interesting reads to go further:
The ghosts in the machine
Is There Any Escape from the Spotify Syndrome?
Meta's AI bots are weird (and really fucking bad)
Enjoyed this? Support my work by subscribing to my newsletter for more deep dives!
Meta and Spotify's AI takeover: is this the end of human-created content? was originally published in UX Collective on Medium.
-
The Meta decision: you can't put the toothpaste back in the tube
uxdesign.cc
You can relinquish fact-checking responsibilities, but can you ever be neutral again?
Photo source
"In recent years we've developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content. This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes … We want to undo the mission creep that has made our rules too restrictive and too prone to over-enforcement."
- Excerpt from Meta's press release on January 7, 2025 (emphasis mine)
Is it possible for a post to be so wrong but also kind of right at the same time? Of course it is. Nuance is a dying art. But that's the challenge with deciding what's right and wrong: things can be completely true, partially true, or not true at all. Unfortunately, when you've claimed the word "meta" from our lexicon, and then you announce that you've done a bad job deciding what's right, so you've decided to stop deciding what's right, but you get that decision wrong, we're out of words to describe the irony.
That's effectively what Meta has done in its announcement this week: it will stop using independent fact-checkers on its platforms and instead shift to a crowdsourced "community notes" model, similar to X's approach. The public response has been palpable. In the same week that the incoming president-elect repeatedly trolls about wanting to annex Canada, a culture war is brewing over raw-milk consumption, and conspiracy theories about California wildfires are spreading as fast as the fires themselves, this seems like the absolute wrong time to step away from fact-checking.
At the same time, I can empathize with Meta's statement. I design technology for a living. I have to make tough ethical decisions. I encounter scope creep and mission creep all the time.
I, too, have approached a problem with good intentions only to find an adverse unintended consequence. In their own words, "we didn't want to be the arbiters of truth," and I wouldn't want to entrust them with that responsibility either.
Many product decisions are unidirectional. I've had to shelve some of my riskier design ideas because consumer trust is a delicate matter: once you release a feature to the public, you can't always pull it back. But then again, I'm not usually the type to move fast and break things.
Not every ethical dilemma carries the same weight, and this one is heavy. I can't blame Meta for initially wanting to stay neutral, but when the company decided in 2016 to use third-party fact-checking to moderate content across its platforms, it altered the social fabric of the internet in a way it can never fully take back. Once you've had a finger on the scale of truth, any omission of fact-checking becomes a permission to lie. The misinformation is already out there, and you can't put that toothpaste back in the tube.
You used to be able to lie on the internet
Does it make me sound old to say the internet used to be a different place? In the halcyon days of the early 2000s, the internet still felt largely like the Wild West. People weren't constantly online, identities were still mostly anonymous, and communities were spread thinner across esoteric websites and interests. You almost expected anything you read online to be a lie, and sometimes that was half of the fun.
I'm pretty sure I posted this on MySpace and thought it was hilarious
So what changed? For one thing, as we began to spend more time online, we moved more of our IRL social lives to the internet, which made it beneficial for everybody to be in the same few places. Aggregators began to distill the best content from corners of the internet into just a few destinations. Memes transformed from inside-joke shibboleths to a shared cultural identity.
And going viral went from an innocent, seemingly random phenomenon to a carefully calculated, focus-grouped business proposition. In other words, we became a captive audience, and people learned that they could profit off our attention.
The internet became too legit to quit
By the early 2010s, things on the internet began to matter. By then, seemingly everyone had a digital presence; if you weren't there, you were probably missing out. People started to take notice when online movements proved they could mobilize people and ideas in powerful ways. The Arab Spring revolutions of 2011 proved that Twitter (now X) was more than just a place to talk about your lunch. Similar movements around the world followed. Even fringe phenomena like Twitch Plays Pokémon demonstrated that the internet hivemind was more than just a theory or a joke. But any scenario where that power could be used for good meant that it could also be used for bad.
The timeline moved swiftly from Facebook's 2012 social experiment on manipulating emotions to Cambridge Analytica's social-media influence in the 2016 election. In the same blink of an eye, "fake news" entered our everyday vocabulary and desensitized us, while sites like InfoWars lost all touch with reality. Lying on the internet was no longer fun. By the late 2010s, you were more likely to be pulling your friend out of a pyramid scheme or worrying that your parents would fall for a crypto scam.
We wanted the truth. We couldn't handle the truth.
Can any one entity really be an unbiased judge of the truth? In hindsight, you might wonder why anyone would willingly step into the morass of content moderation, but in the context of the 2010s you can understand why Meta, and its peers, had to step in. Since 2016, Meta has had mechanisms in place to proactively flag content that's known to be false, or to bury content that aligns with hoaxes. Facebook automatically flags and removes posts and comments that share similarities with hate speech.
While the policies were in place, Meta claimed that they were working as intended, but this week's announcement contradicts that. (See how the truth can change?) Then again, it's now 2025, content moderation policies have been in place for nine years, and anecdotally I'm not sure that I feel any more insulated from fake news than I did before. But I can assume that I'm already in a media-literate bubble and not likely to encounter much fake news in the first place. Those outside my bubble might see more flagged content, but that only fuels the fire among those who believe that content moderation is biased against their point of view. Conspiracy theories are only strengthened by the idea that "they" don't want you to see them. At the end of the day, how effective is fact-checking among the willfully ignorant?
Omission becomes permission
Whether the previous fact-checking mechanisms were effective or not, you can't simply remove them without creating a vacuum. It would be different if there had never been a moderation playbook in the first place, but that's not the case. Since, by declaration, these new rules are relaxed, the relaxation itself becomes a vulnerability to anyone who wants to exploit it. A community-notes approach can only do so much when fake-news proponents can now kick down all the doors that used to hold them at bay.
Worse yet, by relaxing these rules, Meta has redefined its Community Standards and, in the process, explicitly defined new forms of hate speech that are acceptable. Again, it would be one thing if these examples of hate speech had never been defined in the first place, but by walking back from one moral stance to another, Meta provides a list of socially permissible ways to bully or harass a formerly protected class of people.
Among that list, ethnic groups can now be called "filth," women can be referred to as household objects, and LGBT individuals can be called mentally ill (and are notably the only exception among otherwise disallowed mental-condition insults). If you've ever met a bully or a troll, you already know that when a boundary is (re)drawn, they will crowd that line as much as they can get away with.
What now?
The sociologist in me reminds me to have faith in both humans and academia. If I can set aside my cynicism, I can remember that most people are not bad-faith actors. Most people are not willfully ignorant. Most people do want the real news. Those are the people who usually sort things out for themselves pretty well. In the absence of independent fact-checkers, it becomes even more imperative that we do our own research and call out the bullshit when we see it. I don't agree that community notes are the best approach, but they work better if good people contribute to them. And if you've read this far into this essay, then you're probably one of them.
And now, more than ever, I appreciate the important research of Professor Kate Starbird at the UW Center for an Informed Public, which has been studying, tracking, and understanding the spread of false information since long before it was cool. There's always a strong source of truth in peer-reviewed research, even if you have to seek it out for yourself.
The Meta decision: you can't put the toothpaste back in the tube was originally published in UX Collective on Medium.
-
Competitive advantage comes from seeing what nobody else can
uxdesign.cc
A company operates in an ecosystem of opportunities, where competitive advantage can be won by seeing what others are not even aware of (1).
Examples of three different types of maps: AIDA, journeys, and systems. Illustration by the author.
Common customer-insight models reduce an organization's ability to take advantage of its own unique experience, knowledge, and competencies, simply because when adding our own insights onto a map we tend to make the crudest simplifications:
- reducing the customer to a manageable object,
- without external influence,
- to be shuffled down a predictable linear path
- towards an unavoidable goal,
- ignoring everything that doesn't fit into the model's one-dimensional view.
I've observed a few things:
The customer does not operate in a vacuum. They are one part of a larger system, a system with many levers organizations can choose to influence, if we can see them.
We usually do. We usually see those levers and know them, but we fail to choose maps of our environment (e.g. our market) that allow us to include them, limiting our decision-making, strategies, and execution.
Success can be determined by our awareness of the situation we operate in (2), and by our ability to work together as an organization on the same map (3): set strategy, plan, and execute.
The map we make will limit or expand the opportunities we see, enabling us to make better decisions than our competition (5) and deliver better outcomes to our customers and ourselves.
Success can be determined by our awareness of the situation we operate in.
And this is where the opportunity lies: the better we are able to map out what leads to what we want, the better we will be able to outperform our competitors and over-deliver to our customers and ourselves. If we make a map of the world that:
- removes all internal expertise and competencies (what makes us unique) (6), and
- paints the same picture of the world as everyone else has (including our competition),
then we will make the same decisions as everyone else, offering the customer the same value as everyone else, competing on price or efficiency alone (a "race to the bottom"). Or, to repeat the David Ogilvy quote: if you have nothing to say, sing it (7). If we are in a race to the bottom because of our focus on price or efficiency, it's not because that's the only choice we could make; it's because it's the only choice we could see.
But if we make a map of the world that nobody else has, one that captures the nuances, influences, and levers our organization is fit to take advantage of, then we can set our strategy, plans, and execution based on our own strengths, offering the customer relevant value nobody else does, and winning in the marketplace (6)(8).
Let's compare three maps
The first map is the AIDA model. It was designed for the media environment of 1898, and, like other linear or hierarchical models, it has been thoroughly debunked (4) as a poor predictor of human behavior. The AIDA model is a reflection of what the organization wishes the customer would do, in a simplified universe that only includes the product.
Illustration of the AIDA model.
The second map is the customer journey, another linear model that is most efficient at removing information. It pretends that people are on journeys towards purchasing products.
Models like these not only remove vast amounts of insight from our understanding of what creates a customer; they also tend to silo thinking to only one or a few types of influence, coming from only one area of the organization (9).
The third map is a causal diagram (10). Not perfect, but more inclusive and representative than the others. Its main weakness is that it can only represent known insights and relationships (11), but this goes for all maps.
With a causal diagram:
- Everyone can add their insights to the map.
- It manages to represent the most significant known forces of influence from across the entire organization and ecosystem (12).
- It is a shared map: it connects different areas of the organization through the same view of the ecosystem they are together trying to influence.
- It helps the organization find a shared narrative and language, which leads to shared discussions about its purpose, role, and goals (13).
An illustrative causal diagram mapping out influences on the decision-making of a physician. Made by the author with input from Perplexity.
The immediate challenge with a system map is that people balk at the first impression. But a system map is far easier to understand and use than a statistical or linear model, because the latter is a distortion: it's a simple, clean model, but it represents a version of the world that nobody is familiar with and everyone has to learn as an alternative to what they already know. A causal map, in contrast, represents the relationships and influences we already see and recognize, even as small children (11). It doesn't create an alternative narrative; it visualizes our own narrative. Once we learn how to read it, understanding it, sharing it, and collaborating on it become natural.
The simplest possible way to read a system map / causal diagram. Illustration by the author.
Ps.
If you want to make your own system map, this is the simplest place to start.
Now imagine!
Which of these three maps best captures and represents the true environment our offering operates in? And if we wanted to use a map to identify our best opportunities to have influence, which map would we choose?
Winning is not about making better decisions than everybody else; it's rooted in our ability to see what nobody else sees. Our decisions follow our insights, not the other way around. Using the same methodology, models, and simplifications as everyone else sets us up for expensive failure from the start. It narrows our opportunities, removes our unique competencies, and puts us in a competitive space where we are not competing on our strengths but on universal commodities (hygiene factors) like price or efficiency.
We win by the quality and strength of our strategies, planning, and execution. But it's all rooted in our ability to see (14). Having a map of the world that nobody else has helps us see opportunities nobody else does, and gives us the possibility of coordinating and competing based on our own unique expertise and strengths. Having the right map is the springboard to the rest of what we do to win (2).
Sources / further reading:
(1) Gary Hamel, source unknown, https://www.garyhamel.com/
(2) Simon Wardley, Situation Normal, Everything Must Change, https://www.youtube.com/watch?v=Ty6pOVEc3bA
(3) IBM C-Suite Study, "IBM Study: C-suite Leaders Look to Customers to Steer Business Strategy," https://www.ibm.com/blogs/think/nl-en/2013/10/07/ibm-study-c-suite-leaders-look-to-customers-to-steer-business-strategy/
(4) AIDA model, Wikipedia, https://en.wikipedia.org/wiki/AIDA_(marketing)
(5) Helge Tennø, "Customer as Competitive Advantage," https://uxdesign.cc/customer-as-competitive-advantage-19a6ede62852
(6) David J. Collis and Michael G. Rukstad, "Can You Say What Your Strategy Is?," https://hbr.org/2008/04/can-you-say-what-your-strategy-is
(7)
David Ogilvy, Behance, https://www.behance.net/gallery/1625743/If-You-Have-Nothing-To-Say-Sing-It
(8) Mark Lipton, "Walking the talk (really!): why visions fail," https://iveybusinessjournal.com/publication/walking-the-talk-really-why-visions-fail/
(9) Based on conversations with the CustomerC community in Norway, https://www.linkedin.com/posts/helgetenno_customerexperience-customer-business-activity-7276133041905766400-n7EW/
(10) HBR Faculty, "Causal Diagrams: Draw Your Assumptions Before Your Conclusions," https://www.harvardonline.harvard.edu/course/causal-diagrams-draw-your-assumptions-your-conclusions
(11) Judea Pearl and Dana Mackenzie, The Book of Why, https://en.wikipedia.org/wiki/The_Book_of_Why
(12) Donella Meadows, "Dancing with Systems," https://donellameadows.org/archives/dancing-with-systems/
(13) Clayton Christensen, unknown reference, https://en.wikipedia.org/wiki/Clayton_Christensen
(14) Christian Madsbjerg, Look, https://madsbjerg.com/
Competitive advantage comes from seeing what nobody else can was originally published in UX Collective on Medium.
-
Creating quantitative personas using latent class analysis
uxdesign.cc
How the person-oriented approach facilitates the creation of statistical personas.
Photo by Craig Whitehead on Unsplash
Have you ever wondered if there's a better way to understand your users beyond simple survey metrics such as averages and medians? In my previous article, I discussed the person-oriented approach and compared it to the variable-oriented approach, describing how the person-oriented approach sees users as whole, unique entities, while the variable-oriented approach breaks users into parts and misses the big picture. Here, I will explain how much of a difference the person-oriented approach can make in your analyses as a UX researcher and how it will help you create data-driven personas. To do this, I will use artificial data generated by ChatGPT-4o from a survey I created.
To better understand the user base and create quantitative personas, a UX researcher usually conducts surveys aimed at gathering insights into users' behavior, habits, and past experiences. In this article, we assume the role of a UX researcher at a startup developing a product related to books, and we are interested in users' reading habits. To find out more about these habits, we have run a survey containing the following questions:
Hypothetical survey questions. (Diagram created by the author)
Imagine you have gathered 1,000 responses to this short survey (the following visualizations are based on data generated via ChatGPT-4o; you can find the dataset here). How would you report the results? Taking the routine, variable-oriented approach, you would answer these questions like this:
Users' preferred reading medium (graph created by the author)
Users' frequent reading conditions (graph created by the author)
Users' reading frequency (graph created by the author)
While such data are informative, they often lack meaningful connections between different responses.
For instance, you may know how many users prefer audiobooks, but that does not reveal how this preference correlates with other aspects, such as reading frequency or context. In the variable-oriented approach, these relationships are analyzed separately using correlations. For example, you may find that individuals who read in the mornings are more likely to prefer audiobooks. Although these correlations can be insightful, they fail to provide a comprehensive picture of the holistic identity of your users.
To address this limitation, you may include demographic questions in your survey, such as age, gender, education level, or income, to provide more context and depth. However, demographics alone are insufficient to understand users' mindsets or predict their future behaviors. To truly understand who the users are, we need a deeper analysis. This is where the person-oriented approach becomes invaluable.
Using person-oriented analysis to achieve deeper user insights
In the person-oriented approach, rather than analyzing each survey question independently, the goal is to understand participants as wholes. This involves identifying clusters of users with similar behaviors. To accomplish this, you can employ Latent Class Analysis (LCA).
The meaning of "latent" in latent class analysis
Before exploring how the person-oriented approach utilizes Latent Class Analysis (LCA), it's essential to understand the term "latent." In this context, latent refers to something that exists but is not immediately visible or directly measurable. LCA identifies these hidden variables: underlying patterns or traits that go beyond the observable data, such as responses to survey questions.
This method allows researchers to uncover and interpret the unseen factors that shape observable behaviors, classifying users based on these deeper, often unmeasured, characteristics.

The person-oriented approach builds on this foundation by enhancing your analysis in three key ways:

- Discovering your participant groups: helping you identify distinct groups within your user base.
- Revealing the unobservables: uncovering hidden patterns that typical survey metrics miss.
- Adding dimensionality to the data: enabling a richer, more nuanced view of users' behaviors and motivations.

In the following sections, we will explore each of these aspects in depth and illustrate how they come to life in our example research project.

1. Discovering your participant groups

Participants who would select audiobooks in response to our survey question. (Illustrations generated by DALL·E and arranged by the author in the diagram)

In this illustration, we observe three distinct participants who have chosen audiobooks as their preferred reading medium. While all of them share this preference, their behaviors and preferences differ significantly. These differences become clear when we analyze their responses across all survey questions. For example, a participant who listens to audiobooks only a few times a month during their commute contrasts sharply with someone who listens daily every morning.

Rather than analyzing each survey question separately, this approach examines each participant's entire set of responses across all questions. For example, a participant might indicate they listen to audiobooks while commuting a few times a month. These complete sets of responses are then classified using Latent Class Analysis (LCA), allowing us to group participants based on shared characteristics.

By applying LCA to our mock dataset, we identified two distinct participant groups, known as latent classes:

Group 1: The steady scholars
The first group identified through Latent Class Analysis.
(Illustrations generated by DALL·E and arranged by the author in the diagram)

Group 2: The spontaneous explorers
The second group identified through Latent Class Analysis. (Illustrations generated by DALL·E and arranged by the author in the diagram)

In the charts above, you see the probabilities with which each group of users answered our questions. As shown, these two groups provided notably different responses. This insight allows us to take the next step: identifying the underlying variables or characteristics that distinguish these groups from one another.

2. Revealing the unobservables

Latent Class Analysis (LCA) aims to infer unobservable variables from observable ones. In this study, the observable variables are:

- Conditions in which users read books
- Users' reading mediums
- Users' reading frequency

LCA enables us to go beyond these surface-level variables, linking them together to create a more comprehensive picture that adds depth to our understanding of user behavior.

To identify the unobservable variables and interpret these groups, we need to examine the patterns in their responses. The Steady Scholars (group 1), for example, show a strong preference for physical books, a more conservative choice. They also tend to read daily, suggesting a propensity for maintaining routines. This group's second-most likely choice is reading a few times a week, and their selected reading times are regular and rhythmic, indicating a set routine. Overall, these patterns imply that the steady scholars may be conscientious, routine-oriented, and possibly more conservative in their habits.

In contrast, The Spontaneous Explorers (group 2) lean toward more modern reading mediums, unlike the more traditional preferences seen in The Steady Scholars. They also show little regularity in reading frequency and reading conditions, suggesting a preference for novelty and spontaneity.
This pattern implies a group of individuals who may be more novelty-seeking and less likely to adhere to strict routines, showing a lower level of conscientiousness compared to the steady scholars.

In summary, these interpretations reveal two key factors differentiating the groups: openness to new experiences and conscientiousness. These two factors, which we might call our latent variables, represent the deeper traits underlying the observed behaviors. Interpreting these latent variables, however, requires a strong understanding of psychological theories of personality to draw meaningful conclusions.

The process of deducing unobservable and latent variables from observable data for the steady scholars (group 1). (Illustrations generated by DALL·E and arranged by the author in the diagram)

The process of deducing unobservable and latent variables from observable data for the spontaneous explorers (group 2). (Illustrations generated by DALL·E and arranged by the author in the diagram)

Looking back at what we have done here, the flow of interpreting responses, arriving at classes, and identifying the latent variables is as shown below:

Diagram illustrating the process of discovering latent variables. (Diagram created by the author)

Finding the unobservable variables

Identifying unobservable variables requires a solid theoretical foundation, often found in personality psychology. Because these participant groups are likely to differ qualitatively, their traits are assumed to be rooted in stable personality characteristics rather than temporary states. Personality psychology provides scientifically grounded theories to guide this analysis, focusing on enduring traits that can distinguish between groups.

Once you have identified potential personality traits that correspond to the latent classes you've found, you can generate various hypotheses to explore further.
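To make the mechanics concrete, a two-class LCA over binary survey indicators can be fitted with a short expectation-maximization loop. The sketch below is written in plain NumPy against planted toy data; the item probabilities, class count, and variable names are all illustrative assumptions, and in practice you would reach for a dedicated LCA package and choose the number of classes using fit statistics such as BIC.

```python
import numpy as np

def fit_lca(X, n_classes=2, n_iter=200, seed=0):
    """Fit a latent class model (a Bernoulli mixture) to a binary
    respondents-by-items matrix X using EM."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)              # class proportions
    theta = rng.uniform(0.25, 0.75, size=(n_classes, d))  # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: posterior probability of each latent class per respondent
        log_post = (X @ np.log(theta).T
                    + (1 - X) @ np.log(1 - theta).T
                    + np.log(pi))
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate class proportions and item probabilities
        nk = post.sum(axis=0)
        pi = nk / n
        theta = np.clip((post.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, post

# Toy data with two planted classes that answer four yes/no items
# very differently (the probabilities are illustrative assumptions).
rng = np.random.default_rng(42)
true_theta = np.array([[0.9, 0.8, 0.1, 0.1],
                       [0.1, 0.2, 0.9, 0.8]])
labels = rng.integers(0, 2, size=1000)
X = (rng.random((1000, 4)) < true_theta[labels]).astype(float)

pi, theta, post = fit_lca(X)
classes = post.argmax(axis=1)  # each respondent's most likely latent class
```

The recovered `theta` rows play the role of the per-group answer probabilities shown in the group charts above, and `post` gives each participant's membership probabilities, which is what allows whole response patterns, rather than single questions, to define the groups.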
In our example, we inferred that openness and conscientiousness might underlie the observed behaviors in each group. With these assumptions, we can hypothesize additional characteristics and behaviors that may be associated with each group:

Additional traits of users with high openness to experience:
- Early adoption of new features or services
- Higher frequency of browsing
- Tendency to explore a variety of genres

Additional traits of users with high conscientiousness:
- Greater loyalty to the platform
- More frequent usage
- Higher likelihood of engaging with triggered notifications

It is important to recognize, however, that not all behavioral differences stem from personality traits; environmental and social contexts can also shape user behaviors and should be considered in the analysis.

3. Adding dimensionality to the data

Each dataset we work with can be thought of as having a dimensionality, especially when visualized. Consider binary data, for example, where the response to a yes/no question could be represented by a single dot that appears when the answer is yes and does not appear when it is no. This type of data is essentially 0-dimensional, as it contains only presence or absence. Let's revisit one of the variable-oriented results displayed earlier:

Users' Preferred Reading Medium (Graph created by the author)

The data from this question can be viewed as 0-dimensional. It is composed of four binary questions (e.g., "Do you usually read physical books?"), and each participant's response can be represented by a single dot. With a sample of 1,000 responses, we have a set of 0-dimensional data points, each dot representing a participant's answer to these yes/no questions.

Illustration of user responses represented in a 0-dimensional space.
(Graph created by the author)

In this figure, each dot represents a participant's response in a 0-dimensional space.

By contrast, ordinal data, such as responses on a Likert scale ranging from "very bad" to "very good," have a 1-dimensional nature because they map along a single line between two extremes. For instance, in our survey question "How frequently do you read books?" responses form a 1-dimensional dataset, representing a continuum from "Daily" to "Never."

Users' Reading Frequency (Graph created by the author)

Mapping users' reading frequency onto a line in a 1-dimensional space. (Graph created by the author)

These examples capture the dimensions typically used in the variable-oriented approach. In the person-oriented approach, however, the number of dimensions may increase with the number of survey questions, as each question's response is viewed as an axis.

In our 3-question survey example, for instance, the person-oriented approach sees a participant's responses as coordinates in a 3-dimensional space, where each axis represents one survey question.

A 3D space illustrating how survey questions contribute to the dimensionality of data in the person-oriented approach. (Axes derived from the colourbox)

In this view, the data can span across as many dimensions as there are survey questions. But the story doesn't end here. When adopting the person-oriented approach, we assume that latent or hidden variables influence participants' responses. Latent Class Analysis enables us to identify and interpret these underlying variables, representing participants' placement in a space defined by the latent variables discovered.

The space defined by latent variables, where dimensionality increases with the number of detected latent variables. (Graph created by the author)

To deepen our understanding, let's turn back to our example of book readers.
We previously identified three users who had selected audiobooks as their preferred reading medium.

Three distinct respondents who selected audiobooks as their preferred reading medium. (Diagram created by the author)

Their responses can be visualized as coordinates on a 3-dimensional graph, with each dot representing one participant:

Observed variables: in the person-oriented approach, each survey participant is represented as a dot in an x-dimensional space, where x corresponds to the number of survey questions. (Graph created by the author)

In the person-oriented approach, our participants are initially mapped in a 3-dimensional space based on their observed responses, as we had three survey questions (our observed variables). However, this is only the starting point. The x-dimensional space formed by observed responses can be refined into a simpler, more insightful space defined by latent (unobservable) variables. In our hypothetical analysis, we identified two such variables: openness to new experiences and conscientiousness, both key personality factors.

In this new, higher-level space, we no longer map individual participants; instead, we map classes or groups of participants identified through LCA. With two identified latent variables, our space becomes 2-dimensional, as illustrated below.

Mapping user groups in an x-dimensional space, where x corresponds to the number of detected latent variables. (Graph created by the author)

This approach offers a richer, more dimensional insight into user behaviors, helping us build a more comprehensive understanding of the user base and their unique characteristics.

Why these analyses matter

Gaining a deeper understanding of our users allows us to better predict their behavior when introducing new features, even when we are unsure how they might interact with them. As UX researchers, we typically avoid asking future-oriented questions, as such questions often fail to accurately reflect what users will do in the future.
This limitation hinders our ability to reliably forecast user behavior.

However, by leveraging the deep insights outlined in this article and understanding how users are segmented based on their personality traits, we can enhance our ability to predict their actions, decisions, and emotions when faced with new features or products.

This is not how real-world data usually looks

In real-world datasets, user data seldom falls into such neat categories. Instead, distributions typically follow normal or exponential patterns, with group differences emerging as subtle shifts within these distributions. This makes LCA particularly valuable in real-world applications, where it excels at detecting anomalies and uncovering hidden structures within complex data.

Final thoughts

This exercise highlights just how powerful Latent Class Analysis can be in user research. By combining a structured dataset (even an artificially generated one) with a method that goes beneath surface-level data, we're able to reveal deeper patterns and traits that might otherwise go unnoticed. In a perfect world, real-world data would offer such clear divisions, but part of the value of LCA lies precisely in its ability to navigate and make sense of the messiness inherent in real data. As researchers, our goal isn't just to classify users but to understand the complex motivations and characteristics that drive their behavior. LCA provides a unique lens for this purpose, pushing our understanding of users beyond broad demographics into the realm of nuanced, psychology-backed insights. This journey with LCA is just the beginning; there's always more to uncover beneath the surface.

Creating quantitative personas using latent class analysis was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
Building a product ecosystem
uxdesign.cc
Design in a merging world

Photo by author

As industries consolidate, we face the challenge of joining products that were designed in isolation. It requires making connections, across screens and between people.

The story, and what follows

For a while now, EdTech, the domain in which I work, has been in its consolidation era. It is not unique. As companies merge and acquire in order to meet strategic needs and keep up with competition, their product portfolios swell.

On a purely narrative level, the sequence of events looks clean and logical: the business identifies a strategic need, finds a product that fills it, and acquires that product. The acquirer is on an upward trajectory, continuously building on success, each new acquisition a step on the staircase.

Here in the UX trenches, where it falls on us to turn that narrative into a coherent experience that provides actual user (and business) value, things are messier. The merging of products creates a system of systems, which raises new and unique design challenges. This piece will dig into some of those challenges and offer suggestions for navigating them.

Merging products: the risk and the reward

When we talk about the collection of products that coexist under a shared umbrella, we tend to use optimistic metaphors. We have a "suite," a "portfolio," or an "ecosystem." This makes sense in an aspirational way, and it reflects the vision. An ecosystem is characterized by balance and harmony.

There's another metaphor we could use, though, that serves as a cautionary tale: the story of Frankenstein's monster.

Source

Dr. Frankenstein had two main challenges in his quest to create life, both daunting. First: make a coherent organism out of disparate parts. The vital systems must connect and the pieces must fit together to form one functional whole. Here he succeeded.

Second: don't freak out the villagers. Here is where he failed, and where we should pay attention.
As we stitch together our products, even if everything functions as designed (identities are connected, data flows freely, and users can navigate between apps), all of our efforts are wasted if our new amalgamation fails to improve on the disjointed set of product experiences that our users are accustomed to.

We hope, in making these connections, to create a whole that is much greater than the sum of its parts by enabling each product to build on and enhance the value of the others. But if we're not painstakingly thoughtful about how we proceed, it could all go wrong, and quickly. Before we sketch any sketches or move any pixels, we need a firm grounding in:

- common user tasks and needs across the products we're connecting (how could using these things together make one's life better?)
- constraints and requirements within the specific domain (what are the rules we need to be aware of?)
- the mechanics of each product (where is the devil in the details?)

In the absence of any of the above, we risk creating change for change's sake. Few things enrage a user more than having to relearn a system for no benefit at all. Let's not rouse the angry mob.

Mapping the system

Our job is to take the overwhelming complexity underlying our nascent system of systems and channel it into an experience that looks so clear and logical that it seems obvious, even if it was far from obvious when we started. In order to get there, we have to wade through a giant mess.

Use cases

Once we have conducted research to sufficiently understand who our target users are and what problems we think we can solve for them, we can narrow down to a prioritized list of use cases. At this point, a good way to make sense of our current state and the work still ahead is to make it visual. For each use case identified, we can map the specific connections, as well as the inputs and outputs of each product.

Diagram made in Figma

It's crucial to be thorough and specific when documenting the gaps.
For example: "Data from Product C syncs on a nightly cadence, but to adequately support Use Case 1 we need to feed all user activity from Products B and C into Product A's recommendation engine in real time."

Once the gaps are documented, ask: are they showstoppers? What would it take to address them? Can we work around them? Do the answers to those questions change how we want to prioritize?

To platform or not to platform

As we analyze each use case, we notice a recurring need to define logic that will govern what information to display, what actions become available under which conditions, and, sometimes, what to recommend. Where does all this logic live? Is it spread out among the products, or is there a central logic layer that handles it all? If form follows function, then this line of thinking may lead us to the idea that we need a central platform experience connecting the products.

What does that look like? Again, it's useful to diagram all of the data that must flow into the platform from the products, and vice versa.

Diagram made in Figma

As we fill in the details, the map begins to look quite wild, and we will naturally question its utility. This does not look like an artifact we would hand off to a dev team. So, why did we do it, and who was it for?

Three things: first, the artifact itself is less important than the journey we took to create it. Having immersed ourselves in the process, we have gained a deep and abiding understanding of the challenges we face and what we have to do to meet them. Second, despite its complexity, it can be a very useful reference when creating actual documentation for handoff.
Third, hopefully at least one member of the dev team was deeply involved in this process and has gained the same deep understanding.

Tactics

Once we've established our conceptual framework, we turn our attention to the equally challenging questions that arise as we start to make it all tangible.

Balancing I.A.N.

Let's now consider our new platform, connecting all the products in our ecosystem. What is it, exactly? It can be useful to think about each screen as having some combination of three components:

- Information (what can you learn here?)
- Action (what can you do here?)
- Navigation (where can you go from here?)

The tendency, and the temptation, when combining multiple product experiences is to go all in on navigation. In many cases, this approach (the wall of tiles) is perfectly fine. These are the cases in which each product is mostly inert and self-contained. In other cases, though, this approach leaves a lot on the table.

I.A.N. is unbalanced

The front page of our platform is incredibly valuable real estate. It's where our users may be introduced to newly added products, and it's the primary provider of ecosystem-level context. So, let's ask ourselves: how can we help users save time here that would otherwise be spent hopping from app to app attempting to gain a holistic view of their current situation? What insights can we provide that incorporate information from all of their apps? What recommendations can we make based on those insights, and how can we make them easy to follow? How can we do all of this without overwhelming our users with too much information? In other words, how do we balance our I.A.N.?

Cross-product navigation

When you introduce a new product to your ecosystem, you may be connecting your users to an unfamiliar experience. We hope we have built up enough credibility and loyalty with these users that they are willing to trust that this product that has abruptly appeared in their lives is worth the time and effort it will take to incorporate it into their routine.
That trust can be quickly undermined if the cross-product experience is confusing and disjointed.

We are responsible for helping users understand why they are being sent to this new place, and what they can do when they get there. To that end, good UX writing is invaluable. If we are unable to briefly and clearly explain why this product integration is beneficial to the user in context, that may be a sign that we need to re-examine our strategy.

A stark mismatch in look and feel can also add to a feeling of incoherence. While a wholesale UI update across all products based on a new, shared design system is likely unrealistic, there are smaller steps that can be taken to reassure the user that they have not been suddenly transported to Oz. Brand identity is key here. Even limited alignment of product logos and strategic adjustments to color palette and typography can go a long way toward establishing a visual connection between the familiar experience and the new one.

Source

Think of a fully consistent, ecosystem-level style as a long-term goal, and take a phased approach that allows you to continuously, if slowly, keep moving in that direction.

Where's my cheese?!

It's inevitable: when we change a core experience, no matter how thoughtfully we planned and how much we believe we have improved it, a subset of users will be angry. They will also be vocal. Be aware going in, and prepare yourself emotionally, while also recognizing the validity of their anger (as well as the validity of our need to step away from it and take a breath).

Change management is a tough nut to crack, which is not to say that there's nothing we can do to make the transition less painful. To the extent that we can help our users anticipate the coming change, we should.
Even better, we can provide a level of control over the transition by giving users the ability to opt in to, and back out of, the new experience for some set amount of time.

Living through change

While the focus of this piece has been on the impact of corporate consolidation on product design, there is a larger picture to keep in mind here. When companies combine, the impact on workers can be dramatic. The joining together of people, products, and systems creates work, disrupts routines, and shakes up the social order. It is not surprising that employees tend to leave en masse in the wake of these events.

Within a newly expanded UX team, the change may mean more meetings, new processes and tools to learn, and more layers of decision making. It can feel slower and less nimble. As a team gets larger, communication becomes harder. But as we adapt to new ways of working, we learn from each other. Fresh ideas keep us from getting stuck in old habits.

For many of us, particularly those of us who are neurodivergent, the expansion of the team can be jarring and uncomfortable. We have established routines, habits, and trust with our old team. Now, we feel like we have to rebuild our professional equity from scratch. But hopefully, as we form relationships with new colleagues, we remind ourselves of our strengths and reaffirm our value.

We may feel defensive, as our design debt and the state of our design systems are suddenly exposed to strange new colleagues. But as we feel defensive, we recognize those same feelings in our counterparts, and realize that we all have room to improve along with reasons to be proud.

Relationships are everything

For workers in startups and smaller companies that are acquired by one of the big fish, it's reasonable and justified to worry about assimilation into the larger system.
They may feel a sense of protectiveness over this thing they so lovingly designed, and a fear that the special qualities that won over a devoted base of users will erode as they get incorporated into the big new platform.

These new teammates are crucial to the effort. The ecosystem is defined by its dependencies. We rely, heavily, on our partners to also commit to making it work, and so we must build trust with a wide range of stakeholders, each of whom has competing incentives. This commitment will eat up large portions of their roadmap. It will require engineering teams to connect disparate technologies and to work through the bugs and frustrations that inevitably arise in the process. It will present design challenges that can feel overwhelmingly complex.

It may be tempting, after a point, to declare "good enough!" on the organizational level and consider it a wrap. Don't give up! The most profound benefits of an ecosystem, and the most rewarding relationships made along the way, can take a long time to develop. It's our job to create the conditions under which they will prosper.

Building a product ecosystem was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.