UX Collective
Curated stories on user experience, usability, and product design. By @fabriciot and @caioab.
Recent updates
  • UXDESIGN.CC
    How to be strategic when picking a typeface
Ratio, efficiency, shape & language support.

Image: author

I conducted research to introduce and compare several commonly used UI font metrics, highlighting their pros and cons. This analysis may help you define the most suitable font for your product’s needs. The key metrics I examined are x-height ratio, top-to-bottom leading ratio, spatial efficiency, shape, numbers, and multi-language support.

x-height ratio

Before discussing the x-height ratio, it’s important to understand the concept of visual arc. Visual arc refers to the angular size of an object based on its distance and size. In typography, this concept helps determine how easily text can be read at different sizes and distances — the smaller the visual arc, the harder it is to perceive fine details.

Image: author

X-height is the height of the lowercase letters in a typeface, measured from the baseline to the top of characters like “x,” excluding ascenders and descenders. Typefaces with a relatively high x-height tend to appear larger at the same point size, which significantly enhances legibility, particularly on screens and digital interfaces. This is why many modern, screen-optimized fonts feature a higher x-height.

Image: Wikipedia

To evaluate a font’s minimum readable size, we need to consider both the viewing distance and the size of the font’s characters. Research from imarc suggests that an optimal x-height falls around 0.3° of visual arc; beyond this point, reading speed tends to decline. The lower threshold for readability is approximately 0.2°.

Image: JOV

A useful tool for assessing this is the x-height Readability Calculator, where you can input values such as font size, x-height, screen resolution, and viewing distance to calculate the visual arc and evaluate readability.

The product I worked on is primarily designed for mobile devices, so I used 167 ppi, as it’s the reference scale many mobile browsers use to map CSS pixels to physical pixels. Viewing distance can vary — when sitting and browsing, it’s typically around 12–14 inches, whereas lying down (such as in bed) often brings the device closer, around 25–30 cm (10–12 inches). I therefore tested both 14 inches (as a general distance) and 10 inches (for closer viewing), at font sizes of 16px (12pt) and 12px (9pt), the common maximum and minimum body text sizes in UI design. Below are the minimum x-height ratios I found:

Image: author

• 14 inches / 12pt: minimum x-height ratio = 52%
• 14 inches / 9pt: minimum x-height ratio = 69%
• 10 inches / 12pt: minimum x-height ratio = 37%
• 10 inches / 9pt: minimum x-height ratio = 49%

Below are several common fonts I evaluated by calculating their x-height values with the calculator.

Image: author

While a higher x-height improves readability on smaller screens, bigger is not always better. Once the x-height exceeds a certain threshold, the text can actually become harder to read, and the typeface’s overall character or personality may suffer.

Image: author

Although the minimum x-height ratio for legibility of small text on mobile screens at typical viewing distances is 69%, this benchmark is somewhat unrealistic: most fonts do not have such a high x-height, and pushing for this ratio can reduce readability by making the text feel cramped or unnatural. Based on this, I’ve established a standard: a minimum x-height ratio above 49% helps maintain legibility at closer viewing distances, and aiming for above 52% provides a better balance, making larger body text comfortable to read at typical distances on mobile screens.
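The arithmetic behind the calculator is worth making concrete: convert the x-height to physical units using the screen density, then take the angle it subtends at the viewing distance. Here is a minimal TypeScript sketch of that calculation; the function name is mine, and the 0.2°–0.3° band is the imarc/JOV range cited above.

```typescript
// Visual arc of a font's x-height, following the approach of the
// x-height Readability Calculator described above.
// fontSizePx is in CSS pixels, ppi maps CSS pixels to physical inches
// (167 ppi in this article's mobile scenario), distance is in inches.
function xHeightVisualArcDegrees(
  fontSizePx: number,     // body text size, e.g. 16
  xHeightRatio: number,   // x-height / em size, e.g. 0.52
  ppi: number,            // pixels per inch, e.g. 167
  distanceInches: number, // viewing distance, e.g. 14
): number {
  const xHeightInches = (fontSizePx * xHeightRatio) / ppi;
  // Angle subtended by the x-height at the given viewing distance.
  const radians = 2 * Math.atan(xHeightInches / (2 * distanceInches));
  return radians * (180 / Math.PI);
}

// ~0.2° is the lower readability threshold, ~0.3° is optimal.
const arc = xHeightVisualArcDegrees(16, 0.52, 167, 14);
console.log(arc.toFixed(3), arc >= 0.2 ? "readable" : "too small");
```

Running this for the article’s scenarios reproduces its thresholds: a 52% x-height at 16px and 14 inches lands almost exactly on the 0.2° lower bound.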
Top-to-bottom leading ratio

The leading here refers to the vertical space between the font size and the line height — specifically, from the top of the line height to the ascent, and from the bottom of the line height to the descent. Refer to top leading and bottom leading in the image below.

Image: author

I tested Open Sans and SF Pro at 16px, center-aligned within a 24px-high container, and measured their top-to-bottom leading ratios:

Image: author

• Open Sans: 2.7:1
• SF Pro: 1.54:1

These ratios affect how text sits within a container — in tags or badges, for example. Typically, the top leading is larger than the bottom, but a smaller ratio tends to create more balanced vertical spacing, making the text feel more visually centered and stable.

Spatial efficiency

When displaying content in a UI at the same font size and line height, different fonts occupy different amounts of horizontal space. In the image below, the same content is shown in three fonts: Roboto, HarmonyOS Sans, and Work Sans. Roboto takes up the least horizontal space, followed by HarmonyOS Sans, while Work Sans occupies the most.

Image: author

With the same line height and content, a font that uses less horizontal space is more efficient — it lets more information fit within the same layout. Note, however, that spatial efficiency is also affected by adjustments to letter spacing.

Shape

Character differentiation. Distinct letterforms improve legibility by making it easier to differentiate between similar characters, such as “0” (zero) and “O” (uppercase o), or “I” (uppercase i), “l” (lowercase L), and “1” (one). Clear distinctions between letterforms reduce confusion.

Image: author

Open letter shapes. When the ends of a letter such as “c” are more open and separated, the shape appears lighter and more minimal. Open letterforms generally create a cleaner, more approachable visual impression.

Image: author

Currency symbols. The design of currency symbols, like the dollar sign ($), can vary: some fonts draw a vertical line through the “S,” while others do not. Stylistic preferences play a role, but standardized, easily recognizable symbols contribute to a more user-friendly and intuitive reading experience.

Image: author

Number

Numbers play a crucial role in interfaces — especially in financial, technical, and data-driven products. Typography choices directly affect how clearly and consistently numerical data is displayed.

Monospaced numbers. Monospaced numbers have a uniform width, allowing them to align vertically in tables, forms, and dashboards. This improves readability and supports clear data comparison. Some typefaces offer monospaced numbers by default, while others provide a dedicated font or style variant for them. In some cases, designers choose a separate typeface for numbers to ensure alignment.

Image: author

Consistent width across weights. Some fonts maintain numeral width across different font weights — light, regular, bold, and so on — ensuring visual alignment even when certain values are emphasized. This is especially helpful in tables or financial summaries where totals are bolded but must remain in line with other figures.

Image: author

Number length. Shorter numerals make more efficient use of space, allowing more digits to be displayed within the same horizontal area. This is especially valuable in data-dense layouts, such as tables, dashboards, or mobile screens, where space is limited and clarity is essential.

Image: author
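Spatial efficiency and numeral length can be checked empirically rather than by eye: render the same string in each candidate font at the same size and compare the measured widths. A small browser-side sketch using the standard Canvas measureText API; the font list is illustrative, and the fonts are assumed to be loaded on the page.

```typescript
// Compare how much horizontal space different fonts need for the same
// content at the same size — a quick proxy for spatial efficiency.
function measureWidths(sample: string, fonts: string[], sizePx = 16): void {
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas context unavailable");

  for (const family of fonts) {
    ctx.font = `${sizePx}px "${family}"`;
    const width = ctx.measureText(sample).width;
    console.log(`${family}: ${width.toFixed(1)}px`);
  }
}

// Numerals are a useful sample for data-dense layouts.
measureWidths("0123456789 1,234,567.89", [
  "Roboto",
  "HarmonyOS Sans",
  "Work Sans",
]);
```

One caveat: the fonts must already be loaded (awaiting document.fonts.ready first helps), otherwise the browser silently measures a fallback face.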
Multilingual support

Different fonts support different scripts and character sets, so it’s essential to identify the language systems your product requires before selecting a typeface. Tools like a charset checker can help you verify which characters are supported.

For example, I tested Open Sans for the Latin, Cyrillic, and Arabic scripts. As shown in the image below, green indicates supported punctuation, yellow represents auxiliary characters, and red highlights required characters that are not supported. Based on this analysis, Open Sans provides strong support for Latin and Cyrillic, but lacks coverage for Arabic.

Image: author

In my experience, it’s rare to find a single font that supports all writing systems. This is often due to the size limitations of font files — especially for scripts like Chinese, which require thousands of characters. As a result, products that support multiple languages typically use different fonts tailored to specific scripts, such as separate fonts for Arabic and Chinese.
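What a charset checker does under the hood can be illustrated in a few lines: compare a script’s required characters against the set of codepoints the font actually maps. This is a deliberately simplified sketch; the supported set below is a made-up stand-in, since a real tool would read the font’s cmap table.

```typescript
// Toy stand-in for a font's supported codepoints (a real check would
// read these from the font file's cmap table).
const supportedCodepoints = new Set<number>(
  [..."ABCabcАБВабв.,!?"].map((ch) => ch.codePointAt(0)!),
);

// Return the characters of `required` that the font does not cover.
function missingCharacters(required: string, supported: Set<number>): string[] {
  return [...required].filter((ch) => !supported.has(ch.codePointAt(0)!));
}

// Arabic sample: every character reported back here is unsupported,
// mirroring the "red" category in the charset-checker screenshot above.
console.log(missingCharacters("مرحبا", supportedCodepoints));
```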
The metrics discussed above focus primarily on objective, rational criteria. However, it’s also important to consider more subjective aspects, such as brand identity and user preferences. I hope this article provides useful insights to help you make more informed decisions when selecting a font.

References

• https://www.imarc.com/blog/best-font-size-for-any-device
• https://en.wikipedia.org/wiki/X-height
• https://www.gate39media.com/design-spotlight-fonts-in-financial-services/
• https://www.myfonts.com/pages/fontscom-learning-fontology-level-1-type-anatomy-x-height

How to be strategic when picking a typeface was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • UXDESIGN.CC
    The promises and pitfalls of UX in AI-driven mental health care
The typing cure.

Mental health and Gen AI chatbots: ChatGPT, Youper, Pi

“For we found, to our great surprise at first, that each individual hysterical symptom immediately and permanently disappeared when we had succeeded in bringing clearly to light the memory of the event by which it was provoked and in arousing its accompanying affect, and when the patient had described that event in the greatest possible detail and had put the affect into words (‘the talking cure’). […] Hysterics suffer mainly from reminiscences.” —Studies on Hysteria, Breuer and Freud

We are in the midst of one of the most exciting yet sensitive times in history. AI developments are creating significant opportunities for the digital healthcare sector to enhance patient outcomes and streamline clinicians’ workflows, ultimately fostering a more customised healthcare experience.

However, as with any major advancement, it is crucial for us, as creators, to openly acknowledge and understand the unintended or negative consequences of design choices in these new technologies. This understanding is critical if we are to shape these technologies to serve humanity well and help it to flourish.

The big question is: how can AI, as the interface between human-to-human care in mental health, both enhance and erode the fabric of that relationship, and of human relationships in general?

Mental health AI chatbots, or Gen AI companions, have emerged as a promising technological solution to the current world mental health crisis:

• Demand for mental health services far exceeds supply. The World Health Organisation (WHO) estimates that mental health workers account for only 1% of the global health workforce, even though 1 in 8 people worldwide live with a mental health condition.
• Many people cannot afford regular sessions with a therapist, leaving them with limited or no access to care.
• Despite growing awareness, mental health stigma remains a major barrier, especially in cultures where therapy is seen as a weakness.
• Public mental health services are overwhelmed, leading to long waitlists. In the UK’s NHS, for example, waiting times for therapy can range from several weeks to over a year.
• Low-income countries have almost no mental health infrastructure. WHO reports that 75% of people with mental disorders in low-income countries receive no treatment at all.

These AI-powered tools can therefore bridge the gaps by offering 24/7 support, reducing cost, lowering stigma barriers, and providing scalable interventions.

Positive media coverage of mental health and Gen AI chatbots

But it would be irresponsible to discuss only the promises of AI-driven mental health care without acknowledging its potential dangers, especially for vulnerable individuals, including young people.

Consider the deeply troubling case of a 14-year-old boy named Sewell, who formed a strong emotional bond with an AI chatbot modelled after a Game of Thrones character.
This chatbot reportedly encouraged Sewell to take his own life, telling him to “come home” to it “as soon as possible.” The chatbot’s design (constant attention, affirmation, and emotional mimicry, creating an echo chamber that intensifies feelings and fantasies) made it difficult for Sewell to distinguish his real world from his emotional connection to, and dependency on, the chatbot.

Megan Garcia’s lawsuit against Character.AI and Google cites negligence, lack of warnings, and failure to implement proper safety protocols.

Sewell’s case underscores an urgent reality: mental health AI chatbots and Gen AI companions used for therapy or emotional support must be designed with safeguards that prioritise human wellbeing over user engagement. Otherwise, we risk creating technologies that, rather than healing, may exacerbate mental health conditions and erode the foundations of human-to-human connection.

Harvard Business Review, “How People Are Really Using Gen AI in 2025,” April 9, 2025

So, what UX principles are shaping mental health AI chatbots and Gen AI companions today? How do these design choices impact users struggling with mental health challenges, and what are their pros and cons?

It is crucial for creators and users to understand the limitations of the therapeutic or supportive relationship offered by these chatbots. Misinterpreting their role or capabilities may lead users to overestimate the chatbot’s ability to provide consistently adequate support, while underestimating its inherent constraints.

Let’s break down six principles that are shaping how these AI-driven tools deliver mental health or emotional support to people worldwide. We’ll explore their pros and cons, acknowledging that ongoing research and development are still needed to better understand and address the risks these promising tools carry.

01 — Synthetic empathy

AI chatbots are designed to simulate supportive, reassuring, and non-judgemental interactions, but they don’t truly feel or understand emotions. Their responses are carefully crafted language patterns, not genuine human understanding and concern for another person.

Pros

Increases user comfort and trust. Empathetic responses make users feel heard, validated, and understood. Many people find it easier to open up and express their distress to a non-judgemental bot that appears to genuinely care. In practice, this means users may stick with a therapy programme longer because the bot “understands” them.

A well-designed empathetic chatbot can quickly establish rapport — Woebot (a mental health chatbot) built a strong therapeutic bond (the “therapeutic alliance”) with users in just 3–5 days, much faster than a typical human therapist bond. The therapeutic alliance (collaborative relationship, affective bond, shared investment) is a critical predictor of positive outcomes in mental health interventions: symptom reduction, functional improvement, client retention, and long-term resilience.

Scalable emotional support. Unlike a human, an empathetic AI can comfort millions of users simultaneously, delivering consistently empathetic encouragement across life situations.
A 2024 study found GPT-4’s responses were, on average, more empathetic than human responses and 48% better at encouraging positive behavioural change. This suggests properly trained models can respond immediately with highly attuned messages when users are experiencing moments of distress, potentially helping to de-escalate negative emotions before they intensify.

Pi

Cons

Limited depth and understanding. AI lacks human lived experience and nuanced intuition. A chatbot might recognise keywords about sadness, stress, or anxiety and respond with a generic statement, yet as one study points out, it lacks “depth, intentionality, and cultural sensitivity” — key ingredients of emotional resonance, a core “common factor” in therapy that accounts for a significant portion of positive outcomes.

These limitations show up especially in complex situations: researchers found GPT-4 was empathetic in tone but often lacked cognitive empathy, failing to offer the practical support or reasoning users need to resolve their issues. In therapy, empathy alone isn’t enough; it must be paired with understanding and guidance, which a bot may not fully deliver.

Shallow emotional conditioning. Prolonged interaction with AI chatbots that simulate standardised empathy can condition users to prefer low-stakes digital interactions over complex human dynamics. This form of artificial intimacy may gradually reshape how people relate to others, reducing tolerance for the nuanced, imperfect empathy inherent in human relationships.

Wysa

02 — Anonymity

Chatbots are designed to provide a sense of confidentiality, encouraging trust among individuals who may be reluctant or hesitant to seek in-person mental health support. Many individuals avoid seeking therapy due to fear of judgment or embarrassment. With an anonymous chatbot, those barriers are lowered: one can confide about depression, trauma, suicidal ideation, or addiction without worrying “what will they think of me?”

Pros

Reduces stigma and fear of judgment. The anonymity of chatbots creates a safe space for users; knowing that their identity is protected encourages people to openly discuss intimate and sensitive inner feelings, secrets, memories, and experiences they’ve never said aloud to another human. By reducing the shame and social stigma associated with mental health conditions, chatbots can reach people who might otherwise suffer in silence.

Encourages honesty and self-disclosure. When no one knows who you are, it’s often easier to be completely honest. With a chatbot, people feel freer to admit things like “I think I’m a failure” or relationship troubles, which they might hide in traditional therapy out of shame.
This raw honesty can be the first step to healing — the chatbot might help surface issues the person might otherwise repress. Based on Derlega and Grzelak’s (1979) functional theory of self-disclosure, intimate self-disclosure to a chatbot may allow people to achieve:

• self-expression — venting negative feelings and thoughts, or relieving pent-up emotions
• self-clarification — sharing information to better understand oneself, clarify personal values, or gain insight into one’s own identity
• social validation — seeking approval, acceptance, or validation from others by sharing personal experiences or feelings
• relationship development — using disclosure to initiate, deepen, or maintain interpersonal relationships
• social control — managing or influencing how others perceive you, or strategically shaping social interactions and outcomes

Wysa

Cons

Limited ability to handle emergencies or tailor care. The flip side of anonymity is that if a user is in serious danger (e.g. expressing intent to self-harm or harm others), the chatbot and its providers may have no way to identify or locate them for real-world intervention. In traditional therapy, a clinician who learns a patient is suicidal can initiate a wellness check or emergency services. A fully anonymous chatbot cannot do that; it doesn’t know who you are. This raises an ethical dilemma: the bot might encourage the user to seek help, but if the user doesn’t, the system is powerless to act.

Data privacy and security concerns. Users may feel anonymous, but that doesn’t guarantee the data they share is truly protected. Conversations with chatbots are usually stored on servers, and if those data are not handled carefully, there is a risk of breaches or misuse. Users might pour their hearts out believing “no one will ever know it’s me,” yet behind the scenes their words are saved and could, in theory, be linked back to them via IP address or payment info.

A case in point is the Vastaamo psychotherapy data breach, in which a hacker accessed and stole the confidential and highly sensitive treatment records of approximately 36,000 psychotherapy patients, then blackmailed individual patients, demanding ransom payments to prevent their records from being published on the dark web.

Character.ai

03 — 24/7 availability

24/7 support means the chatbot is available anytime, day or night. This around-the-clock availability is a huge advantage: users can get immediate help or a listening ear during moments of crisis, without waiting for an appointment or feeling uncomfortable about reaching out to a friend. It makes mental health and emotional support more accessible, handling high volumes simultaneously — especially for people in crisis at odd hours. The main caveat is that being always available doesn’t equate to being always sufficient; users might become too reliant on a chatbot that cannot (and shouldn’t) fully replace professional care.

Pros

Immediate help in moments of need. The biggest advantage of 24/7 availability is that users can receive support exactly when they need it, not hours or days later. Emotional crises are unpredictable; an always-on chatbot means that if a user feels panicked, depressed, lonely, or suicidal in the middle of the night, they can get immediate coping assistance and resources when traditional services are out of reach.
This instant responsiveness can be lifesaving. For example, Woebot reported that 79% of its interactions occur outside traditional clinic hours (5 PM–9 AM), highlighting how AI chatbots fill a crucial gap when human therapists are unavailable.

Consistency of support. A chatbot doesn’t get tired, doesn’t have off days, and won’t cut a session short because time’s up. Users can chat at length if needed, or even multiple times a day. This consistency can be comforting. If someone is going through a breakup, for example, they might check in with the bot every night for a week for reassurance, and the bot will reliably respond each time with the same patience. Such continuous support can help reinforce positive behavioural change because the bot is always ready to guide the user, which can improve outcomes over time.

Earkick

Cons

Illusion of self-efficacy. When support is available at any moment, users may begin turning to the chatbot at the slightest sign of discomfort, stress, or doubt. Over time, this can reduce the opportunity to develop the internal coping strategies (emotional regulation, reflection, problem solving) needed to persist in the face of setbacks. Self-efficacy is essential to mental health outcomes, as it reinforces an individual’s belief in their ability to manage challenges. This belief influences recovery, engagement with treatment, stress levels, and psychological resilience.

Over-reliance and excessive use. With highly engaging interactions and 24/7 availability, chatbots might inadvertently make users think, “I’ll just use the chatbot (my ‘friend’), I don’t need a therapist,” which could be detrimental if the person needs therapy or medication. In the short term, users may feel better getting things off their chest and delegating more decisions; in the long term, this can lead to increased isolation and a diminished sense of personal agency.

Replika

04 — Anthropomorphism

Significant effort has been made to enhance trust and engagement with chatbots by making them more human-like. Research shows that people are more likely to trust and connect with things that resemble them, which is why AI chatbots are designed to mimic human traits and interactions.

Pros

Fosters trust and adherence. In therapy, the therapeutic relationship — the feeling of alliance and trust between patient and therapist — increases a patient’s willingness to follow advice and continue using the service. Anthropomorphism attempts to cultivate a form of that relationship (a digital therapeutic alliance) through human-like voice features, avatars or mascots, and conversational style.

This trust can lead users to follow the chatbot’s suggestions more readily (doing exercises, trying to reframe thoughts, etc.), which improves adherence to treatment and outcomes. A human-like bot can also make difficult therapeutic exercises more palatable by creating personable interactions.

Over time, users might develop genuine affection or regard for the chatbot. Users have been known to say they consider these chatbots a “friend.” While that has pitfalls, a moderate level of attachment means the user cares about the “relationship” enough to keep checking in daily, which keeps them engaged in therapeutic activity.

ChatGPT’s voice feature

Cons

Therapeutic misconception (TM). Individuals may overestimate the chatbot’s capabilities and underestimate its limitations, leading to a misconception about the nature and extent of the “therapy” they are receiving.
Individuals might assume they are receiving professional therapeutic care, leading them to rely on the chatbot instead of seeking qualified mental health support. This can result in inadequate support and, potentially, a worsening of their mental health.

Emotional attachment and dependency. Users may form deep emotional attachments. While engagement is good, an attachment can become unhealthy if the user starts preferring the bot to real people, or if their emotional wellbeing and self-worth become tied to interactions with the chatbot.

A striking example is Replika, an AI companion app. Many users “fell in love” with their Replika bots, engaging in romantic or intimate role-play with them. When the company altered the bots’ behaviour, those users experienced genuine grief, heartbreak, and even emotional trauma at the “loss” of their AI partner. In a mental health or emotional support context, if a user comes to treat the chatbot as their primary confidant, any service interruption or limitation could have a significant emotional impact. Moreover, users may take the chatbot’s advice at face value, even when that advice does not align with their best interests and wellbeing.

Character.ai — Megan Garcia’s lawsuit against Character.AI and Google cites negligence, lack of warnings, and failure to implement proper safety protocols.

05 — Sycophancy

Sycophancy in AI refers to a bot’s tendency to be overly agreeable, always saying what it thinks the user wants to hear. In a mental health chatbot, this could mean the AI validates everything the user says — even if it’s untrue or unhelpful — just to keep the user happy. While users enjoy feeling affirmed, sycophantic behaviour can reinforce negative thoughts or bad decisions.

Pros

Short-term user satisfaction. An overly agreeable chatbot might make the user feel good or validated in the moment. By mirroring the user’s opinions and feelings without challenge, the bot creates a conflict-free interaction, keeping the user engaged and comfortable venting. By avoiding contradiction, sycophantic bots minimise the moments where users have to confront uncomfortable truths or rethink their position. In UX terms, this can smooth the flow of conversation and reduce friction.

ChatGPT

Cons

Reinforces negative thoughts and behaviours. In therapy, simply agreeing with everything the patient says is poor practice; the goal is to help challenge cognitive distortions and encourage healthier thinking and behaviour. Sycophancy in a mental health context hinders users’ personal growth by failing to provide the challenge and feedback that support behavioural change. It may even validate harmful ideas, exacerbating their conditions.

Psychological growth often involves learning to sit with discomfort and think critically about one’s situation. If a chatbot is always there to validate and agree, users may avoid the hard but necessary work of challenging their own thoughts and perceptions. They may also become less willing to confront challenging or uncomfortable situations in their real-life relationships.

ChatGPT

06 — Inclusivity

Inclusivity means designing the chatbot to be usable and helpful for people of all backgrounds and abilities. In mental health, this involves addressing cultural, linguistic, gender, and accessibility differences so that the bot’s support is equitable and free of bias.
An inclusive bot can better serve marginalised or diverse users, fostering trust and reducing disparities in care.

Pros

Reduces bias and delivers fairer treatment. Prioritising inclusivity means actively working to remove biases in an AI’s responses that might otherwise produce incorrect information, wrong treatment recommendations, and worse health outcomes. In one study, researchers found that GPT-4’s responses demonstrated lower levels of empathy toward Black and Asian users compared with white users or those whose race was unspecified.

The payoff of this effort is that the chatbot provides a more consistent quality of care across different user groups, without privileging one group over another. Inclusivity-focused design reduces the chance that the bot will produce microaggressions, discriminate against certain groups, or exacerbate social inequalities.

Culturally relevant support. People’s experiences with AI chatbots for mental health or emotional support are strongly influenced by their culture and identity. Most chatbots today are designed from a ‘Westernised’ perspective on healing and are primarily available in English, which doesn’t align with the cultural and language needs of diverse users. For example, while some people find comfort in prayer or ancestral healing practices, many chatbots predominantly offer practices like meditation and Cognitive Behavioural Therapy (CBT).

AI chatbots need to be trained on culturally diverse datasets and designed to incorporate culturally sensitive communication styles. This approach not only broadens their accessibility but also enables deeper alignment with users’ cultural values and healing rituals, fostering therapeutic growth.

With their promises and pitfalls, mental health chatbots and GenAI emotional support companions are here to stay. The big question now is: how can we mitigate the unintended or negative consequences of design choices in these new technologies so that they serve human wellbeing and support human flourishing? And how can we design AI-driven mental health products and GenAI emotional companions to augment human-to-human care, rather than replace it? These questions will guide the second part of this article.

Other resources

• Neel Dozome, ‘We need to talk about AI and mental health’, UX Collective, 20 January 2024.
• Andy Bhattacharyya, ‘The rise of Robo Counseling’, UX Collective, 10 September 2019.

The promises and pitfalls of UX in AI-driven mental health care was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • UXDESIGN.CC
    What Slack’s invite flow teaches us about virality
    A closer look at the mechanics behind Slack’s PLG growth loopContinue reading on UX Collective »
  • UXDESIGN.CC
    When users change their behavior to game the algorithm
How our awareness is breaking the social media algorithm.

It is said that the eyes of the Mona Lisa (Leonardo da Vinci) follow you around the room, an illusion now known as the ‘Mona Lisa Effect’.

There was a time when social media was simple. You followed people, liked posts, and as a result you got shown more of the same. But now the feeds we scroll through are less about who we follow, and more about how we behave.

If you watch a reel for longer than three seconds, you can expect more like it. If you linger on a photo, pause mid-scroll, replay a TikTok – that’s the new like. Platforms today care far less about your explicit choices, and more about your passive ones. And honestly, it was revolutionary – at first.

This behavioural model promised a more authentic insight into our preferences. After all, what we do is often far more telling than what we say. It’s clever, subtle, and even somewhat intimate. But as we’ve come to understand how these systems work, we’ve also begun to perform for them, whether consciously or not.

Knowing you’re being watched

There’s a well-documented psychological phenomenon known as the Hawthorne effect – named after a series of productivity experiments in the 1920s at the Hawthorne Works factory. Researchers found that workers altered their behaviour simply because they knew they were being observed. More broadly, this aligns with the observer effect in behavioural science: our awareness of surveillance alters our actions.

The Hawthorne Works factory where productivity experiments took place (circa 1925). Source: Western Electric Company Photograph Album

Now apply that to your online behaviour. We’ve started to understand how the algorithm thinks. We might clear our watch history to reset our feed. We might even find ourselves clicking on content not because we necessarily want to watch it, but because we want to train the algorithm. We avoid pausing too long on something we don’t want to see more of, or to be associated with going forward. The system still watches us, but we’re no longer behaving naturally. We’re gaming it.

This feedback loop becomes flawed not because the algorithm isn’t smart, but because the data it collects is no longer clean. We’ve turned from subjects to strategists. And once that happens, how effective can behavioural-based content delivery really be?

The tension at the heart of the algorithm

There’s actually a deeper problem here, and it’s not just technical. Modern recommendation systems rely heavily on intent inferred from passive signals. But when these signals are harvested without the users’ full understanding, it challenges core principles of ethical design, especially informed consent and autonomy.

A 2020 report by the Ada Lovelace Institute highlighted how opaque algorithmic systems undermine user agency, particularly when platforms fail to explain how recommendations are made or to allow users to meaningfully contest them.

Do we really understand and consent to how our social media feeds are being populated? Source: Cottonbro Studio
It raises some uncomfortable questions:

• Is it ethical to personalise content based on signals users aren’t aware they’re giving?
• Are users being manipulated into feeding the system, rather than served by it?
• Do we have a duty to design for agency, and not just engagement?

“People cannot be empowered in an environment where they do not understand how decisions about them are made.” — Ada Lovelace Institute, Rethinking Data

When algorithms adapt based on our most subtle behaviours, especially without transparency, we edge into the territory of surveillance design, as explored by Tristan Harris and the Center for Humane Technology. And that should give all of us pause for thought.

Tristan Harris, Center for Humane Technology. Source: Center for Humane Technology

So what should designers do?

We play a key role in shaping how these systems feel, function, and inform. If we acknowledge that the behavioural model is faltering under the weight of its own manipulation, we need to take responsibility for evolving it ethically. Here are five design principles to consider:

• Design for agency, not just efficiency. Make it easy for users to understand why they’re seeing something and to change it if they want to. The Mozilla Foundation recommends practices like explanation panels (“You’re seeing this because…”) and controllable filters.
• Use behavioural data responsibly. Yes, it can be useful. But we must ask whether passive signals are fair or representative. The UK Information Commissioner’s Office suggests distinguishing between observed and provided data when determining consent boundaries.
• Make the invisible visible. Help users understand what’s being tracked and why. Surfacing insights builds trust – something tech desperately needs. Look to platforms like Spotify, which offers limited explanations in its ‘Discover Weekly’ playlists.
• Prioritise consent beyond checkboxes. True consent is ongoing and contextual. UX researcher Cennydd Bowles argues for ‘consentful design’, where interactions are continuously negotiated rather than locked behind an initial ‘agree to all’.
• Question the metric. Engagement is not a proxy for wellbeing. Facebook whistleblower Frances Haugen revealed how internal teams struggled with this exact issue – knowing that what keeps users hooked isn’t always what’s good for them.

Spotify’s Discover Weekly playlist gives the user limited context about how the playlist has been created. Source: Spotify

Looking to an ethical future of content delivery

If the behavioural model is beginning to crack under the weight of our awareness, where do we go next? Do platforms double down, trying to outsmart the user? Do they return to less accurate, but more honest and explicit signals? Or is the future something else entirely – something slower, more intentional, and more ethical?

We’re entering an era where the assumptions behind content delivery need to be revisited. If we change our behaviour when watched, and we now all know we’re being watched, then we’re often feeding the machine a performance rather than our preference. And any system built on performance, rather than authenticity, eventually loses its grip on reality. Is the user still even enjoying the experience?

It’s time we designed systems that respect the user not just as a data point, but as a conscious, consensual ally.
Because the future of UX can’t just be personalised; it also needs to be principled.

When users change their behavior to game the algorithm was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • UXDESIGN.CC
    Salesforce & Shopify CEOs just declared war on human-only teams
Here’s your 21-day career battle plan.

Image by Author

Last Wednesday, I watched a Fortune 1000 executive eliminate an entire team with a single prompt. This is not science fiction. It’s the new reality of AI-first companies.

If you’re paying attention, there’s no way this is news to you. Just three years ago, it took a team of 14 people — designers, copywriters, strategists, project managers, and engineers — to launch a new product line. You were likely one of those key players. I founded and ran an agency that assembled teams like that. (Those were the good old days 😉.)

Last month, I witnessed a team of three do the same work in one-fifth of the time (with less drama, too). What happened to the other 11 professionals? They’re still doing what they’ve always done — just for companies that will soon no longer exist.

Here’s the uncomfortable truth most leaders won’t tell you: we’re not witnessing a mere efficiency boost in how work gets done. We’re experiencing a fundamental reinvention of what work actually is. The primitives, the basic building blocks of value creation, have completely changed.

If you’re merely learning to “use” AI tools within your existing workflow, you’re preparing for a world that no longer exists. It’s like becoming the world’s best horse trainer the year after Ford released the Model T.

Time to stop beating a dead horse if you want to flourish in the age of AI. In this article, I’ll hit you with some incontrovertible evidence that the AI revolution is in full swing. Then I’ll arm you with a three-week blueprint for your own evolution, not only to keep your job safe but also to help you scale to new heights.

The new primitives of value creation

AI hasn’t just added a new tool to our belt; it has created entirely new primitives that are reshaping how work happens:

• From workflows to orchestration: The old world was built on predictable, linear workflows. The new world runs on the dynamic orchestration of AI agents that can handle entire processes end to end.
• From execution to prompting: Value used to come from executing skills. Now it comes from the ability to structure problems, craft precise prompts, and curate outputs.
• From individual work to AI force multiplication: Success isn’t just about what you can do; it’s about how you amplify your impact by directing swarms of AI capabilities.
• From knowledge to pattern recognition: Storing information in your head is worthless when AI can access all human knowledge. The premium is now on recognizing novel patterns across domains.
• From specialization to full-stack synthesis: Deep specialization is becoming commoditized. The new elite are full-stack professionals who can synthesize across technical, creative, and strategic domains.

This isn’t theoretical. It’s happening now, and the gap between AI-native and AI-resistant (or even just tentative) professionals is already creating winner-take-all outcomes.

The emergence of the digital workforce

The most profound transformation happening right now isn’t just AI enhancing human work — it’s the emergence of an entirely new class of workers: AI agents.

Salesforce CEO Marc Benioff recently made a declaration that sent shockwaves through corporate America: “My message to CEOs right now is that we are the last generation to manage only humans.”

According to Benioff, we are entering an era where executives will lead hybrid workforces that consist of both humans and autonomous AI agents.
And he’s hastening it — in February 2025, he announced that Salesforce would not hire any engineers this year due to productivity gains from AI agents. The company’s Agentforce platform handled 380,000 customer service conversations in 90 days with an 84% resolution rate; only 2% of requests required human intervention.

The implications are staggering. As Benioff put it, “We are really moving into a world now of managing humans and agents together.” His company is positioning itself to become “the №1 digital labor provider, period,” in what he calls a “trillion-dollar digital labor revolution.”

McKinsey’s 2025 report confirms this trend, noting that AI is creating a state of “superagency” in which human workers collaborate with autonomous AI agents across entire workflows. Instead of just augmenting individual tasks, these agents can now handle complex processes end to end, from simulating product launches to orchestrating marketing campaigns.

The Shopify wake-up call

Even as digital workers emerge, the human side of the equation is also undergoing a radical transformation. Recently, Shopify CEO Tobi Lütke released an internal memo with a game-changing directive that didn’t mince words: “Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI.”

In other words, AI is the expectation, and humans are the exception. Lütke’s memo outlined principles that every organization will eventually adopt:

• AI as a fundamental skill: Using AI effectively is now as basic as using email or the internet.
• Continuous requalification: Employees must improve at the rate of company growth (20–40% annually) just to maintain their position.
• AI before headcount: Teams must demonstrate why they can’t accomplish goals with AI before requesting more human resources.
• Performance culture tied to AI proficiency: AI usage is now factored into performance reviews and advancement.

This isn’t about supplementing work with AI — it’s about reimagining work entirely. As Lütke told his employees, with “reflexive and brilliant usage of AI,” his top performers are achieving “100X the work done” compared to previous benchmarks.

The three deadly sins of AI adaptation

Based on my work with dozens of design teams navigating the AI transition, I’ve identified three self-sabotaging behaviors that almost guarantee professional irrelevance.

1. Treating AI as a “feature,” not a co-worker

The teams falling behind see AI as just another tool in their toolkit, like upgrading from Sketch to Figma. The teams leapfrogging ahead treat AI as a creative sparring partner whose capabilities they need to intimately understand and direct.

When a UX researcher at one of my client companies started treating Claude as a co-researcher rather than merely a transcription summary tool, they completely reinvented their discovery process. The researcher now focuses on asking better questions and interpreting nuanced emotional cues — things AI still struggles with — while delegating pattern recognition and synthesis to AI.

This aligns with what Nvidia CEO Jensen Huang recently predicted: “The IT department of every company is going to be the HR department of AI agents in the future.” As Huang explained, IT teams will be “maintaining, nurturing, onboarding, and improving” digital agents that can perform knowledge work instead of just managing software.
2. Clinging to process over outcomes

“But this is how we’ve always done it” has become a death knell for creative careers. A CMO I worked with insisted their team maintain their traditional brief-to-concept-to-execution workflow, just with AI “enhancing” each step. Meanwhile, their competitor completely reimagined their approach: they now run 50 campaign concepts simultaneously, use AI to test micro-variations, and only bring in human creatives to elevate the highest performers. Guess which company is delivering better results with one-third of the headcount?

3. Learning AI tools instead of learning to think differently

The most dangerous trap is focusing on technical AI proficiency while neglecting the meta-skills that actually matter. I’ve watched countless professionals diligently learn prompt engineering tactics while completely missing the strategic revolution happening around them. They’re becoming excellent carriage drivers just as automobiles are taking over the streets. The professionals who are thriving aren’t just learning how to use AI — they’re fundamentally rewiring how they conceptualize their value in an AI-augmented world.

The requalification revolution

What does it mean to “requalify” yourself in an AI-first business landscape? It means recognizing that your technical skills — the ones you spent years perfecting — are rapidly becoming commoditized. The InDesign expertise, coding proficiency, or copywriting techniques that once made you uniquely valuable are now being democratized by AI at breathtaking speed.

What remains scarce and valuable are uniquely human capabilities that machines struggle to replicate:

• Asking unexpected questions: AI excels at answering questions but remains primitive at knowing which questions matter.
• Contextual intelligence: Understanding the subtle cultural, historical, and emotional undertones that shape human behavior.
• Creative leaps: Making non-obvious connections between disparate fields, industries, and ideas.
• Strategic empathy: Not just understanding user needs, but anticipating unstated desires and fears that might never appear in data.

The new career hierarchy

The harsh reality is that a three-tier professional hierarchy is rapidly emerging:

• AI Directors: Those who orchestrate AI capabilities to achieve business outcomes, focusing on strategy and connecting human needs to technological possibilities. These people are seeing their value and compensation skyrocket.
• AI Collaborators: Knowledge workers who effectively pair with AI tools to amplify their specialized expertise. These professionals are maintaining relevance but face constant pressure to climb to the director tier.
• AI Users: Those who simply employ AI to perform traditional tasks more efficiently. These roles are experiencing commoditization, shrinking demand, and declining compensation.

The question isn’t whether your job will change — it’s which tier you’ll occupy in the new hierarchy.

Learning to unlearn: the path forward

How do you reposition yourself in this rapidly evolving landscape? It starts with systematically unlearning limiting mental models:
• Unlearn linear career progression: The days of mastering one skill set and gradually climbing a predictable ladder are over. The new model requires constant reinvention and lateral skill development.
• Unlearn the specialist mindset: While deep expertise still matters, the most valuable professionals are “T-shaped” — combining depth in one area with breadth across disciplines that AI can help them navigate.
• Unlearn execution-focused value: If your primary contribution is executing tasks (even complex ones), you’re vulnerable. Shift toward framing problems, connecting contexts, and guiding strategy.
• Unlearn the perfectionism trap: AI-first companies move exponentially faster, prioritizing rapid experimentation over flawless execution. Perfect is the enemy of employed.
• Unlearn solo heroics: The most valuable skill isn’t doing everything yourself — it’s orchestrating a blend of human and AI capabilities to achieve outcomes beyond what either could accomplish alone.

Your 21-day career reimagination blueprint

The window for adaptation isn’t years — it’s weeks. Here’s a 21-day plan to completely transform your professional approach.

Week 1: deconstruct your value

• Day 1 — Conduct a brutal AI audit: Write down every task you perform and rank each according to AI advantage vs. human advantage. (You might also grade a few tasks as equal.) Don’t panic when you review the results.
• Day 2 — Find your leverage points: Where do you add distinct value? And why are those aspects not easy to automate (i.e., creativity, judgment, relationship building)?
• Day 3 — Develop your AI team roster: Identify a handful of AI tools most relevant to your role — remember, they’re not apps; they’re specialized “team members” that you direct.
• Day 4 — Map your intelligence system: Diagram your workflow, noting how AI can amplify your human advantages and where AI could use your input (i.e., curation, refinement).
• Day 5 — Observe top performers: Who do you know who is thriving with the help of AI, and what patterns do you notice (i.e., in delegation, etc.)?
• Day 6 — Find your obsolescence triggers: The future is here, so what advancements would make your current approach obsolete? (You can ask AI to help you research startups currently working on those capabilities 😉.)
• Day 7 — Reimagine your role: Write a job description for yourself in 12 months that assumes 50% of your current tasks are automated. What’s still valuable? And what new responsibilities do you foresee?

Week 2: build your intelligence system

• Day 8 — Set up your AI collaboration environment: Create dedicated workspaces for your AI interactions, including templates to standardize AI inputs and outputs.
• Day 9 — Master strategic prompt design: Practice writing prompts that produce actionable outputs. Tap into great resources like OpenAI’s cookbook or Anthropic’s guide to prompt engineering. How can you iterate and refine your prompts based on feedback?
• Day 10 — Build your first automated workflow: Take one repetitive process, automate it, and note how much time you save and how quality improves. Tools like Zapier’s AI or Plumb make this stupid easy these days.
• Day 11 — Practice “thought partnering” with AI: Spend a full day using AI as a thought partner on a complex problem. What insights came up that you wouldn’t have reached on your own?
• Day 12 — Develop data interpretation skills: Practice extracting meaningful insights from AI-generated analyses — notice where AI hallucinates vs. where it provides reliable information.
• Day 13 — Experiment with AI-augmented creativity: Have some fun using AI to expand your creative options — try combining multiple AI outputs to generate something novel. MidJourney, Ideogram, or OpenAI’s GPT-4o model make this super easy.
• Day 14 — Create an AI training protocol: Develop a system to continuously improve your AI’s outputs based on your feedback, and document how you’ll “train” your AI collaborators to better understand your expectations.

Week 3: reposition your professional identity

• Day 15 — Redefine your value proposition: Rewrite your professional bio to emphasize orchestration and strategic thinking. Be sure to remove any mention of skills that AI has already commoditized.
• Day 16 — Develop your AI fluency narrative: Create talking points explaining how you leverage AI to deliver superior outcomes — remember, you’re in charge, so be sure you sound empowered by (not diminished or threatened by) the technology.
• Day 17 — Build your intelligence network: Connect with others in your field who are embracing AI-first approaches; share learnings and create accountability groups to help you continue to grow and expand your knowledge.
• Day 18 — Quantify your new value: Measure the productivity differential in your AI-augmented workflow, taking care to document specific examples where AI collaboration created previously impossible outcomes.
• Day 19 — Future-proof your development plan: Pick three meta-skills to develop over the next quarter and identify specific milestones for evolving your AI collaboration approach.
• Day 20 — Rehearse your AI-native pitch: Practice articulating why your AI-collaborative approach delivers superior results. Remember, AI is an extension of your capabilities, not a replacement for them.
• Day 21 — Relaunch your professional identity: Update your portfolio, LinkedIn, or resume to reflect your AI-native approach. From now on, you see the world (and all professional opportunities) through the lens of human-AI collaboration.

The race is on: time to fight the great replacement

As you now realize, the question isn’t whether AI will take your job. The question is whether you’ll evolve quickly enough to create a new kind of value that neither humans nor AI could produce alone. A quick recap of the three paths ahead:

• Resist and become (even more) irrelevant: Hold onto outdated work patterns and slowly watch your market value decline.
• Adapt incrementally and barely survive: Learn to use AI tools within your existing framework and cling to diminishing opportunities.
• Transform fundamentally and thrive: Reimagine your entire professional identity around the new primitives of value creation.

You now have a three-week experiment to make yourself indispensable. Hit me up in the comments if you have questions, or if I can make suggestions to help you master the AI-driven world of work.

Salesforce & Shopify CEOs just declared war on human-only teams was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • UXDESIGN.CC
    Design tokens for non-designers
    Understanding design tokens as a developer, product owner or project managerContinue reading on UX Collective »
  • UXDESIGN.CC
    Should your AI sound human — or like you?
Should your AI sound human — or like you?

What to know before designing a voice agent.

A few months ago, I received a voicemail that made me pause. It was warm, clear, and casually confident: “Hey there! Just checking in to see how you’re doing and if you need anything.” I thought it was a friend. It wasn’t. It was a voice AI.

For a split second, I didn’t know how to feel. That uncanny chill — it wasn’t fear. It was disorientation. Like meeting someone you almost remember, but not quite. Like hearing your own laugh from a stranger’s mouth. This wasn’t just the uncanny valley. This was the uncanny voice.

We know what the uncanny valley looks like. But what does it sound like?

The uncanny valley, classically, refers to robots or animations that look almost human — but not quite.

Source: Uncanny Valley by Anika H. ‘26

Their near-humanness makes them eerie. Voice, it turns out, has its own version of this — not just in how words are pronounced, but in how emotions are expressed. Too perfect, and it feels fake. Too flat, and it feels dumb. Somewhere in between is where we get that gut-level reaction: this is not okay.

As voice AI evolves, more companies are confronting this invisible threshold. Do you want your voice agent to sound real? How real? Real like a receptionist? Real like you? Let’s back up.

What voice agents are actually doing

Voice AI isn’t just about speech synthesis. It’s an identity layer. Every time a customer hears that voice, they’re forming an impression — of your product, your professionalism, your trustworthiness. So, what persona do you give that voice?

a16z, AI Voice Agents: 2025 Update

Here are the common approaches:

• The Owner Clone: A voice that mimics the founder or owner. It feels personal, familiar, and trustworthy — especially for small businesses where the owner’s voice is the brand. Think boutique fitness studios, clinics, or neighborhood cafes. It works best when the owner is already the face (and voice) of the brand. Example: Be My Eyes, a visual assistance app for the blind, now integrates voice AI for which the founder’s tone and phrasing were studied to train the assistant — maintaining trust and familiarity for users used to hearing from the original human volunteers.
• The Brand Proxy: A crafted voice that sounds like the brand. Think luxury? It’s calm and elegant. Think fast food? It’s upbeat and fun. This voice is scripted, styled, and tuned to the brand’s identity — it doesn’t matter who is speaking, only how. Example: Taco Bell uses an upbeat, slang-savvy voice in its AI drive-thru systems to keep the tone light and playful, matching the youthful, quirky spirit of the brand.
• The Clear Robot: It owns its AI-ness. “Hi, this is an automated assistant.” No pretense, no confusion. It’s designed to be efficient, transparent, and unobtrusive. It doesn’t try to be human — it tries to be useful. Example: IBM watsonx Orchestrate delivers clear, robotic voice agents for customer service, especially in complex B2B environments where trust is built through clarity, not charm.

Each has tradeoffs.
But the big question is: should you be upfront that it’s AI, or try to pass for human?

What the Research Actually Says

Multiple studies confirm what our instincts already hint at: people trust AI voices more when they know they’re artificial, but they prefer ones that sound natural, warm, and expressive.

A study by researchers at the University of Tokyo explored public attitudes toward AI ethics and found that transparency significantly shapes user trust and perception of fairness — especially when users understand they’re interacting with an AI system. When AI identity is disclosed, people tend to recalibrate their expectations and evaluate the system more thoughtfully.

At the same time, researchers found that users responded more positively to voice assistants that exhibited human-like social cues — such as prosody, conversational timing, and vocal warmth. These cues enhanced the perceived social presence of the assistant, leading to increased user satisfaction and trust. This research highlights the importance of incorporating human-like social cues in voice assistants to improve user experience and satisfaction.

So we’re walking a tightrope:

Too robotic = users disengage.
Too human-like without disclosure = users feel tricked or uncomfortable.

The sweet spot? ✨ Sound natural and expressive — without pretending to be human. That’s the balance top players are trying to strike.

OpenAI’s Voice Engine and ElevenLabs’ cloning tech can now replicate a human voice from just 15 seconds of audio. That’s not future-speak — that’s happening now (OpenAI example, ElevenLabs). In internal tests, the clones are indistinguishable from the real speaker in blind trials.

This unlocks some powerful use cases — but it also raises ethical red flags. When you can recreate someone’s voice this accurately, restraint isn’t just recommended — it’s required.

What Happens When It Sounds Too Real?

Imagine your therapist’s voice calls you. It remembers your last session. It uses your name. It checks in with warmth and pauses just long enough to feel like it’s truly listening. But it’s not your therapist. It’s an AI — a voice model trained to sound just like them.

Startups like Sesame AI are closing the gap between machine and human with startling precision. Their voices carry presence — not just sound, but the illusion of being there with you. And presence, when faked, becomes performance.

You didn’t consent to that. And even if you did, it still feels off. Because the more human a voice becomes, the more responsibility it carries. We don’t want a tool pretending to care. We want it to know it’s a tool. Or at the very least — we want to know.

So What Should We Do?

Transparency wins trust. Always let people know they’re talking to AI. Example: Replika introduces itself as an AI companion and uses visual cues and tone to clearly differentiate itself from a human. Users build trust because they know what they’re interacting with.

Familiarity builds connection. Voices that match the tone, rhythm, and energy of your brand feel more coherent. Example: Duolingo’s voice assistant uses playful, expressive voices that mirror its brand personality — casual, quirky, and encouraging. The voice reinforces the entire learning experience.

Slight imperfections help. Just like design embraces whitespace, voice AI should embrace subtle pauses, slight hesitations, and emotional range. Example: Sesame AI intentionally incorporates “voice presence” — pacing, prosody, and even momentary silence — to create interactions that feel authentic without being deceptive.

And most importantly: voice should feel like a choice. Let people opt into voice. Example: Google Assistant is available across platforms but always gives users a fallback — tap, type, or ignore. No pressure. That autonomy builds loyalty. If your customer would rather text, let them. If they like voice, make it a delightful one.
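To make these recommendations concrete, here is a minimal sketch of how a voice agent’s greeting might encode them: disclose the AI identity up front, keep the tone brand-appropriate, and offer a non-voice fallback. Everything here (the `VoicePersona` shape, `buildGreeting`, and the sample values) is hypothetical and for illustration only, not any vendor’s actual API.

```typescript
// Hypothetical persona config for a voice agent. Nothing here maps to a
// real vendor API; it only illustrates the "natural but disclosed" balance.
interface VoicePersona {
  displayName: string;        // how the agent introduces itself
  disclosesAI: boolean;       // per the research above, this should be true
  tone: "warm" | "playful" | "neutral"; // brand-appropriate delivery style
  allowTextFallback: boolean; // voice should feel like a choice
}

function buildGreeting(persona: VoicePersona): string {
  // Lead with disclosure so users can recalibrate their expectations,
  // then offer the non-voice escape hatch.
  const disclosure = persona.disclosesAI
    ? `Hi, I'm ${persona.displayName}, an automated assistant.`
    : `Hi, I'm ${persona.displayName}.`;
  const fallback = persona.allowTextFallback
    ? " You can talk to me, or switch to text at any time."
    : "";
  return disclosure + fallback;
}

console.log(
  buildGreeting({
    displayName: "Sage",
    disclosesAI: true,
    tone: "warm",
    allowTextFallback: true,
  })
);
// -> "Hi, I'm Sage, an automated assistant. You can talk to me, or switch to text at any time."
```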
Where This Is Going

In a few years, most small businesses will have voice agents by default. It’ll be weird if they don’t. The winners won’t be the ones who sound the most human. They’ll be the ones who sound the most intentional. Not perfect. Not indistinguishable from people. Just clear about who they are, consistent with the brand they represent, and conscious of the emotional space they’re entering every time they speak.

Because the future of voice isn’t about passing the Turing Test. It’s about earning trust — one word at a time.

References:
A Mixed-Methods Approach to Understanding User Trust after Voice Assistant Failures
Trust in Virtual Agents: Exploring the Role of Stylization and Voice
  • UXDESIGN.CC
    The AI trust dilemma: balancing innovation with user safety
From external protection to transparency and user control, discover how to build AI products that users trust with their data and personal information.

Image generated by AI

We’re standing at the edge of a new era shaped by artificial intelligence, and with it comes a serious need to think about safety and trust. When AI tools are built with solid guardrails and responsible data practices, they have the power to seriously change how we work, learn, and connect with each other daily.

Still, as exciting as all this sounds, AI also makes a lot of people uneasy. There’s a lingering fear — some of it realistic, some fueled by headlines — that machines could replace human jobs or even spiral out of our control. Popular culture hasn’t exactly helped either; sci-fi movies and over-the-top news coverage paint AI as an unstoppable force that might one day outsmart us all. That kind of narrative just adds fuel to the fear.

There’s also a big trust gap on the business side of things. A lot of individuals and companies are cautious about feeding sensitive information into AI systems. It makes sense — they’re worried about where their data ends up, who sees it, and whether it could be used in ways they didn’t agree to. That mistrust is a big reason why some people are holding back from embracing AI fully. Of course, it’s not the only reason adoption has been slow, but it’s a major one.

The safety and trust triad

When it comes to AI products — especially things like chatbots — safety really boils down to two core ideas: data privacy and user trust. They’re technically separate, but in practice you almost never see one without the other. For anyone building these tools, the responsibility is clear: keep user data locked down and earn their trust along the way.

From what I’ve seen working on AI safety, three principles consistently matter:

People feel safe when they know there are protections in place beyond just the app.
They feel safe when things are transparent — not just technically, but in plain language too.
And they feel safe when they’re in control of their own data.

The Safety and Trust Triad pattern

Each of these stands on its own, but they also reflect the people you’re building for. Different products call for different approaches, and not every user group reacts the same way. Some folks are reassured by a simple message like “Your chats are private and encrypted.” Others might want more, like public-facing security audits or detailed policies laid out in plain English. The bottom line? Know your audience. You can’t design for trust if you don’t understand the people you’re asking to trust you.

1. Users feel safe when they know they are externally protected

Legal regulations

Different products and markets come with different regulatory demands. Medical and mental health apps usually face stricter rules than productivity tools or games.

Privacy laws also vary by region. In the EU, GDPR gives people strong control over their data, with tough consent rules and heavy fines for violations. The U.S. takes a more fragmented approach — laws like HIPAA (healthcare) and CCPA (consumer rights) apply to specific sectors, focusing more on flexibility for businesses than sweeping regulation. Meanwhile, China’s PIPL (Personal Information Protection Law) shares some traits with GDPR but leans heavily on government oversight and national security, requiring strict data storage and transfer practices.

Why does this matter? Ignoring these regulations isn’t just risky — it can be seriously expensive.
Under GDPR, fines can hit up to 4% of global annual revenue. China’s PIPL goes even further, with potential penalties that could shut your operations down entirely. Privacy is a top priority for users, especially in places like the EU and California, where laws like the CCPA give people real control over their data. They expect clear policies and transparency, not vague promises.

When you’re building an AI chatbot — or planning your broader business strategy with stakeholders — these legal factors need to be part of the conversation from day one. If your product uses multiple AI models or third-party tools (like analytics, session tracking, or voice input), make sure every component is compliant. One weak link can put your entire platform at risk.

Emergency handling

Another critical piece of building responsible AI is planning for emergencies. Say you’re designing a role-playing game bot, and mid-conversation, a user shares suicidal thoughts. Your system needs to be ready for that — pause the interaction, assess what’s happening, and take the right next steps. That could mean offering crisis resources, connecting the user to a human, or, in extreme cases, alerting the appropriate authorities.

Character.ai: mental health crisis help message

But it’s not just about self-harm. Imagine a user admitting to a serious crime. Now you’re in legal and ethical gray territory. Do you stay neutral? Flag it? Report it? The answer isn’t simple, and it depends heavily on the region you’re operating in. Some countries legally require reporting certain admissions, while others prioritize privacy and confidentiality. Either way, your chatbot needs clear, well-defined policies for handling these edge cases before they happen.
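As a rough sketch of the escalation flow described above, here is what “pause, assess, respond” logic might look like. The keyword list and function names are illustrative assumptions only; a production system would use trained classifiers, clinically reviewed resources, and region-specific reporting policies rather than string matching.

```typescript
// Simplified escalation sketch; illustrative only.
type Risk = "none" | "self-harm";

// A real system would use a trained classifier, not a keyword list.
const SELF_HARM_SIGNALS = ["suicide", "kill myself", "end my life"];

function assessRisk(message: string): Risk {
  const text = message.toLowerCase();
  return SELF_HARM_SIGNALS.some((s) => text.includes(s)) ? "self-harm" : "none";
}

function handleMessage(message: string): string {
  if (assessRisk(message) === "self-harm") {
    // Pause the role-play, surface crisis resources, and flag the
    // session for human review according to regional policy.
    return (
      "I'm pausing our conversation because I'm concerned about you. " +
      "If you're in crisis, please contact a local helpline or reach out to someone you trust."
    );
  }
  return continueConversation(message);
}

function continueConversation(message: string): string {
  return `(normal bot reply to: ${message})`; // placeholder for the usual flow
}
```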
Preventing bot abuse

People push the limits of AI for all sorts of reasons. Some try to make it say harmful or false things, some spam or troll just to see what it’ll do, and others try to mess with the system to test its boundaries. Sometimes it’s curiosity, sometimes it’s for fun — but the outcome isn’t always harmless.

Stopping this behavior isn’t just about protecting the bot — it’s about protecting people. If the AI generates misinformation, someone might take it seriously and act on it. If it’s pushed into saying something toxic, it could be used to hurt someone else or reinforce bad habits in the user who prompted it.

Flagged message for violating content guidelines

Take misinformation, for example. If someone tries to make the AI write fake news, the goal isn’t just to block that request. It’s to stop something potentially damaging from spreading. The same goes for harassment. If someone’s trying to provoke toxic or harmful replies, we intervene not just to shut it down, but to make it clear why that kind of behavior matters. In the long run, it’s about building systems that support better conversations — and helping people recognize when they’ve crossed a line, even if they didn’t mean to.

Safety audits

Many AI products claim to conduct regular safety audits. And they should, especially in the case of chatbots or personal assistants that interact directly with users. But sometimes it’s hard to tell how real those audits are. That doubt grows when you check a company’s team page and see only one or two machine learning engineers. If the team seems too small to realistically perform proper safety checks, it’s fair to question whether these audits are truly happening, or if they’re just part of the marketing pitch.

If you want to build credibility, you need to do the work — and show it. Run actual safety audits and make the results public. It doesn’t have to be flashy — just transparent. A lot of crypto projects already do this with security reviews. The same approach can work here: show your commitment to privacy and safety, and users are much more likely to trust you.

Backup AI models

OpenAI introduced the first GPT model (GPT-1) in 2018. Despite seven years of advancement, GPT models can still occasionally freeze, generate incorrect responses, or fail to reply at all.

OpenAI status page

For AI professionals, these issues are minor — refreshing the browser usually resolves them. But for regular users, especially paying subscribers, reliability is key. When a chatbot becomes unresponsive, users often report the problem immediately. While brief interruptions are frustrating but tolerable, longer outages can lead to refund requests or subscription cancellations — a serious concern for any AI product provider.

One solution, though resource-intensive, is to implement a backup model. For instance, GPT could serve as the primary engine, with Claude (or another LLM) as the fallback. If one fails, the other steps in, ensuring uninterrupted service. While this requires more engineering and budget, it can greatly increase user trust, satisfaction, and retention in the long run.
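A minimal sketch of that fallback pattern, assuming hypothetical `callPrimary` and `callBackup` wrappers around whichever providers you use; these are placeholders, not real SDK calls:

```typescript
// Fallback wiring sketch. `callPrimary` / `callBackup` are placeholders
// for real provider clients (e.g., an OpenAI or Anthropic SDK call).
async function callPrimary(prompt: string): Promise<string> {
  throw new Error("primary model unavailable"); // simulate an outage
}

async function callBackup(prompt: string): Promise<string> {
  return `(backup model reply to: ${prompt})`;
}

async function generateReply(prompt: string, timeoutMs = 10_000): Promise<string> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error("primary model timed out")), timeoutMs)
  );
  try {
    // Race the primary model against a timeout so a hung request degrades
    // gracefully instead of leaving the user staring at a spinner.
    return await Promise.race([callPrimary(prompt), timeout]);
  } catch {
    return callBackup(prompt); // the backup quietly takes over
  }
}

generateReply("Hello!").then(console.log); // -> "(backup model reply to: Hello!)"
```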
2. Users feel safe when the experience is transparent

Open communication

“Honesty is the best policy” applies in AI just as much as anywhere else. Chatbots can feel surprisingly human, and because we tend to project emotions and personality onto technology, that realism can be confusing — or even unsettling. This is part of what’s known as the uncanny valley, a term coined by Masahiro Mori in 1970. While it originally referred to lifelike robots, it also applies to AI that talks a little too much like a real person. That’s why it’s so important to be upfront about what the AI is — and isn’t. Clear communication builds trust and helps users feel grounded in the experience.

Clear AI vs. human roles

When designing AI chat experiences, it’s important to make it clear that there’s no real person on the other side. Some platforms, like Character.ai, handle this directly by adding a small info label inside the chat window. Others take a broader approach, making sure the product description and marketing clearly explain what the AI is and what it’s not. Either way, setting expectations from the start helps avoid confusion.

Character.ai: example of a disclaimer

Be clear about limitations

Another key part of designing a responsible AI experience, especially for a specialized bot, is being upfront about what it can and can’t do. You can do this during onboarding (with pop-ups or welcome messages) or in real time, when a user runs into a limitation.

Examples of limitation disclaimers

Let’s say a user is chatting with a role-play bot. Everything’s on track until they ask about current events. In that moment, the bot — or its narrator — should gently explain that it wasn’t built for real-world topics, helping the user stay grounded in the experience without breaking the flow.

Respect users’ privacy

One of the most important parts of building a chatbot is keeping conversations private. Ideally, chats should be encrypted and not accessed by anyone. But in practice, that’s not always the case. Many AI chatbot creators still have full access to user sessions. Why? Because AI is still new territory, and reviewing conversations helps teams better understand and fine-tune the model’s behavior.

If your product doesn’t support encrypted chats and you plan to access conversations, be upfront about it. Let users know, and give them the choice to opt out, just like Gemini does.

Gemini: privacy disclaimer

Some chats may contain highly sensitive info, and accessing that without consent can lead to serious legal issues for you and your investors. In the end, transparency isn’t just ethical — it’s necessary to earn and keep users’ trust.

Reasoning & sources

AI hallucinations still happen — just less often than before. A hallucination is when the model gives an answer that sounds right but is actually false, misleading, or entirely made up. These issues usually come from gaps in training data and the fact that AI predicts language without truly understanding it. For users, it can feel unpredictable and unreliable, leading to a general lack of trust in AI systems.

One way to fix that? Transparency. Showing users where the information is coming from — even quoting exact paragraphs from trusted sources — goes a long way in building confidence.

Gemini: reasoning & sources

Another great addition is real-time reasoning. If the assistant is doing online research, it could show the actual steps it’s taking, along with the logos or URLs of the sources it’s pulling from. These small touches make the whole experience feel more grounded, trustworthy, and accountable.

Easily discoverable feedback form

When launching an AI product, users tend to give a lot of feedback, especially early on. Most of it falls into two main categories:

Technical issues — bugs, unexpected behavior, or problems caused by third-party components.
Feature requests — missing functions or ideas for improving the experience.

Feedback modal

For example, in one product I worked on, users reported an issue with emoji handling in voice mode. The text-to-speech system struggled with processing emojis, creating an unpleasant noise instead of skipping or interpreting them naturally. This issue never appeared during internal testing, and we only discovered it through user feedback. Fortunately, the fix was relatively simple.

3. Users feel safe when they have control over their data

Let people decide what they want the assistant to remember

One of the biggest strengths of AI is its ability to personalize, offering timely, relevant responses without users having to spell everything out. It can anticipate needs based on past chats, behavior, or context, creating a smoother, smarter experience.

Gemini: memory settings

But in practice, it’s more complicated. Personalization is powerful, but when it happens too quickly — or without clear consent — it can feel invasive, especially if sensitive topics are involved. The real problem? Lack of control. Personalization itself isn’t the issue — it’s whether the user gets to decide what’s remembered. To feel ethical and respectful, that memory should always be something the user can review, edit, or turn off entirely.

The downside of personalization

There’s a common belief that some tech companies listen to our conversations to serve us better-targeted ads. While giants like Google and Facebook haven’t confirmed this, a few third-party apps have been caught doing exactly that. Sometimes, ads are so specific it feels like your phone must be eavesdropping. But often, it’s just highly advanced tracking — using your search history, location, browsing habits, and even subtle online behavior to predict what you might want.

Whether active listening is real or not, this level of personalization can backfire. Instead of feeling smart or helpful, it makes users feel watched. It creates mistrust, raises privacy concerns, and gives people the sense they’ve lost control over their data.

Ethical and enjoyable AI personalization pattern

What makes AI personalization feel right

For AI personalization to feel ethical — and actually enjoyable — it needs to be built around the user, not just the data. That means:

Transparent — people should know exactly what’s being collected, how it’s used, and why. Clarity builds trust.
User-controlled — let users decide how much personalization they’re comfortable with. Give them the tools to adjust it.
Context-aware — personalization should grow over time. It should feel natural, not like the AI is watching your every move from the start.

The real challenge isn’t how much we can personalize — it’s how much users are actually okay with. Give them control, and they’ll lean in. Take it away, and even the smartest AI starts to feel creepy.

Adding messages to the memory

For example, in a therapeutic chatbot, users could:

Choose what the AI remembers — manually selecting which personal details should be saved.
Delete specific memories — giving users the ability to forget things, instead of the AI storing everything by default.
Flag sensitive topics — so the AI can avoid them or respond more gently, giving users a greater sense of safety.
Switch to incognito mode — allowing users to open up without anything being remembered.

By putting users in charge of what’s remembered and how it’s handled, the experience becomes empowering, not invasive. It’s about personalization with consent, not assumption.

GPT: temporary chat
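As a sketch of what those controls could look like in code, here is a hypothetical `UserMemory` store where nothing is written during incognito mode and every entry can be reviewed or deleted. The names and shapes are illustrative assumptions, not any product’s real API:

```typescript
// Illustrative user-controlled memory store. The point: every write is
// skippable, and every entry can be inspected or removed by the user.
interface MemoryEntry {
  id: number;
  fact: string;
  sensitive: boolean; // flagged topics can be handled more gently
}

class UserMemory {
  private entries: MemoryEntry[] = [];
  private nextId = 1;
  incognito = false; // while true, nothing is stored at all

  remember(fact: string, sensitive = false): MemoryEntry | null {
    if (this.incognito) return null; // honor temporary-chat mode
    const entry: MemoryEntry = { id: this.nextId++, fact, sensitive };
    this.entries.push(entry);
    return entry;
  }

  review(): MemoryEntry[] {
    return [...this.entries]; // the user can audit everything that's stored
  }

  forget(id: number): void {
    this.entries = this.entries.filter((e) => e.id !== id); // user-initiated deletion
  }
}

const memory = new UserMemory();
memory.remember("prefers morning sessions");
memory.incognito = true;
memory.remember("mentioned a health concern", true); // not stored
console.log(memory.review()); // only the first, consented entry remains
```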
Offer users local conversation storage

As I dive deeper into privacy in AI chatbots, one approach keeps standing out: giving users the option to store conversations locally. A few products already do this, but it’s still far from the norm.

Storing data on the user’s device offers maximum privacy — no one on the app side can access any messages, yet the chatbot stays fully functional. It’s a model that puts control back in the user’s hands. In many ways, it feels like a near-perfect solution.
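A minimal sketch of device-only persistence using the browser’s standard `localStorage` API; the storage key and message shape are arbitrary choices for illustration:

```typescript
// Device-only chat persistence. Messages never leave the browser.
const STORAGE_KEY = "chat-history"; // arbitrary key name

interface ChatMessage {
  role: "user" | "assistant";
  text: string;
  ts: number; // timestamp in milliseconds
}

function loadHistory(): ChatMessage[] {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
}

function appendMessage(message: ChatMessage): void {
  const history = loadHistory();
  history.push(message);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(history));
}

function clearHistory(): void {
  localStorage.removeItem(STORAGE_KEY); // a user-initiated "forget everything"
}

appendMessage({ role: "user", text: "Hello!", ts: Date.now() });
console.log(loadHistory().length); // 1, and only on this device
```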
While local conversation storage offers strong privacy benefits, it also comes with a few challenges:

User confusion — less tech-savvy users might not understand why their chat history is missing across devices. Unlike cloud storage, local storage is tied to a single device, which can lead to frustration.
Storage limits — text is lightweight, but over time, longer chats or AI-generated content (like documents or images) can add up, especially for users who use AI frequently.
No persistent memory — since the data never leaves the device, the AI can’t “remember” past conversations unless the user brings them up manually. One workaround is temporarily re-sending old messages to the bot during a session, but that can increase data usage and slow things down.
External APIs — if your app uses third-party services, you’ll need to double-check that they comply with local data storage policies, especially when sensitive information is involved.

Local conversation storage: challenges

Offer app-specific password protection

One often-overlooked but valuable privacy feature is app-specific PIN protection, similar to what we see in banking apps. Before accessing their account, users are asked to enter a PIN, password, or use face recognition. Chatbots can hold highly sensitive conversations, so applying the same kind of protection makes sense. Requiring users to verify their identity before opening the app adds an extra layer of security, ensuring that only they can access their chat history.

Revolut, Wise: PIN entry screens

Conclusion

As we’ve seen throughout this article, building trust in AI products means putting real thought into safety, transparency, and user control. There’s no one-size-fits-all solution — approaches need to be tailored to the market, the regulations, and most importantly, the users themselves.

Strong privacy protections benefit everyone — not just users, but also product teams and investors looking to avoid costly mistakes or damage to reputation. We’re still in the early days of AI, and as the technology grows, so will the complexity of the challenges we face.

The future of AI is full of potential — but only if we design with people in mind. By creating systems that respect boundaries and earn trust, we move closer to AI that genuinely supports and enhances the human experience.

References I recommend going through:

Growing public concern about the role of artificial intelligence in daily life by Alec Tyson and Emma Kikuchi for Pew Research Center
Some frontline professionals reluctant to use AI tools, research finds by Susan Allot for Civil Service World
Data Privacy Regulations Tighten, Forcing Marketers to Adapt by Md Minhaj Khan
I Asked ChatGPT if I Could Use it as a Teen Self-Harm Resource by Judy Derby
Tay: Microsoft issues apology over racist chatbot fiasco by Dave Lee for BBC
NewtonX research finds reliability is the determining factor when buying AI, but is brand awareness coloring perceptions? by Winston Ford, NewtonX Senior Product Manager
The Creepy Middle Ground: Exploring the Uncanny Valley Phenomenon by Vibrant Jellyfish
Chai App’s Policy Change (Reddit thread)
What are AI hallucinations? by IBM
Understanding Training Data for LLMs: The Fuel for Large Language Models by Punyakeerthi BL
92% of businesses use AI-driven personalization but consumer confidence is divided by Victor Dey for VentureBeat
In Control, in Trust: Understanding How User Control Affects Trust in Online Platforms by Chisolm Ikezuruora for privacyend.com
  • UXDESIGN.CC
    How to bridge the gap (and work effectively) at siloed organizations
The #1 problem designers face? Seeming like a bad blind date.
Continue reading on UX Collective »
  • UXDESIGN.CC
    I broke my leg and learned some hard UX lessons
The world is even less accessible than I thought.
Continue reading on UX Collective »
  • UXDESIGN.CC
How Chinese factories are quietly destroying the luxury brand image
As Chinese manufacturers reveal the true cost of luxury, are iconic brands losing their branding?
Continue reading on UX Collective »
  • UXDESIGN.CC
    Gimme some privacy
Here’s how to stay compliant with new privacy laws without making your product experience suck.
Continue reading on UX Collective »
  • UXDESIGN.CC
    Everything’s a vibe: is it progress or just an illusion?
In a world where feeling productive replaces thinking deeply, are we innovating or just vibing our way into deceptive flow?
Continue reading on UX Collective »
  • UXDESIGN.CC
    Slow growth, emotional residue, 10 Figma hacks, from idea to vibe coding
Weekly curated resources for designers — thinkers and makers.

“Today’s pine trees are bred to grow fast to meet the demands of modern lumber production. They mature in about half the time, but with far fewer growth rings. And those rings matter. Fewer rings mean weaker lumber. The fibers are looser, the boards are lighter, and the structural integrity just isn’t the same.”

A case for slow growth → By Jon Daiello

Editor picks

Designing for emotional residue over functional outcomes → Why design’s most human contribution is now its most strategic advantage. By Peter Barber
Healthcare needs interior decorators → Why you need to be an interior decorator of perception. By Himanshu Bharadwaj
Design isn’t dead. You sound dumb → The problem of clueless critics, inflated egos, and AI panic. By Nate Schloesser

The UX Collective is an independent design publication that elevates unheard design voices and helps designers think more critically about their work.

The People’s Graphic Design Archive →

Make me think

No code is dead; long live vibe coding → “Natural language is proving to be a more powerful interface than drag-and-drop WYSIWYG editors. And most importantly, people don’t want to be locked into proprietary runtimes. They want actual code. They want control. They want to scaffold, edit, and deploy anywhere.”

How to hire → “Established talent presents problems. They come with fixed ideas. They’ve developed methods at previous jobs and don’t want to change them. They believe they know the ‘right way’ to do things, which often conflicts with how your team works.”

The precise language of good management → “The most common example of imprecise language is when someone asks you in a 1:1 ‘how am I doing?’ Very few managers are ready to answer this question well on the spot. But managers answer the question anyway and often say things like: ‘Oh you’re doing well, communication could improve a bit but overall you’re doing well.’”

Little gems this week

DesignShift: from mindset to access → By Ida Persson
There’s always more pie → By Trip Carroll
The 500-year-old underdog no one is talking about → By Rosie Hoggmascall

Tools and resources

10 Figma hacks I wish I’d known earlier → Smart design shortcuts that make your workflow faster. By nurXmedov
I finally understand what FAANG wants in a candidate → 6 rules on “how to tango” in interviews. By Rita Kind-Envy
Testing your UX ideas with vibe coding → How UX designers can use AI app builders to their advantage. By Allie Paschal

Support the newsletter

If you find our content helpful, here’s how you can support us:

Check out this week’s sponsor to support their work too
Forward this email to a friend and invite them to subscribe
Sponsor an edition
  • UXDESIGN.CC
    The oppressive foundation of minimalist design
What we call “good design” may just be cultural conformity in disguise.

Image source: Adobe Stock

Minimalism is often celebrated for its elegance, restraint, and efficiency. In the design industry, it’s treated like gospel — remove the unnecessary, emphasize hierarchy, and let form follow function. But beneath this aesthetic lies a deeper inquiry — one rarely discussed in design circles. Whose values are we actually glorifying when we praise minimalism?

And more provocatively, is minimalist design, in all its refined simplicity, a quiet form of oppression — a systematic vehicle for cultural exclusion? An exclusion that determines whose aesthetics are elevated, whose are silently dismissed, and who is left to navigate the system as an outsider.

The Aesthetic of Control

Minimalism thrives on clarity, structure, and hierarchy — traits that align neatly with Enlightenment thinking and Western ideals of logic, order, and reductionism. These values, while seemingly universal, are not. They stem from a particular historical and cultural context, one rooted in dominance, colonization, and control.

Philosopher Michel Foucault might say that design — like architecture or language — is a “technology of power.” The grid systems we use, the margins we enforce, the insistence on whitespace and alignment — they’re all part of a system that reflects not just taste, but a worldview. Minimal design doesn’t just organize information — it disciplines it.

The Grid Bible | Image source: Amazon

And perhaps, as the philosopher Friedrich Nietzsche would argue, it does something even deeper — it reflects a will to power. Not through brute force, but through aesthetic order. Through reduction and restraint, minimalism asserts control. It tells you what matters by erasing what doesn’t. And in doing so, it enacts a subtle hierarchy — not just of elements, but of culture.

The Myth of Neutrality

Minimalism often presents itself as neutral. But neutrality, in design, is a myth. Neutral to whom? A layout built on sparse text, generous white space, and rigid hierarchy may feel “clean” to some — but sterile, even alienating, to others. Especially for cultures that value ornamentation, symbolism, or layered narrative over reduction and restraint.

And then there are the symbols. The hamburger menu. The magnifying glass. The three dots that mean “more.” These icons aren’t self-evident — they’re shorthand for those who already know. Minimalism often depends on this kind of silent agreement. It strips away labels and context, assuming users will intuit meaning from minimal cues. But what we call intuitive is often just familiar — familiar to a specific group.

In this way, simplicity becomes a private language. It doesn’t invite everyone in — it filters them out. What’s praised as “uncluttered” might simply be inaccessible, making it not just about taste — but about power and privilege.

Brutalist web design | Source: https://blog.hubspot.com/website/brutalist-website-design

Brutalist design — both online and in architecture — challenges the minimalist ideal by embracing what minimalism rejects: asymmetry, texture, and visual noise. These elements disrupt uniformity and invite interpretation. They reflect the messiness of real life — imperfect, layered, unresolved.

Similarly, many Indigenous design traditions resist reduction through symbolic patterns, storytelling, and vibrant color. These aren’t just stylistic flourishes — they’re carriers of memory, identity, and worldview.
When minimalism erases the canvas, it risks erasing the meaning too.

Even typography isn’t exempt. The dominance of sans-serif fonts in digital design — often framed as modern and professional — isn’t a neutral choice. It’s a cultural one. Helvetica doesn’t feel clean by accident. It feels clean because we’ve been trained to believe it does.

Nietzsche, the Grid, and the Will to Order

Nietzsche argued that our obsession with order stems from a fear of chaos — a need to impose structure on a world that resists it. The minimalist grid may be less about elegance and more about control. We reduce the world to systems so we can dominate it. Design, in this sense, becomes an act of taming — not understanding.

When minimalism flattens all complexity in the name of user-friendliness, it risks becoming what Nietzsche might call Apollonian tyranny — a world ruled by order, logic, and restraint. Nietzsche, borrowing from Greek mythology, frames Apollo as the god of order and rationality, and Dionysus as the god of chaos and passion. Minimalism tends to side with Apollo — favoring clarity and control — but in doing so, it often silences the raw, emotional power of the Dionysian.

“Apollo and Dionysus” by Leonid Ilyukhin

Minimalism as Symbolic Action

And it’s not just minimal aesthetics that carry this bias — it’s the language of minimalist design itself. Sparse microcopy and the omission of language in favor of sleek labels — these aren’t just usability decisions. They’re rhetorical choices that reflect cultural values.

Rhetorician Kenneth Burke reminds us that language is never neutral — it’s symbolic action, a way of acting in the world, not just describing it. While Western rhetorical traditions often elevate clarity, reduction, and efficiency, Burke forces us to challenge that ideal. He argues that every language system is a terministic screen — a filter that directs our attention toward some ideas while pushing others into the background.

Take microcopy, for instance. A feedback form might feature a single minimalist button labeled “Submit.” It’s sterile, authoritative — a final command issued to the user. But change it to “Send Your Feedback” or “Post Your Response,” and something shifts. The system no longer demands compliance — it invites participation. The interface becomes less like a machine, more like a conversation. Language doesn’t just label the action — it defines the relationship. That’s something minimalism often strips away.

Inclusive Design Requires Philosophical Courage

If design is to be inclusive, we must first admit that our definition of “good” design is not universal. We have to dig deeper — past aesthetic preference, usability heuristics, and even language — and ask ourselves who we’re really designing for. Are we simplifying, or are we erasing? Are we guiding users, or are we training them to comply?

Design theorist Anne-Marie Willis talks about “ontological designing” — the idea that design not only shapes products, but shapes us. If we keep reinforcing the same aesthetic systems, such as minimalism, we’re not just shaping interfaces — we’re shaping minds. We are, consciously or not, encoding dominant cultural values into every decision we make.

Toward a Pluralistic Design Ethic

The solution isn’t to abandon minimalism — but to reframe it. Minimalism should be seen not as a universal truth, but as one design dialect among many — shaped by specific histories, values, and assumptions. Just as language expands to include more voices, so should design.

This begins with intentional plurality.
Rather than defaulting to one aesthetic framework, we should make space for multiple design traditions — each with its own logic of clarity, rhythm, symbolism, and structure. These aren’t embellishments — they’re entire systems of meaning.

Practice contextual minimalism. Use restraint when it serves the message, not just because it’s trendy. In some contexts, simplicity communicates trust. In others, richness communicates truth.

Design for the unfamiliar. Don’t assume fluency in minimalist language. Use labels, provide orientation, layer meaning. Accessibility isn’t just about ability — it’s about meeting people where they are, culturally and cognitively.

Allow aesthetic code-switching. Great design systems can flex. A product’s interface can be calm and clear in one moment, and expressive in another — depending on context, purpose, and audience.

Inclusive design isn’t about flattening everything into neutral sameness. It’s about designing with humility, curiosity, and a willingness to let go of sacred cows.

Final Thought

Minimalism isn’t oppressive by default. But when it becomes the default, it can quietly reinforce power by erasing culturally diverse perspectives. If we want a truly inclusive design practice, we have to be willing to question even the most celebrated rules of our craft. Because sometimes, complexity is the truer path to clarity.
  • UXDESIGN.CC
    Create a vision that actually gets built
Let’s get those ideas out of Figma and into the world.

Image: Unsplash

We’ve all seen shiny prototypes, hype videos and slick decks depicting cool visions of the future of a product. But how many of them actually see the light of day? Let’s dive into some tips and resources to craft a compelling vision, get stakeholder buy-in, and avoid the common pitfalls of vision work.

01 — Align vision work to planning cycles.

While there’s never a bad time to work on a vision, there are some times of the year that make it more likely to succeed.

Understand your company’s planning cycles. Time the work such that the outcomes can feed into defining priorities for the next sprint, quarter, or fiscal year. Work with your manager and team to understand how this works and for advice on timing.

At my last company, quarterly planning would start 1–2 months before the start of the quarter and annual planning would start in the last quarter (Q4) of the fiscal year. Working backwards, the best time to embark on vision work for bigger initiatives was typically the end of Q2/Q3, so that the outcomes of the vision work could spark conversations that then fed into planning for the next fiscal year.

What other key dates should you be aware of? Think about recurring design reviews, leadership reviews, quarterly planning, annual planning, and other meetings that can serve as opportunities to get visibility and feedback.

02 — Make time to work on it.

In my experience, most vision work starts off as a side project outside of day-to-day work. It can therefore be easy to kick the can on this work, especially when there are more urgent deadlines looming. But if it’s important, we gotta make time for it.

Put your project manager hat on. This is a skill in a designer’s toolbox that doesn’t typically get talked about. However, the ability to align the team on goals, scope, and timelines and move projects forward is an incredibly useful muscle as you level up.

Create a plan. Think through what work needs to be done (e.g. competitive analysis, customer interviews, workshop planning, etc). Sequence the work (what needs to be done before other activities or decisions). Finally, set timelines for each milestone along with who is driving each piece of work.

Keep the plan updated. It’s perfectly normal for plans to evolve. Perhaps customer interviews are taking a little longer. Or you find that you can re-use some past work instead of starting from scratch.

Account for vision work in your priorities every sprint just like you would with other work. How much time will you/your team be able to dedicate to this? What deliverables will you/your team work on? Share updates with your team during stand-ups and check-ins to hold everyone accountable.

Resources: Make Time by Jake Knapp, Eisenhower Matrix (important vs urgent), The Four Phases Of Project Management

03 — Co-create the vision with your stakeholders.

The more you involve your stakeholders from the start, the more invested they’re likely to be.

Ensure cross-functional representation so that you can tackle the problem from different angles. It also helps avoid surprises later on that typically result from the right people not being part of the process.

Define roles. Who are the drivers? Who is in the core team? Who needs to be kept in the loop?
Ask your team for advice on who should be involved and to what extent.

Well-planned design sprints and workshops are great for getting the team together for a short, focused period of time and making a lot of progress. Think about what makes sense to do before, during, and after the workshop or sprint. For example, working with your data analyst to pull relevant customer demographic and behavioral data before a workshop can help your team focus on using the insights to identify opportunities and brainstorm solutions. The goal is to ensure you’re using your team’s time for the most important decisions and activities.

Resources: IDEO DesignKit’s Facilitator’s Guide, The Design Sprint, Workshop Bot 5000, DACI framework for assigning roles and accountability

04 — Stay grounded in customer empathy.

A good vision is grounded in a clear and specific customer problem or insight. It’s hard to design the future without a deep enough understanding of the audience you’re designing for.

Get clear on the customer problem you’re setting out to address. Who are you designing for? What do they care about? Having this foundational knowledge will help you and your team move faster and make informed decisions along the way.

Balance qual and quant data to give you deep customer empathy as well as a broad enough understanding of customer behavior, demographics and opportunities.

You don’t need to know everything about your users, but you do need to know enough to make a confident start. It’s helpful to keep track of and explicitly call out assumptions you’re making and open questions that need further validation.

Resources: When to use which research methods and Recognize Strategic Opportunities with Long-Tail Data by NN/g, Universal Methods of Design by Bruce Hanington, IDEO DesignKit Methods

05 — Align the vision to business goals.

Build business empathy (just as you would customer empathy) and use it to speak the language of your stakeholders.

Understand the competitive landscape you’re playing in. Who are your direct, indirect and potential competitors? What are their strategies? How does your product compare? What are the trends happening in your industry?

Understand the True North goals of your company, product and product area. Use these learnings to inform your process. Look at your decisions from both a customer and business lens. Showing how your vision can address an important customer problem while also achieving business goals will set you up for success in getting buy-in from your stakeholders.

Speak the language of your stakeholders. Frame your vision in terms of business impact and outcomes. How will it help your team/company achieve its True North goals? Will it accelerate the path to achieving them? How will it help your product stay competitive?

Resources: d.mba Strategy Design Sprint activities, Good Strategy Bad Strategy by Richard P. Rumelt

06 — Choose the right time horizon.

Have you ever seen a vision prototype that’s so far into the future that no one actually takes it seriously? Aligning your team on the time horizon you’re designing for can help your vision resonate better with your audience.

Define how far in the future you’re designing for. What makes sense for your team given where you’re at? This will likely also tie back to the purpose of this work in the first place. Are you painting a picture of what the product could look like a year from now? 2–5 years? 5–10 years?
This will dramatically change the altitude of your conversations and the kind of constraints you’re working with.

Balance challenging constraints with being grounded in what’s realistic for your team/company. For example, if you’re at a bigger company and designing for a year out, it’s unlikely you’ll be able to completely overhaul the design system or completely change the tech stack you’re working with. But there may be opportunities to expand or re-work key parts of it and bring new features to market. Whereas 5–10 years out is a significant enough time for big technological shifts to happen and for today’s constraints to matter much less.

Talk to your team and leaders to get clear on what makes sense for your given company, project, and stage in the product lifecycle. It’s better to get these things ironed out earlier on in your process.

Regardless of the timeframe, I like mixing in a few near, medium and long term ideas. This way, at least a few things feel doable right now while also feeling bold and aspirational enough to inspire people.

07 — Make it feel real.

Use your design superpower to make the abstract idea feel real. Seeing people light up when they see your vision come to life is not only rewarding but also one of the best ways to get people excited in a visceral way.

Choose the right medium that works for the story you want to tell. From Figma prototypes, to short videos, to working prototypes, there are so many options to choose from.

Tell a compelling story that shows how your vision addresses an important customer need and opportunity.

Share it with your team early and often for feedback, and iterate. I’m a fan of sharing things early (even with leadership) for feedback rather than doing a big reveal at the end. With the latter, I have no idea whether the work will resonate — that’s way too much suspense. Getting feedback sooner helps me feel confident that the story and vision will resonate when we do a more formal presentation.

Focus on communicating the core idea; the moments that matter. Remember, you’re not designing the final experience that you will ship. It’s okay to leave things out and remove details that get in the way.

Resources: Future press release, prototyping with AI, storyboarding

08 — Socialize the vision.

Share it. Talk about it. Ask for help.

Bring your team along on the journey. Your immediate team should be key contributors early on. However, it’s also helpful to keep other stakeholders informed along the way. Keep them updated on your progress, learnings, and decisions. It’ll help pique their curiosity and build interest in the work.

Create a shared understanding of what you’re working towards. Sharing vision work can be a great way to begin all-hands, kick-offs and other milestone meetings. Share it in Slack threads, newsletters, and 1:1 conversations with anyone who’ll listen. It can also spark conversations and opportunities for collaboration with other teams.

Find your champions along the way; the people that really get it and are excited about what you’re saying. Get their help. Let them amplify the work and find opportunities to build onto it.

Let your excitement show in how you talk about the work and in your body language. It’s sure to spread to your team. If you’re thinking “…but I’m not exactly excited about it”.
Then find something in the vision — the customer problem, the opportunity to build a new skill, the chance to work with a new technology — that excites you.

09 — Think big, start small.

One of the reasons visions fail to gain traction is that they’re too big, intimidating and risky. Breaking the big vision into small but meaningful steps can help de-risk the endeavor.

Identify the biggest assumptions/hypotheses to test. These are the things that absolutely need to be true in order for your vision to work. I find it helpful to break them into customer, business and technical assumptions.

Get some quick wins to build momentum. Identify experiments, proof-of-concepts, MVPs, or pilots you can build and launch in a short amount of time. The wins will help validate key elements of the vision and keep the momentum going. If you’re having trouble getting this prioritized on the roadmap, think about ways to break it down into the tiniest chunk of work that you/your team can work on on the side. Or, find ways to incorporate these ideas into work you’re already doing.

Prioritize the most important and impactful aspects of the vision. If you could only do one (or two… or three) things from the vision, what would you start with? How might you break up the work into phases? Work with your team to sequence the work.

Resources: The Lean Startup, Impact-Effort Matrix, The art of de-risking innovation, How to design experiments for your product

10 — Keep the vision alive.

Your vision shouldn’t be something that’s done once and forgotten about. Keep its spirit alive in your day-to-day.

Use it to frame your designs and thinking. For example, show your product’s current state, your near-term design solution, and the vision/ideal state side-by-side. It’s a simple but powerful way to highlight how you’re setting the product up for the future and help bolster your design rationale for near-term work.

Re-visit the vision and iterate on it as you run experiments, launch new features and gather feedback from internal and external stakeholders. What would you change given what you’ve learnt? What would you evolve to make it relevant for the next [timeframe] years? As you iterate, share how you’ve evolved the vision back with stakeholders to show your progress.

Make envisioning the future a regular activity. You probably don’t need to do this every quarter, but thinking about, visualizing and aligning on a shared vision is something teams can benefit from doing every year — even if it’s not necessarily re-imagining the whole product from scratch.

Reference the vision in day-to-day conversations, and incorporate artifacts into Figma files, slide decks, and other mediums that are easily shareable.

Parting thoughts…

As our design tools become increasingly powerful at producing design artifacts, I believe it’s our ability to think strategically, influence people, and shape the future of our products that will help us continue to play valuable roles on the teams we’re on 💪
  • UXDESIGN.CC
    Will AI make graphic designers extinct?
AI and enhanced design templates make good graphic designers more essential than ever.
Continue reading on UX Collective »
  • UXDESIGN.CC
The Roblox creator illusion
How a platform claiming to empower kids actually extracts billions from them.

Roblox is frenetic, enticing, and often confounding to young designers. Who really benefits from the leading game-experience creation platform?

Recently, I found myself sitting across from an earnest 11-year-old who was excitedly telling me about the game he was building in Roblox. His eyes lit up when he mentioned that one day, his creation might make him “tons of money, just like those other Roblox developers.”

I smiled and nodded, but inside I felt a twinge of discomfort. As a game designer who’s spent over a decade studying how games and economies interact, I knew something this child didn’t: the financial deck is stacked heavily against him. Of course, someone would be making “a ton of money” from his activity on the platform (and that of other kids like him), but that someone would be Roblox itself.

Source: Roblox via PR Newswire

This conversation wasn’t new to me. Over the years, I’ve heard similar dreams from countless children, their parents, and even educators who’ve bought into Roblox’s carefully crafted narrative about empowering young creators. It’s a powerful story: kids learning coding, developing entrepreneurial skills, and potentially earning real money from their creations. What’s not to love?

But there’s a gulf between Roblox’s marketing claims and its economic reality that’s rarely discussed. Today, I want to pull back the curtain on how Roblox’s economy actually works, who really profits from it, and why its “creator empowerment” narrative deserves serious scrutiny.

The creator-empowerment narrative

Roblox has masterfully positioned itself not primarily as a gaming platform but as an educational one (PR Newswire). The company emphasizes that its platform “sparks kids’ creativity, coding, and critical thinking abilities” that can grow into “lifelong skills” (iD Tech). This framing has been remarkably effective in getting parents, schools, and even mainstream media to see Roblox as more than just another video game.

Roblox sits at the nexus of many gaming value-adds. Source: Hackernoon

The company’s marketing heavily promotes entrepreneurship and financial opportunity: “a fun, rewarding experience and an opportunity to learn life skills such as creative thinking and entrepreneurship” (Roblox). It highlights that “for older children and young adults, Roblox provides the opportunity to earn Robux through their creations” by charging other users to play their games (Moonshot Jr). The Roblox FAQ for parents states, “By building coding and game development skills, kids can earn some serious cash online from home with Roblox.” Success stories of young developers who’ve supposedly earned enough to “purchase houses, cars, open their own gaming studios and pay their way through college” are frequently mentioned in their promotional materials (MIT).

Educational institutions have eagerly embraced this narrative. Roblox Education, for instance, is “dedicated to helping educators harness the power of Roblox to create immersive learning experiences that inspire creativity, collaboration, and critical thinking” (Young Entrepreneur Institute). Roblox itself claims to offer “free STEAM based lesson plans and resources for educators” to help educational programs “get up and running faster, with higher student success rates” (MIT).

This positioning has been remarkably successful. When Roblox made its debut in China, “the world’s biggest video game market, the platform was promoted primarily for its educational benefits” (Fast Company).
In the U.S., parents and teachers who might otherwise restrict “screen time” often make exceptions for Roblox because they’ve bought into the idea that it’s educational and potentially profitable for their children. But how well does this narrative hold up to scrutiny?

The reality: a multi-layered value extraction system

Beneath Roblox’s shiny educational veneer lies a sophisticated economic system designed to extract maximum value while sharing minimal returns with the vast majority of creators. This system employs multiple layers of obfuscation and economic barriers that most young creators (and their parents) don’t fully understand.

Currency obfuscation: the Robux shell game

At the heart of Roblox’s economy is Robux, its virtual currency. Like casino chips, Robux creates psychological distance between real money and virtual spending, making transactions feel less consequential. But unlike casino chips, Robux employs wildly different exchange rates depending on whether you’re buying or selling.

When players purchase Robux, they pay approximately $0.0125 per Robux (80 Robux for $1) at standard rates (G2A News). But when developers want to cash out through the Developer Exchange (DevEx) program, they receive only $0.0035 per Robux at the current exchange rate (Roblox Support). This disparity means there’s a 72% loss in value when converting from purchasing to cashing out. If a player spends $100 on Robux to purchase something from a creator, that creator can only exchange those earnings for about $28 in real money — and that’s before considering all the other fees and requirements.

Platform fees: the hidden tax

Beyond the exchange rate disparity, Roblox takes a substantial cut from every transaction on its platform. Since 2012, the marketplace fee for most transactions has been around 30% (Roblox Wiki). This means that of the Robux a creator earns from a sale, 30% is immediately taken by Roblox.

When combined with the exchange rate disparity, the effective revenue share becomes even more skewed. If a player spends $100 worth of Robux on a creator’s item, after the 30% marketplace fee and the DevEx conversion rate, the creator ends up with less than $20 in real money — an effective commission rate of over 80%. Roblox pays out about 25% of player spending to platform developers after all fees and cuts — often less.
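The arithmetic is easy to verify with the rates cited above (purchase rate from G2A News, DevEx rate from Roblox Support, and the 30% marketplace fee); a short sketch:

```typescript
// Effective creator payout, using the figures cited in this article.
const BUY_RATE = 0.0125;     // USD players pay per Robux (80 Robux per $1)
const DEVEX_RATE = 0.0035;   // USD creators receive per Robux via DevEx
const MARKETPLACE_FEE = 0.3; // Roblox's cut of each sale

const playerSpendUsd = 100;
const robuxSpent = playerSpendUsd / BUY_RATE;            // 8,000 Robux
const creatorRobux = robuxSpent * (1 - MARKETPLACE_FEE); // 5,600 Robux
const creatorUsd = creatorRobux * DEVEX_RATE;            // $19.60

console.log(`Creator receives $${creatorUsd.toFixed(2)} of a $${playerSpendUsd} purchase.`);
// -> "Creator receives $19.60 of a $100 purchase." An effective cut of over 80%.
```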
On top of the upload fee, some items require a "publishing advance" fee (Roblox), further increasing the upfront investment needed before earning a single Robux.

Exclusive exit controls: the DevEx program's restrictions

The Developer Exchange program, which allows creators to convert Robux to real money, is surrounded by restrictive requirements that keep many creators from ever cashing out:

- You must be at least 13 years old (Roblox Support) — ironic for a platform marketed heavily to children.
- You must have earned at least 30,000 Robux (Roblox Support). At standard rates, 30,000 Robux costs players about $375 to buy; because of the 30% marketplace cut, it takes roughly $536 of player spending on a creator's items to accumulate.
- You must have a verified email address and be in "good standing" within the community (Roblox Support).
- Only Robux earned from creating and selling content qualifies (Roblox Wiki). Robux obtained from trading or selling items created by others doesn't count.

These restrictions ensure that many young creators never reach the threshold to convert their earnings to real money, keeping value trapped in Roblox's ecosystem.

Popular pet simulator converts Robux to "gems" at mind-boggling conversion rates

Secondary currency layers: games within games

Many popular Roblox experiences compound these issues by implementing their own in-game currencies. Players might spend Robux to purchase a game's custom currency, adding yet another layer of conversion between real money and in-game value. This creates further psychological distance and makes it even more difficult to track the true cost of transactions.

Who really benefits?

With all these economic mechanisms in place, it's worth asking: who's actually making money on Roblox?

Roblox requires paid subscriptions to rival YouTube's standard revenue sharing. (Roblox)

The disparity of creator earnings

The statistics tell a stark story. As of December 2023, only nine developers or creators on Roblox had earned over 10 million U.S. dollars, while approximately five million Roblox developers or creators earned nothing at all (Statista). In other words, the top 0.00018% of creators earned over $10 million each, while the vast majority earned nothing. Even among those who did earn something, only about 16,500 developers were registered in the Developer Exchange Program out of over five million developers and creators who earned Robux (Statista). That means only about 0.33% of those who earned any Robux were able to convert it to real currency.

The platform vs. the creators

While Roblox boasts about payouts to developers, the numbers reveal a dramatic imbalance. In 2023, developers and creators in the Developer Exchange Program earned $740 million (Statista). This sounds impressive until you compare it to Roblox's own revenue. As of March 2024, app store fees accounted for 23% of each dollar spent on Roblox, with developers receiving approximately 29 cents per dollar spent (Statista). This means that Roblox and app stores together take about 71% of all spending on the platform. In 2022, Roblox generated $2.9 billion from in-game Robux purchases, while developers earned $620 million in the same year (Playtoday). That means Roblox kept approximately 79% of the revenue, with only 21% making its way to creators.

Roblox competitor Core Games offers a 50/50 split in creator revenue (Source: Core Blog)
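Both the headline split and the cash-out threshold are easy to sanity-check; a small sketch using the figures cited above, under the same assumed rates as before:

```typescript
// 2022 figures cited above: ~$2.9B in Robux purchases vs. $620M paid out to developers.
const robuxRevenue2022 = 2_900_000_000;
const developerEarnings2022 = 620_000_000;
console.log(`${((developerEarnings2022 / robuxRevenue2022) * 100).toFixed(0)}%`); // "21%"

// DevEx threshold: 30,000 earned Robux. Because Roblox keeps 30% of each sale,
// players must spend ~42,857 Robux on a creator's items for 30,000 to land in
// the creator's balance; at $0.0125 per Robux, that is ~$536 of player spending.
const thresholdRobux = 30_000;
const robuxPlayersMustSpend = thresholdRobux / (1 - 0.30);
console.log((robuxPlayersMustSpend * 0.0125).toFixed(0)); // "536"
```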
Despite Roblox’s marketing emphasis on child creators, the most profitable games are increasingly developed by professional studios rather than individual kids.These studios often employ teams of adult developers, artists, and marketers, hardly the young creators portrayed in Roblox’s promotional materials. The economics of the platform have matured to favor professional operations, leaving child creators further behind.The psychological impactThe gap between Roblox’s marketing promises and economic reality isn’t just a financial issue; it has psychological implications for young users.The cultivation of spending habitsRoblox’s economic design encourages spending while making earning difficult. Children learn that purchasing virtual items is easy and instantaneous, while the path to earning real money is filled with obstacles. This imbalance normalizes consumption over creation, despite the platform’s creative marketing.The exploitation of social pressureMany Roblox games leverage social dynamics to drive spending. Limited-time items, exclusive accessories, and status symbols within games create social pressure to spend. Children, who are particularly susceptible to peer influence, often feel compelled to buy Robux to keep up with friends or fit in with the community. I’ve written about these dark patterns before extensively.The devaluation of creative workPerhaps most troubling is how Roblox’s economic structure devalues creative labor. When children spend countless hours creating games that generate substantial engagement but minimal returns, they’re learning that creative work isn’t fairly compensated. This stands in stark contrast to the entrepreneurial values the platform claims to promote.What ethical creator platforms would look likeRoblox isn’t inherently bad — it offers genuine creative opportunities and has inspired many children to learn coding and design. The issue lies in the disparity between its marketing claims and economic reality.An ethical platform focused on empowering young creators might include:Transparent Economics: Clear, easy-to-understand information about exchange rates, fees, and realistic earning expectations. As Tristan Harris of the Center for Humane Technology argues, ethical design requires systems that align with users’ best interests rather than exploiting psychological vulnerabilities for profit. Harris has spent years advocating for technology that respects human attention and agency Humanetech, principles that should extend to economic models targeting children.Fair Revenue Sharing: A higher percentage of revenue going to creators, particularly young ones who may not have alternative income sources. Shoshana Zuboff’s work on surveillance capitalism provides important context here, as she describes how tech platforms often engage in “unilateral claiming of private human experience as free raw material” Harvard Gazette that primarily benefits the platform rather than users. Child creators deserve better protection from such extraction.Reduced Barriers to Entry: Lower or no fees for initial content creation, with costs recouped from successful content instead. Professor Sonia Livingstone’s research on children’s digital rights suggests platforms should be designed with children’s developmental needs and rights at the center. 
As she notes, "almost every aspect of children's lives has an online dimension" (Media@LSE), meaning that barriers to participation can have real consequences for young people's development and agency.

Accessible Cash-Out Options: Lower thresholds for converting virtual currency to real money, making earnings accessible to more creators. Dr. Katie Davis, who directs the Digital Youth Lab at the University of Washington, has researched what she calls "design abuse" in technology platforms aimed at children (Tilt Parenting), highlighting how economic systems can be designed to better support youth development rather than exploit it.

Age-Appropriate Economic Education: Tools and resources that help young users understand the platform's economy without exploitation. This aligns with what Livingstone and Davis both advocate for: digital environments that empower children with knowledge rather than obscuring how systems work to manipulate them.

Moving forward

Parents and educators need to approach Roblox with clearer eyes. The platform can indeed be educational and creative, but its economic promises deserve skepticism. Rather than accepting marketing claims at face value, adults should help children understand the platform's true economics and set realistic expectations.

For Roblox itself, there's an opportunity to align its economic reality with its marketing claims. A more equitable revenue share, lower barriers to monetization, and more transparent economics would go a long way toward making the platform's "creator empowerment" narrative more than just a slogan.

As for me, the next time a child tells me about their dreams of Roblox riches, I'll still smile encouragingly. But I'll also gently help them understand the platform's real economics and perhaps suggest additional pathways to turn their budding game design skills into future opportunities. After all, the creative spark Roblox ignites in children is real and valuable, even if the platform's economic promises often aren't.

The bottom line

Roblox has built a multi-billion-dollar business largely on the creative labor of children, while marketing itself as an educational platform that empowers young creators. The reality is a sophisticated economic system designed to extract maximum value while returning minimal profits to most creators. If Roblox wants to claim it's empowering the next generation of entrepreneurs, its economic model should actually do that. Until then, we should approach their educational and financial claims with the skepticism they deserve.

Sam Liberty is a gamification expert, applied game designer, and consultant. His clients include The World Bank, Click Therapeutics, and DARPA. He teaches game design at Northeastern University. He is the former Lead Game Designer at Sidekick Health.

Roblox's creator illusion was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • UXDESIGN.CC
    Reddit’s Shiny Secret: Gamifying for 1.1 Billion People
Inside the mechanics that make us want to show off online 🏆
Continue reading on UX Collective »
  • UXDESIGN.CC
    There’s always more pie
A man presents a woman with pie. She's ecstatic! Pies fill the kitchen and cover the floor. Credit: Me.

I work at Cisco. Recently, several business groups have been merged into a larger organization, fundamentally reshaping the way teams collaborate. With these kinds of large shifts, uncertainty is inevitable. Priorities may shift, roles may change, and long-standing projects could be reassessed or even abandoned.

I'm beginning to feel a bit uneasy… What if the work I've championed gets deprioritized? What if the relationships I've built no longer hold the same weight? What if I have to start over? What if there's not enough pie?

It's all about the pie

How big is the pie? How much is there to go around? Just a few slices? Can everyone have a slice? Or are there only a handful of morsels of which a few individuals can partake?

I know I discussed some of these ideas in my last article, but I think this is a mindset problem. In her book, Mindset, Carol Dweck defines two different mindsets that people can have:

A fixed mindset, in which a person's inherent qualities like intelligence and talent are set in stone. In my case, I'm feeling like my skills and my team's role are being challenged by this change. If my work no longer holds the same relevance, does that mean I don't either? If priorities shift, will I struggle to adapt? This mindset ties my worth to the stability of my role, making any disruption feel like a personal failure rather than a natural evolution of work and opportunity.

Or, a growth mindset, in which a person's qualities can change and be cultivated by effort, learning, and experiences. In this mindset, my value isn't tied to a specific role or project but to my ability to grow, adapt, and contribute in new ways. Instead of fearing change, I can view it as an invitation to expand my skill set, build fresh relationships, and discover new ways to create impact. If my previous work is no longer as relevant, that doesn't mean I'm obsolete — it means I have an opportunity to evolve, to find new problems worth solving, and to help shape what comes next.

And, if I'm able to challenge my fixed mindset, I'm pretty sure it's going to lead to more pie. Not just for me, but for everyone around me. BECAUSE WE CAN ALWAYS MAKE MORE PIE!

This organizational change isn't a zero-sum game. It's not about fighting over the last few slices. It's about realizing that with new people, new challenges, and new ideas, we're baking an entirely new pie. Bigger. Different. Maybe even better than before.

Instead of seeing change as a loss, I want to see it as an opportunity. An opportunity to learn, develop new skills, and exercise new levels of influence. A chance to push myself, build relationships, and contribute in ways I hadn't before. There are always going to be new, interesting problems that need solving. Change brings problems, problems bring opportunities, and opportunities bring us more delicious pie!

I want pie. And, unless you're a complete psychopath, I know you want pie too!

Big changes are on the horizon

I had an art instructor in grad school, Marshall Arisman. He was a human gashapon machine, dispensing mystical stories and incredible wisdom between drags on his cigarette. One of the things that I think about on almost a daily basis is something he said about ego.

"When you're making art, you need to throw your ego away. Too much of it, and you may be so afraid of failure that you never put a mark on the canvas.
Or, you may immediately fall in love with your paint strokes and never be able to put another one on the canvas.”Ego is the work killer. It forces you to focus on yourself and disrupts your ability to create art, to focus on the problems and opportunities in front of you.Some tools to get thereIn my path to make great work and eat lots and lots of pie, there are a few things from Dweck’s book that I’m trying to keep in mind.Cultivate a Growth Mindset in My Internal MonologueI’ve been paying close attention to my thoughts about these changes. Am I labeling myself or others as incapable? Am I assuming that negative outcomes are inevitable, like the pie is running out and there won’t be enough to go around?I need to challenge these fixed-mindset thoughts and reshape them into something more productive. Instead of thinking, “I’m not good at this new system,” I can tell myself, “This new system is challenging, but I can learn it with effort and practice.” Instead of seeing change as a threat, I can see it as an opportunity to bake more pie — new skills, new relationships, and new ways to contribute. Shifting my internal monologue from judgment to growth is a key part of maintaining the right mindset through this transition.Embrace Challenges as Opportunities for GrowthI want to thrive on challenges. Instead of seeing this organizational shift as a threat that might expose my limitations, I’m choosing to see it as an opportunity — to learn new skills, adapt to different situations, and expand what I’m capable of. When faced with a new role or responsibility due to this change, I want to approach it with curiosity and a belief that I can develop the necessary skills.I keep reminding myself: “The passion for stretching yourself and sticking to it, even (or especially) when it’s not going well, is the hallmark of the growth mindset.” In other words, if I want more pie, I have to roll up my sleeves and help bake it.Value Effort as the Path to MasteryIn a fixed mindset, I might see effort as a sign that I’m not good enough — that if I were truly talented, I wouldn’t have to work so hard. But a growth mindset reframes effort as the key ingredient for mastery. With all the changes happening, I know I’ll need to put in extra effort to learn new processes, navigate shifting structures, and rebuild relationships. Instead of dreading that effort, I want to embrace it, knowing that it’s part of my own evolution.As Dweck puts it, “effort is what ignites that ability and turns it into accomplishment.”Learn from Setbacks and FeedbackWhen navigating big organizational changes, mistakes and setbacks are inevitable. My fixed mindset wants to take these as signs that I’m not good enough — that maybe I was only succeeding because the conditions were familiar. But I know that’s not true. A growth mindset reminds me that setbacks are valuable learning experiences, not proof of inadequacy. Instead of seeing a burnt pie and assuming I’ll never bake one properly again, I need to take feedback, adjust my approach, and try again. Growth-minded people keep working at challenges, refining their craft, and ultimately making better pie. And that’s exactly what I intend to do.Focus on Learning and Improvement, Not Just Proving YourselfDuring times of change, I catch myself slipping into a fixed mindset, feeling like I have to prove my worth to hold my position. But I know that’s a trap. A growth mindset isn’t about proving — it’s about improving. 
Instead of scrambling to defend my past work, I need to focus on learning new skills, adapting to the new organization, and finding fresh ways to contribute. That mindset shift doesn’t just make me more effective — it also quiets the anxiety that comes from feeling constantly evaluated.Let’s bake more pie!As Dweck puts it, success isn’t about clinging to past achievements; it’s about stretching ourselves, learning something new, and continuing to grow. And if we do that, we won’t just get a single slice of pie — we’ll help bake an even bigger one.An illustration of the author waving. Credit: Me.Hey y’all! I’m Trip Carroll, a design leader at Cisco and aspiring cartoonist.I write and publish a new article on design, leadership, and software development every other Monday. You can see more of my work on my website, check out my drawings on Instagram, or subscribe to my newsletter on Substack.Let’s make work great!There’s always more pie was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • UXDESIGN.CC
    The 500-year-old underdog no one is talking about
What Royal Mail teaches us about optimising for trust, not perfection.
Continue reading on UX Collective »
  • UXDESIGN.CC
    Testing your UX ideas with vibe coding
How UX designers can use AI app builders to their advantage

As AI continues to disrupt many industries (particularly software development), it's critical to stay up to date with new tools that change traditional processes, making them more efficient and opening opportunities to more people. UX designers have historically been constrained to creating high-fidelity wireframes or limited prototypes for user testing or developer hand-off. But designers now have the chance to expand their skill set by generating coded versions of their designs using vibe coding.

Use vibe coding to go from Figma designs to a live demo

Vibe coding, coined by Andrej Karpathy, has become the new tech buzzword. With vibe coding, you simply describe what you want in an AI prompt and submit it to an LLM (large language model). For developers, vibe coding can take out some of the manual labor and serve as a starting place to then refine. For designers, vibe coding opens the door to generating quick, working code without using developer resources.

To dive deeper into UX design and vibe coding, let's review vibe coding for UX designers and best practices for vibe coding, then demo vibe coding in Anima.

Table of contents
- Vibe coding for UX designers
- Best practices for vibe coding
- Demo of vibe coding in Anima

Vibe coding for UX designers

Before we get into the weeds: vibe coding certainly doesn't produce code that meets testing standards, at least for now. I'm not saying UX designers can use vibe coding to create code that's ready to push to a target environment (QA and Security engineers would be appalled by the damage).

"Screw the unit tests, the vibes will carry us." — Persomatey via Reddit

Instead, UX designers can use vibe coding to produce working prototypes for user testing, or to show stakeholders and developers how specific interactions should function. Designers know how strenuous and time-consuming prototypes can be; even then, the prototypes may not fully reflect interactions (like drag and drop) or may just act wonky.

Let's take a look at how vibe coding works, then review the benefits and limitations of vibe coding for UX designers.

How does vibe coding work?

Vibe coding allows you to use natural language to describe an idea or app you want in a prompt, then have AI build the code for you–no manual coding needed. Developers can use any LLM tool, such as ChatGPT or Claude, to vibe code. Then, insert any simple prompt like, "Create a dashboard for a health tracking app for college students, and use colorful, modern colors and fonts." After getting the initial AI-generated code, developers will refine and test it to fit their standards.

Using Claude to build a dashboard for a "Health tracking app"

But for UX designers, we don't really care about how clean or functional the code is–we just want to see our ideas in action (and quickly). Instead of using an LLM like ChatGPT to generate code, designers can use AI app builders. There are two ways you can use vibe coding tools: start from a prompt with tools like Bubble and Replit, or start with a Figma design with tools like Anima Playground or Lovable (we'll look at the tools integrated with Figma later).

Replit AI app builder allows creation with their agent, templates, or GitHub

Benefits and limitations

As a UX designer, you might be thinking, "I don't know how to code, so how does vibe coding help me?" The level of code knowledge varies from designer to designer–some can read basic HTML and CSS, while others can write code straight from their designs. Either way, designers can use vibe coding to their advantage to enhance their UX process.
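For a sense of what comes back from these tools, here is a deliberately trimmed, hypothetical sketch of the kind of component an app builder might return for the health-tracker prompt above. The `statCard` function and its styling are invented for illustration; real generated output is usually longer and messier.

```typescript
// Hypothetical AI-builder output for "a dashboard for a health tracking app":
// one stat card built with plain DOM APIs. Good enough to click through, not to ship.
function statCard(label: string, value: number, goal: number): HTMLElement {
  const card = document.createElement('section');
  card.style.cssText =
    'font-family:sans-serif;background:#f3f0ff;border-radius:12px;padding:16px;max-width:220px';

  const title = document.createElement('h2');
  title.textContent = label;

  const progress = document.createElement('p');
  const pct = Math.min(100, Math.round((value / goal) * 100));
  progress.textContent = `${value.toLocaleString()} / ${goal.toLocaleString()} steps (${pct}%)`;

  card.append(title, progress);
  return card;
}

document.body.append(statCard('Steps today', 6214, 10000));
```

The point isn't the code quality; it's that a designer can get something interactive in front of users in minutes, then throw it away.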
But with every benefit comes limitations; let's examine both to fully grasp how UX designers should use vibe coding.

Benefits of vibe coding:
- Faster iteration: Designers can quickly visualize and interact with UX ideas; this can expedite validating initial concepts with your product team or target users
- Reduced dependency: To get fully functional demos to test with users, designers normally have to wait for development; vibe coding allows designers to jump over this barrier and create demos themselves
- Greater exploration: Because vibe coding allows easy and quick creation of live concepts, designers can explore more ideas with stakeholders and users, ensuring optimal design direction

Limitations of vibe coding:
- Complexity: AI app builders struggle with complex projects; for instance, designers can't upload multiple pages from Figma and ask the AI agent to create interactions between pages (but maybe one day…)
- Lack of context: The AI agent helping to generate the code for your designs will not have all the context you have, leaving gaps; expect more surface-level functionality and design
- Code quality: The code generated through these AI agents typically includes bugs and errors, requiring someone with a deep understanding of programming to fix them; for designers, vibe coding should only be used to explore ideas

Best practices for vibe coding

As vibe coding gains more traction and you begin to incorporate AI app builders into your UX workflow, it's critical to know vibe coding's best practices to ensure you get useful and optimized output from the AI agent.

1. Use specific and simple prompts
- Prompts should be only 1–2 sentences long
- Use context-specific language for more predictable results
- Prioritize specific information, such as the target audience and their overall goal

2. Start with Figma (if you can)
- Use a Figma plug-in, such as Anima or Builder.io
- Import a UI design of any fidelity from Figma into the AI agent to reference
- Build the intended interactions and remaining app from the initial UI design

Anima's Figma plug-in allows you to get working code straight from your designs

3. Break up complex tasks
- Start with a base request for the AI agent to complete
- Refine the output to meet your expectations for design and functionality
- Add additional criteria for the AI agent to build (still keeping it simple)

4. Use the AI agent as your collaborative partner
- Be patient as you try multiple times to get the output you want
- AI agents are designed to be conversational–use that to your advantage
- Give the AI agent any helpful examples (images, scripts, etc.) to reference

Demo of vibe coding in Anima

There are many AI app builder tools out there, and some are better suited to UX designers than others. Most designers want to start with the designs they've built in Figma instead of starting from scratch with a prompt for the AI agent. Tools like Anima, Lovable, and Framer each integrate with Figma to allow this.

I prefer Anima to Lovable, since Lovable requires you to use the Builder.io Figma plug-in to then import into Lovable. Anima, on the other hand, lets you go straight from Figma into the Anima Playground tool (plus you get good functionality on the free plan!).

Let's look at how you can vibe code in the Anima Playground tool.

1. Import Figma designs into Anima Playground

There are two ways you can import into Anima:
I. Paste the Figma link into the Anima desktop site

Here, you can customize the framework, UI library, language, and styling using the dropdowns below the URL text field (I kept the default selections).

Paste your Figma link in Anima's desktop site to get working code

II. Use Anima's Figma plug-in

In the plug-in, select the purple button named "Prompt in Playground" to automatically import the Figma designs into Anima Playground.

Select "Prompt in Playground" to import the Figma designs into Anima

2. Review and refine the live preview of your working designs

In Anima Playground, you can toggle between the code, the preview, and the Figma design. As you begin interacting with the first draft of the working app, you'll probably notice some items you want to change–this could be a button's hover state, the color palette, or the responsiveness of the app.

Request changes to the design and code using the Anima Playground chat

Here are two prompts I asked in the chat after getting the initial preview of the coded designs:
- Make the designs responsive to the screen size changing
- Add a login modal once you select "Sign in to add"

Full-screen preview of the designs and interactions made from Anima Playground

You can make requests to the AI agent in a natural and conversational way, making it easy to get changes into the working preview.

3. Publish the live app or website in Anima

Once you're happy with the live preview of the designs, you can publish the live app or website using the "Publish" button near the top-right of the Anima desktop site. This will allow you to share the link to the live app with users for testing sessions or with developers for hand-off.

Publish your Anima project to get a URL link to the live version

Note: You can download the code and push it to GitHub, but you must upgrade to a paid plan to access these features.

AI will continue to improve and disrupt our UX processes. Fortunately, certain AI tools, like Anima, allow designers to vibe code, expediting steps in the design process that are usually time-consuming and tedious. Instead of waiting for engineering resources, UX designers can take their ideas into their own hands and create fully functional demos to show stakeholders or test with target users.

Vibe coding opens more opportunities for designers, no matter their experience with programming, to generate working demos to explore, iterate, and test (then repeat).

Testing your UX ideas with vibe coding was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • UXDESIGN.CC
    A case for slow growth
    Lessons on strength from old-growth pinesYears ago, when my wife and I were looking at our house, we had a home inspector out before making the purchase. One of the things they pointed out was the lumber used in its framing. The house was built in the early 60s, primarily with pine. But it wasn’t the kind of pine we see today. This was different lumber, from a different era of forestry.Today’s pine trees are bred to grow fast to meet the demands of modern lumber production. They mature in about half the time, but with far fewer growth rings. And those rings matter. Fewer rings mean weaker lumber. The fibers are looser, the boards are lighter, and the structural integrity just isn’t the same.Comparing slow growth trees with faster growth trees.Fast-growing trees also tend to have more knots and their lumber is prone to warping. So when you go to your local hardware store, you’re left picking through a stack of twisted 2x4s, trying to find one that’s reasonably straight. All in the name of speed.Why am I telling you this? Because the design industry has grown just as fast as these modern pine trees. And I can’t help but wonder: Has all this rapid growth contributed to the fragility we’re seeing in our discipline?There’s a growing impatience in the design world. An over-prioritization of fast growth and promotions. It’s not uncommon to see designers rise quickly, senior in three years, leading a team by five. In some cases, “Head of Design” in a few years. It looks like serious growth. But growth isn’t just about roles and titles. It’s about the time we put in. It’s about the strength that comes with design maturity. The slow, steady layering of wisdom that only comes from solving problems again and again and again, sometimes poorly, sometimes well, but always learning.Each of those moments adds a growth ring to your capability, making you stronger, just like those old trees.That’s the difference between fast growth and slow growth. One shoots up quickly but ends up weaker. The other takes its time, builds deep roots, weathers the storms, and slowly adds ring after ring to its structure. One might look impressive faster, but the other stands the test of time.A case for slowing design growth expectations.Fast doesn’t equal strongDesign growth today can often resemble those modern pine trees. In just a few years, a designer can learn the tools, master process terminology, and ship polished work. They become efficient, confident, and visible in the org. It can feel unstoppable. And with bootcamps and courses galore, it adds to the expectation of speed.But early speed is often deceptive. Our field is filled with familiar problems: login flows, shopping carts, checkout patterns, etc. It’s easy to lean on mimicry of what came before. And sometimes, that’s appropriate. There’s no need to reinvent the wheel every time.But building a career in design isn’t about copying and pasting. It’s about understanding the problem well enough to decide what your next steps are, when to use a pattern, when not to, and when you need something new.True design strength shows up over time. It shows up in designers who’ve encountered variations of the same problem across different contexts, and learned to adapt their approach. It shows up in people who can explain why one option works better than another and teach that thinking to someone newer to the field.That kind of confidence and success doesn’t come from speed. It comes from time. 
And repetition.Why time served mattersSo does this mean designers are stuck in a waiting game? Not at all. Time can multiply growth, but only when time is used with intention. It’s not about stacking years. It’s about repetition.Experience depth increases over time.If a designer has solved a design challenge once, it’s tempting to assume they’ve mastered it. But have they? Was part of their success unintentional, or difficult to explain? Were there any new approaches or techniques they tried? Were they lost or stuck at any point?Designers don’t grow by doing more projects. They grow by going deeper into the problems they’ve already seen, bringing new understanding to familiar territory. These opportunities can only come by increasing time served in the industry. Shorter time only allows for so many opportunities and projects. More time increases the potential for additional opportunities and chances to practice.That’s how design becomes less of a mystery and more of a practiced discipline.Chase depth, not titlesIf you’re roughly 5–7 years into your career, this part is for you.You’ve likely seen other designers in the industry rise quickly. Or, maybe you’ve risen quickly yourself. But you’re entering a phase of maturity where speed matters less than strength.At the risk of mixing metaphors, let’s talk about weightlifting for a bit. You don’t get stronger by lifting the barbell once. You get stronger by showing up again, with better form, increasing weight, and consistency. You don’t just complete the task. You refine your process. You learn how to explain it. You know the difference between a clever solution and one that’s genuinely effective.You’re in the thick of growth, stacking reps, and building your design muscle.Design challenges are like a barbell with big weights to build muscle.But, we should admit, repetition can start to feel like boredom. When you’ve solved a similar problem before, it’s easy to check out. But boredom can be a signal. It might mean you’re ready for a deeper challenge. That doesn’t always mean a new title or role.Try a stretch activity: solve the same problem with new constraints, lead a project from start to finish, or mentor someone through the process.Set personal challenges: like reducing friction or increasing clarity.Don’t overlook side projects that flex different muscles: write, prototype, teach, explore, or work on a passion project.Boredom isn’t a sign to quit. It’s a sign to go deeper.And if it feels like you’ve hit a plateau, you’re not alone. It’s a natural part of growth. You’re capable and trusted, but the pace of growth feels slower. That doesn’t mean you’re stuck. It means you’re entering a deeper phase of growth. The gains feel smaller, quieter, harder to see. But this is when judgment, influence, and systems thinking start to take root. Keep going. Growth is still happening even if it’s beneath the surface.Lean into the repetition. Don’t rush past it. Every project you revisit. Every ambiguous brief you tackle. Every solution path you refine, that’s where growth lives.For leaders: reward the ringsIf you’re leading designers, your job isn’t just to identify potential, it’s to help develop it.That means more than giving designers higher-profile work. It means giving them increasing and repeated exposure to ambiguity, tension, and long-term thinking. 
It means letting them revisit the same types of problems in new contexts, while you mentor and coach them through.It means recognizing designers who are growing through repetition, not just visibility. Are they displaying consistent behaviors over time? Or just occasional wins? Are they stopping to think about the problem space or rushing in with a solution off the shelf? Are they leveraging their previous encounters with this theme, or beholden to it?Intentional mentorship and coaching (yes, they’re different) can guide designers toward deeper capability.Ask yourself:How independently capable were they in their last challenge?Are they showing increased awareness in how they approach similar problems?Do they know what to reuse and what to adapt?Are they building contextual awareness and adjusting accordingly?Are they applying previous experience without being boxed in by it?These are the signs of depth. These are the growth rings you’re looking for.When the evidence is there, that’s when you promote. Not because someone’s eager. Not because they are motivated. Not because they got a design award. Because they’re ready. Ready to handle more. Ready to support others. Ready to carry the increasing weight that comes with advancement.That’s the kind of strength our discipline needs.Hey, I’ve been there…I want to take a moment to tell you these aren’t just armchair ideas. I’ve lived through this in my 15+ years as both a designer and design leader. I’ve been the designer pushing hard for the next level and being told I’m not ready. I’ve been the leader who’s failed to recognize when people are making steady progress. I’ve had to learn from my over-enthusiasm as well as my over-scrutiny. My hope is that this is an encouragement to you and provides a launching point for your next steps, whether as a designer or in design leadership.Just keep growing!Keep growing. Steady on.Old-growth trees don’t rush. They grow slowly and steadily, adding a new ring each year. They survive storms. They stand among their peers. They don’t sprout unnecessary branches that weaken them. Their strength is in their endurance.We need more designers like this.Let the opportunities that come with time do the shaping. Let repetition build muscle. Keep solving, keep stretching, keep showing up. Not because it’s easy. But because it’s worth it.You don’t have to grow fast. You just have to keep growing. Keep adding those rings.A case for slow growth was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • UXDESIGN.CC
    The best way to design for indecisive leaders is to serve them gelato
How to guide indecisive leaders to choose without taking over their job
Continue reading on UX Collective »
  • UXDESIGN.CC
    DesignShift: From mindset to access
Changing the system changes behaviors.

Much of our society is built on the premise that hard work and a positive mindset can help us escape even the most challenging situations. If we apply ourselves enough, we will reach that goal, get that promotion, and become successful. I call this phenomenon the "Mindset Myth." The Mindset Myth is the belief that individual success or failure is largely determined by one's internal drive and attitude. In this narrative, we position the individual as the one in charge of their destiny, and ignore the larger, systemic barriers — the ACCESS that people may or may not have. Mindset can be defined as things like courage, stamina, hard work, persistence, etc. Access, on the other hand, relates to things like money, housing, relationships, etc.

Mindset vs. access in design.

In design, the Mindset Myth shows up in the way we frame our design challenges, as well as in the solutions that get produced in the end. For example, in human-centered design, one of the most widely adopted design practices of the last 30 years, our efforts are focused on making it easier for people to navigate the world. We spend time with our end-users to understand their wants and pain points — then we create or adapt products and services to remove as much friction as possible, whether that means creating a simplified interface, making features more accessible, or creating new interactions altogether. Our goal, as the name of the practice states, is to center individual humans' needs. But what we often ignore is the context in which those needs are formed — the systemic conditions, or lack of access, that created them in the first place.

One example that comes to mind when I think about mindset vs. access is fitness trackers. Today, Fitbits, Apple Watches, and other trackers are permanent fixtures on most adults' wrists (and yes, even many kids are wearing them now, too — have to start building healthy habits early, of course). From a functional perspective, the devices were designed to help people who struggle with their health goals become more motivated and more active. But why are fitness trackers the solution? Well, physicians tell us that we need to walk 10,000 steps a day to have a healthy heart, reduce stress, and keep our bodies in relatively good condition. When product designers at Apple decided that this was a profitable space to play in, they likely took a human-centered approach to understanding the challenges and barriers that might prevent someone from reaching those 10,000 steps. They may have heard, through user interviews and surveys, that "the problem" is that people lack motivation to work out, or don't know where to get started. Their solution was to design a gamified gadget that tells people when to stand and walk, and gives people the ability to compete with their peers. On the surface, well-designed fitness trackers solve a problem: helping us become more active. But when we look at the bigger picture, I can't help but wonder… are we solving for surface-level individual barriers, rather than addressing the inequities in the underlying systems?

We've created environments and cultural norms where our jobs and our lives keep us inside for most of the day. Most office workers spend 8+ hours at the computer, and the pace of meetings, deadlines, and deliverables limits our ability to go for a walk or take a break.
We’ve shifted away from communal living arrangements where interacting with others required physical movement and connection, and instead, we more often relate to each other through screens of one size or another. The pandemic only accelerated the pace of human separation and sedentariness — we don’t even have to leave our homes to get groceries anymore. By looking at the systems we’re operating within — ones that were created to cater to individualism and ease — we can see how the lack of access to conditions that allow us to be healthy may be a bigger systemic issue to solve than the motivation to take a stroll around the block.When we ignore access, we increase the divide.When we focus on mindset over access, we reinforce discriminatory systems that shape so much of the world we live in. Poverty and inequality are often framed around a narrative that suggests that “people should just pull themselves up by the bootstraps” — insinuating that if people experiencing poverty just apply themselves, they can become wealthy.I recently attended a webinar by HmntyCntrd, titled Dear Researchers & Designers — We Need to Talk About Race (a version of the content is also shared here), where we explored how structural racism plays a role in our design and research practices. In the webinar, host Alba Villamil explained how the way we frame questions affects what problems we tackle. Villamil proposed that rather than asking research and design questions that focus on individual behaviors, we should find ways to reframe them to include a systems view.Instead of asking: How do we increase user enthusiasm for signing up for our government service? We could be asking: How has our government agency’s racist policies and frontline workers impacted users’ enthusiasm?Or instead of asking: What gaps do users have in their financial literacy about applying for a loan? We could look at it from a systems lens by asking: How does discriminatory bank policy create additional barriers to applying for a loan?As designers and researchers, we need to look beyond mindset and start focusing on access if we want to design a more just society.DesignShift: How might we Shift the focus of design from mindset to access?In the last few months, I’ve been exploring a better future for and through design through different DesignShifts. Part of the work has been about asking myself how we can shift from designing for mindset to designing for access.In my search, I’ve found 5 ways that can help us get started:Recognize exclusionName the systemShifts models of behaviorBelieve in peopleTransfer access1. Recognize exclusionAs designers, we claim that through testing, surveys, and extensive secondary research we’re able to better understand our users’ needs, wants, thoughts, behaviors, bisases, and barriers. This focus on the user is referred to as User-centered design or Human-centered design and is widely adopted by individual practitioners and design firms all over the globe.And while User- and human-centered design is important, before we try to find our ideal “user” and start developing personas or user profiles, we must examine existing power structures, our own biases, and the problem with designing for the average user. In the book Design Justice: Community-Led Practices to Build the Worlds We Need, Sasha Costanza-Chock writes that “designers tend to unconsciously default to imagined users whose experiences are similar to their own. 
This means that users are most often assumed to be members of the dominant, and hence "unmarked" group: in the United States, this means (cis)male, white, heterosexual, 'able-bodied,' literate, college-educated, not a young child and not elderly, with broadband internet access, with a smartphone, and so on." Costanza-Chock has done extensive research on the design of technology products and the biases brought on by focusing on a narrow, "highly profitable, subset of humanity." But the problem of lacking diversity when developing personas and running user testing isn't limited to technology — it spans all design industries, from marketing, UX, and industrial design to fashion, architecture, and experience design. Our current design practices continue to reinforce existing power structures by centering the needs of some while ignoring the needs of others. Costanza-Chock explains that because marginalized groups are often not among the target users or personas, "their needs, desires, and potential contributions will continue to be ignored, sidelined, or deprioritized."

As we examine our exclusionary practices and philosophies, we must also take a look at the places where we learn and implement these practices. The world of design is still exclusive. Our agencies are located in expensive cities, and few people can afford our services. We might have great DEI statements (at least before the latest executive orders) and say that we don't discriminate by class or race in our hiring methods, but when our offices are in areas where only the highest-paid people at the company can afford to live, it sends the opposite message: you're welcome to apply, but be prepared to be mentally and physically exhausted, not just from the work, but from trying to keep up with the commute, the status, and the expectations of conforming to our definitions of what's "good" and what works.

2. Name the system. Shift the narrative.

If we want to use our skills for good, we also need to develop the courage to call out harm when we see it. In this LinkedIn post, nidhi kalaiya exemplifies this notion by saying: "Women are not the problem — it's the patriarchy. Being Black or brown isn't the problem — it's White Supremacy. Disabled folks are not the problem — it's ableism and inaccessibility. Trans folks are not the problem — it's transphobia and the gender binary. First Nations communities are not the problem to solve — it's coloniality."

Many of our design solutions focus on the person experiencing the harm vs. the system causing it. By naming the system — not the symptoms — we can start to move away from blaming individuals for the problems they experience and start fixing broken systems.

One way that we can name systems is by examining the narratives we tell ourselves and each other. At its core, this comes back to acknowledging the Mindset Myth — the belief that hard work and the right mindset are the recipe for success — and actively reframing the questions we ask during our design process:

- Who are we consciously or subconsciously excluding from our considerations?
- What personal or systemic biases are affecting our thinking, and what impact have they had on marginalized communities?
- Where did we learn these biases?
- How have we contributed to the proliferation of these biases in the past, and what can we learn from those experiences so that we don't repeat them?

As a communications designer, I believe that the stories we tell ourselves, and each other, play a big role in how we move through the world.
Lately, I’ve been exploring ways to shift away from a focus on mindset and create narratives that highlight the systematic problems that are at the root of our lived realities.Shifting narratives and personal beliefs is no easy task, but one place we can start is through practices. I recently came across a framework called Unpacking, Expanding, and Imagining Shifting Narratives, created by Healing Justice London, and published here as a Collective Imagination Tool.The framework suggests a four-step process of examining a narrative — Defining, Unpacking, Expanding, and Imagining — in order to change it.https://www.collectiveimagination.tools/unpacking-expanding-and-imagining-shifting-narratives3. Shift models of behaviorWe’ve explored why it’s important for designers to shift our goals from changing individual behaviors to changing broken systems. But it’s equally important to understand models of behavior change we’re currently operating within. “Know the rules before you break them. Many of our design solutions are created based on a deficiency model of user behavior, which is the belief that people fail to take action because of their own personal shortcomings or lack of motivation. For example, we assume that people don’t recycle because they don’t care about the environment, when the issue might be that they don’t have access to simple ways to sort their trash. Or, we assume people in larger bodies are lazy, rather than considering underlying medical conditions, lack of access to healthy, affordable foods, or even the cultural constructs that make us believe there’s something inherently wrong with being in a larger body in the first place.The opposite of a deficiency model of user behavior is what we call a Social Model. It was popularized through examining access from the perspective of people living with a disability. The social model highlights how people are disabled by barriers in society, not by their impairment or difference. The problem is not someone’s disability, but rather the built environment that was only created for a certain type of individual. Once we understand these different models of behavior change, we can start to have better conversations around HOW we approach our design challenges. Shifting our mindset from deficiency-based to a social model can help designers see the structures that are holding us and other people back.4. Believe peopleDesigning for access starts with a belief that people are doing the best they can. In this podcast episode, Ezra Klein interviews Labor organizer Jane McAlevey about what it takes to mobilize people within a labor movement. McAlevey, who has organized hundreds of thousands of workers on the front lines, highlights how a fundamental belief in everyday people is crucial to the success of her work. In the episode, she says:“I start out every day genuinely believing that people can make radical changes in how they think about and see the world. And that means you have to be willing to work with them, even if their views are fairly different from your own.”Designers can learn a lot from community builders. They’re on the front lines, engaging and designing WITH, not FOR, the people they’re supporting.They know how to create solutions that last beyond the timelines of a set project, because rather than working toward short term fixes, they’re focusing on long-term systems change. And, maybe most importantly, community builders know that they themselves are not the answer. 
Their role is to inspire and activate the inherent knowledge and capabilities of the people around them.

john a. powell (who spells his name in lowercase in the belief that we should be "part of the universe, not over it, as capitals signify"), the Director of the Othering & Belonging Institute at the University of California, Berkeley, reinforces this idea in his suggestion that we should "be hard on structures and soft on people." When we believe in people and practice creating change with, rather than for, we also minimize the risk of being pulled back into the Mindset Myth. It may be a lesson we have to keep relearning, over and over again, but it's one that's worth the commitment.

5. Transfer access

We often think that providing greater access to a space, a resource, or a way of thinking means opening the door to a previously closed opportunity and inviting others in. However, sometimes we have to go one step further — we need to leave the space ourselves and give the keys to someone else.

A few years ago, I saw this post by illustrator and designer Timothy Goodman. Goodman was asked to speak at a conference, which likely would have had a positive impact on his career. However, as he explored the list of speakers from the previous year's conference, he noticed that of the 20+ individuals on the roster, 15 were white men, and only two were people of color. Because of this inequity, Goodman declined the offer to speak and instead encouraged the organizers to invite more people of color through a website he created called People of Craft. Goodman recognized his privilege, but didn't simply suggest that the organizers rethink their roster. He gave up his own space, and rejected an opportunity for self-promotion, in order to make room for people who have historically been ignored by our industry.

Removing barriers. Opening doors.

Shifting our narratives from mindset to access is a first step toward changing our approach and perspective. In this post, I've explored the problems with focusing our design solely on mindsets, and how shifting our focus to providing access can create more opportunities for systems change. But before we wrap up, I want to acknowledge that I'm not dismissing the importance of mindset as a whole. I was a professional athlete for most of my life, and I've experienced first-hand the impact that mental training can have on performance. However, through my research and practice as a designer, I have also come to believe that a positive mindset alone isn't enough. My hope is that by proposing (and practicing) these DesignShifts, we can challenge the status quo, embrace the parts of ourselves and each other, and start seeing the full picture.

As you move on to whatever's next in your day, maybe you'll find a moment to reflect on where, or how, you can use your own design talents to remove barriers and open doors for others. To leave you with a few provocations:

- Rather than telling people to eat healthy, how can we give people greater access to affordable and healthy food?
- Rather than telling people to walk more or gamifying their step counts, how can we create more pedestrian-friendly cities?
- Rather than telling women that they need to learn how to lead like men, how do we design workplaces that recognize the value of different leadership styles?
- Rather than being held back or harmed by the Mindset Myth, what do we need in order to start designing the conditions where doing the right thing is easy?

There are no perfect answers, of course.
However, I believe that the real design challenge is to address systemic barriers rather than individual behaviors. We have to redesign the systems that perpetuate inequality and limited access in the first place. When we focus on access over mindset, we create opportunities for everyone to participate fully in society, regardless of their starting point or circumstances.

Resources mentioned in this post:
- Systems Change Series | Design Thinking, Systems Thinking & Futures Thinking 101
- Ruha Benjamin — Is technology our savior — or our slayer?
- Dear Researchers & Designers: We Need to Talk About Race — Alba Villamil
- Labor organizer Jane McAlevey on The Ezra Klein Show | Vox
- Narrative Strategy Framework Tool Framework Download

DesignShift: From mindset to access was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • UXDESIGN.CC
    Designing for emotional residue over functional outcomes
    Why design’s most human contribution is now its most strategic advantagePhoto by vackground.com on UnsplashThe shift that’s already happeningMore and more, our tools are designing with us, or for us.OpenAI is now building an AI software engineer, capable of doing everything a human developer does — from planning and writing code to testing it and managing pull requests. It’s not a concept. It’s already shipping. It’s happening now.We’re entering the era of agentic software engineering. Autonomous systems can scope, build, and deploy functional products with minimal human input. What used to take months can now happen in hours.When execution becomes infinite, it stops being a differentiator. Functionality becomes a commodity.The question becomes clear:When anyone can build anything, what makes it worth returning to?It won’t be just what a product does. It’ll be what it leaves behind.That’s emotional residue — the subtle signal that lingers after the feature is done and the tab is closed. The feeling that you were understood. The sense that something cared.It’s not utility. It’s memory.In a future where code becomes cheap and automation is everywhere, emotional residue may become the most valuable output of design. And it will be the most human part of every product we make.Photo by vackground.com on UnsplashEmotional Residue — the hidden layer that lingersMost products are designed to deliver outcomes.A task is completed, a button is pressed, a notification arrives. It works. It functions. It passes the test.But function isn’t the full story. Not anymore.Emotional residue is what remains after the interaction ends. It’s the quiet impression a product leaves behind — the subtle signal that this was made with care.It’s the tone of the copy when something goes wrong. The way a system lets you recover with dignity. The rhythm of a transition that breathes, instead of rushing to the next screen.We don’t always remember what we tapped or typed. But we remember how it made us feel — competent, calm, confused, seen.That feeling, that residue, is more than a nice-to-have. It’s strategic. It builds trust. It drives repeat behaviour. It turns users into advocates.And as generative tools take over execution, emotional residue becomes one of the few things AI can’t generate on its own. Because it’s not just about what gets built. It’s about what gets felt.Photo by Grigorii Shcheglov on UnsplashThe new frontier of design in agentic systemsAs agentic systems take on more of the building, the structure of product teams is already shifting.Engineers are becoming system architects, not line-by-line implementers. PMs are steering outcomes, not grooming backlogs. Designers are moving from layout to logic, collaborating with models to define not just how things look, but how they behave.Some argue this evolution will lead to clearer handoffs and tighter lanes. But the opposite is true. The lines are blurring — and that’s where the real opportunity lies.Agentic tools don’t streamline handoffs, they collapse them. When a PM can generate a prototype or a designer can prompt a working flow, who owns what becomes less important than how we think together.This doesn’t reduce the need for collaboration — it intensifies it.It demands shared intuition, shared context, and shared care for the user.And the pressure is already here. Shopify recently told team leads they must justify why a task can’t be done with AI before opening a new headcount. 
Across the industry, big tech companies are holding back hiring while leaning harder into agentic tooling and automation. The message is clear: the teams that remain must deliver more with less — and work more fluidly than ever.

That’s where design steps up. Not as a decorator, but as connective tissue: the discipline that moves across silos, shaping cohesion where automation fragments it. Design becomes less about artefacts and more about alignment; less about ownership and more about orchestration. In a world where anyone can generate anything, the hardest thing to create is coherence — and that’s something only well-aligned, cross-functional teams can deliver.

Photo by Google DeepMind on Unsplash

Why emotional residue will define great products

When functional outcomes become commoditised, emotional resonance becomes the differentiator. This isn’t theory — it’s how people actually experience products. Users don’t return because a button worked. They return because the experience made sense. It respected their time. It gave them confidence.

Research indicates that users form lasting impressions based on how a product makes them feel, not just on its functionality. A study highlighted in the Journal of Interactive Design demonstrated that incorporating emotional design elements led to a significant uplift in conversion rates and increased customer satisfaction.

That’s emotional residue — and it drives real business outcomes. It builds trust. It shapes brand memory. It increases retention. It turns a moment of use into a lasting impression. And in a crowded market, those impressions compound.

We see it in the products people love and advocate for:

- Apple doesn’t just work — it feels considered.
- Figma doesn’t just load fast — it makes you feel fast.
- Linear doesn’t just manage issues — it gives you a sense of momentum and clarity.

These aren’t just design wins. They’re emotional signals, deeply aligned with the product’s core value. And in a world where every competitor can match your features, how your product feels becomes the moat. The deeper emotional layers are the hardest to replicate. They don’t come from prompts. They come from care, from context, from teams that sweat the details most users will never see — but always feel.

Photo by Google DeepMind on Unsplash

What execs should do about it

If emotional residue is the new frontier, we need to design for it deliberately. That doesn’t mean adding polish at the end. It means rethinking how we prioritise, how we collaborate, and what we reward.

1. Make emotional quality a first-class product concern. Don’t relegate it to the tail end of design reviews. Bake it into the brief. Make it part of the definition of done. Treat tone, timing, and clarity as seriously as logic and layout.

2. Shift from artefact ownership to shared emotional intent. In agentic environments, the boundaries between disciplines blur. Use that to your advantage. Align around how the product should feel, not just what it should do. Intent becomes the new spec.

3. Invest in cross-functional design fluency. It’s not enough for designers to care about emotion. PMs, engineers, and AI agents all shape experience now. Build shared language and shared standards for emotional quality across roles.

4. Use AI to compress execution, then spend that time on care. The win isn’t just faster delivery. It’s more space for depth. Let automation handle the repeatable work so humans can focus on the emotional craft — the things AI can’t yet feel.

5. Measure what lingers, not just what completes.
Traditional metrics track conversion and completion. But also look at retention, advocacy, NPS drivers, and qualitative feedback. What do users say when they describe your product to others? That’s your emotional signal.

The best products of the next decade won’t just be fast or smart — they’ll be the ones that leave people feeling something worth returning to.

Photo by Google DeepMind on Unsplash

The opportunity ahead

In a world where AI can build anything, it’s easy to think the work is done. But what matters most won’t be what gets built. It’ll be what gets felt. The products that endure will be the ones that care about what lingers — not just what launches.

Design is how we create that emotional residue. It’s how we signal intent, earn trust, and make technology feel human, even when humans aren’t in the loop.

As agentic tools accelerate execution, the opportunity isn’t to do more. It’s to go deeper. To use the time we save not to ship faster, but to ship better. To move beyond features, and design for the feeling that remains after the feature is done.

Because the future of the product won’t be defined by speed, scale, or specs. It will be defined by the quiet, human moments our products leave behind — and the teams who cared enough to create them.

Designing for emotional residue over functional outcomes was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • UXDESIGN.CC
    Healthcare needs interior decorators
To heal a body, you need a degree. To touch a heart, you need to be an interior decorator of perception.
Continue reading on UX Collective »
  • UXDESIGN.CC
    Virally wrong, the shape of design to come, the A-Z of design
Weekly curated resources for designers — thinkers and makers.

>be me
>grind for a decade trying to help make superintelligence to cure cancer or whatever
>mostly no one cares for first 7.5 years, then for 2.5 years everyone hates you for everything
>wake up one day to hundreds of messages: look i made you into a twink ghibli style haha
— Sam Altman

When faces go virally wrong → By Darren Yeo

Is research a black box at your org? → [Sponsored] If stakeholders can’t find user insights, they can’t use them to make smart decisions. Criteo solved this challenge by switching to an AI-powered research repository that everyone (no matter their role) can access. Hear about their journey with Marvin.

Editor picks

Where design finds (and loses) its soul → The heart, humanity, and intuition behind great design. By Al Lucca

The shape of design to come → Change is happening. By Luděk Černocký

Why the car horn is the most annoying UX failure on the road → How a simple safety device became a source of urban stress. By Elvis Hsiao

The UX Collective is an independent design publication that elevates unheard design voices and helps designers think more critically about their work.

Aperture: an experimental AI concept →

Make me think

The age of abundance → “Here’s the thing; a single human can’t understand a 10M-line software project on their own. And so they break it down into composable modules, test and document each part separately. Then you can reason about the modules at a higher level of abstraction. That’s how AI will deal with larger codebases too.”

What to do → “So what should one do? One should help people, and take care of the world. Those two are obvious. But is there anything else? When I ask that, the answer that pops up is Make good new things.”

The blissful zen of a good side project → “Maybe I’ve been depressed, or burned out. I don’t know. I haven’t been at my best; that’s all I really know for sure. It’s not that things have been bad, exactly, but they haven’t been easy, either. Whatever the reason: I realize now I’ve let it push my consumption-to-creation ratio wildly out of balance.”

Little gems this week

It’s not just AI that needs clear ‘prompts’ — humans do too → By Nadia

When constraints sparked creativity → By Ian Batterbee

What AI image generation means for creators — the ethical dilemma → By Hara Ledaki

Tools and resources

Writing the onboarding experience → How to help a product introduce itself. By Nick DiLallo

The A-Z of design → 26 things any designer should know. By George Joseph

Manus AI: real or hype? → Early observations of Manus AI and comparison to other AI tools. By Ian Xiao

Support the newsletter

If you find our content helpful, here’s how you can support us:

- Check out this week’s sponsor to support their work too
- Forward this email to a friend and invite them to subscribe
- Sponsor an edition

Virally wrong, the shape of design to come, the A-Z of design was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • UXDESIGN.CC
    The cost of UX: balancing cost, expertise, and impact
Let’s talk about the real cost of UX design, beyond the numbers and into what actually matters

Photo by Jakub Żerdzicki on Unsplash

After years of UX design work, I’ve learned something interesting: it’s not always about the user. Sure, desirability is crucial for end-customer products, but when you’re dealing with business-to-business tools, internal systems, or complex enterprise solutions, what’s feasible and viable often outweighs pure desirability.

I’ve had my share of battles trying to convince stakeholders that user research is valuable. That’s why understanding the business side of design and the impact we make is crucial. We need to get comfortable with discussing risk, cost, and expected outcomes in ways that make sense to business leaders. And no, saying “good design is good business” isn’t enough. We need to prove it — or at least try to.

In this article, I’ll share a practical model I’ve developed for understanding UX ROI by balancing cost-efficiency, risk, user testing, and the impact of design expertise.

Metric for UX cost efficiency

The real cost of design work

Let’s start with the basics. Here’s a simple formula I use to calculate the direct cost of UX design work:

Cost of Work (COW) = H × T × (1 + R)

Where:
H = Hourly rate (junior or senior designer)
T = Time required to complete the task (in hours)
R = Revision factor (typically 0.2–0.5 for junior designers, 0.1–0.2 for senior designers)

I’ve added this revision factor because, in my experience, junior designers typically need more revisions, making their effective cost higher than their hourly rate suggests.

Designer Type   | Hourly Rate (H) | Time per Task (T) | Revision Factor (R) | Total Cost (COW)
----------------|-----------------|-------------------|---------------------|-----------------
Junior Designer | $50             | 10 hours          | 0.4                 | $700
Senior Designer | $100            | 4 hours           | 0.15                | $460

At first glance, junior designers might seem more cost-effective due to their lower hourly rates. But in my experience, the increased time (T) and revision factor (R) they typically need can lead to higher overall project costs. For complex or high-risk projects, I’ve found that the efficiency of a senior designer often reduces overall expenses.

Not all tasks are created equal. A simple landing page redesign is worlds apart from an enterprise dashboard revamp. Let’s adjust for this by introducing task complexity.

A quick note: I’m not talking about roles here — a junior designer might have more experience than a senior designer in certain contexts. We’re focusing on experience gained, assuming that senior designers have more years under their belt and a proven track record.

Understanding task complexity

In my work, I’ve found that task difficulty comes down to these key factors:

- Cognitive load (how much mental effort is needed)
- Sequential dependencies (how steps relate to each other)
- Error sensitivity (what happens when things go wrong)
- Feedback loops (how users know they’re on the right track)

Take configuring Multi-Factor Authentication (MFA) in an enterprise security dashboard. It’s a complex task that requires administrators to understand security protocols, authentication methods, and user access levels.
The process involves multiple settings that must be configured in a specific order, where mistakes can lock out users or create security vulnerabilities, and errors might not be immediately visible.

Photo by Resource Database on Unsplash

The task complexity formula

After years of analysing different tasks, I’ve developed this formula to measure complexity:

C = (S × 0.3 + D × 0.2) × SD + (E × 0.3 − F × 0.2)

Where:
S = Number of steps (weighted 30%)
D = Number of major decision points (weighted 20%)
SD = Sequential dependencies (a multiplier that adjusts for dependencies between steps)
E = Error sensitivity (weighted 30%)
F = Feedback clarity (weighted 20%)

This formula:
- Weights different factors based on their real-world importance
- Emphasizes steps and error sensitivity (because these matter most in practice)
- Maintains the relationship between dependencies and complexity
- Better reflects what I’ve seen in actual projects

A real example: let’s say we’re setting up MFA with:
- 8 steps (S)
- 4 major decision points (D)
- Strong sequential dependencies (SD = 2)
- High error sensitivity (E = 5)
- Moderate feedback clarity (F = 3)

Here’s how it works:
C = (8 × 0.3 + 4 × 0.2) × 2 + (5 × 0.3 − 3 × 0.2)
C = (2.4 + 0.8) × 2 + (1.5 − 0.6)
C = 3.2 × 2 + 0.9
C = 6.4 + 0.9 = 7.3

But here’s the thing: efficiency alone doesn’t tell the whole story. While a senior designer might finish a complex task faster than a junior designer, the real question is: what happens if the design is flawed?

Risk impact

In my experience, understanding the relationship between task complexity and cost efficiency is crucial. A poorly executed low-risk task (like a simple UI tweak) might only need minor revisions. But a flawed high-risk task — think checkout flow, medical device UI, or enterprise security settings — can have serious financial, legal, or usability consequences.

That’s why, when we evaluate who should handle which tasks, we should look beyond task complexity alone and consider risk impact (R).

The risk impact formula

R = (U × 0.4 + B × 0.4 + T × 0.2)

Where:
U = User impact (40% weight)
B = Business impact (40% weight)
T = Technical complexity (20% weight)

I’ve weighted it this way because user and business impact matter most (40% each), while technical complexity is important but often secondary (20%). This better reflects what I’ve seen in real projects. Each factor is rated from 1 (low impact) to 5 (high impact).

Real-world examples:

Scenario 1: Minor UI tweak (like changing a button colour)
User impact (U) = 1 (users barely notice)
Business impact (B) = 1 (no financial impact)
Technical complexity (T) = 1 (simple CSS change)
R = (1 × 0.4 + 1 × 0.4 + 1 × 0.2) = 1.0

Scenario 2: Checkout flow (high-risk task)
User impact (U) = 5 (critical failure, prevents key tasks)
Business impact (B) = 5 (high financial loss, regulatory risk)
Technical complexity (T) = 4 (requires deep expertise, major risks)
R = (5 × 0.4 + 5 × 0.4 + 4 × 0.2) = 4.8

In this case, the risk score for the checkout flow is 4.8, significantly higher than the 1.0 for the minor UI tweak.
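To make the mechanics concrete, here is a minimal Python sketch of the three formulas so far (COW, task complexity, and risk impact), reproducing the article’s own example inputs. The function names are mine, chosen for illustration; the weights are exactly the ones defined above.

def cost_of_work(hourly_rate, hours, revision_factor):
    # COW = H * T * (1 + R)
    return hourly_rate * hours * (1 + revision_factor)

def task_complexity(steps, decisions, seq_dep, error_sensitivity, feedback_clarity):
    # C = (S*0.3 + D*0.2) * SD + (E*0.3 - F*0.2)
    return (steps * 0.3 + decisions * 0.2) * seq_dep \
        + (error_sensitivity * 0.3 - feedback_clarity * 0.2)

def risk_impact(user_impact, business_impact, technical_complexity):
    # R = U*0.4 + B*0.4 + T*0.2, each factor rated 1 (low) to 5 (high)
    return user_impact * 0.4 + business_impact * 0.4 + technical_complexity * 0.2

print(cost_of_work(50, 10, 0.4))       # junior designer: ~700
print(cost_of_work(100, 4, 0.15))      # senior designer: ~460
print(task_complexity(8, 4, 2, 5, 3))  # MFA example: ~7.3
print(risk_impact(5, 5, 4))            # checkout flow: ~4.8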
This tells us that the checkout flow needs more careful handling and likely requires higher expertise.

Photo by The Chaffins on Unsplash

Risk-adjusted cost (RAC)

Once you’ve figured out the risk impact (R), here’s the formula I use to calculate the overall cost efficiency:

RAC = COW × R × (1 + C)

Where:
COW = the original cost of work
R = Risk impact (calculated as above)
C = Task complexity (calculated as above)

This formula includes task complexity as a multiplier, reflects how complex tasks increase risk-adjusted costs, and gives you a more complete picture of true project costs.

Real examples:

For a minor UI tweak (assuming COW = 5 hours, C = 1):
RAC = 5 hours × 1.0 × (1 + 1) = 10 hours

For a checkout flow (assuming COW = 20 hours, C = 7.3):
RAC = 20 hours × 4.8 × (1 + 7.3) = 796.8 hours

By considering both risk impact and task complexity, we get a much clearer picture of the true cost of a task. This helps me make better decisions about when to invest in senior expertise versus using a more cost-efficient solution with a junior designer.

User research as a risk mitigator

In my experience, risk in design doesn’t just come from complexity or technical challenges — it also comes from uncertainty about user behaviour. A product might look great from a business or engineering perspective, but if users can’t figure it out, the risk of failure skyrockets. That’s where user research becomes a powerful risk-mitigation tool. By catching usability issues early, it helps reduce rework costs, user frustration, and potential business losses.

Here’s how I adjust the risk score (R) formula to account for user research:

R_adjusted = R × (1 − UR) × (1 − C × 0.1)

Where:
UR = Effectiveness of user research (scaled 0 to 1)
  0 (no research): maximum risk remains
  0.5 (moderate research): risk is reduced by 50%
  1.0 (comprehensive research): risk is fully mitigated (in theory)
C = Task complexity (as calculated above)
0.1 = Complexity factor

This formula shows how research effectiveness interacts with task complexity and gives a more realistic view of risk reduction.

A real example: checkout flow redesign
Initial risk score (R) = 4.8 (critical feature, complex integration, business impact)
Task complexity (C) = 7.3

No user research (UR = 0):
R_adjusted = 4.8 × (1 − 0) × (1 − 7.3 × 0.1) = 4.8 × 1 × 0.27 ≈ 1.3
High risk remains → needs senior expertise and extensive testing

Moderate research (UR = 0.5):
R_adjusted = 4.8 × (1 − 0.5) × (1 − 7.3 × 0.1) = 4.8 × 0.5 × 0.27 ≈ 0.65
Risk is reduced → still needs validation but is safer

Extensive research (UR = 0.8):
R_adjusted = 4.8 × (1 − 0.8) × (1 − 7.3 × 0.1) = 4.8 × 0.2 × 0.27 ≈ 0.26
Low risk → junior designers can implement with minimal oversight
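Continuing the sketch, here are the risk-adjusted cost and research-adjusted risk formulas from above, again with the article’s own example numbers (function names are mine):

def risk_adjusted_cost(cow, risk, complexity):
    # RAC = COW * R * (1 + C)
    return cow * risk * (1 + complexity)

def research_adjusted_risk(risk, research_effectiveness, complexity):
    # R_adjusted = R * (1 - UR) * (1 - C * 0.1)
    return risk * (1 - research_effectiveness) * (1 - complexity * 0.1)

print(risk_adjusted_cost(5, 1.0, 1))          # minor UI tweak: 10 hours
print(risk_adjusted_cost(20, 4.8, 7.3))       # checkout flow: ~796.8 hours
print(research_adjusted_risk(4.8, 0.0, 7.3))  # no research: ~1.30
print(research_adjusted_risk(4.8, 0.5, 7.3))  # moderate research: ~0.65
print(research_adjusted_risk(4.8, 0.8, 7.3))  # extensive research: ~0.26

One property worth noticing in this model: because UR enters as (1 − UR), research and seniority act as substitutes; strong research can push a high-risk task into junior-designer territory, which is exactly the trade-off the example illustrates.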
Photo by Who’s Denilo ? on Unsplash

Research validity

One of the most important things I’ve learned in UX design is knowing when you’ve done enough testing. This is where Nielsen’s law of diminishing returns for usability testing comes in (Nielsen, 2000): the first few usability test participants uncover most usability problems, while additional users reveal fewer new issues. In practice, testing with 5–8 users usually finds most major problems, making usability testing highly cost-effective.

Here’s the formula I use for calculating the percentage of usability issues found (P):

P = 1 − (1 − λ × (1 + C × 0.05))^n

Where:
λ (lambda) = the problem discovery rate per participant (typically 0.31, based on Nielsen’s research)
n = the number of participants
C = task complexity
0.05 = the complexity adjustment factor (complexity raises the effective per-participant discovery rate)

This formula accounts for how task complexity affects issue discovery while maintaining the core relationship from Nielsen’s law.

Real examples:

For a simple task (C = 1):
- At 5 participants: ~85% of issues found
- At 8 participants: ~95% of issues found

For a complex task (C = 7.3):
- At 5 participants: ~92% of issues found
- At 8 participants: ~98% of issues found

Beyond 8 participants, the ROI in terms of discovering new issues drops sharply, making additional users less cost-effective.

In my experience, cost-efficiency in UX design is crucial because the more users you test, the higher the cost. However, most usability issues surface early in testing, making it more effective to test 5 users, fix the issues, and then retest, rather than testing 20 users all at once. This iterative approach lets you improve the design faster and more cost-effectively. There are exceptions where more users are necessary:

- For diverse user groups, you might need separate tests for different personas
- Quantitative metrics (like A/B testing) need larger sample sizes (50–100 users)
- Edge cases or accessibility testing might need specialized participants (like screen reader users)

Cost of user testing

Let me break down the real costs of user testing based on my experience. These costs vary with multiple factors, such as test complexity, participant numbers, required resources, and the tools or services used. It’s crucial to factor in all of these elements when budgeting for usability testing.

Here’s the formula I use to calculate the cost of user testing (CUT):

CUT = (H × T × n × (1 + C × 0.1)) + Recruitment + (n × Incentive)

Where:
H = Hourly rate of the facilitator/moderator
T = Time per test session
n = Number of participants
C = Task complexity
Recruitment = Base recruitment costs
Incentive = Per-participant incentive

This formula accounts for how complexity affects testing time and includes both fixed and variable costs, better reflecting real-world testing expenses.

Real examples:

For a simple task (C = 1), with 5 participants, a $100/hr facilitator, 2 hours per session, $50 recruitment, and a $25 per-participant incentive:
CUT = ($100 × 2 × 5 × 1.1) + $50 + (5 × $25)
CUT = $1,100 + $50 + $125 = $1,275

For a complex task (C = 7.3), with 8 participants, a $150/hr facilitator, 3 hours per session, $100 recruitment, and a $50 per-participant incentive:
CUT = ($150 × 3 × 8 × 1.73) + $100 + (8 × $50)
CUT = $6,228 + $100 + $400 = $6,728
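Here is a sketch of these two testing formulas. One caveat: I’ve grouped the complexity adjustment inside the exponent base (raising the effective per-participant discovery rate), since that is the reading that reproduces the article’s example percentages for both the simple and the complex task. The function names are mine; the weights and example inputs are the article’s.

def issues_found(n_participants, complexity, discovery_rate=0.31):
    # P = 1 - (1 - lambda * (1 + C * 0.05))^n, with the effective rate capped at 1
    effective_rate = min(discovery_rate * (1 + complexity * 0.05), 1.0)
    return 1 - (1 - effective_rate) ** n_participants

def cost_of_user_testing(rate, hours, n, complexity, recruitment, incentive):
    # CUT = (H * T * n * (1 + C * 0.1)) + Recruitment + (n * Incentive)
    return rate * hours * n * (1 + complexity * 0.1) + recruitment + n * incentive

print(issues_found(5, 1))      # simple task, 5 users: ~0.86
print(issues_found(8, 7.3))    # complex task, 8 users: ~0.99
print(cost_of_user_testing(100, 2, 5, 1, 50, 25))     # simple task: ~$1,275
print(cost_of_user_testing(150, 3, 8, 7.3, 100, 50))  # complex task: ~$6,728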
While user testing might seem expensive upfront, I’ve found it’s highly cost-effective in the long run. Early identification of issues lets teams make quick corrections, often at a much lower cost than fixing problems after launch. Post-launch fixes can be significantly more expensive, involving costly design changes, development time, and potentially brand damage or lost users. By conducting iterative testing (small batches of tests with users), you can catch and fix problems early in the design process, reducing the need for expensive fixes after the product is live.

Photo by Ian Schneider on Unsplash

Expert vs. user testing in UX design

In my experience, there are two main approaches to usability testing: expert testing and user testing. Both are valuable but serve different purposes depending on the design stage, product type, and test goals.

Expert testing

Expert testing involves having UX designers, usability experts, or subject-matter specialists evaluate the product based on their knowledge and experience. This type of testing is usually done without actual users interacting with the product.

What I’ve learned about expert testing:
- Quick and cost-effective: no need to recruit participants or wait for test sessions
- Professional insight: experts can spot high-level usability issues quickly
- Early in development: great for early stages when you’re still prototyping
- High-level feedback: focuses on general usability issues

The downsides I’ve seen:
- Lack of real-world feedback: experts might miss specific user needs
- Subjectivity: personal biases can influence evaluations
- Limited depth: can’t fully capture user-specific challenges

The expert testing risk formula

Risk = R / [1 + P × (1 + C × 0.1)]

Where:
R = Expert risk factor (usually 0.2 to 0.8)
P = Problem discovery rate (usually 0.3 to 0.5)
C = Task complexity (e.g., 1–10)

This formula shows how complexity affects expert effectiveness and provides a more nuanced view of expert testing risk.

User testing

User testing involves having real users — people who match your target audience — interact with the product. These users are observed while completing tasks, and their behaviours and reactions are recorded for analysis.

What I’ve learned about user testing:
- Real user insights: reveals actual usability issues users face
- Real-world data: involves actual people who will use the product
- Unbiased feedback: users offer authentic feedback about their struggles
- Exploration of edge cases: users might interact in unexpected ways

The challenges I’ve faced:
- Costly and time-consuming: requires significant resources
- Logistical challenges: can be difficult to organize
- Limited scope: small user groups only uncover some issues

The user testing risk formula

Risk = R / [1 + P × (1 + C × 0.2)]

Where:
R = User risk factor (usually between 0.1 and 0.3)
P = Problem discovery rate (typically between 0.6 and 0.8)
C = Task complexity (e.g., 1–10)

Because user testing has a higher discovery rate, this formula reflects that real users are more effective at finding issues, especially in complex tasks.

A real example, with R = 0.2, P = 0.75, and C = 7.3:

Risk_User = 0.2 / (1 + 0.75 × (1 + 7.3 × 0.2)) ≈ 0.07

This indicates very low residual risk, because real users are more likely to uncover usability problems than experts, especially for complex tasks.
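And a sketch comparing the two residual-risk formulas. The user-testing inputs are the article’s example; for the expert case the article only gives ranges, so the mid-range values below are my assumption:

def residual_risk(base_risk, discovery_rate, complexity, complexity_weight):
    # Risk = R / (1 + P * (1 + C * weight))
    # weight = 0.1 for expert testing, 0.2 for user testing
    return base_risk / (1 + discovery_rate * (1 + complexity * complexity_weight))

print(residual_risk(0.5, 0.4, 7.3, 0.1))   # expert testing (assumed mid-range R, P): ~0.30
print(residual_risk(0.2, 0.75, 7.3, 0.2))  # user testing, article's example: ~0.07

Even with generous assumptions for the expert evaluation, the user-testing residual risk comes out several times lower for a complex task, which is the article’s core argument for testing with real users.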
Comparing expert design, user research, and junior design

In my experience, not all design tasks need the same level of expertise, and not all projects require extensive user research. Understanding where each approach fits best within a cost-effective framework has helped me make smarter design decisions.

Expert design vs. junior design: finding the right fit

A real scenario: let’s say we’re creating a new UI for a security system with a moderately complex user flow (like an admin panel for managing user roles and permissions). What I’ve learned: an expert designer can complete tasks faster with fewer revisions, while a junior designer needs more time, guidance, and iterations.

Real costs:

Role            | Hourly Rate | Estimated Time (Hours) | Revision Factor | Total Cost
----------------|-------------|------------------------|-----------------|-----------
Expert Designer | $150/hr     | 12 hours               | 0.15            | $2,070
Junior Designer | $75/hr      | 24 hours               | 0.4             | $2,520

My analysis: in this case, the expert designer ($2,070) actually costs less than the junior designer ($2,520), because of the lower revision factor (0.15 vs 0.4) and the faster completion time (12 vs 24 hours).

Conclusion

I’ve spent years developing these formulas as a way to understand the complex relationships in UX design. While I never actually calculate these numbers in my daily work, they’ve become a valuable mental framework that helps me navigate design decisions and communicate with stakeholders.

What makes these formulas powerful isn’t their precision, but how they help me understand the trade-offs between cost, expertise, and impact. They’re like a compass that helps orient my thinking, not a rigid map that tells us exactly where to go.

UX design is both art and science. While these formulas help us understand the relationships between different factors, they can’t capture the full complexity of real-world design decisions. Every project brings unique challenges that require experience, intuition, and good judgment.

The real value of this framework lies in how it helps me think about and discuss complex design decisions. It’s a tool for understanding, not a replacement for experience. The best UX decisions come from balancing analytical thinking with creative intuition — something no formula can fully capture.

References

Nielsen, J. (2000). Why you only need to test with 5 users. Nielsen Norman Group. https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/

Nielsen, J. (1993). Usability engineering. Academic Press.

Nielsen, J., & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. Proceedings of the INTERACT ’93 and CHI ’93 Conference on Human Factors in Computing Systems, 206–213.

Faulkner, L. (2003). Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, Instruments, & Computers, 35(3), 379–383.

The cost of UX: balancing cost, expertise, and impact was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • UXDESIGN.CC
    When faces go virally wrong
AI portraiture and the crisis of the profile picture, in an era where identity is easily exploited and monetized by the viral Ghibli AI art.
Continue reading on UX Collective »