UX Collective
Curated stories on user experience, usability, and product design. By
@fabriciot
and
@caioab
.
Recent updates
  • Designing for conversion sounds ugly-you should learn it anyways
    uxdesign.cc
    Your designs might be having a real impact. You just may be unaware of it. Continue reading on UX Collective.
  • Cursor, vibe coding, and Manus: the UX revolution that AI needs
    uxdesign.cc
    It's time to move past the command-line era of AI. Photo by Brett Jordan on Unsplash.

    We keep hearing that AI is for everyone. No coding required. Your new coworker. An assistant for the masses. And yet, to get consistently useful results from today's best models, you still need to know the secret handshakes: the right phrases, the magic tags, the unspoken etiquette of LLMs. The power is there. The intelligence is real. But the interface? Still stuck in the past.

    And that disconnect became painfully clear one day, when a friend of mine tried to do something absurdly simple.

    When ChatGPT needed a magic spell

    My friend, let's call him John, decided to use ChatGPT to count the names in a big chunk of text. He pasted the text into ChatGPT and asked for a simple count. GPT-4o responded with a confident-sounding number that was laughably off. Confusion gave way to frustration as John tried new phrasing. Another round of nonsense numbers. By the fifth attempt, he looked ready to fling his keyboard out the window.

    I ambled over and suggested a trick: wrap the block of text in <text> tags, just as if we were feeding the model some neat snippet of XML. John pressed Enter and, with that tiny tweak, ChatGPT nailed the correct count. Same intelligence, but now it was cooperating.

    John was both relieved and annoyed. Why did a pair of angle brackets suddenly turn ChatGPT into a model of precision? The short answer: prompt engineering. That term might sound like a fancy discipline, but in practice it can look like rummaging around for cryptic phrases, tags, or formatting hacks that will coax an LLM into giving the right answer. It is reminiscent of a fantasy novel where you have infinite magical power, yet must utter each syllable of the summoning chant or risk conjuring the wrong monster.

    In an era when we supposedly have next-level AI, it's hilarious that we still rely on cryptic prompts to ensure good results.
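    The tag trick from John's story can be sketched in a few lines. This is a minimal, illustrative example, not a documented API: the tag name, prompt wording, and helper name are assumptions, and any delimiter that clearly separates instructions from data would serve the same purpose.

```python
def build_count_prompt(passage: str) -> str:
    """Wrap user-provided text in explicit <text> tags so the model
    can tell the instruction apart from the data it should act on."""
    return (
        "Count how many distinct person names appear in the text "
        "between the <text> tags. Reply with the number only.\n"
        f"<text>\n{passage}\n</text>"
    )

# The resulting string is what you would paste (or send via API) to the LLM.
prompt = build_count_prompt("Alice met Bob. Later, Alice called Carol.")
```

    The point is not the specific tag but the separation of concerns: the model no longer has to guess where the instruction ends and the data begins.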
    If these models were truly intuitive, they would parse our intentions without requiring special incantations. Instead, we're forced to memorize half-hidden hints, like travelers collecting obscure tourist phrases before visiting a foreign land. That is a design problem, not an intelligence problem.

    Sound familiar?

    If this rings a bell, it's because we've seen a similar dynamic before, in the early days of personal computing. As usability expert Jakob Nielsen explains, the very first user-interface paradigm was batch processing, where you had to submit an entire batch of instructions upfront (often as punched cards) and wait hours or even days for the results. This eventually gave way to command-based interfaces (like DOS or Unix) in the 1960s, where you'd type in a single command, see an immediate result, and then type another command. Back then, command lines ruled the UI landscape, demanding meticulous keystrokes that only power users could navigate.

    Then the graphical user interface arrived, with clickable icons and drag-and-drop metaphors that pulled computing out of the realm of black-screen mysticism and into the daylight. You no longer had to type COPY A: C:\DOC.DAT; you just dragged a file into a folder. That wasn't a minor convenience; it was a revolution that let everyday people, not just specialists, feel in control of the machine for the first time.

    [Image: Early IBM PC with a command-line-like interface, from Wikipedia]

    We saw the same leap again and again. WYSIWYG editors let people format documents without memorizing tags or markup. Mosaic put the internet behind glass and made it something your mom could use. The smartphone touchscreen transformed phones from keypad-laden gadgets into intuitive portals you could navigate with a swipe and a tap.
    In every case, the breakthrough wasn't just better tech under the hood; it was better ways to interact with it.

    [Image: The Mosaic browser gave a GUI to the Internet, from Web Directions]

    That's what made ChatGPT the spark that set off the LLM explosion. Sure, the underlying tech was impressive, but GPT-style models had been kicking around for years. What ChatGPT nailed was the interface. A simple chatbox. Something anyone could type into. No setup. No API key. No notebook to clone. It didn't just show people that LLMs could be powerful; it made that power feel accessible. That was the GUI moment for large language models.

    [Image: ChatGPT was a GUI revolution, yet its interaction is still reminiscent of command-line UI]

    Yet, as many people (Nielsen included) have noted, we're still in the early days. Despite ChatGPT's accessibility, it still expects precise prompts and overly careful phrasing. Most people are left relying on Reddit threads, YouTube hacks, or half-remembered prompt formats to get consistent results. We're still stuck in a command-line moment, wearing wizard hats just to ask a machine for help.

    The tech is here. The intelligence is here. What's missing is the interface leap, the one that takes LLMs from impressive to indispensable. Once we design a way to speak to these models that feels natural, forgiving, and fluid, we'll stop talking about prompt engineering and start talking about AI as a true collaborator. That's when the real revolution begins.

    Examples of emerging UX in AI

    Cursor = AI moves into the workspace

    Most people still interact with LLMs the same way they did in 2022: open a chat window, write a prompt, copy the answer, paste it back into whatever tool they were actually using. It works, but it's awkward. The AI sits off to the side like a consultant you have to brief every five minutes. Every interaction starts from zero. Every result requires translation.

    Cursor flips that model inside out. Instead of forcing you to visit the AI, it brings the AI into your workspace.
    Cursor is a code editor: an IDE, or integrated development environment, the digital workbench where developers write, test, and fix code. But unlike traditional editors, Cursor was built from the ground up to work hand in hand with an AI assistant. You can highlight a piece of code and say, "Make this run faster," or "Explain what this function does," and the model responds in place. No switching tabs. No copy-paste gymnastics.

    [Demo: Cursor's changelog, where codebase and AI assistant co-exist]

    The magic isn't in the model. Cursor uses off-the-shelf LLMs under the hood. The real innovation is how it lets humans talk to the machine: directly, intuitively, with no rituals or spellbooks. Cursor doesn't ask users to understand the quirks of the LLM. It absorbs the context of your project and adapts to your workflow, not the other way around.

    It's a textbook example of AI becoming more powerful not by getting smarter, but by becoming easier to work with. This is what a UX breakthrough looks like: intelligence that feels embedded, responsive, and natural, not something you summon from a command line with just the right phrasing.

    So what happens when we go one step further and make the interaction even looser?

    Vibe coding = casual command line

    Vibe coding, a term popularized by Andrej Karpathy, takes that seamless interaction and loosens it even more. Forget structured prompts. Forget formal requests. Vibe coding is all about giving the LLM rough direction in natural language ("Fix this bug," "Add a field for nickname," "Reduce the sidebar padding") and trusting it to figure out the details.

    [Image: By Jay Thakur on HackerNoon]

    At its best, it's fluid and exhilarating. The LLM acts like a talented engineer who has read your codebase and can respond to shorthand with useful action. That sense of flow, of staying in the creative zone without stopping to translate your thoughts into machine-friendly commands, is powerful.
    It lets you focus on what you want to build, not how to phrase the ask.

    But here's the catch: you still have to know how to talk to the machine.

    Vibe coding isn't magic. It's an interaction style built on top of a chat interface. If you don't already understand how LLMs behave (what they're good at, where they stumble, how to steer them with subtle rewordings), the magic breaks down. You're still typing into a text box, hoping your intent comes through. It's just that now, the prompts are written in lowercase and good vibes.

    So while vibe coding lowers the friction for seasoned users, it still relies on unspoken rules and learned behavior. The interface is friendlier, but it's not yet accessible. It shows that AI can feel conversational, but still speak a language only some people understand.

    If Cursor showed us what it looks like to embed AI into tools we already use, and vibe coding loosened the interaction into something more human, then what's next? What happens when you don't even need to steer, when AI takes the wheel?

    Manus = the interface is the intelligence

    When Manus hit the scene, it made waves. Headlines hailed it as the first true AI agent. Twitter lit up. Demos showed the tool writing code, running it, fixing bugs, and trying again, all without needing constant human input. But here's the quiet truth: Manus runs on Sonnet 3.7, a Claude model you can already access elsewhere. The model wasn't new.

    What was new, and genuinely exciting, was the interface and interactions.

    Instead of prompting line by line, Manus lets users speak in goals: "Analyze this dataset and generate a chart," or "Build a basic login page." Then it takes over. It writes the code. It runs the code. If something breaks, it investigates. It doesn't ask you to prompt again. It acts like it knows what you meant.

    [Demo: Manus creating icons that match the TechCrunch website]

    This is delegation as design. You're no longer babysitting the model.
    You're handing off intent and expecting results. That's not about intelligence. That's about trust in the interface. You don't need to wrangle it like a chatbot or vibe with it like a coder who knows the right lingo. You just ask, and Manus handles the how.

    And that's the point. The Manus moment wasn't about a smarter model. It was about making an existing model feel smarter through better interaction. Its leap forward wasn't technical; it was experiential. It didn't beat other tools by out-reasoning them. It beat them by understanding the human better.

    This is the future of AI: tools that don't just process our input, but interface with our intent.

    All three examples (Cursor, vibe coding, and Manus) prove the same point: it's not the size of the model that changes the game, it's the shape of the interaction. The moment AI starts understanding people, instead of people learning how to talk to AI, everything shifts.

    The missing link between power and people

    If we return to John's struggle with <text> tags, it's clear his predicament wasn't just a glitch. It revealed how these advanced models still force us to speak their language rather than meeting us halfway. Even though powerful LLMs can write code and pass exams, many AI interactions still feel like sending Morse code to a starship. The real shortfall isn't intelligence; it's an outdated, command-line-like user experience that demands ritual rather than collaboration.

    Tools like Cursor, Manus, and vibe coding show glimpses of a new reality. By embedding AI directly into our workflows and allowing more natural, goal-oriented conversations, they move us closer to what Jakob Nielsen calls a complete reversal of control, where people state what they want and let the computer figure out how to get there. We've watched technology make leaps like this before, from command lines to GUIs, from styluses to touchscreens, and each time, the real revolution was about removing friction between people and possibility.

    AI will be no different.
    The next major step isn't just bigger models or higher accuracy: it's designing interfaces that make AI feel like a partner rather than a tool. When that happens, we'll stop talking about prompts and simply use AI the way we use search bars or touchscreens. Intuitively and effortlessly.

    This shift is not only the future of AI, but also the future of computing itself.

    "Cursor, vibe coding, and Manus: the UX revolution that AI needs" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Building and calibrating trust in AI
    uxdesign.cc
    How to manage the inherent uncertainty of AI.

    Figure 1: The trust continuum: no trust is harmful, but overtrust is dangerous. You need to pull your users into the golden middle of calibrated trust.

    Trust makes relationships go round, whether it is between people, businesses, or the products we rely on. It's built on a mix of qualities like consistency, reliability, and integrity. When any one of these breaks, the relationship cracks with it. A friend who disappears for a year, a car that won't start half the time, a teammate who never pulls their weight: trust thins out, and we start looking for alternatives.

    AI is a tough nut to crack when it comes to trust. By nature, it's probabilistic, uncertain, and makes mistakes. Earning user trust takes real effort, and once you've earned it, you often need to dial it back. In AI, overtrust is dangerous. If users accept AI outputs by default, errors will snowball into bad decisions with real-world consequences [1][2]. Your users should become responsible collaborators who calibrate their trust by questioning, adjusting, and taking ownership of how they use the system (figure 1).

    So, how do you build appropriate trust into your AI system? When working with AI teams, I often see trust reduced to model accuracy: users don't trust the system because it makes mistakes. The assumption is that trust is a technical problem, and engineers or data scientists need to fix it.

    But that's only part of the picture. In reality, trust is primarily built through how users understand and experience your product: how well it understands their needs, communicates its value, and supports them in the moment. This article focuses on the user-facing dimensions of trust, namely the addressed use case, value creation, and user experience.

    Figure 2: Building trust through the user-facing components of your AI system (cf. this article for the full model of AI systems).

    We'll explore practical techniques and design patterns for building and calibrating trust.
    I will illustrate these with insights from a real-world project where we used AI to support R&D teams at a major automotive manufacturer.

    Use case: Optimize for relevance and impact

    Choosing the right use case is the first strategic step towards trust. Today, AI has become fairly accessible, and we see generic AI features like chat pop up in every other product. Often, they feel generic, awkward, and disconnected from real user needs. If you don't want to fall into this me-too bucket, you need to show users that you understand them and that your AI is there to help, not distract.

    At Anacode, we recently partnered with the R&D division of a major automotive manufacturer. Our goal was to track trends and emerging technologies to support the company's innovation efforts. We kicked things off with a motivated group of internal AI champions, but soon we ran into skepticism from the wider team. These were seasoned engineers and researchers, people who pride themselves on knowing their domain inside and out. The last thing they wanted was a black box spitting out unsolicited advice. But a tool that subtly enhances their expertise, boosts their outcomes, and improves their standing in the company? For sure, that would be interesting.

    Solving the right problem

    The problem of our users wasn't a lack of intelligence or insight; it was signal overload. Every week, new patents, startups, funding rounds, and papers landed in their inboxes and newsfeeds. They needed help seeing what actually mattered, so we framed the system as a radar that could spot early signals, surface momentum, and point experts toward what's worth investigating. This directly addressed their pain points and brought initial buy-in.

    More generally, our use case needs to check two boxes:

    1. It must be a problem where AI can shine and create significant value (cf. chapter 2 of my book The Art of AI Product Development on discovering the best AI opportunities).
    2. It must be perceived as a problem by the users.
    When AI steps into a space users are frustrated by (time-consuming, noisy, repetitive tasks), it's appreciated. It supports users and clears space to focus on the parts they care about, such as judgment, creativity, and strategy.

    Seamlessly integrating into existing workflows

    Your AI should support your users, rather than disrupting their workflows and adding cognitive burden. In our case, the R&D teams already had well-worn (though not always efficient) processes and artefacts: briefs, reports, technical review decks, etc. A tool that introduced friction wouldn't stick.

    Here's a straightforward way to plan for seamless integration:

    1. Start by mapping the existing, human process.
    2. Pinpoint where AI can add real value, whether by automating grunt work, surfacing hidden insights, or speeding up analysis.
    3. Build a tailored AI journey around these moments.
    4. Plan to deliver outputs in the formats users already trust.

    In our case, the obvious move was to start with the first step in the process, namely technology landscape monitoring, where large quantities of data need to be combined, structured, and distilled into insights (figure 3).

    Figure 3: Mapping AI functionality against the existing human process

    Align impact with trust

    The higher the stakes of your AI, the more carefully you need to build and support trust. A bad movie recommendation on Netflix is harmless. But a flawed R&D insight can lead to a negative impact down the road, especially if it is formulated in the upbeat, self-confident language of a modern LLM. Initially, our system monitored and quantified trends. That was a relatively safe role, which supported early adoption. But as we ventured into evaluating and recommending specific innovations, skepticism increased. Experts questioned how the AI derived its suggestions and whether it understood real industry challenges.

    To mitigate this, we started with low-key features like highlighting competitor innovations rather than advising on the company's own R&D activities.
    Over time, the AI got more competent, and we expanded its scope. Framing is also important: by talking about innovation "ideas" rather than "recommendations," we made clear that users were still in the driving seat. The expert responsibility of assessing and refining the ideas remained on their side.

    Your trust capital is built and reinforced in a virtuous cycle. As users build trust in small features, they will be more likely to accept more AI over time [8]. On the other hand, as your AI system keeps improving, it will also be more likely to live up to growing expectations as you increase its scope.

    Value

    When your product gets out, it should hook users with immediate, tangible value. That initial win opens the door for engagement. But trust doesn't build overnight; you need to keep delivering and raising the bar. As users rely more on your system, value must compound. This ongoing progress is what helps offset AI's inherent imperfections: its uncertainty, occasional errors, and evolving boundaries.

    Provide a fast track to measurable value

    AI can create value along different dimensions, such as productivity, process improvement, and emotional benefits (see this post for more details). In my experience, starting with simple productivity and efficiency benefits is the best way to get your foot in the door. Find ways to save time and cost for your users. This is measurable, tangible, and widely appreciated. Once you have built an initial layer of trust, you can expand to more subtle benefits.

    In our example, users struggled with monitoring large quantities of data over time. Thus, we started by providing verified and relevant bits of insight, leaving users wanting more. These were concise and factual summaries of relevant trends, for example: "Solid-state battery patents are up 40% year-over-year. Toyota and Hyundai lead the activity. Capital is shifting toward next-gen anode materials."
    Backed by relevant data, this insight quickly made it into internal reports and attracted more users. Not because it was new information (everyone knew solid-state was heating up), but because it was framed cleanly and reliably and saved hours of hunting and verification.

    That's where AI gains its right to play: cost or time savings that are apparent in existing workflows and decisions. By contrast, if experts have to engage in a long discovery process to see the value of your AI tool, they'll likely drop it ("I could just as well search for this data myself").

    Demonstrate integrity with realistic communication

    This principle is simple, but hard to stick to. In a world of marketing superlatives and inflated promises, being honest can feel scary. But in the long run, it pays off. Avoid the trap of overpromising, whether it is about the accuracy, scope, or value of your AI. If the product doesn't hold up, users will get frustrated and disengage ("We've seen this before: big claims, no follow-through").

    In the UX section, you will also learn how to communicate limitations and uncertainty inside the product using design patterns like confidence scores and oversight prompts.

    Inject domain expertise into your system

    Especially in B2B, trust crumbles fast when AI doesn't get its domain. If your system talks in generic terms or misses the nuance experts expect, forcing them to constantly edit the outputs, they'll walk away. At the latest, this will happen when they think something along the lines of "I'm spending as much time fixing this as I would doing it from scratch."

    In our case, early versions of the system flagged "autonomous vehicle software" as a key trend. This was correct, but it felt shallow and obvious. After fine-tuning for industry-specific relevance, the system started surfacing more granular signals, like anode-free lithium battery R&D, or the use of self-supervised learning in in-cabin driver monitoring systems.
    It got proficient at using the jargon of its users, and the insights could be directly reused.

    My article "Injecting domain expertise into your AI system" provides a comprehensive overview of the methods to customize your AI system for specific domains. There is a learning curve to this: your AI doesn't need to act like a domain expert from the beginning. Often, giving users an opportunity to enrich your system with their domain knowledge will not only improve performance but also create a sense of ownership and deepen engagement. This leads us to the next trust-building technique, namely the visible compounding of value through continuous improvement.

    Commit to continuous improvement

    One of the best practices of successful AI development is to launch early and collect relevant data and feedback for improvement. Releasing an imperfect system can be scary, but in the end, AI is never perfect, so get used to the idea. In the first three months, our system surfaced several useful signals, but many insights were still not to the point. While this was enough to demonstrate initial value, we were clearly in a race against time. Once users use your AI more frequently, they develop a relationship of their own. Their expectations grow as they become more proficient with the tool and want to rely on it in their daily work.

    You need to catch this momentum and continuously optimise your system. Fortunately, in most cases, you can create feedback mechanisms that not only point you to the shortcomings of your system, but also allow you to collect meaningful training data to improve it over time. The section on feedback and learning will dive into different ways to collect feedback from your users.

    The use case you address and the value you provide are at the core of your AI product strategy and positioning. They motivate your users to buy and use your application. But the trust muscle is built over time.
    On a day-to-day basis, your users will be interacting with your AI via the user interface, and this is where the real fun begins.

    User experience

    There are plenty of design patterns you can use to help users build and calibrate their trust. These relate to transparency, control, and feedback collection. In the end, they boil down to mitigating the uncertainty and the risks of errors made by the AI.

    Transparency: Aligning user expectations

    Users build mental models of the products they interact with [7]. Trust will break if it turns out that these models are not aligned with reality. Now, in AI, a lot of the action happens under the hood, away from the eyes of your users. An AI app that relies on ChatGPT will behave differently than one that uses a deeply customised LLM, though the interaction will look the same on the surface. To avoid mismatched expectations, you need to crack open the black box of your system and explain its relevant workings to your users. Here are some design patterns to achieve this:

    In-context explanations: Help users understand how an output was generated, right where they see it. For example, a system tracking emerging technologies might explain a trend by breaking down its inputs, like recent patent spikes, venture capital activity, and competitor filings. In graphical interfaces, this can happen with interactive elements like tooltips. In conversational interfaces, you provide the explanation directly in the conversational flow.

    Footprints / chains of thought: Let users trace how the AI reached its conclusion. In our case, clicking on a suggested innovation revealed its source trail: the documents, events, and filters that led to its prioritization.

    Figure 4: Disclose the AI's thinking to show how it reached its conclusion

    Caveats and disclaimers: Highlight known limitations of an output or dataset.
    If data coverage is sparse, or signals conflict, clearly state that early so users don't make decisions based on flawed assumptions.

    Figure 5: Proactively communicate the limitations of your system

    Citations: Link outputs to source data, whether those are documents, reports, or APIs, so users can verify the AI output themselves.

    Figure 6: Display the raw sources, allowing users to check the data themselves

    Everboarding and guidance: Keep users informed as the system evolves. When features or logic change, provide lightweight tooltips or embedded guidance to explain what's different and why it matters.

    Verification-focused explanations: Instead of just explaining how the output was formed, help users evaluate its reliability. Include self-critiques ("This signal may be inflated by cross-market overlap") or alternative interpretations to activate user judgment.

    When thinking about transparency, pay special attention to communicating the uncertainty of your AI. This will help users calibrate their trust and spot errors. Here are some techniques:

    Confidence scores: Use simple, visible indicators, like percentages or a low/moderate/high scale, to signal how much confidence the system has in a result.

    Figure 7: Confidence scores allow you to highlight results that require further verification

    For text content, you can use both visual and linguistic indicators of uncertainty. For example, you can visually highlight phrases, numbers, or facts that need validation.

    Figure 8: Inline highlights can signal bits of uncertainty, keeping the rest of the insight intact

    You can also prompt the generating LLM to use uncertain language to hedge questionable content ("This may suggest...", "Data is inconclusive", "Further validation recommended", etc.)

    AI explanations are a living topic; they will evolve quickly as you start getting feedback, questions, and complaints from your users.
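    The low/moderate/high confidence scale mentioned above can be sketched in a few lines. The thresholds (0.5 and 0.8) are illustrative assumptions, not values from the project described here; in practice they should be calibrated against your own model's observed accuracy.

```python
def confidence_label(score: float) -> str:
    """Map a raw model confidence in [0, 1] to a user-facing label."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidence score must be between 0 and 1")
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "moderate"
    return "low"  # surface these results as "needs verification" in the UI
```

    A coarse label like this is often easier for users to calibrate against than a raw percentage, at the cost of hiding small differences between scores.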
    I prepared a cheatsheet that you can keep at hand when updating your explanations, which you can download here.

    Control: Putting users in the driving seat

    Transparent outputs have little value if users cannot act on the additional information. To establish collaboration between users and AI, you need to give them control and a sense of responsibility. Here are some design patterns that can be applied before, during, and after the AI's job:

    Pre-task setup: Let users configure the scope of AI tasks (e.g., data sources, timeframes, entity filters) before launching analysis. This increases relevance and avoids generic results.

    Adjustable signal weighting: Let users decide how much weight to give different sources or criteria. In trend monitoring, one user might prioritize venture activity; another, academic citations. Explain the impact of the signals.

    Figure 9: Allow users to decide how much weight they want to put on specific data sources

    Inline editing and actions: Make parts of the output editable or replaceable in place. Users should be able to refine vague terms or tweak filters without rerunning the entire task.

    Figure 10: Proficient users can be given the right to override AI suggestions

    Emergency stop (abort generation): Allow users to halt generation processes if they notice early errors or misalignment. As most of us have experienced when using ChatGPT & Co., this can save a lot of time when the user sees that the AI got off track.

    Intentional friction: Integrate friction to activate critical thinking. Challenge users with questions like "What assumptions does this result rely on?" or "Could this be explained by noise?" These prompts for critical thinking are also called cognitive forcing functions (CFFs; cf. [5]). They help calibrate reliance, especially early in the trust journey.

    Error management: Mitigate the risks of AI mistakes

    Even if your engineers optimize AI performance to death, mistakes will still slip through to your users.
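    The adjustable signal weighting pattern described above could be sketched as follows. The signal names, scores, and weights are illustrative assumptions, not data from the project: the point is only that two users with different priorities see the same underlying signals ranked differently.

```python
def weighted_trend_score(signals: dict[str, float],
                         weights: dict[str, float]) -> float:
    """Combine per-signal scores (each in 0-1) using user-chosen weights,
    normalized so the result stays in the 0-1 range."""
    total = sum(weights.values())
    if total <= 0:
        raise ValueError("at least one weight must be positive")
    return sum(signals.get(name, 0.0) * w for name, w in weights.items()) / total

# Hypothetical scores for one trend across three monitored signals.
signals = {"venture_activity": 0.9, "patents": 0.4, "citations": 0.2}

# One user emphasizes venture activity; another, academic citations.
vc_view = weighted_trend_score(signals, {"venture_activity": 3, "patents": 1, "citations": 1})
academic_view = weighted_trend_score(signals, {"venture_activity": 1, "patents": 1, "citations": 3})
```

    Exposing the weights in the UI (as in figure 9) gives users a direct, legible lever over the ranking, which is exactly the sense of control this section argues for.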
    You need to turn your users into collaborators who help you catch these failures and turn them into learning opportunities. Here are some best practices to keep them aware and alert:

    Highlight error potential during onboarding: Set realistic expectations about model performance. "About 1 in 10 results may need review" is better than pretending the system is flawless. Communicate the strengths and weaknesses of the system, and introduce users to both strong and flawed outputs. This shapes realistic expectations and shows how collaboration with AI works in practice.

    Inline feedback mechanisms: Enable users to flag issues directly where they occur: misclassifications, false positives, outdated data.

    Immediate acknowledgment and recovery: After feedback is submitted, respond visibly and offer a fix: "Thanks for flagging. Would you like to regenerate the output without that item?" Communicate the impact of the user's feedback as precisely as possible ("Your feedback will be integrated in the next release at the end of the month") to encourage users to build a feedback muscle.

    Feedback and learning: Let the system grow with the user

    Trust deepens when users feel they have a voice. If they know they can shape the system, and see that their input drives real improvements, they stop seeing the AI as a black box and start treating it as a partner. That's how you turn passive users into co-creators.

    Start by capturing implicit feedback: Where do users zoom in? What do they edit or delete? What do they consistently ignore? These behavioral signals are gold for tuning relevance. Then, layer in explicit feedback mechanisms:

    Binary feedback: A simple thumbs up/down gives you an instant signal on output quality. It's low-effort and useful at scale.

    Figure 11: The ubiquitous thumbs up/down widget allows for quick (though shallow) feedback collection

    Free-text inputs: Especially useful early on, when you're still learning how your system is missing the mark.
It requires more effort to parse, but chances are that the insights are worth it.

Figure 12: Free-text feedback can be useful to discover new issues and failure modes

Structured feedback: As your system matures and common failure patterns emerge, offer users predefined categories to speed up feedback and reduce ambiguity.

Figure 13: Use structured feedback when you are already aware of the major failure modes of your system

Integrating these patterns is technically easy, but without clear incentives, many busy users might ignore them. Here are some tips to pull your users into the feedback loop:

Communicate the impact of the feedback clearly. This will show users that it helps improve the system, which is a reward in itself.

Consider constructive extrinsic rewards. One advanced mechanism we used was unlocking deeper customization and control for users who consistently engage with the system. This not only incentivizes them to provide feedback, but also supports power users without overwhelming novices.

Summary

Trust builds gradually, across every interaction, every insight delivered (or missed), every decision your users make with your AI by their side. Time is important: your AI will (hopefully) improve, producing more relevant insights and fewer errors. Your users will also change: they'll become more skilled, more reliant, and more demanding. If your system doesn't grow with them, confidence and trust can erode.

That's why trust in AI requires careful planning and a layered strategy:

Use case and value communication create the strategic foundation: Is this solving a real problem? Can users clearly see the benefit?

UX handles the day-to-day work of trust: transparency, control, and feedback loops shape how users build their mental model of your product and calibrate trust in real time.

AI is tricky by design: uncertainty and errors are part of the package. But if you're intentional and build your product for clarity and collaboration, trust can become your strongest asset and differentiator.
It will fuel adoption, invite feedback, and turn cautious users into advocates.

References and further readings

[1] Howard, A. (2024). In AI We Trust - Too Much? MIT Sloan Management Review. https://sloanreview.mit.edu/article/in-ai-we-trust-too-much/
[2] Sponheim, C. (2024). When Should We Trust AI? Magic-8-Ball Thinking. Nielsen Norman Group. https://www.nngroup.com/articles/ai-magic-8-ball/
[3] McKinsey & Company (2023). Building AI Trust: The Key Role of Explainability. QuantumBlack, AI by McKinsey. https://www.mckinsey.com/capabilities/quantumblack/our-insights/building-ai-trust-the-key-role-of-explainability
[4] Microsoft Aether Working Group (2022). Overreliance on AI: Literature Review. Microsoft Research. https://www.microsoft.com/en-us/research/wp-content/uploads/2022/06/Aether-Overreliance-on-AI-Review-Final-6.21.22.pdf
[5] Microsoft Research (2025). Appropriate Reliance: Lessons Learned from Deploying AI Systems. https://www.microsoft.com/en-us/research/wp-content/uploads/2025/03/Appropriate-Reliance-Lessons-Learned-Published-2025-3-3.pdf
[6] Microsoft Learn (n.d.). Overreliance on AI: Guidance for Product Teams. Microsoft AI Playbook. https://learn.microsoft.com/en-us/ai/playbook/technology-guidance/overreliance-on-ai/overreliance-on-ai
[7] Google PAIR (2019). People + AI Guidebook: Designing Human-Centered AI Products. https://pair.withgoogle.com/guidebook/
[8] Kniuksta, D., & Vedel, S. (2023). Deep Dive: Engineering Artificial Intelligence for Trust. Mind the Product. https://www.mindtheproduct.com/deep-dive-engineering-artificial-intelligence-for-trust/
[9] DNV (2023). Building Trust in AI: Creating Responsible AI Systems Through Digital Assurance. DNV, Future of Digital Assurance. https://www.dnv.com/research/future-of-digital-assurance/building-trust-in-ai/
[10] Lipenkova, J. (2025). The Art of AI Product Development, chapter 10. Manning Publications.
[11] Anacode GmbH (2025). Cheatsheet: Explaining Your AI Systems.

Note: Unless otherwise noted, all images are by the author.

Building and calibrating trust in AI was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • How GenAIs build diverging color schemes
    uxdesign.cc
    Gemini and Copilot suggestions for Pantone Mocha Mousse divergence. Continue reading on UX Collective
  • Why don't they spend (enough) on research?
    uxdesign.cc
    The illusion and the curse of knowledge behind the scene.

When I ask you to think about a close relative and tell me about them, you can easily do it and make more or less accurate statements about them. But when I ask you to think about your users, that's an impossible task. It's simply not feasible to know hundreds, thousands, or even millions of people well. In such cases, you have two options. Either you pick one well-known customer from the crowd and think about them, or (and this is what matters here) you view them through a concept or a model.

Model

You build this mental model using information gathered from reality and, importantly, by simplifying it. This process is called induction. Later, when you use this model (or let's say, this box) to communicate with your customers or to develop products and services for them, that's deduction.

Obviously, the more information you gather, the more sophisticated your model becomes.

The problem starts when the model is built from very little information, or when decisions are made solely based on one person's model or opinion, regardless of everything else. This person is known in the literature as a HiPPO: the Highest Paid Person's Opinion. Unfortunately, many organizations work like this.

Let's go back to simplification and the process of creating models. Here's where humans, as the problem, come into play. The problem is always us. We, humans, tend to distort information. And not just a little, but significantly. What's more, we do it without even realizing it.

Influencing phenomena

Let me highlight two major biases that significantly influence us, based on my own experience.

The illusion of knowledge

Let me share a brief study in which Frank Keil and his students asked participants to rate how well they understood the functioning of everyday objects like zippers, toilets, and ballpoint pens on a scale from 1 to 7. On average, participants rated their knowledge between 4 and 5.
Then, Frank and his team asked them to explain in detail how these objects work. Most of them had no clue; their real results were between 1 and 2.

This is called the illusion of knowledge: we tend to overestimate our knowledge.

What does it mean in relation to clients? We believe we know our users better than we actually do.

The curse of knowledge

In 1990 at Stanford University, Elizabeth Newton divided people into two groups: tappers and listeners. Each tapper was asked to tap the rhythm of a well-known song (e.g., "Happy Birthday") on a table while a listener tried to guess the song.

Before the task, Elizabeth asked the tappers to estimate what percentage of songs the listeners would guess correctly. The tappers estimated 50%, while the actual result was only 2.5%. A huge difference.

Elizabeth called this the curse of knowledge: we cannot detach ourselves from what we already know.

What does it mean in relation to clients? We assume users and customers have far more knowledge than they actually do.

I don't know how you feel, but I am often told by my clients: "We already know enough, what's more, a lot about our customers." To be honest, we as designers are not an exception to that.

In my experience, the illusion of knowledge and the curse of knowledge are the primary reasons why companies and clients are reluctant to spend (enough) on research.

Scientific methodologies

Science currently identifies nearly 220 such biases. It would be foolish to think humans are rational creatures. Scientific methodologies have evolved precisely because of our biases, aiming to make experiments and research more objective and rational. The goal is practically to remove humans and their weaknesses from the equation.

It is out of the question: you should do research. You need to research in order to understand your users better and to refine your model. Invest in it!

I'm well aware that there are many other factors contributing to why companies don't spend (enough) on research.
If you know of any, feel free to share them in the comments. In this post, I highlighted the two most important biases.

Why don't they spend (enough) on research? was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • AI in automotive design simulations: breaking the barriers to adoption
    uxdesign.cc
    Key challenges slowing AI adoption, and how to overcome them

Optimizing vehicle design. Image designed by Freepik

Imagine an automotive engineer optimizing a new vehicle design by running high-fidelity (detailed and realistic) simulations. Typically, running such simulations (also referred to as virtual prototyping) for crash tests, aerodynamics, and structural integrity takes days. Now, imagine an AI-enhanced system that cuts that time in half, while improving usability and providing better insights. Sounds like a game-changer, right? Yet, despite the potential benefits, AI adoption in engineering simulations for automotive design remains surprisingly low. Why is this so?

This article explores the key factors behind the low adoption of AI in engineering simulations, identifies the most critical of these factors, and suggests potential ways to address them. The information presented is based on recently conducted industry research (Kini, 2024), which derives information from existing literature, from real-world examples and case studies, and from interviews with experts in the field of simulations and AI. A survey of automotive industry professionals was also conducted as part of the research to validate the gathered information and insights. Although this article is written with AI/simulation vendors, automotive engineers, and decision-makers in mind, it is relevant to anyone working in AI, product design, or new technology adoption.

The promise of AI in engineering simulations

As organizations lock horns in a fiercely competitive market, the need to deliver new products with higher frequency is expected to escalate.
A McKinsey/NAFEMS joint article (Ragani, Stein, Keen, & Symington, 2023) states that automotive, aerospace, and other major industries expect one-third of their revenues to come from the sale of new products (amounting to over $30 trillion in revenues over the next 5 years), highlighting the importance of an agile and fast product development process.

Virtual prototyping has shortened design cycles and sped up product development: vehicle design testing durations have been reduced from months, when using only physical prototypes, to days when also employing virtual prototyping simulations (HBR, 2021). Such high-fidelity simulations, though, are computationally demanding and still considered time-consuming. The simulation industry has thus continued its relentless search for ways to reduce the number of simulations that need to be run, and the simulation and analysis time, to optimize a design. The advent of Artificial Intelligence (AI), and its rapid progress in the last decade, has provided the simulation industry with a potentially powerful lever to achieve those objectives.

AI offers solutions that can dramatically accelerate simulations, reduce computational costs, and enhance design flexibility. This includes the use of traditional AI applications for faster prediction by using existing data, intelligent parameter selection, and smarter algorithms. Traditional AI applications also help improve outcomes by covering a broader design space.
It also includes the use of generative AI applications, for generative design (generating entirely new concepts that designers would otherwise not have thought of), synthetic data generation, and the creation of virtual engineering assistants.

An overwhelming majority of industry professionals working on engineering simulations and/or AI feel that AI can bring great value to simulations.

Industry professionals' opinion regarding the perceived value or importance of AI for simulations (Kini, 2024)

But if AI is so promising and capable, most automotive companies would have already adopted it to enhance productivity and streamline workflows. Yet, this is not the case, and AI adoption remains low.

Breaking the AI barrier: Why adoption stalls

Despite all the buzz, the adoption of AI for engineering simulations in production remains low (~20%). This is despite the fact that over 90% of companies have explored or piloted AI projects. The following visual shows the percentage of companies in the various stages of AI adoption, highlighting a gap between exploration, piloting, and full deployment/adoption in production.

Companies in various stages of AI adoption (Kini, 2024)

What is keeping these companies from progressing from the exploring/piloting stage to adopting AI for production? An expansive set of possible reasons/factors was collected from existing research and expert interviews; these have been categorized into technological, organizational, and environmental factors, as shown below.

Factors impacting the AI adoption decision (Kini, 2024), categorized using the T-O-E framework (Tornatzky, Fleischer and Chakrabarti, 1990)

An industry survey was carried out to ascertain which of these factors were of primary or highest concern. The most critical factors were found to be:

AI integration: Concerns and complexities related to integrating AI into existing workflows. Even when companies are successful in exploratory projects, practical roadblocks arise during full-scale integration.
The root causes for these could range from workflow, architecture, and data incompatibilities to a general resistance to change. Integration challenges are real: 58% of surveyed industry professionals said that bringing AI into existing simulation workflows requires major effort.

AI explainability: Concerns about the lack of explanation given by AI algorithms to support their predictions and outputs (the "black box" problem). Engineers like to understand why a model gives a certain result, but with AI, that transparency isn't always there. No wonder only 13% of the respondents were satisfied with the current state of AI explainability.

AI maturity: Concerns that AI may not be sufficiently field-proven and thus not reliable. AI is still seen by some as a tool that is in its infancy, and not quite ready for prime time. In fact, only 17% of surveyed professionals felt that AI tools for simulations were sufficiently mature.

Budget/investment: Realization that sizeable investment may be needed to achieve formal and full-scale AI adoption. Budget constraints remain an issue for 43% of companies.

Factors of secondary concern (less critical, but important nonetheless) were:

Lack of AI expertise: Concern that the skill and knowledge to understand and use AI may be lacking. A split verdict: half of the survey respondents felt that sufficient AI expertise is typically available in-house (or can easily be acquired via training, hiring, or outsourcing), while the other half felt AI expertise is limited and difficult to acquire.

Forthcoming government regulations: Concerns regarding the possible impact of anticipated regulations for the responsible use of AI. Here too, almost 50% expressed concerns that forthcoming regulations may impose constraints adversely impacting their use of AI.

Availability of usable data: Concern that although data exists, it may not be in a readily usable form. Almost 33% of respondents expressed this concern, and felt that extensive data management, storage, and processing systems need to
be in place to support AI use.

AI scalability: Concern that scaling an AI application, even for the same use case (to multiple components, vehicles, sites, etc.), is difficult, and may result in limited or localized adoption. Although only 26% of respondents had this concern, a high majority agreed that scalability should be an important consideration from the exploration stage itself.

Note: The level of concern for these factors was analyzed using hypothesis testing of the industry survey responses, as shown below. For each factor, a low level of concern was initially assumed as the null hypothesis (H0, blue-shaded region). This hypothesis was then tested using the survey responses. In simple terms, the null hypothesis was rejected for factors that fell within the left-most 5% of the curve (H1, red-shaded region, most negative), meaning it can be concluded that these are the factors of highest concern. Full details of this statistical approach can be found in Kini (2024).

Hypothesis testing of survey responses indicating AI Integration, AI Explainability, AI Maturity, and Budget Availability to be the factors of greatest concern (Kini, 2024)

These factors present real and daunting challenges, and have slowed AI adoption for many companies. Nevertheless, some industry leaders have taken decisive steps to successfully overcome these challenges, showing that they are not insurmountable. Audi, Volkswagen, and Toyota offer valuable examples of how AI adoption can be successfully accelerated, whether through in-house or collaborative development, strategic partnerships, or tailored AI solutions.

Lessons from industry leaders

Audi developed FelGAN, an AI-based software for rim design (Audi MediaCenter, 2022). The word FelGAN is derived from "Felge" (German for rim) and GAN, the acronym for Generative Adversarial Network. FelGAN, trained on historical rim design data, generates multiple realistic configurations efficiently, adapting to new design requirements.
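As an aside on the hypothesis-testing note above: the general shape of such a one-sided test on survey proportions can be sketched in a few lines of Python. This is an illustration only, not the exact procedure or data from Kini (2024); the respondent counts and the 0.5 "low concern" baseline are hypothetical:

```python
from math import comb

def binom_pvalue_upper(k: int, n: int, p0: float = 0.5) -> float:
    """One-sided exact binomial p-value: P(X >= k) under H0: p = p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical survey: 40 of 60 respondents rate a factor a major concern.
# H0 assumes low concern (at most a 50/50 split); reject at the 5% level.
p = binom_pvalue_upper(40, 60, 0.5)
print(f"p-value = {p:.4f}", "-> factor of highest concern" if p < 0.05 else "-> not significant")
```

With 40 of 60 respondents concerned, the p-value falls well inside the 5% rejection region, which mirrors how the study flags factors of highest concern.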
In the future, this system may be scaled to serve as a platform to be used by designers in other divisions of the company.

Key takeaways: By developing the capability in-house, Audi has had full control over the functionality (addressing maturity and explainability concerns) and integration of the software to meet their specific needs. Such an approach also eliminates concerns about data provenance and integrity.

Volkswagen recently announced the formation of an AI company named AI Labs (Volkswagen Group, 2024). This company is expected to serve as a global competence center to incubate new ideas for the VW Group and enable easier collaboration with tech companies and partners.

Key takeaways: This setup gives VW greater control over AI development, ensuring the scalability and integration factors are scrutinized upfront, whether solutions are built in-house or in collaboration with partners.

At Toyota, the use of publicly available generative AI tools by designers had not been very effective. These tools would help generate great designs, but the designs would not conform to the engineering constraints that are essential for the performance and safety of vehicles. The Toyota Research Institute (TRI) built a system in-house that allows these constraints to be included during the generative design process itself (Greenwood, 2023). The example they share concerns the aerodynamic drag engineering constraint, a critical design factor for energy efficiency that the engineering team uses simulations to optimize. This allows designers to be creative with ideas, while working within the constraints that the engineering team will eventually need to impose anyway. This leads to fewer iterations between design and engineering teams (and avoids wasted simulation hours), resulting in faster and more efficient product design.

Key takeaways: TRI's success stemmed from its deep understanding of Toyota's design process and its ability to unify multiple departments under a cohesive AI strategy, ensuring seamless AI integration.
This would not have been achievable using an off-the-shelf solution developed independently by a simulation/AI vendor.

These companies aren't just throwing AI into their workflows and hoping for the best; they are actively shaping how AI fits into their specific needs. Some build AI in-house for more control, while others collaborate with AI vendors on customized solutions. The key takeaway? There is no one-size-fits-all approach, but it is crucial that automotive companies take an active role in both AI strategy and development.

How AI/simulation vendors and automotive companies can bridge the gap

It is understandable that AI/simulation vendors would like to develop and offer pre-made AI-enabled solutions, as selling the same software solution to multiple customers optimizes costs and generates more revenue. But such off-the-shelf offerings may see only limited success. AI/simulation vendors can deliver far greater value by pursuing a more collaborative approach with automotive companies.

Automotive companies should be open to such collaboration (unless they have all the resources to develop and maintain AI solutions themselves). They should not see AI as an experiment, but as a well-planned strategic investment.

The tables below highlight the approaches automotive companies and AI/simulation vendors could take to address the factors that slow down AI adoption. The specific factors that each action/approach helps address are also marked.

Suggested approach for automotive companies to address factors of concern

Suggested approach for AI and simulation vendors to address factors of concern

*AI and simulation solutions could be offered by the same vendor or by different vendors.

Conclusion

AI has the potential to revolutionize automotive simulations, but adoption in production remains slow due to concerns related to integration complexity, AI explainability and maturity, budget/funding, and scalability, among others.
The stakeholders involved (AI/simulation vendors, automotive companies, technology/services partners) will need to take decisive steps to address these concerns and shortcomings for the industry to be able to capitalize on this transformation.

References

Audi MediaCenter (2022, December 12). Reinventing the wheel? FelGAN inspires new rim designs with AI. https://www.audi-mediacenter.com/en/press-releases/reinventing-the-wheel-felgan-inspires-new-rim-designs-with-ai-15097

Greenwood, M. (2023, September 27). Toyota's new GenAI tool is transforming vehicle design. Engineering.com. https://www.engineering.com/story/toyotas-new-genai-tool-is-transforming-vehicle-design

Harvard Business Review (2021, September 8). How simulation can accelerate your digital transformation. https://hbr.org/sponsored/2021/09/how-simulation-can-accelerate-your-digital-transformation

Kini, S. (2024). An evaluation of the factors impacting adoption of Artificial Intelligence technology in high-fidelity engineering simulations for automotive design. Conducted as part of the Executive MBA program at S P Jain School of Global Management.

Ragani, A. F., Stein, J. P., Keen, R., & Symington, I. (2023, June 21). Unveiling the next frontier of engineering simulation. McKinsey & Company. https://www.mckinsey.com/capabilities/operations/our-insights/unveiling-the-next-frontier-of-engineering-simulation

Tornatzky, L. G., Fleischer, M., & Chakrabarti, A. K. (1990). The processes of technological innovation. Lexington, MA: Lexington Books.

Volkswagen Group (2024, January 31). Volkswagen Group establishes artificial intelligence company. https://www.volkswagen-group.com/en/press-releases/volkswagen-group-establishes-artificial-intelligence-company-18105

AI in automotive design simulations: breaking the barriers to adoption was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • UX or PX, the problem with research, semantic tokens, 5 tips for vibe coders
    uxdesign.cc
    Weekly curated resources for designers, thinkers, and makers.

The UX design community erupted when Duolingo's head of design declared that "product experience" (PX) is their new name. Normally, the impact is not really felt when it's a branding exercise, but this one felt personal due to the rejection of the term "user experience."

UX or PX? Why naming matters. By Darren Yeo

Offload tedious research tasks and go straight to insights [Sponsored] Speed up your UX workflow by over 80% with Marvin's AI-powered research repository. Say goodbye to bottlenecks and auto-generate actionable, shareable insights using Deep Research Analysis and ChatGPT-like prompts. Get started for free today.

Editor picks

The real problem with research: The power of research is overrated, while other aspects are overlooked. By Maxim Mestovsky

Is this the death or dawn of human creativity? Celebrating AI as a creativity companion. By Ian Batterbee

We built UX. We broke UX. And now we have to fix it. Designing with strategic clarity, not just surface polish. By Dan Maccarone

The UX Collective is an independent design publication that elevates unheard design voices and helps designers think more critically about their work.

Generative fonts

Make me think

Stamina is a quiet advantage: While stamina is the ability to sustain focused effort despite pain or discomfort, you should also think of it as the ability to stay true to your values and commitments, to hold fidelity to a worthy purpose, especially when it's hard to do so.

What killed innovation? I entered the data visualization field in 2012, when D3.js had just come out and interactive graphics were going through a digital Renaissance. By the time I was fully steeped in the field in 2016, it felt like a new, experimental project was coming out every week, each one pushing the boundaries of how we think about, visualize, and communicate data.
But fast forward a decade, and it feels like I'm seeing the same polished but predictable formats over and over.

Leadership over measureship: If you've ever had an app randomly interrupt your day to ask if you love it... If you've ever had a bossware-infected laptop demand you click your keyboard to prove you are working... Congratulations. You've experienced "measureship," the management philosophy du jour that's replacing leadership across the economy.

Little gems this week

Do chatbots really need faces? By Riya

The cognitive cost of convenience. By Michael F. Buckley

Whose design process? By Caio Barrocal

Tools and resources

5 tips for vibe coders (from a software engineer): How to keep you and your users safe. By Michael J. Fordham

How to deal with biases around testing? It is time to shed light on the testing craft. By Julia Kocbek

Semantic tokens and responsive scaling: Efficient, consistent, and flexible typography for digital platforms. By Oluwatosin Obalana

Support the newsletter

If you find our content helpful, here's how you can support us:

Check out this week's sponsor to support their work too

Forward this email to a friend and invite them to subscribe

Sponsor an edition

UX or PX, the problem with research, semantic tokens, 5 tips for vibe coders was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • No research is (often) better than some research
    uxdesign.cc
    Not anyone can just do research, but they can learn to. Continue reading on UX Collective
  • SXSW 2025: AI doesn't give a damn about UI; Futurists are the hottest job on the market
    uxdesign.cc
    Access the most impactful trends and unlisted video sessions from the iconic Austin conference. Continue reading on UX Collective
  • Don't just add AI on top: Rethinking mobile email UX for all workflows
    uxdesign.cc
    We redesigned mobile email replies with a UI for inserting local responses while reading, optionally with AI. Image by the author.

Many UI designers are currently tasked with extending existing interfaces with AI features, such as copilots and chatbots. On desktops, these are often added in sidebars, but on mobile devices, where screen space is limited, AI is typically layered on top of existing UIs as popups.

In this article, we explore the drawbacks of this design pattern and discuss a better alternative. Drawing on our research in Human-Computer Interaction, we introduce a redesign inspired by microtasking. We use this to improve the experience of replying to emails on smartphones.

The problem: AI on top hides the context for using it

Mobile email UIs are a prime example of adding AI on top: the image below shows how several email apps added AI with a popup or overlay.

A common UI design pattern for AI replies in current mobile email apps: AI is integrated as a popup (A), on top of an empty draft view (B). Users enter a prompt (C), check the generated reply (D), and then accept it with a button (E). Image by the author, with screenshots from Google's website, Superhuman's YouTube channel, and The Copilot Connections YouTube channel.

Unfortunately, this design wastes valuable screen space, because the AI popup hides the incoming email or existing draft. This forces users to remember key details from the original email while crafting an AI prompt and reviewing the generated response. This cognitive burden becomes especially problematic for longer emails or in typical mobile scenarios where users are frequently interrupted.

Leaving UI space unused, while requiring users to recall information, indicates that current AI replies are not integrated well.

The solution strategy: Microtasking keeps context

To address these issues, we applied principles from microtasking, leading to a new UI concept we call Content-Driven Local Response (CDLR).
It restructures the UI for email replies into two steps:

The redesign: (1) While reading the email, tap any sentence to insert a local reply, optionally with AI sentence suggestions. (2) On a usual draft UI, connect the local replies into one coherent email, or (3) let AI do just that. Image by the author.

Step 1, local response: Instead of switching to a separate AI popup, users can insert responses directly within the email as they read. In a sense, the email text itself becomes the interface: tapping a sentence inserts a response card where users can jot down a reply, notes, or keywords. This tap also signals the AI to generate sentence suggestions, which users can accept with a tap or ignore.

Step 2, connecting the local responses: Once users finish reading, they transition to the email draft screen, where all local responses are collected. Here, they can refine these snippets into a full reply manually, or let AI generate a coherent message from them.

Result: Flexible workflows and user control

We evaluated this design in a user study with 126 participants, comparing it to two baselines:

Manual replies: Users wrote responses without AI.

AI-generated replies: This UI mimicked the industry standard of entering a prompt for a fully AI-generated response.

Our findings show that Content-Driven Local Response (CDLR) provides a flexible middle ground between these extremes. By allowing users to draft local responses while reading, the design allowed users to engage more with the incoming email and their own thoughts on it. At the same time, AI support helped reduce typing effort and errors.

As our study (N=126) showed, our design (center) involves users more, also while reading. Image by the author.

Moreover, participants also valued the added control over the final message. While full AI generation was faster, the new design also covers fast workflows: when preferring a quicker workflow, users can skip the local response step and jump straight to AI drafting.
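The two-step structure described above boils down to a small data model: reply snippets anchored to sentences of the incoming email, collected on the draft screen. The following sketch is purely illustrative (the class and field names are hypothetical, not from the study's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class LocalResponse:
    """A reply snippet anchored to one sentence of the incoming email (Step 1)."""
    anchor_sentence: str  # the sentence the user tapped while reading
    note: str             # the jotted reply, keywords, or an accepted AI suggestion

@dataclass
class Draft:
    """The draft screen collects all local responses (Step 2)."""
    responses: list = field(default_factory=list)

    def connect_manually(self) -> str:
        # Naive fallback: join the snippets in reading order;
        # in the UI, the user refines them, or AI smooths them into one message.
        return " ".join(r.note for r in self.responses)

draft = Draft([
    LocalResponse("Can you meet on Tuesday?", "Tuesday works for me."),
    LocalResponse("Please send the slides.", "Slides attached."),
])
print(draft.connect_manually())  # Tuesday works for me. Slides attached.
```

Keeping the anchor sentence alongside each note is what preserves context: both the AI (when generating suggestions or the final message) and the user (when reviewing the draft) can always see which part of the email a snippet responds to.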
In this way, the new UI supports very flexible workflows. This design empowers users to decide when and how to involve AI, for example, to balance speed and control.

Design lessons learned

A core principle of microtasking is keeping relevant context visible, allowing users to recognise information rather than having to recall it.

Initially, we applied this idea to help users see the incoming email while prompting AI and reviewing its output. Through our user-centred design process, we refined it further: the ability to insert local responses while reading is useful on its own, even without AI.

Put as a design insight for AI integration: rather than adding AI on top of existing UIs, we should design for users' workflows, whether they involve AI or not. Such flexible UIs empower users to decide themselves when and how to involve AI.

So how can we redesign for these flexible workflows? Here's our recipe:

Identify micro-decisions that users make in the workflow that currently lack explicit interaction. In email replies, this meant recognizing that users naturally decide which parts of an email to respond to while reading.

Enable users to express these moments in interaction. In our case, we allowed users to tap any sentence while reading.

Integrate AI specifically and optionally at these decision moments. For example, in our design, tapping a sentence also triggered (local) suggestions.

Resources

Preprint of the paper on arXiv: https://arxiv.org/abs/2502.06430

Video showing the UI in action: https://youtu.be/wtTDgU6559I

Don't just add AI on top: Rethinking mobile email UX for all workflows was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Individualism, pop bands & workplace collaboration
    uxdesign.cc
How many of you remember the Spice Girls or the Backstreet Boys? The late 90s and early 2000s were dominated by iconic girl and boy bands. We couldn't get enough of them. I remember those teenage years, locked in my bedroom, belting out their tunes and perfecting their dance moves (or at least trying to). But where have all the pop bands gone?

90s pop bands collage. Source

The number of bands in the Top 100 chart has shrunk since the last decade, and most of the top artists in the chart are solo projects. This is a reflection of society and the social media world we currently live in, which focuses attention on individuals, indirectly creating a stronger individualistic culture. The Guardian observes that younger generations are less interested in being part of a band, where collaboration is vital, and more attracted to the idea of going solo. It's so much easier to create music in your own bedroom and upload it to social media. This mirrors the growing focus on standing out, personal brand, and success: a cultural shift that elevates individualism and forgets about togetherness.

The American Dream

In modern society, individualism has long been celebrated as a core value. It drives progress, innovation, and personal freedom. When thinking about America, it's impossible not to think about the American Dream. This individualistic approach separates us, places us against each other, and doesn't nurture empathy for one another. It's interesting to read about heroic individualism in Brad Stulberg's The Practice of Groundedness, as it captures a mindset that promotes the idea that personal achievement is paramount, often at the expense of collaboration.
While it can drive individuals to achieve more, it can also leave them feeling isolated and disconnected from their teams, a trend increasingly reflected in the workplace. So, we can't deny that the fabric of modern society shapes not only our personal lives but also how we bring ourselves into the workplace, and this focus on individualism has significant consequences for how we work together. How we define projects, work toward goals, and build relationships with colleagues is profoundly influenced by the world outside our office walls.

Whether we realize it or not, our social and cultural contexts inevitably seep into our professional lives. Some may strive to maintain a strict separation between their personal and professional identities, while others fully integrate the two. Either way, our society significantly shapes our thoughts and, consequently, our work.

At first glance, this may seem philosophical or even abstract when applied to the workplace, but if you strip away the layers of bureaucracy, hierarchy, and formalities, what remains is a group of people: individuals coming together to collaborate and achieve something. Who we are, how we think, and the way we interpret the world around us deeply impact how we approach a project, how we work toward deadlines, and how we build meaningful connections with our coworkers.

The importance of collaboration for designers

Cartoon from Tom Fishburne

As designers, we can't achieve our goals by ourselves.
If we don't want to get bothered by product or sales or marketing, we need at least to have a great working relationship with engineers, as they are the ones pushing the designs out into the world. Paola Antonelli, talking about her student experience at the Politecnico di Milano on the Design Better Podcast, says:

"Design can never stand by itself, right? It always has to have other crutches. I went to the Polytechnic of Milan, so the crutch there was engineering. Not a bad crutch, but it always is with something. So it's either within engineering or it's within an art school, and in either case you have some big shortcomings, which is interesting."

Individualism can create challenges for organisations that rely on collaboration and shared goals, and that's every organisation! A balance must be struck between valuing personal achievements and promoting teamwork. Workplaces thrive when they foster an environment where individual talents are valued but contribute to a collective vision. Strong communication, collaboration, and a shared sense of purpose can counterbalance the downsides of individualism, leading to more engaged and connected teams. If we want to have great products, we don't need big egos, but great teams.

Melissa Tan explains the importance of fostering a team-first culture where collective success is valued more than individual achievements. Aligning on shared goals helps improve efficiency and impact. One key aspect of collaboration is establishing solid interpersonal relationships, which builds trust and promotes a collective mindset across the team. I love the quote about growth being seen as a cross-functional effort, highlighting the importance of great collaboration:

"[...] just infusing a culture of thinking like an owner. This infusing of the culture, I think it comes out in a few ways. One is, and this is really common in growth.
Growth is so cross-functional that you often will end up feeling like you're blocked by other teams."

How can we make sure that the competitive nature of climbing the organisational ladder doesn't hinder teamwork? If the success of our products depends on how well teams collaborate, how can we prioritise and encourage this collaboration?

Collaboration is not a workshop, it's a mindset

Let's get this straight: collaboration isn't about doing a brainstorming exercise and forgetting that it ever happened. I know workshops sometimes feel like a never-ending episode of a typical reality show (my go-to is RuPaul's Drag Race). We are all in there to make a good show, but at the same time, we need to look better than the others to win.

RuPaul's Drag Race UK. Source: QNews

We have all been in awkward workshops where everyone seems determined to demonstrate they are the next Steve Jobs. But let's be honest: we're all just trying to avoid being the next person laid off in a cost-cutting measure nobody understands. Collaboration doesn't happen in any of those scenarios. It's a much longer process that needs to be consciously incorporated into everything we do as a team. It doesn't happen in a meeting; it's something that we build over time, the more we work together. When teams truly collaborate and use everyone's expertise to solve problems, it's visible, as it allows us to create something that actually works.

The Unholy Trinity: UX, Product, and Engineering

How many times have you heard of the three-legged stool? The one where engineering, product, and design support a seat, meaning they are equally important to achieving the goal of a stool.

Source

We designers are always dreamers who think every problem can be solved with a sleek interface and a perfectly placed button. On the other hand, we have product managers who (bless them!)
try to keep this stool standing while juggling odd stakeholder expectations. And then we have the engineers, the wizards, who transform our gorgeous mockups into reality. They also make more artistic interpretations of the designs and use their artistic licence to build something completely different. Unfortunately, each leg often tries to convince the others that it is the most important part of the chair, and organisations end up oddly shaped.

Source

But here's the thing: magic happens when we work together. And by magic, I mean a product that doesn't make users want to throw their devices out the window!

The Collaboration Conundrum

Collaboration isn't just a fancy word for yet another meeting. It's a mindset that we can all work towards, and it can set teams apart by producing better products. I like five main principles for creating great teams that work well together.

The Importance of Inclusive Design

Inclusive design is not merely a checklist; it involves creating diverse teams from the outset. This approach ensures that various perspectives are considered, ultimately leading to more innovative solutions. For managers, here are a few questions to think about when it comes to hiring and creating a team:

Have a fair hiring process: Evaluate your hiring stages critically. Are they fair and inclusive? Consider how you assess candidates and whether you have a matrix to evaluate them based on the team's needs.

Target underrepresented groups: Actively seek to hire from underrepresented backgrounds. This means posting job ads on platforms that cater to marginalised communities, such as POC tech spaces or women-in-tech forums.

Read more about it in my previous article.

Building respectful relationships

Respect is fundamental in any team dynamic. Alan Alda humorously noted, "The more empathy I have, the less annoying other people are." While empathy is often overused in design discussions, respect for each team member's expertise is crucial.
Here's how you can foster respect:

Understand roles: Take the initiative to learn about your colleagues' roles (engineers, product managers, designers) and clarify responsibilities.

Encourage open dialogue: Create an environment where everyone feels comfortable sharing their insights and expertise.

Leveraging insights for better collaboration

Insights drive effective collaboration. Jim Barksdale famously said, "If we have data, let's look at data. If all we have are opinions, let's go with mine." To avoid decision-making based solely on individual opinions:

Conduct research: Even without a large budget, you can gather valuable insights through simple methods like client interviews or surveys.

Involve everyone: Encourage team members from various disciplines to participate in research efforts. This cross-functional involvement helps everyone understand customer needs better.

I wrote more about starting small with research.

Aligning design with business goals

Understanding the business context is vital for applying your design or engineering skills effectively. Here's how to align your work with business objectives:

Frame decisions through a business lens: When proposing design changes, articulate how these decisions can help achieve business goals, such as increasing revenue or expanding the audience.

Focus on value creation: Rather than getting bogged down in aesthetics alone, consider how your work contributes to the overall success of the business.

More about how to sell design through a business lens here.

Fostering enjoyment and celebration

Creating a joyful workplace is essential for team morale. Instead of viewing work as merely a series of tasks, celebrate achievements together:

Recognise milestones: Take time to acknowledge accomplishments as a team.
This practice not only boosts morale but also fosters a sense of belonging.

Encourage fun: While forced fun can feel insincere, genuine celebrations of success can enhance team spirit.

More than workshops

Source

So, collaboration isn't just a brainstorming session with sticky notes or an endless round of meetings that could have been emails. Genuine collaboration is messy, unglamorous, and often requires more patience than we like to admit. But when done right, it's magic: the type that happens when diverse minds, mutual respect, and a shared goal collide. It transforms clunky ideas into products people enjoy using.

Collaboration is about building inclusive teams, respecting each other's expertise, and grounding decisions in insights, not egos. It's about aligning with business goals while keeping a sense of joy alive. And it's a practice, a mindset, we must nurture every day. So, let's stop treating collaboration like a checkbox and start weaving it into every moment of how we work. Because the best products, and the best teams, aren't built by lone geniuses. They're built by people who roll up their sleeves, respect the process, and, maybe, crack open something fizzy to celebrate the wins along the way.

Originally published at https://raffdimeo.com on October 6, 2024.

"Individualism, pop bands & workplace collaboration" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Duolingos 6-step reactivation experience
    uxdesign.cc
It's not just about getting users back; it's about what happens next. Continue reading on UX Collective
  • Mastering typography in design systems with semantic tokens and responsive scaling
    uxdesign.cc
Creating efficient, consistent, and flexible typography for digital platforms using modern design system principles.

When designers no longer have to reinvent the wheel for every project, the entire process becomes much smoother and more efficient. This is the beauty of a design system. As Weibel explains, beyond just saving time, it's about working more effectively, keeping all the parts consistent, and making sure users have a clear and organized experience [1].

User profile card showing that the consistent use of fonts contributes to a clear and organized user experience.

At the heart of this approach is typography: how we make text look and feel. It's more than just selecting fonts; as Abhishek emphasizes, it's about carefully arranging words so they are easy to read, visually appealing, and engaging to the reader [2]. It sets the tone for the overall design by creating harmony and consistency throughout every element of a digital product.

A strong typographic framework simplifies development and makes products easier to maintain. By integrating typography into a design system, teams can effortlessly scale designs across desktop, tablet, and mobile, ensuring a consistent visual language, a point Vinney also makes [3, 4].

Key typographic principles in design systems

Effective typography within a design system is guided by several key principles, including usability, clarity, and hierarchy.

Usability: Usability in typography refers to how easily users can read and interact with text within a digital product; according to Abhishek, this is a key consideration in design systems [5]. When typography is well designed, users can engage with content smoothly, without distraction.
For example, the Nielsen Norman Group emphasizes that factors such as clear font selection, appropriate sizing, balanced spacing, and strong contrast are crucial for reducing cognitive load and promoting smoother navigation.

A design showcasing clear typography, demonstrating principles of readability and usability in presenting information.

Clarity: Clarity in typography ensures that textual content is communicated effectively and is easily understood. Achieving clarity involves selecting highly legible fonts with distinct letterforms, ensuring characters are easy to differentiate. Additionally, proper line spacing (leading) prevents text from appearing cramped, improving readability and reducing visual strain. By prioritizing clarity, typography improves both user experience and accessibility, making content more engaging and easier to navigate [6].

A side-by-side comparison of legible and illegible typography, illustrating key factors for clarity such as font choice, spacing, and contrast.

Hierarchy: Hierarchy is another critical typographic principle in design systems, playing a vital role in guiding users through information and highlighting key elements within the content. With a clear typographic hierarchy, users can quickly scan content and understand the relative importance of different sections and pieces of information. This visual structure is achieved through deliberate variations in font size, weight, color, and the placement of text elements, a point also made by Oliver in his discussion of font size in web design [7].
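One common way to derive such size variations is a modular scale, where each step multiplies a base size by a fixed ratio. A minimal sketch of the idea; the 16px base and 1.25 ratio are illustrative choices of mine, not values from this article:

```typescript
// Hypothetical modular type scale: each step up multiplies the base size
// by a constant ratio, then rounds to a whole pixel.
function typeScale(basePx: number, ratio: number, steps: number): number[] {
  return Array.from({ length: steps }, (_, i) => Math.round(basePx * ratio ** i));
}

// A 16px base with a 1.25 ("major third") ratio gives five hierarchy levels:
console.log(typeScale(16, 1.25, 5)); // [16, 20, 25, 31, 39]
```

Because every size derives from one base and one ratio, changing either value rescales the whole hierarchy consistently.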
Design systems often employ a defined type scale to ensure a consistent and effective visual hierarchy across a product.

A visual representation of how typographic hierarchy guides the reader's eye to the most important information.

Typography and accessibility in design systems

Designing for accessibility ensures that digital products accommodate users with disabilities, from mild impairments to severe limitations. However, accessibility guidelines don't cover every potential usability issue. For example, if a font choice makes text difficult to read, that's a usability problem rather than a strict accessibility violation. When users struggle with readability, assuming contrast isn't the issue, the challenge often lies in the legibility of the typeface or the clarity of the overall layout [8].

Understanding readability and legibility is key to designing accessible interfaces. Readability refers to how easily words and sentences can be understood, while legibility focuses on the clarity of individual letterforms.
Making intentional choices in these areas can significantly improve the reading experience for all users. To ensure text remains accessible:

Font size: Body text should be at least 16px for comfortable reading, and text below 9pt should be avoided [9].

Color contrast: The Web Content Accessibility Guidelines (WCAG) recommend a minimum contrast ratio of 4.5:1 for small text and 3:1 for large text to support users with low vision.

Screenshot showing an episode card and a contrast checker confirming WCAG compliance.

Spacing and line height: A line height of at least 1.5 times the font size, and adequate spacing between letters and words, improve readability, particularly for individuals with dyslexia or low vision [10].

Text resizing: Users should be able to scale text up to 200% without loss of content or functionality [11].

Screen reader compatibility: Avoid embedding important text within images; instead, use actual text to ensure compatibility with assistive technologies.

Establishing primitive tokens for typography

Primitive tokens lay the groundwork for a cohesive typographic system by defining key properties like font family, weight, size, line height, and letter tracking. These core tokens ensure that typography remains consistent and scalable across diverse platforms and screen sizes, serving as the basis for more detailed semantic tokens.

To begin, a font family token specifies the primary typeface (for example, Satoshi or Helvetica). Next, tokens for font weight differentiate between various levels such as regular, medium, and bold, each represented by a specific value. Instead of using fixed font sizes, the system employs scalable font size tokens that allow text to adapt fluidly across different devices.
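As a rough illustration, primitive tokens can be thought of as a flat dictionary of raw values that semantic tokens later reference. The token names and values below are hypothetical, not this article's actual token set:

```typescript
// Hypothetical primitive typography tokens: raw values only, no usage meaning.
const primitives = {
  "font-family-sans": "Satoshi",
  "font-weight-regular": 400,
  "font-weight-medium": 500,
  "font-weight-bold": 700,
  "font-size-400": 16, // px
  "font-size-700": 48, // px
} as const;

// A semantic token is then just a named pointer into the primitive layer,
// e.g. Heading/XL/Bold/Size resolving to a primitive font size:
const headingXlSize = primitives["font-size-700"];
console.log(headingXlSize); // 48
```

Keeping raw values in one flat layer means a rebrand or a scale change is a single edit to the primitives, with every semantic token picking it up automatically.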
Similarly, line height tokens preserve readability and visual balance, while letter tracking tokens manage the spacing between characters for optimal legibility.

In Figma, you can set up this system by creating a collection called Primitive Type to house all these primitive tokens. Within this collection, organize categories as shown in the image below for a clear structure. Additionally, consider creating a central unit category that standardizes all unit values across the design system. This approach not only ensures consistency but also makes it easier to update and maintain typographic values throughout your projects.

Figma screenshot of a Primitive Type table, illustrating the organization of core typographic tokens.

Naming conventions for typography tokens

Typography token naming conventions are designed to be hierarchical and modular, ensuring that tokens remain clear, scalable, and adaptable across various platforms and screen sizes. Typically, the naming structure follows this format:

[Category] / [Size] / [Style] / [Attribute]

Examples of typography token naming conventions, demonstrating a hierarchical structure for clarity and scalability.

Each component of this structure serves a distinct purpose:

Category (type role): Defines the text's purpose, such as Display, Heading, Body, or Caption.

Size: Indicates the type scale, often using labels like XL, L, M, or S.

Style: Specifies the text styling, such as Regular, Bold, or SemiBold.

Attribute: Identifies the typography property being defined, such as weight, size, line height, or letter spacing.

Understanding semantic typography tokens

Semantic tokens in typography are design tokens that provide meaning and context to the fundamental typographic properties.
They act as an abstraction layer over primitive tokens (which store raw values like font names or sizes), defining how and where these properties should be used within the design system [12].

A visual representation of semantic typography tokens in a user interface.

Why use semantic tokens in typography?

Using semantic tokens offers several key advantages:

Consistency across platforms: They ensure that typography remains consistent across different platforms like web, tablet, and mobile, providing a unified user experience.

Effortless updates: Updating typographic styles becomes seamless, as changes to a semantic token automatically apply to all elements using that token. For instance, modifying the font size for all bold, large headings only requires updating the Heading/L/Bold/Size semantic token.

Flexibility and scalability: They make theming and customization easy, allowing typography to adapt to different brands or contexts without changing the core values.

Improved collaboration: By establishing a shared language between designers and developers, semantic tokens streamline the design-to-development handoff process.

Key components of a semantic typography token

A semantic typography token encapsulates several essential properties that define its visual appearance and usage:

Screen variants: Although not always explicitly included in the token name, tokens can be designed with different values for various devices (e.g., web, tablet, mobile). This ensures responsive typography that adapts seamlessly to different viewport sizes.

Illustration of responsive design across laptop, tablet, and mobile phone. Freepik

Font name (typeface): This specifies the font family to be used, ensuring consistency throughout the design system.
For instance, a token might define INTER or OJUJU as the chosen typeface.

Examples of different typefaces, such as INTER, OJUJU, and others, showcasing the options for a font name token in a design system.

Size: The font size is defined using units such as pixels (px), rems, or ems. Relative units like rems are often preferred for accessibility, allowing users to adjust text size according to their needs.

Examples of extra-large, large, medium, and small heading sizes, demonstrating different font size options.

Letter spacing (tracking): This property sets the horizontal space between characters, typically expressed in ems, percentages (%), or pixels (px). Proper letter spacing can significantly enhance readability, especially for headings or text in all caps.

Four examples of the word "tracking" with varying degrees of letter spacing, from tight to wide.

Line height (leading): Line height, or leading, is the breathing room for your text; it makes your content both easy to read and visually appealing. In our typography system, we use a smart ratio-based approach to determine line height. Depending on the type scale, we adjust the ratio: a tighter 1.14 ratio works well for larger text like displays and prominent headings, while a 1.5 ratio is ideal for smaller text like body copy and captions. Once calculated, these line heights are rounded to the nearest 4px, which helps maintain a structured layout. For example:

For a large heading with a 48px font at a 1.14 ratio, the line height comes out to roughly 56px.

For body text with a 16px font using a 1.5 ratio, the line height is 24px.

This flexible, ratio-driven approach lets us adjust line height based on text size and purpose, ensuring optimal readability and a balanced visual experience on all devices.

Examples of text with line height (leading), demonstrating how vertical spacing between lines affects readability.

Weight: This defines the boldness of the font, with typical options including Regular, Medium, Bold, and SemiBold.
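The ratio-and-rounding rule for line height described above is easy to express in code. A minimal sketch, where the function name is mine:

```typescript
// Line height = font size × ratio, rounded to the nearest 4px grid step,
// per the ratio-based approach described above.
function lineHeight(fontSizePx: number, ratio: number): number {
  return Math.round((fontSizePx * ratio) / 4) * 4;
}

console.log(lineHeight(48, 1.14)); // 56  (48 × 1.14 = 54.72 → nearest 4px)
console.log(lineHeight(16, 1.5));  // 24  (16 × 1.5 = 24, already on the grid)
```

Snapping to a 4px grid is what keeps adjacent text blocks aligned to the same vertical rhythm regardless of their font size.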
Font weight helps establish visual hierarchy and emphasizes important text elements.

Examples of Light, Medium, and Bold font weights.

Establishing a typographic hierarchy

By defining distinct type roles for different content levels, designers enable users to quickly understand the importance of, and relationships between, various pieces of information. To achieve a clear hierarchy, start by categorizing your text into the following groups:

Display: Large, attention-grabbing text used for key visuals or standout messaging.

Headings: Primary titles that introduce major sections.

Subheadings: Secondary titles that further break down content.

Body: Main text for paragraphs and detailed information.

Captions: Smaller text that supports images or supplementary content.

Labels: Brief descriptors for form fields, buttons, and icons.

A visual representation of how different content levels (Display, Headings, Subheadings, Body, Captions, Labels) are styled using varying font weights and sizes to create a clear typographic hierarchy and guide the reader's eye.

Applying semantic labels for hierarchical text styles

Once you've defined your type roles (Display, Heading, Subheading, Body, Caption, Label), the next step is to apply type scales using semantic labels such as XL, L, M, and S. These labels indicate different size variations within each category, allowing more granular control over your typographic hierarchy. By assigning these size-based semantic labels to your text elements, you ensure that each component reflects its relative importance and role within the overall layout, while also offering flexibility in visual emphasis.

For instance, within the Display category, you might have:

Display-XL: For the most impactful, attention-grabbing text.
Think of the main title on your page.

Display-L: For slightly less prominent display text.

Similarly, within the Heading category, you could have:

Heading-L: For primary section titles.
Heading-M: For slightly less important headings.

And for Body text:

Body-M: For your standard paragraph text.
Body-S: For smaller body text variations.

This systematic approach not only reinforces consistency across your design by using a defined set of sizes but also streamlines the implementation process for designers and developers.

Establishing semantic typography tokens in Figma

Here's a detailed process for setting up semantic typography tokens in Figma:

1. Create the semantic type category

Start by defining a semantic type category to house all typography-related variables. This category should include predefined responsive breakpoints for desktop, tablet, and mobile as modes, ensuring typography scales seamlessly across different devices.

Screenshot of a Figma menu with the Semantic Type category created and highlighted.

2. Define the font name variable

Within the semantic type category, add a string variable for the font family. This variable should reference your primitive token for the font name and should be applied across all breakpoints.

Screenshot showing a Family semantic type variable referencing the Satoshi primitive token for Desktop, Tablet, and Mobile.

3. Organize by type role

Define a type role group. This group categorizes text styles based on their function, such as:

Display (for large, attention-grabbing text)
Heading (for section titles)
Subheading (for secondary headings)
Body (for main content text)
Caption & Label (for small text elements like footnotes and form labels)

Organizing semantic typography tokens by their type role.

4. Establish type scales within each type role

Inside each type role, create a sub-group for different type scales, such as:

XL (Extra Large)
L (Large)
M (Medium)
S (Small)

Establishing type scales within different type roles in a design system.

These sub-groups set up a clear type scale hierarchy, providing a flexible and consistent framework for typography usage within your design system.

5. Set up variables for typographic properties

For each type scale (XL, L, M, S), define the following key typographic variables using your primitive tokens:

Font weight: Assign predefined values such as Regular, Medium, Bold.
Font size: Set scalable font sizes appropriate for each type role.
Line height: Define line heights optimized for readability based on font size.
Letter tracking (spacing): Adjust spacing for improved legibility based on type scale and case.

Setting up typographic variables for semantic tokens.

By referencing primitive tokens, these variables ensure consistency and flexibility while maintaining design system integrity. After setting up your semantic typography tokens in Figma and defining the core typographic variables, the next step is to bring these elements to life by applying them to your text styles.

Applying semantic tokens to typography styles

This guide outlines how to apply semantic tokens to your typography styles, using Heading XL as an example. The process ensures that your text styles are both consistent and adaptable across the breakpoints (Web, Tablet, Mobile).

Semantic token definition for Heading XL

Begin by creating a type scale reference that outlines the various type scales you need; we will use XL. (Note: the initial sizes and values can be arbitrary, as they will be updated later.)

Heading XL type scale for the web.

Update the names of each text style to match the semantic token convention defined in Figma. For example, rename the style to Heading/XL/BOLD.
This semantic naming links the text style to its function (e.g., headings, subheadings), scale, and font weight.

Renaming based on semantic tokens.

Using a plugin like Styler, we can generate Figma text styles based on our renamed elements.

Figma screenshot showing the Styler plugin being used to generate text styles from renamed typography layers.

By selecting each style and running the plugin, we create the initial text styles for Heading XL.

An image showing the result of using a plugin to generate text styles.

Right now, the Heading XL style doesn't have the correct font sizes, weights, tracking, or line heights. To fix this, I rearranged the order to prioritize the Bold variant, then clicked on Edit Style to update the settings.

Figma screenshot showing the Edit text style modal for the Bold variant of Heading XL, where font properties like family, weight, size, and line height are being updated.

Currently, the values for font weight, size, line height, and letter spacing are arbitrary. To standardize them, we apply the predefined structure. First, update the font family to match the semantic type token. Click on the variable icon (the settings icon) to access the field where you can make this change.

Screenshot showing the font family dropdown menu in the Edit text style modal, with Satoshi selected to match the semantic type token.

Next, we update the font weight. An efficient method is to copy the text style name from the layers panel and paste it into the token search field.
This approach eliminates the need to manually scan through tokens, streamlining the process.

Figma screenshot showing the search for Heading/XL/Bold and the Edit text style modal with the Weight set to Bold.

After selecting the Bold weight, you can apply the same process to adjust the font size, line height, and letter spacing by choosing their corresponding semantic tokens.

Figma screenshot showing the Edit text style modal for Heading XL Bold, with settings for font family (token), size, line height, and letter spacing.

Once the Bold variant has been updated, we apply the same process to the remaining font weights, Medium and Regular. Initially, we set arbitrary values, but after applying the semantic tokens, the values are adjusted accurately. The updated styles now reflect our intended design.

A visual summary of the updated Heading XL typography style for the web.

Next, we need to adapt the typography style for our breakpoints: web, tablet, and mobile. To achieve this, we apply variable modes to the typography style. Since we've already established breakpoints using semantic tokens and created distinct modes for each, this approach allows for seamless switching and consistent typography across all devices.

Figma screenshot of the breakpoint selection dropdown (Desktop, Tablet, Mobile) for the Heading XL text style, showing variable mode.

With the new settings in place, you can now switch between desktop, tablet, and mobile modes. Next, duplicate the typography scale twice, one copy for Tablet and one for Mobile, and apply the variable mode for each. The image below illustrates this process.

Heading XL typography style adapted for web, tablet, and mobile, showing the varying properties for each breakpoint.

Now, for Heading XL, I can easily switch between Web, Tablet, and Mobile modes. The benefit of this approach is that it consolidates the Heading XL style into a single category within the text styles, rather than creating separate categories for each breakpoint.
This streamlined method makes toggling between breakpoints quick and efficient.

[Image: Text styles panel showing Heading/XL as a single category with Bold, Medium, and Regular weight variations]

A strong typography system ensures consistency, scalability, and accessibility within a design system. By leveraging clear tokens, teams create a flexible foundation that adapts seamlessly to different screens and user needs. Prioritizing readability, hierarchy, and responsiveness not only enhances the user experience but also strengthens collaboration between design and development, making typography a cornerstone of both usability and visual language.

Glossary

Breakpoint: A specific screen width at which the layout and styling of a website or application adapt to provide an optimal viewing experience across different devices (e.g., desktop, tablet, mobile).

Design System: A set of standards, reusable components, and guidelines that help teams design and develop digital products in a consistent and efficient way.

Font Family: A group of related typefaces sharing similar design characteristics.

Font Size: The size of the text, typically measured in pixels (px), ems, or rems.

Font Weight: The degree of thickness or boldness of a typeface (e.g., Regular, Medium, Bold).

Hierarchy (Typographic): The visual organization of text on a page to guide the reader and indicate the importance of different content sections.

Letter Spacing (Tracking): The horizontal space between characters in a line of text.

Line Height (Leading): The vertical space between lines of text.

Primitive Tokens: Fundamental design tokens that store raw values.

Readability: How easily words and sentences in a block of text can be understood.

Relative Units (ems, rems): Units of measurement in web design that are relative to other values, allowing for scalability and accessibility.

Scalability: The ability of a design system or its components to adapt and grow effectively as the product evolves and expands across different platforms and screen sizes.

Semantic Tokens: Design tokens that provide meaning and context to typographic properties.

Text Styles: Predefined sets of typographic attributes (font family, size, weight, line height, letter spacing) that can be applied to text elements to ensure consistency.

Type Scale: A defined set of font sizes used throughout a design system to establish visual hierarchy and consistency.

Type Role: A categorization of text styles based on their function or purpose within the content (e.g., Display, Heading, Body, Caption, Label).

Usability (in Typography): How easily users can read and interact with text within a digital product.

Variable Mode: A feature in design tools (like Figma) that allows for defining different sets of values for design tokens.

WCAG (Web Content Accessibility Guidelines): A set of international standards and recommendations for making web content more accessible to people with disabilities.

Mastering typography in design systems with semantic tokens and responsive scaling was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Can we actually design for sustainability?
    uxdesign.cc
4 strategies to improve a product's eco-friendliness without losing usability.

I have been bummed by the decision to withdraw the United States from the Paris Agreement. Climate change is real (and scary), and this treaty committed countries to take action to reduce greenhouse gas emissions. As an individual, it feels like I can't do anything to stop the harm being done to the planet.

When you think of greenhouse gas emissions, you probably think of harsh manufacturing plants and how reliant we are on plastics (especially in the United States).

But do you also think of internet emissions?

I've always thought the digitization era was good for the environment, with less physical waste and better resource management. But a lot goes into powering the internet and our digital products. Internet emissions encompass the data centers, servers, devices, and power sources needed to load web pages, store documents, or even check emails.

I didn't know how much the UX design industry contributes to internet emissions until I read Tom Greenwood's Sustainable Web Design.

UX designers want to create usable, efficient, and visually appealing experiences so users continue to use our product more and more. But in reality, we're just exacerbating the contributions to internet emissions. Our goals are not only to help users complete a task, but to get them to click more, consume more content, and spend more time in the product.

This "more, more, and more" mindset is causing the average page weight to increase annually. According to the Web Almanac, the median page weight on desktop was 1,080 KB in 2014 and increased to 2,170 KB in 2022, an increase of over 100% in 8 years. A heavier page weight requires more data transfer, which in turn requires more energy to load.

[Image: Web Almanac's median page weight over time, 2014 to 2022]

We have a dilemma: do we continue to increase our engagement metrics for our products and intensify internet emissions?
Or do we decrease engagement to lessen emissions, but lose our jobs?

There are strategies all UX designers can use (other than just stopping UX design altogether) to improve a product's eco-friendliness without losing usability, maintaining user engagement and keeping stakeholders happy. Let's review these 4 strategies below, then look at real-world examples of these strategies being practiced.

Table of contents

4 strategies for sustainable design
1. Reduce media's energy consumption
2. Design for low data consumption
3. Optimize user journeys
4. Encourage sustainable user behavior
Real-world examples
1. Netflix
2. Apple
3. Website examples

4 strategies for sustainable design

1. Reduce media's energy consumption

I. Apply efficient file formats
- Use the WebP format for images
- Use the MP4 (H.264) format for videos
- Use system fonts (like Arial) or the WOFF2 format for web fonts

II. Load content only when needed
- Use lazy loading so content loads when it becomes visible in the user's viewport
- Don't auto-play videos or animations unless triggered by the user (i.e., the user selects a "Play" or "Submit" button)

[Image: Use tools like Pixelied to adapt images into web-optimized file formats]

2. Design for low data consumption

I. Prioritize content
- Limit videos and animations (they have the largest file sizes)
- Provide different resolutions for media so the browser can select the most suitable size for the user's device and connection
- Use dark mode by default (let the user swap to light mode, if offered)

II. Reduce custom UI
- Build and use design systems for reusable UI components and patterns
- Avoid UX fads that are short-lived and will require a redesign later on

[Image: Use design systems, such as Atlassian's, to recycle UI components throughout a digital product]

3. Optimize user journeys

I. Reduce user steps
- Use a clear navigation and heading structure so users can easily find needed information, reducing unnecessary page loads
- Minimize user clicks and page loads by consolidating multi-step processes into fewer input fields or pages

II.
Create mindful experiences
- Add intentional delays to the user's journey (e.g., ask them if they want to continue after spending a certain amount of time in the product)
- Use caching to temporarily store copies of web-page content, reducing HTTP requests and optimizing page load speeds

[Image: Patagonia's navigation menu prioritizes intuitiveness and hierarchy, allowing easy findability]

4. Encourage sustainable user behavior

I. Offer sustainable options
- Suggest a "Low data" or "Eco" mode for users to toggle on, or label media that requires more energy for transparency
- Display estimated energy savings to reinforce the user's decision to choose low-data modes or simplified website experiences

II. Prompt users to take action
- Make it easy for users to identify and delete old files or messages taking up server or device storage
- Educate users with information to raise awareness about the product's decision to prioritize web sustainability

[Image: Avril's website includes a toggle for "Low energy" mode with an information tooltip adjacent]

Real-world examples

1. Netflix

Netflix includes its infamous "Are you still watching?" feature, which adds an intentional pause before the user continues consuming content (though it feels judgmental when you're binge watching). Netflix also offers only a dark mode color palette, enhancing both its visual appeal and its sustainability efforts.

Netflix recognizes its carbon footprint as a video-streaming service and has a target to reduce its emissions by roughly half by 2030. Netflix began caching content closer to its end users and improving server hardware, reducing data transfer by 30% between 2018 and 2021 and decreasing overall energy consumption.

[Image: Netflix's "Are you still watching?" feature adds a pause to the user's journey]

2. Apple

Apple includes a "Low Data Mode" in the settings for iOS 13 that reduces background activity and video streaming, decreasing data transfers.
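The low-data tactics described above (serving lighter media variants when a user opts into a low-data mode, or matching resolution to the viewport) can be sketched as a pure selection function. This is a minimal TypeScript sketch under stated assumptions: the variant names, byte sizes, and the `saveData` flag (similar in spirit to the Save-Data client hint) are all illustrative.

```typescript
// Pick the lightest media variant that still fits the device.
// Variants, sizes, and thresholds are hypothetical examples.

interface MediaVariant {
  url: string;
  width: number; // intrinsic width in px
  bytes: number; // transfer size
}

function pickVariant(
  variants: MediaVariant[],
  viewportWidth: number,
  saveData: boolean
): MediaVariant {
  // Sort lightest-first so ties resolve to the smallest transfer.
  const sorted = [...variants].sort((a, b) => a.bytes - b.bytes);
  if (saveData) {
    // Low-data mode: always serve the lightest file.
    return sorted[0];
  }
  // Otherwise: the lightest variant at least as wide as the viewport,
  // falling back to the largest available.
  return sorted.find(v => v.width >= viewportWidth) ?? sorted[sorted.length - 1];
}

const hero: MediaVariant[] = [
  { url: "hero-480.webp", width: 480, bytes: 40_000 },
  { url: "hero-1080.webp", width: 1080, bytes: 180_000 },
  { url: "hero-2160.webp", width: 2160, bytes: 600_000 },
];

console.log(pickVariant(hero, 800, false).url); // hero-1080.webp
console.log(pickVariant(hero, 800, true).url);  // hero-480.webp
```

In practice the browser can do this selection itself via `srcset`/`sizes` on an `img` element; the sketch just makes the trade-off explicit: a low-data preference always wins over resolution.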
This benefits users who are in low-connectivity areas as well as users who need to save on cellular data usage.

Apple also has environmental initiatives, such as Apple 2030, which aims to bring its net emissions to zero by 2030. Beyond evolving both its hardware and software, Apple implemented the Supplier Clean Energy Program to reduce overall emissions.

[Image: Apple provides a "Low Data Mode" option in the settings for cellular data]

3. Website examples

- C40 Cities: Their website redesign reduced CO2 output from 6.7 g to 0.34 g per homepage view.
- Impact Management Platform: Their website uses only 0.19 g of CO2 per homepage view while maintaining quality and performance.
- Good Energy: Their website is powered by 100% renewable energy, and each homepage view uses 0.58 g of CO2.

[Image: C40 Cities' website; examples via Ryte Magazine]

Climate change is happening, and the internet's emissions contribute heavily to it. A lot goes into powering the internet, from data centers to our individual devices, but how sustainable (or not) a digital product is depends largely on UX design.

Are the file formats efficient? Do videos and unnecessary content load automatically? Are users prompted to act more sustainably while using the product?

All these questions can influence a product's eco-friendliness without detracting from its usability or visual appeal, but it's up to UX designers to ensure sustainability is advocated for and implemented. By using strategies like designing for low data consumption and optimizing user journeys, products can reap the benefits of lower energy consumption while giving users faster, more effective experiences.

Instead of the standard "more, more, and more" approach found in UX design, let's adopt an intentional mindset: design for sustainability while maintaining an ideal experience for our users.

If you haven't read Tom Greenwood's Sustainable Web Design, I highly recommend it!

Can we actually design for sustainability?
was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Is this the death or dawn of human creativity?
    uxdesign.cc
Should we fear AI as a creativity villain, or celebrate it as a creativity companion? Continue reading on UX Collective
  • Its all fun and games until your boss starts vibe coding
    uxdesign.cc
Brace yourselves for a barrage of AI-assisted garbage made by people who don't know an API from an IPA.

[Image: Screenshot from the movie Office Space]

We've officially entered the age of "vibe coding" (what a ridiculous name). In this new digital frontier, anyone can prompt an AI with, "Build me an app that's like LinkedIn meets Tinder," complete with swipeable resumes, networking streaks, and flirty endorsement badges, and then publish it without a second thought. Cringeworthy, I know.

These tools spit out something that looks like a real product, and suddenly everyone's acting like it's 2008 and the App Store just launched.

The idea that no-code solutions can take the place of real development has been quietly gaining ground in the industry for a while now. A few years ago, the company I was working for put out a request for proposal (RFP) for a complex website build.

We found one agency we liked that quoted us around $100K. Expensive, sure, but it made sense given the serious research, custom design, .NET content management integration, and a six-month timeline.

The then-CEO wandered in and commented, "No way we're paying that. Can't we just build this with Wix in like a week?"

That was the moment I realized he hadn't listened to a single word we'd discussed in our meetings over the past three months.

Somewhere along the line, no-code tools like Wix and Canva have convinced people that digital products are simple. Drag, drop, add a stock photo, slap on a button: done.

Yes, these tools can pull off some neat tricks, but they're no substitute for building a digital product from the ground up. Not if you need it to actually work, scale, or avoid crumbling at the slightest sneeze.

What we're really dealing with is a psychological phenomenon known as the illusion of explanatory depth (IOED): the tendency to believe we understand something in far more detail than we actually do.

In this case, people assume they grasp how complex systems work simply because they've built a polished front end.
But the illusion quickly unravels when they're asked to explain or construct the underlying framework.

Worse still, these design-for-dummies platforms have emboldened non-designers and non-developers to skip the professionals altogether. Clients and stakeholders are suddenly cobbling together their own pages or websites, often with the elegance of a 2003 PowerPoint deck.

And now, vibe coding takes things to a whole new level. Never heard of it? Lucky you. It's when people use AI tools to build entire digital products based solely on vibes. What does that even mean? It sounds like something you'd do on a psychedelic retreat, not during product development.

They type in something like: "Make me a Gen Z-friendly team collab app with video meetups, synergy dashboards, hype analytics, and a dark mode aesthetic."

Yeah, I think I just threw up in my mouth typing that.

If you think you don't have enough time and resources to develop products properly now, just wait until vibe coding goes mainstream, when your boss starts asking, "Why can't we just build this in a day?"

Now, vibe coding can make sense if you're an experienced designer or developer; you'll immediately spot off-kilter usability or spaghetti code. But to a boss, a client, or even a random marketer, it all looks finished, as if they've just saved the company $100K.

As the saying goes, you can polish a turd, but it's still a turd, and what you're looking at is the digital-product equivalent: a stitched-together no-code hack, riddled with design flaws and crawling with enough bugs to ruin a summer picnic.

But the damage is done. Pandora's box has been opened, and there's no stuffing the AI genie back in.

All we can hope for, pray for, is that the industry eventually rediscovers an appreciation for the handcrafted and human touch: artisan websites and bespoke apps, the digital equivalent of small-batch leather goods or a gorgeous farmers market sourdough.
Imperfect, human, and not cranked out by a robot or someone who Googled "how to make a website" yesterday.

Until then, brace yourself for a deluge of vibe-coded monstrosities and AI-fueled eyesores. One day, we might look back on this era and say something we never thought we would:

"Man, I miss the days when our only gripe was Canva."

Don't miss out! Join my email list and receive the latest content.

It's all fun and games until your boss starts vibe coding was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • We built UX. We broke UX. And now we have to fix it!
    uxdesign.cc
We didn't just lose our influence. We gave it away. UX professionals need to stop accepting silence, reclaim our seat at the table, and design with strategic clarity, not just surface polish.

Maybe you've read the think pieces: UX is dead. Or dying. Or evolving. Or in a state of strategic irrelevance. Thought leaders like Pavel Samsonov, Patrick Neeman, Ed Orozco, and Cyd Harrell have all taken swings at the conversation, talking about how we've lost influence, lost trust, and in many cases, lost our way.

Let's not waste time sugarcoating it: UX didn't get sidelined by accident. We let it happen. We let ourselves be turned into ticket-takers, stylists, and decorators of decks no one reads. We watched "user-centered" become a checkbox. We accepted applause for work that never shipped and feedback that boiled down to, "Can you make it pop?"

And today we're still arguing about job titles while AI eats our credibility, while design systems distract from actual design, while the trust we once built is slipping away. The worst part? We're not even in the room to fight for it.

This isn't a nostalgia play for some golden age of UX. That version had its flaws too. But we've reached a point where too many talented people are being treated like overhead, and too many teams are building products no one understands, no one trusts, and no one uses.

For those of us who still believe UX isn't just about what's on the screen, that it's about how we show up, how we speak up, and how we make the case that what we do matters, it's time to stop whispering from the corner. Time to speak like we matter. Time to reclaim the voice we let slip away.

How UX lost its influence

UX didn't just get pushed out of strategic conversations. We let it happen. We focused on tools, not outcomes; process, not purpose.
And now, we're trying to design better systems from the kiddie table.

For years, we've been telling ourselves that we're advocating for the user, but in practice, we've often been advocating for our own process: our sitemaps, our card sorts, our post-it note frameworks. We've become so obsessed with how we do the work that we've lost sight of what the work is supposed to achieve.

As one UX Planet article bluntly puts it, "Stop preaching UX process!", reminding us that methodology without outcomes is theater.

Ed Orozco put it more diplomatically in his piece for UX Collective: "The highest-impact part of the design process is identifying and framing valuable problems to solve."

Pavel Samsonov echoes the shift when he writes that instead of using research to understand who we are building for, our orgs have been setting course based on the ideal user they'd like to sell to.

And nowhere is this more obvious than in UX conferences, which have become increasingly insular and repetitive. Instead of pushing the industry forward, many of these events feel like echo chambers of recycled slide decks; a carousel of talks about mapping, heuristics, and job titles, as if those are the levers that truly change products, teams, or trust. You can almost hear the collective rustling of Moleskines and tote bags every time someone mentions a double diamond.

We've become "problem solvers with our heads up our asses about process," as one Redditor quipped in a UX design thread about unpopular opinions.

The worst part? They're not wrong.

This kind of echo chamber has long frustrated thoughtful practitioners. Jared Spool once criticized the UX community for treating process like religion, turning useful tools into unquestioned rituals. In a 2017 article, he warned that process shouldn't come before vision: "When a team focuses on process first, before the vision, they can lose track of what they are trying to accomplish."

UX became cool.
That was part of the problem

Like cargo pants in the early 2000s, UX got cool fast and out of nowhere. Suddenly every startup, bank, and SaaS platform needed a UX person, even if they didn't know what that meant. The title became the equivalent of hot sauce: just sprinkle it on, and your product instantly had flavor.

"We need UX," they'd say, but they couldn't explain why. The demand exploded, and with that came a wave of people who wanted jobs. Unfortunately, that didn't always include people who had the responsibility or the experience. UX bootcamps sprung up everywhere, promising a fast track to a new career. The industry, eager to fill the growing demand, welcomed the influx. But while some programs were thoughtful, many prioritized speed over depth, offering just enough vocabulary to sound competent but not enough understanding to be effective.

As one UX leader told me bluntly, "Great, now you can draw boxes and make up a persona."

This created a dangerous cycle: companies hired underprepared designers, those designers couldn't explain their value, and stakeholders came away with the idea that UX was a soft, fragile discipline that slowed things down and overcomplicated the obvious. It's no surprise that many orgs left those engagements with a bad taste in their mouth, thinking "we tried UX and it didn't work."

But it wasn't UX that failed. It was the version of UX that we sold them: oversimplified, overpromised, and underpowered.

Patrick Neeman summed this up well: "Companies hire for UX because someone told them to, not because they understand what it is."

The feedback loop broke

The foundation of UX is supposed to be a feedback loop: research, insight, iteration, refinement. It's a discipline rooted in learning. But over time, that loop fractured. Usability testing became checkbox validation. Metrics replaced user stories. What was once discovery turned into justification.
A loop became a cul-de-sac.

As Pavel Samsonov observed, many teams today run p-hacked usability tests, structured not to learn, but to prove what someone already wanted to do.

In other words, they ran usability tests not to uncover problems or generate insight, but to justify decisions that had already been made.

In that kind of environment, outcomes take a back seat to optics. We stopped asking the hard questions. Even when we wanted to, we didn't have the time, the budget, or the air cover. Better to push pixels and pray.

Another reason UX keeps getting sidelined: false confidence. Teams look at half-baked flows and recycled design patterns and think, "That's close enough." They posit, "It worked in our last product," or, "That's how [insert over-glorified industry leader] does it." Instead of questioning the fit, they assume familiarity will substitute for usability. Nathan Curtis points out that when teams rely too heavily on pattern libraries and past solutions, they often mistake speed for efficacy and reduce the space for real problem-solving in the process.

What feels efficient to a product team often feels like friction to a user. Skipping UX to save time rarely does. It just guarantees you'll waste more of it cleaning up later.

The Nielsen Norman Group has been calling this out for years. They say that without stakeholder buy-in or an ability to tie UX work to business outcomes, teams get stuck in surface-level deliverables that lack strategic weight.

We taught ourselves the wrong lessons

Many designers came into this work because they cared. They cared about people, about systems, about making things better. Instead, they found themselves performing process for process's sake. The post-its went up. The journey map was made. The Figma file was perfect. And nothing changed.

Others just quietly walked away.

Those who stayed learned to keep their heads down, or learned to speak the language of delivery.
They learned to get excited about design tokens, or design systems, or dark mode settings. Really, anything that didn't require facing the void of real influence.

And we started to believe the myth: that this was as good as it gets. That UX was just a phase in the software development lifecycle. That design speaks for itself. That our value should be obvious.

It isn't.

As Cyd Harrell has said about civic design in her podcast, if we're not working with intention, empathy, and a sense of responsibility, then we're just performing. And if we're just performing, we might as well do it on TikTok. At least then someone's paying attention.

Until we learn how to speak up again, clearly, credibly, and in context, we'll keep getting the version of UX that the business is willing to tolerate, not the one we know the user actually needs.

The trust crisis

What AI (and everything else) is telling us

We're watching history repeat itself, and this time at machine speed. AI is the latest shiny object in tech, being shipped fast, scaled faster, and handed to users with the same shrug we've seen before: users will figure it out. But they won't. Or worse: they'll stop trusting the systems we build altogether.

This isn't just an AI problem. It's a design problem. And more specifically, a UX credibility problem.

We've accidentally trained stakeholders (executives, product leads, and entire orgs) to believe UX is a nice-to-have. That was a mistake. UX isn't some bonus level you unlock when the roadmap clears up, or a last-minute sprinkle to impress the execs. It's not the parsley garnish on your AI steak. It's the plate, the table, and half the damn kitchen.

As I wrote in "We Trust AI Until We Don't," trust in AI has almost nothing to do with logic. It has everything to do with comfort zones. We trust autocomplete, but not AI-powered diagnosis. We'll use facial recognition to unlock our phones, but not to approve a loan.

Comfort zones are a UX concern.
But if we've been reduced to "make it pretty" or "clean up the flows," we lose the ability to shape the experience people actually have with AI, not just what it looks like, but whether they trust it at all.

Cyd Harrell has long talked about the ethical implications of design in the public sector. She reminds us that government interfaces aren't just digital interactions; they're moral contracts. The same applies to AI. These systems don't just serve people. They make decisions about people.

Cyd says, "Government technology should work at least as well as the private sector, because it carries the weight of moral obligation."

If people don't understand how a system works, or worse, believe it's lying to them, we've failed. Not because of bad tech, but because of broken trust.

This erosion of trust is well-documented. A 2023 KPMG study found that 61% of global respondents were wary of trusting AI systems, with only 39% expressing confidence in their accuracy.

Similarly, a 2022 study published in the International Journal of Human-Computer Interaction highlighted that trust in AI is shaped not only by performance, but also by transparency, ethical safeguards, and how well the system supports human understanding.

Meanwhile, research from the University of Pennsylvania's Wharton School found that users build trust in AI incrementally, if it helps them succeed. Trust isn't immediate. It's earned, interaction by interaction, experience by experience.

Despite that, many AI tools are being rolled out like candy from a marketing piñata, with little evidence that UX research is guiding their design. As Nielsen Norman Group puts it, AI initiatives often prioritize the tech first and only loop in UX once it's too late to influence direction. Microsoft, in its own UX guidance for responsible AI, urges teams to involve design and research from the start, not after the model is built, because trust and understanding can't be bolted on later.

Where's the usability testing for large language models?
The participatory design sessions with real users? The accessibility work?

Spoiler: it's happening too late, if at all.

We're also seeing the quiet normalization of dark patterns: UI decisions designed not to help users, but to trap them. Confirmshaming. Forced continuity. Roach motel flows. These aren't edge cases. They're often built in on purpose and are often known as "dark UX." We build features that lock people into ecosystems, bury cancel buttons, manipulate behavior, or push frictionless engagement over informed decision-making.

As Deceptive Design documents, these patterns are increasingly used to boost short-term metrics at the expense of long-term trust. It's anti-user behavior masked as clever conversion strategy, and the kind of thing a strong UX presence used to stop before it started.

In 2022, the FTC issued a policy statement calling out the rise of manipulative interfaces, citing how they trick or trap consumers into subscriptions or disclosing personal data. That's what happens when UX becomes reactive, silent, or excluded from decision-making entirely.

We reward metrics that go up, even if trust goes down.

So once again, we forget the most important part of user experience: the user.

We've seen this movie before. In fintech. In healthcare. In hiring platforms. In government services. We ship complexity, slap on a dashboard, and expect trust. Then we act surprised when users either disengage or rage-quit the experience, like their private Slack group was just exposed to the whole company.

Here's the part that doesn't get said out loud enough: this isn't just a UX failure. It's a business failure. Because when you ignore the human, you lose the customer. Trust isn't a soft metric. It's a hard outcome. It's revenue. Retention. Reputation. UX is where user needs and business goals are supposed to shake hands, not silently walk past each other like exes at a conference.

And all the while, we're updating decks. Rebuilding flows.
Writing another Jira ticket with a "low effort, high impact" tag we know isn't fooling anyone. And still, we wait our turn to be listened to.

Spoiler: that turn rarely comes.

As Jeffrey Veen, founding partner at Adaptive Path and former VP of Design at Adobe, said, "Design without strategy is just decoration." And if that sounds a little too business-school chic, let's bring it down to earth with Sarah Doody, who put it more plainly: "When you involve people in the process, they're more likely to believe the results."

Strategy comes from talking to people. Trust comes from including them. If you're not grounding your work in outcomes, context, and conversation, you're not designing; you're redecorating.

UX without trust is theater. UX without outcomes is noise.

If users don't trust the systems we design, that's not a PM problem. It's a design failure. And if we don't fix it, someone else will, probably with worse instincts, fewer ethics, and a much louder bullhorn.

UX is supposed to be the human layer of technology. It's also supposed to be the place where strategy and empathy actually talk to each other. If we can't reclaim that space, can't build products people understand, trust, and want to return to, then what exactly are we doing here?

Reclaiming the voice

The case for speaking up (again)

Let's not pretend this is some Pixar redemption arc. We're not Andy's toys waiting to be rescued from the donation bin. We're Woody, realizing we still matter, even if we've been boxed up for a few years. The job's not over. The kid still needs us. The work still needs doing.

But here's the thing: influence is recoverable. It didn't die; it drifted. We let it. We traded our voices for seatbelts in the product roadmap van and forgot that we used to drive.

Getting that voice back doesn't mean pounding the table or redesigning your portfolio for the fifth time this year. It means remembering that UX at its best doesn't just make products better; it makes decisions smarter. It makes businesses better.
It puts humanity back into systems, and it brings business objectives into focus by connecting them to actual human behavior. All in language people can understand.

Reclaiming our voice means not waiting until a stakeholder asks for a redesign. It means being in the room when the problem is being defined in the first place. It means asking better questions, earlier, and not just the "what are we solving?" kind, but "why is this even a thing we're doing?"

Jon Yablonski phrased it well: "The best way to get people to care about UX is to show them what happens when you don't." Because if we're not involved in shaping the direction, we're just reacting to it. That's not strategy. That's survival.

It also means being honest about value. If what you're shipping doesn't work for users, it doesn't matter how elegant the typography is. As Cameron Moll puts it, "What separates design from art is that design is meant to be functional." And if we don't bring that clarity, we can't be surprised when we're asked to "make it pop" one more time.

And let's stop pretending the work ends when the prototype hits the handoff doc. Your job doesn't stop at the screen. It just starts there. As Dieter Rams says, "Good design is thorough down to the last detail. Nothing must be arbitrary or left to chance."

We don't need louder voices. We need clearer ones. We need to talk like we know what we're solving, and who it's for. UX isn't valuable because it adds polish. It's valuable because it prevents dumb, expensive mistakes before they ever leave the sprint.

Your UX voice isn't your style or your deliverables. It's your ability to connect what people need to what the business can deliver, and to make sure no one forgets that alignment is what success actually looks like.

We don't need more templates. We need more conviction.
We need to speak plainly, challenge politely, and stay laser-focused on building things that earn trust and actually work. Let's build a UX practice that people don't just invite in at the last minute, but count on from the start. Let's get back to that.

A few ways to start

Ask better questions earlier. Don't wait until usability testing to challenge assumptions. Start during planning. Be the one who says, "What are we actually trying to solve here?"
Make your work visible. Stop hiding behind Figma files. Build bridges with product, engineering, and marketing. Show how your thinking impacts real business outcomes.
Use data and narrative. Pair your metrics with stories. Don't just say a design improved conversion. Tell them why it did.
Include more voices. Great UX doesn't come from isolation. Invite stakeholders into your process so they own the insights, not just the output.
Stay curious, not precious. Fight the instinct to defend your solution. Defend the problem you're solving. Everything else is just form.

Let's stop waiting for permission and start showing what UX was always meant to be.

"We built UX. We broke UX. And now we have to fix it!" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • From human-centred design to designer-as-human
    uxdesign.cc
There's no view from nowhere: embracing our humanity as design expertise.

Last week, I listened to Keke Palmer's podcast episode called "Couples Therapy, Narcissism, and Open Relationships" with Dr. Orna Guralnik. When Dr. Orna went off about the evolution of psychotherapy, I couldn't help but say to myself, "this is what design needs."

Keke Palmer with Dr. Orna Guralnik

While I'm a firm believer in therapy, I'm even more passionate about connecting themes from diverse disciplines, like therapy, to design thinking. My thesis last year was entitled "Maybe We're Creative: What I Learned about Co-creation in Design by Dancing with My Dad." When I'm designing, which is almost always, I bring my perspectives from business, social service work, community arts, and other lived experiences, like dance, into my practice. These perspectives don't just inform my work; they expand it.

Experiences that shape who I am as a designer

Feeling the connections between the experiences that shape how I practice design comes as naturally as breathing to me. This is also why existing in the domain can sometimes feel incredibly frustrating. In Design Journeys Through Complex Systems, Peter Jones and Kristel Van Ael (2022) write:

"Designers, social innovators, and business leaders are now called to address transformational challenges for which we have no relevant academic or practice training; these challenges are fascinating, but not quite welcome."

The challenges we're facing have only become more complex since 2022. However, I'd argue that we have always had relevant training; we just aren't taught to look inward to find it, at least not in traditional institutional spaces. Relevant training is everywhere. Yet I'm often met with resistance when I suggest designers integrate knowledge from outside traditional design methodologies, specifically our lived experiences.
I wonder if this resistance might come from the fear of losing our expert status, of being taken off our design pedestals, when we decentralize expertise in this way. Scholar Sasha Costanza-Chock, author of Design Justice: Community-Led Practices to Build the Worlds We Need, also argues that dominant design practices reinforce existing hierarchies of knowledge, and advocates for the redistribution of power in her work.

Image by Tamra Carhart capturing a Design Justice workshop co-led by Costanza-Chock.

My approach is aligned with work in design justice and participatory design, focusing on collaborative meaning-making. By embracing this approach, we move beyond extractive design practices and towards co-creation, where a range of knowledge can be shared and valued rather than singularly imposed. All of this might sound similar to the verbiage around human-centred design, but the gap I'm witnessing isn't in verbiage; it's in practice and application. Designers are really good at talking about inclusive theories, but in application and follow-through we, as a discipline, still uphold traditional, extractive, designer-as-expert foundations underneath the tools we use.

Similarly, Lilly Irani, in her critique of human-centred design, notes that though there is an increasing emphasis on empathy in design practice, human-centred design still privileges the designer as the central problem solver, and participants are assigned roles such as consumers. A human-centred approach is insufficient when the basis of the approach has not been reorganized and power redistributed. Whether in consulting or teaching, despite efforts towards participatory research and co-design, designers ultimately tend to defer to expert tools almost exclusively to solve complex problems, desiring clear-cut answers, answers they can trust. Furthermore, lately, trust has been increasingly placed in AI.
But the question remains: what are we sacrificing when we outsource our inquiries to systems that, by design, distance us from ourselves?

Trust, design, and the search for answers

Latanya Sweeney for On Being with Krista Tippett

In the On Being episode "On Shaping Technology to Human Purpose," Latanya Sweeney, founder of Harvard's Public Interest Tech Lab, asks:

"Our need for a north star around truth is just fundamental to democracy. We can't really survive if all of us come to a table with completely different belief systems and [are] not even able to find a common fact that we agree on. How do we build trust at scale?"

Individual desires for universal truths and the reality of our mixed lived experiences will constantly be at odds and challenge designers everywhere. We often assume trust should be placed in external systems: research methods, industry best practices, and AI-generated insights that simplify and neatly categorize those mixed lived experiences. However, I've noticed that using current research methods and tools facilitates an erasure of complexity more than a simplification or generalization that attempts to capture complexity. But I find that collectively, designer or otherwise, our most harmful behaviour isn't our desire for simplicity; it's the assumption that someone else, somewhere else, some expert, holds the answers we seek, rather than ourselves or the people we design with.

When we rely on unquestioned authorities, we risk diminishing not only our agency as designers and as human beings but also our multi-faceted existence. I'm tired of hearing about the world becoming more polarized when the reality is that, specifically, rhetoric and media are increasingly polarizing. Humans have been and will remain complex within polarized political and economic systems that simply don't capture our lived realities. The tension between us and these systems is increasing, but we are by no means polarized by nature.
Designing with tools that don't capture this multi-facetedness only increases this tension.

Designer-as-human: moving beyond objectivity

Design thinking is often framed as a problem-solving process guiding practitioners toward optimal solutions to complex problems. But foundationally, how do our tools define the complex issue, and what counts as an optimal solution? Even in participatory sessions, whose knowledge is privileged? Furthermore, does the end solution adequately capture the participation?

Rather than positioning the designer as a neutral expert, I argue for Designer-as-Human: an approach that embraces subjectivity, uncertainty, and relational knowledge. This shifts the focus from mastering frameworks to cultivating self-reflection and trust in ourselves, our collaborators, and the communities we design with.

Dr. Orna describes her therapeutic approach as relational and interpersonal: the therapist is not an all-knowing authority but an active participant in the room. Similarly, designers are never separate actors in the design process. Yet, in professional practice, we act as if we can remove ourselves and serve as third-party facilitators, leaving the organizations we touch without a trace other than a brief, as if expertise requires distance and objectivity.

Work-in-progress journey map comparing traditional human-centred design and designer-as-human design

I ask: what if acknowledging our own humanity sharpened our expertise rather than discredited it?

Lived Experience Cartography: a tool for relational design

I do believe in healthy boundaries in professional settings, but I do not think that complete dissociation is necessary, or even possible, for design to take place. Furthermore, attempting to compartmentalize ourselves weakens our designs and the future of design thinking practice.

Years ago, when I was working in a non-profit, I learned that one in three people will experience gender-based violence in their lifetime.
Historically, I thought that I had to be working at a non-profit to be of service to humans. But I remember this statistic being a light-bulb moment for me. If one in three people have these experiences, then every workplace with more than three people needs some sort of trauma-informed care. Complex lived experiences rooted in our humanity are the reality of our workplaces, educational environments, and communities, and by denying the existence of their overlap in ourselves and our designs, we are missing vital pieces of how to build truly life-centred, adaptive, flexible, and caring systems.

Choreographer Twyla Tharp (2006), in The Creative Habit, writes that anything a person creates will reflect ten items: ambition, body, goals, passions, memories, prejudices, distractions, fears, ideas, and needs. These ten items shape a person's life by how they've learned to channel their experiences into them. Tharp says that when she walks into a room, she is alone, but with these ten things.

Work-in-progress developing Lived Experience Cartography using inspiration from Tharp

I saw parallels between these ten items and the evolution of my thesis research. I used the spirit of Tharp's interpretation of expression as inspiration for coding 14 in-depth participant interviews and the three-month autoethnographic reflective dance study and reflection practice with my dad. Some of Tharp's items fit into larger themes from the interviews, and conversely, some of the interview themes fit into Tharp's items. The result was a visual, conceptual framework: Lived Experience Cartography.

This preliminary framework is more than just a checklist. It's built to be nuanced and ever-changing, to help individuals and teams reflect, visualize their experiences, and engage more deeply with others. It slows down conventional design processes, making space for relational engagement and acknowledging diverse lived realities.
This framework is currently being tested in a workshop setting to develop an iteration of an interactive prototype.

Preliminary framework of Lived Experience Cartography

At the core of this model are collective Ideas, Needs, and Fears: the aspects of self that are most immediately visible in collective settings. Surrounding them is an outer layer of individual lived experiences, including memory, identity, translation, future, collaboration, environment, and measure. These shape how we see and participate in the world. As lived experiences change, so does how we relate to each section. The model's contents will constantly be changing in relationship to the rest of its parts, and instead of being static and controllable, its movement is what we're made aware of.

Two key reflection questions guide the process:

Outer layer: How do my experiences shape how I see?
Inner layer: Are any of my experiences present in our collective expression?

As you can see from the model, the idea is to keep individual lived experiences intact and referenceable, so that designers can trace back whether what is collectively expressed captures the rich outer layer. Rather than extracting insights from participants, this tool encourages reciprocal meaning-making. Perhaps two people express similar ideas or fears, but they may stem from completely different lived experiences. How can we gain more insight into better, more nuanced, and more informed design by not losing those vital pieces of experience?

The model will help designers and participants resist the impulse to force singular solutions onto complex challenges, instead embracing the ambiguity that emerges in real co-creation. I see this work sitting next to Naomi Rothman and Shimul Melwani's work on the social functions of ambivalence and embracing paradoxical thinking. In an episode of Hidden Brain, Rothman shared how mixed emotions, or being pulled in different ways, can be a good thing for groups. It can lead to rich, complex thinking: a both-and, not either-or.
If we want rich designs that better capture the world we're sharing, then we must face the reality that yes, a process of collaborating with the multitudes and validity of experiences might be overwhelming at times, but it does not have to be immobilizing. We can do this, but we need to be intentional and patient with it.

Hidden Brain podcast episode with Naomi Rothman

Natalie Loveless, in How to Make Art at the End of the World: A Manifesto for Research-Creation, turns to academics King's and Haraway's respective works on stories to ask:

How are we remade by all we speak and hear?
How are we remade through all we touch and are touched by?

Lived Experience Cartography does not explicitly aim to change any outcome, even though the outcome may inevitably change through engagement. Specifically, this tool aims to commit design teams to acknowledging more about who, and what experiences, are in the room when we gather, and to honestly asking whether they are represented in the collective ideas, needs, and fears that more visibly inform the system's design.

Work-in-progress journey map of Designer-as-Human design

It's about prioritizing seeing and hearing the people around you and being open to flexibility in what we're working towards. It underlines that sometimes it isn't a revolutionary new idea or innovation that's needed to change group dynamics or relationships. Sometimes, being seen can radically shift a dynamic or environment. However, we rarely slow down enough in our social environments to know that, and from my experience, we lack visual tools to aid group dynamic theories. Hence, the development of Lived Experience Cartography.

Accountability, failure, and reimagining design

Designers, like therapists, educators, and technologists, must reckon with their role in shaping not just solutions but relationships.
If we continue deferring to external authorities without questioning the foundations they are built on, we risk reinforcing existing hierarchies and narrow stories rather than reimagining new possibilities in the present and future.

Legacy Russell, in Glitch Feminism, asks:

"What does it mean to find life, to find ourselves, through the framework of failure? To build models that stand with strength on their own, not to be held up against those that have failed us, as reactionary tools of resistance? Here is the opportunity to build new worlds."

Trusting our own humanity as designers means embracing the process despite having unknown outcomes; it's seeing failure along the way not as an endpoint but as continuous openings. It means recognizing that knowledge is not just something we acquire through learning but something we have in our lived experiences and further expand with others. Ultimately, developing relationships means shifting from human-centred design to a deeper, more accountable practice of Designer-as-Human.

"From human-centred design to designer-as-human" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Avoid premature solutions: how to respond when stakeholders ask for certain designs
    uxdesign.cc
How to avoid anchoring problems that result in stuck designers. Continue reading on UX Collective.
  • Do chatbots really need faces?
    uxdesign.cc
Rethinking human-like avatars in generative AI chatbots

Source: author

Chatbots are considered tricky to design because most designers focus on creating a great UI, while the conversational aspect often remains an afterthought. Let's break down what 80% of chatbot designers focus on: the UI. This includes message bubbles, recommendations, headers, input sections, and often, a fancy chatbot avatar. Or they aim for the "cool" approach: giving the chatbot a highly realistic AI-generated face that moves its lips while synthesizing speech from text.

When choosing the most suitable avatar for a chatbot assistant on a complex sales dashboard, the question arose: how realistic should the chatbot's face be? Should we match it to the audience's context, or should it portray the brand?

Source: author

In many cases, stakeholders' suggestions may lean toward making it human-like and hyper-realistic, thinking it's a brilliant idea, but the Uncanny Valley Effect (UVE) is precisely why you shouldn't give your chatbot a hyper-realistic avatar.

Let's dial it back a bit

Visuals have always made a strong statement in all interfaces. Printed on paper, visuals have been convincing; on digital interfaces, they have been both useful and influential. Visual elements play with linguistic elements in a chatbot to convey feelings or emotions. After decades of designing UI, designers tend to cling to visual aesthetics, leading to eye-pleasing explorations on Dribbble. This is great, until UI isn't the only thing in the picture.

Chatbots are designed to replace human agents in many domains, from online tutoring to customer service to cognitive therapy. But the interactions often feel machine-like. Besides building UI elements, a significant part of chatbot design focuses on usability and humanizing the experience. Avatars are considered a crucial element in this effort. Chatbots existed long before avatars became a consideration.
For instance, ELIZA, the first artificial intelligence chatbot in 1966, imitated a therapist and made users believe they were conversing with a real human. As chatbots evolved, designers began incorporating visual elements to enhance user interaction. This led to the emergence of chatbot avatars: graphical representations that add a layer of personality and engagement to conversations. The term "Conversational Avatar" was first used in 1999 in a paper by Justine Cassell and Hannes Högni Vilhjálmsson titled "Fully Embodied Conversational Avatars: Making Communicative Behaviors Autonomous."

What are avatars for chatbots?

An avatar is a visual representation of the chatbot that appears as the entity the user is talking to in conversations. The avatar is based on the chatbot's personality, accommodating various factors including the target audience and brand guidelines.

Emotional connection: avatars create an experience of co-presence for the user and increase the level of social presence in the shared virtual space.
Extension of brand identity: the avatar can embody your brand personality. Think of a mascot-style avatar for a playful brand, or a sleek, professional one for a corporate setting, adding to visual appeal.

But not all avatars are created equal. Choosing the right type of avatar depends on the chatbot's purpose, audience, and brand identity. Ever wondered why Gemini, Copilot, ChatGPT, Siri, Alexa, and Google Assistant don't have avatars?

Choosing the right avatar

The best avatar aligns with your brand and target audience. Here are some options to consider:

Cartoon characters: friendly and approachable, perfect for casual interactions.
Brand mascots: reinforce brand recognition and add a touch of fun.
Photorealistic avatars: more human-like, supposedly ideal for building trust and credibility.

Advanced systems like ChatAvatar can generate photorealistic 3D avatars of human faces through conversational text prompts.
Many articles also recommend choosing a human-like appearance for chatbots. While avatars can enhance chatbot interactions, making them too human-like can have unintended consequences. Remember I mentioned UVE, the Uncanny Valley Effect? Well:

The uncanny valley: when "almost human" gets creepy

When something appears almost human but has imperfections, it triggers a sense of eeriness or discomfort in us. This might be because it confuses our brains: is it real or not? It can lead to feelings of disgust, fear, or even revulsion.

Source: https://essay.utwente.nl/78097/1/BachelorThesis_upload.pdf

Picture a graph with a deep valley in the middle. On the left side, you have robots or animations that are clearly not human; think industrial robots. As you move to the right, things get more human-like. But then you hit the uncanny valley: that unsettling dip where things look almost human, but not quite. This is where the UVE kicks in.

For chatbots, the uncanny valley is especially relevant. We want avatars to be relatable and engaging, but if they look too human-like, even slight flaws in expression can backfire and creep users out. Imagine an average AI chatbot with a super-realistic avatar, which initially sets the user's expectations too high, only to let them down with generic responses. Cartoon-like characters are particularly appreciated in HCI (Human-Computer Interaction) because they lower customer expectations of the character's skills and help match the system's technical abilities.

Source: https://essay.utwente.nl/78097/1/BachelorThesis_upload.pdf

Research provides evidence that UVE, engendered by an avatar's hyperrealistic design and animacy, negatively impacts participants' purchase intention and their willingness to reuse the anthropomorphic chatbot. These results are consistent with predictions from UVE.

Given these psychological effects, do chatbot avatars actually improve user engagement?
Research suggests the answer isn't so straightforward.

Mixed results on the benefits of chatbot avatars

While some researchers found that avatars smooth the interaction process, other studies have mixed results. Some participants find interactions with avatar-involved chatbots more engaging, while others say there is no need for an avatar. With conflicting findings, it's clear that avatars aren't always necessary. Instead of focusing on making chatbots more human-like, should we rethink what users actually need from them?

In the era of deepfakes and AI imitating human speech, it is difficult to control a designer's urge to make a chatbot unique, different, and more realistic with a human-like avatar. This might explain why leading AI chatbots (ChatGPT, Gemini, Copilot) don't have avatars. Still, I asked them.

Source: author
Source: author
Source: author

The bottom line

A well-designed and well-implemented chatbot personality supersedes the need for a human-like avatar to make a human connection. We keep working towards making technology more like humans, but sometimes even humans don't truly understand each other. The person living with me might not know I am looking to buy a smartwatch, but my social media algorithms know that, and even recommend me nice options. Maybe users will stop seeking human connections in AI and algorithms and start accepting the magic-like experience, where AI, the machine, understands their needs, provides resolutions at the right time, and helps them accomplish their tasks. We are so used to projecting emotions onto things; that's our human habit of anthropomorphizing everything, blurring the lines between making technology more human and making humans more accepting of technology's benefits and tools.
We could improve things by using technology as it is, rather than spending excessive time and effort trying to humanize technology and create another set of humans.

References / further reads

Usability of information-retrieval chatbots and the effects of avatars on trust
Millennials' attitude toward chatbots: an experimental study in a social relationship perspective
Humanizing chatbots: The effects of visual, identity, and conversational cues on humanness perceptions
Why the Chatbot Avatar Doesn't Matter
Our human habit of anthropomorphizing everything
https://botpenguin.com/glossary/chatbot-avatar

"Do chatbots really need faces?" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • UX or PX? Why naming matters
    uxdesign.cc
From components to job titles, human systems design could be our identity for the future. Continue reading on UX Collective.
  • Vibe coding tips for product designers
    uxdesign.cc
Vibe coding is not there yet for production-ready code. This article focuses on how product designers can get the most out of the new generation of AI IDE tools with integrated LLMs, for prototyping and building smaller, less complex apps.

Image from Flux model, Krea AI

Start in a simple tool and jump to Cursor or Windsurf

Tools like Bolt, v0, and Lovable are the best when it comes to starting a project (except if you're building in Swift for iOS). They create a repository of necessary files, run all the necessary project-setup commands, and create the dependencies for front-end and back-end. This initial setup is often the most intimidating barrier that prevents designers from venturing into coding, and these tools can automate most of the challenges at this stage.

Lovable / v0 / Bolt: ideal for design-focused work with strong UI capabilities, for building landing pages, small flows with mock data, and quick prototypes of your idea; uses Tailwind, Shadcn, or other source libraries and design systems.
Cursor / Windsurf: best for complex projects where you want to build a prototype with more logic, data, APIs, and algorithms.

Once you're happy with the initial draft, you can export the code from these tools and build on it in Cursor or Windsurf.

Simple tools vs. co-building IDEs

Use powerful models to break down the PRD

A PRD (Product Requirements Document) outlines the requirements, features, functionality, and behavior of a product. When building a feature, asking the LLM to break it down into smaller steps, so they're easier to implement, is key. Got a big, complex feature that you want to build? Imagine how you would explain it to a colleague. You would break it down into multiple parts of the journey, into flows, into screens, and into elements.
This is the level of detail that should get you to one-shot features, meaning you get a highly usable output on the first attempt or prompt.

The first prompt is the most important in Cursor, as it sets the base for the whole context and knowledge foundation for the AI model. Use powerful models like Claude Sonnet or ChatGPT 4.5 (as of writing) to explain what you are trying to build, why it will be useful, and how you want it broken down. Precisely like how you would explain it to another human. This prompting technique from Twitter user https://x.com/benhylak works really well in most scenarios for building a PRD. You can also have a look at the original post.

Credits: https://x.com/benhylak

Know thyself. Know your code.

In the initial stages, as you use Cursor, try to read some code. Ask the AI tool what a code snippet does, even if you don't understand it. Try to get familiar with the codebase. You can ask Cursor something like: "What are the most important functions and code snippets for me to get familiar with the codebase?" You can also select a chunk of code and chat with the AI, asking more questions about it. I recommend doing this, as it helps you troubleshoot easily as your project grows with more complex features.

Selecting a chunk of code to chat with AI

At that stage, knowing your codebase will help you overcome blockers by guiding the AI toward the solution rather than hoping for the best while going back and forth in circles (what I call the "prompt circus") without making progress.

Prompting techniques

This is the most important tip. If you don't want to read the rest, make sure to just read this one. When building out a feature, ask the AI to ask clarifying questions after every prompt. This helps build additional context.
It's like talking to a colleague once again: the back-and-forth questioning and answering builds context around the problem and the feature, for both you and the AI model, which makes the results much more correct without too much troubleshooting.

Another prompt, for bug fixing: "Before trying to code, reflect on 5-7 different sources of the problem, distill those down to the 1-2 most likely sources, and then add logs to validate your assumptions before we move on to implementing the actual code fix." This prompt works really well when you're getting stuck with errors and the AI keeps circling back or breaking your code and deleting important parts (hence it's very important to know your code).

Use your design-tool knowledge: when making front-end changes, be very specific. For example, don't just say "move the button to the centre with some spacing from the text field"; say "Move the button relative to the text field component and space it 4px from the left. It should be fixed in its width."

Use @. In Cursor and Windsurf, for example, you can reference exact files or even folders by typing @fileorfoldername. This approach offers several advantages: it helps the AI narrow its context to specific files, and it conserves your credits.

Linking a file in the Cursor / Windsurf chat window

Use @web or paste web links to give more context to the AI. If you've found a good solution online for your problem, or if the language has proper syntax that's published online (like Apple developer guidelines for Swift and SwiftUI), you can link it in your prompt to help solve the problem.

A lighter context window gets faster solutions

A context window in AI is like the model's short-term memory. It refers to the amount of information (text, images, code) that the AI can see, remember, and work with.

Creating a new chat window once context becomes too long (Windsurf IDE)

The lighter this is, the faster and cheaper it is to get good-quality solutions.
Always create a new chat by clicking the + at the top in Cursor or Windsurf when your chat starts to get long, or when you've successfully completed a feature and the next implementation doesn't require context from the current chat window.

Learn to use Git. Just as we use version control and branching systems in Figma (if you've been working with a design system), learning to version control your code and make big changes on a branch is very important. It takes less than five seconds once you're set up with GitHub and Cursor, and I recommend doing it as often as possible: after every feature implementation or bug fix you're happy with.

This ensures that, in the worst-case scenario where your AI deletes something important, you can always go back to an older version, preventing a lot of heartache (speaking from experience as someone who didn't know how to use it properly before).

Don't overthink it. You can do all of this in Cursor directly. The points below are more than enough for a good start and will help you revert in case something goes wrong:

Commit changes after each successful feature implementation.

Write descriptive commit messages that explain what changed and why, so you know which implementation to roll back to when things go wrong.

Committing and saving changes in the local repository. You can then connect your GitHub and push changes

Use overarching rules and project documentation. In Cursor and Windsurf you can set high-level rules. This ensures that every prompt you make will consider this master rule and try to accommodate it (I've found it doesn't always work, but it mostly does).

Project documentation is extremely important from the start. This tip pairs well with the "Know your code" tip, as you build up a mini database of valuable information in markdown (.md) files. I generally ask Claude Sonnet or Cursor to create a database.md, appflow.md, and more (according to my project), and ask it to fill them in from the codebase as a guide.
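The Git checkpoint habit described above can be sketched in a few shell commands. The repository location, branch name, file, and commit message here are all illustrative; the point is the rhythm: branch for risky work, commit after every feature you are happy with, so every good state becomes a restore point.

```shell
set -e
repo=$(mktemp -d)                         # throwaway repo for the demo
cd "$repo"
git init -q
git config user.email "demo@example.com"  # identity needed to commit in a fresh env
git config user.name "Demo"
git checkout -q -b feature/login-form     # do risky AI-assisted work on a branch
echo "login form v1" > login.txt          # stand-in for your feature's files
git add login.txt
git commit -q -m "feat: add login form layout"
git log --oneline                         # each commit here is a restore point
```

If the AI later breaks something, `git checkout -- .` discards uncommitted edits, and `git revert <commit>` safely undoes a bad commit without rewriting history.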
These .md files are human-readable, which helps you get acquainted with the code.

Both human- and AI-readable files that contain project information

You can then reference these files using the @filename prompting technique. So the flow becomes:

1. Start a new composer (at this point, the AI has very little context on what you are trying to do).
2. Type your prompt with good context setting (as discussed previously).
3. Use @ to reference your project documentation, along with any files you can map to the task.
4. Once the implementation is done, tell the AI to update the relevant markdown files with the new implementation logic and guidance. This keeps the project documentation up to date and makes it the source of truth when you want to reference and learn more about the codebase.

Project documentation becomes more important as features pile up and your app depends on more logic. At some point, the AI model will struggle to go through all your files and gets lazy as the context window shrinks. At that point, referencing project documentation has worked really well for me.

Bringing custom designs from Figma to code. Many of us want to vibe code with our own designs sketched in Figma. To get there, I've found it helps to stop treating Figma as a sketchbook and switch to a production-focused mindset: name layers, create auto-layouts (the most important tip for getting good code when you use dev mode), name your components, create clean, logical hierarchies in your layers, and add proper constraints.

Figma dev mode code to AI tools

You can use dev mode to export code, bring it into Cursor, and ask the AI to integrate the feature with the custom UI.
This takes a few trial-and-error iterations. You can also use plugins with OpenAI and Anthropic APIs to achieve this code export, then paste the exported code into Cursor to integrate it into your project or feature (you can find out how to do this online). Iterate with specific, UI-focused prompts once you get the basic UI and code in place.

For small changes, tweak things manually. Don't use AI. Tweaking smaller changes manually is faster; use your design knowledge. Once you've built a good understanding of your project, you can change values for UI iterations (shadows, colour hex values, gradients, corner radius, and more) yourself and check the preview to finalise your decision. This saves massive amounts of time, and even credits, compared with asking the AI to "change the corner radius from 24 to 28" when the context window is already tight and the AI is struggling to keep up with new requests. It might also change unintended code, which could break your app.

This is by no means an exhaustive list. These are just some lessons I learned over the last few months using these tools to build complex prototypes and ship products. There's still so much to learn and improve in terms of workflow. I hope these help on your vibe coding journey.

And remember to have fun!

Vibe coding tips for product designers was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Vibe coding, hype-driven CEOs, should you copy a competitor?
    uxdesign.cc
Weekly curated resources for designers: thinkers and makers.

Are you wondering WTF all this vibe coding stuff is that's been sweeping your social feeds? You probably get the gist: it's about having a vision of an app or software you want to build and vibing your way into it with the help of an AI tool.

If you're an old-school coder like me, you might be laughing a bit at the idea that you can just blink software into reality, especially when you think about all the blood, sweat, and tears it traditionally takes to build excellent products. It seems like a joke that it can suddenly be so easy.

Cracking the code of vibe coding. By Pete Sena

Figma to website: design in Figma & hit PUBLISH [Sponsored] Design your website in Figma and hit PUBLISH to instantly get a live, fully responsive website. Free hosting and all the settings you need for SEO, custom code, embeds, analytics, forms, and more.

Editor picks

Hype-driven CEOs: when innovation misses the point. By Raphael Dias

A bright future for strategic thinkers: the ability to identify and frame problems is your most valuable asset. By Ed Orozco

LinkedIn is an example of a bad product we just got used to: hell is empty, all my colleagues are here. By Rita Kind-Envy

The UX Collective is an independent design publication that elevates unheard design voices and helps designers think more critically about their work.

Extraordinary things imagined with AI

Make me think

Why it's so hard to align our work with our values: "People often, for example, oppose the actions and belief systems of billionaires, but take jobs at companies that increase the power and influence of those same billionaires. It's not because these job-seekers are bad people, but because we are all operating in a system that makes aligning our values and our everyday lives seem impossible."

Our interfaces have lost their senses: "We've been successfully removing all friction from our apps; think about how effortless it is to scroll through a social feed. But is that what we want?
Compare the feeling of doomscrolling to kneading dough, playing an instrument, or sketching: these take effort, but they're also deeply satisfying. When you strip away too much friction, meaning and satisfaction go with it."

Past and present futures of user interface design: "For almost half a century now, we haven't really managed to come up with something better, and that's not for lack of trying. This fact seems to annoy a lot of people looking for a problem to solve, which every so often leads to something rather silly."

Little gems this week

Don't become the butthead. By Trip Carroll

Type, between two languages. By Peter Cho

How these massive games keep players locked in. By Daley Wilhelm

Tools and resources

How can NotebookLM help us spread good design thinking? Making AI work for us. By Ben Davies-Romano

Should you copy a competitor? Here's when it works (and when it doesn't). By Rosie Hoggmascall

Web accessibility requirements in the EU: in effect by mid-2025. By Marcus Fleckner

Support the newsletter

If you find our content helpful, here's how you can support us:

Check out this week's sponsor to support their work too
Forward this email to a friend and invite them to subscribe
Sponsor an edition

Vibe coding, hype-driven CEOs, should you copy a competitor? was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Whose design process?
    uxdesign.cc
    The dynamics of designing with generative creation. Continue reading on UX Collective.
  • The cognitive cost of convenience
    uxdesign.cc
As oversimplification and automation erode our cognitive abilities, embracing meaningful friction may be the key to restoring them.

Photo by Amelia Holowaty Krales / The Verge

When I was 19, I took my first solo road trip: a 250-mile drive from Long Island, New York to Delaware to pick up my girlfriend from college, a journey that normally takes four to five hours.

At the time, I had little highway or interstate driving experience, and I was nervous about the whole thing. This was before GPS or smartphones were widespread, back when the most advanced technology people had was a flip phone and a desktop computer. Finding your way meant relying on an atlas. That's right, kids: we had to use paper maps to get around.

An atlas reminiscent of the one I recall as a teenager / Image source: WorthPoint.com

I still remember the day I left. I had planned my route using my father's worn-out map book and got some advice from my girlfriend's parents on the best route. My mother, meanwhile, was silently freaking out that I was heading out alone, with no experience driving beyond our surrounding towns.

Less than an hour into the drive, I made a wrong turn. I had confused my directions for getting onto the Southern State Parkway and ended up at Sunken Meadow State Park instead. Anyone familiar with Long Island will laugh, as those places aren't remotely close to each other.

But I didn't panic. Well, I did a little. Fortunately, I was able to quickly retrace the route using the map of Long Island I had mentally stored. I knew if I could find 495 West, the Long Island Expressway, I'd be back on track. So I backtracked, spotted the signs for 495, and adjusted course.

Some of you might say, "Well, if you had GPS, that wouldn't have happened." And you'd be right. But I also wouldn't have an interesting story and, more importantly, a real-world example of how mental models, even a simple map of an unfamiliar area, can help us find our way.
It's a skill that remains essential even today, because GPS isn't foolproof: technology fails, and cell service isn't always reliable.

The trade-off: convenience vs. cognitive engagement

What I'll call the GPS Effect is part of a broader UX trend: our blind pursuit of simplicity and friction reduction is slowly erasing opportunities for deeper cognitive engagement.

Modern UX often functions like GPS, keeping users on a narrow, predefined path. It works, until something unexpected happens. If the system doesn't anticipate a user's goal or edge case, they're left stranded, unable to adapt.

This effect extends beyond navigation. Autocomplete, for example, speeds up typing but nudges communication toward the most common expressions. Gradually, this reliance flattens language, limiting nuance and originality, just as GPS weakens our memory and spatial awareness. The more we depend on systems optimized for convenience, the less we develop the ability to function without them.

Image source: 9to5mac.com

AI is accelerating this shift. Large language models, predictive text, and AI-generated content are reducing the need for critical thinking and problem-solving. Why struggle to write when AI can generate an answer? Why explore different approaches when an algorithm suggests the most efficient solution? These systems optimize for speed and ease, but at what cost?

One could argue that these tools help us go further, expanding our capabilities and removing tedious obstacles. But let's be honest: many people don't use them to enhance their thinking; they use them to replace it.

The case for cognitive depth in UX

Our minds don't merely process information; they construct meaning through what cognitive scientist Steven Pinker calls "mentalese," an internal language that helps us form mental models by connecting new input to existing knowledge. This ability is essential for problem-solving, adaptation, and anticipation.

Image source: https://www.nature.com/articles/491036a

Consider language acquisition.
A child learning the word "dog" doesn't memorize the letters D-O-G first. Instead, they form a mental model that links the word "dog" to the physical animal. Over time, hearing or seeing the word triggers an image, a bark, or even an emotional response. This process streamlines the interpretation of information, making it more intuitive and efficient.

The distinction between surface-level memorization and true understanding matters in UX. When interfaces are overly simplified, they can bypass natural cognitive processes. Step-by-step guidance enhances usability in the short term, but it weakens a user's ability to develop their own mental models.

This is fine for isolated tasks. But when users need to deviate from the prescribed path, a lack of mental modeling leads to confusion and frustration. They resort to trial and error instead of intuitive navigation.

Why the future of UX needs cognitive engagement

The trend toward hyper-simplification has turned UX into a system of invisible guardrails: users are guided through interactions without necessarily understanding how or why things work. This creates a smooth experience but limits autonomy.

Think about learning to drive. Many beginners start with an automatic transmission because it removes complexity, allowing them to focus on steering and road awareness. But if they never learn how a manual transmission works, they miss out on a deeper understanding of engine behavior and full control in all conditions. They can still get from point A to B, but their knowledge remains surface-level.

The same applies to UX.
Instead of eliminating all friction, interfaces should embrace progressive depth: starting simple but allowing users to gradually build competence. This strategy is sometimes called progressive disclosure, a design approach that reveals more complex information or features only when users need them, so they aren't overwhelmed by too many options at once.

Interestingly, the effect of reduced cognitive flexibility isn't limited to end users. Designers themselves can become entangled in technologies that oversimplify essential complexities. In fact, I recently explored this idea in an article that sparked some strong reactions, arguing that designers' over-reliance on no-code tools like Figma has weakened their critical thinking skills, making them less attuned to the digital media they design for.

Of course, simplicity has its place. Removing unnecessary complexity is crucial for efficiency, usability, and accessibility. But the goal shouldn't be to eliminate all cognitive effort, only unnecessary friction. Thoughtful UX doesn't just make things easier; it makes experiences more enriching and empowering over time.

How do we fix this? Designing for cognitive depth

If UX prioritizes frictionless experiences at the cost of user autonomy, the solution isn't to reintroduce unnecessary complexity but to design systems that encourage deeper engagement when needed. Some of the best-designed systems already do this naturally. For example, Wikipedia doesn't just provide a quick answer; it invites users to explore interconnected ideas through internal links, forming a deeper understanding of a subject.
Similarly, Google's Advanced Search offers a streamlined starting point but reveals greater control for those who seek it. Even IKEA's self-assembly model turns effort into an advantage: by assembling their own furniture, customers develop a stronger connection to the product, reinforcing both understanding and perceived value.

Image source: https://thedecisionlab.com/biases/ikea-effect

However, AI seems to be moving in the opposite direction. Most AI-driven tools today prioritize instant solutions over long-term comprehension. Chatbots answer questions immediately, autocomplete finishes thoughts before they fully form, and code generators eliminate the need to understand syntax, all reducing effort but also stripping away the opportunity for learning.

But what if AI did more than just provide answers? Imagine a design assistant that doesn't just suggest improvements but challenges users to think critically, explaining why certain choices work based on cognitive-load principles and prompting them to refine their decisions through an iterative process. Or a coding AI that, instead of simply completing a function, nudges developers to predict potential errors, guiding them toward the solution rather than handing it over outright.

The goal shouldn't be to eliminate every cognitive challenge, only the unnecessary ones. Thoughtful UX design, including AI-driven interfaces, should preserve engagement where it matters, guiding users toward understanding rather than just handing them solutions.

Cognitive depth: the missing ingredient in UX

Cognitive depth isn't about making interfaces or engagement harder; it's about making them more meaningful. Great design should empower users, not just guide them. Like learning to navigate without GPS, an interface that broadens mental models, encourages exploration, and promotes problem-solving helps users develop confidence and adaptability.
When we remove all friction, we risk leaving people directionless the moment they stray from the path. Because in the end, a frictionless world isn't always a better one. If I had had GPS on that first road trip, I might not have gotten lost, but I also wouldn't have learned how to find my way. Sometimes, the best designs are the ones that make us think.

Don't miss out! Join my email list and receive the latest content.

The cognitive cost of convenience was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • The real problem with research
    uxdesign.cc
After 22 years in design, I think the power of research is overrated, while other important aspects are overlooked.

If you pick up any iconic digital-design literature or a popular methodology like Design Thinking or the Double Diamond, their main promise is that, in order to create in-demand products, you need to reduce risk through research. This sounds like common sense: let's make a cheap, dummy prototype, quickly test it on users, and validate the idea. This saves engineers' expensive time and/or avoids creating products nobody needs.

If you have an existing product, more advanced designers will also notice that it's important to combine qualitative research with quantitative data. Since qualitative research mainly answers the question "why," and quantitative research answers "what," "where," "when," and "how much," only the combination of both research types gives you a complete behavioral picture and ROI for every sneeze. This is the pinnacle, so to speak, of the modern approach of making informed decisions: improve where it really matters, and don't build on false assumptions along the way.

That's the theory.

But in reality (if you're actually analysing the data), it often turns out that despite the obvious improvement in user experience confirmed by your research, the conversion rate of the control version might be higher than that of the variant you propose. Or, even more often, the experiment may not show any statistical significance at all. In my experience, this happens in more than half of experiments. I'm not talking about CSAT or NPS, God forbid, but about how people vote with their own money.

What happened? Was the research not conducted thoroughly enough? Did you lack business context?
Looking at the sheer number of experiments successful brands run, I get the feeling that quantity is what truly matters (700 per week at Airbnb, 1,000 parallel tests at Booking.com, 100 to 200 per month at Flo Health).

Or here's another story. A product team turned to consultants from BCG, McKinsey, PwC, or Accenture (pick your favourite), and for a sum with many zeros, they produced a 200-page research report that has everything: 360-degree personas, relevant in-depth user interviews, business-context considerations, analysis of direct and indirect competitors, as well as a breakdown of current product errors. With their crème de la crème insight on top: that the product should be yet more "user-centric" (a real case). Followed by a list of mismatched mental models and violated patterns. After all, it's so valuable for your users that the shopping cart is in the top right corner and the product page has social proofs. And all this has already helped their 1,000 clients from the Forbes list.

Full of confidence, you implement these changes, and then crash into reality: the metrics didn't move at all. Users simply don't care about your changes. They might be concerned about the price, the weather, or other factors beyond your control. Or they might be motivated enough to get through a broken flow better than a new one, simply because they love the product. Or they recently read a negative article before making a purchase decision. The wider the audience and the deeper the funnel, the broader the range of these potential causes.

What's the matter, then? Did researchers confuse behaviour with intention? Misinterpret the data? Hire the wrong respondents?

So what is the main problem with research? I believe the main problem is that research shifts responsibility for decisions from designers to users. We cover our asses with these studies instead of realising a simple fact: we can't influence everything.
And things are chaotic beyond Figma screens.

Researchers spend weeks detailing personas with their motivations, fears, and prioritized needs, where pain number 1 is 15% weaker than pain number 2 for industry X. Then the designer, full of confidence, places the icon for pain number 2 before the icon for pain number 1. Behold, the quintessence of design! Stakeholders are ecstatic; you can start preparing your case study and asking for a promo.

Because of research, designers choose the more obvious resolutions instead of doing something innovative or truly cool. Instead of bold solutions, they just cycle through another set of recommendations and checklists from Medium or (now) ChatGPT. And there's nothing more mind-corrupting than desk research, which transforms all apps and websites into the same thing. But more on that another time.

Any modern engineering methodology is cyclical, but no one explains what to do if you get stuck in this cycle without any meaningful changes. The reason for this is design education, which doesn't teach a few key things:

Research often takes up a significant part of design education, creating false expectations for designers. However, the daily routine requires a different set of skills: facilitation, negotiation, entrepreneurship, working within constraints, understanding front-end and back-end tech, designing logically connected and cohesive UI, attention to detail, quality, and a strong commitment to your principles.

The quality of hypotheses and their interpretation directly depends on the quality of experience. And getting this genuine experience requires years of hard work, tears, and sweat (ideally with a mentor). This genuine experience simply can't be achieved if you spent the entire time drawing Dribbble crypto dashboards. This is why beginner designers often consider any positive or negative feedback from a respondent an insight.

Research doesn't guarantee good design.
Dozens of times I've seen projects with research behind them end up with dull or even bad execution.

Doing research, you fall into a vicious cycle of validating hypotheses and evaluating prototypes. First you validate the general idea, then you check the final execution (through, for example, user interviews or pilot launches). Metrics didn't move? Then you go for a second round, then a third, and so on. A small feature can take years of polishing without any clear results.

Nobody gives a damn about design. We ourselves created this mystical image of design as something that is at once architecture, business, art, and psychology. Yes, design sits somewhere between disciplines and rarely has clear boundaries. But if you look at successful designers, they all have one thing in common: they reason from a business perspective, not from the perspective of user advocates. Taking such a role, which seems even further from the discipline, makes it easier to prove your point.

More or less developed businesses are complex: mobile team(s), web team(s), marketing and sales, support, etc. When designing for something bigger than a garage startup, you'll undoubtedly face the problem that you have little influence on anything. And that's F-I-N-E!

Right now there's too much informational noise from experts fighting for your attention and telling you how to do design "the right way." In my experience, real experts don't even have the time to share their knowledge. And even when they do, their voices just drown in a ton of informational clickbait.

Even if something was effective somewhere, it doesn't mean it will work elsewhere. Experience is extremely contextual because it takes into account many factors and system dependencies.

Exposure to visuals is a double-edged sword: the more you look at how it "should" be done, the higher the chance you'll copy it with no filter.

A couple of tips at the end:

The bigger the change you make, the higher the chance of moving the metrics needle.
But the bigger the change, the less clear it is what exactly influenced the success. And all those stories about how a button colour improved conversion by 400% are just fairy tales that adult designers tell young ones at bedtime. Next time, take a closer look at who publishes this successful experimental data: 99% of the time it's marketing agencies trying to prove their value this way.

Improvements are typically measured in relative terms, but remember that a 40% increase in a conversion rate with an initial value of 0.01 results in 0.014. Therefore, evaluate your success soberly and look at the problem comprehensively; perhaps the issue is deeper, or not in that place at all.

It's often better to quickly release a feature or product to find out if it's really needed than to spend too much time on research and be full of doubts. Good product managers simply have The Vision and gut feeling (a.k.a. genuine experience), and this turns out to be more important than everything else.

Try designing something without looking at how others do it, and you'll be surprised by how difficult it is the first time.

Don't forget about context. What are the usage conditions? What job does a person hire your app for? How many other interconnected elements are in the system? Very often, adoption, funnel, and content issues can literally be understood at a common-sense level.

Research with real users is helpful, especially if you're a beginner designer. It can help you quickly eliminate interface friction and understand user expectations. Or it might not help. The key words here are "real users" and "beginner designer," which together can give an unpredictable result. But as soon as a designer stops being a beginner, they quickly begin to notice that users say what the designer already knows.

I also don't want to ignore the fact that research is convenient for creating the appearance of professional work.
Weekly highlight reels of your users' struggles serve as a great reminder to your managers of why they need a UX designer.

Aesthetics don't work without execution. Better to think about who you're designing for. Your portfolio? Other designers? Or your audience?

Don't confuse what you wish for with what you actually do, and be humble. It's possible that something improved not because of your changes: maybe it's due to a better acquisition source or improved app performance. Design is a team discipline where both business and engineers have the right to be heard. After all, what's important is being able to work together to achieve common goals and enjoy life along the way.

Sometimes it's just luck. But next time, try looking not at the personas but at the positioning, for example. How saturated is your market with competitors? What are their weaknesses? Is this really the right niche?

After finishing this essay, I realised it all sounds too provocative. Obviously, talking to your users is beneficial, but my point is that it shouldn't be the core of our craft. This especially applies to desk research and AI-generated artefacts: both are just a copy-paste of someone's thoughts, which are themselves a copy-paste of other ideas, without any context or critical thinking. Yes, methods like contextual inquiry help you understand the context, and usability interviews help remove snags and convince some managers to make the right UX decisions.

But what if, after all this, your sales are still low and the metrics haven't moved? After a couple of research-design-research-development cycles, you inevitably ask yourself: is this research really that important?

The real problem with research was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Cracking the code of vibe coding
    uxdesign.cc
Not all vibes are good.

Vibe coding. Image by Author

Are you wondering WTF all this vibe coding stuff is that's been sweeping your social feeds? You probably get the gist: it's about having a vision of an app or software you want to build and vibing your way into it with the help of an AI tool.

If you're an old-school coder like me, you might be laughing a bit at the idea that you can just blink software into reality, especially when you think about all the blood, sweat, and tears it traditionally takes to build excellent products. It seems like a joke that it can suddenly be so easy.

Well, stop laughing. Vibe coding is real, and in this article, I'll explain it and break down how it's changing the way we imagine, build, and grow products and companies going forward. I'll also make a case for embracing vibe coding without losing your soul, including four frameworks to optimize your success with the good vibes that come from creating genuinely helpful, well-crafted software and apps.

Same cycle, new vibe

Vibe coding is just one more example of what I call the Great Democratization Cycle. We've seen it in photography as it evolved from darkrooms to digital cameras, which eliminated film processing, to smartphones and Instagram filters, making everyone a high-end photographer. The same goes for publishing (from printing presses to WordPress), video production (from studio equipment to TikTok), and music creation (from recording studios to GarageBand on a laptop, and now AI tools like Suno on your smartphone). Each wave democratized creation while simultaneously changing what it meant to be a professional in the field.

Software development is in no way exempt. We've come a long way from traditional coding (1970s-2010s), which was expert-driven and hard to get into. From the low-code/no-code movement (2010s) to AI-assisted development (early 2020s; i.e., GitHub Copilot), the path to easy software development has accelerated to where we find ourselves today: vibe coding.
Thanks to platforms like Windsurf, Cursor, LoveableDev, and Replit, anyone can take an app or software idea to execution within minutes. However, it's not that simple, not for the non-technical entrepreneur who feels empowered or the veteran developer who feels dissed.

The seduction of simplicity

Let me take you back to my first coding project. As a pimply-faced teen in my parents' basement, I spent countless sleepless nights wrestling with syntax errors, debugging mysterious crashes, and finally experiencing that incomparable rush when my creation actually worked. The journey was killer, but it shaped my understanding of how software fundamentally operates.

Fast forward to 2025, and we're witnessing a revolution called vibe coding, a term popularized by AI researcher Andrej Karpathy that has taken the tech world by storm. The premise? Simply describe what you want in natural language, and AI generates the code. No more syntax struggles. No more Stack Overflow deep dives at 2 AM. Just vibes.

It's intoxicatingly easy. I recently tested this myself. I prompted CursorAI, an AI-enabled IDE (integrated development environment), and built an app called DaddyTime that helps me discover cool new things I can do with my son (he's three). Within 30 minutes, I had a fully functional progressive web app that:

Image by Author.

Connected to local events in my area
Integrated with a weather service (to suggest indoor or outdoor activities)
Correlated ideas with the local weather
Integrated with my Google calendar for booking events

The entire process took less than 30 minutes: no coding required, just a conversation with an AI. This isn't an exaggeration; it's our new reality. And that's precisely what worries me.

The craft crisis

This AI-driven accessibility is undeniably powerful. Designers can prototype without developer dependencies. Domain experts can build tools to solve specific problems without learning Python.
Entrepreneurs can validate concepts without hiring engineering teams.

But as we embrace this new paradigm, we face a profound question: what happens when we separate makers from their materials?

Consider this parallel: would we celebrate a world where painters never touch paint, sculptors never feel clay, or chefs never taste their ingredients? Would their art, their craft, retain its soul?

When we remove the intimate connection between creator and medium, in this case between developer and code, we risk losing something essential: the craft. And it's not just about producing working software. It's about:

- Understanding systems at a fundamental level, which allows you to solve problems when things inevitably break
- Creating elegant, maintainable solutions that stand the test of time
- Building mental models that inform higher-level architectural decisions
- Developing an intuition for performance, security, and edge cases

A Microsoft engineer was brutally honest about AI-generated code, stating that LLMs are "not good at maintaining or extending projects over time" and often "get lost in the requirements and generate a lot of nonsense content."

This isn't surprising. AI excels at mimicking patterns but lacks the deeper understanding that comes from years of hands-on experience. It can produce code that works initially but falls apart under pressure.

As one tech CTO warned, overreliance on AI can lead to hidden complexities: quick fixes that become unmanageable during scaling or debugging. The 75% of the work that AI solves quickly leaves the critical 25%, making the code production-ready, as a looming challenge.

And don't even get me started on the security risks this poses. You could be a few clicks away from leaking all your data by signing up for some cool thing that popped up in your IG feed.
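To make that risk concrete, here is a minimal, invented sketch (not code from any real vibe-coded product) of the single most common flaw that slips into quickly generated code: splicing user input directly into an SQL string, shown next to the parameterized version that survives hostile input.

```python
import sqlite3

# Hypothetical demo database; table name and rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name):
    # Vibe-coded version: works in the demo, but the input is spliced
    # straight into the SQL string -- a classic injection hole.
    return conn.execute(
        f"SELECT email FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Production-ready version: a parameterized query treats the input
    # as data, never as SQL.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)
    ).fetchall()

# Both behave identically on friendly input...
print(find_user_unsafe("alice"))        # [('alice@example.com',)]
print(find_user_safe("alice"))          # [('alice@example.com',)]

# ...but only one survives hostile input: the unsafe query becomes
# WHERE name = '' OR '1'='1' and leaks every row in the table.
print(find_user_unsafe("' OR '1'='1"))
print(find_user_safe("' OR '1'='1"))    # []
```

Both functions pass a happy-path demo, which is exactly why this kind of bug sails through a "looks good, ship it" vibe-coding loop until someone feeds it the wrong string.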
That's not a vibe, is it?

Beyond technical debt: creative debt

There's something even more concerning than technical debt lurking in our AI-coded future: creative debt.

True innovation often emerges from constraints and deep domain knowledge. When you wrestle with a programming language's limitations, you're forced to think creatively within boundaries. This tension produces novel solutions and unexpected breakthroughs.

When we remove this friction entirely, we risk homogenizing our solutions. If everyone asks AI for "a responsive e-commerce site with product filtering," we'll get variations on the same theme: technically correct but creatively bankrupt implementations that feel eerily similar.

The danger isn't just bad code; it's boring products and AI slop.

The knowledge gap widens

Vibe coding creates two distinct tracks for engineers:

- Those who understand the foundations and can wield and direct AI effectively
- Those who depend entirely on AI outputs without comprehending what's happening under the hood

This bifurcation has serious implications. When problems arise, and they will, the second group will be helplessly dependent on AI to fix issues it may have created in the first place.

What happens when the AI can't solve the problem? Who do we turn to then?

[Image meme via Reddit]

As The Guardian aptly observed, "Now you don't even need code to be a programmer. But you do still need expertise." This expertise gap will only widen as more people build software without understanding its inner workings.

The Swiss Army knife imperative

Recent headlines confirm a troubling trend: Amazon plans to terminate over 14,000 managerial positions to save $3.5 billion annually. Meta, Microsoft, and countless others are making similar moves. The message is clear: operational efficiency is king, and specialization is becoming a luxury.

This streamlining creates a new mandate: everyone must become a Swiss Army knife of skills.

Vibe coding accelerates this transformation.
When anyone can generate functional code through conversation, the specialization that once protected technical roles evaporates. The implications ripple through organizations:

- Product managers can't hide behind documents and wireframes; they'll need to generate working prototypes
- Designers can't simply hand off mockups; they'll need to implement their designs
- Marketers can't request custom tools; they'll build their own analytics dashboards
- Executives can't claim technical ignorance; they'll need to understand the systems they oversee

This isn't just speculation. Amjad Masad, CEO of Replit, revealed that 75% of Replit customers already never write a single line of code. The future is arriving faster than we think.

In this new landscape, value shifts dramatically from technical implementation to problem identification. As one entrepreneur noted, "If you have an idea, you're only a few prompts away from a product." The bottleneck is no longer development speed; it's knowing which problems are worth solving.

Mental models for the vibe coding era

To navigate this shift, we need new mental models. Here are four frameworks I'm using to make sense of the vibe coding revolution:

1. The creation-maintenance divide

Vibe coding excels at creation but struggles with maintenance. This creates a fundamental split:

- Creation: easy, accessible, democratic
- Maintenance: complex, requiring deep expertise, increasingly valuable

Smart organizations will develop dual skill sets: rapid vibe coding for prototyping and proofs of concept, alongside rigorous engineering practices for production systems.

2. The three tiers of software creators

As coding barriers fall, a new hierarchy emerges:

- Prompt engineers: those who use AI to implement existing patterns
- Solution architects: those who combine AI capabilities in novel ways
- System innovators: those who create entirely new paradigms AI hasn't seen

Your value as a software creator will increasingly depend on moving up this ladder.

3. Intellectual leverage vs. execution leverage

Traditional software provided execution leverage, automating repetitive tasks. Vibe coding gives us intellectual leverage, automating thinking itself. This means the highest-return activities shift from building things right to building the right things.

4. The specialization paradox

As tools become more powerful, success depends less on specialization and more on synthesis. The most valuable person isn't the deep expert in a single domain but the connector who understands multiple domains well enough to identify novel intersections.

Finding the balance: augmentation, not replacement

I'm not suggesting we abandon AI-assisted coding; that would be like rejecting power tools in favor of hand saws. But we need to approach this revolution thoughtfully, preserving craft while embracing innovation.

Here's what I propose:

For individual creators:

- Learn the fundamentals first. Build a strong foundation in programming concepts before relying heavily on AI. I refuse to hire engineers who can't code without an LLM.
- Use AI as a collaborator, not a replacement. Let it handle boilerplate while you focus on architecture and novel features.
- Understand what the AI produces. Take time to read and comprehend generated code before implementing it.
- Challenge AI outputs. Instead of accepting the first solution, ask, "Is there a better way?"
- Develop T-shaped expertise: deep knowledge in one area, with broad understanding across many.

For teams and organizations:

- Establish robust review processes for AI-generated code. Don't skip quality assurance just because AI wrote it.
- Create balanced teams with both AI enthusiasts and traditional craftspeople who can provide valuable checks and balances.
- Invest in education that emphasizes systems thinking and architecture, not just prompt engineering and vibe coding.
- Document diligently. With less human-written code, thorough documentation becomes even more crucial.
- Restructure around problem spaces, not technical specializations.

For the community:

- Value and celebrate craftsmanship. Let's not lose sight of the artistry in well-crafted code.
- Develop ethical frameworks for responsible AI coding that preserve innovation while mitigating risks.
- Share learning resources that combine AI tools with foundational programming knowledge.
- Create new certification paths that validate understanding, not just implementation ability.

Distribution: the new cheat code and business moat

While we debate craft versus convenience, vibe coding has another dimension that deserves attention: the democratization of distribution.

Pieter Levels, the indie hacker behind Nomad List and a dozen other profitable solo ventures, recently demonstrated this shift dramatically. Using vibe coding techniques in Cursor, he built and launched RemoteOK Jobs 2.0 in just six hours, then shared the entire process on social media.

[Levelsio profile image from X]

"I had the idea at breakfast," he posted. "By dinner, it was live with 5,000 users."

This isn't just fast development; it's a fundamental collapse of the creation-to-market timeline. When Levels built his first successful product in 2014, it took him weeks of coding. Now, with AI assistance, he's compressing that cycle to less than a day while reaching audiences in the millions.

The implications are profound:

- The idea-execution gap vanishes. When you can go from concept to working product in hours instead of months, more ideas get tested in the wild.
- Audience trumps technical complexity. Levels' success isn't primarily about his coding skills; it's about his deep understanding of his audience and distribution channels. He knows exactly who he's building for and how to reach them.
- Marketing > building. As Levels bluntly said, "I spent 20% of my time building and 80% telling people about it. That ratio used to be reversed."

This hints at a future where technical implementation is so streamlined that distribution becomes the primary differentiator. The winners won't necessarily be those who build the best product in a technical sense but those who make the right product for a specific audience and get it in front of that audience fastest.

For entrepreneurs, this means investments in audience building, community development, and marketing channels may yield higher returns than investments in technical infrastructure.

For larger organizations, it means the teams that understand customers and distribution will increasingly drive product development, not the other way around.

In this brave new world, the greatest advantage goes to the creators who combine:

- Deep audience understanding
- Rapid implementation through vibe coding
- Established distribution channels
- A willingness to launch fast and iterate publicly

That's a very different skill set from what created tech success a decade ago, and it favors domain experts, community builders, and audience cultivators over traditional technical specialists.

Disrupting the disruptors: no-code's no future?

Perhaps the most fascinating second-order effect of vibe coding is how it threatens to make the no-code/low-code movement obsolete almost overnight.

For the past decade, platforms like Webflow, Bubble, and Airtable have carved out a valuable middle ground: visual interfaces that let non-developers build functional software without coding. These platforms found product-market fit by eliminating the need to write code while still requiring users to understand logical structures and workflows.

Vibe coding leapfrogs this paradigm entirely.
Why learn a proprietary visual interface when you can simply describe what you want in plain language?

This disruption of the disruptors creates cascading effects:

- No-code platforms must evolve or die, likely by integrating AI to become "AI-enhanced no-code."
- Visual programming becomes a transitional technology rather than the end state many predicted.
- Value shifts from tools to prompts and patterns, creating new opportunities for prompt marketplaces and pattern libraries.
- Traditional developers and no-code creators increasingly compete in the same space.

The survivors will be those who recognize that the true value lies not in the implementation method but in a deep understanding of human problems and creative solutions.

A call for thoughtful evolution

People have always created the most compelling software with vision, empathy, and a deep understanding of human needs. AI can help us execute that vision more efficiently, but it can't replace the human spark that drives genuinely transformative products.

Consider that what separates good products from great ones is rarely technical perfection; it's the human touch, the careful consideration of edge cases based on real-world experience, and empathetic design that anticipates user needs.

As Karpathy himself noted, vibe coding isn't about abandoning thought; it's about thinking at a higher level: "I just see stuff, say stuff, run stuff, and copy-paste stuff, and it mostly works." The key word is "mostly." The gap between "mostly works" and "delights users" is where human creativity and craft still reign supreme.

So, the question is: will we use AI to amplify human creativity, freeing us from drudgery so we can focus on innovation? Or will we surrender our craft entirely, becoming mere prompt engineers orchestrating increasingly generic software?

The choice is ours.

For my part, I'm embracing AI as a powerful collaborator while fiercely protecting the craft that drew me to this field.
While good vibes might get you a working prototype, it's the marriage of human creativity and technological tools that creates truly extraordinary products.

Not all vibes are good, but with intention and care, we can ensure that the products we build in this new era combine the best of both worlds: AI efficiency and human ingenuity.

The code of the future shouldn't just run. It should sing.

"Cracking the code of vibe coding" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Font names, and plagiarism
    uxdesign.cc
    Name a font well, and don't let someone call the lawyer on you. Continue reading on UX Collective
  • And the Oscar goes to Blender
    uxdesign.cc
    How open-source community champion Gints Zilbalodis out-rendered Big Animation. Continue reading on UX Collective
  • Web accessibility requirements in the EU
    uxdesign.cc
    In effect by mid-2025. Continue reading on UX Collective
and more stories