Smashing Magazine
Smashing Magazine delivers useful and innovative information to Web designers and developers.
Latest Updates
  • SerpApi: A Complete API For Fetching Search Engine Data
    smashingmagazine.com
This article is sponsored by SerpApi.

SerpApi leverages the power of search engine giants, like Google, DuckDuckGo, Baidu, and more, to put together the most pertinent and accurate search result data for your users from the comfort of your app or website. It's customizable, adaptable, and offers easy integration into any project.

What do you want to put together?

- Search information on a brand or business for SEO purposes;
- Input data to train AI models, such as a large language model, for a customer service chatbot;
- Top news and websites to pick from for a subscriber newsletter;
- Google Flights API: collect flight information for your travel app;
- Price comparisons for the same product across different platforms;
- Extra definitions and examples for words that can be offered alongside a language learning app.

The list goes on. In other words, you get to leverage the most comprehensive source of data on the internet for any number of needs, from competitive SEO research and tracking news to parsing local geographic data and even completing personal background checks for employment.

Start With A Simple GET Request

The results from the search API are only a URL request away for those who want a super quick start. Just add your search details in the URL parameters. Say you need the search result for Stone Henge from the location Westminster, England, United Kingdom, in language en-GB, and country of search origin uk, from the domain google.co.uk. Here's how simple it is to put the GET request together:

https://serpapi.com/search.json?q=Stone+Henge&location=Westminster,+England,+United+Kingdom&hl=en-GB&gl=uk&google_domain=google.co.uk&api_key=your_api_key
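For a rough idea of how that same request looks in application code, here is a minimal TypeScript sketch using the standard fetch API. The endpoint and query parameters come straight from the URL above; the response handling is an illustrative assumption, not SerpApi's documented schema.

// Minimal sketch: calling the SerpApi search endpoint with fetch.
// The URL and parameters mirror the GET request from the article;
// treat the response handling as illustrative, not SerpApi's exact schema.

const params = new URLSearchParams({
  q: "Stone Henge",
  location: "Westminster, England, United Kingdom",
  hl: "en-GB",
  gl: "uk",
  google_domain: "google.co.uk",
  api_key: process.env.SERPAPI_KEY ?? "your_api_key", // keep keys out of source
});

async function search(): Promise<void> {
  const response = await fetch(`https://serpapi.com/search.json?${params}`);
  if (!response.ok) {
    throw new Error(`SerpApi request failed: ${response.status}`);
  }
  // SerpApi returns structured JSON; log it and inspect the fields you need.
  const data = await response.json();
  console.log(data);
}

search().catch(console.error);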
Then there's the impressive list of libraries that seamlessly integrate the APIs into mainstream programming languages and frameworks such as JavaScript, Ruby, .NET, and more.

Give It A Quick Try

Want to give it a spin? Sign up and start for free, or tinker with SerpApi's live playground without signing up. The playground allows you to choose which search engine to target, and you can fill in the values for all the basic parameters available in the chosen API to customize your search. On clicking Search, you get the search result page and its extracted JSON data.

If you need to get a feel for the full API first, you can explore their easy-to-grasp web documentation before making any decision. You have the chance to work with all of the APIs to your satisfaction before committing, and when that time comes, SerpApi's multiple price plans cover anywhere between an economical few hundred searches a month and bulk queries fit for large corporations.

What Data Do You Need?

Beyond rudimentary search scraping, SerpApi provides a range of configurations, features, and additional APIs worth considering.

Geolocation

Capture global trends, or refine down to more localized particulars by names of locations or Google's place identifiers. SerpApi's optimized routing of requests ensures accurate retrieval of search result data from any location worldwide. If locations themselves are the answers to your queries, say, a cycle trail to be suggested in a fitness app, those can be extracted and presented as maps using SerpApi's Google Maps API.

Structured JSON

Although search engines reveal results in a tidy user interface, deriving data into your application could leave you with a large data dump to sift through, but not if you're using SerpApi. SerpApi pulls data in a well-structured JSON format, even for the popular kinds of enriched search results, such as knowledge graphs, review snippets, sports league stats, ratings, product listings, AI overviews, and more.

Speedy Results

SerpApi's baseline performance can take care of timely search data for real-time requirements. But what if you need more? SerpApi's Ludicrous Speed option, easily enabled from the dashboard with an upgrade, provides a super-fast response time: more than twice as fast as usual, thanks to twice the server power. There's also Ludicrous Speed Max, which allocates four times more server resources for your data retrieval. Time-sensitive data for monitoring things in real time, such as sports scores and product prices, loses its value if it is not handled promptly. Ludicrous Speed Max guarantees no delays, even for a large-scale enterprise haul.

You can also use a relevant SerpApi API to home in on your category, like Google Flights API, Amazon API, Google News API, etc., to get fresh and apt results. If you don't need the full depth of the search API, there's a Light version available for the Google Search, Google Images, Google Videos, Google News, and DuckDuckGo Search APIs.

Search Controls & Privacy

Need the results asynchronously picked up? Want a refined output using advanced search API parameters and a JSON Restrictor? Looking for search outcomes for specific devices? Don't want auto-corrected query results? There's no shortage of ways to configure SerpApi to get exactly what you need. Additionally, if you prefer not to have your search metadata on their servers, simply turn on the ZeroTrace mode that's available for selected plans.

The X-Ray

Save yourself a headache, literally, trying to play match between what you see on a search result page and its extracted data in JSON. SerpApi's X-Ray tool shows you where what comes from. It's available and free in all plans.

Inclusive Support

If you don't have the expertise or resources for tackling the validity of scraping search results, here's what SerpApi says:

"SerpApi, LLC assumes scraping and parsing liabilities for both domestic and foreign companies unless your usage is otherwise illegal."

You can reach out and have a conversation with them regarding the legal protections they offer, as well as inquire about anything else you might want to know about including SerpApi in your project, such as pricing, expected performance, on-demand options, and technical support. Just drop a message at their contact page. In other words, the SerpApi team has your back with the support and expertise to get the most from your fetched data.

Try SerpApi Free

That's right, you can get your hands on SerpApi today and start fetching data with absolutely no commitment, thanks to a free starter plan that gives you up to 250 free search queries. Give it a try and then bump up to one of the reasonably-priced monthly subscription plans with generous search limits.

Try SerpApi
  • The Psychology Of Trust In AI: A Guide To Measuring And Designing For User Confidence
    smashingmagazine.com
Misuse of and misplaced trust in AI is becoming an unfortunately common event. For example, lawyers trying to leverage the power of generative AI for research submit court filings citing multiple compelling legal precedents. The problem? The AI had confidently, eloquently, and completely fabricated the cases cited. The resulting sanctions and public embarrassment can become a viral cautionary tale, shared across social media as a stark example of AI's fallibility.

This goes beyond a technical glitch; it's a catastrophic failure of trust in AI tools in an industry where accuracy and trust are critical. The trust issue here is twofold: the law firms are submitting briefs in which they have blindly over-trusted the AI tool to return accurate information, and the subsequent fallout can lead to a strong distrust in AI tools, to the point where platforms featuring AI might not be considered for use until trust is reestablished.

Issues with trusting AI aren't limited to the legal field. We are seeing the impact of fictional AI-generated information in critical fields such as healthcare and education. On a more personal scale, many of us have had the experience of asking Siri or Alexa to perform a task, only to have it done incorrectly or not at all, for no apparent reason. I'm guilty of sending more than one out-of-context hands-free text to an unsuspecting contact after Siri mistakenly pulls up a completely different name than the one I'd requested.

With digital products incorporating generative and agentic AI at an increasingly frequent rate, trust has become the invisible user interface. When it works, our interactions are seamless and powerful. When it breaks, the entire experience collapses, with potentially devastating consequences. As UX professionals, we're on the front lines of a new twist on a common challenge: How do we build products that users can rely on? And how do we even begin to measure something as ephemeral as trust in AI?

Trust isn't a mystical quality. It is a psychological construct built on predictable factors. I won't dive deep into the academic literature on trust in this article. However, it is important to understand that trust is a concept that can be understood, measured, and designed for. This article will provide a practical guide for UX researchers and designers. We will briefly explore the psychological anatomy of trust, offer concrete methods for measuring it, and provide actionable strategies for designing more trustworthy and ethical AI systems.

The Anatomy Of Trust: A Psychological Framework For AI

To build trust, we must first understand its components. Think of trust like a four-legged stool: if any one leg is weak, the whole thing becomes unstable. Based on classic psychological models, we can adapt these legs for the AI context.

1. Ability (Or Competence)

This is the most straightforward pillar: Does the AI have the skills to perform its function accurately and effectively? If a weather app is consistently wrong, you stop trusting it. If an AI legal assistant creates fictitious cases, it has failed the basic test of ability. This is the functional, foundational layer of trust.

2. Benevolence

This moves from function to intent. Does the user believe the AI is acting in their best interest? A GPS that suggests a toll-free route, even if it's a few minutes longer, might be perceived as benevolent. Conversely, an AI that aggressively pushes sponsored products feels self-serving, eroding this sense of benevolence.
This is where user fears, such as concerns about job displacement, directly challenge trust: the user starts to believe the AI is not on their side.

3. Integrity

Does the AI operate on predictable and ethical principles? This is about transparency, fairness, and honesty. An AI that clearly states how it uses personal data demonstrates integrity. A system that quietly changes its terms of service or uses dark patterns to get users to agree to something violates integrity. An AI job recruiting tool with subtle yet extremely harmful social biases baked into its algorithm violates integrity.

4. Predictability & Reliability

Can the user form a stable and accurate mental model of how the AI will behave? Unpredictability, even if the outcomes are occasionally good, creates anxiety. A user needs to know, roughly, what to expect. An AI that gives a radically different answer to the same question asked twice is unpredictable and, therefore, hard to trust.

The Trust Spectrum: The Goal Of A Well-Calibrated Relationship

Our goal as UX professionals shouldn't be to maximize trust at all costs. An employee who blindly trusts every email they receive is a security risk. Likewise, a user who blindly trusts every AI output can be led into dangerous situations, such as the legal briefs referenced at the beginning of this article. The goal is well-calibrated trust.

Think of it as a spectrum where the upper-mid level is the ideal state for a truly trustworthy product to achieve:

Active Distrust
The user believes the AI is incompetent or malicious. They will avoid it or actively work against it.

Suspicion & Scrutiny
The user interacts cautiously, constantly verifying the AI's outputs. This is a common and often healthy state for users of new AI.

Calibrated Trust (The Ideal State)
This is the sweet spot. The user has an accurate understanding of the AI's capabilities: its strengths and, crucially, its weaknesses. They know when to rely on it and when to be skeptical.

Over-trust & Automation Bias
The user unquestioningly accepts the AI's outputs. This is where users follow flawed AI navigation into a field or accept a fictional legal brief as fact.

Our job is to design experiences that guide users away from the dangerous poles of Active Distrust and Over-trust and toward that healthy, realistic middle ground of Calibrated Trust.

The Researcher's Toolkit: How To Measure Trust In AI

Trust feels abstract, but it leaves measurable fingerprints. Academics in the social sciences have done much to define both what trust looks like and how it might be measured. As researchers, we can capture these signals through a mix of qualitative, quantitative, and behavioral methods.

Qualitative Probes: Listening For The Language Of Trust

During interviews and usability tests, go beyond "Was that easy to use?" and listen for the underlying psychology. Here are some questions you can start using tomorrow:

To measure Ability: "Tell me about a time this tool's performance surprised you, either positively or negatively."
To measure Benevolence: "Do you feel this system is on your side? What gives you that impression?"
To measure Integrity: "If this AI made a mistake, how would you expect it to handle it? What would be a fair response?"
To measure Predictability: "Before you clicked that button, what did you expect the AI to do? How closely did it match your expectation?"

Investigating Existential Fears (The Job Displacement Scenario)

One of the most potent challenges to an AI's Benevolence is the fear of job displacement.
When a participant expresses this, it is a critical research finding, and it requires a specific, ethical probing technique. Imagine a participant says, "Wow, it does that part of my job pretty well. I guess I should be worried."

An untrained researcher might get defensive or dismiss the comment. An ethical, trained researcher validates and explores: "Thank you for sharing that; it's a vital perspective, and it's exactly the kind of feedback we need to hear. Can you tell me more about what aspects of this tool make you feel that way? In an ideal world, how would a tool like this work with you to make your job better, not to replace it?"

This approach respects the participant, validates their concern, and reframes the feedback into an actionable insight about designing a collaborative, augmenting tool rather than a replacement. Similarly, your findings should reflect the concern users expressed about replacement. We shouldn't pretend this fear doesn't exist, nor should we pretend that every AI feature is being implemented with pure intentions. Users know better than that, and we should be prepared to argue on their behalf for how the technology might best co-exist with their roles.

Quantitative Measures: Putting A Number On Confidence

You can quantify trust without needing a data science degree. After a user completes a task with an AI, supplement your standard usability questions with a few simple Likert-scale items (a simple scoring sketch follows the note below):

- "The AI's suggestion was reliable." (1-7, Strongly Disagree to Strongly Agree)
- "I am confident in the AI's output." (1-7)
- "I understood why the AI made that recommendation." (1-7)
- "The AI responded in a way that I expected." (1-7)
- "The AI provided consistent responses over time." (1-7)

Over time, these metrics can track how trust is changing as your product evolves.

Note: If you want to go beyond these simple questions that I've made up, there are numerous scales (measurements) of trust in technology in the academic literature. It might be an interesting endeavor to measure some relevant psychographic and demographic characteristics of your users and see how they correlate with trust in AI and in your product. Table 1 at the end of the article contains four examples of current scales you might consider using to measure trust. You can decide which is best for your application, or you might pull some of the items from any of the scales if you aren't looking to publish your findings in an academic journal, yet want to use items that have been subjected to some level of empirical scrutiny.
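As a rough illustration of how these items might be tracked between releases, here is a small TypeScript sketch that averages Likert responses per dimension. The item keys and the per-dimension grouping are my own assumptions for the example, not part of any validated scale.

// Sketch: aggregating 1-7 Likert responses for the trust items above.
// The item keys and grouping are illustrative assumptions, not a validated scale.

type TrustItem = "reliable" | "confident" | "understoodWhy" | "expected" | "consistent";
type Response = Record<TrustItem, number>; // each value is 1-7

const responses: Response[] = [
  { reliable: 6, confident: 5, understoodWhy: 4, expected: 6, consistent: 5 },
  { reliable: 3, confident: 4, understoodWhy: 2, expected: 5, consistent: 4 },
];

function meanScore(items: TrustItem[], data: Response[]): number {
  const values = data.flatMap((r) => items.map((item) => r[item]));
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// Track a headline score plus sub-scores from release to release.
const allItems: TrustItem[] = ["reliable", "confident", "understoodWhy", "expected", "consistent"];
console.log("Overall trust:", meanScore(allItems, responses).toFixed(2));
console.log("Predictability:", meanScore(["expected", "consistent"], responses).toFixed(2));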
Behavioral Metrics: Observing What Users Do, Not Just What They Say

People's true feelings are often revealed in their actions. You can use behaviors that reflect the specific context of use for your product. Here are a few general metrics that apply to most AI tools and give insight into users' behavior and the trust they place in your tool (a small logging sketch follows the list):

Correction Rate
How often do users manually edit, undo, or ignore the AI's output? A high correction rate is a powerful signal of low trust in its Ability.

Verification Behavior
Do users switch to Google or open another application to double-check the AI's work? This indicates they don't trust it as a standalone source of truth. It can also be a positive sign that they are calibrating their trust in the system up front.

Disengagement
Do users turn the AI feature off? Do they stop using it entirely after one bad experience? This is the ultimate behavioral vote of no confidence.
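To make one of these metrics concrete, here is a hedged TypeScript sketch computing a correction rate from hypothetical event logs. The event names and log shape are assumptions for illustration; your analytics pipeline will differ.

// Sketch: deriving a correction rate from logged AI interaction events.
// Event names ("ai_output_shown", "ai_output_edited", ...) are hypothetical.

type AiEvent = {
  type: "ai_output_shown" | "ai_output_edited" | "ai_output_undone" | "ai_output_accepted";
  userId: string;
};

function correctionRate(events: AiEvent[]): number {
  const shown = events.filter((e) => e.type === "ai_output_shown").length;
  const corrected = events.filter(
    (e) => e.type === "ai_output_edited" || e.type === "ai_output_undone",
  ).length;
  return shown === 0 ? 0 : corrected / shown;
}

const log: AiEvent[] = [
  { type: "ai_output_shown", userId: "a" },
  { type: "ai_output_edited", userId: "a" },
  { type: "ai_output_shown", userId: "b" },
  { type: "ai_output_accepted", userId: "b" },
];

// 0.5 here: one of two AI outputs was corrected, a possible low-trust signal.
console.log("Correction rate:", correctionRate(log));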
Designing For Trust: From Principles To Pixels

Once you've researched and measured trust, you can begin to design for it. This means translating psychological principles into tangible interface elements and user flows.

Designing For Competence And Predictability

Set Clear Expectations
Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle. A simple "I'm still learning about [topic X], so please double-check my answers" can work wonders.

Show Confidence Levels
Instead of just giving an answer, have the AI signal its own uncertainty. A weather app that says "70% chance of rain" is more trustworthy than one that just says "It will rain" and is wrong. An AI could say, "I'm 85% confident in this summary," or highlight sentences it's less sure about.

The Role Of Explainability (XAI) And Transparency

Explainability isn't about showing users the code. It's about providing a useful, human-understandable rationale for a decision.

Instead of: "Here is your recommendation."
Try: "Because you frequently read articles about UX research methods, I'm recommending this new piece on measuring trust in AI."

This addition transforms the AI from an opaque oracle into a transparent, logical partner.

Many of the popular AI tools (e.g., ChatGPT and Gemini) show the thinking that went into the response they provide to a user. Figure 3 shows the steps Gemini went through to provide me with a non-response when I asked it to help me generate the masterpiece displayed above in Figure 2. While this might be more information than most users care to see, it provides a useful resource for a user to audit how the response came to be, and it has provided me with instructions on how I might proceed to address my task.

Figure 4 shows an example of a scorecard OpenAI makes available as an attempt to increase users' trust. These scorecards are available for each ChatGPT model and go into the specifics of how the models perform in key areas such as hallucinations, health-based conversations, and much more. In reading the scorecards closely, you will see that no AI model is perfect in any area. The user must remain in a "trust but verify" mode to make the relationship between human reality and AI work in a way that avoids potential catastrophe. There should never be blind trust in an LLM.

Designing For Trust Repair (Graceful Error Handling) And Not Knowing An Answer

Your AI will make mistakes. Trust is not determined by the absence of errors, but by how those errors are handled.

Acknowledge Errors Humbly
When the AI is wrong, it should be able to state that clearly. "My apologies, I misunderstood that request. Could you please rephrase it?" is far better than silence or a nonsensical answer.

Provide An Easy Path To Correction
Make feedback mechanisms (like thumbs up/down or a correction box) obvious. More importantly, show that the feedback is being used. A "Thank you, I'm learning from your correction" can help rebuild trust after a failure, as long as it is true.

Likewise, your AI can't know everything. You should acknowledge this to your users. UX practitioners should work with the product team to ensure that honesty about limitations is a core product principle. This can include the following:

Establish User-Centric Metrics: Instead of only measuring engagement or task completion, UXers can work with product managers to define and track metrics like:
- Hallucination Rate: The frequency with which the AI provides verifiably false information.
- Successful Fallback Rate: How often the AI correctly identifies its inability to answer and provides a helpful, honest alternative.

Prioritize the "I Don't Know" Experience: UXers should frame the "I don't know" response not as an error state, but as a critical feature. They must lobby for the engineering and content resources needed to design a high-quality, helpful fallback experience.

UX Writing And Trust

All of these considerations highlight the critical role of UX writing in the development of trustworthy AI. UX writers are the architects of the AI's voice and tone, ensuring that its communication is clear, honest, and empathetic. They translate complex technical processes into user-friendly explanations, craft helpful error messages, and design conversational flows that build confidence and rapport. Without thoughtful UX writing, even the most technologically advanced AI can feel opaque and untrustworthy.

The words and phrases an AI uses are its primary interface with users. UX writers are uniquely positioned to shape this interaction, ensuring that every tooltip, prompt, and response contributes to a positive and trust-building experience. Their expertise in human-centered language and design is indispensable for creating AI systems that not only perform well but also earn and maintain the trust of their users.

A few key areas for UX writers to focus on when writing for AI include:

Prioritize Transparency
Clearly communicate the AI's capabilities and limitations, especially when it's still learning or if its responses are generated rather than factual. Use phrases that indicate the AI's nature, such as "As an AI, I can..." or "This is a generated response."

Design For Explainability
When the AI provides a recommendation, decision, or complex output, strive to explain the reasoning behind it in an understandable way. This builds trust by showing the user how the AI arrived at its conclusion.

Emphasize User Control
Empower users by providing clear ways to provide feedback, correct errors, or opt out of certain AI features. This reinforces the idea that the user is in control and the AI is a tool to assist them.

The Ethical Tightrope: The Researcher's Responsibility

As the people responsible for understanding and advocating for users, we walk an ethical tightrope. Our work comes with profound responsibilities.

The Danger Of Trustwashing

We must draw a hard line between designing for calibrated trust and designing to manipulate users into trusting a flawed, biased, or harmful system. For example, if an AI system designed for loan approvals consistently discriminates against certain demographics but presents a user interface that implies fairness and transparency, this would be an instance of trustwashing. Another example of trustwashing would be an AI medical diagnostic tool that occasionally misdiagnoses conditions while its user interface makes it seem infallible. To avoid trustwashing, the system should clearly communicate the potential for error and the need for human oversight.

Our goal must be to create genuinely trustworthy systems, not just the perception of trust.
Using these principles to lull users into a false sense of security is a betrayal of our professional ethics. To avoid and prevent trustwashing, researchers and UX teams should:

Prioritize genuine transparency.
Clearly communicate the limitations, biases, and uncertainties of AI systems. Don't overstate capabilities or obscure potential risks.

Conduct rigorous, independent evaluations.
Go beyond internal testing and seek external validation of system performance, fairness, and robustness.

Engage with diverse stakeholders.
Involve users, ethics experts, and impacted communities in the design, development, and evaluation processes to identify potential harms and build genuine trust.

Be accountable for outcomes.
Take responsibility for the societal impact of AI systems, even if unintended. Establish clear and accessible mechanisms for redress when harm occurs, ensuring that individuals and communities affected by AI decisions have avenues for recourse and compensation, and commit to continuous improvement.

Educate the public.
Help users understand how AI works, its limitations, and what to look for when evaluating AI products.

Advocate for ethical guidelines and regulations.
Support the development and implementation of industry standards and policies that promote responsible AI development and prevent deceptive practices.

Be wary of marketing hype.
Critically assess claims made about AI systems, especially those that emphasize trustworthiness without clear evidence or detailed explanations.

Publish negative findings.
Don't shy away from reporting challenges, failures, or ethical dilemmas encountered during research. Transparency about limitations is crucial for building long-term trust.

Focus on user empowerment.
Design systems that give users control, agency, and understanding rather than having them passively accept AI outputs.

The Duty To Advocate

When our research uncovers deep-seated distrust or potential harm, like the fear of job displacement, our job has only just begun. We have an ethical duty to advocate for that user. In my experience directing research teams, I've seen that the hardest part of our job is often carrying these uncomfortable truths into rooms where decisions are made. We must champion these findings and advocate for design and strategy shifts that prioritize user well-being, even when it challenges the product roadmap.

I personally try to approach presenting this information as an opportunity for growth and improvement, rather than a negative challenge. For example, instead of stating "Users don't trust our AI because they fear job displacement," I might frame it as "Addressing user concerns about job displacement presents a significant opportunity to build deeper trust and long-term loyalty by demonstrating our commitment to responsible AI development and exploring features that enhance human capabilities rather than replace them." This reframing can shift the conversation from a defensive posture to a proactive, problem-solving mindset, encouraging collaboration and innovative solutions that ultimately benefit both the user and the business.

It's no secret that one of the more appealing areas for businesses to use AI is in workforce reduction. In reality, there will be many cases where businesses look to cut 10-20% of a particular job family due to the perceived efficiency gains of AI. However, giving users the opportunity to shape the product may steer it in a direction that makes them feel safer than if they do not provide feedback.
We should not attempt to convince users they are wrong if they are distrustful of AI. We should appreciate that they are willing to provide feedback, creating an experience that is informed by the human experts who have long been doing the task being automated.

Conclusion: Building Our Digital Future On A Foundation Of Trust

The rise of AI is not the first major technological shift our field has faced. However, it presents one of the most significant psychological challenges of our time. Building products that are not just usable but also responsible, humane, and trustworthy is our obligation as UX professionals.

Trust is not a soft metric. It is the fundamental currency of any successful human-technology relationship. By understanding its psychological roots, measuring it with rigor, and designing for it with intent and integrity, we can move from creating intelligent products to building a future where users can place their confidence in the tools they use every day. A trust that is earned and deserved.

Table 1: Published Academic Scales Measuring Trust In Automated Systems

Trust in Automation Scale
Focus: A 12-item questionnaire to assess trust between people and automated systems.
Key dimensions: Measures a general level of trust, including reliability, predictability, and confidence.
Citation: Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53-71.

Trust of Automated Systems Test (TOAST)
Focus: 9 items used to measure user trust in a variety of automated systems, designed for quick administration.
Key dimensions: Divided into two main subscales: Understanding (the user's comprehension of the system) and Performance (belief in the system's effectiveness).
Citation: Wojton, H. M., Porter, D., Lane, S. T., Bieber, C., & Madhavan, P. (2020). Initial validation of the trust of automated systems test (TOAST). The Journal of Social Psychology, 160(6), 735-750.

Trust in Automation Questionnaire
Focus: A 19-item questionnaire capable of predicting user reliance on automated systems. A 2-item subscale is available for quick assessments; the full tool is recommended for a more thorough analysis.
Key dimensions: Measures 6 factors: Reliability, Understandability, Propensity to trust, Intentions of developers, Familiarity, and Trust in automation.
Citation: Körber, M. (2018). Theoretical considerations and development of a questionnaire to measure trust in automation. In Proceedings of the 20th Triennial Congress of the IEA. Springer.

Human Computer Trust Scale
Focus: A 12-item questionnaire created to provide an empirically sound tool for assessing user trust in technology.
Key dimensions: Divided into two key factors: Benevolence and Competence (the positive attributes of the technology) and Perceived Risk (the user's subjective assessment of the potential for negative consequences when using a technical artifact).
Citation: Gulati, S., Sousa, S., & Lamas, D. (2019). Design, development and evaluation of a human-computer trust scale. Behaviour & Information Technology.

Appendix A: Trust-Building Tactics Checklist

To design for calibrated trust, consider implementing the following tactics, organized by the four pillars of trust:

1. Ability (Competence) & Predictability

- Set Clear Expectations: Use onboarding, tooltips, and empty states to honestly communicate the AI's strengths and weaknesses.
- Show Confidence Levels: Display the AI's uncertainty (e.g., "70% chance," "85% confident") or highlight less certain parts of its output.
- Provide Explainability (XAI): Offer useful, human-understandable rationales for the AI's decisions or recommendations (e.g., "Because you frequently read X, I'm recommending Y").
- Design for Graceful Error Handling: Acknowledge errors humbly (e.g., "My apologies, I misunderstood that request."). Provide easy paths to correction (e.g., prominent feedback mechanisms like thumbs up/down). Show that feedback is being used (e.g., "Thank you, I'm learning from your correction").
- Design for "I Don't Know" Responses: Acknowledge limitations honestly. Prioritize a high-quality, helpful fallback experience when the AI cannot answer.
- Prioritize Transparency: Clearly communicate the AI's capabilities and limitations, especially if responses are generated.

2. Benevolence

- Address Existential Fears: When users express concerns (e.g., job displacement), validate their concerns and reframe the feedback into actionable insights about collaborative tools.
- Prioritize User Well-being: Advocate for design and strategy shifts that prioritize user well-being, even if it challenges the product roadmap.
- Emphasize User Control: Provide clear ways for users to give feedback, correct errors, or opt out of AI features.

3. Integrity

- Adhere to Ethical Principles: Ensure the AI operates on predictable, ethical principles, demonstrating fairness and honesty.
- Prioritize Genuine Transparency: Clearly communicate the limitations, biases, and uncertainties of AI systems; avoid overstating capabilities or obscuring risks.
- Conduct Rigorous, Independent Evaluations: Seek external validation of system performance, fairness, and robustness to mitigate bias.
- Engage Diverse Stakeholders: Involve users, ethics experts, and impacted communities in the design and evaluation processes.
- Be Accountable for Outcomes: Establish clear mechanisms for redress and continuous improvement for societal impacts, even if unintended.
- Educate the Public: Help users understand how AI works, its limitations, and how to evaluate AI products.
- Advocate for Ethical Guidelines: Support the development and implementation of industry standards and policies that promote responsible AI.
- Be Wary of Marketing Hype: Critically assess claims about AI trustworthiness and demand verifiable data.
- Publish Negative Findings: Be transparent about challenges, failures, or ethical dilemmas encountered during research.

4. Predictability & Reliability

- Set Clear Expectations: Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle.
- Show Confidence Levels: Instead of just giving an answer, have the AI signal its own uncertainty.
- Provide Explainability (XAI) and Transparency: Offer a useful, human-understandable rationale for AI decisions.
- Design for Graceful Error Handling: Acknowledge errors humbly and provide easy paths to correction.
- Prioritize the "I Don't Know" Experience: Frame "I don't know" as a feature and design a high-quality fallback experience.
- Prioritize Transparency (UX Writing): Clearly communicate the AI's capabilities and limitations, especially when it's still learning or if responses are generated.
- Design for Explainability (UX Writing): Explain the reasoning behind AI recommendations, decisions, or complex outputs.
  • Ambient Animations In Web Design: Principles And Implementation (Part 1)
    smashingmagazine.com
Unlike timeline-based animations, which tell stories across a sequence of events, or interaction animations that are triggered when someone touches something, ambient animations are the kind of passive movements you might not notice at first. But they make a design look alive in subtle ways.

In an ambient animation, elements might subtly transition between colours, move slowly, or gradually shift position. Elements can appear and disappear, change size, or rotate slowly. Ambient animations aren't intrusive; they don't demand attention, aren't distracting, and don't interfere with what someone's trying to achieve when they use a product or website. They can be playful, too, making someone smile when they catch sight of them. That way, ambient animations add depth to a brand's personality.

To illustrate the concept of ambient animations, I've recreated the cover of a Quick Draw McGraw comic book (PDF) as a CSS/SVG animation. The comic was published by Charlton Comics in 1971, and, being printed, these characters didn't move, making them ideal candidates to transform into ambient animations.

FYI: Original cover artist Ray Dirgo was best known for his work drawing Hanna-Barbera characters for Charlton Comics during the 1970s. Ray passed away in 2000 at the age of 92. He outlived Charlton Comics, which went out of business in 1986; DC Comics acquired its characters.

Tip: You can view the complete ambient animation code on CodePen.

Choosing Elements To Animate

Not everything on a page or in a graphic needs to move, and part of designing an ambient animation is knowing when to stop. The trick is to pick elements that lend themselves naturally to subtle movement, rather than forcing motion into places where it doesn't belong.

Natural Motion Cues

When I'm deciding what to animate, I look for natural motion cues and think about when something would move naturally in the real world. I ask myself: "Does this thing have weight?", "Is it flexible?", and "Would it move in real life?" If the answer's yes, it'll probably feel right if it moves. There are several motion cues in Ray Dirgo's cover artwork. For example, the peace pipe Quick Draw's puffing on has two feathers hanging from it. They swing slightly left and right by three degrees as the pipe moves, just like real feathers would.

#quick-draw-pipe {
  animation: quick-draw-pipe-rotate 6s ease-in-out infinite alternate;
}

@keyframes quick-draw-pipe-rotate {
  0% { transform: rotate(3deg); }
  100% { transform: rotate(-3deg); }
}

#quick-draw-feather-1 {
  animation: quick-draw-feather-1-rotate 3s ease-in-out infinite alternate;
}

#quick-draw-feather-2 {
  animation: quick-draw-feather-2-rotate 3s ease-in-out infinite alternate;
}

@keyframes quick-draw-feather-1-rotate {
  0% { transform: rotate(3deg); }
  100% { transform: rotate(-3deg); }
}

@keyframes quick-draw-feather-2-rotate {
  0% { transform: rotate(-3deg); }
  100% { transform: rotate(3deg); }
}

Atmosphere, Not Action

I often choose elements or decorative details that add to the vibe but don't fight for attention. Ambient animations aren't about signalling to someone where they should look; they're about creating a mood.
Here, the chief slowly and subtly rises and falls as he puffs on his pipe. (The keyframes name below has been aligned with the animation property so the two match.)

#chief {
  animation: chief-rise-fall 3s ease-in-out infinite alternate;
}

@keyframes chief-rise-fall {
  0% { transform: translateY(0); }
  100% { transform: translateY(-20px); }
}

For added effect, the feather on his head also moves in time with his rise and fall:

#chief-feather-1 {
  animation: chief-feather-1-rotate 3s ease-in-out infinite alternate;
}

#chief-feather-2 {
  animation: chief-feather-2-rotate 3s ease-in-out infinite alternate;
}

@keyframes chief-feather-1-rotate {
  0% { transform: rotate(0deg); }
  100% { transform: rotate(-9deg); }
}

@keyframes chief-feather-2-rotate {
  0% { transform: rotate(0deg); }
  100% { transform: rotate(9deg); }
}

Playfulness And Fun

One of the things I love most about ambient animations is how they bring fun into a design. They're an opportunity to demonstrate personality through playful details that make people smile when they notice them. Take a closer look at the chief, and you might spot his eyebrows raising and his eyes crossing as he puffs hard on his pipe. Quick Draw's eyebrows also bounce at what look like random intervals.

#quick-draw-eyebrow {
  animation: quick-draw-eyebrow-raise 5s ease-in-out infinite;
}

@keyframes quick-draw-eyebrow-raise {
  0%, 20%, 60%, 100% { transform: translateY(0); }
  10%, 50%, 80% { transform: translateY(-10px); }
}

Keep Hierarchy In Mind

Motion draws the eye, and even subtle movements have visual weight. So I reserve the most obvious animations for the elements where I need to create the biggest impact. Smoking his pipe clearly has a big effect on Quick Draw McGraw, so to demonstrate this, I wrapped his elements, including his pipe and its feathers, within a new SVG group, and then I made that wobble.

#quick-draw-group {
  animation: quick-draw-group-wobble 6s ease-in-out infinite;
}

@keyframes quick-draw-group-wobble {
  0% { transform: rotate(0deg); }
  15% { transform: rotate(2deg); }
  30% { transform: rotate(-2deg); }
  45% { transform: rotate(1deg); }
  60% { transform: rotate(-1deg); }
  75% { transform: rotate(0.5deg); }
  100% { transform: rotate(0deg); }
}

Then, to emphasise this motion, I mirrored those values to wobble his shadow:

#quick-draw-shadow {
  animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
}

@keyframes quick-draw-shadow-wobble {
  0% { transform: rotate(0deg); }
  15% { transform: rotate(-2deg); }
  30% { transform: rotate(2deg); }
  45% { transform: rotate(-1deg); }
  60% { transform: rotate(1deg); }
  75% { transform: rotate(-0.5deg); }
  100% { transform: rotate(0deg); }
}

Apply Restraint

Just because something can be animated doesn't mean it should be. When creating an ambient animation, I study the image and note the elements where subtle motion might add life. I keep in mind the questions: What's the story I'm telling? Where does movement help, and when might it become distracting? Remember, restraint isn't just about doing less; it's about doing the right things less often.

Layering SVGs For Export

In "Smashing Animations Part 4: Optimising SVGs," I wrote about the process I rely on to prepare, optimise, and structure SVGs for animation. When elements are crammed into a single SVG file, they can be a nightmare to navigate. Locating a specific path or group can feel like searching for a needle in a haystack. That's why I develop my SVGs in layers, exporting and optimising one set of elements at a time, always in the order they'll appear in the final file.
This lets me build the master SVG gradually by pasting in each cleaned-up section. I start by exporting background elements, optimising them, adding class and ID attributes, and pasting their code into my SVG file. Then, I export elements that often stay static or move as groups, like the chief and Quick Draw McGraw, before finally exporting, naming, and adding details, like Quick Draw's pipe, eyes, and his stoned sparkles. Since I export each layer from the same-sized artboard, I don't need to worry about alignment or positioning issues, as they all slot into place automatically.

Implementing Ambient Animations

You don't need an animation framework or library to add ambient animations to a project. Most of the time, all you'll need is a well-prepared SVG and some thoughtful CSS.

But let's start with the SVG. The key is to group elements logically and give them meaningful class or ID attributes, which act as animation hooks in the CSS. For this animation, I gave every moving part its own identifier, like #quick-draw-tail or #chief-smoke-2. That way, I could target exactly what I needed without digging through the DOM like a raccoon in a trash can.

Once the SVG is set up, CSS does most of the work. I can use @keyframes for more expressive movement, or animation-delay to simulate randomness and stagger timings. The trick is to keep everything subtle and remember I'm not animating for attention, I'm animating for atmosphere.

Remember that most ambient animations loop continuously, so they should be lightweight and performance-friendly. And of course, it's good practice to respect users who've asked for less motion. You can wrap your animations in an @media prefers-reduced-motion query so they only run when they're welcome.

@media (prefers-reduced-motion: no-preference) {
  #quick-draw-shadow {
    animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
  }
}

It's a small touch that's easy to implement, and it makes your designs more inclusive.
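If you also control animations from script, the same preference can be read in the browser. Here is a minimal TypeScript sketch using the standard window.matchMedia API; the element ID comes from the article's demo, but pausing via animationPlayState is my own illustrative choice, not how the CodePen is built.

// Sketch: honouring prefers-reduced-motion from script with matchMedia.
// The media query string is standard; pausing via style.animationPlayState
// is one of several reasonable approaches.

const motionQuery = window.matchMedia("(prefers-reduced-motion: reduce)");

function applyMotionPreference(): void {
  const shadow = document.querySelector<SVGElement>("#quick-draw-shadow");
  if (!shadow) return;
  // Pause the ambient animation when the user has asked for less motion.
  shadow.style.animationPlayState = motionQuery.matches ? "paused" : "running";
}

applyMotionPreference();
// React if the user changes the OS-level setting while the page is open.
motionQuery.addEventListener("change", applyMotionPreference);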
Ambient Animation Design Principles

If you want your animations to feel ambient, more like atmosphere than action, it helps to follow a few principles. These aren't hard and fast rules, but rather things I've learned while animating smoke, sparkles, eyeballs, and eyebrows.

Keep Animations Slow And Smooth
Ambient animations should feel relaxed, so use longer durations and choose easing curves that feel organic. I often use ease-in-out, but cubic Bézier curves can also be helpful when you want a more relaxed feel and the kind of movements you might find in nature.

Loop Seamlessly And Avoid Abrupt Changes
Hard resets or sudden jumps can ruin the mood, so if an animation loops, ensure it cycles smoothly. You can do this by matching start and end keyframes, or by setting animation-direction to alternate so the animation plays forward, then back.

Use Layering To Build Complexity
A single animation might be boring. Five subtle animations, each on separate layers, can feel rich and alive. Think of it like building a sound mix: you want variation in rhythm, tone, and timing. In my animation, sparkles twinkle at varying intervals, smoke curls upward, feathers sway, and eyes boggle. Nothing dominates, and each motion plays its small part in the scene.

Avoid Distractions
The point of an ambient animation is that it doesn't dominate. It's a background element, not a call to action. If someone's eyes are drawn to a raised eyebrow, it's probably too much, so dial back the animation until it feels like something you'd only catch if you're really looking.

Consider Accessibility And Performance
Check prefers-reduced-motion, and don't assume everyone's device can handle complex animations. SVG and CSS are light, but things like blur filters, drop shadows, and complex CSS animations can still tax lower-powered devices. When an animation is purely decorative, consider adding aria-hidden="true" to keep it from cluttering up the accessibility tree.

Quick On The Draw

Ambient animation is like seasoning on a great dish. It's the pinch of salt you barely notice, but you'd miss when it's gone. It doesn't shout, it whispers. It doesn't lead, it lingers. It's floating smoke, swaying feathers, and sparkles you catch in the corner of your eye. And when it's done well, ambient animation adds personality to a design without asking for applause.

Now, I realise that not everyone needs to animate cartoon characters. So, in part two, I'll share how I created animations for several recent client projects. Until next time, if you're crafting an illustration or working with SVG, ask yourself: "What would move if this were real?" Then animate just that. Make it slow and soft. Keep it ambient.

You can view the complete ambient animation code on CodePen.
  • How To Minimize The Environmental Impact Of Your Website
    smashingmagazine.com
Climate change is the single biggest health threat to humanity, accelerated by human activities such as the burning of fossil fuels, which generate greenhouse gases that trap the sun's heat. The average temperature of the earth's surface is now 1.2°C warmer than it was in the late 1800s, and the increase is projected to more than double by the end of the century. The consequences of climate change include intense droughts, water shortages, severe fires, melting polar ice, catastrophic storms, and declining biodiversity.

The Internet Is A Significant Part Of The Problem

Shockingly, the internet is responsible for higher global greenhouse emissions than the aviation industry, and is projected to be responsible for 14% of all global greenhouse gas emissions by 2040. If the internet were a country, it would be the 4th largest polluter in the world, and it represents the largest coal-powered machine on the planet.

But how can something digital like the internet produce harmful emissions? Internet emissions come from powering the infrastructure that drives the internet, such as the vast data centres and data transmission networks that consume huge amounts of electricity. They also come from the global manufacturing, distribution, and usage of the estimated 30.5 billion devices (phones, laptops, etc.) that we use to access the internet. Unsurprisingly, internet-related emissions are increasing, given that 60% of the world's population spend, on average, 40% of their waking hours online.

We Must Urgently Reduce The Environmental Impact Of The Internet

As responsible digital professionals, we must act quickly to minimise the environmental impact of our work. It is encouraging to see the UK government encourage action by adding "Minimise environmental impact" to their best practice design principles, but there is still too much talking and not enough corrective action taking place within our industry. The reality of many tightly constrained, fast-paced, and commercially driven web projects is that minimising environmental impact is far from the agenda. So how can we make the environment more of a priority and talk about it in ways that stakeholders will listen to? A eureka moment on a recent web optimisation project gave me an idea.

My Eureka Moment

I led a project to optimise the mobile performance of www.talktofrank.com, a government drug advice website that aims to keep everyone safe from harm. Mobile performance is critically important for the success of this service to ensure that users with older mobile devices and those on slower network connections can still access the information they need.

Our work to minimise page weights focused on purely technical changes that our developer made following recommendations from tools such as Google Lighthouse, and it reduced the size of the webpages in a key user journey by up to 80%. This resulted in pages downloading up to 30% faster and the carbon footprint of the journey being reduced by 80%.

We hadn't set out to reduce the carbon footprint, but seeing these results led to my eureka moment. I realised that by minimising page weights, you improve performance (a win for users and service owners) and also consume less energy (because less data needs to be transferred and stored), creating additional benefits for the planet. Everyone wins.

This felt like a breakthrough because business, user, and environmental requirements are often at odds with one another.
By focusing on minimising websites to be as simple, lightweight, and easy to use as possible, you get benefits that extend beyond the triple bottom line of people, planet, and profit to include performance and purpose. So why is minimising such a great digital sustainability strategy?

Profit
Website providers win because their website becomes more efficient and more likely to meet its intended outcomes, and a lighter site should also lead to lower hosting bills.

People
People win because they get to use a website that downloads faster and is quick and easy to use because it's been intentionally designed to be as simple as possible, enabling them to complete their tasks with the minimum amount of effort and mental energy.

Performance
Lightweight webpages download faster, so they perform better for users, particularly those on older devices and slower network connections.

Planet
The planet wins because the amount of energy (and associated emissions) required to deliver the website is reduced.

Purpose
We know that we do our best work when we feel a sense of purpose. It is hugely gratifying as a digital professional to know that our work is doing good in the world and contributing to making things better for people and the environment.

In order to prioritise the environment, we need to be able to speak confidently in a language that will resonate with the business and ensure that any investment in time and resources yields the widest range of benefits possible. So even if you feel that the environment is a very low priority on your projects, focusing on minimising page weights to improve performance (which is generally high on the agenda) presents the perfect trojan horse for an environmental agenda (should you need one).

Doing the right thing isn't always easy, but we've done it before when managing to prioritise issues such as usability, accessibility, and inclusion on digital projects. Many of the things that make websites easier to use, more accessible, and more effective also help to minimise their environmental impact, so the things you need to do will feel familiar and achievable. Don't worry about it all being another new thing to learn about!

So this all makes sense in theory, but what's the master plan for putting it into practice?

The Masterplan

The masterplan for creating websites that have minimal environmental impact is to focus on offering the maximum value from the minimum input of energy. It's an adaptation of Buckminster Fuller's Dymaxion principle, which is one of his many progressive and groundbreaking sustainability strategies for living and surviving on a planet with finite resources.

Inputs of energy include both the electrical energy required to operate websites and the mental energy required to use them. You can achieve this by minimising websites to their core content, features, and functionality, ensuring that everything can be justified from the perspective of meeting a business or user need. This means that anything that isn't adding a proportional amount of value for the amount of energy it requires should be removed.

So that's the masterplan, but how do you put it into practice?

Decarbonise Your Highest Value User Journeys

I've developed a new approach called Decarbonising User Journeys that will help you to minimise the environmental impact of your website and maximise its performance.

Note: The approach deliberately focuses on optimising key user journeys and not entire websites, to keep things manageable and make it easier to get started.
The secret here is to start small, demonstrate improvements, and then scale. The approach consists of five simple steps:

1. Identify your highest value user journey,
2. Benchmark your user journey,
3. Set targets,
4. Decarbonise your user journey,
5. Track and share your progress.

Here's how it works.

Step 1: Identify Your Highest Value User Journey

Your highest value user journey might be the one that your users value the most, the one that brings you the highest revenue, or the one that is fundamental to the success of your organisation. You could also focus on a user journey that you know is performing particularly badly and has the potential to deliver significant business and user benefits if improved. You may have lots of important user journeys, and it's fine to decarbonise multiple journeys in parallel if you have the resources, but I'd recommend starting with one first to keep things simple.

To bring this to life, let's consider a hypothetical example of a premiership football club trying to decarbonise its online ticket-buying journey, which receives high levels of traffic and is responsible for a significant proportion of its weekly income.

Step 2: Benchmark Your User Journey

Once you've selected your user journey, you need to benchmark it in terms of how well it meets user needs, the value it offers your organisation, and its carbon footprint. It is vital that you understand the job it needs to do and how well it is doing it before you start to decarbonise it. There is no point in removing elements of the journey in an effort to reduce its carbon footprint if, for example, you compromise its ability to meet a key user or business need.

You can benchmark how well your user journey is meeting user needs by conducting user research alongside analysing existing customer feedback. Interviews with business stakeholders will help you to understand the value the journey is providing the organisation and how well business needs are being met.

You can benchmark the carbon footprint and performance of your user journey using online tools such as Cardamon, Ecograder, Website Carbon Calculator, Google Lighthouse, and Bioscore. Make sure you have your analytics data to hand to help get the most accurate estimate of your footprint. To use these tools, simply add the URL of each page of your journey, and they will give you a range of information such as page weight, energy rating, and carbon emissions. Google Lighthouse works slightly differently, via a browser plugin, and generates a really useful and detailed performance report as opposed to giving you a carbon rating. (A rough sketch of the arithmetic behind such carbon estimates follows below.)

A great way to bring your benchmarking scores to life is to visualise them in a similar way to how you would present a customer journey map or service blueprint. This example focuses on communicating just the carbon footprint of the user journey, but you can also add more swimlanes to communicate how well the journey is performing from a user and business perspective too, adding user pain points, quotes, and business metrics where appropriate. I've found that adding energy efficiency ratings is really effective because it's an approach people recognise from their household appliances. This adds useful context to just showing the weights (such as grams or kilograms) of CO2, which are generally meaningless to people.
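For intuition about what those calculator tools are doing, here is a hedged TypeScript sketch of one common approach: converting bytes transferred into an emissions estimate. Both constants below are illustrative placeholders, not the calibrated figures that tools like Website Carbon Calculator use, so treat the output as a ballpark only.

// Sketch: a back-of-the-envelope page-view carbon estimate.
// Both constants are illustrative assumptions; real models (e.g., the
// Sustainable Web Design model) use calibrated, regularly updated figures.

const KWH_PER_GB = 0.8;        // assumed energy intensity of data transfer
const GRAMS_CO2_PER_KWH = 440; // assumed average grid carbon intensity

function estimateGramsCO2PerView(pageWeightBytes: number): number {
  const gigabytes = pageWeightBytes / 1_000_000_000;
  return gigabytes * KWH_PER_GB * GRAMS_CO2_PER_KWH;
}

// A hypothetical 2 MB ticket page at 50,000 views/month, before and after an 80% cut:
const before = estimateGramsCO2PerView(2_000_000) * 50_000;
const after = estimateGramsCO2PerView(400_000) * 50_000;
console.log(`~${(before / 1000).toFixed(1)} kg vs ~${(after / 1000).toFixed(1)} kg CO2 per month`);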
Within my benchmarking reports, I also add a set of benchmarking data for every page within the user journey. This gives your stakeholders a more detailed breakdown and a simple summary alongside a snapshot of the benchmarked page.

Your benchmarking activities will give you a really clear picture of where remedial work is required from an environmental, user, and business point of view. In our football user journey example, it's clear that the News and Tickets pages need some attention to reduce their carbon footprint, so they would be a sensible priority for decarbonising.

Step 3: Set Targets

Use your benchmarking results to help you set targets to aim for, such as a carbon budget, energy efficiency, maximum page weight, and minimum Google Lighthouse performance targets for each individual page, in addition to your existing UX metrics and business KPIs. There is no right or wrong way to set targets. Choose what feels achievable and viable for your business; you'll only learn how reasonable your targets are when you begin to decarbonise your user journeys.

Setting targets is important because it gives you something to aim for and keeps you focused and accountable. The quantitative nature of this work is great because it gives you the ability to quickly demonstrate the positive impact of your work, making it easier to justify the time and resources you are dedicating to it.
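One practical way to hold pages to such targets is a performance budget checked automatically. The sketch below uses the lighthouse and chrome-launcher npm packages run from Node; the budget figures are arbitrary examples, and you should verify the config shape against the current Lighthouse documentation before relying on it in CI.

// Sketch: enforcing a page-weight budget with Lighthouse run from Node.
// Budget numbers are illustrative; verify the config shape against the
// current Lighthouse docs before using this in CI.

import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

async function checkBudget(url: string): Promise<void> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
  const result = await lighthouse(url, { port: chrome.port }, {
    extends: "lighthouse:default",
    settings: {
      budgets: [{
        path: "/*",
        resourceSizes: [
          { resourceType: "total", budget: 500 }, // max ~500 KB per page
          { resourceType: "image", budget: 200 }, // max ~200 KB of images
        ],
      }],
    },
  });
  const score = result?.lhr.categories.performance.score ?? 0;
  console.log(`${url}: performance ${(score * 100).toFixed(0)}/100`);
  await chrome.kill();
}

checkBudget("https://example.com/tickets").catch(console.error);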
Step 5: Track And Share Your Progress

As you decarbonise your user journeys, use the benchmarking tools from step 2 to track your progress against the targets you set in step 3, and share your progress as part of your wider sustainability reporting initiatives.

All being well, at this point you will have the numbers to demonstrate how the performance of your user journey has improved, and also how you have managed to reduce its carbon footprint. Share these results with the business as soon as you have them to help you secure the resources to continue the work and initiate similar work on other high-value user journeys.

You should also start to communicate your progress to your users. It's important that they are made aware of the carbon footprint of their digital activity and empowered to make informed choices about the environmental impact of the websites that they use. Ideally, every website should communicate the emissions generated from viewing its pages to help people make these informed choices, and also to encourage website providers to minimise their emissions if they are being displayed publicly. Often, people will have no choice but to use a specific website to complete a specific task, so it is the responsibility of the website provider to ensure the environmental impact of using their website is as small as possible.

You can also help to raise awareness of the environmental impact of websites, and of what you are doing to minimise your own impact, by publishing a digital sustainability statement such as Unilever's.
A good digital sustainability statement should acknowledge the environmental impact of your website, what you have done to reduce it, and what you plan to do next to minimise it further. As an industry, we should normalise publishing digital sustainability statements in the same way that accessibility statements have become a standard addition to website footers.

Useful Decarbonising Principles

Keep these principles in mind to help you decarbonise your user journeys:

- More doing and less talking. Start decarbonising your user journeys as soon as possible to accelerate your learning and positive change.
- Start small. Starting small by decarbonising an individual journey makes it easier to get started and generates results that demonstrate value faster.
- Aim to do more with less. Minimise what you offer to ensure you are providing the maximum amount of value for the energy you are consuming.
- Make your website as useful and as easy to use as possible. Useful websites can justify the energy consumed to provide them, ensuring they are net positive in terms of doing more good than harm.
- Focus on progress over perfection. Websites are never finished or perfect, but they can always be improved, and every small improvement you make will make a difference.

Start Decarbonising Your User Journeys Today

Decarbonising user journeys shouldn't be a one-off exercise, reserved for the next time you decide to redesign or replatform your website; it should happen on a continual basis as part of your broader digital sustainability strategy. We know that websites are never finished and that the best websites continually improve as both user and business needs change. I'd like to encourage people to adopt the same mindset when it comes to minimising the environmental impact of their websites.

Decarbonising will happen most effectively when digital professionals challenge themselves on a daily basis to minimise the things they are working on. This avoids building up carbon debt, the compounding technical and design debt within our websites, which is always harder to remove retrospectively than to avoid in the first place.

By taking a pragmatic approach, such as optimising high-value user journeys and aligning with business metrics such as performance, we stand the best possible chance of making digital sustainability a priority. You'll have noticed that, other than using website carbon calculator tools, this approach doesn't require any skills that don't already exist within typical digital teams today. This is great because it means you've already got the skills you need to do this important work.

I would encourage everyone to raise the issue of the environmental impact of the internet in their next team meeting and to try this decarbonising approach to create better outcomes for people, profit, performance, purpose, and the planet. Good luck!
  • Designing For TV: The Evergreen Pattern That Shapes TV Experiences (Part 1)
    smashingmagazine.com
Television sets have been the staple of our living rooms for decades. We watch, we interact, and we control, but how often do we design for them? TV design flew under my radar for years, until one day I found myself in the deep end, designing TV-specific user interfaces. Now, after gathering quite a bit of experience in the area, I would like to share my knowledge on this rather rare topic. If you're interested in learning more about the user experience and user interfaces of television, this article should be a good starting point.

Just like any other device or use case, TV has its quirks, specifics, and guiding principles. Before getting started, it will be beneficial to understand the core ins and outs. In Part 1, we'll start with a bit of history, take a close look at the fundamentals, and review the evolution of television. In Part 2, we'll dive into the depths of the practical aspects of designing for TV, including its key principles and patterns.

Let's start with the two key paradigms that dictate the process of designing TV interfaces.

Mind The Gap, Or The 10-Foot Experience

Firstly, we have the so-called 10-foot experience, referring to the fact that interaction and consumption on TV happen from a distance of roughly three or more meters. This is significantly different from interacting with a phone or a computer, and it implies some specific approaches in TV user interface design. For example, we'll need to make text and user interface (UI) elements larger on TV to account for the bigger distance to the screen.

Furthermore, we'll take extra care to adhere to contrast standards, primarily relying on dark interfaces, as light ones may be too blinding in darker surroundings. And finally, considering the laid-back nature of the device, we'll simplify the interactions.

But the 10-foot experience is only one part of the equation. There wouldn't be a 10-foot experience in the first place if there were no mediator between the user and the device, and if we didn't have something to interact through from a distance. There would be no 10-foot experience if there were no remote controllers.

The Mediator

The remote, the second half of the equation, is what allows us to interact with the TV from the comfort of the couch. Slower and more deliberate, this conglomerate of buttons lacks the fluid motion of a mouse or the dexterity of fingers against a touchscreen, yet the capabilities of the remote should not be underestimated.

Rudimentary as it is, and with a limited set of functions, the remote allows for some interesting design approaches and can carry the weight of the modern TV along with its ever-growing requirements for interactivity. It underwent a handful of overhauls during the seventy years since its inception and was refined and made more ergonomic; however, there is a 40-year-old pattern so deeply ingrained in its foundation that nothing can change it.

What if I told you that you could navigate TV interfaces and apps with a basic controller from the 1980s just as well as with the latest remote from Apple? Not only that, but any experience built around the six core buttons of a remote will be system-agnostic and will easily translate across platforms. This is the main point I will focus on for the rest of this article.

Birth Of A Pattern

As television sets were taking over people's living rooms in the 1950s, manufacturers sought to upgrade and improve the user experience.
The effort of walking up to the device to manually adjust its settings was eventually identified as an area for improvement, and as a result, the first television remote controllers were introduced to the market.

Early Developments

Preliminary iterations of the remote were rather unique, and it took some divergence before we finally settled on a rectangular shape and sprinkled buttons on top. Take a look at the Zenith Flash-Matic, for example. Designed in the mid-1950s, this standout device featured a single button that triggered a directional lamp; by pointing it at specific corners of the TV set, viewers could control various functions, such as changing channels or adjusting the volume.

While they were a far cry from their modern counterparts, devices like the Flash-Matic set the scene for further developments, and we were off to the races! As the designs evolved, the core functionality of the remote solidified. Gradually, remote controls became more than just simple channel changers, evolving into command centers for the expanding territory of home entertainment.

Note: I will not go too deeply into history here, aside from some specific points that are of importance to the matter at hand, but if you have some time to spare, do look into the developmental history of television sets and remotes. It's quite a fascinating topic.

Practical as they may have been, though, remotes were still considered a luxury, significantly increasing the prices of TV sets. As the 1970s were coming to a close, only around 17% of United States households had a remote controller for their TVs. Yet things would change as the new decade rolled in.

Button Mania Of The 1980s

The eighties brought with them the Apple Macintosh, MTV, and Star Wars. It was a time of cultural shifts and technological innovation. Videocassette recorders (VCRs) and a multitude of other consumer electronics found their place in the living rooms of the world, alongside TVs. These new devices, while enriching our media experiences, also introduced a few new design problems. Where there was once a single remote, now there were multiple remotes, and things were slowly getting out of hand. This marked the advent of universal remotes.

Trying to hit many targets with one stone, the unwieldy universal remotes were humanity's best solution for controlling a wider array of devices. And they did solve some of these problems, albeit in an awkward way. The complexity of universal remotes was a trade-off for versatility, allowing them to be programmed and used as a command center for controlling multiple devices. This meant transforming the relatively simple design of their predecessors into a beehive of buttons, prioritizing broader compatibility over elegance.

On the other hand, almost as a response to the inconvenience of the universal remote, a different type of controller was conceived in the 1980s, one with a very basic layout and set of buttons, and one that would leave its mark on both how we interact with the TV and how our remotes are laid out. A device that would, knowingly or not, give birth to a navigational pattern that is yet to be broken: the NES controller.

D-pad Dominance

Released in 1985, the Nintendo Entertainment System (NES) was an instant hit. Having sold sixty million units around the world, it left an undeniable mark on the gaming console industry. The NES controller (which was not truly remote, as it ran a cable to the central unit) introduced the world to a deceptively simple control scheme.
Consisting of six primary actions, it gave us the directional pad (the D-pad), along with two action buttons (A and B). Made in response to the bulky joystick, the cross-shaped cluster allowed for easy movement along two axes (up, down, left, and right). Charmingly intuitive, this navigational pattern would produce countless hours of gaming fun, but more importantly, its elementary design would seep over into the wider industry: the D-pad, along with the two action buttons, would become the very basis on which future remotes would be constructed.

The world continued spinning madly on, and what was once a luxury became commonplace. By the end of the decade, TV remotes were integral to the standard television experience, and more than two-thirds of American TV owners had some sort of remote.

The nineties rolled in with further technological advancements. TV sets became more robust, allowing for finer tuning of their settings. This meant creating interfaces through which such tasks could be accomplished, and along with their master sets, remotes got updated as well. Gone were the bulky rectangular behemoths of the eighties. As ergonomics took precedence, they were replaced by comfortably contoured devices that better fit their users' hands. Once conglomerations of dozens of uniform buttons, these contemporary remotes introduced different shapes and sizes, allowing for recognition simply through touch. Commands were clustered into sensible groups along the body of the remote, and within those button groups, a familiar shape started to emerge. Gradually, the D-pad found its spot on our TV remotes. As the evolution of these devices progressed, it became even more deeply embedded at the core of their interactivity.

Set-top boxes and smart features emerged in the 2000s and 2010s, and TV technology continued to advance. Along the way, many bells and whistles were introduced. TVs got bigger, brighter, and thinner, yet their essence remained unchanged. In the years since their inception, remotes have been innovated upon, but all of these undertakings circle back to the core principles of the NES controller. Future endeavours never managed to replace the pattern, only to augment and reinforce it.

The Evergreen Pattern

In 2013, LG introduced its Magic remote ("So magically simple, the kids will be showing you how to use it!"). This uniquely shaped device enabled motion controls on LG TV sets, allowing users to point and click, similar to a computer mouse. Having a pointer on the screen allowed for much more flexibility and speed within the system, and the remote was well-received and praised as one of the best smart TV remotes. Innovating on tradition, this device introduced new features and fresh perspectives to the world of TV. But if we look at the device itself, we'll see that, despite its differences, it still retains the D-pad as a means of interaction. It may be argued that LG never set out to replace the directional pad, and as it stands, regardless of their intent, they only managed to augment it.

For an even better example, let's examine Apple TV's second-generation remote (the first-generation Siri remote). Ever the industry disruptor, Apple introduced a touchpad to the top half of the remote. The glass surface brought briskness and precision to the experience, enabling multi-touch gestures, swipe navigation, and quick scrolling.
This quality-of-life upgrade was most noticeable when typing with the horizontal on-screen keyboards, as it allowed for smoother and quicker scrolling from A to Z, making for a more refined experience. While at first glance it may seem Apple removed the directional buttons, the fact is that the touchpad is simply a modernised take on the pattern, still abiding by the same four directions a classic D-pad does. You could say it's a D-pad with an extra layer of gimmick.

Furthermore, the touchpad didn't really sit well with the user base, and the remote's ergonomics were a bit iffy. So instead of pushing the boundaries even further with its third generation of remotes, Apple did a complete 180, re-introducing the classic D-pad cluster while keeping the touch capabilities from the previous generation (the touch-enabled clickpad lets you select titles, swipe through playlists, and use a circular gesture on the outer ring to find just the scene you're looking for).

Now, why can't we figure out a better way to navigate TVs? Does that mean we shouldn't try to innovate? We can argue that using motion controls and gestures is an obvious upgrade to interacting with a TV, and we'd be right in principle. These added features are more complex and costly to produce, but more importantly, while it has been upgraded with bits and bobs, the TV is essentially a legacy system. And it's not only that. While touch controls are a staple of interaction these days, adding them without thorough consideration can reduce the usability of a remote.

Pitfalls Of Touch Controls

Modern car dashboards are increasingly dominated by touchscreens. While they may impress at auto shows, their real-world usability is often compromised. Driving demands constant focus and the ability to adapt and respond to ever-changing conditions. Any interface that requires taking your eyes off the road for more than a moment increases the risk of accidents. That's exactly where touch controls fall short. While they may be more practical (and likely cheaper) for manufacturers to implement, they're often the opposite for the end user.

Unlike physical buttons, knobs, and levers, which offer tactile landmarks and feedback, touch interfaces lack the ability to be used by feel alone. Even simple tasks like adjusting the volume of the radio or the climate controls often involve gestures and nested menus, all performed on a smooth glass surface that demands visual attention, especially when fine-tuning. Fortunately, the upcoming 2026 Euro NCAP regulations will encourage car manufacturers to reintroduce physical controls for core functions, reducing driver distraction and promoting safer interaction.

Similarly (though far less critically), sleek, buttonless TV remote controls may feel modern, but they introduce unnecessary abstraction to a familiar set of controls. Physical buttons with distinct shapes and positioning allow users to navigate by memory and touch, even in the dark. That's not outdated; it's a deeper layer of usability that modern design should respect, not discard. And this is precisely why Apple reworked the third-generation Apple TV remote the way it did: the touch area at the top disappeared, the D-pad again had clearly defined buttons, and at the same time, the D-pad could be extended (not replaced) to accept some touch gestures.

The Legacy Of TV

Let's take a look at an old on-screen keyboard. The Legend of Zelda, released in 1986, allowed players to register their names in-game.
There are even older games with the same feature, but that's beside the point. Using the NES controller, players would move around the keyboard, entering their moniker character by character. Now let's take a look at a modern iteration of the on-screen keyboard. Notice the difference? Or, to phrase it better: do you notice the similarities? Throughout the years, we've introduced quality-of-life improvements, but the core is exactly the same as it was forty years ago. And it is not a lack of innovation or bad remotes that keeps TV deeply ingrained in its beginnings. It's simply that this is the most optimal way to interact, given the circumstances.

Laying It All Out

Just like phones and computers, TV layouts are based on a grid system. However, this system is a lot more apparent and rudimentary on TV. Taking a look at a standard TV interface, we'll see that it consists mainly of horizontal and vertical lists, also known as shelves. These grids may be populated with cards, characters of the alphabet, or anything else, essentially, and upon closer examination, we'll notice that our movement is restricted by a few factors:

- There is no pointer for our eyes to follow, like there would be on a computer.
- There is no way to interact directly with the display like we would with a touchscreen.

For the purposes of navigating with a remote, a focus state is introduced. This means that an element will always be highlighted for our eyes to anchor to, and it will be the starting point for any subsequent movement within the interface.

[Figure: Simplified TV UI demonstrating a focus state along with sequential movement from item to item within a column.]

Moreover, starting from the focused element, we can notice that the movement is restricted to one item at a time, almost like skipping stones. Navigating linearly in such a manner, if we wanted to move within a list of elements from element #1 to element #5, we'd have to press a directional button four times.

[Figure: Simplified TV UI demonstrating a focus state along with sequential movement from item to item within a row.]

To successfully navigate such an interface, we need the ability to move left, right, up, and down. We need a D-pad. And once we've landed on our desired item, there needs to be a way to select it or make a confirmation, and in the case of a mistake, we need to be able to go back. For those two additional interactions, we'd need two more buttons: OK and back, or to make it more abstract, buttons A and B. So, to successfully navigate a TV interface, we need only a NES controller.
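To make the six-button model concrete, here is a minimal, framework-agnostic sketch of that navigation logic. It is an illustration under stated assumptions, not a production focus engine; the grid dimensions, key choices, and handler behaviour are all hypothetical:

// The six core inputs: four directions, OK, and back.
const keyMap = {
  ArrowUp: "up",
  ArrowDown: "down",
  ArrowLeft: "left",
  ArrowRight: "right",
  Enter: "ok",
  Escape: "back",
};

// A hypothetical 3×5 shelf grid with a single focused position.
const grid = { rows: 3, cols: 5, focus: { row: 0, col: 0 } };

// Move exactly one item per press, clamped to the grid edges.
function move(dir) {
  const f = grid.focus;
  if (dir === "up") f.row = Math.max(0, f.row - 1);
  if (dir === "down") f.row = Math.min(grid.rows - 1, f.row + 1);
  if (dir === "left") f.col = Math.max(0, f.col - 1);
  if (dir === "right") f.col = Math.min(grid.cols - 1, f.col + 1);
  console.log(`Focus is now at row ${f.row}, column ${f.col}`);
}

document.addEventListener("keydown", (event) => {
  const action = keyMap[event.key];
  if (!action) return;
  if (action === "ok") console.log("Select the focused item");
  else if (action === "back") console.log("Go back one level");
  else move(action); // skipping stones: one step per press
});

Whether the events come from a keyboard, a game pad, or a TV remote, they all collapse into the same six actions, which is exactly why an interface built on this model translates across platforms so easily.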
Yes, we can enhance it with touchpads and motion gestures, and augment it with voice controls, but this unshakeable foundation of interaction will remain the very basic level of inherent complexity in a TV interface. Reducing it any further would significantly impair the experience, so all we've managed to do throughout the years is build upon it.

The D-pad and buttons A and B survived decades of innovation and technological shifts, and chances are they'll survive many more. By understanding and respecting this principle, you can design intuitive, system-agnostic experiences and easily translate them across platforms. Knowing you can't go simpler than these six buttons, you'll easily build from the ground up and attach any additional framework-bound functionality to the time-tested core. And once you get the hang of these paradigms, you'll get into mapping and re-mapping buttons depending on context, and you'll understand just how far you can go when designing for TV. You'll be able to invent new experiences, conduct experiments, and challenge the patterns. But that is a topic for a different article.

Closing Thoughts

While designing for TV almost exclusively during the past few years, I was also often educating stakeholders on the very principles outlined in this article. Trying to address their concerns about different remotes working slightly differently, I found respite in the simplicity of the NES controller and how it got the point across in an understandable way. Eventually, I expanded my knowledge by looking into the developmental history of the remote and was surprised to find that my analogy had backing in history. This is a fascinating niche, and there's a lot more to share on the topic. I'm glad we started!

It's vital to understand the fundamental ins and outs of any venture before getting practical, and TV is no different. Now that you understand the basics, go, dig in, and break some ground. Having covered the underlying interaction patterns of TV experiences in detail, it's time to get practical. In Part 2, we'll explore the building blocks of the 10-foot experience and how best to utilize them in your designs. We'll review the TV design fundamentals (the screen, layout, typography, color, and focus/focus styles) and the common TV UI components (menus, shelves, spotlights, search, and more). I will also show you how to start thinking beyond the basics and how to work with and around the constraints we abide by when designing for TV. Stay tuned!

Further Reading

- "The 10 Foot Experience," by Robert Stulle (Edenspiekermann)
  "Every user interface should offer effortless navigation and control. For the 10-foot experience, this is twice as important; with only up, down, left, right, OK and back as your input vocabulary, things had better be crystal clear. You want to sit back and enjoy without having to look at your remote; your thumb should fly over the buttons to navigate, select, and activate."

- "Introduction to the 10-Foot Experience for Windows Game Developers" (Microsoft Learn)
  "A growing number of people are using their personal computers in a completely new way. When you think of typical interaction with a Windows-based computer, you probably envision sitting at a desk with a monitor, and using a mouse and keyboard (or perhaps a joystick device); this is referred to as the 2-foot experience. But there's another trend which you'll probably start hearing more about: the 10-foot experience, which describes using your computer as an entertainment device with output to a TV. This article introduces the 10-foot experience and explores the list of things that you should consider first about this new interaction pattern, even if you aren't expecting your game to be played this way."

- "10-foot user interface" (Wikipedia)
  "In computing, a 10-foot user interface, or 3-meter UI, is a graphical user interface designed for televisions (TV). Compared to desktop computer and smartphone user interfaces, it uses text and other interface elements that are much larger in order to accommodate a typical television viewing distance of 10 feet (3.0 meters); in reality, this distance varies greatly between households, and additionally, the limitations of a television's remote control necessitate extra user experience considerations to minimize user effort."

- "The Television Remote Control: A Brief History," by Mary Bellis (ThoughtCo)
  "The first TV remote, the Lazy Bone, was made in 1950 and used a cable. In 1955, the Flash-matic was the first wireless remote, but it had issues with sunlight.
  Zenith's Space Command in 1956 used ultrasound and became the popular choice for over 25 years."

- "The History of The TV Remote," by Remy Millisky (Grunge)
  "The first person to create and patent the remote control was none other than Nikola Tesla, inventor of the Tesla coil and numerous electronic systems. He patented the idea in 1893 to drive boats remotely, far before televisions were invented. Since then, remotes have come a long way, especially for the television, changing from small boxes with long wires to the wireless universal remotes that many people have today. How has the remote evolved over time?"

- "Nintendo Entertainment System controller" (Nintendo Wiki)
  "The Nintendo Entertainment System controller is the main controller for the NES. While previous systems had used joysticks, the NES controller provided a directional pad (the D-pad was introduced in the Game & Watch version of Donkey Kong)."

- "Why Touchscreens In Cars Don't Work," by Jacky Li (published in June 2018)
  "Observing the behaviour of 21 drivers has made me realize what's wrong with automotive UX. [...] While I was excited to learn more about the Tesla Model X, it slowly became apparent to me that the driver's eyes were more glued to the screen than the road. Something about interacting with a touchscreen when driving made me curious to know: just how distracting are they?"

- "Europe Is Requiring Physical Buttons For Cars To Get Top Safety Marks," by Jason Torchinsky (published in March 2024)
  "The overuse of touchscreens is an industry-wide problem, with almost every vehicle-maker moving key controls onto central touchscreens, obliging drivers to take their eyes off the road and raising the risk of distraction crashes. New Euro NCAP tests due in 2026 will encourage manufacturers to use separate, physical controls for basic functions in an intuitive manner, limiting eyes-off-road time and therefore promoting safer driving."
  • Integrating CSS Cascade Layers To An Existing Project
    smashingmagazine.com
You can always get a fantastic overview of CSS Cascade Layers in Stephanie Eckles' article, "Getting Started With CSS Cascade Layers." But let's talk about the experience of integrating cascade layers into real-world code: the good, the bad, and the spaghetti.

I could have created a sample project for a classic walkthrough, but nah, that's not how things work in the real world. I want to get our hands dirty, like inheriting code with styles that work and no one knows why. Finding projects without cascade layers was easy. The tricky part was finding one that was messy enough to have specificity and organisation issues, but broad enough to illustrate different parts of cascade layers integration.

Ladies and gentlemen, I present you with this Discord bot website by Drishtant Ghosh. I'm deeply grateful to Drishtant for allowing me to use his work as an example. This project is a typical landing page with a navigation bar, a hero section, a few buttons, and a mobile menu. You see how it looks perfect on the outside. Things get interesting, however, when we look at the CSS styles under the hood.

Understanding The Project

Before we start throwing @layers around, let's get a firm understanding of what we're working with. I cloned the GitHub repo, and since our focus is working with CSS Cascade Layers, I'll focus only on the main page, which consists of three files: index.html, index.css, and index.js.

Note: I didn't include other pages of this project, as that would make this tutorial too verbose. However, you can refactor the other pages as an experiment.

The index.css file is over 450 lines of code, and skimming through it, I can see some red flags right off the bat:

- There's a lot of code repetition, with the same selectors pointing to the same HTML element.
- There are quite a few #id selectors, which one might argue shouldn't be used in CSS (and I am one of those people).
- #botLogo is defined twice, over 70 lines apart.
- The !important keyword is used liberally throughout the code.

And yet the site works. There is nothing technically wrong here, which is another reason CSS is a big, beautiful monster: errors are silent!

Planning The Layer Structure

Now, some might be thinking, "Can't we simply move all of the styles into a single layer, like @layer legacy, and call it a day?" You could, but I don't think you should. Think about it: if more layers are added after the legacy layer, they should override the styles contained in the legacy layer, because the specificity of layers is organized by priority, where layers declared later carry higher priority.

/* "new" is more specific */
@layer legacy, new;

/* "legacy" is more specific */
@layer new, legacy;

That said, we must remember that the site's existing styles make liberal use of the !important keyword. And when that happens, the order of cascade layers gets reversed. So, even though the layers are outlined like this:

@layer legacy, new;

any styles with an !important declaration suddenly shake things up. In this case, the priority order becomes:

1. !important styles in the legacy layer (most powerful),
2. !important styles in the new layer,
3. Normal styles in the new layer,
4. Normal styles in the legacy layer (least powerful).
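If that inversion feels abstract, a quick two-layer sketch makes it easier to see. Among normal declarations, the later new layer would win, but the !important declaration in the earlier legacy layer beats everything:

@layer legacy, new;

@layer new {
  a { color: blue; } /* would win among normal declarations */
}

@layer legacy {
  /* !important reverses the layer order, so this wins overall */
  a { color: red !important; }
}

Links render red. This is exactly why a codebase littered with !important can't simply be dumped into a legacy layer and forgotten: those declarations will keep trumping your new styles until they are removed.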
I just wanted to clear that part up. Let's continue. We know that cascade layers handle specificity by creating an explicit order where each layer has a clear responsibility, and later layers always win. So, I decided to split things up into five distinct layers:

- reset: Browser default resets like box-sizing, margins, and paddings.
- base: Default styles of HTML elements, like body, h1, p, a, etc., including default typography and colours.
- layout: Major page structure for controlling how elements are positioned.
- components: Reusable UI segments, like buttons, cards, and menus.
- utilities: Single helper modifiers that do just one thing and do it well.

This is merely how I like to break things out and organize styles. Zell Liew, for example, has a different set of four buckets that could be defined as layers. There's also the concept of dividing things up even further into sublayers:

@layer components {
  /* sub-layers */
  @layer buttons, cards, menus;
}

/* or this: */
@layer components.buttons, components.cards, components.menus;

That might come in handy, but I also don't want to overly abstract things. That might be a better strategy for a project that's scoped to a well-defined design system. Another thing we could leverage is unlayered styles and the fact that any normal styles not contained in a cascade layer get the highest priority:

@layer legacy { a { color: red; } }
@layer reset { a { color: orange; } }
@layer base { a { color: yellow; } }

/* unlayered */
a { color: green; } /* highest priority */

But I like the idea of keeping all styles organized in explicit layers, because it keeps things modular and maintainable, at least in this context. Let's move on to adding cascade layers to this project.

Integrating Cascade Layers

We need to define the layer order at the top of the file:

@layer reset, base, layout, components, utilities;

This makes it easy to tell which layer takes precedence over which (priority increases from left to right), and now we can think in terms of layer responsibility instead of selector weight. Moving forward, I'll proceed through the stylesheet from top to bottom.

First, I noticed that the Poppins font was imported in both the HTML and CSS files, so I removed the CSS import and left the one in index.html, as that's generally recommended for loading fonts quickly.

Next up are the universal selector (*) styles, which include classic reset styles that are perfect for @layer reset:

@layer reset {
  * {
    margin: 0;
    padding: 0;
    box-sizing: border-box;
  }
}

With that out of the way, the body element selector is next. I'm putting this into @layer base because it contains core styles for the project, like backgrounds and fonts:

@layer base {
  body {
    background-image: url("bg.svg"); /* Renamed to bg.svg for clarity */
    font-family: "Poppins", sans-serif;
    /* ... other styles */
  }
}

The way I'm tackling this is that styles in the base layer should generally affect the whole document. So far, no page breaks or anything.

Swapping IDs For Classes

Following the body element selector is the page loader, which is defined as an ID selector, #loader. I'm a firm believer in using class selectors over ID selectors as much as possible. It keeps specificity low by default, which prevents specificity battles and makes the code a lot more maintainable.

So, I went into the index.html file and refactored elements with id="loader" to class="loader". In the process, I saw another element with id="page" and changed that at the same time. While still in the index.html file, I noticed a few div elements missing closing tags. It is astounding how permissive browsers are with that. Anyways, I cleaned those up and moved the <script> tag out of the .heading element to be a direct child of body.
Let's not make it any tougher to load our scripts.

Now that we've levelled the specificity playing field by moving IDs to classes, we can drop the styles into the components layer, since a loader is indeed a reusable component:

@layer components {
  .loader {
    width: 100%;
    height: 100vh;
    /* ... */
  }
  .loader .loading { /* ... */ }
  .loader .loading span { /* ... */ }
  .loader .loading span:before { /* ... */ }
}

Animations

Next are the keyframes. This was a bit tricky, but I eventually chose to isolate animations in a new layer of their own and updated the layer order to include it:

@layer reset, base, layout, components, utilities, animations;

But why place animations last? Because animations are generally the last to run and shouldn't be affected by style conflicts. I searched the project's styles for @keyframes and dumped them all into the new layer:

@layer animations {
  @keyframes loading { /* ... */ }
  @keyframes loading2 { /* ... */ }
  @keyframes pageShow { /* ... */ }
}

This gives a clear distinction between static styles and dynamic ones while also encouraging reusability.

Layouts

The #page selector has the same ID issue as #loader, and since we fixed it in the HTML earlier, we can change it to .page and drop it into the layout layer, as its main purpose is to control the initial visibility of the content:

@layer layout {
  .page {
    display: none;
  }
}

Custom Scrollbars

Where do we put these? Scrollbars are global elements that persist across the site. This might be a gray area, but I'd say they fit perfectly in @layer base, since they're a global, default feature.

@layer base {
  /* ... */
  ::-webkit-scrollbar {
    width: 8px;
  }
  ::-webkit-scrollbar-track {
    background: #0e0e0f;
  }
  ::-webkit-scrollbar-thumb {
    background: #5865f2;
    border-radius: 100px;
  }
  ::-webkit-scrollbar-thumb:hover {
    background: #202225;
  }
}

I also removed the !important keywords as I came across them.

Navigation

The nav element is pretty straightforward, as it is the main structural container that defines the position and dimensions of the navigation bar. It definitely goes in the layout layer:

@layer layout {
  /* ... */
  nav {
    display: flex;
    height: 55px;
    width: 100%;
    padding: 0 50px; /* Consistent horizontal padding */
    /* ... */
  }
}

Logo

We have three style blocks tied to the logo: nav .logo, .logo img, and #botLogo. These names are redundant and could benefit from some component reusability. Here's how I'm approaching it:

- The nav .logo selector is overly specific, since the logo can be reused in other places. I dropped the nav so that the selector is just .logo. There was also an !important keyword in there, so I removed it.
- I updated .logo to be a Flexbox container to help position .logo img, which was previously set with less flexible absolute positioning.
- The #botLogo ID is declared twice, so I merged the two rulesets into one and lowered its specificity by making it a .botLogo class. And, of course, I updated the HTML to replace the ID with the class.
- The .logo img selector becomes .botLogo, making it the base class for styling all instances of the logo.

Now we're left with this:

/* initially .logo img */
.botLogo {
  border-radius: 50%;
  height: 40px;
  border: 2px solid #5865f2;
}

/* initially #botLogo */
.botLogo {
  border-radius: 50%;
  width: 180px;
  /* ... */
}

The difference is that one is used in the navigation and the other in the hero section heading. We can transform the second .botLogo by slightly increasing the specificity with a .heading .botLogo selector.
We may as well clean up any duplicated styles as we go. Let's place the entire code in the components layer, as we've successfully turned the logo into a reusable component:

@layer components {
  /* ... */
  .logo {
    font-size: 30px;
    font-weight: bold;
    color: #fff;
    display: flex;
    align-items: center;
    gap: 10px;
  }
  .botLogo {
    aspect-ratio: 1; /* maintains square dimensions with width */
    border-radius: 50%;
    width: 40px;
    border: 2px solid #5865f2;
  }
  .heading .botLogo {
    width: 180px;
    height: 180px;
    background-color: #5865f2;
    box-shadow: 0px 0px 8px 2px rgba(88, 101, 242, 0.5);
    /* ... */
  }
}

This was a bit of work! But now the logo is properly set up as a component that fits perfectly in the new layer architecture.

Navigation List

This is a typical navigation pattern: take an unordered list (<ul>) and turn it into a flexible container that displays all of the list items horizontally on the same row (with wrapping allowed). It's a type of navigation that can be reused, which belongs in the components layer. But there's a little refactoring to do before we add it. There's already a .mainMenu class, so let's lean into that. We'll swap out any nav ul selectors with that class. Again, it keeps specificity low while making it clearer what that element does.

@layer components {
  /* ... */
  .mainMenu {
    display: flex;
    flex-wrap: wrap;
    list-style: none;
  }
  .mainMenu li {
    margin: 0 4px;
  }
  .mainMenu li a {
    color: #fff;
    text-decoration: none;
    font-size: 16px;
    /* ... */
  }
  .mainMenu li a:where(.active, .hover) {
    color: #fff;
    background: #1d1e21;
  }
  .mainMenu li a.active:hover {
    background-color: #5865f2;
  }
}

There are also two buttons in the code that toggle the navigation between open and closed states when the navigation is collapsed on smaller screens. They're tied specifically to the .mainMenu component, so we'll keep everything together in the components layer. We can combine and simplify the selectors in the process for cleaner, more readable styles:

@layer components {
  /* ... */
  nav:is(.openMenu, .closeMenu) {
    font-size: 25px;
    display: none;
    cursor: pointer;
    color: #fff;
  }
}

I also noticed that several other selectors in the CSS were not used anywhere in the HTML, so I removed those styles to keep things trim. There are automated ways to go about this, too.

Media Queries

Should media queries get a dedicated layer (@layer responsive), or should they live in the same layer as the elements they target? I really struggled with that question while refactoring the styles for this project. I did some research and testing, and my verdict is the latter: media queries ought to be in the same layer as the elements they affect. My reasoning is that keeping them together:

- Maintains responsive styles with their base element styles,
- Makes overrides predictable, and
- Flows well with the component-based architecture common in modern web development.

However, it also means responsive logic is scattered across layers. But that beats the alternative, with a gap between the layer where elements are styled and the layer where their responsive behaviors are managed. That's a deal-breaker for me, because it's way too easy to update styles in one layer and forget to update the corresponding responsive styles in the responsive layer.

The other big point is that media queries in the same layer have the same priority as their elements. This is consistent with my overall goal of keeping the CSS Cascade simple and predictable, free of style conflicts. Plus, the CSS nesting syntax makes the relationship between media queries and elements super clear.
Here's an abbreviated example of how things look when we nest media queries in the components layer:

@layer components {
  .mainMenu {
    display: flex;
    flex-wrap: wrap;
    list-style: none;
  }

  @media (max-width: 900px) {
    .mainMenu {
      width: 100%;
      text-align: center;
      height: 100vh;
      display: none;
    }
  }
}

This also allows me to nest a component's child element styles (e.g., nav .openMenu and nav .closeMenu):

@layer components {
  nav {
    &.openMenu {
      display: none;

      @media (max-width: 900px) {
        display: block;
      }
    }
  }
}

Typography & Buttons

The .title and .subtitle styles can be seen as typography components, so they and their responsive counterparts go into (you guessed it) the components layer:

@layer components {
  .title {
    font-size: 40px;
    font-weight: 700;
    /* etc. */
  }
  .subtitle {
    color: rgba(255, 255, 255, 0.75);
    font-size: 15px;
    /* etc. */
  }

  @media (max-width: 420px) {
    .title {
      font-size: 30px;
    }
    .subtitle {
      font-size: 12px;
    }
  }
}

What about buttons? Like many websites, this one has a class, .btn, for that component, so we can chuck those styles in there as well:

@layer components {
  .btn {
    color: #fff;
    background-color: #1d1e21;
    font-size: 18px;
    /* etc. */
  }
  .btn-primary {
    background-color: #5865f2;
  }
  .btn-secondary {
    transition: all 0.3s ease-in-out;
  }
  .btn-primary:hover {
    background-color: #5865f2;
    box-shadow: 0px 0px 8px 2px rgba(88, 101, 242, 0.5);
    /* etc. */
  }
  .btn-secondary:hover {
    background-color: #1d1e21;
    background-color: rgba(88, 101, 242, 0.7);
  }

  @media (max-width: 420px) {
    .btn {
      font-size: 14px;
      margin: 2px;
      padding: 8px 13px;
    }
  }
  @media (max-width: 335px) {
    .btn {
      display: flex;
      flex-direction: column;
    }
  }
}

The Final Layer

We haven't touched the utilities layer yet! I've reserved this layer for helper classes designed for specific purposes, like hiding content. In this case, there's a .noselect class that fits right in. It has a single reusable purpose: to disable selection on an element. So, that's going to be the only rule in our utilities layer:

@layer utilities {
  .noselect {
    -webkit-touch-callout: none;
    -webkit-user-select: none;
    -khtml-user-select: none;
    -webkit-user-drag: none;
    -moz-user-select: none;
    -ms-user-select: none;
    user-select: none;
  }
}

And that's it! We've completely refactored the CSS of a real-world project to use CSS Cascade Layers. You can compare where we started with the final code.

It Wasn't All Easy

That's not to say that working with Cascade Layers was especially challenging, but there were some sticky points in the process that forced me to pause and carefully think through what I was doing. I kept some notes as I worked:

- It's tough to determine where to start with an existing project. However, by defining the layers first and setting their priority levels, I had a framework for deciding how and where to move specific styles, even though I was not totally familiar with the existing CSS. That helped me avoid situations where I might second-guess myself or define extra, unnecessary layers.
- Browser support is still a thing! I mean, Cascade Layers enjoy 94% support coverage as I'm writing this, but yours might be one of those sites that needs to accommodate legacy browsers that are unable to support layered styles.
- It wasn't clear where media queries fit into the process. Media queries put me on the spot to find where they work best: nested in the same layers as their selectors, or in a completely separate layer? I went with the former, as you know.
- The !important keyword is a juggling act. It inverts the entire layering priority system, and this project was littered with instances.
Once you start chipping away at those, the existing CSS architecture erodes, and you have to balance refactoring the code against fixing what's already there to know exactly how styles cascade.

Overall, refactoring a codebase for CSS Cascade Layers is a bit daunting at first glance. The important thing, though, is to acknowledge that it isn't really the layers that complicate things, but the existing codebase. It's tough to completely overhaul someone's existing approach for a new one, even if the new approach is elegant.

Where Cascade Layers Helped (And Didn't)

Establishing layers improved the code, no doubt. I'm sure there are some performance gains in there, since we were able to remove unused and conflicting styles, but the real win is a more maintainable set of styles. It's easier to find what you need, know what specific style rules are doing, and decide where to insert new styles moving forward.

At the same time, I wouldn't say that Cascade Layers are a silver bullet solution. Remember, CSS is intrinsically tied to the HTML structure it queries. If the HTML you're working with is unstructured and suffers from div-itis, then you can safely bet that the effort to untangle that mess is higher and involves rewriting markup at the same time. Even so, refactoring CSS for cascade layers is most certainly worth it for the maintenance enhancements alone.

It may be easier to start from scratch and define layers as you work from the ground up, because there's less inherited overhead and technical debt to sort through. But if you have to start from an existing codebase, you might need to untangle the complexity of your styles first to determine exactly how much refactoring you're looking at.
  • From Data To Decisions: UX Strategies For Real-Time Dashboards
    smashingmagazine.com
I once worked with a fleet operations team that monitored dozens of vehicles in multiple cities. Their dashboard showed fuel consumption, live GPS locations, and real-time driver updates. Yet the team struggled to see what needed urgent attention. The problem was not a lack of data but a lack of clear indicators to support decision-making. There were no priorities, alerts, or context to highlight what mattered most at any moment.

Real-time dashboards are now critical decision-making tools in industries like logistics, manufacturing, finance, and healthcare. However, many of them fail to help users make timely and confident decisions, even when they show live data. Designing for real-time use is very different from designing static dashboards. The challenge is not only presenting metrics but enabling decisions under pressure. Real-time users face limited time and a high cognitive load. They need clarity on actions, not just access to raw data. This requires interface elements that support quick scanning, pattern recognition, and guided attention. Layout hierarchy, alert colors, grouping, and motion cues all help, but they must be driven by a deeper strategy: understanding what the user must decide in that moment.

This article explores practical UX strategies for real-time dashboards that enable real decisions. Instead of focusing only on visual best practices, it looks at how user intent, personalization, and cognitive flow can turn raw data into meaningful, timely insights.

Designing For Real-Time Comprehension: Helping Users Stay Focused Under Pressure

A GPS app not only shows users their location but also helps them decide where to go next. In the same way, a real-time dashboard should go beyond displaying the latest data. Its purpose is to help users quickly understand complex information and make informed decisions, especially in fast-paced environments with short attention spans.

How Users Process Real-Time Updates

Humans have limited cognitive capacity, so they can only process a small amount of data at once. Without proper context or visual cues, rapidly updating dashboards can overwhelm users and shift attention away from key information. To address this, I use the following approaches:

- Delta indicators and trend sparklines. Delta indicators show value changes at a glance, while sparklines are small line charts that reveal trends over time in a compact space. For example, a sales dashboard might show a green upward arrow next to revenue to indicate growth, along with a sparkline displaying sales trends over the past week.
- Subtle micro-animations. Small animations highlight changes without distracting users. Research in cognitive psychology shows that such animations effectively draw attention, helping users notice updates while staying focused. For instance, a soft pulse around a changing metric can signal activity without overwhelming the viewer.
- Mini-history views. Showing a short history of recent changes reduces reliance on memory. For example, a dashboard might let users scroll back a few minutes to review updates, supporting better understanding and verification of data trends.
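As a minimal illustration of the first approach, here is a sketch of a KPI card that pairs a delta indicator with an inline SVG sparkline. All of the markup, class names, and values are hypothetical placeholders:

<div class="kpi">
  <span class="kpi-label">Revenue (last 7 days)</span>
  <span class="kpi-value">$128k</span>
  <!-- Delta indicator: direction and scale of change at a glance -->
  <span class="kpi-delta up">▲ +3.2%</span>
  <!-- Sparkline: no axes or legend, just the shape of the trend -->
  <svg class="sparkline" viewBox="0 0 100 24" preserveAspectRatio="none"
       role="img" aria-label="Revenue trending upward over the past week">
    <polyline fill="none" stroke="currentColor" stroke-width="2"
              points="0,20 16,18 33,19 50,12 66,14 83,8 100,4" />
    <circle cx="100" cy="4" r="2.5" fill="currentColor" />
    <!-- the circle accents the latest data point -->
  </svg>
</div>

Note the aria-label on the SVG: a sparkline reads as pure shape for sighted users scanning the dashboard, but it still needs a textual equivalent for everyone else.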
Common Challenges In Real-Time Dashboards

Many live dashboards fail when treated as static reports instead of dynamic tools for quick decision-making. In my early projects, I made this mistake, resulting in cluttered layouts, distractions, and frustrated users. Typical errors include the following:

- Overcrowded interfaces: Presenting too many metrics competes for users' attention, making it hard to focus.
- Flat visual hierarchy: Without clear emphasis on critical data, users might focus on less important information.
- No record of changes: When numbers update instantly with no explanation, users can feel lost or confused.
- Excessive refresh rates: Not all data needs constant updates. Updating too frequently can create unnecessary motion and cognitive strain.

Managing Stress And Cognitive Overload

Under stress, users depend on intuition and focus only on immediately relevant information. If a dashboard updates too quickly or shows conflicting alerts, users may delay actions or make mistakes. It is important to:

- Prioritize the most important data first to avoid overwhelming the user.
- Offer snapshot or pause options so users can take time to process information.
- Use clear indicators to show whether an action is required or everything is operating normally.

In real-time environments, the best dashboards balance speed with calmness and clarity. They are not just data displays but tools that promote live thinking and better decisions.

Enabling Personalization For Effective Data Consumption

Many analytics tools let users build custom dashboards, but these design principles guide layouts that support decision-making. Personalization options such as custom metric selection, alert preferences, and update pacing help manage cognitive load and improve data interpretation.

Cognitive Challenge             | UX Risk in Real-Time Dashboards             | Design Strategy to Mitigate
Users can't track rapid changes | Confusion, missed updates, second-guessing  | Use delta indicators, change animations, and trend sparklines
Limited working memory          | Overload from too many metrics at once      | Prioritize key KPIs; apply progressive disclosure
Visual clutter under stress     | Tunnel vision or misprioritized focus       | Apply a clear visual hierarchy; minimize non-critical elements
Unclear triggers or alerts      | Decision delays, incorrect responses        | Use thresholds, binary status indicators, and plain language
Lack of context/history         | Misinterpretation of sudden shifts          | Provide micro-history, snapshot freeze, or hover reveal

Table: Common cognitive challenges in real-time dashboards and UX strategies to overcome them.

Designing For Focus: Using Layout, Color, And Animation To Drive Real-Time Decisions

Layout, color, and animation do more than improve appearance. They help users interpret live data quickly and make decisions under time pressure. Since users respond to rapidly changing information, these elements must reduce cognitive load and highlight key insights immediately.

Creating A Visual Hierarchy To Guide Attention

A clear hierarchy directs users' eyes to key metrics. Arrange elements so the most important data stands out. For example, place critical figures like sales volume or system health in the upper left corner to match common scanning patterns.
Limit visible elements to about five to prevent overload and ease processing. Group related data into cards to improve scannability and help users focus without distraction.

Using Color Purposefully To Convey Meaning

Color communicates meaning in data visualization. Red or orange indicates critical alerts or negative trends, signaling urgency. Blue and green represent positive or stable states, offering reassurance. Neutral tones like gray support background data and make key colors stand out. Ensure accessibility with strong contrast, and pair colors with icons or labels. For example, bright red can highlight outages while muted gray marks historical logs, keeping attention on urgent issues.

Supporting Comprehension With Subtle Animation

Animation should clarify, not distract. Smooth transitions of 200 to 400 milliseconds communicate changes effectively. For instance, upward motion in a line chart reinforces growth. Hover effects and quick animations provide feedback and improve interaction. Thoughtful motion makes changes noticeable while maintaining focus.

Together, layout, color, and animation create an experience that enables fast, accurate interpretation of live data. Real-time dashboards support continuous monitoring and decision-making by reducing mental effort and highlighting anomalies or trends. Personalization allows users to tailor dashboards to their roles, improving relevance and efficiency. For example, operations managers may focus on system health metrics while sales directors prioritize revenue KPIs. This adaptability makes dashboards dynamic, strategic tools.

Element      | Placement & Visual Weight                       | Purpose & Suggested Colors                                                | Animation Use Case & Effect
Primary KPIs | Center or top-left; bold, large font            | Highlight critical metrics; typically stable states                      | Value updates: smooth increase (200-400 ms)
Controls     | Top or left panel; light, minimal visual weight | Provide navigation/filtering; neutral color schemes                      | User actions: subtle feedback (100-150 ms)
Charts       | Middle or right; medium emphasis                | Show trends and comparisons; blue/green for positives, grey for neutral  | Chart trends: trail or fade (300-600 ms)
Alerts       | Edge of dashboard or floating; high contrast    | Signal critical issues; red/orange for alerts, yellow/amber for warnings | Quick animations on appearance; highlight changes

Table: Design elements, placement, color, and motion strategies for effective real-time dashboards.
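Here is a minimal CSS sketch of the subtle update animation described above, using the 200 to 400 millisecond window from the table. The class names are hypothetical:

/* Fade-and-settle when a KPI value updates: noticeable, not distracting. */
.kpi-value.is-updating {
  animation: settle 300ms ease-out;
}

@keyframes settle {
  from { opacity: 0.4; transform: translateY(4px); }
  to   { opacity: 1;   transform: translateY(0); }
}

/* Respect users who have opted out of motion. */
@media (prefers-reduced-motion: reduce) {
  .kpi-value.is-updating {
    animation: none;
  }
}

A script would toggle the is-updating class whenever fresh data arrives and remove it on animationend, so that repeated updates can replay the cue.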
Clarity In Motion: Designing Dashboards That Make Change Understandable

If users cannot interpret changes quickly, the dashboard fails regardless of its visual design. Over time, I have developed methods that reduce confusion and make change feel intuitive rather than overwhelming.

One of the most effective tools I use is the sparkline, a compact line chart that shows a trend over time and is typically placed next to a key performance indicator. Unlike full charts, sparklines omit axes and labels. Their simplicity makes them powerful, since they instantly show whether a metric is trending up, down, or steady. For example, placing a sparkline next to monthly revenue immediately reveals whether performance is improving or declining, even before the viewer interprets the number.

To use sparklines effectively, follow these principles:

- Pair sparklines with metrics such as revenue, churn rate, or user activity so users can see both the value and its trajectory at a glance.
- Simplify by removing clutter like axis lines or legends unless they add real value.
- Highlight the latest data point with a dot or accent color, since current performance often matters more than historical context.
- Limit the time span. Too many data points compress the sparkline and hurt readability. A focused window, such as the last 7 or 30 days, keeps the trend clear.
- Use sparklines in comparative tables. When placed in rows (for example, across product lines or regions), they reveal anomalies or emerging patterns that static numbers may hide.

[Figure: Interactive P&L performance dashboard with forecast and variance tracking.]

I combine sparklines with directional indicators like arrows and percentage deltas to support quick interpretation. For example, pairing +3.2% with a rising sparkline shows both the direction and scale of change. I do not rely only on color to convey meaning: since 1 in 12 men is color-blind, using red and green alone can exclude some users. To ensure accessibility, I add shapes and icons alongside color cues.

Micro-animations provide subtle but effective signals. This counters change blindness, our tendency to miss non-salient changes. When numbers update, I use fade-ins or count-up transitions to indicate change without distraction. If a list reorders, such as when top-performing teams shift positions, a smooth slide animation under 300 milliseconds helps users maintain spatial memory. These animations reduce cognitive friction and prevent disorientation.

Layout is critical for clarifying change:

- I use modular cards with consistent spacing, alignment, and hierarchy to highlight key metrics.
- Cards are arranged in a sortable grid, allowing filtering by severity, recency, or relevance.
- Collapsible sections manage dense information while keeping important data visible for quick scanning and deeper exploration.

For instance, in a logistics dashboard, a card labeled "On-Time Deliveries" may display a weekly sparkline. If performance dips, the line flattens or turns slightly red, a downward arrow appears with a -1.8% delta, and the updated number fades in. This gives instant clarity without requiring users to open a detailed chart.

All these design choices support fast, informed decision-making. In high-velocity environments like product analytics, logistics, or financial operations, dashboards must do more than present data. They must reduce ambiguity and help teams quickly detect change, understand its impact, and take action.

Making Reliability Visible: Designing For Trust In Real-Time Data Interfaces

In real-time data environments, reliability is not just a technical feature. It is the foundation of user trust. Dashboards are used in high-stakes, fast-moving contexts where decisions depend on timely, accurate data. Yet these systems often face less-than-ideal conditions such as unreliable networks, API delays, and incomplete datasets. Designing for these realities is not just damage control. It is essential for making data experiences usable and trustworthy.

When data lags or fails to load, it can mislead users in serious ways:

- A dip in a trendline may look like a market decline when it is only a delay in the stream.
- Missing categories in a bar chart, if not clearly signaled, can lead to flawed decisions.

To mitigate this, every data point should be paired with its condition: interfaces must show not only what the data says but also how current or complete it is.

One effective strategy is replacing traditional spinners with skeleton UIs. These are greyed-out, animated placeholders that suggest the structure of incoming data. They set expectations, reduce anxiety, and show that the system is actively working. For example, in a financial dashboard, users might see the outline of a candlestick chart filling in as new prices arrive. This signals that data is being refreshed, not stalled.

Handling Data Unavailability

When data is unavailable, I show cached snapshots from the most recent successful load, labeled with timestamps such as "Data as of 10:42 AM." This keeps users aware of what they are viewing. In operational dashboards, such as logistics or monitoring systems, this approach lets users act confidently even when real-time updates are temporarily out of sync.

Managing Connectivity Failures

To handle connectivity failures, I use auto-retry mechanisms with exponential backoff, giving the system several chances to recover quietly before notifying the user. If retries fail, I maintain transparency with clear banners such as "Offline. Reconnecting..." In one product, this approach prevented users from reloading entire dashboards unnecessarily, especially in areas with unreliable Wi-Fi.
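Here is a minimal sketch of that retry behaviour, with hypothetical names and delays; a production version would also need to cancel retries when a request is superseded by newer data:

// Auto-retry with exponential backoff: 500 ms, 1 s, 2 s, 4 s...
async function fetchWithBackoff(url, maxRetries = 4) {
  for (let attempt = 0; ; attempt++) {
    try {
      const response = await fetch(url);
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      return response; // success: recover quietly, no banner shown
    } catch (error) {
      if (attempt >= maxRetries) {
        showBanner("Offline. Reconnecting..."); // only now tell the user
        throw error;
      }
      await new Promise((resolve) => setTimeout(resolve, 500 * 2 ** attempt));
    }
  }
}

// Stand-in for a real UI banner component.
function showBanner(message) {
  console.warn(message);
}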
Making Reliability Visible: Designing For Trust In Real-Time Data Interfaces

In real-time data environments, reliability is not just a technical feature. It is the foundation of user trust. Dashboards are used in high-stakes, fast-moving contexts where decisions depend on timely, accurate data. Yet these systems often face less-than-ideal conditions such as unreliable networks, API delays, and incomplete datasets. Designing for these realities is not just damage control. It is essential for making data experiences usable and trustworthy.

When data lags or fails to load, it can mislead users in serious ways:

A dip in a trendline may look like a market decline when it is only a delay in the stream.
Missing categories in a bar chart, if not clearly signaled, can lead to flawed decisions.

To mitigate this, every data point should be paired with its condition: interfaces must show not only what the data says but also how current or complete it is.

One effective strategy is replacing traditional spinners with skeleton UIs. These are greyed-out, animated placeholders that suggest the structure of incoming data. They set expectations, reduce anxiety, and show that the system is actively working. For example, in a financial dashboard, users might see the outline of a candlestick chart filling in as new prices arrive. This signals that data is being refreshed, not stalled.

Handling Data Unavailability

When data is unavailable, I show cached snapshots from the most recent successful load, labeled with timestamps such as "Data as of 10:42 AM". This keeps users aware of what they are viewing. In operational dashboards such as logistics or monitoring systems, this approach lets users act confidently even when real-time updates are temporarily out of sync.

Managing Connectivity Failures

To handle connectivity failures, I use auto-retry mechanisms with exponential backoff, giving the system several chances to recover quietly before notifying the user. If retries fail, I maintain transparency with clear banners such as "Offline. Reconnecting...". In one product, this approach prevented users from reloading entire dashboards unnecessarily, especially in areas with unreliable Wi-Fi.

Ensuring Reliability With Accessibility

Reliability strongly connects with accessibility:

Real-time interfaces must announce updates without disrupting user focus, beyond just screen reader compatibility.
ARIA live regions quietly narrate significant changes in the background, giving screen reader users timely updates without confusion.
All controls remain keyboard-accessible.
Animations follow motion-reduction preferences to support users with vestibular sensitivities.

Data Freshness Indicator

A compact but powerful pattern I often implement is the Data Freshness Indicator, a small widget that:

Shows sync status,
Displays the last updated time,
Includes a manual refresh button.

This improves transparency and reinforces user control. Since different users interpret these cues differently, advanced systems allow personalization. For example, analysts may prefer detailed logs of update attempts, while business users might see a simple status such as "Live", "Stale", or "Paused".

Reliability in data visualization is not about promising perfection. It is about creating a resilient, informative experience that supports human judgment by revealing the true state of the system. When users understand what the dashboard knows, what it does not, and what actions it is taking, they are more likely to trust the data and make smarter decisions.
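To tie several of these reliability patterns together, here is a minimal TypeScript sketch of auto-retry with exponential backoff feeding a freshness/status element. The endpoint, element id, and retry counts are illustrative assumptions, not details from the products described.

```typescript
// Status element doubles as a data freshness indicator; giving it
// aria-live="polite" in the markup lets screen readers hear updates.
const status = document.querySelector<HTMLElement>("#sync-status")!;

async function fetchWithBackoff(url: string, maxRetries = 4): Promise<unknown> {
  for (let attempt = 0; ; attempt++) {
    try {
      const res = await fetch(url);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      status.textContent = `Live. Updated ${new Date().toLocaleTimeString()}`;
      return await res.json();
    } catch (err) {
      if (attempt >= maxRetries) {
        // Stop retrying quietly; stay transparent and keep cached data visible.
        status.textContent = "Offline. Showing last successful data.";
        throw err;
      }
      // Wait 1s, 2s, 4s, 8s... giving the system a chance to recover
      // before alarming the user.
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
}
```

A manual refresh button can simply call fetchWithBackoff again, which keeps user control in the loop.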
Real-World Case Study

In my work across logistics, hospitality, and healthcare, the challenge has always been to distill complexity into clarity. A well-designed dashboard is more than functional; it serves as a trusted companion in decision-making, embedding clarity, speed, and confidence from the start.

1. Fleet Management Dashboard

A client in the car rental industry struggled with fragmented operational data. Critical details like vehicle locations, fuel usage, maintenance schedules, and downtime alerts were scattered across static reports, spreadsheets, and disconnected systems. Fleet operators had to manually cross-reference data sources, even for basic dispatch tasks, which caused missed warnings, inefficient routing, and delays in response. We solved these issues by redesigning the dashboard strategically, focusing on both layout improvements and how users interpret and act on information.

Strategic design improvements and outcomes:

Instant visibility of KPIs: High-contrast cards at the top of the dashboard made key performance indicators instantly visible. Example: Fuel consumption anomalies that previously went unnoticed for days were flagged within hours, enabling quick corrective action.
Clear trend and pattern visualization: Booking forecasts, utilization graphs, and city-by-city comparisons highlighted performance trends. Example: A weekday-weekend booking chart helped a regional manager spot underperformance in one city and plan targeted vehicle redistribution.
Unified operational snapshot: Cost, downtime, and service schedules were grouped into one view. Result: The operations team could assess fleet health in under five minutes each morning instead of using multiple tools.
Predictive context for planning: Visual cues showed peak usage periods and historical demand curves. Result: Dispatchers prepared for forecasted spikes, reducing customer wait times and improving resource availability.
Live map with real-time status: A color-coded map displays vehicle status: green for active, red for urgent attention, gray for idle. Result: Supervisors quickly identified inactive or delayed vehicles and rerouted resources as needed.
Role-based personalization: Personalization options were built in, allowing each role to customize dashboard views. Example: Fleet managers prioritized financial KPIs, while technicians filtered for maintenance alerts and overdue service reports.

Strategic impact: The dashboard redesign was not only about improving visuals. It changed how teams interacted with data. Operators no longer needed to search for insights, as the system presented them in line with tasks and decision-making. The dashboard became a shared reference for teams with different goals, enabling real-time problem solving, fewer manual checks, and stronger alignment across roles. Every element was designed to build both understanding and confidence in action.

2. Hospitality Revenue Dashboard

One of our clients, a hospitality group with 11 hotels in the UAE, faced a growing strategic gap. They had data from multiple departments, including bookings, events, food and beverage, and profit and loss, but it was spread across disconnected dashboards.

Strategic design improvements and outcomes:

All revenue streams (rooms, restaurants, bars, and profit and loss) were consolidated into a single filterable dashboard. Example: A revenue manager could filter by property to see if a drop in restaurant revenue was tied to lower occupancy or was an isolated issue. The structure supported daily operations, weekly reviews, and quarterly planning.
Disconnected charts and metrics were replaced with a unified visual narrative showing how revenue streams interacted. Example: The dashboard revealed how event bookings influenced bar sales or staffing. This shifted teams from passive data consumption to active interpretation.
AI modules for demand forecasting, spend prediction, and pricing recommendations were embedded in the dashboard. Result: Managers could test rate changes with interactive sliders and instantly view effects on occupancy, revenue per available room, and food and beverage income. This enabled proactive scenario planning.
Compact, color-coded sparklines were placed next to each key metric to show short- and long-term trends. Result: These visuals made it easy to spot seasonal shifts or channel-specific patterns without switching views or opening separate reports.
Predictive overlays such as forecast bands and seasonality markers were added to performance graphs. Example: If occupancy rose but lagged behind seasonal forecasts, the dashboard surfaced the gap, prompting early action such as promotions or issue checks.

Strategic impact: By aligning the dashboard structure with real pricing and revenue strategies, the client shifted from static reporting to forward-looking decision-making. This was not a cosmetic interface update. It was a complete rethinking of how data could support business goals. The result enabled every team, from finance to operations, to interpret data based on their specific roles and responsibilities.

3. Healthcare Interoperability Dashboard

In healthcare, timely and accurate access to patient information is essential. A multi-specialist hospital client struggled with fragmented data. Doctors had to consult separate platforms such as electronic health records, lab results, and pharmacy systems to understand a patient's condition. This fragmented process slowed decision-making and increased risks to patient safety.

Strategic design improvements and outcomes:

Patient medical history was integrated to unify lab reports, medications, and allergy information in one view. Example: A cardiologist could review recent cardiac markers with active medications and allergy alerts in the same place, enabling faster diagnosis and treatment.
Lab report tracking was upgraded to show test type, date, status, and a clear summary with labels such as "Pending", "Completed", and "Awaiting Review". Result: Trends were displayed with sparklines and color-coded indicators, helping clinicians quickly spot abnormalities or improvements.
A medication management module was added for prescription entry, viewing, and exporting. It included dosage, frequency, and prescribing physician details. Example: Specialists could customize it to highlight drugs relevant to their practice, reducing overload and focusing on critical treatments.
Rapid filtering options were introduced to search by patient name, medical record number, date of birth, gender, last visit, insurance company, or policy number. Example: Billing staff could locate patients by insurance details, while clinicians filtered records by visits or demographics.
Visual transparency was provided through interactive tooltips explaining alert rationales and flagged data points. Result: Clinicians gained immediate context, such as the reason a lab value was marked as critical, supporting informed and timely decisions.

Strategic impact: Our design encourages active decision-making instead of passive data review. Interactive tooltips ensure visual transparency by explaining the rationale behind alerts and flagged data points. These information boxes give clinicians immediate context, such as why a lab value is marked critical, helping them understand implications and next steps without delay.

Key UX Insights From The Above 3 Examples

Design should drive conclusions, not just display data. Contextualized data enabled faster and more confident decisions. For example, a logistics dashboard flagged high-risk delays so dispatchers could act immediately.
Complexity should be structured, not eliminated. Tools used timelines, layering, and progressive disclosure to handle dense information. A financial tool grouped transactions by time blocks, easing cognitive load without losing detail.
Trust requires clear system logic. Users trusted predictive alerts only after understanding their triggers. A healthcare interface added a "Why this alert?" option that explained the reasoning.
The aim is clarity and action, not visual polish. Redesigns improved speed, confidence, and decision-making. In real-time contexts, the delays caused by confusion are more harmful than surface-level design flaws.

Final Takeaways

Real-time dashboards are not about overwhelming users with data. They are about helping them act quickly and confidently. The most effective dashboards reduce noise, highlight the most important metrics, and support decision-making in complex environments. Success lies in balancing visual clarity with cognitive ease while accounting for human limits like memory, stress, and attention alongside technical needs.

Do:

Prioritize key metrics in a clear order so priorities are obvious. For instance, a support manager may track open tickets before response times.
Use subtle micro-animations and small visual cues to indicate changes, helping users spot trends without distraction.
Display data freshness and sync status to build trust.
Plan for edge cases like incomplete or offline data to keep the experience consistent.
Ensure accessibility with high contrast, ARIA labels, and keyboard navigation.

Don't:

Overcrowd the interface with too many metrics.
Rely only on color to communicate critical information.
Update all data at once or too often, which can cause overload.
Hide failures or delays; transparency helps users adapt.

Over time, I've come to see real-time dashboards as decision assistants rather than control panels. When users say, "This helps me stay in control," it reflects a design built on empathy that respects cognitive limits and enhances decision-making. That is the true measure of success.
  • Prompting Is A Design Act: How To Brief, Guide And Iterate With AI
    smashingmagazine.com
In "A Week In The Life Of An AI-Augmented Designer", we followed Kate's weeklong journey of her first AI-augmented design sprint. She had three realizations through the process:

AI isn't a co-pilot (yet); it's more like a smart, eager intern. One with access to a lot of information, good recall, fast execution, but no context. That mindset defined how she approached every interaction with AI: not as magic, but as management.
Don't trust; guide, coach, and always verify. Like any intern, AI needs coaching and supervision, and that's where her designerly skills kicked in. Kate relied on curiosity to explore, observation to spot bias, empathy to humanize the output, and critical thinking to challenge what didn't feel right. Her learning mindset helped her keep up with advances, and experimentation helped her learn by doing.
Prompting is part creative brief and part conversation design, just with an AI instead of a person. When you prompt an AI, you're not just giving instructions, but designing how it responds, behaves, and outputs information. If AI is like an intern, then the prompt is your creative brief that frames the task, sets the tone, and clarifies what good looks like. It's also your conversation script that guides how it responds, how the interaction flows, and how ambiguity is handled.

As designers, we're used to designing interactions for people. Prompting is us designing our own interactions with machines; it uses the same mindset with a new medium. It shapes an AI's behavior the same way you'd guide a user with structure, clarity, and intent.

If you've bookmarked, downloaded, or saved prompts from others, you're not alone. We've all done that during our AI journeys. But while someone else's prompts are a good starting point, you will get better and more relevant results if you can write your own prompts tailored to your goals, context, and style. Using someone else's prompt is like using a Figma template. It gets the job done, but mastery comes from understanding and applying the fundamentals of design, including layout, flow, and reasoning. Prompts have a structure too. And when you learn it, you stop guessing and start designing.

Note: All prompts in this article were tested using ChatGPT, not because it's the only game in town, but because it's friendly, flexible, and lets you talk like a person, yes, even after the recent GPT-5 update. That said, any LLM with a decent attention span will work. Results for the same prompt may vary based on the AI model you use, the AI's training, mood, and how confidently it can hallucinate.

Privacy PSA: As always, don't share anything you wouldn't want leaked, logged, or accidentally included in the next AI-generated meme. Keep it safe, legal, and user-respecting.

With that out of the way, let's dive into the mindset, anatomy, and methods of effective prompting as another tool in your design toolkit.

Mindset: Prompt Like A Designer

As designers, we storyboard journeys, wireframe interfaces to guide users, and write UX copy with intention. However, when prompting AI, we treat it differently: "Summarize these insights", "Make this better", "Write copy for this screen", and then wonder why the output feels generic, off-brand, or just meh. It's like expecting a creative team to deliver great work from a one-line Slack message. We wouldn't brief a freelancer, much less an intern, with "Design a landing page", so why brief AI that way?

Prompting Is A Creative Brief For A Machine

Think of a good prompt as a creative brief, just for a non-human collaborator. It needs similar elements, including a clear role, defined goal, relevant context, tone guidance, and output expectations. Just as a well-written creative brief unlocks alignment and quality from your team, a well-structured prompt helps the AI meet your expectations, even though it doesn't have real instincts or opinions.

Prompting Is Also Conversation Design

A good prompt goes beyond defining the task and sets the tone for the exchange by designing a conversation: guiding how the AI interprets, sequences, and responds. You shape the flow of tasks, how ambiguity is handled, and how refinement happens; that's conversation design.

Anatomy: Structure It Like A Designer

So how do you write a designer-quality prompt? That's where the W.I.R.E.+F.R.A.M.E. prompt design framework comes in: a UX-inspired framework for writing intentional, structured, and reusable prompts. Each letter represents a key design direction, grounded in the way UX designers already think. Just as a wireframe doesn't dictate final visuals, the WIRE+FRAME framework doesn't constrain creativity, but guides the AI with the structured information it needs.

Why not just use a series of back-and-forth chats with AI? You can, and many people do. But without structure, AI fills in the gaps on its own, often with vague or generic results. A good prompt upfront saves time, reduces trial and error, and improves consistency. And whether you're working on your own or across a team, a framework means you're not reinventing a prompt every time but reusing what works to get better results faster.

Just as we build wireframes before adding layers of fidelity, the WIRE+FRAME framework has two parts:

WIRE is the must-have skeleton. It gives the prompt its shape.
FRAME is the set of enhancements that bring polish, logic, tone, and reusability, like building a high-fidelity interface from the wireframe.

Let's improve Kate's original research synthesis prompt ("Read this customer feedback and tell me how we can improve financial literacy for Gen Z in our app"). To better reflect how people actually prompt in practice, let's tweak it to a more broadly applicable version: "Read this customer feedback and tell me how we can improve our app for Gen Z users." This one-liner mirrors the kinds of prompts we often throw at AI tools: short, simple, and often lacking structure. Now, we'll take that prompt and rebuild it using the first four elements of the W.I.R.E. framework, the core building blocks that provide AI with the main information it needs to deliver useful results.

W: Who & What
Define who the AI should be, and what it's being asked to deliver.

A creative brief starts with assigning the right hat. Are you briefing a copywriter? A strategist? A product designer? The same logic applies here. Give the AI a clear identity and task. Treat AI like a trusted freelancer or intern. Instead of saying "help me", tell it who it should act as and what's expected.

Example: "You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities."

I: Input Context
Provide background that frames the task.

Creative partners don't work in a vacuum. They need context: the audience, goals, product, competitive landscape, and what's been tried already. This is the "What you need to know before you start" section of the brief. Think: key insights, friction points, business objectives. The same goes for your prompt.

Example: "You are analyzing customer feedback for Fintech Brand's app, targeting Gen Z users. Feedback will be uploaded from sources such as app store reviews, survey feedback, and usability test transcripts."

R: Rules & Constraints
Clarify any limitations, boundaries, and exclusions.

Good creative briefs always include boundaries: what to avoid, what's off-brand, or what's non-negotiable. Things like brand voice guidelines, legal requirements, or time and word count limits. Constraints don't limit creativity; they focus it. AI needs the same constraints to avoid going off the rails.

Example: "Only analyze the uploaded customer feedback data. Do not fabricate pain points, representative quotes, journey stages, or patterns. Do not supplement with prior knowledge or hypothetical examples. Use clear, neutral, stakeholder-facing language."

E: Expected Output
Spell out what the deliverable should look like.

This is the deliverable spec: What does the finished product look like? What tone, format, or channel is it for? Even if the task is clear, the format often isn't. Do you want bullet points or a story? A table or a headline? If you don't say, the AI will guess, and probably guess wrong. Even better, include an example of the output you want, an effective way to help AI know what you're expecting. If you're using GPT-5, you can also mix examples across formats (text, images, tables) together.

Example: "Return a structured list of themes. For each theme, include: Theme Title; Summary of the Issue; Problem Statement; Opportunity; Representative Quotes (from data only); Journey Stage(s); Frequency (count from data); Severity Score (1-5), where 1 = minor inconvenience or annoyance, 3 = frustrating but workaround exists, 5 = blocking issue; Estimated Effort (Low / Medium / High), where Low = copy or content tweak, Medium = logic/UX/UI change, High = significant changes."

WIRE gives you everything you need to stop guessing and start designing your prompts with purpose. When you start with WIRE, your prompting is like a briefing, treating AI like a collaborator. Once you've mastered this core structure, you can layer in additional fidelity, like tone, step-by-step flow, or iterative feedback, using the FRAME elements. These five elements provide additional guidance and clarity to your prompt by layering clear deliverables, thoughtful tone, reusable structure, and space for creative iteration.

F: Flow of Tasks
Break complex prompts into clear, ordered steps.

This is your project plan or creative workflow that lays out the stages, dependencies, or sequence of execution. When the task has multiple parts, don't just throw it all into one sentence. You are doing the thinking and guiding AI. Structure it like steps in a user journey or modules in a storyboard. In this example, it fits as the blueprint for the AI to use to generate the table described in E: Expected Output.

Example: "Recommended flow of tasks: Step 1: Parse the uploaded data and extract discrete pain points. Step 2: Group them into themes based on pattern similarity. Step 3: Score each theme by frequency (from data), severity (based on content), and estimated effort. Step 4: Map each theme to the appropriate customer journey stage(s). Step 5: For each theme, write a clear problem statement and opportunity based only on what's in the data."

R: Reference Voice or Style
Name the desired tone, mood, or reference brand.

This is the brand voice section or style mood board: reference points that shape the creative feel. Sometimes you want buttoned-up.
Other times, you want conversational. Don't assume the AI knows your tone, so spell it out.

Example: "Use the tone of a UX insights deck or product research report. Be concise, pattern-driven, and objective. Make summaries easy to scan by product managers and design leads."

A: Ask for Clarification
Invite the AI to ask questions before generating, if anything is unclear.

This is your "Any questions before we begin?" moment, a key step in collaborative creative work. You wouldn't want a freelancer to guess what you meant if the brief was fuzzy, so why expect AI to do better? Ask AI to reflect or clarify before jumping into output mode.

Example: "If the uploaded data is missing or unclear, ask for it before continuing. Also, ask for clarification if the feedback format is unstructured or inconsistent, or if the scoring criteria need refinement."

M: Memory (Within The Conversation)
Reference earlier parts of the conversation and reuse what's working.

This is similar to keeping visual tone or campaign language consistent across deliverables in a creative brief. Prompts are rarely one-shot tasks, so this reminds AI of the tone, audience, or structure already in play. GPT-5 got better with memory, but this still remains a useful element, especially if you switch topics or jump around.

Example: "Unless I say otherwise, keep using this process: analyze the data, group into themes, rank by importance, then suggest an action for each."

E: Evaluate & Iterate
Invite the AI to critique, improve, or generate variations.

This is your revision loop: your way of prompting for creative direction, exploration, and refinement. Just like creatives expect feedback, your AI partner can handle review cycles if you ask for them. Build iteration into the brief to get closer to what you actually need. Sometimes, you may see ChatGPT test two versions of a response on its own by asking for your preference.

Example: "After listing all themes, identify the one with the highest combined priority score (based on frequency, severity, and effort). For that top-priority theme: Critically evaluate its framing: Is the title clear? Are the quotes strong and representative? Is the journey mapping appropriate? Suggest one improvement (e.g., improved title, more actionable implication, clearer quote, tighter summary). Rewrite the theme entry with that improvement applied. Briefly explain why the revision is stronger and more useful for product or design teams."

Here's a quick recap of the WIRE+FRAME framework, component by component:

W: Who & What. Define the AI persona and the core deliverable.
I: Input Context. Provide background or data scope to frame the task.
R: Rules & Constraints. Set boundaries.
E: Expected Output. Spell out the format and fields of the deliverable.
F: Flow of Tasks. Break the work into explicit, ordered sub-tasks.
R: Reference Voice/Style. Name the tone, mood, or reference brand to ensure consistency.
A: Ask for Clarification. Invite AI to pause and ask questions if any instructions or data are unclear before proceeding.
M: Memory. Leverage in-conversation memory to recall earlier definitions, examples, or phrasing without restating them.
E: Evaluate & Iterate. After generation, have the AI self-critique the top outputs and refine them.

And here's the full WIRE+FRAME prompt:

(W) You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities.

(I) You are analyzing customer feedback for Fintech Brand's app, targeting Gen Z users. Feedback will be uploaded from sources such as app store reviews, survey feedback, and usability test transcripts.

(R) Only analyze the uploaded customer feedback data. Do not fabricate pain points, representative quotes, journey stages, or patterns. Do not supplement with prior knowledge or hypothetical examples. Use clear, neutral, stakeholder-facing language.

(E) Return a structured list of themes. For each theme, include: Theme Title; Summary of the Issue; Problem Statement; Opportunity; Representative Quotes (from data only); Journey Stage(s); Frequency (count from data); Severity Score (1-5), where 1 = minor inconvenience or annoyance, 3 = frustrating but workaround exists, 5 = blocking issue; Estimated Effort (Low / Medium / High), where Low = copy or content tweak, Medium = logic/UX/UI change, High = significant changes.

(F) Recommended flow of tasks: Step 1: Parse the uploaded data and extract discrete pain points. Step 2: Group them into themes based on pattern similarity. Step 3: Score each theme by frequency (from data), severity (based on content), and estimated effort. Step 4: Map each theme to the appropriate customer journey stage(s). Step 5: For each theme, write a clear problem statement and opportunity based only on what's in the data.

(R) Use the tone of a UX insights deck or product research report. Be concise, pattern-driven, and objective. Make summaries easy to scan by product managers and design leads.

(A) If the uploaded data is missing or unclear, ask for it before continuing. Also, ask for clarification if the feedback format is unstructured or inconsistent, or if the scoring criteria need refinement.

(M) Unless I say otherwise, keep using this process: analyze the data, group into themes, rank by importance, then suggest an action for each.

(E) After listing all themes, identify the one with the highest combined priority score (based on frequency, severity, and effort). For that top-priority theme: Critically evaluate its framing: Is the title clear? Are the quotes strong and representative? Is the journey mapping appropriate? Suggest one improvement (e.g., improved title, more actionable implication, clearer quote, tighter summary). Rewrite the theme entry with that improvement applied. Briefly explain why the revision is stronger and more useful for product or design teams.

You could use ## to label the sections (e.g., ##FLOW), more for your readability than for AI. At over 400 words, this insights synthesis prompt example is a detailed, structured prompt, but it isn't customized for you and your work. The intent wasn't to give you a specific prompt (the proverbial fish), but to show how you can use a prompt framework like WIRE+FRAME to create a customized, relevant prompt that will help AI augment your work (teaching you to fish). Keep in mind that prompt length isn't a common concern; a lack of quality and structure is. As of the time of writing, AI models can easily process prompts that are thousands of words long.

Not every prompt needs all the FRAME components; WIRE is often enough to get the job done. But when the work is strategic or highly contextual, pick components from FRAME: the extra details can make a difference. Together, WIRE+FRAME give you a detailed framework for creating a well-structured prompt, with the crucial components first, followed by optional components:

WIRE builds a clear, focused prompt with role, input, rules, and expected output.
FRAME adds refinement like tone, reusability, and iteration.

Here are some scenarios and recommendations for using WIRE or WIRE+FRAME:

Simple, one-off analyses: quick prompting with minimal setup and no need for detailed process transparency. Recommended: WIRE.
Tight sprints or hackathons: rapid turnarounds, and times you don't need embedded review and iteration loops. Recommended: WIRE.
Highly iterative exploratory work: you expect to tweak results constantly and prefer manual control over each step. Recommended: WIRE.
Complex multi-step playbooks: detailed workflows that benefit from a standardized, repeatable, visible sequence. Recommended: WIRE+FRAME.
Shared or hand-off projects: different teams will rely on embedded clarification, memory, and consistent task flows for recurring analyses. Recommended: WIRE+FRAME.
Built-in quality control: you want the AI to flag top issues, self-critique, and refine, minimizing manual QC steps. Recommended: WIRE+FRAME.

Prompting isn't about getting it right the first time. It's about designing the interaction and redesigning when needed. With WIRE+FRAME, you're going beyond basic prompting and designing the interaction between you and AI.

From Gut Feel To Framework: A Prompt Makeover

Let's compare the results of Kate's first AI-augmented design sprint prompt (to synthesize customer feedback into design insights) with one based on the WIRE+FRAME prompt framework, with the same data and focusing on the top results.

Original prompt: "Read this customer feedback and tell me how we can improve our app for Gen Z users."

Initial ChatGPT results:

Improve app reliability to reduce crashes and freezing.
Provide better guidance or tutorials for financial tools like budgeting or goal setting.
Enhance the transparency of Zelle transfers by showing confirmation messages.
Speed up app loading and reduce lag on key actions.

With this version, you'd likely need to go back and forth with follow-up questions, rewrite the output for clarity, and add structure before sharing with your team.

WIRE+FRAME prompt above (with defined role, scope, rules, expected format, tone, flow, and evaluation loop). Initial ChatGPT results: a structured, prioritized list of themes.

You can clearly see the very different results from the two prompts, both using the exact same data. While the first prompt returns a quick list of ideas, the detailed WIRE+FRAME version doesn't just summarize feedback but structures it. Themes are clearly labeled, supported by user quotes, mapped to customer journey stages, and prioritized by frequency, severity, and effort. The structured prompt results can be used as-is or shared without needing to reformat, rewrite, or explain them (see the disclaimer below). The first prompt output needs massaging: it's not detailed, lacks evidence, and would require several rounds of clarification to be actionable. The first prompt may work when the stakes are low and you are exploring. But when your prompt is feeding design, product, or strategy, structure comes to the rescue.

Disclaimer: Know Your Data

A well-structured prompt can make AI output more useful, but it shouldn't be the final word, or your single source of truth. AI models are powerful pattern predictors, not fact-checkers. If your data is unclear or poorly referenced, even the best prompt may return confident nonsense. Don't blindly trust what you see.
Treat AI like a bright intern: fast, eager, and occasionally delusional. You should always be familiar with your data and validate what AI spits out. For example, in the WIRE+FRAME results above, AI rated the effort as low for financial tool onboarding. That could easily be medium or high. Good prompting should be backed by good judgment.

Try This Now

Start by using the WIRE+FRAME framework to create a prompt that will help AI augment your work. You could also rewrite the last prompt you were not satisfied with using WIRE+FRAME and compare the output. Feel free to use this simple tool to guide you through the framework.

Methods: From Lone Prompts To A Prompt System

Just as design systems have reusable components, your prompts can too. You can use the WIRE+FRAME framework to write detailed prompts, but you can also use the structure to create reusable components: pre-tested, plug-and-play pieces you can assemble to build high-quality prompts faster. Each part of WIRE+FRAME can be transformed into a prompt component: small, reusable modules that reflect your team's standards, voice, and strategy.

For instance, if you find yourself repeatedly using the same content for different parts of the WIRE+FRAME framework, you could save them as reusable components for you and your team. In the example below, we have two different reusable components for the W (Who & What) element: an insights analyst and an information architect.

W: Who & What
"You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities."
"You are an experienced information architect specializing in organizing enterprise content on intranets. Your task is to reorganize the content and features into categories that reflect user goals, reduce cognitive load, and increase findability."

Create and save prompt components and variations for each part of the WIRE+FRAME framework, allowing your team to quickly assemble new prompts by combining components when available, rather than starting from scratch each time.
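If your team stores components as plain strings, assembling them can even be automated. The sketch below is a hypothetical TypeScript example; the component names, contents, and ## section labels (echoing the labeling tip mentioned earlier) are illustrative, not a prescribed API.

```typescript
// Reusable WIRE+FRAME components, stored as named strings.
const who = {
  insightsAnalyst:
    "You are a senior UX researcher and customer insights analyst...",
  informationArchitect:
    "You are an experienced information architect specializing in intranets...",
};

const rules = {
  dataOnly:
    "Only analyze the uploaded data. Do not fabricate quotes or patterns.",
};

// Assemble a prompt from labeled parts, in WIRE+FRAME order.
function buildPrompt(parts: { label: string; text: string }[]): string {
  return parts.map((p) => `## ${p.label}\n${p.text}`).join("\n\n");
}

const prompt = buildPrompt([
  { label: "WHO & WHAT", text: who.insightsAnalyst },
  { label: "RULES", text: rules.dataOnly },
  // ...add INPUT CONTEXT, EXPECTED OUTPUT, and any FRAME components here.
]);

console.log(prompt);
```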
Behind The Prompts: Questions About Prompting

Q: If I use a prompt framework like WIRE+FRAME every time, will the results be predictable?
A: Yes and no. Yes, your outputs will be guided by a consistent set of instructions (e.g., Rules, Examples, Reference Voice/Style) that will guide the AI to give you a predictable format and style of results. And no, while the framework provides structure, it doesn't flatten the generative nature of AI, but focuses it on what's important to you. In the next article, we will look at how you can use this to your advantage to quickly reuse your best repeatable prompts as we build your AI assistant.

Q: Could changes to AI models break the WIRE+FRAME framework?
A: AI models are evolving more rapidly than any other technology we've seen before; in fact, ChatGPT was recently updated to GPT-5 to mixed reviews. The update didn't change the core principles of prompting or the WIRE+FRAME prompt framework. With future releases, some elements of how we write prompts today may change, but the need to communicate clearly with AI won't. Think of how you delegate work to an intern vs. someone with a few years' experience: you still need detailed instructions the first time either is doing a task, but the level of detail may change. WIRE+FRAME isn't built only for today's models; the components help you clarify your intent, share relevant context, define constraints, and guide tone and format, all timeless elements, no matter how smart the model becomes. The skill of shaping clear, structured interactions with non-human AI systems will remain valuable.

Q: Can prompts be more than text? What about images or sketches?
A: Absolutely. With tools like GPT-5 and other multimodal models, you can upload screenshots, pictures, whiteboard sketches, or wireframes. These visuals become part of your Input Context or help define the Expected Output. The same WIRE+FRAME principles still apply: you're setting context, tone, and format, just using images and text together. Whether your input is a paragraph or an image and text, you're still designing the interaction.

Have a prompt-related question of your own? Share it in the comments, and I'll either respond there or explore it further in the next article in this series.

From Designerly Prompting To Custom Assistants

Good prompts and results don't come from using others' prompts, but from writing prompts that are customized for you and your context. The WIRE+FRAME framework helps with that and makes prompting a tool you can use to guide AI models like a creative partner instead of hoping for magic from a one-line request.

Prompting uses the designerly skills you already use every day to collaborate with AI:

Curiosity to explore what the AI can do and frame better prompts.
Observation to detect bias or blind spots.
Empathy to make machine outputs human.
Critical thinking to verify and refine.
Experimentation and iteration to learn by doing and improve the interaction over time.
A growth mindset to keep up with new technology like AI and prompting.

Once you create and refine prompt components and prompts that work for you, make them reusable by documenting them. But wait, there's more: what if your best prompts, or the elements of your prompts, could live inside your own AI assistant, available on demand, fluent in your voice, and trained on your context? That's where we're headed next.

In the next article, "Design Your Own Design Assistant", we'll take what you've learned so far and turn it into a custom AI assistant (aka Custom GPT): a design-savvy, context-aware assistant that works like you do. We'll walk through that exact build, from defining the assistant's job description to uploading knowledge, testing, and sharing it with others.

Resources

GPT-5 Prompting Guide
GPT-4.1 Prompting Guide
Anthropic Prompt Engineering
Prompt Engineering by Google
Perplexity Webapp to guide you through the WIRE+FRAME framework
  • A Breeze Of Inspiration In September (2025 Wallpapers Edition)
    smashingmagazine.com
September is just around the corner, and that means it's time for some new wallpapers! For more than 14 years already, our monthly wallpapers series has been the perfect occasion for artists and designers to challenge their creative skills and take on a little just-for-fun project: telling the stories they want to tell, using their favorite tools. This always makes for a unique and inspiring collection of wallpapers month after month, and, of course, this September is no exception.

In this post, you'll find desktop wallpapers for September 2025, created with love by the community for the community. As a bonus, we've also added some oldies but goodies from our archives to the collection, so maybe you'll spot one of your almost-forgotten favorites in here, too? A huge thank-you to everyone who shared their artworks with us this month; this post wouldn't exist without your creativity and support!

By the way, if you'd like to get featured in one of our upcoming wallpapers editions, please don't hesitate to submit your design. We are always looking for creative talent and can't wait to see your story come to life! You can click on every image to see a larger preview.

We respect and carefully consider the ideas and motivation behind each and every artist's work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren't in any way influenced by us but rather designed from scratch by the artists themselves.

21st Night Of September
On the 21st night of September, the world danced in perfect harmony. Earth, Wind & Fire set the tone, and now it's your turn to keep the rhythm alive. Designed by Ginger IT Solutions from Serbia.
preview
with calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Who
Designed by Ricardo Gimenes from Spain.
preview
with calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160
without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

Skating Through Chocolate Milk Day
Celebrate Chocolate Milk Day with a perfect blend of fun and flavor. From smooth sips to smooth rides, it's all about enjoying the simple moments that make the day unforgettable. Designed by PopArt Studio from Serbia.
preview
with calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Mood
Designed by Ricardo Gimenes from Spain.
preview
with calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160
without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

Funny Cats
Cats are beautiful animals. They're quiet, clean, and warm. They're funny and can become an endless source of love and entertainment. Here for the cats! Designed by UrbanUI from India.
preview
without calendar: 360x640, 1024x768, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1680x1200, 1920x1080

Pigman And Robin
Designed by Ricardo Gimenes from Spain.
preview
without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

Autumn Rains
This autumn, we expect to see a lot of rainy days and blues, so we wanted to change the paradigm and wish a warm welcome to the new season. After all, if you come to think of it: rain is not so bad if you have an umbrella and a raincoat. Come autumn, we welcome you! Designed by PopArt Studio from Serbia.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Terrazzo
With the end of summer and fall coming soon, I created this terrazzo pattern wallpaper to brighten up your desktop. Enjoy the month! Designed by Melissa Bogemans from Belgium.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Summer Ending
As summer comes to an end, all the creatures pull back to their hiding places, searching for warmth within themselves and dreaming of neverending adventures under the tinted sky of closing dog days. Designed by Ana Masnikosa from Belgrade, Serbia.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Cacti Everywhere
Seasons come and go, but our brave cactuses still stand. Summer is almost over and autumn is coming, but the beloved plants don't care. Designed by Lívia Lénárt from Hungary.
preview
without calendar: 320x480, 800x480, 1024x768, 1024x1024, 1280x1024, 1400x1050, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Flower Soul
The earth has music for those who listen. Take a break and relax, and while you drive out the stress, catch a glimpse of the beautiful nature around you. Can you hear the rhythm of the breeze blowing, the flowers singing, and the butterflies fluttering to cheer you up? We dedicate flowers, which symbolize happiness and love, to one and all. Designed by Krishnankutty from India.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Stay Or Leave?
Designed by Ricardo Gimenes from Spain.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Rainy Flowers
Designed by Teodora Vasileva from Bulgaria.
preview
without calendar: 640x480, 800x480, 800x600, 1024x768, 1280x720, 1280x960, 1280x1024, 1400x1050, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Listen Closer: The Mushrooms Are Growing
It's this time of the year when children go to school and grown-ups go to collect mushrooms. Designed by Igor Izhik from Canada.
preview
without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 2560x1600

Weekend Relax
Designed by Robert from the United States.
preview
without calendar: 320x480, 1024x1024, 1280x720, 1680x1200, 1920x1080, 2560x1440

Hungry
Designed by Elise Vanoorbeek from Belgium.
preview
without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1440x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

National Video Games Day Delight
September 12th brings us National Video Games Day. US-based video game players love this day and celebrate with huge gaming tournaments. What was once a 2D experience in the home is now a global phenomenon, with players playing against each other across state lines and national borders via the internet. National Video Games Day gives gamers the perfect chance to celebrate and socialize! So grab your controller, join online, and let the games begin! Designed by Ever Increasing Circles from the United Kingdom.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

More Bananas
Designed by Ricardo Gimenes from Spain.
preview
without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160

National Elephant Appreciation Day
Today, we celebrate these magnificent creatures who play such a vital role in our ecosystems and cultures. Elephants are symbols of wisdom, strength, and loyalty. Their social bonds are strong, and their playful nature, especially in the young ones, reminds us of the importance of joy and connection in our lives. Designed by PopArt Studio from Serbia.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Long Live Summer
While September's Autumnal Equinox technically signifies the end of the summer season, this wallpaper is for all those summer lovers, like me, who don't want the sunshine, warm weather, and lazy days to end. Designed by Vicki Grunewald from Washington.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Bear Time
Designed by Bojana Stojanovic from Serbia.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1080, 1366x768, 1400x1050, 1440x990, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Still In Vacation Mood
It's officially the end of summer, and I'm still in vacation mood, dreaming about all the amazing places I've seen. This illustration is inspired by a small town in France, on the Atlantic coast, right by the beach. Designed by Miruna Sfia from Romania.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1440x900, 1440x1050, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Maryland Pride
As summer comes to a close, so does the end of blue crab season in Maryland. Blue crabs have been a regional delicacy since the 1700s and have become Maryland's most valuable fishing industry, adding millions of dollars to the Maryland economy each year. The blue crab has contributed so much to the state's regional culture and economy that in 1989 it was named the State Crustacean, cementing its importance in Maryland history. Designed by The Hannon Group from Washington DC.
preview
without calendar: 320x480, 640x480, 800x600, 1024x768, 1280x960, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1440, 2560x1440

Summer In Costa Rica
We continue in tropical climates. In this case, we travel to Costa Rica to observe the Arenal volcano from the lake while we use a kayak. Designed by Veronica Valenzuela from Spain.
preview
without calendar: 640x480, 800x480, 1024x768, 1280x720, 1280x800, 1440x900, 1600x1200, 1920x1080, 1920x1440, 2560x1440

Wine Harvest Season
Welcome to the wine harvest season in Serbia. It's September, and the hazy sunshine bathes the vines on the slopes of Fruška Gora. Everything is ready for the making of Bermet, the most famous wine from Serbia. This spiced wine was a favorite of the Austro-Hungarian elite and was served even on the Titanic. Bermet's recipe is a closely guarded secret, and the wine is produced by just a handful of families in the town of Sremski Karlovci, near Novi Sad. On the other side of Novi Sad, plains of corn and sunflower fields blend in with the horizon, catching the last warm sun rays of this year. Designed by PopArt Studio from Serbia.
preview
without calendar: 320x480, 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440

Office
Clean, minimalistic office for a productive day. Designed by Antun Hirman from Croatia.
preview
without calendar: 320x480, 800x600, 1280x720, 1280x1024, 1440x900, 1680x1050, 1920x1080, 1920x1440, 2560x1440

Colors Of September
I love September, its colors and smells. Designed by Juliagav from Ukraine.
preview
without calendar: 320x480, 1024x768, 1024x1024, 1280x800, 1280x1024, 1440x900, 1680x1050, 1920x1080, 2560x1440

Never Stop Exploring
Designed by Ricardo Gimenes from Spain.
preview
without calendar: 640x480, 800x480, 800x600, 1024x768, 1024x1024, 1152x864, 1280x720, 1280x800, 1280x960, 1280x1024, 1366x768, 1400x1050, 1440x900, 1600x1200, 1680x1050, 1680x1200, 1920x1080, 1920x1200, 1920x1440, 2560x1440, 3840x2160
  • Designing For TV: Principles, Patterns And Practical Guidance (Part 2)
    smashingmagazine.com
Having covered the developmental history and legacy of TV in Part 1, let's now delve into more practical matters. As a quick reminder, the 10-foot experience and its reliance on the six core buttons of any remote form the basis of our efforts, and as you'll see, most principles outlined simply reinforce the unshakeable foundations.

In this article, we'll sift through the systems, account for layout constraints, and distill the guidelines to understand the essence of TV interfaces. Once we've collected all the main ingredients, we'll see what we can do to elevate these inherently simplistic experiences. Let's dig in, and let's get practical!

The Systems

When it comes to hardware, TVs and set-top boxes are usually a few generations behind phones and computers. Their components are made to run lightweight systems optimised for viewing, energy efficiency, and longevity. Yet even within these constraints, different platforms offer varying performance profiles, conventions, and price points. Some notable platforms/systems of today are:

Roku, the most affordable and popular, but severely bottlenecked by weak hardware.
WebOS, most common on LG devices, relies on web standards and runs well on modest hardware.
Android TV, considered very flexible and customisable, but relatively demanding hardware-wise.
Amazon Fire, based on Android but with a separate ecosystem. It offers smooth performance, but is slightly more limited than stock Android.
tvOS, by Apple, offering a high-end experience followed by a high-end price, with extremely low customizability.

Despite their differences, all of the platforms above share something in common, and by now you've probably guessed that it has to do with the remote. Let's take a closer look: if these remotes were stripped down to just the D-pad, OK, and BACK buttons, they would still be capable of successfully navigating any TV interface. It is this shared control scheme that allows for the agnostic approach of this article, with broadly applicable guidelines regardless of the manufacturer. Having already discussed the TV remote in detail in Part 1, let's turn to the second part of the equation: the TV screen, its layout, and the fundamental building blocks of TV-bound experiences.

TV Design Fundamentals

The Screen

With almost one hundred years of legacy, TV has accumulated quite some baggage. One recurring topic in modern articles on TV design is the concept of overscan, a legacy concept from the era of cathode ray tube (CRT) screens. Back then, the lack of standards in production meant that television sets would often crop the projected image at its edges. To address this inconsistency, broadcasters created guidelines to keep important content from being cut off.

While overscan gets mentioned occasionally, we should call it what it really is: a thing of the past. Modern panels display content with greater precision, making thinking in terms of "title safe" and "action safe" areas rather archaic. Today, we can simply consider the margins and get the same results. Google calls for a 5% margin layout, and Apple advises a 60-point margin top and bottom, and 80 points on the sides, in their layout guidelines. The standard is not exactly clear, but the takeaway is simple: leave some breathing room between screen edge and content, like you would in any thoughtful layout.
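As a rough illustration of that margin guidance, here is a small TypeScript sketch applying Google's 5% figure to a 1080p canvas. Treat the numbers as a starting point rather than a standard, since, as noted above, the guidelines differ between vendors.

```typescript
// Safe-area margins at ~5% of a 1920x1080 canvas.
const SCREEN = { width: 1920, height: 1080 };
const MARGIN_RATIO = 0.05;

const safeArea = {
  x: Math.round(SCREEN.width * MARGIN_RATIO),                 // 96 px left/right
  y: Math.round(SCREEN.height * MARGIN_RATIO),                // 54 px top/bottom
  width: Math.round(SCREEN.width * (1 - 2 * MARGIN_RATIO)),   // 1728 px
  height: Math.round(SCREEN.height * (1 - 2 * MARGIN_RATIO)), // 972 px
};
```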
Broadly speaking, all these interfaces share a common layout structure where a vast collection of content is laid out in a simple grid. These horizontally scrolling groups (sometimes referred to as shelves) resemble rows of a bookcase. Typically, they'll contain dozens of items that don't fit into the initial fold, so we'll make sure the last visible item peeks from the edge, subtly indicating to the viewer that there's more content available if they continue scrolling.

If we were to define a standard 12-column layout grid with a 2-column-wide item, we'd end up with the last item falling outside the safe zone.

Tip: A useful trick I discovered when designing TV interfaces was to utilise an odd number of columns. This allows the last item to fall within the defined margins and be more prominent, while having little effect on the entire layout. We've concluded that overscan is not a prominent issue these days, yet an additional column in the layout helps completely circumvent it. Food for thought!

Typography

TV design requires us to practice restraint, and this becomes very apparent when working with type. All good typography practices apply to TV design too, but I'd like to point out two specific takeaways. First, accounting for the distance, everything (including type) needs to scale up. Where 16-18px might suffice for web baseline text, 24px should be your starting point on TV, with the rest of the scale increasing proportionally.

"Typography can become especially tricky in 10-ft experiences. When in doubt, go larger."
(Molly Lafferty, Marvel Blog)

With that in mind, the second piece of advice would be to start with a small scale of 5-6 sizes and adjust if necessary. The simplicity of a TV experience can, and should, be reflected in the typography itself, and while small, such a scale will do all the heavy lifting if set correctly. What you see in the example above is a scale I reduced from Google and Apple guidelines, with a few size adjustments. Simple as it is, this scale served me well for years, and I have no doubt it could do the same for you.

Freebie: If you'd like to use my basic reduced type scale Figma design file for kicking off your own TV project, feel free to do so!

Color

Imagine watching TV at night with the device being the only source of light in the room. You open up the app drawer and select a new streaming app; it loads into a pretty splash screen, and bam! A bright interface opens up, which, amplified by the dark surroundings, blinds you for a fraction of a second. That right there is our main consideration when using color on TV.

Built for cinematic experiences and often used in dimly lit environments, TVs lend themselves perfectly to darker and more subdued interfaces. Bright colours, especially pure white (#ffffff), will translate to maximum luminance and may be straining on the eyes. As a general principle, you should rely on a more muted color palette. Slightly tinting brighter elements with your brand color, or with undertones of yellow to imitate natural light, will produce less visually unsettling results.

Finally, without a pointer or touch capabilities, it's crucial to clearly highlight interactive elements. While using bright colors as backdrops may be overwhelming, using them sparingly to highlight element states in a highly contrasting way will work perfectly. A focus state is the underlying principle of TV navigation. Most commonly, it relies on creating high contrast between the focused and unfocused elements.
This highlighting of UI elements is what TV leans on heavily, and it is what we'll discuss next.

Focus

In Part 1, we covered how interacting through a remote implies a certain detachment from the interface, mandating reliance on a focus state to carry the burden of TV interaction. This is done by visually accenting elements to anchor the user's eyes and map any subsequent movement within the interface. If you have ever written HTML/CSS, you might recall the :focus CSS pseudo-class. While it's primarily an accessibility feature on the web, it's the core of interaction on TV, with more flexibility added in the form of two additional directions thanks to a dedicated D-pad.

Focus Styles

There are a few standard ways to style a focus state. Firstly, there's scaling: enlarging the focused element, which creates the illusion of depth by moving it closer to the viewer. This is especially common in cases where only images are used for focusable elements. Another common approach is to invert background and text colors, often used for highlighting cards. Finally, a border may be added around the highlighted element.

These styles, used independently or in various combinations, appear in all TV interfaces, and the three basic styles can be combined to produce more focus state variants. While execution may be constrained by the specific system, the purpose remains the same: clear and intuitive feedback, even from across the room. To make this less abstract, the sketch below shows how D-pad input typically drives the focus state.
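On web-based TV platforms, this focus movement is something you wire up yourself. Below is a minimal sketch of the idea in JavaScript; the .shelf/.item class names and the flat row/column model are assumptions for illustration, not any platform's actual API:

// Track the viewer's position as a row/column pair.
// Assumes rows of focusable items: items need tabindex="0" (or be natively focusable).
const rows = [...document.querySelectorAll('.shelf')];
let row = 0;
let col = 0;

function focusCurrent() {
  const items = rows[row].querySelectorAll('.item');
  col = Math.min(col, items.length - 1); // clamp when rows differ in length
  items[col].focus(); // triggers the :focus styles: scale, inversion, border
}

document.addEventListener('keydown', (event) => {
  // Arrow keys map to the D-pad on most web-based TV platforms.
  if (event.key === 'ArrowRight') col += 1;
  else if (event.key === 'ArrowLeft') col = Math.max(0, col - 1);
  else if (event.key === 'ArrowDown') row = Math.min(rows.length - 1, row + 1);
  else if (event.key === 'ArrowUp') row = Math.max(0, row - 1);
  else return; // leave OK, BACK, and other keys to their own handlers

  event.preventDefault();
  focusCurrent();
});

focusCurrent(); // place initial focus so the viewer always has an anchor

Notice that the two axes already have distinct roles here: vertical presses switch rows, horizontal presses switch items within a row, a pattern we'll return to in the section on efficient movement.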
Having set the foundations of interaction, layout, and movement, we can start building on top of them. The next chapter will cover the most common elements of a TV interface, their variations, and a few tips and tricks for button-bound navigation.

Common TV UI Components

Nowadays, the core user journey on television revolves around browsing (or searching through) a content library, selecting an item, and opening a dedicated screen to watch or listen. This translates into a few fundamental screens:

- Library (or Home) for content browsing,
- Search for specific queries, and
- A player screen focused on content playback.

These screens are built with a handful of components optimized for the 10-foot experience, and while they are often found on other platforms too, it's worth examining how they differ on TV.

Menus

Appearing as a horizontal bar along the top edge of the screen, or as a vertical sidebar, the menu helps move between the different screens of an app. While its orientation mostly depends on the specific system, it does seem TV favors the side menu a bit more. Both menu types share a common issue: the farther the user navigates away from the menu (vertically, toward the bottom, for top bars; and horizontally, toward the right, for sidebars), the more button presses are required to get back to it. Fortunately, a Back button shortcut is usually added to allow for immediate menu focus, which greatly improves usability.

Posters

16:9 posters abide by the same principles but with a horizontal orientation. They are often paired with text labels, which effectively turn them into cards, commonly seen on platforms like YouTube. In the absence of dedicated poster art, they show stills or playback from the videos, matching the aspect ratio of the media itself. 1:1 posters are often found in music apps like Spotify, their shape reminiscent of album art and vinyl sleeves. These squares often get used in other instances, like representing channel links or profile tiles, giving more visual variety to the interface.

All of the above can co-exist within a single app, allowing for richer interfaces and breaking up otherwise uniform content libraries. And speaking of breaking up content, let's see what we can do with spotlights!

Spotlights

Typically taking up the entire width of the screen, these eye-catching components will highlight a new feature or a promoted piece of media. In a sea of uniform shelves, they can be placed strategically to introduce aesthetic diversity and disrupt the monotony. A spotlight can be a focusable element by itself, or it could expose several actions thanks to its generous space. In my ventures into TV design, I relied on a few different spotlight sizes, which allowed me to place multiples into a single row, all with the purpose of highlighting different aspects of the app without breaking the form to which viewers were used.

Posters, cards, and spotlights shape the bulk of the visual experience and content presentation, but viewers still need a way to find specific titles. Let's see how search and input are handled on TV.

Search And Entering Text

Manually browsing through content libraries can yield results, but having the ability to search will speed things up, though not without some hiccups. TVs allow for text input in the form of on-screen keyboards, similar to the ones found on modern smartphones. However, inputting text with a remote control is quite inefficient given the restrictiveness of its control scheme. For example, typing "hey there" on a mobile keyboard requires 9 keystrokes, but about 38 on a TV (!) due to the movement between characters and their selection. Typing with a D-pad may be an arduous task, but at the same time, having the ability to search is unquestionably useful.

Luckily for us, keyboards are accounted for in all systems and usually come in two varieties. We've got the grid layouts used by most platforms, and a horizontal layout in support of the touch-enabled and gesture-based controls on tvOS. Swiping between characters is significantly faster, but this is yet another pattern that can only be enhanced, not replaced. Modernization has made things significantly easier, with search autocomplete suggestions, device pairing, voice controls, and remotes with physical keyboards, but on-screen keyboards will likely remain a necessary fallback for quite a while. And no matter how cumbersome this fallback may be, we as designers need to consider it when building for TV.

Players And Progress Bars

While all the different sections of a TV app serve a purpose, the Player takes center stage. It's where all the roads eventually lead, and where viewers will spend the most time. It's also one of the rare instances where focus gets lost, allowing the interface to get out of the way of enjoying a piece of content.

Arguably, players are the most complex features of TV apps, compacting all the different functionalities into a single screen. Take YouTube, for example: its player doesn't just handle expected playback controls but also supports content browsing, searching, reading comments, reacting, and navigating to channels, all within a single screen. Compared to YouTube, Netflix offers a very lightweight experience guided by the nature of the app. Still, every player has a basic set of controls, the foundation of which is the progress bar.

The progress bar UI element serves as a visual indicator for content duration. During interaction, focus doesn't get placed on the bar itself, but on a movable knob known as the scrubber. It is by moving the scrubber left and right, or stopping it in its tracks, that we can control playback. A rough sketch of that interaction follows.
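As a minimal illustration, assuming a web-based player with a <video id="player"> element and a focusable scrubber element (the IDs and the 10-second step are arbitrary choices, not a standard):

const video = document.getElementById('player');
const SEEK_STEP = 10; // seconds per press; press-and-hold usually repeats this for faster seeking

document.getElementById('scrubber').addEventListener('keydown', (event) => {
  if (event.key === 'ArrowRight') {
    video.currentTime = Math.min(video.duration, video.currentTime + SEEK_STEP);
  } else if (event.key === 'ArrowLeft') {
    video.currentTime = Math.max(0, video.currentTime - SEEK_STEP);
  } else if (event.key === 'Enter') {
    video.paused ? video.play() : video.pause(); // OK toggles playback
  } else {
    return;
  }
  event.preventDefault();
});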
Another indirect method of invoking the progress bar is with the good old Play and Pause buttons. Rooted in the mechanical era of tape players, the universally understood triangle and two vertical bars are as integral to the TV legacy as the D-pad. No matter how minimalist and sleek the modern player interface may be, these symbols remain a staple of the viewing experience.

The presence of a scrubber may also indicate the type of content. Video on demand allows for the full set of playback controls, while live streams (unless DVR is involved) will do away with the scrubber, since viewers won't be able to rewind or fast-forward. Earlier iterations of progress bars often came bundled with a set of playback control buttons, but as viewers got used to the tools available, these controls often got consolidated into the progress bar and scrubber themselves.

Bringing It All Together

With the building blocks out of the box, we've got everything necessary for a basic but functional TV app. Just as the six core buttons make remote navigation possible, the components and principles outlined above help guide purposeful TV design. The more context you bring, the more you'll be able to expand and combine these basic principles, creating an experience unique to your needs.

Before we wrap things up, I'd like to share a few tips and tricks I discovered along the way, tips and tricks which I wish I had known from the start. Regardless of how simple or complex your idea may be, these may serve you as useful tools to help add depth, polish, and finesse to any TV experience.

Thinking Beyond The Basics

Like any platform, TV has a set of constraints that we abide by when designing. But sometimes these norms are applied without question, making the already limited capabilities feel even more restraining. Below are a handful of less obvious ideas that can help you design more thoughtfully and flexibly for the big screen.

Long Press

Most modern remotes support press-and-hold gestures as a subtle way to enhance functionality, especially on remotes with fewer buttons available. For example, holding directional buttons when browsing content speeds up scrolling, while holding Left/Right during playback speeds up timeline seeking. In many apps, a single press of the OK button opens a video, but holding it for longer opens a contextual menu with additional actions.

With limited input, context becomes a powerful tool. It not only declutters the interface to allow for more focus on specific tasks, but also enables the same set of buttons to trigger different actions based on the viewer's location within an app. Another great example is YouTube's scrubber interaction. Once the scrubber is moved, every other UI element fades. This cleans up the viewer's working area, so to speak, narrowing the interface to a single task. In this state, and only in this state, pressing Up one more time moves away from scrubbing and into browsing by chapter. This is such an elegant example of expanding restraint, and adding more only when necessary. I hope it inspires similar interactions in your TV app designs.

Efficient Movement On TV

At its best, every action on TV costs at least one click. There's no such thing as aimless cursor movement: if you want to move, you must press a button.
We've seen how cumbersome it can be inside a keyboard, but there's also something we can learn about efficient movement in these restrained circumstances. Going back to the Home screen, we can note that vertical and horizontal movement serve two distinct roles. Vertical movement switches between groups, while horizontal movement switches items within these groups. No matter how far you've gone inside a group, a single vertical click will move you into another. Every step on TV costs an action, so we might as well optimize movement.

This subtle difference, two axes with separate roles, is the most efficient way of moving in a TV interface, and it's exactly what the navigation sketch earlier encodes. Reversing the pattern, horizontal to switch groups and vertical to drill down, will work like a charm as long as you keep the role of each axis well defined. Properly applied in a vertical layout, the principles of optimal movement remain the same. Quietly brilliant and easy to overlook, this pattern powers almost every step of the TV experience. Remember it, and use it well.

Thinking Beyond JPGs

After covering many of the technicalities in detail, let's finish with some visual polish. Most TV interfaces are driven by tightly packed rows of cover and poster art. While often beautifully designed, this type of content and layout leaves little room for visual flair. For years, the flat JPG, with its small file size, has been the go-to format, though contemporary alternatives like WebP are slowly taking its place.

Meanwhile, we can rely on the tried and tested PNG to give a bit more shine to our TV interfaces. The simple fact that it supports transparency can help the often-rigid UIs feel more sophisticated. Used strategically and paired with simple focus effects such as background color changes, PNGs can bring subtle moments of delight to the interface: a transparent background blends well with the surface color changes common in TV interfaces. And don't forget, transparency doesn't have to mean that there shouldn't be any background at all. Moreover, if transformations like scaling and rotating are supported, you can really make those rectangular shapes come alive by layering multiple assets. Combining multiple images along with a background color change can liven up certain sections.

As you probably understand by now, these little touches of finesse don't go out of the bounds of possibility. They simply find more room to breathe within it. But with such limited capabilities, it's best to learn all the different tricks that can help make your TV experiences stand out.

Closing Thoughts

Rooted in legacy, with a limited control scheme and a rather shallow interface, TV design reminds us to do the best with what we have at our disposal. The restraints I outlined are not meant to induce claustrophobia and make you feel limited in your design choices, but rather to serve you as guides. It is by accepting that fact that we can find freedom and new avenues to explore.

This two-part series of articles, just like my experience designing for TV, was not about reinventing the wheel with radical ideas. It was about understanding its nuances and contributing to what's already there with my personal touch. If you find yourself working in this design field, I hope my guide will serve as a warm welcome and will help you do your finest work.
And if you have any questions, do leave a comment, and I will do my best to reply and help. Good luck!

Further Reading

- Design for TV, by Android Developers: "Great TV design is all about putting content front and center. It's about creating an interface that's easier to use and navigate, even from a distance. It's about making it easier to find the content you love, and to enjoy it in the best possible quality."
- TV Guidelines: A quick kick-off on designing for Television Experiences, by Andrea Pacheco: "Just like designing a mobile app, designing a TV application can be a fun and complex thing to do, due to the numerous guidelines and best practices to follow. Below, I have listed the main best practices to keep in mind when designing an app for a 10-foot screen."
- Designing for Television: TV UI Design, by Molly Lafferty (Marvel Blog): "Today, we're no longer limited to a remote and cable box to control our TVs; we're using Smart TVs, or streaming from set-top boxes like Roku and Apple TV, or using video game consoles like Xbox and PlayStation. And each of these devices allows a user interface that's much more powerful than your old-fashioned on-screen guide."
- Rethinking User Interface Design for the TV Platform, by Pascal Potvin: "Designing for television has become part of the continuum of devices that require a rethink of how we approach user interfaces and user experiences."
- Typography for TV, by Android Developers: "As television screens are typically viewed from a distance, interfaces that use larger typography are more legible and comfortable for users. TV Design's default type scale includes contrasting and flexible type styles to support a wide range of use cases."
- Typography, by Apple Developer docs: "Your typographic choices can help you display legible text, convey an information hierarchy, communicate important content, and express your brand or style."
- Color on TV, by Android Developers: "Color on TV design can inspire, set the mood, and even drive users to make decisions. It's a powerful and tangible element that users notice first. As a rich way to connect with a wide audience, it's no wonder color is an important step in crafting a high-quality TV interface."
  • UX Job Interview Helpers
    smashingmagazine.com
When talking about job interviews for a UX position, we often discuss how to leave an incredible impression and how to negotiate the right salary. But that's only one part of the story. The other part is to be prepared, to ask questions, and to listen carefully. Below, I've put together a few useful resources on UX job interviews, from job boards to Notion templates and practical guides. I hope you or your colleagues will find it helpful.

The Design Interview Kit

As you are preparing for that interview, get ready with the Design Interview Kit (Figma), a helpful practical guide that covers how to craft case studies, solve design challenges, write cover letters, present your portfolio, and negotiate your offer. Kindly shared by Oliver Engel.

The Product Designer's (Job) Interview Playbook (PDF)

The Product Designer's (Job) Interview Playbook (PDF) is a practical little guide for designers through each interview phase, with helpful tips and strategies on things to keep in mind, talking points, questions to ask, red flags to watch out for, and how to tell a compelling story about yourself and your work. Kindly put together by Meghan Logan.

From my side, I can only wholeheartedly recommend to not only speak about your design process. Tell stories about the impact that your design work has produced. Frame your design work as an enabler of business goals and user needs. And include insights about the impact you've produced on business goals, processes, team culture, planning, estimates, and testing.

Also, be very clear about the position that you are applying for. In many companies, titles do matter. There are vast differences in responsibilities and salaries between various levels for designers, so if you see yourself as a senior, review whether that actually reflects in the position.

A Guide To Successful UX Job Interviews (+ Notion template)

Catt Small's Guide To Successful UX Job Interviews is a wonderful practical series on how to build a referral pipeline, apply for an opening, prepare for screening and interviews, present your work, and manage salary expectations. You can also download a Notion template.

30 Useful Questions To Ask In UX Job Interviews

In her wonderful article, Nati Asher has suggested many useful questions to ask in a job interview when you are applying as a UX candidate.
I've taken the liberty of revising some of them and added a few more questions that might be worth considering for your next job interview:

- What are the biggest challenges the team faces at the moment?
- What are the team's main strengths and weaknesses?
- What are the traits and skills that will make me successful in this position?
- Where is the company going in the next 5 years?
- What are the achievements I should aim for over the first 90 days?
- What would make you think "I'm so happy we hired X!"?
- Do you have any doubts or concerns regarding my fit for this position?
- Does the team have any budget for education, research, etc.?
- What is the process of onboarding in the team?
- Who is in the team, and how long have they been in that team?
- Who are the main stakeholders I will work with on a day-to-day basis?
- Which options do you have for user research and accessing users or data?
- Are there analytics, recordings, or other data sources to review?
- How do you measure the impact of design work in your company?
- To what extent does management understand the ROI of good UX?
- How does UX contribute strategically to the company's success?
- Who has the final say on design, and who decides what gets shipped?
- What part of the design process does the team spend most time on?
- How many projects do designers work on simultaneously?
- How has the organization overcome challenges with remote work?
- Do we have a design system, and in what state is it currently?
- Why does the company want to hire a UX designer?
- How would you describe the ideal candidate for this position?
- What does a career path look like for this role?
- How will my performance be evaluated in this role?
- How long do projects take to launch? Can you give me some examples?
- What are the most immediate projects that need to be addressed?
- How do you see the design team growing in the future?
- What traits make someone successful in this team?
- What's the most challenging part of leading the design team?
- How does the company ensure it's upholding its values?

Before a job interview, have your questions ready. Not only will they convey a message that you care about the process and the culture, but also that you understand what is required to be successful. And this fine detail might go a long way.

Don't Forget About The STAR Method

Interviewers closer to business will expect you to present examples of your work using the STAR method (Situation, Task, Action, Result), and might be utterly confused if you delve into all the fine details of your ideation process or the choice of UX methods you've used.

- Situation: Set the scene and give necessary details.
- Task: Explain your responsibilities in that situation.
- Action: Explain what steps you took to address it.
- Result: Share the outcomes your actions achieved.

As Meghan suggests, the interview is all about how your skills add value to the problem the company is currently solving. So ask about the current problems and tasks. Interview the person who interviews you, too, but also explain who you are, your focus areas, your passion points, and how you and your expertise would fit in the product and the organization.

Wrapping Up

A final note on my end: never take a rejection personally. Very often, the reasons you are given for rejection are only a small part of a much larger picture and have almost nothing to do with you. It might be that the job description wasn't quite accurate, or the company is undergoing restructuring, or the finances are too tight after all. Don't despair and keep going. Write down your expectations.
Job titles matter: be deliberate about them and your level of seniority. Prepare good references. Have your questions ready for that job interview. As Catt Small says, "once you have a foot in the door, you've got to kick it wide open."

You are a bright shining star, don't you ever forget that.

Job Boards (Remote + In-person)

- IXDA
- Who Is Still Hiring?
- UXPA Job Bank
- Otta
- Boooom
- Black Creatives Job Board
- UX Research Jobs
- UX Content Jobs
- UX Content Collective Jobs
- UX Writing Jobs

Useful Resources

- How To Be Prepared For UX Job Interviews, by yours truly
- UX Job Search Strategies and Templates, by yours truly
- How To Ace Your Next Job Interview, by Startup.jobs
- Cracking The UX Job Interview, by Artiom Dashinsky
- The Product Design Interview Process, by Tanner Christensen
- 10 Questions To Ask in a UX Interview, by Ryan Scott
- Six questions to ask after a UX designer job interview, by Nick Babich

Meet Smart Interface Design Patterns

You can find more details on design patterns and UX in Smart Interface Design Patterns, our 15h video course with 100s of practical examples from real-life projects, with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables, with 5 new segments added every year. Jump to a free preview. Use code BIRDIE to save 15% off.

- Video + UX Training: $495.00 (regular $699.00). 25 video lessons (15h) + Live UX Training. 100 days money-back guarantee.
- Video only: $300.00 (regular $395.00). 40 video lessons (15h). Updated yearly. Also available as a UX Bundle with 2 video courses.
  • Automating Design Systems: Tips And Resources For Getting Started
    smashingmagazine.com
A design system is more than just a set of colors and buttons. It's a shared language that helps designers and developers build good products together. At its core, a design system includes tokens (like colors, spacing, fonts), components (such as buttons, forms, navigation), plus the rules and documentation that tie it all together across projects. If you've ever used systems like Google Material Design or Shopify Polaris, for example, then you've seen how design systems set clear expectations for structure and behavior, making teamwork smoother and faster.

But while design systems promote consistency, keeping everything in sync is the hard part. Update a token in Figma, like a color or spacing value, and that change has to show up in the code, the documentation, and everywhere else it's used. The same goes for components: when a button's behavior changes, it needs to update across the whole system. That's where the right tools and a bit of automation can make the difference. They help reduce repetitive work and keep the system easier to manage as it grows. In this article, we'll cover a variety of tools and techniques for syncing tokens, updating components, and keeping docs up to date, showing how automation can make all of it easier.

The Building Blocks Of Automation

Let's start with the basics. Color, typography, spacing, radii, shadows, and all the tiny values that make up your visual language are known as design tokens, and they're meant to be the single source of truth for the UI. You'll see them in design software like Figma, in code, in style guides, and in documentation. Smashing Magazine has covered them before in great detail.

The problem is that they often go out of sync, such as when a color or component changes in design but doesn't get updated in the code. The more your team grows or changes, the more these mismatches show up; not because people aren't paying attention, but because manual syncing just doesn't scale. That's why automating tokens is usually the first thing teams should consider doing when they start building a design system. That way, instead of writing the same color value in Figma and then again in a configuration file, you pull from a shared token source and let that drive both design and development.

There are a few tools that are designed to help make this easier:

- Token Studio: a Figma plugin that lets you manage design tokens directly in your file, export them to different formats, and sync them to code.
- Specify: lets you collect tokens from Figma and push them to different targets, including GitHub repositories, continuous integration pipelines, documentation, and more.
- NameDesignTokens.guide: helps with naming conventions, which is honestly a common pain point, especially when you're working with a large number of tokens.

Once your tokens are set and connected, you'll spend way less time fixing inconsistencies. It also gives you a solid base to scale, whether that's adding themes, switching brands, or even building systems for multiple products. That's also when naming really starts to count. If your tokens or components aren't clearly named, things can get confusing quickly. Note: Vitaly Friedman's "How to Name Things" is worth checking out if you're working with larger systems.

From there, it's all about components. Tokens define the values, but components are what people actually use, e.g., buttons, inputs, cards, dropdowns, you name it. In a perfect setup, you build a component once and reuse it everywhere.
But without structure, it's easy for things to drift out of scope. It's easy to end up with five versions of the same button, where what's in code doesn't match what's in Figma, for example. Automation doesn't replace design; rather, it connects everything to one source. The Figma component matches the one in production, the documentation updates when the component changes, and the whole team is pulling from the same library instead of rebuilding their own version. This is where real collaboration happens.

Here are a few tools that help make that happen:

- UXPin Merge: Lets you design using real code components. What you prototype is what gets built.
- Supernova: Helps you publish a design system, sync design and code sources, and keep documentation up-to-date.
- Zeroheight: Turns your Figma components into a central, browsable, and documented system for your whole team.

How Does Everything Connect?

A lot of the work starts right inside your design application. Once your tokens and components are in place, tools like Supernova help you take it further by extracting design data, syncing it across platforms, and generating production-ready code. You don't need to write custom scripts or use the Figma API to get value from automation; these tools handle most of it for you. But for teams that want full control, Figma does offer an API. It lets you do things like the following:

- Pull token values (like colors, spacing, typography) directly from Figma files;
- Track changes to components and variants;
- Read metadata (like style names, structure, or usage patterns); and
- Map which components are used where in the design.

The Figma API is REST-based, so it works well with custom scripts and automations. You don't need a huge setup, just the right pieces. On the development side, teams usually use Node.js or Python to handle automation. For example: fetch styles from Figma, convert them into JSON, and push the values to a design token repo or directly into the codebase; a small sketch of that flow follows.
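As a rough illustration of that flow in Node.js (18+, which ships a global fetch): the file key and token are placeholders, and note that Figma's /v1/files/:key/styles endpoint returns style metadata only, so resolving concrete values takes follow-up requests for the style nodes. Treat this as a sketch, not a complete pipeline:

// fetch-styles.js: pull published style metadata from the Figma REST API.
const fs = require('fs');

const FILE_KEY = 'your-file-key'; // placeholder
const headers = { 'X-Figma-Token': process.env.FIGMA_TOKEN }; // personal access token

async function run() {
  // 1. Fetch published style metadata (names, types, keys) for the file.
  const res = await fetch(`https://api.figma.com/v1/files/${FILE_KEY}/styles`, { headers });
  const { meta } = await res.json();

  // 2. Convert into a simple JSON shape, grouped by style type (FILL, TEXT, EFFECT, GRID).
  const tokens = {};
  for (const style of meta.styles) {
    if (!tokens[style.style_type]) tokens[style.style_type] = {};
    tokens[style.style_type][style.name] = { key: style.key, description: style.description };
  }

  // 3. Write the result; a CI job could commit this file to a token repo.
  fs.writeFileSync('tokens.json', JSON.stringify(tokens, null, 2));
}

run().catch(console.error);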
You won't need that level of setup for most use cases, but it's helpful to know it's there if your team outgrows no-code tools. Ask yourself:

- Where do your tokens and components come from?
- How do updates happen?
- What tools keep everything connected?

The workflow becomes easier to manage once that's clear, and you spend less time trying to fix changes or mismatches. When tokens, components, and documentation stay in sync, your team moves faster and spends less time fixing the same issues.

Extracting Design Data

Figma is a collaborative design tool used to create UIs: buttons, layouts, styles, components, everything that makes up the visual language of the product. It's also where all your design data lives, which includes the tokens we talked about earlier. This data is what we'll extract and eventually connect to your codebase. But first, you'll need a setup. To follow along:

1. Go to figma.com and create a free account.
2. Download the Figma desktop app if you prefer working locally, but keep an eye on system requirements if you're on an older device.

Once you're in, you'll see the Figma home screen. From here, it's time to set up your design tokens. You can either create everything from scratch or use a template from the Figma community to save time. Templates are a great option if you don't want to build everything yourself, but if you prefer full control, creating your own setup totally works too.

There are other ways to get tokens as well. For example, a site like namedesigntokens.guide lets you generate and download tokens in formats like JSON. The only catch is that Figma doesn't let you import JSON directly, so if you go that route, you'll need to bring in a middle tool like Specify to bridge that gap. It helps sync tokens between Figma, GitHub, and other places.

For this article, though, we'll keep it simple and stick with Figma. Pick any design system template from the Figma community to get started; there are plenty to choose from. Depending on the template you choose, you'll get a pre-defined set of tokens that includes colors, typography, spacing, components, and more. These templates come in all types: website, e-commerce, portfolio, app UI kits, you name it. For this article, we'll be using the /Design-System-Template--Community because it includes most of the tokens you'll need right out of the box. But feel free to pick a different one if you want to try something else.

Once you've picked your template, it's time to download the tokens. We'll use Supernova, a tool that connects directly to your Figma file and pulls out design tokens, styles, and components. It makes the design-to-code process a lot smoother.

Step 1: Sign Up On Supernova

Go to supernova.io and create an account. Once you're in, you'll land on the dashboard.

Step 2: Connect Your Figma File

To pull in the tokens, head over to the Data Sources section in Supernova and choose Figma from the list of available sources. (You'll also see other options, like Storybook or Figma variables, but we're focusing on Figma.) Next, click on Connect a new file, paste the link to your Figma template, and click Import. Supernova will load the full design system from your template. From your dashboard, you'll now be able to see all the tokens.

Turning Tokens Into Code

Design tokens are great inside Figma, but the real value shows when you turn them into code. That's how the developers on your team actually get to use them. Here's the problem: many teams default to copying values manually for things like color, spacing, and typography. But when you make a change to them in Figma, the code is instantly out of sync. That's why automating this process is such a big win. Instead of rewriting the same theme setup for every project, you generate it, constantly translating designs into dev-ready assets, and keep everything in sync from one source of truth.

Now that we've got all our tokens in Supernova, let's turn them into code. First, go to the Code Automation tab, then click New Pipeline. You'll see different options depending on what you want to generate: React Native, CSS-in-JS, Flutter, Godot, and a few others. Let's go with the CSS-in-JS option for the sake of demonstration. After that, you'll land on a setup screen with three sections: Data, Configuration, and Delivery.

- Data: Here you can pick a theme. At first, it might only give you Black as the option; you can select that or leave it empty. It really doesn't matter for the time being.
- Configuration: This is where you control how the code is structured. I picked PascalCase for how token names are formatted. You can also update how things like spacing, colors, or font styles are grouped and saved.
- Delivery: This is where you choose how you want the output delivered. I chose Build Only, which builds the code for you to download.

Once you're done, click Save. The pipeline is created, and you'll see it listed in your dashboard. From here, you can download your token code, which is already generated. (If you prefer an open-source route for this same step, see the sketch below.)
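For comparison, the open-source Style Dictionary package (mentioned again in the tips below) covers the same tokens-to-code ground as a hosted pipeline. A minimal sketch using its v3-style API, assuming your tokens live as JSON files under tokens/:

// build-tokens.js: one token source, two generated targets.
const StyleDictionary = require('style-dictionary');

StyleDictionary.extend({
  source: ['tokens/**/*.json'],
  platforms: {
    css: {
      transformGroup: 'css',
      buildPath: 'build/css/',
      files: [{ destination: 'variables.css', format: 'css/variables' }],
    },
    js: {
      transformGroup: 'js',
      buildPath: 'build/js/',
      files: [{ destination: 'tokens.js', format: 'javascript/es6' }],
    },
  },
}).buildAllPlatforms();

Running node build-tokens.js emits CSS custom properties and a JS module from the same token source, the same single-source-of-truth idea the hosted pipeline implements.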
Automating Documentation

So, what's the point of documentation in a design system? You can think of it as the instruction manual for your team. It explains what each token or component is, why it exists, and how to use it. Designers, developers, and anyone else on your team can stay on the same page: no guessing, no back-and-forth, just clear context.

Let's continue from where we stopped. Supernova is capable of handling your documentation. Head over to the Documentation tab. This is where you can start editing everything about your design system docs, all from the same place. You can:

- Add descriptions to your tokens,
- Define what each base token is for (as well as what it's not for),
- Organize sections by colors, typography, spacing, or components, and
- Drop in images, code snippets, or examples.

You're building the documentation inside the same tool where your tokens live. In other words, there's no jumping between tools and no additional setup. That's where the automation kicks in. You edit once, and your docs stay synced with your design source. It all stays in one environment. Once you're done, click Publish, and you will be presented with a new window asking you to sign in. After that, you're able to access your live documentation site.

Practical Tips For Automations

Automation is great. It saves hours of manual work and keeps your design system tight across design and code. The trick is knowing when to automate and how to make sure it keeps working over time. You don't need to automate everything right away, but if you're doing the same thing over and over again, that's a kind of red flag. A few signs that it's time to consider using automation:

- You're using the same styles across multiple platforms (like web and mobile).
- You have a shared design system used by more than one team.
- Design tokens change often, and you want updates to flow into code automatically.
- You're tired of manual updates every time the brand team tweaks a color.

There are three steps you need to consider. Let's look at each one.

Step 1: Keep An Eye On Tools And API Updates

If your pipeline depends on design tools, like Figma, or platforms, like Supernova, you'll want to know when changes are made and evaluate how they impact your work, because even small updates can quietly affect your exports. It's a good idea to check Figma's API changelog now and then, especially if something feels off with your token syncing. They often update how variables and components are structured, and that can impact your pipeline. There's also an RSS feed for product updates. The same goes for Supernova's product updates: they regularly roll out improvements that might tweak how your tokens are handled or exported. If you're using open-source tools like Style Dictionary, keeping an eye on the GitHub repo (particularly the Issues tab) can save you from debugging weird token name changes later.

All of this isn't about staying glued to release notes, but about having a system to check if something suddenly stops working. That way, you'll catch things before they reach production.

Step 2: Break Your Pipeline Into Smaller Steps

A common trap teams fall into is trying to automate everything in one big run: colors, spacing, themes, components, and docs, all processed in a single click. It sounds convenient, but it's hard to maintain, and even harder to debug. It's much more manageable to split your automation into pieces. For example, have a single workflow that handles your core design tokens (e.g., colors, spacing, and font sizes), another for theme variations (e.g., light and dark themes), and one more for component mapping (e.g., buttons, inputs, and cards).
This way, if your team changes how spacing tokens are named in Figma, you only need to update one part of the workflow, not the entire system. It's also easier to test and reuse smaller steps.

Step 3: Test The Output Every Time

Even if everything runs fine, always take a moment to check the exported output. It doesn't need to be complicated. A few key things:

- Are the token names clean and readable? If you see something like PrimaryColorColorText, that's a red flag.
- Did anything disappear or get renamed unexpectedly? It happens more often than you think, especially with typography or spacing tokens after design changes.
- Does the UI still work? If you're using something like Tailwind, CSS variables, or custom themes, double-check that the new token values aren't breaking anything in the design or build process.

To catch issues early, it helps to run tools like ESLint or Stylelint right after the pipeline completes. They'll flag odd syntax or naming problems before things get shipped.

How AI Can Help

Once your automation is stable, there's a next layer that can boost your workflow: AI. It's not just for writing code or generating mockups, but for helping with the small, repetitive things that eat up time in design systems. When used right, AI can assist without replacing your control over the system. Here's where it might fit into your workflow:

- Naming suggestions: When you're dealing with hundreds of tokens, naming them clearly and consistently is a real challenge. Some AI tools can help by suggesting clean, readable names for your tokens or components based on patterns in your design. It's not perfect, but it's a good way to kickstart naming, especially for large teams.
- Pattern recognition: AI can also spot repeated styles or usage patterns across your design files. If multiple buttons or cards share similar spacing, shadows, or typography, AI-powered tools can group them or suggest components for systemization even before a human notices.
- Automated documentation: Instead of writing everything from scratch, AI can generate first drafts of documentation based on your tokens, styles, and usage. You still need to review and refine, but it takes away the blank-page problem and saves hours.

Here are a few tools that already bring AI into the design and development space in practical ways:

- Uizard: Uses AI to turn wireframes into designs automatically. You can sketch something by hand, and it transforms that into a usable mockup.
- Anima: Converts Figma designs into responsive React code. It also helps fill in real content or layout structures, making it a powerful bridge between design and development, with some AI assistance under the hood.
- Builder.io: Uses AI to help generate and edit components visually. It's especially useful for marketers or non-developers who need to build pages fast. AI helps streamline layout, content blocks, and design rules.

Conclusion

This article is not about achieving complete automation in the technical sense, but more about using smart tools to streamline the menial and manual aspects of working with design systems. Exporting tokens, generating docs, and syncing design with code can be automated, making your process quicker and more reliable with the right setup. Instead of rebuilding everything from scratch every time, you now have a way to keep things consistent, stay organized, and save time.

Further Reading

- Design System Guide, by Romina Kavcic
- Design System In 90 Days, by Vitaly Friedman
  • The Power Of The Intl API: A Definitive Guide To Browser-Native Internationalization
    smashingmagazine.com
It's a common misconception that internationalization (i18n) is simply about translating text. While crucial, translation is merely one facet. Much of the complexity lies in adapting information to diverse cultural expectations: How do you display a date in Japan versus Germany? What's the correct way to pluralize an item in Arabic versus English? How do you sort a list of names in various languages?

Many developers have relied on weighty third-party libraries or, worse, custom-built formatting functions to tackle these challenges. These solutions, while functional, often come with significant overhead: increased bundle size, potential performance bottlenecks, and the constant struggle to keep up with evolving linguistic rules and locale data.

Enter the ECMAScript Internationalization API, more commonly known as the Intl object. This silent powerhouse, built directly into modern JavaScript environments, is an often-underestimated yet incredibly potent, native, performant, and standards-compliant solution for handling data internationalization. It's a testament to the web's commitment to being worldwide, providing a unified and efficient way to format numbers, dates, lists, and more, according to specific locales.

Intl And Locales: More Than Just Language Codes

At the heart of Intl lies the concept of a locale. A locale is far more than just a two-letter language code (like en for English or es for Spanish). It encapsulates the complete context needed to present information appropriately for a specific cultural group. This includes:

- Language: The primary linguistic medium (e.g., en, es, fr).
- Script: The writing system (e.g., Latn for Latin, Cyrl for Cyrillic). For example, zh-Hans for Simplified Chinese vs. zh-Hant for Traditional Chinese.
- Region: The geographic area (e.g., US for United States, GB for Great Britain, DE for Germany). This is crucial for variations within the same language, such as en-US vs. en-GB, which differ in date, time, and number formatting.
- Preferences/Variants: Further specific cultural or linguistic preferences. See "Choosing a Language Tag" from the W3C for more information.

Typically, you'll want to choose the locale according to the language of the web page. This can be determined from the lang attribute:

// Get the page's language from the HTML lang attribute
const pageLocale = document.documentElement.lang || 'en-US'; // Fallback to 'en-US'

Occasionally, you may want to override the page locale with a specific locale, such as when displaying content in multiple languages:

// Force a specific locale regardless of page language
const tutorialFormatter = new Intl.NumberFormat('zh-CN', { style: 'currency', currency: 'CNY' });
console.log(`Chinese example: ${tutorialFormatter.format(199.99)}`); // Output: ¥199.99

In some cases, you might want to use the user's preferred language:

// Use the user's preferred language
const browserLocale = navigator.language || 'ja-JP';
const formatter = new Intl.NumberFormat(browserLocale, { style: 'currency', currency: 'JPY' });

When you instantiate an Intl formatter, you can optionally pass one or more locale strings. The API will then select the most appropriate locale based on availability and preference, as the short example below demonstrates.
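To make that negotiation concrete, here is a small, self-contained example (the locale list is arbitrary):

// Pass locales in order of preference; the runtime picks the best supported match.
const formatter = new Intl.DateTimeFormat(['ban', 'id', 'en-US'], { dateStyle: 'long' });

// resolvedOptions() reveals which locale was actually negotiated.
console.log(formatter.resolvedOptions().locale);
// Likely "id" in most browsers, since Balinese ("ban") locale data is typically unavailable.

// You can also check support upfront:
console.log(Intl.DateTimeFormat.supportedLocalesOf(['ban', 'id', 'en-US']));
// e.g., ["id", "en-US"]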
Core Formatting Powerhouses

The Intl object exposes several constructors, each for a specific formatting task. Let's delve into the most frequently used ones, along with some powerful, often-overlooked gems.

1. Intl.DateTimeFormat: Dates And Times, Globally

Formatting dates and times is a classic i18n problem. Should it be MM/DD/YYYY or DD.MM.YYYY? Should the month be a number or a full word? Intl.DateTimeFormat handles all this with ease.

const date = new Date(2025, 5, 27, 14, 30, 0); // June 27, 2025, 2:30 PM (months are zero-indexed)

// Specific locale and options (e.g., long date, short time)
const options = {
  weekday: 'long',
  year: 'numeric',
  month: 'long',
  day: 'numeric',
  hour: 'numeric',
  minute: 'numeric',
  timeZoneName: 'shortOffset' // e.g., "GMT+8"
};

console.log(new Intl.DateTimeFormat('en-US', options).format(date));
// "Friday, June 27, 2025 at 2:30 PM GMT+8"

console.log(new Intl.DateTimeFormat('de-DE', options).format(date));
// "Freitag, 27. Juni 2025 um 14:30 GMT+8"

// Using dateStyle and timeStyle for common patterns
console.log(new Intl.DateTimeFormat('en-GB', { dateStyle: 'full', timeStyle: 'short' }).format(date));
// "Friday 27 June 2025 at 14:30"

console.log(new Intl.DateTimeFormat('ja-JP', { dateStyle: 'long', timeStyle: 'short' }).format(date));
// "2025年6月27日 14:30"

The flexibility of options for DateTimeFormat is vast, allowing control over year, month, day, weekday, hour, minute, second, time zone, and more.

2. Intl.NumberFormat: Numbers With Cultural Nuance

Beyond simple decimal places, numbers require careful handling: thousands separators, decimal markers, currency symbols, and percentage signs vary wildly across locales.

const price = 123456.789;

// Currency formatting
console.log(new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD' }).format(price));
// "$123,456.79" (auto-rounds)

console.log(new Intl.NumberFormat('de-DE', { style: 'currency', currency: 'EUR' }).format(price));
// "123.456,79 €"

// Units
console.log(new Intl.NumberFormat('en-US', { style: 'unit', unit: 'meter', unitDisplay: 'long' }).format(100));
// "100 meters"

console.log(new Intl.NumberFormat('fr-FR', { style: 'unit', unit: 'kilogram', unitDisplay: 'short' }).format(5.5));
// "5,5 kg"

Options like minimumFractionDigits, maximumFractionDigits, and notation (e.g., scientific, compact) provide even finer control.

3. Intl.ListFormat: Natural Language Lists

Presenting lists of items is surprisingly tricky. English uses "and" for conjunction and "or" for disjunction. Many languages have different conjunctions, and some require specific punctuation. This API simplifies a task that would otherwise require complex conditional logic:

const items = ['apples', 'oranges', 'bananas'];

// Conjunction ("and") list
console.log(new Intl.ListFormat('en-US', { type: 'conjunction' }).format(items));
// "apples, oranges, and bananas"

console.log(new Intl.ListFormat('de-DE', { type: 'conjunction' }).format(items));
// "apples, oranges und bananas" (with German items you'd get "Äpfel, Orangen und Bananen")

// Disjunction ("or") list
console.log(new Intl.ListFormat('en-US', { type: 'disjunction' }).format(items));
// "apples, oranges, or bananas"

console.log(new Intl.ListFormat('fr-FR', { type: 'disjunction' }).format(items));
// "apples, oranges ou bananas"

4. Intl.RelativeTimeFormat: Human-Friendly Timestamps

Displaying "2 days ago" or "in 3 months" is common in UI, but localizing these phrases accurately requires extensive data.
Intl.RelativeTimeFormat automates this.

const rtf = new Intl.RelativeTimeFormat('en-US', { numeric: 'auto' });
console.log(rtf.format(-1, 'day')); // "yesterday"
console.log(rtf.format(1, 'day')); // "tomorrow"
console.log(rtf.format(-7, 'day')); // "7 days ago"
console.log(rtf.format(3, 'month')); // "in 3 months"
console.log(rtf.format(-2, 'year')); // "2 years ago"

// French example:
const frRtf = new Intl.RelativeTimeFormat('fr-FR', { numeric: 'auto', style: 'long' });
console.log(frRtf.format(-1, 'day')); // "hier"
console.log(frRtf.format(1, 'day')); // "demain"
console.log(frRtf.format(-7, 'day')); // "il y a 7 jours"
console.log(frRtf.format(3, 'month')); // "dans 3 mois"

The numeric: 'always' option would force "1 day ago" instead of "yesterday".

5. Intl.PluralRules: Mastering Pluralization

This is arguably one of the most critical aspects of i18n. Different languages have vastly different pluralization rules (e.g., English has singular/plural; Arabic has zero, one, two, few, many...). Intl.PluralRules allows you to determine the plural category for a given number in a specific locale.

const prEn = new Intl.PluralRules('en-US');
console.log(prEn.select(0)); // "other" (for "0 items")
console.log(prEn.select(1)); // "one" (for "1 item")
console.log(prEn.select(2)); // "other" (for "2 items")

const prAr = new Intl.PluralRules('ar-EG');
console.log(prAr.select(0)); // "zero"
console.log(prAr.select(1)); // "one"
console.log(prAr.select(2)); // "two"
console.log(prAr.select(10)); // "few"
console.log(prAr.select(100)); // "other"

This API doesn't pluralize text directly, but it provides the essential classification needed to select the correct translation string from your message bundles. For example, if you have message keys like item.one and item.other, you'd use pr.select(count) to pick the right one, as sketched below.
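A minimal sketch of that selection pattern (the message table and helper are made up for the example, not a standard API):

// Hypothetical message bundle keyed by plural category.
const messages = {
  en: { 'item.one': '1 item', 'item.other': '{count} items' },
};

function formatItemCount(locale, count) {
  const category = new Intl.PluralRules(locale).select(count); // "one", "other", ...
  const template =
    messages[locale][`item.${category}`] ?? messages[locale]['item.other']; // fall back to "other"
  return template.replace('{count}', new Intl.NumberFormat(locale).format(count));
}

console.log(formatItemCount('en', 1)); // "1 item"
console.log(formatItemCount('en', 42)); // "42 items"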
6. Intl.DisplayNames: Localized Names For Everything

Need to display the name of a language, a region, or a script in the user's preferred language? Intl.DisplayNames is your comprehensive solution.

// Display language names in English
const langNamesEn = new Intl.DisplayNames(['en'], { type: 'language' });
console.log(langNamesEn.of('fr')); // "French"
console.log(langNamesEn.of('es-MX')); // "Mexican Spanish"

// Display language names in French
const langNamesFr = new Intl.DisplayNames(['fr'], { type: 'language' });
console.log(langNamesFr.of('en')); // "anglais"
console.log(langNamesFr.of('zh-Hans')); // "chinois (simplifié)"

// Display region names
const regionNamesEn = new Intl.DisplayNames(['en'], { type: 'region' });
console.log(regionNamesEn.of('US')); // "United States"
console.log(regionNamesEn.of('DE')); // "Germany"

// Display script names
const scriptNamesEn = new Intl.DisplayNames(['en'], { type: 'script' });
console.log(scriptNamesEn.of('Latn')); // "Latin"
console.log(scriptNamesEn.of('Arab')); // "Arabic"

With Intl.DisplayNames, you avoid hardcoding countless mappings for language names, regions, or scripts, keeping your application robust and lean.

Browser Support

You might be wondering about browser compatibility. The good news is that Intl has excellent support across modern browsers. All major browsers (Chrome, Firefox, Safari, Edge) fully support the core functionality discussed (DateTimeFormat, NumberFormat, ListFormat, RelativeTimeFormat, PluralRules, DisplayNames). You can confidently use these APIs without polyfills for the majority of your user base.

Conclusion: Embrace The Global Web With Intl

The Intl API is a cornerstone of modern web development for a global audience. It empowers front-end developers to deliver highly localized user experiences with minimal effort, leveraging the browser's built-in, optimized capabilities. By adopting Intl, you reduce dependencies, shrink bundle sizes, and improve performance, all while ensuring your application respects and adapts to the diverse linguistic and cultural expectations of users worldwide. Stop wrestling with custom formatting logic and embrace this standards-compliant tool!

It's important to remember that Intl handles the formatting of data. While incredibly powerful, it doesn't solve every aspect of internationalization. Content translation, bidirectional text (RTL/LTR), locale-specific typography, and deep cultural nuances beyond data formatting still require careful consideration. (I may write about these in the future!) However, for presenting dynamic data accurately and intuitively, Intl is the browser-native answer.

Further Reading & Resources

- MDN Web Docs: Intl namespace object, Intl.DateTimeFormat, Intl.NumberFormat, Intl.ListFormat, Intl.RelativeTimeFormat, Intl.PluralRules, Intl.DisplayNames
- ECMAScript Internationalization API Specification: the official ECMA-402 Standard
- Choosing a Language Tag (W3C)
  • Designing With AI, Not Around It: Practical Advanced Techniques For Product Design Use Cases
    smashingmagazine.com
AI is almost everywhere: it writes text, makes music, generates code, draws pictures, runs research, chats with you, and apparently even understands people better than they understand themselves?! It's a lot to take in. The pace is wild, and new tools pop up faster than anyone has time to try them. Amid the chaos, one thing is clear: this isn't hype, but structural change. According to the Future of Jobs Report 2025 by the World Economic Forum, one of the fastest-growing, most in-demand skills for the next five years is the ability to work with AI and Big Data. That applies to almost every role, including product design.

What do companies want most from their teams? Right, efficiency. And AI can make people way more efficient. We'd easily spend 3x more time on tasks like replying to our managers without AI helping out. We're learning to work with it, but many of us are still figuring out how to meet the rising bar. That's especially important for designers, whose work is all about empathy, creativity, critical thinking, and working across disciplines. It's a uniquely human mix. At least, that's what we tell ourselves.

Even as debates rage about AI's limitations, tools today (June 2025; the timestamp matters in this fast-moving space) already assist with research, ideation, and testing, sometimes better than expected. Of course, not everyone agrees. AI hallucinates, loses context, and makes things up. So how can both views exist at the same time? Very simple: because both are true. AI is deeply flawed and surprisingly useful. The trick is knowing how to work with its strengths while managing its weaknesses. The real question isn't whether AI is good or bad; it's how we, as designers, stay sharp, stay valuable, and stay in the loop.

Why Prompting Matters

Prompting matters more than most people realize because even small tweaks in how you ask can lead to radically different outputs. To see how this works in practice, let's look at a simple example. Imagine you want to improve the onboarding experience in your product. On one side, you have the prompt you send to the AI; on the other, the response you get back:

- Input: "How to improve onboarding in a SaaS product?" Output: Broad suggestions: checklists, empty states, welcome modals.
- Input: "How to improve onboarding in Product A's workspace setup flow?" Output: Suggestions focused on workspace setup.
- Input: "How to improve onboarding in Product A's workspace setup step to address user confusion?" Output: ~10 common pain points with targeted UX fixes for each.
- Input: "How to improve onboarding in Product A by redesigning the workspace setup screen to reduce drop-off, with detailed reasoning?" Output: ~10 paragraphs covering a specific UI change, rationale, and expected impact.

This side-by-side shows just how much even the smallest prompt details can change what AI gives you. Talking to an AI model isn't that different from talking to a person. If you explain your thoughts clearly, you get better understanding and communication overall. Advanced prompting is about moving beyond one-shot, throwaway prompts. It's an iterative, structured process of refining your inputs using different techniques so you can guide the AI toward more useful results.
It focuses on being intentional with every word you put in, giving the AI not just the task but also the path to approach it step by step, so it can actually do the job.Where basic prompting throws your question at the model and hopes for a quick answer, advanced prompting helps you explore options, evaluate branches of reasoning, and converge on clear, actionable outputs.But that doesnt mean simple prompts are useless. On the contrary, short, focused prompts work well when the task is narrow, factual, or time-sensitive. Theyre great for idea generation, quick clarifications, or anything where deep reasoning isnt required. Think of prompting as a scale, not a binary. The simpler the task, the faster a lightweight prompt can get the job done. The more complex the task, the more structure it needs.In this article, well dive into how advanced prompting can empower different product & design use cases, speeding up your workflow and improving your results whether youre researching, brainstorming, testing, or beyond. Lets dive in.Practical CasesIn the next section, well explore six practical prompting techniques that weve found most useful in real product design work. These arent abstract theories each one is grounded in hands-on experience, tested across research, ideation, and evaluation tasks. Think of them as modular tools: you can mix, match, and adapt them depending on your use case. For each, well explain the thinking behind it and walk through a sample prompt.Important note: The prompts youll see are not copy-paste recipes. Some are structured templates you can reuse with small tweaks; others are more specific, meant to spark your thinking. Use them as scaffolds, not scripts.1. Task Decomposition By JTBDTechnique: Role, Context, Instructions template + Checkpoints (with self-reflection)Before solving any problem, theres a critical step we often overlook: breaking the problem down into clear, actionable parts.Jumping straight into execution feels fast, but its risky. We might end up solving the wrong thing, or solving it the wrong way. Thats where GPT can help: not just by generating ideas, but by helping us think more clearly about the structure of the problem itself.There are many ways to break down a task. One of the most useful in product work is the Jobs To Be Done (JTBD) framework. Lets see how we can use advanced prompting to apply JTBD decomposition to any task.Good design starts with understanding the user, the problem, and the context. Good prompting? Pretty much the same. Thats why most solid prompts include three key parts: Role, Context, and Instructions. If needed, you can also add the expected format and any constraints.In this example, were going to break down a task into smaller jobs and add self-checkpoints to the prompt, so the AI can pause, reflect, and self-verify along the way.RoleAct as a senior product strategist and UX designer with deep expertise in Jobs To Be Done (JTBD) methodology and user-centered design. You think in terms of user goals, progress-making moments, and unmet needs similar to approaches used at companies like Intercom, Basecamp, or IDEO.ContextYou are helping a product team break down a broad user or business problem into a structured map of Jobs To Be Done. 
This decomposition will guide discovery, prioritization, and solution design.Task & Instructions[ DESCRIBE THE USER TASK OR PROBLEM ]Use JTBD thinking to uncover:The main functional job the user is trying to get done;Related emotional or social jobs;Sub-jobs or tasks users must complete along the way;Forces of progress and barriers that influence behavior.CheckpointsBefore finalizing, check yourself:Are the jobs clearly goal-oriented and not solution-oriented?Are sub-jobs specific steps toward the main job?Are emotional/social jobs captured?Are user struggles or unmet needs listed?If anythings missing or unclear, revise and explain what was added or changed.With a simple one-sentence prompt, youll likely get a high-level list of user needs or feature ideas. An advanced approach can produce a structured JTBD breakdown of a specific user problem, which may include:Main Functional Job: A clear, goal-oriented statement describing the primary outcome the user wants to achieve.Emotional & Social Jobs: Supporting jobs related to how the user wants to feel or be perceived during their progress.Sub-Jobs: Step-by-step tasks or milestones the user must complete to fulfill the main job.Forces of Progress: A breakdown of motivations (push/pull) and barriers (habits/anxieties) that influence user behavior.But these prompts are most powerful when used with real context. Try it now with your product. Even a quick test can reveal unexpected insights.2. Competitive UX AuditTechnique: Attachments + Reasoning Before Understanding + Tree of Thought (ToT)Sometimes, you dont need to design something new you need to understand what already exists.Whether youre doing a competitive analysis, learning from rivals, or benchmarking features, the first challenge is making sense of someone elses design choices. Whats the feature really for? Whos it helping? Why was it built this way?Instead of rushing into critique, we can use GPT to reverse-engineer the thinking behind a product before judging it. In this case, start by:Grabbing the competitors documentation for the feature you want to analyze.Save it as a PDF. Then head over to ChatGPT (or other models).Before jumping into the audit, ask it to first make sense of the documentation. This technique is called Reasoning Before Understanding (RBU). That means before you ask for critique, you ask for interpretation. This helps AI build a more accurate mental model and avoids jumping to conclusions.RoleYou are a senior UX strategist and cognitive design analyst. Your expertise lies in interpreting digital product features based on minimal initial context, inferring purpose, user intent, and mental models behind design decisions before conducting any evaluative critique.ContextYouve been given internal documentation and screenshots of a feature. The goal is not to evaluate it yet, but to understand what its doing, for whom, and why.Task & InstructionsReview the materials and answer:What is this feature for?Who is the intended user?What tasks or scenarios does it support?What assumptions does it make about the user?What does its structure suggest about priorities or constraints?Once you get the first reply, take a moment to respond: clarify, correct, or add nuance to GPTs conclusions. This helps align the models mental frame with your own.For the audit part, well use something called the Tree of Thought (ToT) approach. Tree of Thought (ToT) is a prompting strategy that asks the AI to think in branches. 
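If you find yourself reusing this Role / Context / Instructions / Checkpoints structure, it can be worth templating it. Here is a minimal sketch of a hypothetical helper (my illustration, not part of the original technique) that assembles the pieces into a single prompt string you can paste into any model:

// Hypothetical helper for assembling reusable structured prompts.
// The section names mirror the template above; everything else is arbitrary.
function buildPrompt({ role, context, instructions, checkpoints = [] }) {
  const sections = [
    `Role\n${role}`,
    `Context\n${context}`,
    `Task & Instructions\n${instructions}`,
  ];
  if (checkpoints.length > 0) {
    sections.push(
      "Checkpoints\nBefore finalizing, check yourself:\n" +
        checkpoints.map((item) => `- ${item}`).join("\n"),
    );
  }
  return sections.join("\n\n");
}

const jtbdPrompt = buildPrompt({
  role: "Act as a senior product strategist with deep JTBD expertise.",
  context: "You are helping a product team decompose a broad user problem.",
  instructions: "Break [TASK] into functional, emotional, and social jobs.",
  checkpoints: ["Are the jobs goal-oriented rather than solution-oriented?"],
});
console.log(jtbdPrompt);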
2. Competitive UX Audit

Technique: Attachments + Reasoning Before Understanding + Tree of Thought (ToT)

Sometimes, you don't need to design something new; you need to understand what already exists. Whether you're doing a competitive analysis, learning from rivals, or benchmarking features, the first challenge is making sense of someone else's design choices. What's the feature really for? Who's it helping? Why was it built this way?

Instead of rushing into critique, we can use GPT to reverse-engineer the thinking behind a product before judging it. In this case, start by:

- Grabbing the competitor's documentation for the feature you want to analyze.
- Saving it as a PDF. Then head over to ChatGPT (or another model).
- Before jumping into the audit, asking it to first make sense of the documentation.

This technique is called Reasoning Before Understanding (RBU). That means before you ask for critique, you ask for interpretation. This helps AI build a more accurate mental model and avoids jumping to conclusions.

Role
You are a senior UX strategist and cognitive design analyst. Your expertise lies in interpreting digital product features based on minimal initial context, inferring purpose, user intent, and mental models behind design decisions before conducting any evaluative critique.

Context
You've been given internal documentation and screenshots of a feature. The goal is not to evaluate it yet, but to understand what it's doing, for whom, and why.

Task & Instructions
Review the materials and answer:
- What is this feature for?
- Who is the intended user?
- What tasks or scenarios does it support?
- What assumptions does it make about the user?
- What does its structure suggest about priorities or constraints?

Once you get the first reply, take a moment to respond: clarify, correct, or add nuance to GPT's conclusions. This helps align the model's mental frame with your own.

For the audit part, we'll use something called the Tree of Thought (ToT) approach. Tree of Thought is a prompting strategy that asks the AI to think in branches. Instead of jumping to a single answer, the model explores multiple reasoning paths, compares outcomes, and revises logic before concluding, like tracing different routes through a decision tree. This makes it perfect for handling more complex UX tasks.

You are now performing a UX audit based on your understanding of the feature. You'll identify potential problems, alternative design paths, and trade-offs using a Tree of Thought approach, i.e., thinking in branches and comparing different reasoning paths before concluding.

or

- Convert your understanding of the feature into a set of Jobs-To-Be-Done statements from the user's perspective using a Tree of Thought approach.
- List implicit assumptions this feature makes about the user's behavior, workflow, or context using a Tree of Thought approach.
- Propose alternative versions of this feature that solve the same job using different interaction or flow mechanics using a Tree of Thought approach.

3. Ideation With An Intellectual Opponent

Technique: Role Conditioning + Memory Update

When you're working on creative or strategic problems, there's a common trap: AI often just agrees with you or tries to please your way of thinking. It treats your ideas like gospel and tells you they're great even when they're not. So how do you avoid this? How do you get GPT to challenge your assumptions and act more like a critical thinking partner? Simple: tell it to, and ask it to remember.

Instructions
From now on, remember to follow this mode unless I explicitly say otherwise. Do not take my conclusions at face value. Your role is not to agree or assist blindly, but to serve as a sharp, respectful intellectual opponent. Every time I present an idea, do the following:
- Interrogate my assumptions: What am I taking for granted?
- Present counter-arguments: Where could I be wrong, misled, or overly confident?
- Test my logic: Is the reasoning sound, or are there gaps, fallacies, or biases?
- Offer alternatives: Not for the sake of disagreement, but to expand perspective.
- Prioritize truth and clarity over consensus: Even when it's uncomfortable.
Maintain a constructive, rigorous, truth-seeking tone. Don't argue for the sake of it. Argue to sharpen thought, expose blind spots, and help me reach clearer, stronger conclusions. This isn't a debate. It's a collaboration aimed at insight.

4. Requirements For Concepting

Technique: Requirement-Oriented + Meta-prompting

This one deserves a whole article on its own, but let's lay the groundwork here. When you're building quick prototypes or UI screens using tools like v0, Bolt, Lovable, UX Pilot, etc., your prompt needs to be better than most PRDs you've worked with. Why? Because the output depends entirely on how clearly and specifically you describe the goal.

The catch? Writing that kind of prompt is hard. So instead of jumping straight to the design prompt, try writing a meta-prompt first. That is a prompt that asks GPT to help you write a better prompt. Prompting about prompting; prompt-ception, if you will.

Here's how to make that work: feed GPT what you already know about the app or the screen. Then ask it to treat things like information architecture, layout, and user flow as variables it can play with. That way, you don't just get one rigid idea; you get multiple concept directions to explore.

Role
You are a product design strategist working with AI to explore early-stage design concepts.

Goal
Generate 3 distinct prompt variations for designing a "Daily Wellness Summary" single screen in a mobile wellness tracking app for Lovable/Bolt/v0. Each variation should experiment with a different Information Architecture and Layout Strategy. You don't need to fully specify the IA or layout; just take a different angle in each prompt. For example, one may prioritize user state, another may prioritize habits or recommendations, and one may use a card layout while another uses a scroll feed.

User context
The target user is a busy professional who checks this screen once or twice a day (morning/evening) to log their mood, energy, and sleep quality, and to receive small nudges or summaries from the app.

Visual style
Keep the tone calm and approachable.

Format
Each of the 3 prompt variations should be structured clearly and independently.

Remember: the key difference between the three prompts should be the underlying IA and layout logic. You don't need to over-explain; just guide the design generator toward different interpretations of the same user need.

5. From Cognitive Walkthrough To Testing Hypothesis

Technique: Casual Tree of Thought + Causal Reasoning + Multi-Roles + Self-Reflection

A cognitive walkthrough is a powerful way to break down a user action and check whether the steps are intuitive. Example: "User wants to add a task." Do they know where to click? What to do next? Do they know it worked?

We've found this technique super useful for reviewing our own designs. Sometimes there's already a mockup; other times we're still arguing with a PM about what should go where. Either way, GPT can help. Here's an advanced way to run that process:

Context
You've been given a screenshot of a screen where users can create new tasks in a project management app. The main action the user wants to perform is "add a task". Simulate behavior from two user types: a beginner with no prior experience and a returning user familiar with similar tools.

Task & Instructions
Go through the UI step by step and evaluate:
- Will the user know what to do at each step?
- Will they understand how to perform the action?
- Will they know they've succeeded?
For each step, consider alternative user paths (if multiple interpretations of the UI exist). Use a casual Tree-of-Thought method. At each step, reflect: what assumptions is the user making here? What visual feedback would help reduce uncertainty?

Format
Use a numbered list for each step. For each, add observations, possible confusions, and UX suggestions.

Limits
Don't assume prior knowledge unless it's visually implied. Do not limit analysis to a single user type.

Cognitive walkthroughs are great, but they get even more useful when they lead to testable hypotheses. After running the walkthrough, you'll usually uncover moments that might confuse users. Instead of leaving that as a guess, turn those into concrete UX testing hypotheses. We ask GPT to not only flag potential friction points, but to help define how we'd validate them with real users: using a task, a question, or observable behavior.

Task & Instructions
Based on your previous cognitive walkthrough:
- Extract all potential usability hypotheses from the walkthrough.
- For each hypothesis:
  - Assess whether it can be tested through moderated or unmoderated usability testing.
  - Explain what specific UX decision or design element may cause this issue. Use causal reasoning.
- For testable hypotheses:
  - Propose a specific usability task or question.
  - Define a clear validation criterion (how you'll know if the hypothesis is confirmed or disproved).
  - Evaluate feasibility and signal strength of the test (e.g., how easy it is to test, and how confidently it can validate the hypothesis).
  - Assign a priority score based on Impact, Confidence, and Ease (ICE).

Limits
Don't invent hypotheses not rooted in your walkthrough output. Only propose tests where user behavior or responses can provide meaningful validation. Skip purely technical or backend concerns.

6. Cross-Functional Feedback

Technique: Multi-Roles

Good design is co-created. And good designers are used to working with cross-functional teams: PMs, engineers, analysts, QAs, you name it. Part of the job is turning scattered feedback into clear action items. Earlier, we talked about how giving AI a role helps sharpen its responses. Now let's level that up: what if we give it multiple roles at once? This is called multi-role prompting. It's a great way to simulate a design review with input from different perspectives. You get quick insights and a more well-rounded critique of your design.

Role
You are a cross-functional team of experts evaluating a new dashboard design:
- PM (focus: user value & prioritization)
- Engineer (focus: feasibility & edge cases)
- QA tester (focus: clarity & testability)
- Data analyst (focus: metrics & clarity of reporting)
- Designer (focus: consistency & usability)

Context
The team is reviewing a mockup for a new analytics dashboard for internal use.

Task & Instructions
For each role:
- What stands out immediately?
- What concerns might this role have?
- What feedback or suggestions would they give?

Designing With AI Is A Skill, Not A Shortcut

By now, you've seen that prompting isn't just about typing better instructions. It's about designing better thinking. We've explored several techniques, and each is useful in different contexts:

- Role + Context + Instructions + Constraints: Anytime you want consistent, focused responses (especially in research, decomposition, and analysis).
- Checkpoints / Self-verification: When accuracy, structure, or layered reasoning matters. Great for complex planning or JTBD breakdowns.
- Reasoning Before Understanding (RBU): When input materials are large or ambiguous (like docs or screenshots). Helps reduce misinterpretation.
- Tree of Thought (ToT): When you want the model to explore options, backtrack, and compare. Ideal for audits, evaluations, or divergent thinking.
- Meta-prompting: When you're not sure how to even ask the right question. Use it early in fuzzy or creative concepting.
- Multi-role prompting: When you need well-rounded, cross-functional critique or want to simulate team feedback.
- Memory-updated opponent prompting: When you want to challenge your own logic, uncover blind spots, or push beyond echo chambers.

But even the best techniques won't matter if you use them blindly, so ask yourself:

Do I need precision or perspective right now?
- Precision? Try Role + Checkpoints for clarity and control.
- Perspective? Use Multi-Role or Tree of Thought to explore alternatives.

Should the model reflect my framing, or break it?
- Reflect it? Use Role + Context + Instructions.
- Break it? Try Opponent prompting to challenge assumptions.

Am I trying to reduce ambiguity, or surface complexity?
- Reduce ambiguity? Use Meta-prompting to clarify your ask.
- Surface complexity? Go with ToT or RBU to expose hidden layers.

Is this task about alignment, or exploration?
- Alignment? Use Multi-Roles prompting to simulate consensus.
- Exploration? Use Cognitive Walkthrough to push deeper.

Remember, you don't need a long prompt every time. Use detail when the task demands it, not out of habit. AI can do a lot, but it reflects the shape of your thinking. And prompting is how you shape it. So don't just prompt better. Think better. And design with AI, not around it.
  • Optimizing PWAs For Different Display Modes
    smashingmagazine.com
Progressive web apps (PWA) are a fantastic way to turn web applications into native-like, standalone experiences. They bridge the gap between websites and native apps, but this transformation can introduce design challenges that require thoughtful consideration.

We define our PWAs with a manifest file. In our PWA's manifest, we can select from a collection of display modes, each offering a different level of browser interface visibility:

- fullscreen: Hides all browser UI, using the entire display.
- standalone: Looks like a native app, hiding browser controls but keeping system UI.
- minimal-ui: Shows minimal browser UI elements.
- browser: Standard web browser experience with full browser interface.

Oftentimes, we want our PWAs to feel like apps rather than a website in a browser, so we set the display manifest member to one of the options that hides the browser's interface, such as fullscreen or standalone. This is fantastic for helping make our applications feel more at home, but it can introduce some issues we wouldn't usually consider when building for the web.

It's easy to forget just how much functionality the browser provides to us. Things like forward/back buttons, the ability to refresh a page, search within pages, or even manipulate, share, or copy a page's URL are all browser-provided features that users can lose access to when the browser's UI is hidden. There is also the case of things that we display on websites that don't necessarily translate to app experiences. Imagine a user deep into a form with no back button, trying to share a product page without the ability to copy a URL, or hitting a bug with no refresh button to bail them out!

Much like how we make different considerations when designing for the web versus designing for print, we need to make considerations when designing for independent experiences rather than browser-based experiences by tailoring the content and user experience to the medium. Thankfully, we're provided with plenty of ways to customise the web.

Using Media Queries To Target Display Modes

We use media queries all the time when writing CSS. Whether it's switching up styles for print or setting breakpoints for responsive design, they're commonplace in the web developer's toolkit. Each of the display modes discussed previously can be used as a media query to alter the appearance of documents depending on how they are being displayed.

Media queries such as @media (min-width: 1000px) tend to get the most use for setting breakpoints based on the viewport size, but they're capable of so much more. They can handle print styles, device orientation, contrast preferences, and a whole ton more. In our case, we're interested in the display-mode media feature. Display mode media queries correspond to the current display mode.

Note: While we may set display modes in our manifest, the actual display mode may differ depending on browser support.

These media queries directly reference the current mode:

- @media (display-mode: standalone) will only apply to pages set to standalone mode.
- @media (display-mode: fullscreen) applies to fullscreen mode. It is worth noting that this also applies when using the Fullscreen API.
- @media (display-mode: minimal-ui) applies to minimal UI mode.
- @media (display-mode: browser) applies to standard browser mode.

It is also worth keeping an eye out for the window-controls-overlay and tabbed display modes. At the time of writing, these two display modes are experimental and can be used with display_override. display_override is a member of our PWA's manifest, like display, but provides some extra options and power. display has a predetermined fallback chain (fullscreen -> standalone -> minimal-ui -> browser) that we can't change, but display_override allows setting a fallback order of our choosing, like the following:

"display_override": ["fullscreen", "minimal-ui"]

window-controls-overlay can only apply to PWAs running on a desktop operating system. It makes the PWA take up the entire window, with window control buttons appearing as an overlay. Meanwhile, tabbed is relevant when there are multiple applications within a single window. In addition to these, there is also the picture-in-picture display mode that applies to (you guessed it) picture-in-picture modes.

We use these media queries exactly as we would any other media query. To show an element with the class .pwa-only when the display mode is standalone, we could do this:

.pwa-only {
  display: none;
}

@media (display-mode: standalone) {
  .pwa-only {
    display: block;
  }
}

If we wanted to show the element when the display mode is standalone or minimal-ui, we could do this:

@media (display-mode: standalone), (display-mode: minimal-ui) {
  .pwa-only {
    display: block;
  }
}

As great as it is, sometimes CSS isn't enough. In those cases, we can also reference the display mode and make necessary adjustments with JavaScript:

const isStandalone = window.matchMedia("(display-mode: standalone)").matches;

// Listen for display mode changes
window.matchMedia("(display-mode: standalone)").addEventListener("change", (e) => {
  if (e.matches) {
    // App is now in standalone mode
    console.log("Running as PWA");
  }
});

Practical Applications

Now that we know how to make display modifications depending on whether users are using our web app as a PWA or in a browser, we can have a look at how we might put these newly learnt skills to use.

Tailoring Content For PWA Users

Users who have installed an app as a PWA are already converted, so you can tweak your app to tone down the marketing speak and focus on the user experience. Since these users have demonstrated commitment by installing your app, they likely don't need promotional content or installation prompts.

Display More Options And Features

You might need to directly expose more things in PWA mode, as people won't be able to access the browser's settings as easily when the browser UI is hidden. Features like changing font sizing, switching between light and dark mode, bookmarks, sharing, tabs, etc., might need an in-app alternative.

Platform-Appropriate Features

There are features you might not want on your web app because they feel out of place, but that you might want on your PWA. A good example is the bottom navigation bar, which is common in native mobile apps thanks to the easier reachability it provides, but uncommon on websites. People sometimes print websites, but they very rarely print apps. Consider whether features like print buttons should be hidden in PWA mode.

Install Prompts

A common annoyance is a prompt to install a site as a PWA appearing when the user has already installed the site. Ideally, the browser will provide an install prompt of its own if our PWA is configured correctly, but not all browsers do, and it can be finicky. MDN has a fantastic guide on creating a custom button to trigger the installation of a PWA, but it might not fit our needs. We can improve things by hiding install prompts with our media query, or by detecting the current display mode with JavaScript and forgoing triggering popups in the first place.
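As a rough illustration of that second option, here is a minimal sketch that defers Chromium's beforeinstallprompt event (not supported in all browsers, so treat it as progressive enhancement) and only wires up a custom install button when the app isn't already installed; the .install-button element is a hypothetical piece of markup:

// Assumes an element like <button class="install-button" hidden>Install</button>
const isInstalled = window.matchMedia(
  "(display-mode: standalone), (display-mode: minimal-ui)"
).matches;

let deferredPrompt = null;
const installButton = document.querySelector(".install-button");

window.addEventListener("beforeinstallprompt", (event) => {
  // Suppress the browser's own mini-infobar
  event.preventDefault();
  if (isInstalled || !installButton) return;
  // Keep the event around so we can trigger it from our own UI later
  deferredPrompt = event;
  installButton.hidden = false;
});

installButton?.addEventListener("click", async () => {
  if (!deferredPrompt) return;
  deferredPrompt.prompt();
  const { outcome } = await deferredPrompt.userChoice; // "accepted" or "dismissed"
  console.log(`Install prompt ${outcome}`);
  deferredPrompt = null;
  installButton.hidden = true;
});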
We could even set this up as a reusable utility class so that anything we don't want displayed when the app is installed as a PWA can be hidden with ease:

/* Utility class to hide elements in PWA mode */
.hide-in-pwa {
  display: block;
}

@media (display-mode: standalone), (display-mode: minimal-ui) {
  .hide-in-pwa {
    display: none !important;
  }
}

Then in your HTML:

<div class="install-prompt hide-in-pwa">
  <button>Install Our App</button>
</div>

<div class="browser-notice hide-in-pwa">
  <p>For the best experience, install this as an app!</p>
</div>

We could also do the opposite and create a utility class to make elements only show when in a PWA, as we discussed earlier.

Strategic Use Of Scope And Start URL

Another way to hide content from your site is to set the scope and start_url properties. These aren't media queries as we've discussed, but they should be considered as ways to present different content depending on whether a site is installed as a PWA. Here is an example of a manifest using these properties:

{
  "name": "Example PWA",
  "scope": "/dashboard/",
  "start_url": "/dashboard/index.html",
  "display": "standalone",
  "icons": [
    {
      "src": "icon.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}

scope here defines the top level of the PWA. When users leave the scope of your PWA, they'll still have an app-like interface but gain access to browser UI elements. This can be useful if you've got certain parts of your app that you still want to be part of the PWA but which aren't necessarily optimised or making the necessary considerations.

start_url defines the URL a user will be presented with when they open the application. This is useful if, for example, your app has marketing content at example.com and a dashboard at example.com/dashboard/index.html. It is likely that people who have installed the app as a PWA don't need the marketing content, so you can set the start_url to /dashboard/index.html so the app starts on that page when they open the PWA.

Enhanced Transitions

View transitions can feel unfamiliar, out of place, and a tad gaudy on the web, but they are a common feature of native applications. We can set up PWA-only view transitions by wrapping the relevant CSS appropriately:

@media (display-mode: standalone) {
  @view-transition {
    navigation: auto;
  }
}

If you're really ambitious, you could also tweak the design of a site entirely to fit more closely with native design systems when running as a PWA by pairing a check for the display mode with a check for the device and/or browser in use as needed.

Browser Support And Testing

Browser support for display mode media queries is good and extensive. However, it's worth noting that Firefox doesn't have PWA support, and Firefox for Android only displays PWAs in browser mode, so you should make the necessary considerations. Thankfully, progressive enhancement is on our side. If we're dealing with a browser lacking support for PWAs or these media queries, we'll be treated to graceful degradation.

Testing PWAs can be challenging because every device and browser handles them differently. Each display mode behaves slightly differently in every browser and OS combination. Unfortunately, I don't have a silver bullet to offer you with regard to this. Browsers don't have a convenient way to simulate display modes for testing, so you'll have to test out your PWA on different devices, browsers, and operating systems to be sure everything works everywhere it should, as it should.

Recap

Using a PWA is a fundamentally different experience from using a web app in the browser, so considerations should be made. display-mode media queries provide a powerful way to create truly adaptive progressive web apps that respond intelligently to their installation and display context. By leveraging these queries, we can do the following:

- Hide redundant installation prompts for users who have already installed the app,
- Provide appropriate navigation aids when browser controls are unavailable,
- Tailor content and functionality to match user expectations in different contexts,
- Create more native-feeling experiences that respect platform conventions, and
- Progressively enhance the experience for committed users.

The key is remembering that PWA users in standalone mode have different needs and expectations than standard website visitors. By detecting and responding to display modes, we can create experiences that feel more polished, purposeful, and genuinely app-like. As PWAs continue to mature, thoughtful implementation and tailoring will become increasingly important for creating truly compelling app experiences on the web. If you're itching for even more information and PWA tips and tricks, check out Ankita Masand's Extensive Guide To Progressive Web Applications.

Further Reading On SmashingMag

- Creating A Magento PWA: Customizing Themes vs. Coding From Scratch, Alex Husar
- How To Optimize Progressive Web Apps: Going Beyond The Basics, Gert Svaiko
- How To Decide Which PWA Elements Should Stick, Suzanne Scacca
- Uniting Web And Native Apps With 4 Unknown JavaScript APIs, Juan Diego Rodríguez
  • From Line To Layout: How Past Experiences Shape Your Design Career
    smashingmagazine.com
Design career origin stories often sound clean and linear: a degree in Fine Arts, a lucky internship, or a first job that launches a linear, upward path. But what about those whose paths were not so straight? The ones who came from service, retail, construction, or even firefighting, whose messy, winding paths didn't begin right out of design school, and who learned service instincts long before learning design tools?

I earned my Associate in Science way later than planned, after 15 years in fine dining, which I once dismissed as a detour delaying my "real" career. But in hindsight, it was anything but. Those years built skills and instincts I still rely on daily in meetings, design reviews, and messy mid-project pivots.

Your Past Is Your Advantage

I still have the restaurant dream. Whenever I'm overwhelmed or deep in a deadline, it comes back: I'm the only one running the restaurant floor. The grill is on fire. There's no clean glassware. Everyone needs their check, their drink, and their table turned. I wake up sweating, and I ask myself, "Why am I still having restaurant nightmares 15 years into a design career?" Because those jobs wired themselves into how I think and work.

Those years weren't just a job but high-stakes training in adaptability, anticipation, and grace under pressure. They built muscle memory: ways of thinking, reacting, and solving problems that still appear daily in my design work. They taught me to adapt, connect with people, and move with urgency and grace.

But those same instincts rooted in nightmares can trip you up if you're unaware. Speed can override thoughtfulness. Constant anticipation can lead to over-complication. The pressure to polish can push you to over-deliver too soon. Embracing your past also means examining it: recognizing when old habits serve you and when they don't. With reflection, those experiences can become your greatest advantage.

Lessons From The Line

These aren't abstract comparisons. They're instincts built through repetition and real-world pressure, and they show up daily in my design process. Here are five moments from restaurant life that shaped how I think, design, and collaborate today.

1. Reading The Room

Reading a customer's mood begins as soon as they sit down. Through years of trial and error, I refined my understanding of subtle cues, like seating delays indicating frustration, or menus set aside suggesting they want to enjoy cocktails. Adapting my approach based on these signals became instinctual, emerging from countless moments of observation.

What I Learned

The subtleties of reading a client aren't so different in product design. Contexts differ, but the cues remain similar: project specifics, facial expressions, tone of voice, lack of engagement, or even the word salad of client feedback. With time, these signals become easier to spot, and you learn to ask better questions, challenge assumptions, or offer alternate approaches before misalignment grows. Whether a client is energized and all-in or hesitant and constrained, reading those cues early can make all the difference.

Those instincts, like constant anticipation and early intervention, served me well in fine dining, but they can hinder the design process if I'm not in tune with how I'm reacting. Jumping in too early can lead to over-complicating the design process, solving problems that haven't been voiced (yet), or stepping on others' roles. I've had to learn to pause, check in with the team, and trust the process to unfold more collaboratively.

How I Apply This Today

- Guide direction with focused options. Early on, share 2-3 meaningful variations, like style tiles or small component explorations, to shape the conversation and avoid overwhelm.
- Flag misalignment fast. If something feels off, raise it early and loop in the right people.
- Be intentional about workshop and deliverable formats. Structure or space? It depends on what helps the client open up and share.
- Pause before jumping in. A sticky note on my screen ("Pause") helps me slow down and check assumptions.

2. Speed Vs. Intentionality

In fine dining, multitasking wasn't just helpful, it was survival. Every night demanded precision timing, orchestrating every meal step, from the first drink poured to the final dessert plated. The soufflé, for example, was a constant test. It takes precisely 45 minutes, no more, no less. If the guests lingered over appetizers or finished their entrées too early, that soufflé risked collapse.

But fine dining taught me how to handle that volatility. I learned to manage timing proactively, mastering small strategies: an amuse-bouche to buy the kitchen precious minutes, a complimentary glass of champagne to slow a too-quickly paced meal. Multitasking meant constantly adjusting in real time, keeping a thousand tiny details aligned even when, behind the scenes, chaos loomed.

What I Learned

Multitasking is a given in product design, just in a different form. While the pressure is less immediate, it is more layered, as designers often juggle multiple projects, overlapping timelines, differing stakeholder expectations, and evolving product needs simultaneously. That restaurant instinct to keep numerous plates spinning at the same time? It's how I handle shifting priorities, constant Slack pings, regular Figma updates, and unexpected client feedback without losing sight of the big picture.

The hustle and pace of fine dining hardwired me to associate speed with success. But in design, speed can sometimes undermine depth. Jumping too quickly into a solution might mean missing the real problem or polishing the wrong idea. I've learned that staying in motion isn't always the goal. Unlike a fast-paced service window, product design invites experimentation and course correction. I've had to quiet the internal timer and lean into design with a slower, more intentional nature.

How I Apply This Today

- Make space for inspiration. Set aside time for untasked exploration outside the norm (magazines, bookstores, architecture, or gallery visits) before jumping into design.
- Build in pause points. Plan breaks between design rounds and schedule reviews after a weekend gap to return with fresh eyes.
- Stay open to starting over. Let go of work that isn't working, even full comps. Starting fresh often leads to better ideas.

3. Presentation Matters

Presentation isn't just a finishing touch in fine dining; it's everything. It's the mint leaf delicately placed atop a dessert, the raspberry glacé cascading across the perfectly off-centered espresso cake. The presentation engages every sense: the smell of rare imported truffles on your truffle fries, or the meticulous choreography of four servers placing entrées in front of diners simultaneously, creating a collective "wow" moment. An excellent presentation shapes diners' emotional connection with their meal, and that experience directly impacts how generously they spend and, ultimately, your success.

What I Learned

A product design presentation, from the initial concept to the handoff, carries that same power. Introducing a new homepage design can feel mechanical or magical, depending entirely on how you frame and deliver it. Just like careful plating shapes a diner's experience, clear framing and confident storytelling shape how design is received. Beyond the initial introduction, explain the why behind your choices. Connect patterns to the organic elements of the brand's identity and highlight how users will intuitively engage with each section. Presentation isn't just about aesthetics; it helps clients connect with the work, understand its value, and get excited to share it.

The pressure to get everything right the first time, to present a pixel-perfect comp that wows immediately, is intense. Sometimes, an excellent presentation isn't about perfection; it's about pacing, storytelling, and allowing the audience to see themselves in the work. I've had to let go of the idea that polish is everything and instead focus on the why, describing it with clarity, confidence, and connection.

How I Apply This Today

- Frame the story first. Lead with the why behind the work before showing the what. It sets the tone and invites clients into the design.
- Keep presentations polished. Share fewer, more intentional concepts to reduce distractions and keep focus.
- Skip the jargon. Clients aren't designers. Use clear, relatable terms. Say "section" instead of "component", or "repeatable element" instead of "pattern".
- Bring designs to life. Use motion, prototypes, and real content to add clarity, energy, and brand relevance.

5. Composure Under Pressure

In fine dining, pressure isn't an occasional event; it's the default setting. Every night is high stakes. Timing is tight, expectations are sky-high, and mistakes are rarely forgiven. Composure becomes your edge. You don't show panic when the kitchen is backed up or when a guest sends a dish back mid-rush. You pivot. You delegate. You anticipate. Some nights, the only thing that kept things on track was staying calm and thinking clearly.

"This notion of problem solving and decision making is key to being a great designer. I think that we need to get really strong at problem identification and then prioritization. All designers are good problem solvers, but the really great designers are strong problem finders."
- Jason Cyr, How being a Firefighter made me a better Designer Thinker

What I Learned

The same principle applies to product design. When pressure mounts (tight timelines, conflicting feedback, or unclear priorities), your ability to stay composed can shift the energy of the entire project. Composure isn't just about being calm; it's about being adaptable and responsive without reacting impulsively. It helps you hold space for feedback, ask better questions, and move forward with clarity instead of chaos.

There have also been plenty of times when a client doesn't resonate with a design, which can feel crushing. You can easily take it personally and internalize the rejection, or you can pause, listen, and course-correct. I've learned to focus on understanding the root of the feedback. Often, what seems like a rejection is just discomfort with a small detail, which in most cases can be easily corrected.

Perfection was the baseline in restaurants, and pressure drove polish. In design, that mindset can lead to overinvesting in perfection too soon or freezing under critique. I've had to unlearn the idea that success means getting everything right the first time. Now I see messy collaboration and gradual refinement as a mark of success, not failure.

How I Apply This Today

- Use live design to unblock. When timelines are tight and feedback goes in circles, co-designing in real time helps break through stuck points and move forward quickly.
- Turn critique into clarity. Listen for what's underneath the feedback, then ask clarifying questions or repeat back what you're hearing to align before acting.
- Pause when stress builds. If you feel reactive, take a moment to regroup before responding.
- Frame changes as progress. Normalize iteration as part of the process, not a design failure.

Would I Go Back?

I still dream about the restaurant floor. But now, I see it as a reminder not of where I was stuck, but of where I perfected the instincts I use today. If you're someone who came to design from another path, try asking yourself:

- When do I feel strangely at ease while others panic?
- What used to feel like just part of the job, but now feels like a superpower?
- Where do I get frustrated because my instincts are different, and maybe sharper?
- What kinds of group dynamics feel easy to me that others struggle with?
- What strengths would not exist in me today if I hadn't lived that past life?

Once you see the patterns, start using them. Name your edge. Talk about your background as an asset: in intros, portfolios, interviews, or team retrospectives. When projects get messy, lean into what you already know how to do. Trust your instincts. They're real, and they're earned. But balance them, too. Stay aware of when your strengths could become blind spots, like speed overriding thoughtfulness. That kind of awareness turns experience into a tool, not a trigger. Your past doesn't need to look like anyone else's. It just needs to teach you something.

Further Reading

- If I Was Starting My Career Today: Thoughts After 15 Years Spent In UX Design (Part One, Part Two), by Andrii Zhdan (Smashing Magazine)
  In this two-part series, Andrii Zhdan outlines common challenges faced at the start of a design career and offers advice to smooth your journey, based on insights from his experience hiring designers.
- Overcoming Imposter Syndrome By Developing Your Own Guiding Principles, by Luis Ouriach (Smashing Magazine)
  Unfortunately, not everyone has access to a mentor or a guide at the start of their design career, which is why we often have to work things out by ourselves. In this article, Luis Ouriach tries to help you in this task so that you can walk into design critique meetings with more confidence and really deliver the best representation of your ideas.
- Why Designers Get Stuck In The Details And How To Stop, by Nikita Samutin (Smashing Magazine)
  Designers love to craft, but polishing pixels before the problem is solved is a time sink. This article pinpoints the five traps that lure us into premature detail and then hands you a rescue plan to refocus on goals, ship faster, and keep your craft where it counts.
- Rediscovering The Joy Of Design, by Pratik Joglekar (Smashing Magazine)
  Pratik Joglekar takes a philosophical approach to remind designers of the lost joy within themselves by placing massive importance on mindfulness, introspection, and forward-looking.
- Lessons Learned As A Designer-Founder, by Dave Feldman (Smashing Magazine)
  In this article, Dave Feldman shares the lessons learned and the experiments he has done as a multidisciplinary designer-founder-CEO at an early-stage startup.
- How Designers Should Ask For (And Receive) High-Quality Feedback, by Andy Budd (Smashing Magazine)
  Designers often complain about the quality of feedback they get from senior stakeholders without realizing it's usually because of the way they initially framed the request. In this article, Andy Budd shares a better way of requesting feedback: rather than sharing a linear case study that explains every design revision, the first thing to do would be to better frame the problem.
- How being a Firefighter made me a better Designer Thinker, by Jason Cyr (Medium)
  "The ability to come upon a situation and very quickly start evaluating information, asking questions, making decisions, and formulating a plan is a skill that every firefighter learns to develop, especially as you rise through the ranks and start leading others."
- Advice for making the most of an indirect career path to design, by Heidi Meredith (Adobe Express Growth)
  "I didn't know anything about design until after I graduated from the University of California, Santa Cruz, with a degree in English Literature/Creative Writing. A mere three months into it, though, I realized I didn't want to write books; I wanted to design them."

I want to express my deep gratitude to Sara Wachter-Boettcher, whose coaching helped me find the clarity and confidence to write this piece and, more importantly, to move forward with purpose in both life and work. And to Lea Alcantara, my design director at Fueled, for being a steady creative force and an inspiring example of thoughtful leadership.
  • The Psychology Of Color In UX And Digital Products
    smashingmagazine.com
Color plays a pivotal role in crafting compelling user experiences and successful digital products. It's far more than just aesthetics; color strategically guides users, establishes brand identity, and evokes specific emotions.

Beyond functionality, color is also a powerful tool for brand recognition and emotional connection. Consistent use of brand colors across a digital product reinforces identity and builds trust. Different hues carry cultural and psychological associations, allowing designers to subtly influence user perception and create the desired mood. A thoughtful and deliberate approach to color in UX design elevates the user experience, strengthens brand presence, and contributes significantly to the overall success and impact of digital products. In this article, we will talk about the importance of color and why it matters for creating emotional connections and delivering consistent and accessible digital products.

Well-chosen color palettes enhance usability by creating visual hierarchies, highlighting interactive elements, and providing crucial feedback on screens. For instance, a bright color might draw attention to a call-to-action button, while consistent color coding can help users navigate complex interfaces intuitively. Color also contributes significantly to accessibility, ensuring that users with visual impairments can still effectively interact with digital products. By carefully considering contrast ratios and providing alternative visual cues, designers can create inclusive experiences that cater to a wider audience. The colors we choose are the silent language of our digital products, and speaking it fluently is essential for success.

Communicating Brand Identity Through Color In The Digital Space

A thoughtfully curated and vibrant color palette becomes a critical differentiator, allowing a brand to stand out amidst the digital noise and cultivate stronger connections with the audience. Far beyond mere decoration, color acts as a visual shorthand, instantly conveying a brand's personality, its underlying values, and its unique essence. According to the American Marketing Association, vibrant colors, in particular, possess an inherent magnetism, drawing the eye and etching themselves into memory within the online environment. They infuse the brand with energy and dynamism, projecting approachability and memorability in a way that more muted tones often cannot.

Consistency: The Core Of Great Design

Consistency is important because it fosters trust and familiarity, allowing users to quickly identify and connect with the brand in the online landscape. The strategic deployment of vibrant colors is especially crucial for brands seeking to establish themselves and flourish within the digital ecosystem. In the absence of physical storefronts or tangible in-person interactions, visual cues become paramount in shaping user perception and building brand recognition. A carefully selected primary color, supported by a complementary and equally energetic secondary palette, can become synonymous with a brand's digital presence. A consistent application of the right colors across different digital touchpoints, from the logo and website design to the user interface of an app and engaging social media campaigns, creates a cohesive and instantly recognizable visual language.

Several sources and professionals agree that the psychology behind colors plays a significant role in shaping brand perception. The publication Insights Psychology, for instance, explains how colors can create emotional and behavioral responses. Vibrant colors often evoke strong emotions and associations. A bold, energetic red, for example, might communicate passion and excitement, while a bright, optimistic yellow could convey innovation and cheerfulness. By consciously aligning their color choices with their brand values and target audience preferences, digitally-native brands can create a powerful emotional resonance.

Beyond Aesthetics: How Color Psychologically Impacts User Behavior In Digital

As designers working with digital products, we've learned that color is far more than a superficial layer of visual appeal. It's a potent, often subconscious, force that shapes how users interact with and feel about the digital products we build. We're not just painting pixels; we're conducting a psychological symphony, carefully selecting each hue to evoke specific emotions, guide behavior, and ultimately forge a deeper connection with the user.

The initial allure of a color palette might be purely aesthetic, but its true power lies in its ability to bypass conscious thought and tap directly into our emotional core. Think about the subtle unease that might creep in when encountering a predominantly desaturated interface for a platform promising dynamic content, or the sense of calm that washes over you when a learning application utilizes soft, analogous colors. These are not arbitrary responses; they're deeply rooted in our evolutionary history and cultural conditioning.

To understand how colors psychologically impact user behavior in digital, we first need to understand how colors are defined. In digital design, colors are precisely defined using the HSB model, which stands for Hue, Saturation, and Brightness. This model provides a more intuitive way for designers to think about and manipulate color compared to other systems like RGB (Red, Green, Blue). Here is a quick breakdown of each component:

Hue
This is the pure color itself, the essence that we typically name, such as red, blue, green, or yellow. On a color wheel, hue is represented as an angle ranging from 0 to 360 degrees. For example, 0° is red, 120° is green, and 240° is blue. Think of it as the specific wavelength of light that our eyes perceive as a particular color. In UX, selecting the base hues is often tied to brand identity and the overall feeling you want to convey.

Saturation
Saturation refers to the intensity or purity of the hue. It describes how vivid or dull the color appears. A fully saturated color is rich and vibrant, while a color with low saturation appears muted, grayish, or desaturated. Saturation is typically expressed as a percentage, from 0% (completely desaturated, appearing as a shade of gray) to 100% (fully saturated, the purest form of the hue). In UX, saturation levels are crucial for creating visual hierarchy and drawing attention to key elements. Highly saturated colors often indicate interactive elements or important information, while lower saturation can be used for backgrounds or less critical content.

Brightness
Brightness, sometimes also referred to as value or lightness, indicates how light or dark a color appears. It's the amount of white or black mixed into the hue. Brightness is also usually represented as a percentage, ranging from 0% (completely black, regardless of the hue or saturation) to 100% (fully bright). At 100% brightness and 0% saturation, you get white. In UX, adjusting brightness is essential for creating contrast and ensuring readability. Sufficient brightness contrast between text and background is a fundamental accessibility requirement. Furthermore, variations in brightness within a color palette can create visual depth and subtle distinctions between UI elements.

By understanding and manipulating these three color dimensions, digital designers have precise control over their color choices. This allows for the creation of harmonious and effective color palettes that not only align with brand guidelines but also strategically influence user behavior.
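Since design tools tend to express colors in HSB while the browser ultimately wants RGB, it can help to see the two models connected in code. Here is a minimal sketch of the standard HSB (a.k.a. HSV) to RGB conversion; the function name and sample values are illustrative only:

// Convert HSB/HSV (h in degrees 0-360, s and b as percentages 0-100)
// to an [r, g, b] triple in the 0-255 range.
function hsbToRgb(h, s, b) {
  s /= 100;
  b /= 100;
  const k = (n) => (n + h / 60) % 6;
  const f = (n) => b * (1 - s * Math.max(0, Math.min(k(n), 4 - k(n), 1)));
  return [f(5), f(3), f(1)].map((v) => Math.round(v * 255));
}

console.log(hsbToRgb(0, 100, 100)); // [255, 0, 0]: hue 0° at full saturation is pure red
console.log(hsbToRgb(0, 0, 100));   // [255, 255, 255]: 0% saturation at 100% brightness is white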
In digital contexts, purple is often used for brands aiming for a sophisticated or unique feel, particularly in areas like luxury goods, beauty, or spiritual and creative platforms.Black often signifies sophistication, power, elegance, and sometimes mystery. In digital design, black is frequently used for minimalist interfaces, luxury brands, and for creating strong contrast with lighter elements. The feeling it evokes heavily depends on the surrounding colors and overall design aesthetic.White is generally associated with purity, cleanliness, simplicity, and neutrality. It provides a sense of spaciousness and allows other colors to stand out. In digital design, white space is a crucial element, and white is often used as a primary background color to create a clean and uncluttered feel.Gray is often seen as neutral, practical, and sometimes somber or conservative. In digital interfaces, various shades of gray are essential for typography, borders, dividers, and creating visual hierarchy without being overly distracting.Evoking Emotions In Digital InterfacesImagine an elegant furniture application. The designers might choose a primary palette of soft, desaturated blues and greens, accented with gentle earth tones. The muted blues could subtly induce a feeling of calmness and tranquility, aligning with the apps core purpose of relaxation. The soft greens might evoke a sense of nature and well-being, further reinforcing the theme of peace and mental clarity. The earthy browns could ground the visual experience, creating a feeling of stability and connection to the natural world.Now, consider a platform for extreme investment enthusiasts. The color palette might be dominated by high-energy oranges and reds, contrasted with stark blacks and sharp whites. The vibrant oranges could evoke feelings of excitement and adventure, while the bold red might amplify the sense of adrenaline and intensity. The black and white could provide a sense of dynamism and modernity, reflecting the fast-paced nature of the activities.By consciously understanding and applying these color associations, digital designers can move beyond purely aesthetic choices and craft experiences that resonate deeply with users on an emotional level, leading to more engaging, intuitive, and successful digital products.Color As A Usability ToolChoosing the right colors isnt about adhering to fleeting trends; its about ensuring that our mobile applications and websites are usable by the widest possible audience, including individuals with visual impairments. Improper color choices can create significant barriers, rendering content illegible, interactive elements indistinguishable, and ultimately excluding a substantial portion of potential users.Prioritizing color with accessibility in mind is not just a matter of ethical design; its a fundamental aspect of creating inclusive and user-friendly digital experiences that benefit everyone.For individuals with low vision, sufficient color contrast between text and background is paramount for readability. Imagine trying to decipher light gray text on a white background a common design trend that severely hinders those with even mild visual impairments. Adhering to Web Content Accessibility Guidelines (WCAG) contrast ratios ensures that text remains legible and understandable.Furthermore, color blindness, affecting a significant percentage of the population, necessitates the use of redundant visual cues. 
Relying solely on color to convey information, such as indicating errors in red without an accompanying text label, excludes colorblind users. By pairing color with text, icons, or patterns, we ensure that critical information is conveyed through multiple sensory channels, making it accessible to all. Thoughtful color selection, therefore, is not an optional add-on but an integral component of designing digital products that are truly usable and equitable.

Choosing Your Palette

As designers, we need a strategic approach to choosing color palettes, considering various factors to build a scalable and impactful color system. Here's a breakdown of the steps and considerations involved:

1. Deep Dive Into Brand Identity And Main Goals

The journey begins with a thorough understanding of the brand itself. What are its core values? What personality does it project? Is it playful, sophisticated, innovative? Analyze existing brand guidelines (if any), target audience demographics and psychographics, and the overall goals of the digital product. The color palette should be a visual extension of this identity, reinforcing brand recognition and resonating with the intended users. For instance, a financial app aiming for trustworthiness might lean towards blues and greens, while a creative platform could explore more vibrant and unconventional hues.

2. Understand Color Psychology And Cultural Associations

As discussed previously, colors carry inherent psychological and cultural baggage. While these associations are not absolute, they provide a valuable framework for initial exploration. Consider the emotions you want to evoke and research how your target audience might perceive different colors, keeping in mind cultural nuances that can significantly alter interpretations. This step helps you make informed decisions that align with the desired user experience and brand perception.

3. Defining The Core Colors

Start by identifying the primary color: the dominant hue that represents your brand's essence. This will likely be derived from the brand logo or existing visual identity. Next, establish a secondary color or two that complement the primary color and provide visual interest and hierarchy. These secondary colors should work harmoniously with the primary, offering flexibility for different UI elements and interactions.

4. Build A Functional Color System

A consistent and scalable color palette goes beyond just a few base colors. It involves creating a system of variations for practical application within the digital interface. This typically includes tints and shades, accent colors, and neutral colors.

5. Do Not Forget About Usability And Accessibility

Ensure sufficient color contrast between text and background, as well as between interactive elements and their surroundings, to meet WCAG guidelines. Tools are readily available to check color contrast ratios; a sketch of the formula they implement follows below.

Test your palette using color blindness simulators to see how it will be perceived by individuals with different types of color vision deficiencies. This helps identify potential issues where information might be lost due to color alone.

Visual hierarchy is also important to guide the user's eye and establish a clear visual story. Important elements should be visually distinct.
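For the curious, the math behind those contrast checkers is short enough to sketch here. The following Python is a minimal sketch of the WCAG 2.x relative-luminance and contrast-ratio definitions; the example colors are illustrative, and WCAG AA requires at least 4.5:1 for normal-size text.

    # A sketch of the WCAG 2.x contrast-ratio formula that contrast-checking
    # tools implement. WCAG AA requires at least 4.5:1 for normal text.
    def channel(c: int) -> float:
        # Linearize one sRGB channel (0-255) per the WCAG definition.
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def luminance(rgb: tuple[int, int, int]) -> float:
        r, g, b = (channel(v) for v in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
        l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
        return (l1 + 0.05) / (l2 + 0.05)

    # Light gray text on white: the readability trap described above.
    print(round(contrast_ratio((170, 170, 170), (255, 255, 255)), 2))  # ~2.32, fails AA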
6. Testing And Iteration

Once you have a preliminary color palette, it's crucial to test it within the context of your digital product. Create mockups and prototypes to see how the colors work together in the actual interface. Gather feedback from stakeholders and, ideally, conduct user testing to identify any usability or aesthetic issues. Be prepared to iterate and refine your palette based on these insights.

A well-defined color palette for the digital medium should be:

- Consistent,
- Scalable,
- Accessible,
- Brand-aligned,
- Emotionally resonant, and
- Functionally effective.

By following these steps and keeping these considerations in mind, designers can craft color palettes that are not just visually appealing but also strategically powerful tools for creating effective and accessible digital experiences.

Color Consistency: Building Trust And Recognition Through A Harmonized Digital Presence

Consistency plays an important role in the whole color ecosystem. By maintaining a unified color scheme for interactive elements, navigation cues, and informational displays, designers create a seamless and predictable user journey, building trust through visual stability.

Color consistency also contributes directly to brand recognition in the increasingly crowded digital landscape. Just as a logo or typeface becomes instantly identifiable, a consistent color palette acts as a powerful visual signature. When users repeatedly encounter the same set of colors associated with a particular brand, it strengthens their recall and fosters a stronger brand association. This visual consistency extends beyond the core interface to marketing materials, social media presence, and all digital touchpoints, creating a cohesive and memorable brand experience. By strategically and consistently applying a solid color palette, digital products can cultivate stronger brand recognition, build user trust, and enhance user loyalty.
  • Beyond The Hype: What AI Can Really Do For Product Design
    smashingmagazine.com
These days, it's easy to find curated lists of AI tools for designers, galleries of generated illustrations, and countless prompt libraries. What's much harder to find is a clear view of how AI is actually integrated into the everyday workflow of a product designer: not for experimentation, but for real, meaningful outcomes.

I've gone through that journey myself, testing AI across every major stage of the design process, from ideation and prototyping to visual design and user research. Along the way, I've built a simple, repeatable workflow that significantly boosts my productivity. In this article, I'll share what's already working and break down some of the most common objections I've encountered, many of which I've faced personally.

Stage 1: Idea Generation Without The Clichés

Pushback: "Whenever I ask AI to suggest ideas, I just get a list of clichés. It can't produce the kind of creative thinking expected from a product designer."

That's a fair point. AI doesn't know the specifics of your product, the full context of your task, or many other critical nuances. The most obvious fix is to feed it all the documentation you have. But that's a common mistake, as it often leads to even worse results: the context gets flooded with irrelevant information, and the AI's answers become vague and unfocused.

Current-gen models can technically process thousands of words, but the longer the input, the higher the risk of missing something important, especially content buried in the middle. This is known as the "lost in the middle" problem. To get meaningful results, AI doesn't just need more information; it needs the right information, delivered in the right way. That's where the RAG approach comes in.

How RAG Works

Think of RAG as a smart assistant working with your personal library of documents. You upload your files, and the assistant reads each one, creating a short summary: a set of bookmarks (semantic tags) that capture the key topics, terms, scenarios, and concepts. These summaries are stored in a kind of card catalog called a vector database. When you ask a question, the assistant doesn't reread every document from cover to cover. Instead, it compares your query to the bookmarks, retrieves only the most relevant excerpts (chunks), and sends those to the language model to generate a final answer.

How Is This Different From Just Dumping A Doc Into The Chat?

Let's break it down:

Typical chat interaction
It's like asking your assistant to read a 100-page book from start to finish every time you have a question. Technically, all the information is in front of them, but it's easy to miss something, especially if it's in the middle. This is exactly what the "lost in the middle" issue refers to.

RAG approach
You ask your smart assistant a question, and it retrieves only the relevant pages (chunks) from different documents. It's faster and more accurate, but it introduces a few new risks:

- Ambiguous question: You ask, "How can we make the project safer?" and the assistant brings you documents about cybersecurity, not finance.
- Mixed chunks: A single chunk might contain a mix of marketing, design, and engineering notes. That blurs the meaning, so the assistant can't tell what the core topic is.
- Semantic gap: You ask, "How can we speed up the app?" but the document says, "Optimize API response time." For a human, that's obviously related. For a machine, not always.

These aren't reasons to avoid RAG or AI altogether. Most of them can be avoided with better preparation of your knowledge base and more precise prompts. The sketch below shows the retrieval step in miniature.
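Here is a minimal sketch of that retrieval step in Python. The embed() function is a toy stand-in for a real embedding model, and the sample chunks are invented; what it demonstrates is that only the top-ranked chunks, never the whole library, are sent on to the language model.

    # A minimal sketch of the RAG retrieval step. embed() is a toy stand-in
    # for a real embedding model; it exists only so the example runs on its own.
    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        # Toy "embedding": a bag-of-words vector. Real setups use dense vectors
        # from an embedding model, but cosine similarity works the same way.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # The "vector database": each chunk is stored alongside its embedding.
    chunks = [
        "Group gift contributions let several users fund one shared goal.",
        "Personal savings goals track an individual user's progress.",
        "API response time was optimized in the last release.",
    ]
    index = [(chunk, embed(chunk)) for chunk in chunks]

    def retrieve(query: str, k: int = 2) -> list[str]:
        # Rank every stored chunk against the query and return only the
        # k most relevant ones, not the whole library.
        q = embed(query)
        ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
        return [chunk for chunk, _ in ranked[:k]]

    # Only the retrieved chunks (plus the question) go to the language model.
    print(retrieve("How do group gifts differ from personal goals?"))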
So, where do you start?

Start With Three Short, Focused Documents

These three short documents will give your AI assistant just enough context to be genuinely helpful:

- Product Overview & Scenarios: A brief summary of what your product does and the core user scenarios.
- Target Audience: Your main user segments and their key needs or goals.
- Research & Experiments: Key insights from interviews, surveys, user testing, or product analytics.

Each document should focus on a single topic and ideally stay within 300-500 words. This makes it easier to search and helps ensure that each retrieved chunk is semantically clean and highly relevant.

Language Matters

In practice, RAG works best when both the query and the knowledge base are in English. I ran a small experiment to test this assumption, trying a few different combinations:

- English prompt + English documents: Consistently accurate and relevant results.
- Non-English prompt + English documents: Quality dropped sharply. The AI struggled to match the query with the right content.
- Non-English prompt + non-English documents: The weakest performance. Even though large language models technically support multiple languages, their internal semantic maps are mostly trained in English. Vector search in other languages tends to be far less reliable.

Takeaway: If you want your AI assistant to deliver precise, meaningful responses, do your RAG work entirely in English, both the data and the queries. This advice applies specifically to RAG setups (a challenge also highlighted in this 2024 study on multilingual retrieval). For regular chat interactions, you're free to use other languages.

From Outsider To Teammate: Giving AI The Context It Needs

Once your AI assistant has proper context, it stops acting like an outsider and starts behaving more like someone who truly understands your product. With well-structured input, it can help you spot blind spots in your thinking, challenge assumptions, and strengthen your ideas the way a mid-level or senior designer would. Here's an example of a prompt that works well for me:

Your task is to perform a comparative analysis of two features: "Group gift contributions" (described in group_goals.txt) and "Personal savings goals" (described in personal_goals.txt). The goal is to identify potential conflicts in logic, architecture, and user scenarios and suggest visual and conceptual ways to clearly separate these two features in the UI so users can easily understand the difference during actual use.

Please include:

- Possible overlaps in user goals, actions, or scenarios;
- Potential confusion if both features are launched at the same time;
- Any architectural or business-level conflicts (e.g., roles, notifications, access rights, financial logic);
- Suggestions for visual and conceptual separation: naming, color coding, separate sections, or other UI/UX techniques;
- Onboarding screens or explanatory elements that might help users understand both features.

If helpful, include a comparison table with key parameters like purpose, initiator, audience, contribution method, timing, access rights, and so on.

AI Needs Context, Not Just Prompts

If you want AI to go beyond surface-level suggestions and become a real design partner, it needs the right context. Not just more information, but better, more structured information. Building a usable knowledge base isn't difficult. And you don't need a full-blown RAG system to get started; the sketch below shows the same idea in a plain chat setting.
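As a rough sketch of that chat-only approach, the snippet below assembles the three focused documents and one clear question into a single prompt. The file names, the 500-word guideline check, and send_to_model() are illustrative assumptions, not part of any particular tool.

    # A sketch of the same principle without any RAG machinery: three short,
    # single-topic documents plus one focused question, assembled into one
    # prompt. File names follow the advice above; send_to_model() stands in
    # for whatever chat API you use.
    from pathlib import Path

    DOCS = ["product_overview.txt", "target_audience.txt", "research_insights.txt"]
    WORD_LIMIT = 500  # keep each document short and semantically "clean"

    def build_prompt(question: str) -> str:
        sections = []
        for name in DOCS:
            text = Path(name).read_text(encoding="utf-8")
            words = len(text.split())
            if words > WORD_LIMIT:
                print(f"warning: {name} has {words} words; consider splitting it")
            sections.append(f"## {name}\n{text}")
        context = "\n\n".join(sections)
        return f"Use only the context below to answer.\n\n{context}\n\nQuestion: {question}"

    prompt = build_prompt("Which user segment struggles most with onboarding?")
    # send_to_model(prompt)  # hypothetical call to your chat model of choice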
Many of these principles work even in a regular chat: well-organized content and a clear question can dramatically improve how helpful and relevant the AI's responses are. That's your first step in turning AI from a novelty into a practical tool in your product design workflow.

Stage 2: Prototyping And Visual Experiments

Pushback: "AI only generates obvious solutions and can't even build a proper user flow. It's faster to do it manually."

That's a fair concern. AI still performs poorly when it comes to building complete, usable screen flows. But for individual elements, especially when exploring new interaction patterns or visual ideas, it can be surprisingly effective.

For example, I needed to prototype a gamified element for a limited-time promotion. The idea was to give users a lottery ticket they can flip to reveal a prize. I couldn't recreate the 3D animation I had in mind in Figma, either manually or using any available plugins. So I described the idea to Claude 4 in Figma Make, and within a few minutes, without writing a single line of code, I had exactly what I needed.

At the prototyping stage, AI can be a strong creative partner in two areas:

- UI element ideation: It can generate dozens of interactive patterns, including ones you might not think of yourself.
- Micro-animation generation: It can quickly produce polished animations that make a concept feel real, which is great for stakeholder presentations or as a handoff reference for engineers.

AI can also be applied to multi-screen prototypes, but it's not as simple as dropping in a set of mockups and getting a fully usable flow. The bigger and more complex the project, the more fine-tuning and manual fixes are required. Where AI already works brilliantly is in focused tasks: individual screens, elements, or animations, where it can kick off the thinking process and save hours of trial and error.

A quick UI prototype of a gamified promo banner created with Claude 4 in Figma Make. No code or plugins needed.

Here's another valuable way to use AI in design: as a stress-testing tool. Back in 2023, Google Research introduced PromptInfuser, an internal Figma plugin that allowed designers to attach prompts directly to UI elements and simulate semi-functional interactions within real mockups. Their goal wasn't to generate new UI, but to check how well AI could operate inside existing layouts: placing content into specific containers, handling edge-case inputs, and exposing logic gaps early. The results were striking: designers using PromptInfuser were up to 40% more effective at catching UI issues and aligning the interface with real-world input, a clear gain in design accuracy, not just speed.

That closely reflects my experience with Claude 4 and Figma Make: when AI operates within a real interface structure, rather than starting from a blank canvas, it becomes a much more reliable partner. It helps test your ideas, not just generate them.

Stage 3: Finalizing The Interface And Visual Style

Pushback: "AI can't match our visual style. It's easier to just do it by hand."

This is one of the most common frustrations when using AI in design. Even if you upload your color palette, fonts, and components, the results often don't feel like they belong in your product. They tend to be either overly decorative or overly simplified. And this is a real limitation. In my experience, today's models still struggle to reliably apply a design system, even if you provide a component structure or JSON files with your styles.
I tried several approaches:

- Direct integration with a component library. I used Figma Make (powered by Claude) and connected our library. This was the least effective method: although the AI attempted to use components, the layouts were often broken, and the visuals were overly conservative. Other designers have run into similar issues, noting that library support in Figma Make is still limited and often unstable.
- Uploading styles as JSON. Instead of a full component library, I tried uploading only the exported styles (colors, fonts) in a JSON format. The results improved: layouts looked more modern, but the AI still made mistakes in how styles were applied.
- Two-step approach: structure first, style second. What worked best was separating the process. First, I asked the AI to generate a layout and composition without any styling. Once I had a solid structure, I followed up with a request to apply the correct styles from the same JSON file. This produced the most usable result, though still far from pixel-perfect. (A sketch of this two-step flow appears at the end of this section.)

So yes, AI still can't help you finalize your UI. It doesn't replace hand-crafted design work. But it's very useful in other ways:

- Quickly creating a visual concept for discussion.
- Generating "what if" alternatives to existing mockups.
- Exploring how your interface might look in a different style or direction.
- Acting as a second pair of eyes by giving feedback, pointing out inconsistencies or overlooked issues you might miss when tired or too deep in the work.

AI won't save you five hours of high-fidelity design time, since you'll probably spend that long fixing its output. But as a visual sparring partner, it's already strong. If you treat it like a source of alternatives and fresh perspectives, it becomes a valuable creative collaborator.
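As promised above, here is a rough sketch of the two-step flow. ask_model() is a hypothetical stand-in for whatever chat-capable model you use, and the style tokens are invented rather than a real design-system export; the point is the separation of the structure request from the styling request.

    # A sketch of the two-step approach: structure first, style second.
    # ask_model() is a hypothetical helper; the style keys are illustrative.
    import json

    styles = {
        "colors": {"primary": "#1A73E8", "surface": "#FFFFFF"},
        "fonts": {"body": "Inter", "heading": "Inter SemiBold"},
    }

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in: replace with a real call to your model's API.
        return f"[model response to: {prompt[:60]}...]"

    # Step 1: layout and composition only, deliberately unstyled.
    layout = ask_model(
        "Generate the layout for a savings-goal creation screen: "
        "hierarchy, containers, and components only. No colors or fonts."
    )

    # Step 2: apply the exported styles to the structure from step 1.
    styled = ask_model(
        "Apply these styles to the layout below. Use only the tokens provided.\n"
        f"Styles: {json.dumps(styles)}\n\nLayout:\n{layout}"
    )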
Stage 4: Product Feedback And Analytics: AI As A Thinking Exosuit

Product designers have come a long way. We used to create interfaces in Photoshop based on predefined specs. Then we delved deeper into UX by mapping user flows, conducting interviews, and understanding user behavior. Now, with AI, we gain access to yet another level: data analysis, which used to be the exclusive domain of product managers and analysts.

As Vitaly Friedman rightly pointed out in one of his columns, trying to replace real UX interviews with AI can lead to false conclusions, as models tend to generate an average experience, not a real one. The strength of AI isn't in inventing data but in processing it at scale.

Let me give a real example. We launched an exit survey for users who were leaving our service. Within a week, we collected over 30,000 responses across seven languages. Simply counting the percentages for each of the five predefined reasons wasn't enough. I wanted to know:

- Are there specific times of day when users churn more?
- Do the reasons differ by region?
- Is there a correlation between user exits and system load?

The real challenge was figuring out what cuts and angles were even worth exploring. The entire technical process, from analysis to visualizations, was done for me by Gemini, working inside Google Sheets. This task took me about two hours in total. Without AI, not only would it have taken much longer, but I probably wouldn't have been able to reach that level of insight on my own at all. AI enables near real-time work with large data sets. But most importantly, it frees up your time and energy for what's truly valuable: asking the right questions.

A few practical notes: working with large data sets is still challenging for models without strong reasoning capabilities. In my experiments, I used Gemini embedded in Google Sheets and cross-checked the results using ChatGPT o3. Other models, including the standalone Gemini 2.5 Pro, often produced incorrect outputs or simply refused to complete the task.

AI Is Not An Autopilot But A Co-Pilot

AI in design is only as good as the questions you ask it. It doesn't do the work for you. It doesn't replace your thinking. But it helps you move faster, explore more options, validate ideas, and focus on the hard parts instead of burning time on repetitive ones. Sometimes it's still faster to design things by hand. Sometimes it makes more sense to delegate to a junior designer. But increasingly, AI is becoming the one who suggests, sharpens, and accelerates. Don't wait to build the perfect AI workflow. Start small. And that might be the first real step in turning AI from a curiosity into a trusted tool in your product design process.

Let's Summarize

- If you just paste a full doc into chat, the model often misses important points, especially things buried in the middle. That's the "lost in the middle" problem.
- The RAG approach helps by pulling only the most relevant pieces from your documents, so responses are faster, more accurate, and grounded in real context.
- Clear, focused prompts work better. Narrow the scope, define the output, and use familiar terms to help the model stay on track.
- A well-structured knowledge base makes a big difference. Organizing your content into short, topic-specific docs helps reduce noise and keep answers sharp.
- Use English for both your prompts and your documents. Even multilingual models are most reliable when working in English, especially for retrieval.
- Most importantly: treat AI as a creative partner. It won't replace your skills, but it can spark ideas, catch issues, and speed up the tedious parts.

Further Reading

- "AI-assisted Design Workflows: How UX Teams Move Faster Without Sacrificing Quality", Cindy Brummer. This piece is a perfect prequel to my article. It explains how to start integrating AI into your design process, how to structure your workflow, and which tasks AI can reasonably take on before you dive into RAG or idea generation.
- "8 essential tips for using Figma Make", Alexia Danton. While this article focuses on Figma Make, the recommendations are broadly applicable. It offers practical advice that will make your work with AI smoother, especially if you're experimenting with visual tools and structured prompting.
- "What Is Retrieval-Augmented Generation aka RAG", Rick Merritt. If you want to go deeper into how RAG actually works, this is a great starting point. It breaks down key concepts like vector search and retrieval in plain terms and explains why these methods often outperform long prompts alone.
  • The Double-Edged Sustainability Sword Of AI In Web Design
    smashingmagazine.com
Artificial intelligence is increasingly automating large parts of design and development workflows, tasks once reserved for skilled designers and developers. This streamlining can dramatically speed up project delivery. Even back in 2023, AI-assisted developers were found to complete tasks twice as fast as those without, and AI tools have advanced massively since then. Yet this surge in capability raises a pressing dilemma:

Does the environmental toll of powering AI infrastructure eclipse the efficiency gains?

We can create websites faster that are optimized and more efficient to run, but the global consumption of energy by AI continues to climb. As awareness grows around the digital sector's hidden ecological footprint, web designers and businesses must grapple with this double-edged sword, weighing the grid-level impacts of AI against the cleaner, leaner code it can produce.

The Good: How AI Can Enhance Sustainability In Web Design

There's no disputing that AI-driven automation has introduced higher speeds and efficiencies to many of the mundane aspects of web design. Tools that automatically generate responsive layouts, optimize image sizes, and refactor bloated scripts free designers to focus on the creative side of design and development. By some interpretations, these accelerated project timelines represent a reduction in the required energy for development: speedier production should mean less energy used.

Beyond automation, AI excels at identifying inefficiencies in code and design, as it can take a much more holistic view and assess things as a whole. Advanced algorithms can parse through stylesheets and JavaScript files to detect unused selectors or redundant logic, producing leaner, faster-loading pages. For example, AI-driven caching can increase cache hit rates by 15% by improving data availability and reducing latency. This means more user requests are served directly from the cache, reducing the need for data retrieval from the main server, which reduces energy expenditure.

AI tools can utilize next-generation image formats like AVIF or WebP, formats well suited to automated pipelines, and selectively compress assets based on content sensitivity (see the short sketch at the end of this section). This slashes media payloads without perceptible quality loss, as the AI can use Generative Adversarial Networks (GANs) that learn compact representations of data.

AI's impact also brings sustainability benefits via user experience (UX). AI-driven personalization engines can dynamically serve only the content a visitor needs, eliminating superfluous scripts or images that they don't care about. This not only enhances perceived performance but reduces the number of server requests and data transferred, cutting downstream energy use in network infrastructure. With the right prompts, generative AI can also act as an accessibility tool, checking sites against inclusive design standards and reducing the need for redesigns that are costly in terms of time, money, and energy.

So, taken in isolation, AI can and already does act as an important tool to make web design more efficient and sustainable. But do these gains outweigh the cost of the resources required in building and maintaining these tools?
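As a point of reference for the kind of asset optimization described above, here is a minimal sketch that re-encodes PNG assets as WebP using the Pillow library. An AI-assisted pipeline would pick format and quality per asset; the fixed quality setting and the assets/images path here are illustrative assumptions.

    # A sketch of the asset optimization described above: re-encoding images
    # as WebP with Pillow. The quality value and source path are illustrative.
    from pathlib import Path
    from PIL import Image

    def convert_to_webp(src_dir: str, quality: int = 80) -> None:
        for src in Path(src_dir).glob("*.png"):
            out = src.with_suffix(".webp")
            with Image.open(src) as img:
                img.save(out, "WEBP", quality=quality)
            saved = src.stat().st_size - out.stat().st_size
            print(f"{src.name}: saved {saved / 1024:.1f} KiB")

    convert_to_webp("assets/images")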
The Bad: The Environmental Footprint Of AI Infrastructure

Yet the carbon savings engineered at the page level must be balanced against the prodigious resource demands of AI infrastructure. Large-scale AI hinges on data centers that already account for roughly 2% of global electricity consumption, a figure projected to swell as AI workloads grow. The International Energy Agency warns that electricity consumption from data centers could more than double by 2030 due to the increasing demand for AI tools, reaching nearly the current consumption of Japan. Training state-of-the-art language models generates carbon emissions on par with hundreds of transatlantic flights, and inference workloads, serving billions of requests daily, can rival or exceed training emissions over a model's lifetime.

Image generation tasks represent an even steeper energy hill to climb. Producing a single AI-generated image can consume energy equivalent to charging a smartphone. As generative design and AI-based prototyping become more common in web development, the cumulative energy footprint of these operations can quickly undermine the carbon savings achieved through optimized code.

Water consumption forms another hidden cost. Data centers rely heavily on evaporative cooling systems that can draw between one and five million gallons of water per day, depending on size and location, placing stress on local supplies, especially in drought-prone regions. Studies estimate a single ChatGPT query may consume up to half a liter of water when accounting for direct cooling requirements, with broader AI use potentially demanding billions of liters annually by 2027.

Resource depletion and electronic waste are further concerns. High-performance components underpinning AI services, like GPUs, can have very short lifespans due to both wear and tear and being superseded by more powerful hardware. AI alone could add between 1.2 and 5 million metric tons of e-waste by 2030, due to the continuous demand for new hardware, amplifying one of the world's fastest-growing waste streams. Mining for the critical minerals in these devices often proceeds under unsustainable conditions due to a lack of regulations in many of the environments where rare metals are sourced, and the resulting e-waste, rich in toxic metals like lead and mercury, poses another form of environmental damage if not properly recycled.

Compounding these physical impacts is a lack of transparency in corporate reporting. Energy and water consumption figures for AI workloads are often aggregated under general data center operations, which obscures the specific toll of AI training and inference among other operations. And the energy consumption reporting of the data centers themselves has been found to be obfuscated: reports estimate that the emissions of data centers are up to 662% higher than initially reported, due to misaligned metrics and creative interpretations of what constitutes an emission. This makes it hard to grasp the true scale of AI's environmental footprint, leaving designers and decision-makers unable to make informed, environmentally conscious choices.

Do The Gains From AI Outweigh The Costs?

Some industry advocates argue that AI's energy consumption isn't as catastrophic as headlines suggest. Some groups have challenged alarmist projections, claiming that AI's current contribution of just 0.02% of global energy consumption isn't a cause for concern. Proponents also highlight AI's supposed environmental benefits: there are claims that AI could reduce economy-wide greenhouse gas emissions by 0.1% to 1.1% through efficiency improvements, and Google reported that five AI-powered solutions removed 26 million metric tons of emissions in 2024.
The optimistic view holds that AI's capacity to optimize everything from energy grids to transportation systems will more than compensate for its data center demands. However, recent scientific analysis suggests these arguments underestimate AI's true impact. MIT found that data centers already consume 4.4% of all US electricity, with projections showing AI alone could use as much power as 22% of US households by 2028. Research indicates AI-specific electricity use could triple from current levels by 2028. Moreover, Harvard research revealed that data centers use electricity with 48% higher carbon intensity than the US average.

Advice For Sustainable AI Use In Web Design

Despite the environmental costs, AI's use in business, particularly web design, isn't going away anytime soon, with 70% of large businesses looking to increase their AI investments to increase efficiencies. AI's immense impact on productivity means those not using it are likely to be left behind. This means that environmentally conscious businesses and designers must find the right balance between AI's environmental cost and the efficiency gains it brings.

Make Sure You Have A Strong Foundation Of Sustainable Web Design Principles

Before you plug in any AI magic, start by making sure the bones of your site are sustainable. Lean web fundamentals, like system fonts instead of hefty custom files, minimal JavaScript, and judicious image use, can slash a page's carbon footprint by stripping out redundancies that increase energy consumption. For instance, the global average web page emits about 0.8g of CO2 per view, whereas sustainably crafted sites can see a roughly 70% reduction (a quick back-of-envelope comparison follows below). Once that lean baseline is in place, AI-driven optimizations (image format selection, code pruning, responsive layout generation) aren't adding to bloat but building on efficiency, ensuring every joule spent on AI actually yields downstream energy savings in delivery and user experience.

Choosing The Right Tools And Vendors

To make sustainable tool choices, transparency and awareness are the first steps. Many AI vendors have pledged to work towards sustainability, but independent audits are necessary, along with clear, cohesive metrics. Standardized reporting on energy and water footprints will help us understand the true cost of AI tools, allowing for informed choices. Look for providers that publish detailed environmental reports and hold third-party renewable energy certifications. Many major providers now offer PUE (Power Usage Effectiveness) metrics alongside renewable energy matching to demonstrate real-world commitments to clean power.

When integrating AI into your build pipeline, choosing lightweight, specialized models for tasks like image compression or code linting can be more sustainable than full-scale generative engines. Task-specific tools often use considerably less energy than general AI models, as general models must first work out what task you want them to complete. There are a variety of guides and collectives out there that can help you choose the green web hosts best suited to your business. When choosing AI-model vendors, look at options that prioritize efficiency by design: smaller, pruned models and edge-compute deployments can cut energy use by up to 50% compared to monolithic cloud-only models. They're trained for specific tasks, so they don't have to expend energy computing what the task is and how to go about it.
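A quick back-of-envelope comparison, using the per-view figures cited above and an assumed traffic level, shows what that roughly 70% reduction means in absolute terms:

    # Back-of-envelope check using the figures cited above: 0.8 g of CO2 per
    # view for an average page versus a ~70% reduction for a sustainably built
    # one. The traffic figure is an assumption for illustration only.
    views_per_month = 100_000            # assumed traffic
    average_g_per_view = 0.8             # global average cited above
    sustainable_g_per_view = 0.8 * 0.3   # ~70% reduction

    for label, g in [("average", average_g_per_view), ("sustainable", sustainable_g_per_view)]:
        yearly_kg = g * views_per_month * 12 / 1000
        print(f"{label}: {yearly_kg:.0f} kg CO2 per year")
    # average: 960 kg, sustainable: 288 kg - the gap AI optimizations build on.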
Using AI Tools Sustainably

Once you've chosen conscientious vendors, optimize how you actually use AI. You can take steps like batching non-urgent inference tasks to reduce idle GPU time, an approach shown to lower energy consumption overall compared to ad-hoc requests, as the GPU only has to run when a batch is ready rather than constantly.

Smarter prompts can also make AI usage slightly more sustainable. Sam Altman of OpenAI revealed early in 2025 that people's propensity for saying "please" and "thank you" to LLMs is costing millions of dollars and wasting energy, as the generative AI has to process extra phrases that aren't relevant to its task. Make sure your prompts are direct and to the point, and deliver the context required to complete the task, to reduce the need to reprompt.

Additional Strategies To Balance AI's Environmental Cost

On top of being responsible with your AI tool choice and usage, there are other steps you can take to offset the carbon cost of AI usage while enjoying the efficiency benefits it brings. Organizations can reduce their own emissions and use carbon offsetting to shrink their carbon footprint as much as possible. Combined with the apparent sustainability benefits of AI use, this approach can help mitigate the harmful impacts of energy-hungry AI.

You can ensure that you're using green server hosting (servers run on sustainable energy) for your own site and cloud needs beyond AI, and refine your content delivery network (CDN) so your sites and apps serve compressed, optimized assets from edge locations, cutting the distance data must travel and the associated energy use.

Organizations and individuals, particularly those with thought-leadership status, can advocate for transparent sustainability specifications. This involves both lobbying politicians and regulatory bodies to introduce and enforce sustainability standards and keeping the public aware of the environmental costs of AI use. It's only through collective action that we're likely to see strict enforcement of both sustainable AI data centers and the standardization of emissions reporting.

Regardless, it remains a tricky path to walk, along the double-edged sword of AI's use in web design. Use AI too much, and you're contributing to its massive carbon footprint. Use it too little, and you're likely to be left behind by rivals that are able to work more efficiently and deliver projects much faster. The best environmentally conscious designers and organizations can currently do is navigate it as well as they can and stay informed on best practices.

Conclusion

We can't dispute that AI use in web design delivers on its promise of agility, personalization, and resource savings at the page level. Yet without a holistic view that accounts for the environmental demands of AI infrastructure, these gains risk being overshadowed by an expanding energy and water footprint. Achieving the balance between enjoying AI's efficiency gains and managing its carbon footprint requires transparency, targeted deployment, human oversight, and a steadfast commitment to core sustainable web practices.
  • A Week In The Life Of An AI-Augmented Designer
    smashingmagazine.com
Artificial Intelligence isn't new, but in November 2022, something changed. The launch of ChatGPT brought AI out of the background and into everyday life. Suddenly, interacting with a machine didn't feel technical; it felt conversational. Just this March, ChatGPT overtook Instagram and TikTok as the most downloaded app in the world. That level of adoption shows that millions of everyday users, not just developers or early adopters, are comfortable using AI in casual, conversational ways. People are using AI not just to get answers, but to think, create, plan, and even to help with mental health and loneliness.

In the past two and a half years, people have moved through the Kübler-Ross Change Curve, only instead of grief, it's AI-induced uncertainty. UX designers, like Kate (who you'll meet shortly), have experienced something like this:

- Denial: "AI can't design like a human; it won't affect my workflow."
- Anger: "AI will ruin creativity. It's a threat to our craft."
- Bargaining: "Okay, maybe just for the boring tasks."
- Depression: "I can't keep up. What's the future of my skills?"
- Acceptance: "Alright, AI can free me up for more strategic, human work."

As designers move into experimentation, they're not asking, "Can I use AI?" but "How might I use it well?". Using AI isn't about chasing the latest shiny object but about learning how to stay human in a world of machines, and using AI not as a shortcut, but as a creative collaborator. It isn't about finding, bookmarking, downloading, or hoarding prompts, but experimenting and writing your own.

To bring this to life, we'll follow Kate, a mid-level designer at a FinTech company, navigating her first AI-augmented design sprint. You'll see her ups and downs as she experiments with AI, tries to balance human-centered skills with AI tools, relies on intuition over automation where it matters, and reflects critically on the role of AI at each stage of the sprint. The next two planned articles in this series will explore how to design prompts (Part 2) and guide you through building your own AI assistant (aka CustomGPT; Part 3). Along the way, we'll spotlight the designerly skills AI can't replicate, like curiosity, empathy, critical thinking, and experimentation, that will set you apart in a world where automation is easy, but people and human-centered design matter even more.

Note: This article was written by a human (with feelings, snacks, and deadlines). The prompts are real, the AI replies are straight from the source, and no language models were overworked, just politely bossed around. All em dashes are the handiwork of MS Word's autocorrect, not AI. Kate is fictional, but her week is stitched together from real tools, real prompts, real design activities, and real challenges designers everywhere are navigating right now. She will primarily be using ChatGPT, reflecting the popularity of this jack-of-all-trades AI as the place many start their AI journeys before branching out. If you stick around to the end, you'll find other AI tools that may be better suited for different design sprint activities. Due to the pace of AI advances, your outputs may vary (YOMV), possibly by the time you finish reading this sentence.

Cautionary Note: AI is helpful, but not always private or secure. Never share sensitive, confidential, or personal information with AI tools, even the helpful-sounding ones.
When in doubt, treat it like a coworker who remembers everything and may not be particularly good at keeping secrets.

Prologue: Meet Kate (As She Preps For The Upcoming Week)

Kate stared at the digital mountain of feedback on her screen: transcripts, app reviews, survey snippets, all waiting to be synthesized. Deadlines loomed. Her calendar was a nightmare. Meanwhile, LinkedIn was ablaze with AI hot takes and success stories. Everyone seemed to have found their AI groove except her. She wasn't anti-AI. She just hadn't figured out how it actually fit into her work. She had tried some of the prompts she saw online and played with some AI plugins and extensions, but it felt like an add-on, not a core part of her design workflow.

Her team was focusing on improving financial confidence for Gen Z users of their FinTech app, and Kate planned to use one of her favorite frameworks: the Design Sprint, a five-day, high-focus process that condenses months of product thinking into a single week. Each day tackles a distinct phase: Understand, Sketch, Decide, Prototype, and Test. All designed to move fast, make ideas tangible, and learn from real users before making big bets.

This time, she planned to experiment with a very lightweight version of the design sprint, almost solo-ish, since her PM and engineer were available for check-ins and decisions, but not present every day. That gave her both space and a constraint, and made it the perfect opportunity to explore how AI could augment each phase of the sprint. She decided to lean on her designerly behavior of experimentation and learning and integrate AI intentionally into her sprint prep, using it as both a creative partner and a thinking aid. Not with a rigid plan, but with a working hypothesis that AI would, at the very least, speed her up. She wouldn't just be designing and testing a prototype, but prototyping and testing what it means to design with AI, while still staying in the driver's seat. Follow Kate along her journey through her first AI-powered design sprint: from curiosity to friction and from skepticism to insight.

Monday: Understanding The Problem (aka Kate Vs. The Digital Pile Of Notes)

The first day of a design sprint is spent understanding the user, their problems, business priorities, and technical constraints, and narrowing down the problem to solve that week. This morning, Kate had transcripts from recent user interviews and customer feedback from the past year from app stores, surveys, and their customer support center. Typically, she would set aside a few days to process everything, coming out with glazed eyes and a few new insights. This time, she decided to use ChatGPT to summarize that data: "Read this customer feedback and tell me how we can improve financial literacy for Gen Z in our app."

ChatGPT's outputs were underwhelming, to say the least. Disappointed, she was about to give up when she remembered an infographic about good prompting that she had emailed herself. She updated her prompt based on those recommendations:

- Defined a role for the AI (product strategist);
- Provided context (user group and design sprint objectives); and
- Clearly outlined what she was looking for (financial-literacy-related patterns in pain points, blockers, confusion, lack of confidence; synthesis to identify top opportunity areas).

By the time she AeroPressed her next cup of coffee, ChatGPT had completed its analysis, highlighting blockers like jargon, lack of control, fear of making the wrong choice, and a need for blockchain wallets. Wait, what?
That last one felt off. Kate searched her sources and confirmed her hunch: AI hallucination! Despite the best of prompts, AI sometimes makes things up based on trendy concepts from its training data rather than actual data. Kate updated her prompt with constraints to make ChatGPT use only the data she had uploaded, and to cite examples from that data in its results. Eighteen seconds later, the updated results mentioned neither blockchain nor any other surprises. By lunch, Kate had the makings of a research summary that would otherwise have taken much, much longer, and a whole lot of caffeine.

That afternoon, Kate and her product partner plotted the pain points on the Gen Z app journey. The emotional mapping highlighted the most critical moment: the first step of a financial decision, like setting a savings goal or choosing an investment option. That was when fear, confusion, and lack of confidence held people back. AI synthesis combined with human insight helped them define the problem statement as: "How might we help Gen Z users confidently take their first financial action in our app, in a way that feels simple, safe, and puts them in control?"

Kate's Reflection

As she wrapped up for the day, Kate jotted down her reflections on her first day as an AI-augmented designer: "There's nothing like learning by doing. I've been reading about AI and tinkering around, but took the plunge today. Turns out AI is much more than a tool, but I wouldn't call it a co-pilot. Yet. I think it's like a sharp intern: it has a lot of information, is fast, eager to help, but it lacks context, needs supervision, and can surprise you. You have to give it clear instructions, double-check its work, and guide and supervise it. Oh, and maintain boundaries by not sharing anything I wouldn't want others to know.

Today was about listening to users, to patterns, to my own instincts. AI helped me sift through interviews fast, but I had to stay curious to catch what it missed. Some quotes felt too clean, like the edges had been smoothed over. That's where observation and empathy kicked in. I had to ask myself: what's underneath this summary?

Critical thinking was the designerly skill I had to exercise most today. It was tempting to take the AI's synthesis at face value, but I had to push back by re-reading transcripts, questioning assumptions, and making sure I wasn't outsourcing my judgment. Turns out, the thinking part still belongs to me."

Tuesday: Sketching (aka Kate And The Sea Of OK-ish Ideas)

Day 2 of a design sprint focuses on solutions, starting by remixing and improving existing ideas, followed by people sketching potential solutions. Optimistic yet cautious after her experience yesterday, Kate started thinking about ways she could use AI today while brewing her first cup of coffee. By cup two, she was wondering if AI could be a creative teammate. Or a creative intern, at least. She decided to ask AI for a list of relevant UX patterns across industries. Unlike yesterday's complex analysis, Kate was asking for inspiration, not insight, which meant she could use a simpler prompt: "Give me 10 unique examples of how top-rated apps reduce decision anxiety for first-time users from FinTech, health, learning, or ecommerce."

She received her results in a few seconds, but there were only six, not the ten she asked for. She expanded her prompt to ask for examples from a wider range of industries. While reviewing the AI examples, Kate realized that one had accessibility issues. To be fair, the results met Kate's ask, since she had not specified accessibility considerations. (The pattern Kate converged on over these two days, role, context, task, and explicit constraints, is sketched below.)
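For illustration, here is a minimal sketch of that prompt pattern (role, context, task, explicit constraints) expressed as a tiny Python helper. The wording is hypothetical; the structure is what Kate's infographic, and her blockchain scare, taught her to insist on.

    # A minimal sketch of the role/context/task/constraints prompt pattern.
    # The wording is illustrative; the four-part structure is the point.
    def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
        lines = [
            f"Act as {role}.",
            f"Context: {context}",
            f"Task: {task}",
            "Constraints:",
        ]
        lines += [f"- {c}" for c in constraints]
        return "\n".join(lines)

    print(build_prompt(
        role="a product strategist",
        context="Design sprint, day 1. Users: Gen Z customers of a FinTech app.",
        task="Identify recurring pain points and the top three opportunity areas.",
        constraints=[
            "Use only the uploaded research data.",
            "Cite a verbatim example from the data for every finding.",
            "Flag anything you cannot support with a citation.",
        ],
    ))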
She then went pre-AI and brainstormed examples with her product partner, coming up with a few unique local examples. Later that afternoon, Kate went full human during Crazy 8s, putting a marker to paper and sketching eight ideas in eight minutes to rapidly explore different directions. Wondering if AI could live up to its generative nature, she uploaded pictures of her top three sketches and prompted AI to act as a product design strategist experienced in Gen Z behavior, digital UX, and behavioral science, gave it context about the problem statement and stage in the design sprint, and explicitly asked AI to do the following:

- Analyze the three sketch concepts and identify core elements or features that resonated with the goal.
- Generate five new concept directions, each of which should:
  - Address the original design sprint challenge.
  - Reflect Gen Z design language, tone, and digital behaviors.
  - Introduce a unique twist, remix, or conceptual inversion of the ideas in the sketches.
- For each concept, provide:
  - Name (e.g., "Monopoly Mode", "Smart Start");
  - 1-2 sentence concept summary;
  - Key differentiator from the original sketches;
  - Design tone and/or behavioral psychology technique used.

The results included ideas that Kate and her product partner hadn't considered, including a progress bar that started at 20% (to build confidence) and a sports-like stock bracket for first-time investors. Not bad, thought Kate, as she cherry-picked elements and combined and built on these ideas in her next round of sketches. By the end of the day, they had a diverse set of sketched solutions, some original, some AI-augmented, but all exploring how to reduce fear, simplify choices, and build confidence for Gen Z users taking their first financial step. With five concept variations and a few rough storyboards, Kate was ready to start converging on day 3.

Kate's Reflection

"Today was creatively energizing yet a little overwhelming! I leaned hard on AI to act as a creative teammate. It delivered a few unexpected ideas and remixed my Crazy 8s into variations I never would've thought of!

It also reinforced the need to stay grounded in the human side of design. AI was fast, too fast sometimes. It spat out polished-sounding ideas that sounded right, but I had to slow down, observe carefully, and ask: Does this feel right for our users? Would a first-time user feel safe or intimidated here?

Critical thinking helped me separate what mattered from what didn't. Empathy pulled me back to what Gen Z users actually said and kept their voices in mind as I sketched. Curiosity and experimentation were my fuel. I kept tweaking prompts, remixing inputs, and seeing how far I could stretch a concept before it broke. Visual communication helped translate fuzzy AI ideas into something I could react to and, more importantly, test."

Wednesday: Deciding (aka Kate Tries To Get AI To Pick A Side)

Design sprint teams spend Day 3 critiquing each of their potential solutions to shortlist those that have the best chance of achieving their long-term goal. The winning scenes from the sketches are then woven into a prototype storyboard. Design sprint Wednesdays were Kate's least favorite day. After all the generative energy of Sketching Tuesday, today she would have to decide on one clear solution to prototype and test. She was unsure if AI would be much help with judging tradeoffs or narrowing down options, and it wouldn't be able to critique like a team. Or could it?

Kate reviewed each of the five concepts, noting strengths, open questions, and potential risks.
Curious about how AI would respond, she uploaded images of three different design concepts and prompted ChatGPT for strengths and weaknesses. AI's critique was helpful in summarizing the pros and cons of different concepts, including a few points she had not considered, like potential privacy concerns. She asked a few follow-up questions to confirm the actual reasoning. Wondering if she could simulate a team critique by prompting ChatGPT differently, Kate asked it to use the Six Thinking Hats technique. The results came back dense, overwhelming, and unfocused. The AI couldn't prioritize, and it couldn't see the gaps Kate instinctively noticed: friction in onboarding, misaligned tone, unclear next steps. In that moment, the promise of AI felt overhyped. Kate stood up, stretched, and seriously considered ending her experiments with the AI-driven process. But she paused. Maybe the problem wasn't the tool. Maybe it was how she was using it. She made a note to experiment when she wasn't on a design sprint clock.

She returned to her sketches, this time laying them out on the wall. No screens, no prompts. Just markers, sticky notes, and Sharpie scribbles. Human judgment took over. Kate worked with her product partner to finalize the solution to test on Friday and spent the next hour storyboarding the experience in Figma. Kate then re-engaged with AI as a reviewer, not a decider. She prompted it for feedback on the storyboard and was surprised to see it produce detailed design, content, and micro-interaction suggestions for each step of the storyboarded experience. A lot of food for thought, but she'd have to judge what mattered when she created her prototype. But that wasn't until tomorrow!

Kate's Reflection

"AI exposed a few of my blind spots in the critique, which was good, but it basically pointed out that multiple options could work. I had to rely on my critical thinking and instincts to weigh options logically, emotionally, and contextually in order to choose a direction that was the most testable and aligned with the user feedback from Day 1.

I was also surprised by the suggestions it came up with while reviewing my final storyboard, but I will need a fresh pair of eyes and all the human judgment I can muster tomorrow. Empathy helped me walk through the flow like I was a new user. Visual communication helped pull it all together by turning abstract steps into a real storyboard for the team to see instead of imagining.

TO DO: Experiment with prompting around the Six Thinking Hats for different perspectives."

Thursday: Prototype (aka Kate And Faking It)

On Day 4, the team usually turns the storyboard from the previous day into a prototype that can be tested with users on Day 5. The prototype doesn't need to be fully functional; a simulated experience is sufficient to gather user feedback. Kate's prototype day often consisted of marathon Figma Design sessions and late-night pizza dinners. She was hoping AI would change that today. She fed yesterday's storyboard to ChatGPT and asked it for screens. It took a while to generate, but she was excited to see a screen flow gradually appear on her screen, except that it had three screens instead of the six frames from her storyboard, as you can see in the image below.

She tried different prompts but couldn't get a complete flow. She was excited when it offered to create a Figma file. She waited. And waited. And reminded it that she was right there, waiting. It finally gave her a link, a link that didn't work. That made her question why she needed prototype screens in Figma at all.
Could one of those no-code tools do the job? Kate remembered that she had just received beta access to Figma Make (Figma's AI prompt-to-prototype tool) and decided to put it to the test. She shared her storyboard, added context about the prototype being for a design sprint, the problem, and a brief description of the audience. In a few seconds, she saw code being generated and a running description on the left. It seemed like it would take some time, so she headed to the cafeteria for lunch. When Kate returned, she was surprised to see Figma Make had created an app along with a detailed explanation:

"I've created a complete finance app prototype with five key screens that guide users through setting financial goals in a friendly, low-pressure way. The app features smooth animations with Framer Motion, a progress indicator to show users where they are in the journey, and emotionally supportive visuals and copy throughout the experience. Each screen serves a specific purpose: welcoming users, helping them define relatable goals, building their plan with visual feedback, presenting a simple summary, and offering next steps without pressure.

You might want to:

- Add personalization options like user profiles or avatars.
- Implement a dark mode toggle for accessibility.
- Create additional goal templates and saving strategies."

Since Figma Make had built the prototype, not her, Kate felt like she was User 0, and clicked her way through the interactive prototype, making a mental note of what worked and what didn't. She was impressed to see that she could not only select a common goal (buy a home, save for education, and so on) but could also create her own goal (hello, DeLonghi coffee maker), and it carried over for the rest of the experience. This was something she had never been able to do in Figma Design!

Despite some obvious misses, like a missing header and navigation and some buttons not working, she was impressed! Kate tried the option to publish, and it gave her a link that she immediately shared with her product and engineering partners. A few minutes later, they joined her in the conference room, exploring it together. The engineer scanned the code, didn't seem impressed, but said it would work as a disposable prototype. Kate prompted Figma Make to add an orange header and app navigation, and this time the trio kept their eyes peeled as they watched the progress in code and in English. The results were pretty good. They spent the next hour making changes to get it ready for testing. Even though he didn't admit it, the engineer seemed impressed with the result, if not the code.

By late afternoon, they had a functioning interactive prototype. Kate fed ChatGPT the prototype link and asked it to create a usability testing script. It came up with a basic but complete test script, including a checklist for observers to take notes. Kate went through the script carefully and updated it to add probing questions about AI transparency, emotional check-ins, more specific task scenarios, and a post-test debrief that looped back to the sprint goal. Kate did a dry run with her product partner, who teased her: "Did you really need me? Couldn't your AI do it?" It hadn't occurred to her, but she was now curious! "Act as a Gen Z user seeing this interactive prototype for the first time. How would you react to the language, steps, and tone? What would make you feel more confident or in control?"

It worked! ChatGPT simulated user feedback for the first screen and asked if she wanted it to continue. "Yes, please," she typed.
A few seconds later, she was reading what could very well have been a screen-by-screen transcript from a test.

Kate was still processing what she had seen as she drove home, happy she didn't have to stay late. The simulated test using AI appeared impressive at first glance. But the more she thought about it, the more disturbing it became. The output didn't mention what the simulated user clicked, and if she had asked, she probably would have received an answer. But how useful would that be? After almost missing her exit, she forced herself to think about eating a relaxed meal at home instead of her usual Prototype-Thursday-Multitasking-Pizza-Dinner.

Kate's Reflection

Today was the most meta I've felt all week: building a prototype about AI, with AI, while being coached by AI. And it didn't all go the way I expected.

While ChatGPT didn't deliver prototype screens, Figma Make coded a working, interactive prototype with interactions I couldn't have built in Figma Design. I used curiosity and experimentation today, by asking: What if I reworded this? What if I flipped that flow?

AI moved fast, but I had to keep steering. But I have to admit that tweaking the prototype by changing the words, not code, felt like magic!

Critical thinking isn't optional anymore; it is table stakes.

My impromptu ask of ChatGPT to simulate a Gen Z user testing my flow? That part both impressed and unsettled me. I'm going to need time to process this. But that can wait until next week. Tomorrow, I test with five Gen Zs, real people.

Friday: Test (aka Prototype Meets User)

Day 5 in a design sprint is the culmination of the week's work, from understanding the problem and exploring solutions to choosing the best and building a prototype. It's when teams interview users and learn by watching them react to the prototype and seeing if it really matters to them.

As Kate prepped for the tests, she grounded herself in the sprint problem statement and the users: How might we help Gen Z users confidently take their first financial action in our app in a way that feels simple, safe, and puts them in control? She clicked through the prototype one last time; the link still worked! And just in case, she also had screenshots saved.

Kate moderated the five tests while her product and engineering partners observed. The prototype may have been AI-generated, but the reactions were human. She observed where people hesitated and what made them feel safe and in control. Based on the participant, she would pivot, go off-script, and ask clarifying questions, getting deeper insights.

After each session, she dropped the transcripts and their notes into ChatGPT, asking it to summarize that user's feedback into pain points, positive signals, and any relevant quotes. At the end of the five rounds, Kate prompted ChatGPT for recurring themes across the sessions to use as input for their reflection and synthesis.

The trio combed through the results, with an eye out for any suspicious AI-generated results. They ran into one: "Users Trust AI." Not one user mentioned or clicked the "Why this?" link, but AI possibly assumed the transparency features worked because they were available in the prototype.

They agreed that the prototype resonated with users, allowing all of them to easily set their financial goals, and identified a couple of opportunities for improvement: better explaining AI-generated plans and celebrating win moments after creating a plan. Both were fairly easy to address during their product build process.

That was a nice end to the week: another design sprint wrapped, and Kate's first AI-augmented design sprint!
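(As with the persona simulation, Kate's per-session summarization step could be scripted rather than pasted into a chat window. A minimal sketch follows, again assuming the OpenAI Python SDK and an illustrative model name; the instruction text paraphrases Kate's ask rather than quoting it.)

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

INSTRUCTIONS = (
    "Summarize this usability test transcript into pain points, positive "
    "signals, and relevant verbatim quotes. Report only behaviors and quotes "
    "that actually appear in the transcript."
)

def summarize_session(transcript: str) -> str:
    # One call per session; collect the five summaries, then ask a
    # follow-up prompt for recurring themes across all of them.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

The explicit "report only what appears in the transcript" instruction is one small guardrail against exactly the kind of hallucinated finding ("Users Trust AI") the trio caught by hand.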
She started Monday anxious about falling behind, overwhelmed by options. She closed Friday confident in a validated concept, grounded in real user needs, and empowered by tools she now knew how to steer.

Kate's Reflection

Test driving my prototype with AI yesterday left me impressed and unsettled. But today's tests with people reminded me why we test with real users: not proxies or people who interact with users, but actual end users. And GenAI is not the user. Five tests put my designerly skill of observation to the test.

GenAI helped summarize the test transcripts quickly but snuck in one last hallucination this week, about AI! With AI, don't trust; always verify! Critical thinking is not going anywhere.

AI can move fast with words, but only people can use empathy to move beyond words to truly understand human emotions.

My next goal is to learn to talk to AI better, so I can get better results.

Conclusion

Over the course of five days, Kate explored how AI could fit into her UX work, not by reading articles or LinkedIn posts, but by doing. Through daily experiments, iterations, and missteps, she got comfortable with AI as a collaborator to support a design sprint. It accelerated every stage: synthesizing user feedback, generating divergent ideas, giving feedback, and even spinning up a working prototype, as shown below.

What was clear by Friday was that speed isn't insight. While AI produced outputs fast, it was Kate's designerly skills (curiosity, empathy, observation, visual communication, experimentation, and, most importantly, critical thinking and a growth mindset) that turned data and patterns into meaningful insights. She stayed in the driver's seat, verifying claims, adjusting prompts, and applying judgment where automation fell short.

She started the week on Monday, overwhelmed, her confidence dimmed by uncertainty and the noise of AI hype. She questioned her relevance in a rapidly shifting landscape. By Friday, she not only had a validated concept but had also reshaped her entire approach to design. She had evolved: from AI-curious to AI-confident, from reactive to proactive, from unsure to empowered. Her mindset had shifted: AI was no longer a threat or a trend; it was like a smart intern she could direct, critique, and collaborate with. She didn't just adapt to AI. She redefined what it meant to be a designer in the age of AI.

The experience raised deeper questions: How do we make sure AI-augmented outputs are not made up? How should we treat AI-generated user feedback? Where do ethics and human responsibility intersect?

Besides a validated solution to their design sprint problem, Kate had prototyped a new way of working as an AI-augmented designer. The question now isn't just "Should designers use AI?" It's "How do we work with AI responsibly, creatively, and consciously?"
That's what the next article will explore: designing your interactions with AI using a repeatable framework.

Poll: If you could design your own AI assistant, what would it do?
- Assist with ideation?
- Research synthesis?
- Identify customer pain points?
- Or something else entirely?

Share your idea, and in the spirit of learning by doing, we'll build one together from scratch in the third article of this series: Building Your Own CustomGPT.

Resources

- Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days, by Jake Knapp
- The Design Sprint
- Figma Make
- "OpenAI Appeals Sweeping, Unprecedented Order Requiring It Maintain All ChatGPT Logs," by Vanessa Taylor

Tools

As mentioned earlier, ChatGPT was the general-purpose LLM Kate leaned on, but you could swap it out for Claude, Gemini, Copilot, or other competitors and likely get similar results (or at least similarly weird surprises). Here are some alternate AI tools that might suit each sprint stage even better. Note that with dozens of new AI tools popping up every week, this list is far from exhaustive.

Stage | Tools | Capability
Understand | Dovetail, UserTesting's Insights Hub, Marvin | Summarize & synthesize data
Sketch | Any LLM, Musely | Brainstorm concepts and ideas
Decide | Any LLM | Critique / provide feedback
Prototype | UIzard, UXPilot, Visily, Krisspy, Figma Make, Lovable, Bolt | Create wireframes and prototypes
Test | UserTesting, UserInterviews, PlaybookUX, Maze, plus tools from the Understand stage | Moderated and unmoderated user tests / synthesis