-
Prompting Is A Design Act: How To Brief, Guide And Iterate With AI
smashingmagazine.com

In A Week In The Life Of An AI-Augmented Designer, we followed Kate's weeklong journey of her first AI-augmented design sprint. She had three realizations through the process:

AI isn't a co-pilot (yet); it's more like a smart, eager intern. One with access to a lot of information, good recall, fast execution, but no context. That mindset defined how she approached every interaction with AI: not as magic, but as management.

Don't trust; guide, coach, and always verify. Like any intern, AI needs coaching and supervision, and that's where her designerly skills kicked in. Kate relied on curiosity to explore, observation to spot bias, empathy to humanize the output, and critical thinking to challenge what didn't feel right. Her learning mindset helped her keep up with advances, and experimentation helped her learn by doing.

Prompting is part creative brief and part conversation design, just with an AI instead of a person. When you prompt an AI, you're not just giving instructions, but designing how it responds, behaves, and outputs information. If AI is like an intern, then the prompt is your creative brief that frames the task, sets the tone, and clarifies what good looks like. It's also your conversation script that guides how it responds, how the interaction flows, and how ambiguity is handled.

As designers, we're used to designing interactions for people. Prompting is us designing our own interactions with machines: it uses the same mindset with a new medium. It shapes an AI's behavior the same way you'd guide a user with structure, clarity, and intent. If you've bookmarked, downloaded, or saved prompts from others, you're not alone. We've all done that during our AI journeys. But while someone else's prompts are a good starting point, you will get better and more relevant results if you can write your own prompts tailored to your goals, context, and style. Using someone else's prompt is like using a Figma template. It gets the job done, but mastery comes from understanding and applying the fundamentals of design, including layout, flow, and reasoning. Prompts have a structure too. And when you learn it, you stop guessing and start designing.

Note: All prompts in this article were tested using ChatGPT, not because it's the only game in town, but because it's friendly, flexible, and lets you talk like a person, yes, even after the recent GPT-5 update. That said, any LLM with a decent attention span will work. Results for the same prompt may vary based on the AI model you use, the AI's training, mood, and how confidently it can hallucinate.

Privacy PSA: As always, don't share anything you wouldn't want leaked, logged, or accidentally included in the next AI-generated meme. Keep it safe, legal, and user-respecting.

With that out of the way, let's dive into the mindset, anatomy, and methods of effective prompting as another tool in your design toolkit.

Mindset: Prompt Like A Designer

As designers, we storyboard journeys, wireframe interfaces to guide users, and write UX copy with intention. However, when prompting AI, we treat it differently: "Summarize these insights," "Make this better," "Write copy for this screen," and then wonder why the output feels generic, off-brand, or just meh. It's like expecting a creative team to deliver great work from a one-line Slack message.
We wouldn't brief a freelancer, much less an intern, with "Design a landing page," so why brief AI that way?

Prompting Is A Creative Brief For A Machine

Think of a good prompt as a creative brief, just for a non-human collaborator. It needs similar elements, including a clear role, defined goal, relevant context, tone guidance, and output expectations. Just as a well-written creative brief unlocks alignment and quality from your team, a well-structured prompt helps the AI meet your expectations, even though it doesn't have real instincts or opinions.

Prompting Is Also Conversation Design

A good prompt goes beyond defining the task and sets the tone for the exchange by designing a conversation: guiding how the AI interprets, sequences, and responds. You shape the flow of tasks, how ambiguity is handled, and how refinement happens; that's conversation design.

Anatomy: Structure It Like A Designer

So how do you write a designer-quality prompt? That's where the W.I.R.E.+F.R.A.M.E. prompt design framework comes in: a UX-inspired framework for writing intentional, structured, and reusable prompts. Each letter represents a key design direction, grounded in the way UX designers already think. Just as a wireframe doesn't dictate final visuals, the WIRE+FRAME framework doesn't constrain creativity, but guides the AI with the structured information it needs.

Why not just use a series of back-and-forth chats with AI? You can, and many people do. But without structure, AI fills in the gaps on its own, often with vague or generic results. A good prompt upfront saves time, reduces trial and error, and improves consistency. And whether you're working on your own or across a team, a framework means you're not reinventing a prompt every time but reusing what works to get better results faster.

Just as we build wireframes before adding layers of fidelity, the WIRE+FRAME framework has two parts:
WIRE is the must-have skeleton. It gives the prompt its shape.
FRAME is the set of enhancements that bring polish, logic, tone, and reusability, like building a high-fidelity interface from the wireframe.

Let's improve Kate's original research synthesis prompt ("Read this customer feedback and tell me how we can improve financial literacy for Gen Z in our app"). To better reflect how people actually prompt in practice, let's tweak it to a more broadly applicable version: "Read this customer feedback and tell me how we can improve our app for Gen Z users." This one-liner mirrors the kinds of prompts we often throw at AI tools: short, simple, and often lacking structure. Now, we'll take that prompt and rebuild it using the first four elements of the W.I.R.E. framework, the core building blocks that provide AI with the main information it needs to deliver useful results.

W: Who & What
Define who the AI should be, and what it's being asked to deliver.
A creative brief starts with assigning the right hat. Are you briefing a copywriter? A strategist? A product designer? The same logic applies here. Give the AI a clear identity and task. Treat AI like a trusted freelancer or intern. Instead of saying "help me," tell it who it should act as and what's expected.
Example: "You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities."

I: Input Context
Provide background that frames the task.
Creative partners don't work in a vacuum.
They need context: the audience, goals, product, competitive landscape, and what's been tried already. This is the "what you need to know before you start" section of the brief. Think: key insights, friction points, business objectives. The same goes for your prompt.
Example: "You are analyzing customer feedback for Fintech Brand's app, targeting Gen Z users. Feedback will be uploaded from sources such as app store reviews, survey feedback, and usability test transcripts."

R: Rules & Constraints
Clarify any limitations, boundaries, and exclusions.
Good creative briefs always include boundaries: what to avoid, what's off-brand, or what's non-negotiable. Things like brand voice guidelines, legal requirements, or time and word count limits. Constraints don't limit creativity; they focus it. AI needs the same constraints to avoid going off the rails.
Example: "Only analyze the uploaded customer feedback data. Do not fabricate pain points, representative quotes, journey stages, or patterns. Do not supplement with prior knowledge or hypothetical examples. Use clear, neutral, stakeholder-facing language."

E: Expected Output
Spell out what the deliverable should look like.
This is the deliverable spec: What does the finished product look like? What tone, format, or channel is it for? Even if the task is clear, the format often isn't. Do you want bullet points or a story? A table or a headline? If you don't say, the AI will guess, and probably guess wrong. Even better, include an example of the output you want, an effective way to help AI know what you're expecting. If you're using GPT-5, you can also mix examples across formats (text, images, tables) together.
Example: "Return a structured list of themes. For each theme, include:
Theme Title
Summary of the Issue
Problem Statement
Opportunity
Representative Quotes (from data only)
Journey Stage(s)
Frequency (count from data)
Severity Score (1–5), where 1 = Minor inconvenience or annoyance; 3 = Frustrating but workaround exists; 5 = Blocking issue
Estimated Effort (Low / Medium / High), where Low = Copy or content tweak; Medium = Logic/UX/UI change; High = Significant changes."

WIRE gives you everything you need to stop guessing and start designing your prompts with purpose. When you start with WIRE, your prompting is like a briefing, treating AI like a collaborator. Once you've mastered this core structure, you can layer in additional fidelity, like tone, step-by-step flow, or iterative feedback, using the FRAME elements. These five elements provide additional guidance and clarity to your prompt by layering clear deliverables, thoughtful tone, reusable structure, and space for creative iteration.

F: Flow of Tasks
Break complex prompts into clear, ordered steps.
This is your project plan or creative workflow that lays out the stages, dependencies, or sequence of execution. When the task has multiple parts, don't just throw it all into one sentence. You are doing the thinking and guiding AI. Structure it like steps in a user journey or modules in a storyboard.
In this example, it fits as the blueprint for the AI to use to generate the table described in E: Expected Output.
Example: "Recommended flow of tasks:
Step 1: Parse the uploaded data and extract discrete pain points.
Step 2: Group them into themes based on pattern similarity.
Step 3: Score each theme by frequency (from data), severity (based on content), and estimated effort.
Step 4: Map each theme to the appropriate customer journey stage(s).
Step 5: For each theme, write a clear problem statement and opportunity based only on what's in the data."

R: Reference Voice or Style
Name the desired tone, mood, or reference brand.
This is the brand voice section or style mood board: reference points that shape the creative feel. Sometimes you want buttoned-up. Other times, you want conversational. Don't assume the AI knows your tone, so spell it out.
Example: "Use the tone of a UX insights deck or product research report. Be concise, pattern-driven, and objective. Make summaries easy to scan by product managers and design leads."

A: Ask for Clarification
Invite the AI to ask questions before generating, if anything is unclear.
This is your "Any questions before we begin?" moment, a key step in collaborative creative work. You wouldn't want a freelancer to guess what you meant if the brief was fuzzy, so why expect AI to do better? Ask AI to reflect or clarify before jumping into output mode.
Example: "If the uploaded data is missing or unclear, ask for it before continuing. Also, ask for clarification if the feedback format is unstructured or inconsistent, or if the scoring criteria need refinement."

M: Memory (Within The Conversation)
Reference earlier parts of the conversation and reuse what's working.
This is similar to keeping visual tone or campaign language consistent across deliverables in a creative brief. Prompts are rarely one-shot tasks, so this reminds AI of the tone, audience, or structure already in play. GPT-5 got better with memory, but this still remains a useful element, especially if you switch topics or jump around.
Example: "Unless I say otherwise, keep using this process: analyze the data, group into themes, rank by importance, then suggest an action for each."

E: Evaluate & Iterate
Invite the AI to critique, improve, or generate variations.
This is your revision loop: your way of prompting for creative direction, exploration, and refinement. Just like creatives expect feedback, your AI partner can handle review cycles if you ask for them. Build iteration into the brief to get closer to what you actually need. Sometimes, you may see ChatGPT test two versions of a response on its own by asking for your preference.
Example: "After listing all themes, identify the one with the highest combined priority score (based on frequency, severity, and effort). For that top-priority theme:
Critically evaluate its framing: Is the title clear? Are the quotes strong and representative? Is the journey mapping appropriate?
Suggest one improvement (e.g., improved title, more actionable implication, clearer quote, tighter summary).
Rewrite the theme entry with that improvement applied.
Briefly explain why the revision is stronger and more useful for product or design teams."

Here's a quick recap of the WIRE+FRAME framework:
W: Who & What. Define the AI persona and the core deliverable.
I: Input Context. Provide background or data scope to frame the task.
R: Rules & Constraints. Set boundaries.
E: Expected Output. Spell out the format and fields of the deliverable.
F: Flow of Tasks. Break the work into explicit, ordered sub-tasks.
R: Reference Voice/Style. Name the tone, mood, or reference brand to ensure consistency.
A: Ask for Clarification. Invite AI to pause and ask questions if any instructions or data are unclear before proceeding.
M: Memory. Leverage in-conversation memory to recall earlier definitions, examples, or phrasing without restating them.
E: Evaluate & Iterate. After generation, have the AI self-critique the top outputs and refine them.

And here's the full WIRE+FRAME prompt:

(W) You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities.
(I) You are analyzing customer feedback for Fintech Brand's app, targeting Gen Z users. Feedback will be uploaded from sources such as app store reviews, survey feedback, and usability test transcripts.
(R) Only analyze the uploaded customer feedback data. Do not fabricate pain points, representative quotes, journey stages, or patterns. Do not supplement with prior knowledge or hypothetical examples. Use clear, neutral, stakeholder-facing language.
(E) Return a structured list of themes. For each theme, include: Theme Title; Summary of the Issue; Problem Statement; Opportunity; Representative Quotes (from data only); Journey Stage(s); Frequency (count from data); Severity Score (1–5), where 1 = Minor inconvenience or annoyance, 3 = Frustrating but workaround exists, 5 = Blocking issue; Estimated Effort (Low / Medium / High), where Low = Copy or content tweak, Medium = Logic/UX/UI change, High = Significant changes.
(F) Recommended flow of tasks: Step 1: Parse the uploaded data and extract discrete pain points. Step 2: Group them into themes based on pattern similarity. Step 3: Score each theme by frequency (from data), severity (based on content), and estimated effort. Step 4: Map each theme to the appropriate customer journey stage(s). Step 5: For each theme, write a clear problem statement and opportunity based only on what's in the data.
(R) Use the tone of a UX insights deck or product research report. Be concise, pattern-driven, and objective. Make summaries easy to scan by product managers and design leads.
(A) If the uploaded data is missing or unclear, ask for it before continuing. Also, ask for clarification if the feedback format is unstructured or inconsistent, or if the scoring criteria need refinement.
(M) Unless I say otherwise, keep using this process: analyze the data, group into themes, rank by importance, then suggest an action for each.
(E) After listing all themes, identify the one with the highest combined priority score (based on frequency, severity, and effort). For that top-priority theme: critically evaluate its framing (Is the title clear? Are the quotes strong and representative? Is the journey mapping appropriate?); suggest one improvement (e.g., improved title, more actionable implication, clearer quote, tighter summary); rewrite the theme entry with that improvement applied; and briefly explain why the revision is stronger and more useful for product or design teams.

You could use ## to label the sections (e.g., ##FLOW), more for your readability than for the AI. At over 400 words, this Insights Synthesis prompt example is a detailed, structured prompt, but it isn't customized for you and your work.
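If you end up reusing the same sections across prompts, it can also help to keep each WIRE+FRAME part as a named snippet and assemble the final prompt from them. The following is a minimal, hypothetical Python sketch, not a tool from this article: the ## labels mirror the optional section markers mentioned above, and the placeholder strings stand in for your own tested components.

```python
# Minimal sketch: assemble a WIRE+FRAME prompt from reusable, named parts.
# Section names mirror the framework; the contents are placeholders you
# would replace with your own components.

WIRE_FRAME_ORDER = [
    "WHO", "INPUT", "RULES", "EXPECTED_OUTPUT",       # WIRE: the skeleton
    "FLOW", "REFERENCE", "ASK", "MEMORY", "EVALUATE",  # FRAME: optional enhancements
]

def build_prompt(sections: dict[str, str]) -> str:
    """Join the provided sections in WIRE+FRAME order, labeling each with ##."""
    parts = []
    for name in WIRE_FRAME_ORDER:
        text = sections.get(name, "").strip()
        if text:  # FRAME sections are optional; skip anything you didn't supply
            parts.append(f"##{name}\n{text}")
    return "\n\n".join(parts)

insights_prompt = build_prompt({
    "WHO": "You are a senior UX researcher and customer insights analyst...",
    "INPUT": "You are analyzing customer feedback for Fintech Brand's app...",
    "RULES": "Only analyze the uploaded customer feedback data...",
    "EXPECTED_OUTPUT": "Return a structured list of themes...",
    "ASK": "If the uploaded data is missing or unclear, ask before continuing.",
})

print(insights_prompt)
```

Swapping in a different WHO or EXPECTED_OUTPUT snippet then becomes a one-line change, which is the same idea the Methods section later in this article turns into a team-level prompt system.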
The intent wasn't to give you a specific prompt (the proverbial fish), but to show how you can use a prompt framework like WIRE+FRAME to create a customized, relevant prompt that will help AI augment your work (teaching you to fish).

Keep in mind that prompt length isn't usually the concern; a lack of quality and structure is. As of the time of writing, AI models can easily process prompts that are thousands of words long.

Not every prompt needs all the FRAME components; WIRE is often enough to get the job done. But when the work is strategic or highly contextual, pick components from FRAME: the extra details can make a difference. Together, WIRE+FRAME give you a detailed framework for creating a well-structured prompt, with the crucial components first, followed by optional components:
WIRE builds a clear, focused prompt with role, input, rules, and expected output.
FRAME adds refinement like tone, reusability, and iteration.

Here are some scenarios and recommendations for using WIRE or WIRE+FRAME:
Simple, one-off analyses: quick prompting with minimal setup and no need for detailed process transparency. Recommended: WIRE.
Tight sprints or hackathons: rapid turnarounds, and times you don't need embedded review and iteration loops. Recommended: WIRE.
Highly iterative exploratory work: you expect to tweak results constantly and prefer manual control over each step. Recommended: WIRE.
Complex multi-step playbooks: detailed workflows that benefit from a standardized, repeatable, visible sequence. Recommended: WIRE+FRAME.
Shared or hand-off projects: when different teams will rely on embedded clarification, memory, and consistent task flows for recurring analyses. Recommended: WIRE+FRAME.
Built-in quality control: you want the AI to flag top issues, self-critique, and refine, minimizing manual QC steps. Recommended: WIRE+FRAME.

Prompting isn't about getting it right the first time. It's about designing the interaction, and redesigning when needed. With WIRE+FRAME, you're going beyond basic prompting and designing the interaction between you and AI.

From Gut Feel To Framework: A Prompt Makeover

Let's compare the results of Kate's first AI-augmented design sprint prompt (to synthesize customer feedback into design insights) with one based on the WIRE+FRAME prompt framework, using the same data and focusing on the top results.

Original prompt: "Read this customer feedback and tell me how we can improve our app for Gen Z users."
Initial ChatGPT results:
Improve app reliability to reduce crashes and freezing.
Provide better guidance or tutorials for financial tools like budgeting or goal setting.
Enhance the transparency of Zelle transfers by showing confirmation messages.
Speed up app loading and reduce lag on key actions.
With this version, you'd likely need to go back and forth with follow-up questions, rewrite the output for clarity, and add structure before sharing with your team.

WIRE+FRAME prompt above (with defined role, scope, rules, expected format, tone, flow, and evaluation loop).
Initial ChatGPT results: (the structured themes table is shown in the original article).

You can clearly see the very different results from the two prompts, both using the exact same data. While the first prompt returns a quick list of ideas, the detailed WIRE+FRAME version doesn't just summarize feedback but structures it. Themes are clearly labeled, supported by user quotes, mapped to customer journey stages, and prioritized by frequency, severity, and effort. The structured prompt results can be used as-is or shared without needing to reformat, rewrite, or explain them (see disclaimer below).
The first prompt's output needs massaging: it's not detailed, lacks evidence, and would require several rounds of clarification to be actionable. The first prompt may work when the stakes are low and you are exploring. But when your prompt is feeding design, product, or strategy, structure comes to the rescue.

Disclaimer: Know Your Data

A well-structured prompt can make AI output more useful, but it shouldn't be the final word, or your single source of truth. AI models are powerful pattern predictors, not fact-checkers. If your data is unclear or poorly referenced, even the best prompt may return confident nonsense. Don't blindly trust what you see. Treat AI like a bright intern: fast, eager, and occasionally delusional. You should always be familiar with your data and validate what AI spits out. For example, in the WIRE+FRAME results above, AI rated the effort as low for financial tool onboarding. That could easily be a medium or high. Good prompting should be backed by good judgment.

Try This Now

Start by using the WIRE+FRAME framework to create a prompt that will help AI augment your work. You could also rewrite the last prompt you were not satisfied with using WIRE+FRAME and compare the output. Feel free to use this simple tool to guide you through the framework.

Methods: From Lone Prompts To A Prompt System

Just as design systems have reusable components, your prompts can too. You can use the WIRE+FRAME framework to write detailed prompts, but you can also use the structure to create reusable components: pre-tested, plug-and-play pieces you can assemble to build high-quality prompts faster. Each part of WIRE+FRAME can be transformed into a prompt component: small, reusable modules that reflect your team's standards, voice, and strategy.

For instance, if you find yourself repeatedly using the same content for different parts of the WIRE+FRAME framework, you could save it as reusable components for you and your team. In the example below, we have two different reusable components for W: Who & What, an insights analyst and an information architect.

W: Who & What
You are a senior UX researcher and customer insights analyst. You specialize in synthesizing qualitative data from diverse sources to identify patterns, surface user pain points, and map them across customer journey stages. Your outputs directly inform product, UX, and service priorities.
You are an experienced information architect specializing in organizing enterprise content on intranets. Your task is to reorganize the content and features into categories that reflect user goals, reduce cognitive load, and increase findability.

Create and save prompt components and variations for each part of the WIRE+FRAME framework, allowing your team to quickly assemble new prompts by combining components when available, rather than starting from scratch each time (a small sketch of such a component library appears at the end of this article).

Behind The Prompts: Questions About Prompting

Q: If I use a prompt framework like WIRE+FRAME every time, will the results be predictable?
A: Yes and no. Yes, your outputs will be guided by a consistent set of instructions (e.g., Rules, Examples, Reference Voice/Style) that will guide the AI to give you a predictable format and style of results. And no, while the framework provides structure, it doesn't flatten the generative nature of AI, but focuses it on what's important to you.
In the next article, we will look at how you can use this to your advantage to quickly reuse your best repeatable prompts as we build your AI assistant.

Q: Could changes to AI models break the WIRE+FRAME framework?
A: AI models are evolving more rapidly than any other technology we've seen before; in fact, ChatGPT was recently updated to GPT-5 to mixed reviews. The update didn't change the core principles of prompting or the WIRE+FRAME prompt framework. With future releases, some elements of how we write prompts today may change, but the need to communicate clearly with AI won't. Think of how you delegate work to an intern vs. someone with a few years' experience: you still need detailed instructions the first time either is doing a task, but the level of detail may change. WIRE+FRAME isn't built only for today's models; the components help you clarify your intent, share relevant context, define constraints, and guide tone and format, all timeless elements, no matter how smart the model becomes. The skill of shaping clear, structured interactions with non-human AI systems will remain valuable.

Q: Can prompts be more than text? What about images or sketches?
A: Absolutely. With tools like GPT-5 and other multimodal models, you can upload screenshots, pictures, whiteboard sketches, or wireframes. These visuals become part of your Input Context or help define the Expected Output. The same WIRE+FRAME principles still apply: you're setting context, tone, and format, just using images and text together. Whether your input is a paragraph or an image and text, you're still designing the interaction.

Have a prompt-related question of your own? Share it in the comments, and I'll either respond there or explore it further in the next article in this series.

From Designerly Prompting To Custom Assistants

Good prompts and results don't come from using others' prompts, but from writing prompts that are customized for you and your context. The WIRE+FRAME framework helps with that and makes prompting a tool you can use to guide AI models like a creative partner instead of hoping for magic from a one-line request. Prompting uses the designerly skills you already use every day to collaborate with AI:
Curiosity to explore what the AI can do and frame better prompts.
Observation to detect bias or blind spots.
Empathy to make machine outputs human.
Critical thinking to verify and refine.
Experimentation and iteration to learn by doing and improve the interaction over time.
A growth mindset to keep up with new technology like AI and prompting.

Once you create and refine prompt components and prompts that work for you, make them reusable by documenting them. But wait, there's more: what if your best prompts, or the elements of your prompts, could live inside your own AI assistant, available on demand, fluent in your voice, and trained on your context? That's where we're headed next.

In the next article, Design Your Own Design Assistant, we'll take what you've learned so far and turn it into a custom AI assistant (aka Custom GPT), a design-savvy, context-aware assistant that works like you do. We'll walk through that exact build, from defining the assistant's job description to uploading knowledge, testing, and sharing it with others.

Resources
GPT-5 Prompting Guide
GPT-4.1 Prompting Guide
Anthropic Prompt Engineering
Prompt Engineering by Google
Perplexity Webapp to guide you through the WIRE+FRAME framework
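To make the reusable prompt components from the Methods section concrete, here is a small, hypothetical sketch of what a shared component library could look like: the two Who & What personas from that section saved side by side and picked at assembly time. The texts are abbreviated placeholders for your own tested components, and nothing here is an official tool from the article.

```python
# Minimal sketch: a shared library of WIRE+FRAME components a team can reuse.
# Component texts are abbreviated placeholders; store your full, tested versions.

COMPONENT_LIBRARY = {
    "WHO": {
        "insights_analyst": (
            "You are a senior UX researcher and customer insights analyst. "
            "You synthesize qualitative data, surface pain points, and map them "
            "to customer journey stages."
        ),
        "information_architect": (
            "You are an experienced information architect specializing in organizing "
            "enterprise content on intranets. Reorganize content and features into "
            "categories that reflect user goals and increase findability."
        ),
    },
    "RULES": {
        "no_fabrication": "Only analyze the uploaded data. Do not fabricate quotes, themes, or patterns.",
    },
}

def assemble(choices: dict[str, str], library: dict = COMPONENT_LIBRARY) -> str:
    """Pick one saved component per section and join them into a ##-labeled prompt."""
    return "\n\n".join(
        f"##{section}\n{library[section][name]}" for section, name in choices.items()
    )

# Example: the same rules component paired with a different persona.
print(assemble({"WHO": "information_architect", "RULES": "no_fabrication"}))
```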
-
F5: Jule Cats on Making Music, Demolished Buildings, Nature + More
design-milk.com

When Jule Cats was a young girl, her mother had already sensed that she was destined to choose a creative profession because of her interest in all things artistic. A significant event when Cats was 12 years old set her on a lifelong path. "I visited the graduation show of Design Academy Eindhoven during Dutch Design Week when I was in primary school," she says. "I still remember the feeling I got when I walked into the exhibition, in awe of all the amazing projects and designs."

Cats studied at the Willem de Kooning Academy, where she earned a bachelor's degree in product design. Just a year later, in 2016, she opened her eponymous studio. Based in Rotterdam, the artist and designer is known for her bespoke objects for the interior that range from vases to paperweights.

Jule Cats. Photo: Gabriela Larrea

She is continually inspired by the concept of time, from the way individuals cling to memories to how they experience the present. With materials like water-based resin and mineral powder, her pieces have folds and creases that seem to shift and change, similar to a viewer's perception.

By carrying a notebook with her, Cats can take her time and record her thoughts. She puts the initial concepts down on paper, writing or sketching, and then reflects upon the elements later. It is a process that helps her in the initial stages of her work, when everything is still new and fragile.

For Cats, transformation is poetic, and she savors each phase as a project comes together. "I get very excited when I see my ideas turn into something tangible by making small prototypes," she explains. "This is the moment where it clicks for me."

Today, Jule Cats joins us for Friday Five!

1. Rocks and Stones
I always find myself looking for interesting stones whenever I'm out in nature. Whether it's along a riverside, at the beach, or in the mountains, a special gradient or texture will always catch my eye.

2. Ever-Changing Skies
The color palettes that the sky offers keep amazing me. I like it when quite unexpected colors come together, like bright pink and soft blue. It motivates me to choose colors for my designs that I otherwise wouldn't have combined.

3. Demolished Buildings
When a building is in the middle of its demolition process, it offers a glimpse into the lives that have gone on inside. To me it's a very intimate moment, as if the stones start to reveal their stories. That's why I like visiting these demolition sites and collecting materials to work with.

4. Details in Nature
Whenever I'm out in nature, I keep being surprised by the unintended beauty in certain things. Patterns, shapes, or the way a plant folds itself. I capture the things that surprise me, as a reminder that you don't always have to overthink a design and to follow your intuition.

5. Making Music
The reason why my creative process usually starts with writing is probably because I used to write and play songs. I still have the urge to tell a story, but now my medium has turned into something tangible. Still, I like to play music every now and then, because I like the liberating feeling it gives me.

Works by Jule Cats:

Photo: Jacqueline Fuijkschot

SWAY Floor Lamp
This 1.5-meter-high floor lamp looks like a solid marble sculpture, but it's actually a very lightweight, hollow design. In the creating process, I guide the shapes, but I don't aim for full control.
Each piece finds its own unique flow, offering a reminder of the beauty in letting go and allowing things to unfold naturally.

IN DISGUISE Vases
This series was the starting point of my design studio. When I was graduating, I was living in houses which were up for demolition. By combining concrete remains with polyester resin, I started giving discarded materials a new purpose and presence, while at the same time revealing the emotional beauty hidden in what is often seen as waste. The series became an international success and encouraged me to dive deeper into this concept.

RISE Table Lamps
The RISE lamps dive deeper into the concept of revealing the beauty of waste. After creating the IN DISGUISE vases, I wanted to expand this idea into a new shape. That's how the lamps came to life, in which the light is quite literally rising out of the ashes of the demolished buildings. In the bottom layer, in which the remains are integrated, I often use extra pigments to give each lamp a unique touch.

FLOW Wall Piece
My FLOW pieces are very suitable for commissions. They look like they've been carved out of marble, and even by touching the surface you might think so. But because I use a liquid resin to create these works, I'm flexible to adjust the sizes. This project was quite special, as it was the longest piece that has been requested so far: 1.80 meters long.

FLOW Tiles
Besides creating big sculptures within the FLOW series, I've recently worked on a design which is modular. This design includes individual tiles, which can build up to an artwork in the preferred size of my client. The composition in the picture is 70 cm by 32 cm wide, but I recently installed an artwork 1 meter high and 2 meters wide!
-
Your poor work/life balance might be my fault
uxdesign.cc

Unplugging isn't just an individual responsibility.

Photo by Declan Sun on Unsplash

"Yes! I could take care of review tasks during my son's baseball game." My interviewee chewed on her fingernails. "I feel like I'm chained to my desk. I would definitely use an app."

I was appalled.

Our brief was to concept test the idea of a mobile app for financial and regulatory compliance users. It would complement the company's existing software, allowing users to review tasks and comments on the go. We received requests for this from Sales, Marketing, and customer surveys. After all, every B2B or B2C company needs its own mobile app, right?

But I walked away from the research feeling shaken. The people we interviewed were excited about the app concept, not because it would decrease their workload or simplify anything, but because they could use it to work more. Creating something that entices people to spend extra time, off the clock, on work? That's exploitation.

Unethical design or flexibility enhancer?

Let me explain. That mobile app could encourage inequity and manipulation in multiple ways:

Some folks start responding at all hours, which pressures the rest of the team to do the same. Failure to participate leads to resentment ("she's not working as hard as I am") and poor performance reviews, which disproportionately impact caretakers, parents (especially women), and people with disabilities.
Hourly employees aren't paid for overtime spent using the app, which is corporate wage theft.
Companies often expect people to use their personal phones for work instead of providing one. This means agreeing to let the company wipe the device remotely, including personal apps, data, and photos.
People can't properly unplug and rest after work. Burnout ensues. (Not so good for business in the long run, either.)

At least, that's what I thought at the time.

Then the 2020 pandemic descended. All of those bad effects happened anyway as businesses shifted to remote work, along with positive changes:

We found out that working from home improves productivity and job satisfaction for many people.
Remote work is good for the environment as fewer people commute. Hey, that's unpaid time too!
Flexible schedules became the norm, benefitting the same groups singled out before.

Now I work remotely too, and while I don't use mobile apps for my job, it's due to diligent culture-building by management: loudly announcing PTO, mildly shaming folks who respond when out of office, and respecting working hours.

We're living in a cultural moment where work/life balance is seen as an individual responsibility. But the systems we design shape people's lived experience just as much. As designers, we must take responsibility for that.

First of all, don't design deceptive patterns

"Deceptive patterns are designs that force the user to take an action that is not in their best interest. They are prolific on the web because they are phenomenally effective at boosting conversions. However, their use is unethical and legally problematic."
Maria Rosala for Nielsen Norman Group

These patterns are usually associated with ecommerce sites or services, but they can also sneak into B2B software that people use for work. For example, making certain essential features of your web-based software available only in the mobile app to force people into downloading it.
(Not cool, Etsy.)

Etsy screenshot by mtmadhatt on Reddit.

Another common deceptive pattern in B2B settings is nagging: sending repeated emails or notifications to users trying to upsell the product or touting a new feature. Don't waste people's time by emailing them constantly and driving up the number of pings competing for their attention during (or outside of) the workday.

Further reading on deceptive patterns:
Types of deceptive pattern
Why I don't use the term "dark pattern" anymore
Solid examples of deceptive patterns

Help users set boundaries

If your app is collaborative, provide settings for working hours and timezone. (You can prompt new users to set this up early to encourage consistent usage.) Then, consider how working hours should affect other features like notifications and messaging. Can the software shift subtly into night mode at the end of the day? Are login timeouts long enough to accommodate most working days?

Google Calendar's setting lets users automatically decline meetings outside working hours, and informs others when they book or view meetings outside that timeframe.

Google Calendar's working hours settings. Image by me.
"Outside working hours" indicator, shown when a meeting is auto-declined. Image by me.

Announce boundaries so users don't have to

Clearly indicate when people are out of office, in a different timezone, or have upcoming PTO. Software can serve as a gentle buffer between coworkers, setting expectations for when a message will be read or responded to. Slack does this well; coworkers can even schedule messages to be delivered later. This results in more intentional interactions and fewer that come across as rude or urgent.

Some of Slack's automated messages about notifications and timezone. Image by me.

Help people perceive time

It's great to get into a flow state at work, but sometimes that leads to working right through the end of the day and cutting into family time. Consumer apps like YouTube help people disengage with bedtime reminders and sleep timers. (Reminders are automatically on for minors.) And TikTok took this a step farther by partnering with Headspace to interrupt doomscrolling with mini meditation breaks. Consider: how could your product help build time awareness without becoming annoying?

Image credit: TikTok

For employees, it's often meetings or busywork that eat up time rather than bite-size videos. Good UX can help users understand where their time is going in your product so they can better manage their workday. (Even better if you can eliminate busywork before it happens.) Google Calendar has solved for time awareness pretty well. Time Insights allows users to analyze how much focus time and meeting time their week contains and plan ahead. They can even present this data to a manager or team to help advocate for reducing unproductive meetings.

Google's Time Insights. Image by me.

Provide notification options

Make sure users have flexibility in choosing which notifications are muted and when. Can a working hours setting double as quiet hours, with notifications muted outside that range? Can newsletters, updates, and direct messages be managed independently, so only the most important updates go through? (A minimal sketch of this idea appears at the end of this article.)

Slack provides lots of notification pausing options. Image by me.

The future of work/life balance is in our hands

Remote work is here to stay. UX and product designers are set to shape the future of workplace culture, so let's do it well. What other patterns have you seen that affect users' work/life balance? Leave a comment and keep the discussion going!
And feel free to connect with me on LinkedIn.

No AI was used in the creation of this article.

"Your poor work/life balance might be my fault" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
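As a concrete, entirely hypothetical illustration of the working-hours-as-quiet-hours pattern discussed above, here is a minimal sketch of the check a notification service could run before pinging someone. The names and the defer-rather-than-drop behavior are illustrative assumptions, not a description of Slack, Google Calendar, or any other specific product.

```python
# Minimal sketch: gate non-urgent notifications on the recipient's working hours.
# All names here are illustrative; a real system would also handle days off,
# PTO, and per-channel preferences.
from dataclasses import dataclass
from datetime import datetime, time
from zoneinfo import ZoneInfo

@dataclass
class WorkingHours:
    timezone: str   # e.g. "Europe/Amsterdam"
    start: time     # e.g. time(9, 0)
    end: time       # e.g. time(17, 30)

def within_working_hours(hours: WorkingHours, now_utc: datetime) -> bool:
    """True if the recipient's local time falls inside their declared working hours."""
    local = now_utc.astimezone(ZoneInfo(hours.timezone))
    return hours.start <= local.time() <= hours.end

def deliver_or_defer(message: str, urgent: bool, hours: WorkingHours) -> str:
    """Send urgent messages immediately; hold everything else for the next workday."""
    if urgent or within_working_hours(hours, datetime.now(tz=ZoneInfo("UTC"))):
        return f"deliver now: {message}"
    return f"deferred until working hours: {message}"

# Example: a newsletter-style update sent late at night gets held back.
print(deliver_or_defer("Weekly product digest", urgent=False,
                       hours=WorkingHours("Europe/Amsterdam", time(9, 0), time(17, 30))))
```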
-
Here's When iOS 26 Will Likely Drop
lifehacker.com

The iPhone 17 is nearly upon us. Next week, Apple will take the virtual stage and announce its next generation of smartphones. There will be some big updates, like the ultra-thin iPhone Air and redesigned camera bump on the Pro, as well as some standard revisions, like the iPhone 17.

But while Apple's new iPhones may be quite different, what they all have in common is they'll very likely ship with iOS 26, Apple's latest and greatest iPhone OS, out of the box. Apple has big plans for iOS 26, including the new Liquid Glass redesign, fresh messaging features, and updates to Apple Music, among many other changes. You won't need to buy an iPhone 17 series device in order to run iOS 26, of course: Apple is supporting iPhone 11 and newer with this upcoming update. What this means is that iOS 26 will need to launch at least by the iPhone 17's release date to be in sync with the rest of Apple's iPhones.

When will iOS 26 come out?

No one can predict the future, and there are few leaks and rumors out there spilling the beans on this release date. That said, we can turn to recent history to make an educated guess as to when Apple plans to launch the new OS this year. Wccftech believes Apple will drop iOS 26 one week after announcing its latest iPhones. That's because the company followed that strategy last year: Apple announced the iPhone 16 on Sept. 9, then dropped the official iOS 18 update on Sept. 16. It would make sense if Apple mirrored this timeline with the iPhone 17 and iOS 26, and, if so, iOS 26's release date would also be Sept. 16. The year before, Apple deviated from this pattern slightly, releasing iOS 17 six days after unveiling the iPhone 15 series. One year before that, however, Apple followed the one-week-later pattern for iOS 16.

While anything is possible, I wouldn't be surprised in the slightest to see iOS 26 one week after Apple's event on Tuesday, though historical precedent could allow for a Monday launch as well. Whenever Apple does launch iOS 26, it will likely also release its other software updates, including iPadOS 26, macOS 26, watchOS 26, and tvOS 26, unless the company runs into issues with an update and needs additional time to troubleshoot.

You can run iOS 26 right now

You don't actually have to wait until Apple officially releases iOS 26 to try out all these new features and changes. The company has been beta testing the software since June (though the public beta didn't launch until July), rooting out bugs and glitches to optimize the experience for the general public. There's a risk in installing any beta on your device, since the software is not finished yet. But if you want to experience iOS 26 before most of your friends and family, the option is there. As we're inching closer to the end of the beta cycle, the risks are lower than they were when Apple was still looking out for larger bugs. Still, if you choose to install the beta on your iPhone, make sure to archive your phone to a secure source, like your Mac or PC, as you could lose your data should you need to downgrade back to iOS 18.
-
Porsche and Audi's EVs can now recharge on any Tesla Supercharger in North America
www.engadget.com

Starting September 9, Porsche and Audi will be the latest non-Tesla brands to utilize the Supercharger network. The two automakers announced that some of their owners will get adapters that allow them to charge via the NACS port, which Tesla developed and opened up to other automakers. The rollout comes after the Volkswagen Group, which owns both Porsche and Audi, announced that it would implement NACS compatibility for Volkswagen, Audi, Porsche and Scout Motors in December 2023.

Porsche / Ashton Stan

Porsche is kicking off its NACS adoption with a "soft launch," where existing owners of Taycan and Macan Electric models have to reserve a free NACS to DC adapter with the My Porsche app to connect to the Tesla Supercharger network. During this initial phase, drivers of compatible Porsche EVs have to use the Tesla app at Superchargers, but will eventually be able to charge with the My Porsche app in "the coming months," according to Porsche. Like Porsche, Audi is getting its own branded adapter that will arrive with newer 2025 model year options, including its Q6 e-tron, A6 Sportback e-tron and e-tron GT. Notably, Audi said its Q4 e-tron won't currently have access to Tesla Superchargers.

For Porsche, any Taycan and Macan Electric from model year 2026 onward will include a free NACS adapter. However, Porsche EVs from model year 2024 or older will have to buy the adapter from Porsche's online shop or dealerships, where it will go for $185. Porsche and Audi are also working on software updates to show Tesla Superchargers on their navigation systems. Despite Porsche and Audi now gaining access to the Supercharger network, Volkswagen Group's other subsidiaries, including Lamborghini and Bentley, still haven't committed to adopting NACS.

This article originally appeared on Engadget at https://www.engadget.com/transportation/evs/porsche-and-audis-evs-can-now-recharge-on-any-tesla-supercharger-in-north-america-173333649.html?src=rss
-
In case you want to watch a video tutorial underwater, Ulefone has a waterproof tablet with a projector and a humongous 24,200mAh battery
www.techradar.com

The Ulefone Armor Pad 5 Ultra combines a projector, floodlights, an oversized battery, rugged certifications, and a heavy build into a survivalist-style tablet.
-
Anthropic agrees to pay $1.5 billion to settle authors' copyright lawsuit
www.cnbc.com

Authors had sued Anthropic, claiming the artificial intelligence startup had illegally used their books to train its models.
-
Orchestrating a Bullet Train Explosion
beforesandafters.com

An excerpt from issue #37 of befores & afters magazine, which is all about models and miniatures.

Netflix's Bullet Train Explosion (Shinkansen Daibakuha), from director Shinji Higuchi, tells the story of a bomb threat made to blow up an E5 Series Shinkansen bound from Shin-Aomori to Tokyo if the train slows down below 100 km/h. For key moments of sections of the train crashing and exploding, the production utilized miniatures.

Visual effects supervisor Katsuro Onoue tells befores & afters that the decision to use miniatures, rather than relying solely on digital visual effects, was driven by the desire to create some distinctive and unpredictable imagery, plus tap into Japan's rich history of using models.

"In Japan, miniature techniques have been widely used in films since the 1940s, and the craft has been handed down over generations," says Onoue. "It's one of the techniques that Japanese cinema excels at. Director Higuchi and I were both raised in the world of special effects units and are very familiar with miniature filming techniques, both their strengths and weaknesses."

Planning the scenes

To help plan the crash scenes, director Higuchi drew detailed storyboards. Animatics were also created and edited together. "From the animatics," recounts Onoue, "we selected the shots that needed to be done with miniatures and considered the filming methods and set designs. Because we could predict not only the scale and shooting methods but also many other aspects with high precision, we were also able to clarify the costs."

"Using the animatics as a base, we worked closely with the VFX team and the main unit's art department, clearly defining our respective areas of responsibility ahead of time and holding detailed coordination meetings," discusses Onoue. "For example, in the rescue scene, the rear of the Shinkansen is shown damaged. The main unit had filmed their portion earlier, and the VFX team created 3D data and photographs of that set. We in the miniature team then used that data to recreate a perfectly matched miniature."

"It was important for all teams (VFX, art, and miniatures) to organize and share the information gathered from on-site investigations so that everything, from the main unit set to the miniatures, aligned as closely as possible with the real train," adds Onoue. "We also scheduled production so that filming of the main unit set that would directly connect to the miniature would be completed before our miniature unit began shooting. This was necessary to ensure not only visual continuity in art direction but also consistency in lighting and camera work."

Onoue also notes that the miniature unit's cinematographer served as the B-camera operator for the main unit, and that the team shot elements needed for VFX compositing, such as explosion plates, water splashes, fire effects, and even shots of water being filled into the cushion drums, which the main unit couldn't capture.

Building the miniatures

The team then set out to determine the scale of the miniatures required before building them. "Since the shots to be done with miniatures were crash scenes, we decided that making the miniatures relatively large would increase the chances of success," says Onoue. "After deliberation, we decided to create the Shinkansen cars and elevated track set at a 1/6 scale."

Actual train blueprints from JR East, normally not available to the public, were acquired to help accurately build the train.
"We also conducted research at a Shinkansen maintenance yard where we could observe, up close, areas normally out of view, such as the wheels, undercarriage, pantographs, and paint details," outlines Onoue. "We further enhanced accuracy by photographing and 3D scanning actual train cars."

The train body was built by laser-cutting MDF (medium-density fiberboard) and then assembling the basic shape. The exterior and roof were coated with FRP (fiber-reinforced polymer), after which they were polished and painted. The windows were made of acrylic panels, and the wheels were machined metal parts. Notes Onoue: "A sturdy aluminum frame was attached underneath the body, which connected via two 100mm-diameter iron pillars to a monorail dolly hidden below the set, allowing the miniature to move." Small parts like window frames, seats, and nameplates were 3D-printed.

An important part of the build was representing the damage on the train cars. "We first created the train cars in a pristine, undamaged state, then redecorated them to reflect damage," advises Onoue. "Crushed body parts were shaped by cutting out parts of the MDF frame and layering lead sheets on top. Passenger luggage was also loaded inside."

Other items that would be part of the crash scenes were built, including the tracks and the water cushion drums that feature spectacularly in one scene. Since no pre-made 1/6-scale rails existed, the team welded square iron pipes and L-shaped steel bars to make them look like rails. Around 200 water cushion drums were made and then engineered to break easily. "We printed the top and bottom parts as thin as possible with a 3D printer, wrapped thin plastic sheets into tubes, painted them, and applied weathering manually to each one," details Onoue.

Filming the crash

The miniature train crash shots depict the train sliding laterally at an actual speed of 100 km/h. Converting this to a 1/6 scale meant the miniature needed to move at about 40 km/h during filming. The speed of the camera and the model Shinkansen were synchronized to help achieve the crash. To do that, a monorail was installed beneath the set to move the Shinkansen. Then, a mechanism was devised so that the camera car and the dolly moving the train would run in the same direction and at the same speed. Ultimately, notes Onoue, the power of the camera car itself was used to pull the dolly carrying the miniature Shinkansen.

"In real scale, the derailed car (Car No. 8) continues moving for about 1,200 meters before coming to a stop," continues Onoue. "Reproducing this entire distance with miniatures was unrealistic and inefficient. Upon analyzing the animatics, we realized we could break the scene into three main segments. Each segment required only a 20-meter-long elevated track set. Luckily, most Shinkansen tracks are elevated straight lines with a consistent visual appearance. So, by altering the background in each part with VFX, we could create the illusion of a 1,200-meter journey."

In order to accelerate the camera car to 40 km/h, maintain speed over the 20-meter set, and also allow for safe deceleration, it was determined that a paved run of about 80 meters was required. "This was based on the driver's judgment," says Onoue. "We paved a 4-meter-wide, 100-meter-long stretch of a borrowed open space and installed an H-beam monorail over 80 meters long parallel to it."
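A side note on the conversion above from 100 km/h to roughly 40 km/h: the article doesn't spell out the math, and dividing by the scale alone would give only about 17 km/h. The quoted figure is, however, consistent with the common miniature-photography convention of scaling speeds by the square root of the scale factor so that gravity-driven motion such as falling debris and water reads correctly on screen. Treat the following as an inference about the arithmetic, not a statement from the production:

```latex
% Hedged reconstruction: square-root speed scaling often used for miniature work,
% with scale s = 1/6 and full-scale speed v_full = 100 km/h.
v_{\text{model}} = v_{\text{full}}\sqrt{s} = 100~\text{km/h}\times\sqrt{\tfrac{1}{6}} \approx 41~\text{km/h}
```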
The camera car was equipped with three cameras mounted on an isolator arm, plus three fixed cameras and a drone that flew simultaneously. Additionally, two small cameras were mounted directly onto the miniature Shinkansen. At most, seven cameras were used in a single take.

For the crash and explosion, debris and water became a significant part of the final look. Cushion drums were filled with water to soften the impact. "In actual footage, when these are destroyed in a crash, they produce dramatic sprays of water," notes Onoue. "Although the miniature cushion drums were designed to break easily, the impact alone wasn't sufficient to produce the desired effect, so we decided to use explosives to assist in their destruction."

However, says Onoue, water doesn't scale down like a physical object. "To simulate the water splash, we used a substitute material that would look like water droplets when dispersed. In this case, the material was called Hakuryu Saiseiki, a construction-grade white crushed limestone mixed with salt. We exploded this material using effects-grade explosives timed to the moment of impact."

Meanwhile, the droplets streaming down the train body were created using milky-white water (essentially water mixed with milk) blasted with air cannons. Inside the train, photographic flash bulbs were installed to simulate the flickering of short-circuited electrical sparks. "We also embedded blowers inside the Shinkansen to blow away dust scattered on the tracks," states Onoue. "Ballast, the crushed stone on the tracks, was also blown away using air cannons. As for sparks flying from under the vehicle, we filmed real sparks by grinding iron with a grinder and then composited that footage into the scene."

In the end, the miniature train cars weighed approximately 300 kilograms. This posed a unique problem of designing models that could withstand sudden acceleration and vibration, yet still break apart as needed, despite the conflicting engineering demands. "We prepared two cars specifically to be destroyed and captured the crash in two takes using seven cameras," describes Onoue.

Ultimately, the first take was the one used in the final film. As for the shots involving the cushion drum impacts, those were broken up into three sequences, with each sequence filmed in two to three takes. Additionally, the freight train explosion at the beginning, set off by the culprit, was also done with a 1/8-scale miniature.

Read the full story in the magazine.

The post Orchestrating a Bullet Train Explosion appeared first on befores & afters.
-
[Event] Swordtember 2024 Challenge
blog.cara.app

Ksenia Palchikova @kotartist - Royal Hunt Leader
List by @faith_schaffer on Instagram - shared on Cara by Hayden Stutz @hsartist
S Morris @braverobynart - Swordtember 2024 Prompt List

Sharpen your pencils and prepare for art! September is upon us, and that means it's time to create legendary blades! Ready to wield your artistic skill? Then join the Swordtember challenge and bring your sword designs to life:

About the Event
Swordtember is a community event celebrating the month of September with sword-themed artwork! To participate in the challenge, you are free to use one of the many theme lists that are meant to inspire you, like the official Swordtember themes, Cara's List, or your very own! Whether you draw a sword a day throughout the entire month or participate with a single entry is completely up to you: you can join in as much or as little as you want and can.

How to join Swordtember 2024 on Cara
1. Participate by posting new work you've made following either Cara's Theme List, the official Swordtember themes, or your own ideas.
2. Make sure to mention #swordtember2024 in the description box when posting, and feel free to add the day and theme if you drew inspiration from a list!

Cara Theme List
We encourage everyone to take this chance to explore new ideas, experiment with your art, and take a chance at learning something new within our slashing theme. Most of all, we are excited to see everyone having fun with their art and raving about each other's works in the comments! We hope you can forge fresh friendships, discover new artwork and artists, collaborate, and enjoy Swordtember together. We look forward to seeing your creativity unfold!
- The Cara Team

@volcanojungle - This Year's Swordtember List!
Olga Tereshenko @olgatereshenko - Koschei
Becky Cloonan @beckycloonan - Swordtember sketch dump
Jean Paul Fiction @jeanp - Swortember
Alexandrea @arcanumlenz - Angelic Knight
@hopefulundertone - Swordtember - Renaissance
@volcanojungle - Steampunk Sword (2023)
Kyo Legends @kyoartworks - Gothic
Alex F @stormsong - Swordtember 2023
Ksenia Palchikova @kotartist - Vampire
Ksenia Palchikova @kotartist - Witch
Ksenia Palchikova @kotartist - Inquisitor
Alex Piasecka @alexpiasecka - Swordtember
Devin Yang @devinscribbles - Swordtember 2022
@slumberprince - Path of the wicked
@slumberprince - Selunite Blade
Jennifer Smart @jaesmart - The Veridian Lord
Antoine Gadoud @axiominus - Swordtember of last year!
@slumberprince - coral and glass blade prompts 2023
Nic Rodriguez @nrgalactic - Swordtember 2023 Drawings
Zaba @zabacraft - Alchemist's Sword!
Jacob @ponk - Swordtember '22 showcase
CHICHI @chichitea - Swordtember drawings from 2023
Nick Rodriguez @nrgalactic - Swordtember 2022 Drawings
S Morris @braverobynart - Swordtember Day 14 Arcane & Divine
S Morris @braverobynart - Swordtember Day 9, Arcane & Nature
Jennifer Smart @jaesmart - Terrible be the Elder Gods in Their wrath
@augichii - Doom Royal Heir
@augichii - HorrorKnightErrant
@dgirael - Swordtember 2022
Leona Florianova @lostfool - Swordtember 2022 - Void, Rot
@shunyah - Swordtember 2023 2/7
Po @hlifft - Swordtember 2023 (2)
Po @hlifft - Swordtember 2023 (2)
Po @hlifft - Swordtember 2023 (2)
Jet K @jetkadett - Swordtember 2022