• Hot Topics at Hot Chips: Inference, Networking, AI Innovation at Every Scale All Built on NVIDIA
    blogs.nvidia.com
AI reasoning, inference and networking will be top of mind for attendees of next week's Hot Chips conference.

A key forum for processor and system architects from industry and academia, Hot Chips, running Aug. 24-26 at Stanford University, showcases the latest innovations poised to advance AI factories and drive revenue for the trillion-dollar data center computing market.

At the conference, NVIDIA will join industry leaders including Google and Microsoft in a tutorial session on Sunday, Aug. 24, that discusses designing rack-scale architecture for data centers.

In addition, NVIDIA experts will present at four sessions and one tutorial detailing how:

- NVIDIA networking, including the NVIDIA ConnectX-8 SuperNIC, delivers AI reasoning at rack- and data-center scale. (Featuring Idan Burstein, principal architect of network adapters and systems-on-a-chip at NVIDIA)
- Neural rendering advancements and massive leaps in inference, powered by the NVIDIA Blackwell architecture, including the NVIDIA GeForce RTX 5090 GPU, provide next-level graphics and simulation capabilities. (Featuring Marc Blackstein, senior director of architecture at NVIDIA)
- Co-packaged optics (CPO) switches with integrated silicon photonics, built with light-speed fiber rather than copper wiring to send information faster and with less power, enable efficient, high-performance, gigawatt-scale AI factories. The talk will also highlight NVIDIA Spectrum-XGS Ethernet, a new scale-across technology for unifying distributed data centers into AI super-factories. (Featuring Gilad Shainer, senior vice president of networking at NVIDIA)
- The NVIDIA GB10 Superchip serves as the engine within the NVIDIA DGX Spark desktop supercomputer. (Featuring Andi Skende, senior distinguished engineer at NVIDIA)

It's all part of how NVIDIA's latest technologies are accelerating inference to drive AI innovation everywhere, at every scale.

NVIDIA Networking Fosters AI Innovation at Scale

AI reasoning, in which artificial intelligence systems analyze and solve complex problems through multiple AI inference passes, requires rack-scale performance to deliver optimal user experiences efficiently.

In data centers powering today's AI workloads, networking acts as the central nervous system, connecting all the components (servers, storage devices and other hardware) into a single, cohesive, powerful computing unit.

NVIDIA ConnectX-8 SuperNIC

Burstein's Hot Chips session will dive into how NVIDIA networking technologies, particularly NVIDIA ConnectX-8 SuperNICs, enable high-speed, low-latency, multi-GPU communication to deliver market-leading AI reasoning performance at scale.

As part of the NVIDIA networking platform, NVIDIA NVLink, NVLink Switch and NVLink Fusion deliver scale-up connectivity, linking GPUs and compute elements within and across servers for ultra-low-latency, high-bandwidth data exchange.

NVIDIA Spectrum-X Ethernet provides the scale-out fabric to connect entire clusters, rapidly streaming massive datasets into AI models and orchestrating GPU-to-GPU communication across the data center.
Spectrum-XGS Ethernet scale-across technology extends the extreme performance and scale of Spectrum-X Ethernet to interconnect multiple, distributed data centers to form AI super-factories capable of giga-scale intelligence.

Connecting distributed AI data centers with NVIDIA Spectrum-XGS Ethernet.

At the heart of Spectrum-X Ethernet, CPO switches push the limits of performance and efficiency for AI infrastructure at scale; they will be covered in detail by Shainer in his talk.

NVIDIA GB200 NVL72, an exascale computer in a single rack, features 36 NVIDIA GB200 Superchips, each containing two NVIDIA B200 GPUs and an NVIDIA Grace CPU, interconnected by the largest NVLink domain ever offered, with NVLink Switch providing 130 terabytes per second of low-latency GPU communications for AI and high-performance computing workloads.

An NVIDIA rack-scale system.

Built with the NVIDIA Blackwell architecture, GB200 NVL72 systems deliver massive leaps in reasoning inference performance.

NVIDIA Blackwell and CUDA Bring AI to Millions of Developers

The NVIDIA GeForce RTX 5090 GPU, also powered by Blackwell and to be covered in Blackstein's talk, doubles performance in today's games with NVIDIA DLSS 4 technology.

NVIDIA GeForce RTX 5090 GPU

It can also add neural rendering features for games to deliver up to 10x performance, 10x footprint amplification and a 10x reduction in design cycles, helping enhance realism in computer graphics and simulation. This offers smooth, responsive visual experiences at low energy consumption and improves the lifelike simulation of characters and effects.

NVIDIA CUDA, the world's most widely available computing infrastructure, lets users deploy and run AI models using NVIDIA Blackwell anywhere.

Hundreds of millions of GPUs run CUDA across the globe, from NVIDIA GB200 NVL72 rack-scale systems to GeForce RTX and NVIDIA RTX PRO-powered PCs and workstations, with NVIDIA DGX Spark, powered by NVIDIA GB10 and discussed in Skende's session, coming soon.

From Algorithms to AI Supercomputers Optimized for LLMs

NVIDIA DGX Spark

Delivering powerful performance and capabilities in a compact package, DGX Spark lets developers, researchers, data scientists and students push the boundaries of generative AI right at their desktops and accelerate workloads across industries.

As part of the NVIDIA Blackwell platform, DGX Spark brings support for NVFP4, a low-precision numerical format that enables efficient agentic AI inference, particularly of large language models (LLMs). Learn more about NVFP4 in this NVIDIA Technical Blog.

Open-Source Collaborations Propel Inference Innovation

NVIDIA accelerates several open-source libraries and frameworks to accelerate and optimize AI workloads for LLMs and distributed inference. These include NVIDIA TensorRT-LLM, NVIDIA Dynamo, TileIR, Cutlass, the NVIDIA Collective Communication Library and NIX, which are integrated into millions of workflows.

Allowing developers to build with their framework of choice, NVIDIA has collaborated with top open framework providers to offer model optimizations for FlashInfer, PyTorch, SGLang, vLLM and others.

Plus, NVIDIA NIM microservices are available for popular open models like OpenAI's gpt-oss and Llama 4, making it easy for developers to operate managed application programming interfaces with the flexibility and security of self-hosting models on their preferred infrastructure.

Learn more about the latest advancements in inference and accelerated computing by joining NVIDIA at Hot Chips.
  • DIGGING DEEPLY INTO VFX FOR THE LIVE-ACTION HOW TO TRAIN YOUR DRAGON
    www.vfxvoice.com
By TREVOR HOGG

Images courtesy of Universal Studios.

While Shrek launched the first franchise for DreamWorks Animation, How to Train Your Dragon has become such a worthy successor that the original director, Dean DeBlois, has returned to do a live-action adaptation of a teenage Viking crossing the social divide between humans and flying beasts by befriending a Night Fury. Given that the fantasy world does not exist, there is no shortage of CG animation provided by Christian Manz and Framestore, in particular with scenes featuring Toothless and the Red Death. Framestore facilities in London, Montreal, Melbourne and Mumbai, as well as an in-house team, provided concept art, visual development, previs, techvis, postvis and 1,700 shots to support the cast of Mason Thames, Nico Parker, Gerard Butler, Nick Frost, Gabriel Howell, Bronwyn James and Nick Cornwall.

A full-size puppet of Toothless was constructed, minus the wings, that could be broken down into various sections to get the proper interaction with Hiccup.

Even though the animated features were not treated as glorified previs by the production, the trilogy was the visual starting point for the live-action adaptation. "Dean's challenge from the beginning was, 'If you can come up with better shots or work, that's great. If you can't come up with better shots, then it will be the one from the animated movie,'" states VFX Supervisor Manz. "When it came to a few key things like flying and reestablishing what that would look like in the real world, we began to deviate." Elevating the complexity of the visual effects work was the sheer amount of interaction between digital creatures and live-action cast. "What I hoped is that people would watch it and see real human beings flying dragons," Manz notes. "You're emotionally more connected because you're seeing it for real. The animation is amazing and emotional, but we wanted to try to elevate that in terms of storytelling, emotion and wish fulfillment."

Despite having significant set builds, digital extensions were still required to achieve the desired scope for Berk.

The nature of live-action filmmaking presented limitations that do not exist in animation. "Glen McIntosh, our Animation Supervisor, said from the beginning that everything is going to move slower [in live-action than in animation]," Manz remarks. "You watch Stoick pick up Hiccup at the end of the animated movie, and in about three frames he's grabbed and flung him over his shoulder. In our version, Gerard Butler has to kneel down, shuffle over to where Mason Thames is and lift him up. All of that takes more time." The sizes of the dragons also had to be more consistent. Manz comments, "We all had a go at ribbing Dean [DeBlois, director] about continuity because every dragon changed in size throughout the original film. It works and you believe it. However, here we had to obey the size and physics to feel real." An extensive amount of time was spent during pre-production discovering the performances of the dragons. "Because we were literally inhabiting a real world, Dominic Watkins was building sets, so we had to find out how big they are, how fast they would move, and their fire.
It was important we figured that out ahead of time."

One of the hardest scenes to recreate and animate was Hiccup befriending Toothless.

Retaining the cartoon stylization of Toothless was important while also taking advantage of the photorealism associated with live-action. "Three months before we officially began working on the film, Peter Cramer, the President of Universal Pictures, wanted to know that Toothless would work," Manz explains. "We did visual development but didn't concept him because we already had the animated one. From there we did sculpting in ZBrush, painting in Photoshop and rendering in Blender. We spent three months pushing him around. I went out to woods nearby with a camera, HDRI package, color chart and silver ball to try to shoot some background photographs that we could then put him into, rather than sticking him in a gray room. I even used my son as a stand-in for Hiccup to see what Toothless looked like against a real human. We looked at lizards to horses to snakes to panthers to bats for the wings. The studio wanted him big, so he is a lot bigger than the animated version; his head compared to his body is a lot smaller, head-to-neck proportion is smaller, his eyes are a smaller proportion compared to the animated one, and the wings are much bigger. We ended up with a turntable, ran some animation through Blender, and came up with a close-up of Toothless where he's attached to the rope, which proved to the studio it would work."

Other recreations were the sequences that take place in the training arena.

Hiccup befriending Toothless was the sequence that took the longest to develop and produce. "During the gestation of that, we slowly pulled it back because when you watch animals in the real world, when they want something, rather than moving around and doing lots of stuff, they'll just look at you and have simple poses," Manz notes. "That simplicity, but with lots of subtlety, was difficult." To get the proper interaction, there was a puppet on set for Toothless. "We had a simple puppet from nose to tail for him, apart from the wings, that could be broken up. For that scene, it would only be Tom Wilson [Creature Puppetry Supervisor] and the head at the right height. We did previs animation for the whole sequence. Framestore has an AR iPad tool called Farsight, which you could load up, put the right lens on, and both us, Dean and camera could look to make sure that Toothless was framed correctly. We could show Mason what he was looking at and use it to make sure that Tom was at the right height and angle. I'm a firm believer that you need that interaction. Anything where an actor is just pretending never works."

The live-action version was able to elevate the flying scenes.

Red Death was so massive that separate sets were constructed to represent different parts of her body. "We had simple forms, but based off our models, the art department built us a mouth set with some teeth. We had an eye set that provided something for Snotlout [Gabriel Howell] to hang off of and bash the eye, which had the brow attached to it.
Then we had like a skate ramp, which was the head and horn, to run up," Manz reveals. "When Astrid [Nico Parker] is chopping off teeth, she is not hitting air. We had teeth that could be slotted in and out based on the shots that were needed. The set could tip as well, so you could be teetered around." Scale was conveyed through composition. "We made it a thing never to frame Red Death because she was so big, and that was part of making her look big. One of the challenges of animating her is, when flying she looks like she's underwater because of having to move so slowly. Her wingtips are probably going 100 miles per hour, but they're so huge and covering such a large area of space that having Toothless and rocks falling in the shot gave it scale."

Fire was a principal cast member. "I called up YouTube footage of a solid rocket booster being tested last year, strapped to the ground and lit," Manz states. "The sheer power of the force of that fire, and it was done in a desert, kicked up lots of dust. We used that as the reference for her fire. Another unique thing in this world is that each dragon has a different fire. Her fire felt like it should be massive. Toothless has purple fire. Deadly Nadder has magnesium fire. We have lava slugs from Gronckle. For a number of those, we had Tez Palmer and his special effects team creating stuff on set that had those unique looks we could start with and add to. When we saw the first take of the Red Death blasting the boats, we were like, 'That's going to look amazing!' The jets of fire would always involve us because they had to be connected to the dragon. The practical fire added an extra layer of fun to try to work out."

An aerial view of the training arena showcases a maze configuration.

Another significant element was flying. "I felt the more analogue we could be, the more real it could look, but it still had to be driven by the movement and shapes of our dragons," Manz remarks. "We worked with Alistair Williams' [Special Effects Supervisor] motion control team and used their six-axis rig, which can carry massive planes and helicopters, and placed an animatronic buck of the head, neck and shoulders of each dragon on top of that. We designed flight cycles for the dragons, and as actors were cast, we digitally worked out the scale and constraints of having a person on them. When the special effects came on, we passed over the models, and they returned files in Blender, overlaying our animation with their rig. The rigs were built and shipped out to Belfast one by one. There were no motion control cameras. I had simple techvis of what the camera would be doing and would say, 'This bit we need to get. That bit will always be CG.' We would find the shot on the day. The six-axis rigs could be driven separately from animation, but also be driven by a Wahlberg remote control. You could blend between the animation and remote control or different flight cycles.
The aim was that Mason was not just on a fairground ride but is controlling, or is being controlled by, this beast he is riding; that was a freeing process."

A character that required a number of limb replacement shots was Gobber, who is missing an arm and a leg.

Not entirely framing the Red Death in the shot was a way to emphasize the enormous size of the dragon.

A 360-degree set was physically constructed for the training arena, and was built to full height. "We didn't have the roof and had a partial rock wall, but the whole thing was there. We were doing previs and designing alongside Dominic Watkins building the training arena. One of the big things was how fast is the Nadder going to run and how big does this arena have to be? We were also working with Roy Taylor [Stunt Coordinator], who did some stuntvis that was cut into the previs, and then started building our sequence. I ended up with a literal plan of which fences had to be real and what the actions were. It was shot sequentially so we could strike fences as we went; some fences would become CG. That was the first thing we shot, and it snowed! We had ice on the ground that froze the fences to the ground. They had a flamethrower out melting snow. We had short shooting days, so some of it had to be shot as the sun went down. Bill Pope would shoot closer and closer, which meant we could replace bits of environment and still make it look like it was day further away. There was a lot in there to do."

Each dragon was given a distinct fire that was a combination of practical and digital elements.

Live-action actors do not move as quickly as animated characters, adding to the screentime.

Environments were important for the flying sequences. "Flying was going to be us or plates, and I wanted to capture that material early, so we were recceing within two months of starting, back in the beginning of 2023," Manz states. "We went to the Faroe Islands, Iceland and Scotland, and Dean was blown away because he had never been on a recce like that before. All of the landscapes were astonishing. We picked the key places that Dean and Dominic liked and went back with Jeremy Braben of Helicopter Film Services and Dominic Ridley of Clear Angle Studios to film plates for three weeks. We caught 30 different locations, full-length canyons and whole chunks of coastline. My gut told me that what we wanted to do was follow Toothless and the other dragons, which meant that the backgrounds would be digital. Creating all of those different environments was one of the biggest challenges of the whole show, even before we shot the strung-out shots of Toothless flying alone around Berk that made everyone go, 'That could look cool.' It was using all of that visual reference in terms of the plates we shot, the actual date and the stuff we learned. There were birds everywhere, the color of the water was aquamarine in Faroe, and you could get the light for real."

Using the practical set as a base, the entire environment for the training arena was digitally rebuilt.

Wind assisted in conveying a sense of speed.
"No matter how much wind you blow at people for real, you can never get enough," Manz observes. "They were using medically filtered compressed air so we could film without goggles. Terry Bamber's [1st Assistant Director: Gimbal Unit] team rigged those to the gimbals and had additional ones blowing at bits of costume and boots. For a lot of the takes, we had to go again because we needed to move more; clothes don't move as much as you think they're going to. Framestore built some incredible digital doubles that, through the sequence, are either used as a whole or in part. We utilized much of the live-action as the source, but there's a whole lot going on to create that illusion and bond it to the dragon and background."

Having smaller elements in the frame assisted in conveying the enormous size of the Red Death.

Missing an arm and a leg is Gobber (Nick Frost). "Dean and I were keen not to have the long and short arm thing. Our prop modeler built the arm so it could be the actual hammer or stone, and Nick's arm would be inside of that with a handle inside. He had a brace on his arm, then we had the middle bit we had to replace. Most of the time, that meant we could use the real thing, but the paint-out was a lot of work. Framestore built a partial CG version of him so we could replace part of his body where his arm crossed. Like with Nick, the main thing with Hiccup was to try to get almost a ski boot on Mason so he couldn't bend his ankle. The main thing was getting his body to move in the correct way. In the end, Nick came up to me one day and asked, 'Could I just limp?' We got Dean to speak to him sometimes when he would forget to limp. You can't fix that stuff. Once all of that body language is in there, that's what makes it believable. The Gobber work is some of the best work. You don't notice it because it feels real, even though it's a lot of shots."
  • Revealing 4 Middle East and North Africa (MENA) Hero Project games coming to PlayStation
    blog.playstation.com
Over the past year, we've had the privilege of connecting with talented game developers from across the Middle East and North Africa (MENA) through the PlayStation MENA Hero Project. This initiative was launched with a simple goal: to identify and support original voices and empower creators from emerging regions to tell their stories through games.

Today, we're thrilled to announce the first cohort of titles supported under the MENA Hero Project, each one a unique reflection of the creativity and spirit of developers in the region.

The First Cohort of MENA Hero Project Games

Red Bandits | Developer: Team Agenda | Country: Saudi Arabia

Red Bandits is a fast-paced robbery action game set in a hyper-capitalist age where one company rules the world through complete monopoly. In this world, a newly formed thieves' organization known as the Red Bandits emerges to challenge the system and spark a rebellion. You play as Stutt, a seasoned old thief with a stuttering condition and a mysterious past he can't fully remember.

Break into the company's fortified floors, take down the powerful board of directors, and bring back a de-monopolized world order. With a dynamic cover system, evolving heists, and a vibrant comrades' hideout, Red Bandits blends fast, stylish action with a deeply personal story of rebellion.

Robbing its way to PS5 and PC.

Enci's Solution | Developer: Dark Emerald | Country: United Arab Emirates

Exiled by humans centuries ago, the Aeons are confined to the desolate underground. Their village is safe, but beyond the gates, danger lurks at every corner, and not everyone can be trusted. No one has left the village before, except for Jiwe.

Inspired by techno-dystopian aesthetics, Enci's Solution is a hand-painted, 2.5D narrative platformer following the story of Jiwe, a young Aeon venturing out into the unknown in an attempt to save his dearest friend. Stumbling upon Enci, a lost encyclopedic robot who finds himself far from home, the two form an unusual bond and set out together to the surface of planet Regalia.

Play through 40+ levels with increasing difficulty and explore uncharted territories. Find collectibles and uncover the secrets they carry about the world and its odd inhabitants. Keep an eye out for hidden levels and challenge yourself to think outside the box.

Will you find your solution on PS5 and PC?

The Perfect Run | Developer: Lanterns Studios | Country: Tunisia

Save the world or blow it up yourself, one loop at a time! The Perfect Run is an action-adventure RPG where the player controls Quicksave, a time-traveling hero caught in an epic struggle between supervillain gangs, marketed superheroes, and a powerful megacorporation for control of the city of New Rome.

With three days to decide the city's fate, the player can go back in time to decide their perfect ending, if they have the skill to do so! Interact with NPCs and unlock new dialogue options thanks to information collected in earlier loops, join a faction in one route and fight them the next, bend time itself in epic battles against superpowered bosses, collect the best upgrades before the reset, and unlock the city's secrets in this memorable superhero adventure.

Find the perfect run on PS5 and PC.

A Cat's Manor | Developer: Happiest Dark Corner | Country: Bahrain

You awaken trapped in a house infested with spiders and inhabited by an eccentric family. At the end of your tail, you discover a crudely stitched hand.
With no memory of who you are or how you got here, you let curiosity guide you forward.

A Cat's Manor is an atmospheric adventure that blends puzzles, combat, crafting, and rhythm-based music challenges. Use your wits to escape the manor. Investigate your surroundings, solve puzzles, fight your way through deadly encounters, or outsmart your foes and avoid trouble.

Inspect, observe, listen, and feel your surroundings for clues and cues, immersing yourself with 3D audio and PS5 DualSense controller features. Uncover the secrets of the manor and unravel what the family is hiding.

Creeping its way to PS5 and PC near you.

About the MENA Hero Project

The MENA Hero Project is the newest chapter in SIE's global Hero Project family, joining India and China in our mission to discover and nurture the next generation of original game creators. We believe that great games can come from anywhere. Through the MENA Hero Project, we're committed to unlocking the region's creative potential, supporting locally inspired experiences with the power to captivate players around the world.
  • Watching Doctor Who with my American friends reminded me what I love about it
    www.polygon.com
Fast forward to 2025. I'm 28 years old and preparing for a summer of sci-fi with Doctor Who and Andor. Doctor Who received a new lease of life (and a whole lot of dough alongside it) in 2022 when Disney Plus became its new international home. For a show that's called "One Piece for British people" with its eye-watering number of episodes dating back to the series premiere in 1963, the fact that Doctor Who was suddenly available to a much wider audience was exciting. (As an aside, I'd also called it the summer of Varada Sethu, who appears in both Andor and Doctor Who, as the latest companion to Ncuti Gatwa's 15th Doctor. I couldn't wait to see how these shows would feature one of my favorite actors. Spoiler: This would not work out for me. Varada Sethu, I'm so sorry.)
  • A Week In The Life Of An AI-Augmented Designer
    smashingmagazine.com
Artificial Intelligence isn't new, but in November 2022, something changed. The launch of ChatGPT brought AI out of the background and into everyday life. Suddenly, interacting with a machine didn't feel technical; it felt conversational.

Just this March, ChatGPT overtook Instagram and TikTok as the most downloaded app in the world. That level of adoption shows that millions of everyday users, not just developers or early adopters, are comfortable using AI in casual, conversational ways. People are using AI not just to get answers, but to think, create, plan, and even to help with mental health and loneliness. In the past two and a half years, people have moved through the Kübler-Ross Change Curve, only instead of grief, it's AI-induced uncertainty. UX designers, like Kate (who you'll meet shortly), have experienced something like this:

- Denial: "AI can't design like a human; it won't affect my workflow."
- Anger: "AI will ruin creativity. It's a threat to our craft."
- Bargaining: "Okay, maybe just for the boring tasks."
- Depression: "I can't keep up. What's the future of my skills?"
- Acceptance: "Alright, AI can free me up for more strategic, human work."

As designers move into experimentation, they're not asking, "Can I use AI?" but "How might I use it well?".

Using AI isn't about chasing the latest shiny object but about learning how to stay human in a world of machines, and using AI not as a shortcut, but as a creative collaborator. It isn't about finding, bookmarking, downloading, or hoarding prompts, but experimenting and writing your own prompts.

To bring this to life, we'll follow Kate, a mid-level designer at a FinTech company, navigating her first AI-augmented design sprint. You'll see her ups and downs as she experiments with AI, tries to balance human-centered skills with AI tools, when she relies on intuition over automation, and how she reflects critically on the role of AI at each stage of the sprint.

The next two planned articles in this series will explore how to design prompts (Part 2) and guide you through building your own AI assistant (aka CustomGPT; Part 3). Along the way, we'll spotlight the designerly skills AI can't replicate, like curiosity, empathy, critical thinking, and experimentation, that will set you apart in a world where automation is easy, but people and human-centered design matter even more.

Note: This article was written by a human (with feelings, snacks, and deadlines). The prompts are real, the AI replies are straight from the source, and no language models were overworked, just politely bossed around. All em dashes are the handiwork of MS Word's autocorrect, not AI. Kate is fictional, but her week is stitched together from real tools, real prompts, real design activities, and real challenges designers everywhere are navigating right now. She will primarily be using ChatGPT, reflecting the popularity of this jack-of-all-trades AI as the place many start their AI journeys before branching out. If you stick around to the end, you'll find other AI tools that may be better suited for different design sprint activities. Due to the pace of AI advances, your outputs may vary (YOMV), possibly by the time you finish reading this sentence.

Cautionary Note: AI is helpful, but not always private or secure. Never share sensitive, confidential, or personal information with AI tools, even the helpful-sounding ones.
When in doubt, treat it like a coworker who remembers everything and may not be particularly good at keeping secrets.

Prologue: Meet Kate (As She Preps For The Upcoming Week)

Kate stared at the digital mountain of feedback on her screen: transcripts, app reviews, survey snippets, all waiting to be synthesized. Deadlines loomed. Her calendar was a nightmare. Meanwhile, LinkedIn was ablaze with AI hot takes and success stories. Everyone seemed to have found their AI groove except her. She wasn't anti-AI. She just hadn't figured out how it actually fit into her work. She had tried some of the prompts she saw online, played with some AI plugins and extensions, but it felt like an add-on, not a core part of her design workflow.

Her team was focusing on improving financial confidence for Gen Z users of their FinTech app, and Kate planned to use one of her favorite frameworks: the Design Sprint, a five-day, high-focus process that condenses months of product thinking into a single week. Each day tackles a distinct phase: Understand, Sketch, Decide, Prototype, and Test. All designed to move fast, make ideas tangible, and learn from real users before making big bets.

This time, she planned to experiment with a very lightweight version of the design sprint, almost solo-ish, since her PM and engineer were available for check-ins and decisions, but not present every day. That gave her both space and a constraint, and made it the perfect opportunity to explore how AI could augment each phase of the sprint. She decided to lean on her designerly behavior of experimentation and learning and integrate AI intentionally into her sprint prep, using it as both a creative partner and a thinking aid. Not with a rigid plan, but with a working hypothesis that AI would at the very least speed her up, if nothing else. She wouldn't just be designing and testing a prototype, but prototyping and testing what it means to design with AI, while still staying in the driver's seat.

Follow Kate along her journey through her first AI-powered design sprint: from curiosity to friction and from skepticism to insight.

Monday: Understanding The Problem (aka: Kate Vs. Digital Pile Of Notes)

The first day of a design sprint is spent understanding the user, their problems, business priorities, and technical constraints, and narrowing down the problem to solve that week.

This morning, Kate had transcripts from recent user interviews and customer feedback from the past year from app stores, surveys, and their customer support center. Typically, she would set aside a few days to process everything, coming out with glazed eyes and a few new insights. This time, she decided to use ChatGPT to summarize that data: "Read this customer feedback and tell me how we can improve financial literacy for Gen Z in our app."

ChatGPT's outputs were underwhelming, to say the least. Disappointed, she was about to give up when she remembered an infographic about good prompting that she had emailed herself. She updated her prompt based on those recommendations:

- Defined a role for the AI (product strategist),
- Provided context (user group and design sprint objectives), and
- Clearly outlined what she was looking for (financial-literacy-related patterns in pain points, blockers, confusion, and lack of confidence; synthesis to identify top opportunity areas).

By the time she Aero-pressed her next cup of coffee, ChatGPT had completed its analysis, highlighting blockers like jargon, lack of control, fear of making the wrong choice, and the need for blockchain wallets. Wait, what?
That last one felt off. Kate searched her sources and confirmed her hunch: AI hallucination! Despite the best of prompts, AI sometimes makes things up based on trendy concepts from its training data rather than your actual data. Kate updated her prompt with constraints to make ChatGPT use only the data she had uploaded, and to cite examples from that data in its results. 18 seconds later, the updated results did not mention blockchain or other unexpected results. By lunch, Kate had the makings of a research summary that would otherwise have taken much, much longer, and a whole lot of caffeine.

That afternoon, Kate and her product partner plotted the pain points on the Gen Z app journey. The emotional mapping highlighted the most critical moment: the first step of a financial decision, like setting a savings goal or choosing an investment option. That was when fear, confusion, and lack of confidence held people back. AI synthesis combined with human insight helped them define the problem statement as: "How might we help Gen Z users confidently take their first financial action in our app, in a way that feels simple, safe, and puts them in control?"

Kate's Reflection

As she wrapped up for the day, Kate jotted down her reflections on her first day as an AI-augmented designer:

There's nothing like learning by doing. I've been reading about AI and tinkering around, but took the plunge today. Turns out AI is much more than a tool, but I wouldn't call it a co-pilot. Yet. I think it's like a sharp intern: it has a lot of information, is fast, eager to help, but it lacks context, needs supervision, and can surprise you. You have to give it clear instructions, double-check its work, and guide and supervise it. Oh, and maintain boundaries by not sharing anything I wouldn't want others to know.

Today was about listening to users, to patterns, to my own instincts. AI helped me sift through interviews fast, but I had to stay curious to catch what it missed. Some quotes felt too clean, like the edges had been smoothed over. That's where observation and empathy kicked in. I had to ask myself: what's underneath this summary?

Critical thinking was the designerly skill I had to exercise most today. It was tempting to take the AI's synthesis at face value, but I had to push back by re-reading transcripts, questioning assumptions, and making sure I wasn't outsourcing my judgment. Turns out, the thinking part still belongs to me.

Tuesday: Sketching (aka: Kate And The Sea Of OK-ish Ideas)

Day 2 of a design sprint focuses on solutions, starting by remixing and improving existing ideas, followed by people sketching potential solutions.

Optimistic, yet cautious after her experience yesterday, Kate started thinking about ways she could use AI today while brewing her first cup of coffee. By cup two, she was wondering if AI could be a creative teammate. Or a creative intern, at least. She decided to ask AI for a list of relevant UX patterns across industries. Unlike yesterday's complex analysis, Kate was asking for inspiration, not insight, which meant she could use a simpler prompt: "Give me 10 unique examples of how top-rated apps reduce decision anxiety for first-time users from FinTech, health, learning, or ecommerce."

She received her results in a few seconds, but there were only 6, not the 10 she asked for. She expanded her prompt for examples from a wider range of industries. While reviewing the AI examples, Kate realized that one had accessibility issues. To be fair, the results met Kate's ask, since she had not specified accessibility considerations. The lesson from both days was the same: spell out your role, context, task, and constraints (see the scripted sketch below).
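For readers who want to script that pattern rather than paste it into a chat window, here is a minimal sketch using the OpenAI Python SDK. The model name, file name, and prompt wording are illustrative assumptions, not the exact prompts from Kate's sprint.

```python
# Minimal sketch: the role-context-task-constraints prompt pattern, scripted.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY env var.
# The model name, file path, and prompt wording are illustrative placeholders.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Grounding data: the feedback export Kate uploaded (hypothetical local file).
feedback = Path("genz_feedback_export.txt").read_text(encoding="utf-8")

prompt = f"""
Role: You are a product strategist at a FinTech company.
Context: We are running a design sprint to improve financial confidence
for Gen Z users of our app.
Task: Identify financial-literacy-related patterns in pain points,
blockers, confusion, and lack of confidence, then synthesize the top
opportunity areas.
Constraints:
- Use ONLY the feedback data below; do not add outside knowledge.
- Cite a verbatim example from the data for every pattern you report.
- Flag accessibility concerns wherever the feedback mentions them.

Feedback data:
{feedback}
"""

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model; illustrative choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # keep synthesis conservative rather than creative
)
print(response.choices[0].message.content)
```

The same template works in the chat UI: everything except the API call is just prompt text, and the grounding and citation constraints are what kept the blockchain wallets out of Kate's second pass.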
She then went pre-AI and brainstormed examples with her product partner, coming up with a few unique local examples.

Later that afternoon, Kate went full human during Crazy 8s, putting a marker to paper and sketching 8 ideas in 8 minutes to rapidly explore different directions. Wondering if AI could live up to its generative nature, she uploaded pictures of her top 3 sketches and prompted AI to act as a product design strategist experienced in Gen Z behavior, digital UX, and behavioral science, gave it context about the problem statement and stage in the design sprint, and explicitly asked AI the following:

- Analyze the 3 sketch concepts and identify core elements or features that resonated with the goal.
- Generate 5 new concept directions, each of which should:
  - Address the original design sprint challenge.
  - Reflect Gen Z design language, tone, and digital behaviors.
  - Introduce a unique twist, remix, or conceptual inversion of the ideas in the sketches.
- For each concept, provide:
  - Name (e.g., "Monopoly Mode," "Smart Start");
  - 1-2 sentence concept summary;
  - Key differentiator from the original sketches;
  - Design tone and/or behavioral psychology technique used.

The results included ideas that Kate and her product partner hadn't considered, including a progress bar that started at 20% (to build confidence), and a sports-like stock bracket for first-time investors. Not bad, thought Kate, as she cherry-picked elements, combined and built on these ideas in her next round of sketches.

By the end of the day, they had a diverse set of sketched solutions, some original, some AI-augmented, but all exploring how to reduce fear, simplify choices, and build confidence for Gen Z users taking their first financial step. With five concept variations and a few rough storyboards, Kate was ready to start converging on day 3.

Kate's Reflection

Today was creatively energizing yet a little overwhelming! I leaned hard on AI to act as a creative teammate. It delivered a few unexpected ideas and remixed my Crazy 8s into variations I never would've thought of!

It also reinforced the need to stay grounded in the human side of design. AI was fast, too fast sometimes. It spit out polished-sounding ideas that sounded right, but I had to slow down, observe carefully, and ask: Does this feel right for our users? Would a first-time user feel safe or intimidated here?

Critical thinking helped me separate what mattered from what didn't. Empathy pulled me back to what Gen Z users actually said and kept their voices in mind as I sketched. Curiosity and experimentation were my fuel. I kept tweaking prompts, remixing inputs, and seeing how far I could stretch a concept before it broke. Visual communication helped translate fuzzy AI ideas into something I could react to and, more importantly, test.

Wednesday: Deciding (aka Kate Tries To Get AI To Pick A Side)

Design sprint teams spend Day 3 critiquing each of their potential solutions to shortlist those that have the best chance of achieving their long-term goal. The winning scenes from the sketches are then woven into a prototype storyboard.

Design sprint Wednesdays were Kate's least favorite day. After all the generative energy of Sketching Tuesday, today she would have to decide on one clear solution to prototype and test. She was unsure if AI would be much help with judging tradeoffs or narrowing down options, and it wouldn't be able to critique like a team. Or could it?

Kate reviewed each of the five concepts, noting strengths, open questions, and potential risks.
Curious about how AI would respond, she uploaded images of three different design concepts and prompted ChatGPT for strengths and weaknesses. AI's critique was helpful in summarizing the pros and cons of different concepts, including a few points she had not considered, like potential privacy concerns. She asked a few follow-up questions to confirm the actual reasoning.

Wondering if she could simulate a team critique by prompting ChatGPT differently, Kate asked it to use the Six Thinking Hats technique. The results came back dense, overwhelming, and unfocused. The AI couldn't prioritize, and it couldn't see the gaps Kate instinctively noticed: friction in onboarding, misaligned tone, unclear next steps. In that moment, the promise of AI felt overhyped. Kate stood up, stretched, and seriously considered ending her experiments with the AI-driven process. But she paused. Maybe the problem wasn't the tool. Maybe it was how she was using it. She made a note to experiment when she wasn't on a design sprint clock.

She returned to her sketches, this time laying them out on the wall. No screens, no prompts. Just markers, sticky notes, and Sharpie scribbles. Human judgment took over. Kate worked with her product partner to finalize the solution to test on Friday and spent the next hour storyboarding the experience in Figma.

Kate re-engaged with AI as a reviewer, not a decider. She prompted it for feedback on the storyboard and was surprised to see it spit out detailed design, content, and micro-interaction suggestions for each of the steps of the storyboarded experience. A lot of food for thought, but she'd have to judge what mattered when she created her prototype. But that wasn't until tomorrow!

Kate's Reflection

AI exposed a few of my blind spots in the critique, which was good, but it basically pointed out that multiple options could work. I had to rely on my critical thinking and instincts to weigh options logically, emotionally, and contextually in order to choose a direction that was the most testable and aligned with the user feedback from Day 1.

I was also surprised by the suggestions it came up with while reviewing my final storyboard, but I will need a fresh pair of eyes and all the human judgement I can muster tomorrow.

Empathy helped me walk through the flow like I was a new user. Visual communication helped pull it all together by turning abstract steps into a real storyboard for the team to see instead of imagining.

TO DO: Experiment with prompting around the Six Thinking Hats for different perspectives.

Thursday: Prototype (aka Kate And Faking It)

On Day 4, the team usually turns the storyboard from the previous day into a prototype that can be tested with users on Day 5. The prototype doesn't need to be fully functional; a simulated experience is sufficient to gather user feedback.

Kate's prototype day often consisted of marathon Figma Design sessions and late-night pizza dinners. She was hoping AI would change that today. She fed yesterday's storyboard to ChatGPT and asked it for screens. It took a while to generate, but she was excited to see a screen flow gradually appear on her screen, except that it had 3 screens instead of the 6 frames from her storyboard.

She tried different prompts but couldn't get a complete flow. She was excited when it offered to create a Figma file. She waited. And waited. And reminded it that she was right there waiting. It finally gave her a link, a link that didn't work. That made her question why she needed prototype screens in Figma at all.
Could one of those no-code tools do the job? Kate remembered that she had just received beta access to Figma Make (Figma's AI prompt-to-prototype tool) and decided to put it to the test. She shared her storyboard, added context about the prototype for a design sprint, the problem, and a brief description of the audience. In a few seconds, she saw code being generated and a running description on the left. It seemed like it would take some time, so she headed to the cafeteria for lunch. When Kate returned, she was surprised to see Figma Make had created an app along with a detailed explanation:

"I've created a complete finance app prototype with five key screens that guide users through setting financial goals in a friendly, low-pressure way. The app features smooth animations with Framer Motion, a progress indicator to show users where they are in the journey, and emotionally supportive visuals and copy throughout the experience. Each screen serves a specific purpose: welcoming users, helping them define relatable goals, building their plan with visual feedback, presenting a simple summary, and offering next steps without pressure.

You might want to:

- Add personalization options like user profiles or avatars.
- Implement a dark mode toggle for accessibility.
- Create additional goal templates and saving strategies."

Since Figma Make had built the prototype, not her, Kate felt like she was User 0, and clicked her way through the interactive prototype, making a mental note of what worked, what didn't. She was impressed to see that she could not only select a common goal (buy a home, save for education, and so on) but could also create her own goal (hello, DeLonghi Coffee Maker) and it carried over for the rest of the experience. This was something she had never been able to do in Figma Design!

Despite some obvious misses, like a missing header and navigation and some buttons not working, she was impressed! Kate tried the option to Publish, and it gave her a link that she immediately shared with her product and engineering partners. A few minutes later, they joined her in the conference room, exploring it together. The engineer scanned the code, didn't seem impressed, but said it would work as a disposable prototype.

Kate prompted Figma Make to add an orange header and app navigation, and this time the trio kept their eyes peeled as they saw the progress in code and in English. The results were pretty good. They spent the next hour making changes to get it ready for testing. Even though he didn't admit it, the engineer seemed impressed with the result, if not the code.

By late afternoon, they had a functioning interactive prototype. Kate fed ChatGPT the prototype link and asked it to create a usability testing script. It came up with a basic but complete test script, including a checklist for observers to take notes. Kate went through the script carefully and updated it to add probing questions about AI transparency, emotional check-ins, more specific task scenarios, and a post-test debrief that looped back to the sprint goal. Kate did a dry run with her product partner, who teased her: "Did you really need me? Couldn't your AI do it?" It hadn't occurred to her, but she was now curious! "Act as a Gen Z user seeing this interactive prototype for the first time. How would you react to the language, steps, and tone? What would make you feel more confident or in control?"

It worked! ChatGPT simulated user feedback for the first screen and asked if she wanted it to continue. "Yes, please," she typed.
A few seconds later, she was reading what could very well have been a screen-by-screen transcript from a test. Kate was still processing what she had seen as she drove home, happy she didn't have to stay late. The simulated test using AI appeared impressive at first glance. But the more she thought about it, the more disturbing it became. The output didn't mention what the simulated user clicked, and if she had asked, she probably would have received an answer. But how useful would that be? After almost missing her exit, she forced herself to think about eating a relaxed meal at home instead of her usual Prototype-Thursday-Multitasking-Pizza-Dinner.

Kate's Reflection

Today was the most meta I've felt all week: building a prototype about AI, with AI, while being coached by AI. And it didn't all go the way I expected.

While ChatGPT didn't deliver prototype screens, Figma Make coded a working, interactive prototype with interactions I couldn't have built in Figma Design. I used curiosity and experimentation today, by asking: What if I reworded this? What if I flipped that flow?

AI moved fast, but I had to keep steering. But I have to admit that tweaking the prototype by changing the words, not code, felt like magic!

Critical thinking isn't optional anymore; it is table stakes.

My impromptu ask of ChatGPT to simulate a Gen Z user testing my flow? That part both impressed and unsettled me. I'm going to need time to process this. But that can wait until next week. Tomorrow, I test with 5 Gen Zs: real people.

Friday: Test (aka Prototype Meets User)

Day 5 in a design sprint is a culmination of the week's work, from understanding the problem, exploring solutions, and choosing the best, to building a prototype. It's when teams interview users and learn by watching them react to the prototype and seeing if it really matters to them.

As Kate prepped for the tests, she grounded herself in the sprint problem statement and the users: How might we help Gen Z users confidently take their first financial action in our app in a way that feels simple, safe, and puts them in control? She clicked through the prototype one last time; the link still worked! And just in case, she also had screenshots saved.

Kate moderated the five tests while her product and engineering partners observed. The prototype may have been AI-generated, but the reactions were human. She observed where people hesitated and what made them feel safe and in control. Based on the participant, she would pivot, go off-script, and ask clarifying questions, getting deeper insights.

After each session, she dropped the transcripts and their notes into ChatGPT, asking it to summarize that user's feedback into pain points, positive signals, and any relevant quotes. At the end of the five rounds, Kate prompted it for recurring themes to use as input for their reflection and synthesis. (A scripted version of this summarize-then-synthesize loop is sketched at the end of this article.)

The trio combed through the results, with an eye out for any suspicious AI-generated results. They ran into one: "Users Trust AI." Not one user had mentioned or clicked the "Why this?" link, but AI possibly assumed transparency features worked because they were available in the prototype.

They agreed that the prototype resonated with users, allowing all of them to easily set their financial goals, and identified a couple of opportunities for improvement: better explaining AI-generated plans and celebrating win moments after creating a plan. Both were fairly easy to address during their product build process.

That was a nice end to the week: another design sprint wrapped, and Kate's first AI-augmented design sprint!
She started Monday anxious about falling behind, overwhelmed by options. She closed Friday confident in a validated concept, grounded in real user needs, and empowered by tools she now knew how to steer.

Kate's Reflection

Test driving my prototype with AI yesterday left me impressed and unsettled. But today's tests with people reminded me why we test with real users, not proxies or people who interact with users, but actual end users. And GenAI is not the user. Five tests put my designerly skill of observation to the test.

GenAI helped summarize the test transcripts quickly but snuck in one last hallucination this week, about AI! With AI, don't trust, always verify! Critical thinking is not going anywhere.

AI can move fast with words, but only people can use empathy to move beyond words to truly understand human emotions.

My next goal is to learn to talk to AI better, so I can get better results.

Conclusion

Over the course of five days, Kate explored how AI could fit into her UX work, not by reading articles or LinkedIn posts, but by doing. Through daily experiments, iterations, and missteps, she got comfortable with AI as a collaborator to support a design sprint. It accelerated every stage: synthesizing user feedback, generating divergent ideas, giving feedback, and even spinning up a working prototype.

What was clear by Friday was that speed isn't insight. While AI produced outputs fast, it was Kate's designerly skills (curiosity, empathy, observation, visual communication, experimentation, and most importantly, critical thinking and a growth mindset) that turned data and patterns into meaningful insights. She stayed in the driver's seat, verifying claims, adjusting prompts, and applying judgment where automation fell short.

She started the week on Monday overwhelmed, her confidence dimmed by uncertainty and the noise of AI hype. She questioned her relevance in a rapidly shifting landscape. By Friday, she not only had a validated concept but had also reshaped her entire approach to design. She had evolved: from AI-curious to AI-confident, from reactive to proactive, from unsure to empowered. Her mindset had shifted: AI was no longer a threat or a trend; it was like a smart intern she could direct, critique, and collaborate with. She didn't just adapt to AI. She redefined what it meant to be a designer in the age of AI.

The experience raised deeper questions: How do we make sure AI-augmented outputs are not made up? How should we treat AI-generated user feedback? Where do ethics and human responsibility intersect?

Besides a validated solution to their design sprint problem, Kate had prototyped a new way of working as an AI-augmented designer. The question now isn't just "Should designers use AI?". It's "How do we work with AI responsibly, creatively, and consciously?".
That's what the next article will explore: designing your interactions with AI using a repeatable framework.

Poll: If you could design your own AI assistant, what would it do?

- Assist with ideation?
- Research synthesis?
- Identify customer pain points?
- Or something else entirely?

Share your idea, and in the spirit of learning by doing, we'll build one together from scratch in the third article of this series: Building your own CustomGPT.

Resources

- Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days, by Jake Knapp
- The Design Sprint
- Figma Make
- "OpenAI Appeals Sweeping, Unprecedented Order Requiring It Maintain All ChatGPT Logs," Vanessa Taylor

Tools

As mentioned earlier, ChatGPT was the general-purpose LLM Kate leaned on, but you could swap it out for Claude, Gemini, Copilot, or other competitors and likely get similar results (or at least similarly weird surprises). Here are some alternate AI tools that might suit each sprint stage even better. Note that with dozens of new AI tools popping up every week, this list is far from exhaustive.

- Understand: Dovetail, UserTesting's Insights Hub, Marvin (summarize and synthesize data)
- Sketch: Any LLM, Musely (brainstorm concepts and ideas)
- Decide: Any LLM (critique/provide feedback)
- Prototype: UIzard, UXPilot, Visily, Krisspy, Figma Make, Lovable, Bolt (create wireframes and prototypes)
- Test: UserTesting, UserInterviews, PlaybookUX, Maze, plus tools from the Understand stage (moderated and unmoderated user tests/synthesis)
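Finally, in the spirit of learning by doing, here is a minimal sketch of Friday's workflow: summarize each session's transcript on its own, then ask for recurring themes across the summaries. It assumes the OpenAI Python SDK and transcripts saved as local text files; the model, file layout, and prompt wording are illustrative assumptions rather than the article's exact setup.

```python
# Minimal sketch of Friday's per-session summarize-then-synthesize loop.
# Assumes the OpenAI Python SDK and transcripts saved as local text files;
# file names, model choice, and prompt wording are illustrative.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works here
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return response.choices[0].message.content

# Step 1: summarize each session on its own, grounded in that transcript only.
summaries = []
for path in sorted(Path("transcripts").glob("session_*.txt")):
    transcript = path.read_text(encoding="utf-8")
    summaries.append(ask(
        "Summarize this usability test transcript into pain points, "
        "positive signals, and relevant verbatim quotes. Use only the "
        f"transcript below; do not invent findings.\n\n{transcript}"
    ))

# Step 2: synthesize recurring themes across the session summaries.
joined = "\n\n---\n\n".join(summaries)
themes = ask(
    "Across these session summaries, list recurring themes with the "
    "sessions and quotes that support each theme. Flag any theme that "
    f"lacks direct evidence.\n\n{joined}"
)
print(themes)
```

As Kate's "Users Trust AI" theme showed, the synthesis step is where hallucinations sneak in, which is why this sketch asks for supporting quotes and flags unsupported themes; verify each theme against the raw transcripts before acting on it.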
  • Gustaf Westman Gives IKEA's Swedish Meatballs Proper Presentation
    design-milk.com
    This summer has been bookended with playful designs by Gustaf Westman. First, Paris and the internet were wowed by Westman's hilarious, eccentric bubble-gum pink baguette holder, a design so unapologetically silly it's impossible not to smile. And now, as summer winds down, Westman has revealed his next act: a collaboration with IKEA. Launching September 9th, the playful 12-piece collection is a perfect example of Westman's and IKEA's shared fascination with elevating everyday rituals through experimentation.

    In Gustaf Westman's world, design is never too serious. The Swedish creative infuses everyday objects and interiors with playful shapes, bold colors, and a sense of humor, proving that function and fun can coexist beautifully. So of course a collaboration with IKEA feels natural and cohesive.

    The first object of the collection turns the iconic Swedish meatball into a bigger star, with a blue porcelain plate designed to serve the tasty golden-brown spheres center stage. Its elongated design arranges them neatly in a row. "The design is simple, lining up the meatballs so each one is visible, like they're sitting on little thrones. And while it was created with meatballs in mind, it works just as well for many other dishes," says Westman.

    As someone who has built a career designing around specific food shapes and thoughtfully nestling them into their own little nooks, I'm delighted by the versatility of this piece. The possibilities are endless: the plate doubles as a tray for round-bottomed olives or beet-pickled quail eggs. Flip the function, and the center channel becomes the perfect spot for butter or aioli, surrounded by arancini or warm focaccia. However it's used, this is a dish worth sharing. One request: more colors, please.

    To learn more about Gustaf Westman's upcoming collaboration with IKEA, visit ikea.com. Photography courtesy of IKEA.
  • Data-driven is dead
    uxdesign.cc
    How the industry has shaped me to embrace data-driven design.

    I've been working as an interaction designer and PM for years. When I first came to the US a decade ago, I wasn't sure how I'd fit into the job market. I wasn't from here and didn't know the playbook. Through trial and error, I eventually found myself in the then-booming role of UX designer, a job that felt relatable, in demand, and easy to explain to others at the time.

    Like many in the field, I leaned heavily into the mantra of data-driven design. Every choice had to be backed by numbers, validated by user tests, or confirmed by analytics. For a while, that approach was powerful. But I've come to believe it's no longer the true advantage of a designer. In fact, it's becoming obsolete.

    Over the last decade, the digital product industry has centered itself on process: templates, frameworks, and ways to integrate design into business efficiently with data. But in doing so, designers, myself included, have slowly boxed ourselves in. Much of what designers produce today (structured iterations, data-driven optimizations) is exactly the kind of work that AI can do faster, or lower-cost labor can do cheaper.

    My observations with data-driven design

    Data-driven design is easily replicable, especially with AI. It's a great tool for an operator, but that has put some design jobs at risk.

    It flattens experiences. Optimizing for numbers alone converges toward sameness: endless scroll feeds, grid layouts, the same funnels.

    It's reactive. Most available data reflects only the past. Leading indicators are often hard to identify or measure. As a result, we tend to focus on lagging data, making iterations reactive rather than inventive or preventive. When KPIs miss badly, debates over what to tweak can become paralyzing.

    The uncomfortable truth is this: by clinging to the data-driven process as our identity, we've made ourselves replaceable. You can see it in the job market: roles shrinking, tasks offloaded, design increasingly treated as a commodity.

    But I've also noticed recent signs of a shift. After the euphoric rush of AI, some teams are realizing the limits of automation. AI improves productivity, yes, but when it comes to fine-tuning, to the subtle judgment calls that make an experience feel right, it falls flat. And that's exactly where design lives.

    Looking back at history, the pattern is clear: many of the most important products weren't born from data at all, but from ambiguous, even irrational design choices.

    Creative Ambiguity

    Feels right! The iPod's click wheel

    The click wheel was born less from data and more from a designer's hunch about rhythm. Controlling thousands of songs with a tiny screen seemed impossible, until someone spun their thumb around a wheel and realized it could feel like scratching vinyl, a gesture with cadence and playfulness.

    "Sometimes stupid things only seem stupid at first, but if you break through, it actually becomes smart." - Tony Fadell

    The 3rd generation replaced the mechanical buttons with touch buttons, placing them in a separate location. The issue was that the wheel and the controls were no longer together, making the interaction less seamless. The 4th generation solved it by integrating the navigation and the controls into a single seamless touch wheel. (Image source: Ken Segall)

    Satisfying to see it working! Dyson's transparent vacuum

    When James Dyson proposed a clear canister that displayed all the dirt being sucked up, designers told him: nobody wants to see their dust. Dyson flipped the logic: the visibility wasn't disgusting, it was satisfying.

    "I persisted, because I found it really fascinating that you could see exactly what was happening. I wouldn't have got that from research; I'd have gotten the exact opposite." - James Dyson

    The DC01 vacuum cleaner, a bagless design, was inspired by how sawdust was removed from the air by large industrial cyclones at a local sawmill. Using clear plastic for the dust collector was a provocative choice, but it directly represented the product's function, showing how the suction worked more effectively without the traditional bag. (Image source: The Guardian)

    More living, more fluid! Snapchat's ephemeral messages

    One of the defining traits of digital products is effortless access to past content. Snapchat inverted that logic. Instead of permanence as the source of value, what if the value was in disappearance? Evan Spiegel described it as removing the pressure associated with permanence. The result was messaging that felt playful, intimate, and alive: less like an archive, more like a conversation. Most importantly, ephemerality nudged users to return frequently, knowing messages and stories would vanish if they didn't.

    "Snapchat isn't about capturing the traditional Kodak moment. It's about communicating with the full range of human emotion, not just what appears to be pretty or perfect." - Evan Spiegel

    By 2015-2016, many users were already screenshotting snaps they wanted to keep or using third-party apps to save them. Snap saw this behavior and recognized that people wanted a way to preserve certain moments rather than lose them forever, which led to the release of Memories. (Image source: Gadgets360)

    The Emergence of the Walkman Effect: the Sony Walkman

    Sony's market research was clear: nobody wanted a tape player without a record button. Akio Morita, Sony's co-founder, ignored the data and pushed ahead with the Walkman (1979). He believed people didn't yet realize they wanted private, mobile music. He was right. The Walkman redefined how people consumed music, introducing the "Walkman Effect": giving listeners control over their environment.

    "The public does not know what is possible, but we do. Instead of doing a lot of market research, we refine our thinking on a product and its use and try to create a market for it by educating and communicating with the public." - Akio Morita

    The Walkman, designed to enhance the listening experience in public spaces, was initially released with two earphone jacks for sharing music, but the feature was later removed as it wasn't widely used. (Image source: Bibliore)

    Iconic and Abstract: the Absolut Vodka Campaign (1980s)

    In the 1980s, Absolut Vodka took a bold approach to advertising. Instead of describing the vodka's taste or craftsmanship, the team fixated on the bottle itself and treated its silhouette as a cultural canvas. No focus group or market data suggested this would work; it looked risky, even puzzling. Yet the playful, surreal representations of the bottle (as a halo, a snow globe, a stage) resonated, and more importantly, made people curious about this mysterious foreign-born vodka. The campaign became one of the longest-running and most recognizable in advertising history, proving that imagination and ambiguity could break through traditional advertising templates.

    "Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep." - Scott Adams

    Absolut's advertisements initially sparked curiosity among American consumers, highlighting the mystique of this foreign brand. Over time, using the iconic bottle silhouette, the campaign incorporated more cultural references to stay relevant and engaging. Eventually, print ads reached their limits, and the campaign expanded into broader advertising channels. (Image source: ReferralCandy)

    It's about the positioning

    I apologize for titling the article "Data-driven is dead"; I admit I wanted it to sound a little more controversial. In fact, I love working with data, and the examples I mentioned above also evolved based on consumer reactions. More importantly, they were fundamentally functional. But I've also found that relying on data too much can narrow the scope of the conversation, and it doesn't always help steer the direction when we're far off course. I simply think it's somewhat outdated to present data-driven design as your core role. Yes, as a professional, you should pay attention to business performance and client behavior. However, we should also feel confident talking about feeling, intuition, and creative ambiguity. That positioning makes me feel more optimistic as a product designer today.

    References:
    Tony Fadell tells us the story of the iPod-based iPhone prototype | The Verge
    A Conversation with James Dyson In Three Parts | Core77
    SNAPCHAT'S FAILED EPHEMERALITY | AMODERN
    Case: The Sony Walkman | Commoncog Case Library

    "Data-driven is dead" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • These Are the Best Deals on Video Games and Gaming Accessories This Labor Day
    lifehacker.com
    We may earn a commission from links on this page. Deal pricing and availability are subject to change after time of publication.

    Labor Day sales are rolling in, and Lifehacker is sharing the best sales, based on product reviews, comparisons, and price-tracking tools, before they're over. You can also subscribe to our shopping newsletter, Add to Cart, for the best sales sent to your inbox.

    Gamers, rejoice: Labor Day doesn't only mark the end of summer, it kicks off prime gaming season. The temperature is about to drop to "too damn cold to go out," and winter's long nights are perfect for marathon gaming sessions, so I've rounded up the best deals on games and gear to keep you gaming until the sun warms the Earth again.

    Best Labor Day deals on video game hardware

    Whether you're looking for a new gaming laptop, a single solution for all your retro gaming needs, or a vast improvement to your racing game experience, you can get it on the cheap during Labor Day.

    ASUS TUF Gaming Laptop: battle-ready without breaking the bank ($649.99 at Amazon, down from $799.99)
    This ASUS TUF gaming laptop boasts an AMD Ryzen 6-core, 12-thread processor for solid gameplay and multitasking, an NVIDIA GeForce RTX 3050 GPU, 8GB of DDR5-5600 RAM, and a 15.6-inch FHD (1920x1080) display. Plus, it's designed for durability as well as performance, so dropping it shouldn't be an issue. All that for $649.99, 19% off the list price of $799.99. While I haven't used one, its Amazon rating is 4.3 out of 5 stars, and this matches the lowest price Amazon has ever offered for this product.

    Retro gaming console: just about every old game, instantly ($39.99 at Amazon, down from $64.99)
    If you're into older games but don't want the hassle of installing a million emulators on your PC, consider this retro gaming console. It lets you play over 20,000 games, pre-installed on a 64GB TF card. Whether you like the coin-op game cabinets of the 1980s, outdated PC games, or classic Nintendo titles, this console has you covered. It comes with two wireless controllers, so you can share. 12-year-old me would have paid hundreds of thousands of dollars for instant access to every game in existence, but 2025 me only needs to cough up $39.99, marked down from $64.99.

    Thrustmaster T248 racing wheel and magnetic pedals for Xbox Series X|S and PC ($289.99 at Best Buy, down from $349.99)
    If you like playing racing games on your Xbox Series X/S console, take the adrenaline up a few notches with this Thrustmaster T248 dedicated racing wheel. It features force feedback so you'll feel the road, 25 remappable buttons, magnetic paddles for fast shifting, a digital dashboard display, and a ton more. And it's on sale for Labor Day for $289.99, a saving of $60 off the list price.

    Best Labor Day deals for PS5 gamers

    PlayStation 5 gamers: here are some suggestions on new games you can pick up at cheap-as-chips prices.

    WWE 2K25 ($34.99 at Best Buy, down from $69.99)
    Get into the ring with WWE 2K25 on PS5. The newest WWE wrasslin' fest features a roster of over 300 superstars, new match types, and the return of intergender bouts. Relive iconic moments in the Bloodline Showcase, take your fury online, and explore the new open-world Island mode. WWE 2K25 has received critical raves, and it can be yours for only $34.99, half off the regular $69.99 price.

    It Takes Two ($19.99 at Best Buy, down from $39.99)
    My favorite kind of game is couch co-op (there's nothing like teaming up with a real-life friend), but the genre just isn't that populated these days. There are a few great couch co-op games, though, like It Takes Two, a whirlwind platformer where you and Player 2 are a couple turned into dolls. You have to work together on every level (no single player allowed) to save your relationship. Packed with creativity, charm, and the kind of teamwork-based puzzles I love, It Takes Two is as much a bonding experience as a game, and it's currently on sale for $19.99 (down from $39.99).

    Best Labor Day deals for PC gamers

    If you're into PC gaming, I have some nice deals to wrap your mouse and keyboard around, including a ridiculous 95% off on Battlefield 2042.

    Battlefield 2042 ($2.99 on Steam, down from $59.99)
    This is one of those sale prices so low it might as well be "free." The PC version of EA's Battlefield 2042 is currently on sale on Steam for $2.99, a full 95% off the list price of $59.99. You can also pick up the Elite edition for $13.49, which is 85% off the list price. While 2042 hasn't gotten the best reviews, it has its hardcore fans, and it costs less than a cup of coffee to find out if you're among them.

    Forza franchise sale (50% off on Steam)
    The racing series Forza has been around since Forza Motorsport was released in 2005 for the original Xbox. That's 20 years of iteration and improvement on the racing game genre. If you want to get into it for half price, Steam is offering a 50%-off deal on a ton of Forza games, including 2023's Forza Motorsport, Forza Horizon 5, and Forza Horizon 5: Hot Wheels.

    Best Labor Day deals for Switch 2 gamers

    Bargains on games for the Switch 2 are rare, and price chops on the console itself are even harder to find, but there are a few deals out there for Labor Day.

    Donkey Kong Bananza ($69.00 at Walmart, down from $79.99)
    In this inventive 3D platformer, Nintendo OGs Donkey Kong and Pauline tunnel through destructible subterranean worlds to reclaim the stolen Banandium gems from the nefarious VoidCo. This adventure blends two of my favorite video game things, smashing and exploration, and can be played solo or co-op. Donkey Kong Bananza is a must-have, and it's currently available at Walmart for around $69, down from the list price of $79.99.

    innoAura Switch 2 carrying cases ($27.99 at Amazon)
    The Nintendo Switch 2 is undoubtedly the most stylish console, and these hard-shell carrying cases can make it even more chic. They offer a snug fit for all ports, Joy-Cons, and buttons, and they deliver shock, drop, and dust protection, plus a soft inner lining to prevent scratches. Best of all, they come in a ton of color and design options to keep you and your Switch 2 looking fly. These cases are on sale for only $18.99, 30% off the list price.

    Best Labor Day deals for Xbox gamers

    Last, but never least, Xbox gamers! Here are a couple of don't-miss-'em deals on games for the Series X and S.

    Resident Evil Village ($9.99, down from $39.99)
    Resident Evil Village is an excellent survival-horror game. You are Ethan Winters, thrust into a twisted nightmare after your daughter's kidnapping, forced to venture into a superstition-shrouded village full of werewolves, vampires, and other spooky creatures. The eighth full entry in the Resident Evil saga features beautiful, haunting graphics and Dolby Atmos sound. And it's so cheap: $9.99, down from a regular price of $39.99.

    Stray ($17.99, down from $29.99)
    If you're in the mood for something a little different, check out Stray, a quirky adventure where you play as a cat in a dystopian sci-fi future city. With the help of a robot companion, you'll use stealth and brains to navigate dark, dangerous streets, hack future tech, and overcome obstacles on your quest to get back home. Stray is a one-of-a-kind game, and it's on sale for $17.99, down from $29.99.

    Our Best Editor-Vetted Tech Deals Right Now

    Apple AirPods Pro 2 Noise Cancelling Wireless Earbuds: $169.00 (list price $249.00)
    Google Pixel 10 128GB Unlocked Phone with $100 Amazon Gift Card (Obsidian): $799.00 (list price $899.00)
    Samsung Galaxy S25 Edge 256GB Unlocked AI Phone (Titanium JetBlack): $829.95 (list price $1,099.99)
    Dell 16 DC16251 (Intel Core 7 150U, 1TB SSD, 32GB RAM): $699.99 (list price $949.99)
    Blink Video Doorbell Wireless (newest model) + Sync Module Core: $39.99 (list price $69.99)
    Amazon Fire TV Stick 4K (2nd Gen, 2023): $29.99 (list price $49.99)
    Apple iPad 11" 128GB A16 WiFi Tablet (Blue, 2025): $299.00 (list price $349.00)

    Deals are selected by our commerce team.
  • Blade Runner 2099 will reportedly be released next year on Prime Video
    www.engadget.com
    Amazon's Blade Runner limited series finally has a release window. Deadline reports that the upcoming sequel show, Blade Runner 2099, is slated for a 2026 release on Prime Video. The story at this point remains a mystery, though the title suggests it'll take place 50 years after the events of Blade Runner 2049. Ridley Scott is said to be involved in the production.It was revealed last year that Michelle Yeoh will star in the series, and according to Deadline, she'll be joined by Hunter Schafer, Dimitri Abold, Lewis Gribben, Katelyn Rose Downey and Daniel Rigby. We first heard about the possibility of Blade Runner 2099 back in 2022, when it was reported that Amazon Studios was developing a live-action series set in that universe, but there have been few updates since. The release window was noted in an internal memo obtained by Deadline, which reports that the series is now in post-production.This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/blade-runner-2099-will-reportedly-be-released-next-year-on-prime-video-210513272.html?src=rss
  • Windows 95 at 30 - Way ahead of its time, or the greatest Microsoft game-changer?
    www.techradar.com
    Windows 95 changed the game for Microsoft and set the standard for the company's iconic operating system.