• Intent Prototyping: The Allure And Danger Of Pure Vibe Coding In Enterprise UX (Part 1)
    smashingmagazine.com
    There is a spectrum of opinions on how dramatically all creative professions will be changed by the coming wave of agentic AI, from the very skeptical to the wildly optimistic and even apocalyptic. I think that even if you are on the skeptical end of the spectrum, it makes sense to explore ways this new technology can help with your everyday work. As for my everyday work, I've been doing UX and product design for about 25 years now, and I'm always keen to learn new tricks and share them with colleagues. Right now, I'm interested in AI-assisted prototyping, and I'm here to share my thoughts on how it can change the process of designing digital products.

    To set your expectations up front: this exploration focuses on a specific part of the product design lifecycle. Many people know about the Double Diamond framework, which shows the path from problem to solution. However, I think it's the Triple Diamond model that makes an important point for our needs. It explicitly separates the solution space into two phases: Solution Discovery (ideating and validating the right concept) and Solution Delivery (engineering the validated concept into a final product). This article is focused squarely on that middle diamond: Solution Discovery.

    How AI can help with the preceding (Problem Discovery) and the following (Solution Delivery) stages is out of the scope of this article. Problem Discovery is less about prototyping and more about research, and while I believe AI can revolutionize the research process as well, I'll leave that to people more knowledgeable in the field. As for Solution Delivery, it is more about engineering optimization. There's no doubt that software engineering in the AI era is undergoing dramatic changes, but I'm not an engineer, I'm a designer, so let me focus on my sweet spot.

    And my sweet spot has a specific flavor: designing enterprise applications. In this world, the main challenge is taming complexity: dealing with complicated data models and guiding users through non-linear workflows. This background has had a big impact on my approach to design, putting a lot of emphasis on the underlying logic and structure. This article explores the potential of AI through this lens.

    I'll start by outlining the typical artifacts designers create during Solution Discovery. Then, I'll examine the problems with how this part of the process often plays out in practice. Finally, we'll explore whether AI-powered prototyping can offer a better approach, and if so, whether it aligns with what people call "vibe coding" or calls for a more deliberate and disciplined way of working.

    What We Create During Solution Discovery

    The Solution Discovery phase begins with the key output from the preceding research: a well-defined problem and a core hypothesis for a solution. This is our starting point. The artifacts we create from here are all aimed at turning that initial hypothesis into a tangible, testable concept.

    Traditionally, at this stage, designers can produce artifacts of different kinds, progressively increasing fidelity: from napkin sketches, boxes-and-arrows, and conceptual diagrams to hi-fi mockups, then to interactive prototypes, and in some cases even live prototypes. Artifacts of lower fidelity allow fast iteration and enable the exploration of many alternatives, while artifacts of higher fidelity help to understand, explain, and validate the concept in all its details.

    It's important to think holistically, considering different aspects of the solution.
    I would highlight three dimensions:

    - Conceptual model: objects, relations, attributes, actions;
    - Visualization: screens, from rough sketches to hi-fi mockups;
    - Flow: from very high-level user journeys to more detailed ones.

    One can argue that those are layers rather than dimensions, and that each of them builds on the previous ones (for example, according to Semantic IxD by Daniel Rosenberg), but I see them more as different facets of the same thing, so the design process through them is not necessarily linear: you may need to switch from one perspective to another many times. This is how different types of design artifacts map to these dimensions:

    As Solution Discovery progresses, designers move from the left part of this map to the right, from low fidelity to high fidelity, from ideating to validating, from diverging to converging. Note that at the beginning of the process, different dimensions are supported by artifacts of different types (boxes-and-arrows, sketches, class diagrams, etc.), and only closer to the end can you build a live prototype that encompasses all three dimensions: conceptual model, visualization, and flow.

    This progression shows a classic trade-off, like the difference between a pencil drawing and an oil painting. The drawing lets you explore ideas in the most flexible way, whereas the painting has a lot of detail and overall looks much more realistic, but is hard to adjust. Similarly, as we move toward artifacts that integrate all three dimensions at higher fidelity, our ability to iterate quickly and explore divergent ideas goes down. This inverse relationship has long been an accepted, almost unchallenged, limitation of the design process.

    The Problem With The Mockup-Centric Approach

    Faced with this difficult trade-off, teams often opt for the easiest way out. On the one hand, they need to show that they are making progress and create things that appear detailed. On the other hand, they rarely can afford to build interactive or live prototypes. This leads them to over-invest in one type of artifact that seems to offer the best of both worlds. As a result, the neatly organized bento box of design artifacts we saw previously gets shrunk down to just one compartment: static high-fidelity mockups.

    This choice is understandable, as several forces push designers in this direction. Stakeholders are always eager to see nice pictures, while artifacts representing user flows and conceptual models receive much less attention and priority. They are too high-level, hardly usable for validation, and usually not everyone can understand them.

    On the other side of the fidelity spectrum, interactive prototypes require too much effort to create and maintain, and creating live prototypes in code used to require special skills (and, again, effort). And even when teams make this investment, they do so at the end of Solution Discovery, during the convergence stage, when it is often too late to experiment with fundamentally different ideas. With so much effort already sunk, there is little appetite to go back to the drawing board.

    It's no surprise, then, that many teams default to the perceived safety of static mockups, seeing them as a middle ground between the roughness of sketches and the overwhelming complexity and fragility that prototypes can have.

    As a result, validation with users doesn't provide enough confidence that the solution will actually solve the problem, and teams are forced to make a leap of faith to start building.
    To make matters worse, they do so without a clear understanding of the conceptual model, the user flows, and the interactions, because from the very beginning, designers' attention has been heavily skewed toward visualization. The result is often a design artifact that resembles the famous horse drawing meme: beautifully rendered in the parts everyone sees first (the mockups), but dangerously underdeveloped in its underlying structure (the conceptual model and flows).

    While this is a familiar problem across the industry, its severity depends on the nature of the project. If your core challenge is to optimize a well-understood, linear flow (like many B2C products), a mockup-centric approach can be perfectly adequate. The risks are contained, and the lopsided-horse problem is unlikely to be fatal. However, it's different for the systems I specialize in: complex applications defined by intricate data models and non-linear, interconnected user flows. Here, the biggest risks are not on the surface but in the underlying structure, and a lack of attention to the latter would be a recipe for disaster.

    Transforming The Design Process

    This situation makes me wonder: how might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one? If we were able to answer this question, we would:

    - Learn faster. By going straight from intent to a testable artifact, we cut the feedback loop from weeks to days.
    - Gain more confidence. Users interact with real logic, which gives us more proof that the idea works.
    - Enforce conceptual clarity. A live prototype cannot hide a flawed or ambiguous conceptual model.
    - Establish a clear and lasting source of truth. A live prototype, combined with a clearly documented design intent, provides the engineering team with an unambiguous specification.

    Of course, the desire for such a process is not new. This vision of a truly prototype-driven workflow is especially compelling for enterprise applications, where the benefits of faster learning and forced conceptual clarity are the best defense against costly structural flaws. But this ideal was out of reach because prototyping in code took so much work and specialized skills. Now, the rise of powerful AI coding assistants changes this equation in a big way.

    The Seductive Promise Of Vibe Coding

    And the answer seems to be obvious: vibe coding!

    "Vibe coding is an artificial intelligence-assisted software development style popularized by Andrej Karpathy in early 2025. It describes a fast, improvisational, collaborative approach to creating software where the developer and a large language model (LLM) tuned for coding act rather like pair programmers in a conversational loop." (Wikipedia)

    The original tweet by Andrej Karpathy:

    The allure of this approach is undeniable. If you are not a developer, you are bound to feel awe when you describe a solution in plain language and, moments later, you can interact with it. This seems to be the ultimate fulfillment of our goal: a direct, frictionless path from an idea to a live prototype.
    But is this method reliable enough to build our new design process around it?

    The Trap: A Process Without A Blueprint

    Vibe coding mixes up a description of the UI with a description of the system itself, resulting in a prototype based on shifting assumptions rather than a clear, solid model. The pitfall of vibe coding is that it encourages us to express our intent in the most ambiguous way possible: by having a conversation. This is like hiring a builder and telling them what to do one sentence at a time without ever presenting them a blueprint. They could build a wall that looks great, but you can't be sure that it can hold weight.

    I'll give you one example illustrating the problems you may face if you try to jump over the chasm between your idea and a live prototype relying on pure vibe coding in the spirit of Andrej Karpathy's tweet. Imagine I want to prototype a solution to keep track of tests that validate product ideas. I open my vibe coding tool of choice (I intentionally don't disclose its name, as I believe they are all awesome yet prone to similar pitfalls) and start with the following prompt:

    "I need an app to track tests. For every test, I need to fill out the following data:
    - Hypothesis (we believe that...)
    - Experiment (to verify that, we will...)
    - When (a single date, or a period)
    - Status (New/Planned/In Progress/Proven/Disproven)"

    And in a minute or so, I get a working prototype. Inspired by success, I go further:

    "Please add the ability to specify a product idea for every test. Also, I want to filter tests by product ideas and see how many tests each product idea has in each status."

    And the result is still pretty good. But then I want to extend the functionality related to product ideas:

    "Okay, one more thing. For every product idea, I want to assess the impact score, the confidence score, and the ease score, and get the overall ICE score. Perhaps I need a separate page focused on the product idea, with all the relevant information and related tests."

    And from this point on, the results get more and more confusing. The flow of creating tests hasn't changed much. I can still create a bunch of tests, and they seem to be organized by product ideas. But when I click "Product Ideas" in the top navigation, I see nothing. I need to create my ideas from scratch, and they are not connected to the tests I created before. Moreover, when I go back to "Tests", I see that they are all gone. Clearly something went wrong, and my AI assistant confirms that:

    "No, this is not expected behavior; it's a bug! The issue is that tests are being stored in two separate places (local state in the Index page and App state), so tests created on the main page don't sync with the product ideas page."

    Sure, eventually it fixed that bug, but note that we encountered it on only the third step, when we asked to slightly extend the functionality of a very simple app. The more layers of complexity we add, the more roadblocks of this sort we are bound to face. Also note that this specific problem, a not fully thought-out relationship between two entities (product ideas and tests), is not isolated at the technical level, and therefore it didn't go away once the technical bug was fixed. The underlying conceptual model is still broken, and it manifests in the UI as well. For example, you can still create orphan tests that are not connected to any item from the Product Ideas page. As a result, you may end up with different numbers of ideas and tests on different pages of the app.

    Let's diagnose what really happened here. The AI's response that this is "a bug" is only half the story. The true root cause is a conceptual model failure. My prompts never explicitly defined the relationship between product ideas and tests. The AI was forced to guess, which led to the broken experience. For a simple demo, this might be a fixable annoyance. But for a data-heavy enterprise application, this kind of structural ambiguity is fatal. It demonstrates the fundamental weakness of building without a blueprint, which is precisely what vibe coding encourages.
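    To make the missing "blueprint" concrete: the relationship my prompts left implicit could have been stated in a handful of lines. Here is a minimal sketch in TypeScript; the type names and the ICE formula (impact × confidence × ease is one common convention) are my own illustration, not the output of, or input to, any particular tool.

    ```typescript
    // Illustrative conceptual model for the test-tracking app.
    // The key decision the prompts never made explicit: every Test
    // belongs to exactly one ProductIdea, so orphan tests cannot exist.

    type Status = "New" | "Planned" | "In Progress" | "Proven" | "Disproven";

    interface ProductIdea {
      id: string;
      name: string;
      impact: number;     // assumed 1-10 scale
      confidence: number; // assumed 1-10 scale
      ease: number;       // assumed 1-10 scale
    }

    interface Test {
      id: string;
      productIdeaId: ProductIdea["id"]; // required relation, not optional
      hypothesis: string;               // "We believe that..."
      experiment: string;               // "To verify that, we will..."
      start: string;                    // ISO date; the only date for one-day tests
      end?: string;                     // set only when the test spans a period
      status: Status;
    }

    // Derived, never stored, so the score cannot drift out of sync.
    const iceScore = (idea: ProductIdea): number =>
      idea.impact * idea.confidence * idea.ease;
    ```

    With the relationship pinned down like this, the assistant would not have to guess whether a test can exist without an idea, and screens that disagree about the number of ideas and tests would be much harder to produce.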
    Don't take this as a criticism of vibe coding tools. They are creating real magic. However, the fundamental truth about "garbage in, garbage out" is still valid. If you don't express your intent clearly enough, chances are the result won't fulfill your expectations.

    Another problem worth mentioning is that even if you wrestle it into a state that works, the artifact is a black box that can hardly serve as a reliable specification for the final product. The initial meaning is lost in the conversation, and all that's left is the end result. This turns the development team into code archaeologists, who have to figure out what the designer was thinking by reverse-engineering the AI's code, which is frequently very complicated. Any speed gained at the start is lost right away because of this friction and uncertainty.

    From Fast Magic To A Solid Foundation

    Pure vibe coding, for all its allure, encourages building without a blueprint. As we've seen, this results in structural ambiguity, which is not acceptable when designing complex applications. We are left with a seemingly quick but fragile process that creates a black box that is difficult to iterate on and even more so to hand off.

    This leads us back to our main question: how might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap? The answer lies in a more methodical, disciplined, and therefore trustworthy process.

    In Part 2 of this series, "A Practical Guide to Building with Clarity," I will outline the entire workflow for Intent Prototyping. This method places the explicit intent of the designer at the forefront of the process while embracing the potential of AI-assisted coding.

    Thank you for reading, and I look forward to seeing you in Part 2.
  • In the silence of my digital efforts, I feel the weight of unmet expectations. Despite pouring my heart into my PPC campaigns, I've realized that success isn't merely a flicker of hope—it's a dance of strategy. Factors like keyword targeting, captivating ad copy, and relevant landing pages play pivotal roles. Yet, even with all these pieces, I often find myself lost in the vastness of the online world, wondering where it all went wrong.

    Each click that doesn’t convert feels like a reminder of my solitude in this competitive landscape. I search for answers and connections, hoping that with every adjustment and optimization, I’ll find my way back to success.

    But what if the true victory lies not just in numbers, but in the courage to keep creating?

    https://www.semrush.com/blog/what-factors-influence-my-ppc-campaign-success/
    #PPC #DigitalMarketing #Loneliness #Hope #Optimization
    What factors influence my PPC campaign success?
    www.semrush.com
    PPC campaign success factors include keyword targeting, compelling ad copy, relevant landing pages, and regular optimization.
  • A Burger Bar in Poland Offers a Modern Twist on a Classic American Eatery
    design-milk.com
    Hamburgers of every variety are fast food favorites, including the popular smash versions popping up on menus everywhere. For the PLUTO brand's first brick-and-mortar restaurant, architects at Znamy Się looked to the signature patties for inspiration. "The entire design draws from the philosophy of smash burgers: simple tools, strong effect," says Wojciech Nowak, co-founder of Znamy Się. "We wanted the interior to emphasize richness through minimal means."

    Located in Wrocław, Poland, the busy lunch spot is seemingly simple, but filled with the kind of elements that add character. An open grill sits at the heart of the space, framed by a raw metal structure reminiscent of a city food stand. The long bar and leather-topped stools, reminiscent of the furnishings in a traditional American diner, also double as a social zone where guests can eat their meals and watch the cooks at work in the open kitchen. A unique triple-layer countertop echoes the look of a perfectly composed burger.

    Lighting offers an ambient glow directed toward the ceiling, where the PLUTO logo floats inside a circular frame. Materials throughout are restrained, yet still have richness. Textured plaster covers the walls, while ceramic flooring brings a luxe touch underfoot. Wood and leather finishes complement retro-style polished accents. Irregular glass blocks provide just enough sheen. Undulating lines enliven every corner with a sense of natural fluidity. Colors like deep caramel brown and orange tones represent the flavors and heat synonymous with cooking, and are coupled with the soft baby blue tint of the brand to create a balanced, fresh palette.

    Contemporary design and classic fare are paired to delight customers. "Our goal was to make the guest experience start even before the first bite, with an interior that feels warm, energetic, and authentic, like comfort food turned into space," Nowak notes.

    PLUTO x Znamy Się

    For more information, visit znamysie.com. Photography by Migdal studio.
  • From products to systems: The agentic AI shift
    uxdesign.cc
    Agents are here, and how we build and use them is challenging many of the foundations for building software that we established over the last few decades, and even the very idea of what a product is.

    Prompt by the author, image by Sora.

    This article is a continuation of previous explorations on the theme of how AI is impacting design and product (Vibe-code designing, Evolving roles), and is based on a presentation delivered at The Age of TOO MUCH exhibition in Protein studios, as part of London Design Week 2025.

    There's a scene in one of my favorite movies, Interstellar, where the characters are on a remote, water-covered planet. In the distance there is what initially appears to be a large landmass, but as Cooper, the main character, looks on, he realizes that they aren't in fact mountains, but enormous waves steadily building and towering ominously over them.

    "Those aren't mountains, those are waves." (Cooper)

    With AI, it feels like we've had a similarly huge wave building on the horizon for the last few years. I wrote previously about how generative AI and vibe coding are changing how we design. In recent months it feels like another seismic shift is underway with agentic AI. So what exactly is this wave, and how is it reshaping the landscape we thought we knew?

    I lead the product design team at DataRobot, an enterprise platform that helps teams build, govern, and operate AI models and agents at scale. From this vantage point, I'm seeing these changes reshape not just how we design, but also many long-held assumptions about what products are and how they're built.

    What's actually changing

    Agents are a fundamentally different paradigm from predictive and generative AI. What sets them apart, aside from being multimodal and capable of deep reasoning, is their autonomous nature. It sounds deceptively simple, but when software has agency, the ability to make decisions and take actions on its own, the results can be quite profound.

    This creates a fundamental challenge for companies integrating AI: software is traditionally built for deterministic, predictable workflows, while agentic AI is inherently probabilistic. The same input can produce different outputs, and agents may take unexpected paths to reach their goals. This mismatch between deterministic infrastructure and probabilistic behavior creates new design challenges around governance, monitoring, and user trust. These aren't just theoretical concerns; they're already playing out in enterprise environments. That's why we partnered with Nvidia to build on their AI Factory design, delivered as agentic apps embedded directly into SAP environments, so customers can run these systems securely and at scale.

    But even with this kind of hardened infrastructure, moving from experimentation to impact remains difficult. Recent MIT research found that 95% of enterprise generative AI pilots fail to deliver measurable impact, highlighting an industry-wide challenge in moving from prototype to production. Our AI Expert service, where specialised consultants work directly with customers to deploy and run agents, delivers outstanding results through personalized support. To extend this level of guidance to a broader customer base, we needed to develop scalable approaches that could address complexity barriers at scale.

    DataRobot homepage

    Moving from AI experimentation to production involves significant technical complexity.
    Rather than expecting customers to build everything from the ground up, we decided to flip the offering and lead instead with a series of agent and application templates that give them a head start.

    To use a food analogy, instead of handing customers a pantry full of raw ingredients (components and frameworks), we now provide something closer to HelloFresh meal kits: pre-scaffolded agent and application templates with prepped components and proven recipes that work out of the box. These templates codify best practices across common customer use cases and frameworks. AI builders can clone them, then swap out or modify components using our platform or in their preferred tools via API.

    Use-case-specific Agentic Application Templates

    This approach is changing how AI practitioners use our platform. One significant challenge is creating front-end interfaces that consume the agents and models: apps for forecasting demand, generating content, retrieving knowledge, or exploring data. Larger organisations with dedicated development teams can handle this easily. But smaller organisations often rely on IT teams or our AI experts to build these interfaces, and app development isn't their primary skill.

    We mitigated this by providing customisable reference apps as starting points. These work if they are close to what you need, but they're not straightforward to modify or extend. Practitioners also use open-source frameworks like Streamlit, but the quality of these often falls short of enterprise requirements for scale, security, and user experience.

    To address this, we're exploring solutions that use agents to generate dynamic applications: dashboards with complex user interface components and data visualizations, tailored to specific customer needs, all using the DataRobot platform as the back end. The result is that users can generate production-quality applications in days, not weeks or months.

    This shift towards autonomous systems raises a fundamental question: how much control should we hand over to agents, and how much should users retain? At the product level, this plays out in two layers: the infrastructure AI practitioners use to create and govern workflows, and the front-end apps people use to consume them. Our customers are now building both layers simultaneously: guidance agents configure the platform scaffolding while different generative agents build the React-based applications that sit on top.

    These aren't prototypes; they're production applications serving enterprise customers. AI practitioners who might not be expert app developers can now create customer-facing software that handles complex workflows, rich data visualization, and business logic. The agents handle React components, layout, and responsive design, while the practitioners focus on domain logic and user workflows.

    We are seeing similar changes in other areas too. Teams across the organisation are using new AI tools to build compelling demos and prototypes with tools like V0. Designers are working alongside front-end developers to contribute production code. But this democratization of development creates new challenges; now that anyone can build production software, the mechanisms for ensuring quality and scalability of code, user experience, brand, and accessibility need to evolve.
    Instead of checkpoint-based reviews, we need to develop new systems that can scale quality to match the new pace of development.

    An example of an app built by our field team using V0 that leverages agent-aware design system docs.

    Learning from the control paradox

    There are lessons from our AutoML (automated machine learning) experience that apply here. While AutoML workflows helped democratize access for many users, some experienced data scientists and ML engineers felt control was being taken away. We had automated the parts they found most rewarding (the creative, skilled work of selecting algorithms and crafting features) while leaving them with the tedious infrastructure work they actually wanted to avoid.

    Earlier version of the DataRobot UI that focused on democratizing access to machine learning

    We're applying this lesson directly to how we build agentic applications. Just as AutoML worked when it automated feature engineering but not always model interpretation, our customers will succeed when agents handle UI implementation while AI/ML experts retain control over the agentic workflow design. The agents automate what practitioners don't want to do (component wiring, state management) while preserving agency over what they do care about (business logic, user experience decisions).

    Now, with agentic AI, this tension plays out at a much broader scale with added complexity. Unlike the AutoML era, when we primarily served data scientists and analysts, we now target a broader range of practitioners, including app developers who might be new to AI workflows, along with the agents themselves as end users. Each group has different expectations about control and automation. Developers are comfortable with abstraction layers and black-box systems; they're used to frameworks that handle complexity under the hood, but they still want to debug, extend, and customise when needed. Data scientists still want explainability and intervention capabilities. Business users just want results.

    But there's another user type we're designing for: the agents themselves. This represents a fundamental shift in how we think about user experience. Agents aren't just tools that humans use; they're collaborative partners that need to interact with our platform, make decisions, and work alongside human practitioners.

    Overview of the users who interact with the DataRobot platform

    When we evaluate new features now, we ask: will the primary user be human or agent? This changes everything about how we approach information architecture, API design, and even visual interfaces (if required). Agents need different types of feedback, different error handling, and different ways to communicate their reasoning to human collaborators.
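    What might "different types of feedback" look like in practice? Here is a hypothetical sketch in TypeScript; the shape is my illustration, not DataRobot's actual API. The same result carries a human-readable message for people and machine-readable fields an agent can branch on.

    ```typescript
    // Hypothetical response envelope for a feature serving both
    // human and agent users. Humans read `message`; agents act on
    // `status`, `reasoning`, and `suggestedActions` without parsing prose.

    interface AgentAwareResult<T> {
      status: "ok" | "retryable_error" | "fatal_error";
      data?: T;
      message: string;             // human-readable summary for the UI
      reasoning?: string;          // why the system produced this outcome
      suggestedActions?: string[]; // machine-readable next steps
      confidence?: number;         // 0-1; lets an agent decide when to escalate
    }

    // An agent can branch on structured fields instead of interpreting text.
    function nextStep<T>(result: AgentAwareResult<T>): "proceed" | "retry" | "escalate" {
      if (result.status === "ok") return "proceed";
      if (result.status === "retryable_error") return "retry";
      return "escalate"; // fatal errors go to a human collaborator
    }
    ```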
    Looking ahead, it's possible agents may emerge as primary users of enterprise platforms. This means designing for human-agent collaboration rather than just human-computer interaction, creating systems where agents and humans can work together effectively, each contributing their strengths to the workflow.

    From designing flows to architecting systems

    These changes challenge fundamental assumptions about what a product is. Traditionally, products are solutions designed to solve specific problems for defined user groups. They usually represent a series of trade-offs: teams research diverse user needs, then create single solutions that attempt to strike the best balance of multiple use cases. This often means compromising on specificity and simplicity to achieve broader appeal.

    Generative AI has already begun disrupting this model by enabling users to bypass traditional product design and development processes entirely. Teams can now get to an approximation of an end result almost instantaneously, then work backward to refine and perfect it. This compressed timeline is reshaping how we think about iteration and validation.

    But agentic AI offers something more fundamental: the ability to generate products and features on demand. Instead of static experiences that try to serve a broad audience, we can create dynamic systems that generate specific solutions for specific contexts and audiences. Users don't just get faster prototypes; they get contextually adaptive experiences that reshape themselves based on individual needs.

    How the product development process is evolving with AI. Credit unknown.

    This shift changes the role of design and product teams. Instead of executing individual products, we become architects of systems that can create products. We curate the constraints, contexts, and components that agents use to generate experiences while maintaining brand guidelines, product principles, and UX standards.

    But this raises fundamental questions about interaction design. How do affordances work when interfaces are generated on demand? Traditional affordances, the visual cues that suggest how an interface element can be used, rely on consistent patterns that users learn over time. Interestingly, AI tools like Cursor, V0, and Lovable address this challenge by leveraging well-established UX frameworks like Tailwind and ShadCN. Rather than creating novel patterns that users need to learn, these tools generate interfaces using robust, widely adopted design systems that provide familiar starting points. When agents generate interfaces contextually using these established frameworks, users encounter recognizable patterns even when the specific interface is new.

    At DataRobot, we've approached this challenge by systematizing our design process and standards as agent-aware artifacts. We've converted our Figma design system into machine-readable markdown files that agents can consume directly. Using Claude, we translated our visual design guidelines, component specifications, and interaction principles into structured text that can be dropped as context into AI tools like Cursor, V0, and Lovable.

    Translating design files into agent-aware artifacts like markdown files.

    This approach allows us to maintain design quality at scale. Instead of manually reviewing every generated interface, we encode our design standards upstream, ensuring that agents generate consistent, accessible, and brand-appropriate experiences by default.
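    As a flavor of what such a file might contain, here is an invented fragment (not DataRobot's actual guidelines) of the kind of agent-readable markdown that can be dropped into a tool's context window:

    ```markdown
    ## Buttons

    - One primary action per view; render it with the `Button` component, variant `primary`.
    - Labels are verb-first, sentence case, three words max ("Save changes", not "OK").
    - Destructive actions use variant `danger` and always require a confirmation step.
    - Never remove focus outlines; minimum touch target is 44x44px on an 8px spacing grid.
    ```

    Because the rules are plain text, the same file can serve designers reviewing work and agents generating it.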
    We're already seeing this in action within DataRobot itself. Our AI Experts use these agent-aware design artifacts when building agentic applications, maintaining design consistency through our systematized guidelines while focusing on the unique business logic and user workflows.

    What this means for product & design leaders

    I previously wrote about how the boundaries between disciplines are blurring. What shape the product triad will take, or whether it remains a triad at all, is unclear. While it's likely that design will absorb many front-end development tasks (and vice versa), and some PMs will take on design tasks, I don't think any roles will disappear entirely. There will always be a need for specialists; while individuals can indeed do a lot more than before, there is only a certain amount of context that any of us can retain.

    So while we might be able to execute more, we still need people who can go deep on complex problems, along with a level of craft that becomes increasingly valuable as a differentiator. In a world where anyone can create anything, the quality of execution and depth of understanding that comes from specialization will be what separates good work from exceptional work.

    "The companies that are going to distinguish themselves are the ones that show their craft. That they show their true understanding of the product, the true understanding of their customer, and connect the two in meaningful ways." (Krithika Shankarraman, Product, OpenAI)

    The blurring boundaries of the product triad.

    As these boundaries blur and new capabilities emerge, it's worth remembering what remains constant. The hard problems remain hard:

    - Understanding people and their needs within complex contexts. What unmet needs are we addressing?
    - Building within interdependent systems and enterprise constraints. Will this work with existing architectures?
    - Aligning technical capabilities to business value. Is this solving a problem that matters?

    Our role as design leaders is evolving from crafting individual experiences to architecting systems that generate experiences. We're evolving from designing screens to designing systems that can make contextual decisions while maintaining design integrity. This changes our methodology fundamentally. Instead of designing for personas or generalised scenarios, we're designing systems that adapt to individual contexts in real time. Rather than creating single user journeys, we're building adaptive frameworks that change pathways based on user intent and behavior.

    User research also evolves: we still need to understand human needs, but now we must translate those insights into rules and constraints that agents can interpret. The challenge isn't just knowing what users want, but encoding that knowledge to maintain design quality across infinite interface variations. This fundamental truth doesn't change, but our methods for translating human understanding into actionable systems do. The uniquely human work of developing deep contextual understanding becomes more valuable, not less, as we learn to encode that wisdom for AI systems to use effectively.

    Design quality in an agent-first world

    This shift toward agent-generated experiences creates new design challenges. If agents are creating interfaces on demand, how do you maintain coherence across an organisation? How do you ensure accessibility compliance? How do you handle edge cases that training data didn't capture?

    We believe that part of the answer lies in creating foundational artifacts that both humans and agents can consume. At DataRobot, we are currently exploring:

    - Making documentation agent-aware using formats like MCP, agents.md, and llms.txt.
    - Converting our design system into foundational markdown files that codify principles and patterns, for use in AI development tools.
    - Creating automated checks for UI language, accessibility standards, and interaction patterns (a toy sketch of such a check follows this list).
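    To illustrate that third item, here is a toy version of an automated check, assuming generated UI arrives as an HTML string. The rules and names are invented for this sketch; a production check would use a real HTML parser and a much larger rule set.

    ```typescript
    // Toy lint pass over agent-generated markup: flags two common
    // accessibility violations before a human review ever happens.

    interface Violation {
      rule: string;
      snippet: string;
    }

    export function checkGeneratedUi(html: string): Violation[] {
      const violations: Violation[] = [];

      // Rule 1: every <img> must carry alt text.
      for (const img of html.match(/<img\b[^>]*>/gi) ?? []) {
        if (!/\balt\s*=/i.test(img)) {
          violations.push({ rule: "img-alt", snippet: img });
        }
      }

      // Rule 2: buttons with no visible text need an aria-label.
      for (const btn of html.match(/<button\b[^>]*>[\s\S]*?<\/button>/gi) ?? []) {
        const visibleText = btn.replace(/<[^>]*>/g, " ").trim();
        if (visibleText === "" && !/\baria-label\s*=/i.test(btn)) {
          violations.push({ rule: "button-label", snippet: btn });
        }
      }

      return violations;
    }

    // Example: checkGeneratedUi('<img src="a.png"><button><svg></svg></button>')
    // reports both the missing alt text and the unlabeled icon-only button.
    ```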
    This approach enables others in our organisation to build compelling applications with AI tools while adhering to our design system and brand consistency. But here's the crucial insight: while these AI-generated applications might look impressive, the polish can mask underlying UX challenges. As Preston notes:

    "AI tools excel at execution, but they don't replace the difficult UX work required to ensure you're executing the right thing."

    This creates a new challenge for design teams: when everyone is a builder, how do we ensure we build the right things and meet quality standards? We've struggled with knowing when to lean in. There are times, like creating demos or throwaway prototypes, when it's fine for design to be less involved. But there are critical moments when our involvement is important; otherwise, poor-quality experiences can ship to production. Our customers don't care about how or who created the products they interact with.

    The key is catching issues as far upstream as possible. This means the documentation and enablement materials that guide how people use and customise our templates have become the new products our design team is responsible for. By creating thorough agent-aware guidelines and design system documentation, we can ensure higher-quality output at scale. But we still need quality checks without slowing down the process too much. We're still learning how to balance speed with standards: when to trust the system we've built and when human design judgment is a must-have.

    Riding the wave

    The last few years have felt like a rollercoaster because they have been. But I believe our job as designers is to lean into uncertainty, to make sense of it, shape it, and help others navigate it.

    Like Cooper in Interstellar, we've recognised that what seemed like distant mountains are actually massive waves bearing down on us. The question isn't whether the wave will hit; it's already here. The question is whether we'll be caught off guard or whether we'll have prepared ourselves to harness its power.

    Here's what we've learned so far at DataRobot, for anyone navigating this transition:

    - Embrace the change & challenge orthodoxies. Try new tools and workflows outside your traditional lane. As roles blur, staying relevant means expanding your capabilities.
    - Build systems, not just products. Focus on creating the foundations, constraints, and contexts that enable good experiences to emerge, rather than crafting every detail yourself.
    - Focus on the enduring hard things. Double down on the uniquely human work of understanding needs, behaviours, and contexts that no algorithm can fully grasp.
    - Exercise (your) judgment. Use AI for speed and capability, but rely on your experience and values to decide what's right.

    AI doesn't make design irrelevant. It makes the uniquely human aspects of design more valuable than ever. The wave is here, and those who learn to harness it will find themselves in an incredibly powerful position to shape what comes next. This isn't about nostalgia for how design used to work; it's about taking an optimistic stance and embracing what's possible with these technologies, doing things and going further than we ever could before. As Andy Grove put it perfectly:

    "Don't bemoan the way things were, they will never be that way again. Pour your energy, every bit of it, into adapting to your new world, into learning the skills you need to prosper in it, and into shaping it around you."

    John Moriarty leads the design team at DataRobot, an enterprise AI platform that helps AI practitioners build, govern, and operate agents and predictive and generative AI models.
    Before this, he worked in Accenture, HMH, and Design Partners.

    "From products to systems: The agentic AI shift" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • The Two Most Surprising Things About Apple's New 'Workout Buddy'
    lifehacker.com
    This week I did over a dozen workouts with Apple's new Workout Buddy. I ran, I walked, I strength trained, and even did a little indoor cycling. I've learned a few things, but the strangest is that I didn't need an Apple Watch for any of it.

    Workout Buddy is an AI-powered feature that sends a little voice into your headphones to motivate and congratulate you as you're working out. Apple touted Workout Buddy as a feature of watchOS 26 and promoted it among the features of the new Series 11 Apple Watch, so you'd be forgiven for thinking it's part of the Apple Watch, specifically. But that's not what I found.

    How to use Workout Buddy without an Apple Watch

    As I found when I took a supported Apple Watch out for a trail run with an old iPhone (a 12 Mini), Workout Buddy requires a phone that supports Apple Intelligence, so I didn't have access to it then. OK, fine, it needs a newer watch and a newer phone. (Or so I thought.) Eventually I got my hands on a 16 Pro and, yep, was able to enable and use Workout Buddy.

    But this week, with a Series 11 Apple Watch on my wrist and watchOS 26 installed, I discovered something. I could power off the watch, or even leave it at home, and still get Workout Buddy. Here are a few things I tried, all of which got me Workout Buddy:

    - Starting a workout from the Series 10 or Series 11 Apple Watch
    - Starting a workout from the Fitness app (you can do that now!) with the Powerbeats Pro 2 headphones paired
    - Starting a workout from the Fitness app without any other Apple products in range, just a Coospo heart rate monitor and some Shokz headphones
    - Starting a workout from the Fitness app with just Shokz headphones paired (no heart rate monitor, since it was a GPS-enabled walk)

    The only configuration that wouldn't give me Workout Buddy was using the Fitness app without headphones paired. It's serious about needing headphones, but they can be paired to either the Watch or the iPhone.

    Workout Buddy is more of a chipper sidekick than a coach

    I hoped Workout Buddy might provide some kind of coaching or workout guidance, but found that's not quite what it's there for. The biggest difference between having Workout Buddy on versus off during a run is that, with Workout Buddy, you get your splits read to you in a more conversational voice.

    The main advantage of Workout Buddy is that it gives you a check-in at the beginning and end of your workout to let you know where you stand on your goals and progress for the day and the week, and it will call out any notable recent achievements. For example, at the start of pretty much every workout this week, whether running, walking, or strength training, it congratulated me on running my fastest-ever 5K last Tuesday. It also let me know I logged at least 16 workouts every week for the past four weeks, which is very consistent of me.

    The workout count seems to be correct (I log a lot of short workouts for device testing), but the 5K callout is wrong. Last Tuesday I earned a 5K badge, but that's just for logging a run of more than five kilometers, not for running my fastest 5K. According to the Fitness app (remember, the same app that contains Workout Buddy), my fastest 5K was in July of 2021. Besides those hallucinations, the information seems to be reasonable.
    The overly enthusiastic voice of Workout Buddy always tells me at the start of each workout where I stand on my ring-closing goals. I need 22 more minutes to close my Exercise ring, it might say, or 37 more calories to close the Move ring. At the start of a run, it will tell me how many miles I've already run this week. And if I have music playing, it will name-check the band, seemingly just to let me know it can read that data. "Get into the rhythm with Fleetwood Mac!" it told me once, just as a Fleetwood Mac song was fading out.

    Overall, I find the goal-oriented check-ins useful; knowing I have 22 minutes left on my exercise goal does make me more likely to extend my workout if I was only going to do a 20-minute one. The conversational voice giving me my mile splits is a bit nicer than hearing the generic, more robot-like voice. And if I had run my fastest 5K recently, I'd probably love to be reminded about it at every opportunity.
  • PlayStation Pulse Elevate portable speakers are coming for your desktop in 2026
    www.engadget.com
    Sony's lineup of gaming-focused audio devices is growing with the addition of the PlayStation Pulse Elevate wireless speakers. They work with PC, Mac, PlayStation 5, and PlayStation Portal, and they support Bluetooth and Sony's proprietary PlayStation Link wireless connection scheme. The Pulse Elevate speakers come in white or black, and they're due to hit the market in 2026. There's no word on price just yet.

    The Pulse Elevate speakers can be set on charging stands when playing at your desk, or they can be disconnected and used in portable mode. When not docked, they have (an unspecified number of) "hours of battery life," according to Sony's hype trailer. The speakers support 3D audio, they can be tilted back, and they have an integrated mic with noise reduction, planar magnetic drivers, and a built-in woofer.

    The PlayStation Pulse Elevate speakers join Sony's Pulse Elite gaming headset and Pulse Explore earbuds. The earbuds retail for $200 and the headset goes for $150.
  • Are you tired of juggling too many tasks while trying to grow your business? You're not alone! Many small-business owners feel overwhelmed by the thought of automation, believing it takes a lot of time, money, and technical know-how. But what if I told you that automation can be a game-changer for your efficiency without the headache?

    Imagine simplifying your workflows and giving yourself more freedom to focus on what truly matters. The key is to blend the power of AI with human insight, ensuring that the tools you use actually work for you—not against you.

    Why not take that first step toward a more streamlined business today? What’s holding you back from embracing automation? Let’s discuss!

    #AutomationJourney #SmallBusinessSuccess #EfficiencyBoost #EntrepreneurMindset #WorkSmart
  • Microsoft unveils advanced AI cooling which lowers heat, cuts energy use - and could lead to more powerful data centers
    www.techradar.com
    New Microsoft cooling breakthrough could reshape how AI chips handle rising heat inside future dense data centers.
  • Hey everyone! Ever wondered how NOT to build a computer? Well, this video has got you covered with some hilariously wrong moves!

    Join Linus as he teams up with Xavier from Chicago, who’s built a whopping 1.5 computers before! With the help of ASUS and some epic hardware, this journey is a blend of creativity and chaos. We’re all about learning from our mistakes, and this video proves that building a computer can be both fun and educational!

    Trust us, you don’t want to miss the laughs (and the lessons)!

    Check it out here: https://www.youtube.com/watch?v=LuxTsP23reU

    #ComputerBuilding #TechFails #ASUS #Gaming #ROGRIGReboot
  • Instagram now has 3 billion monthly active users
    www.cnbc.com
    Meta said that Instagram now has 3 billion monthly active users.