UXDESIGN.CC
Beyond the Model: a systems approach to AI product design
Integrating AI from technical components to user experience.

Reading Paz Perez’s “The Rise of the Model Designer” offers a clear and accessible perspective on the current wave in AI product development. She makes an interesting case for why designers should step beyond the interface and help shape the very behavior of AI agents, an argument I fully support. Her call for designers to “get a seat at the table” in model development is timely and necessary to help shape this major shift in society.

Yet, as I reflected with a pen in hand, or shall I say, keyboard in hand, I found myself looking for a broader, more systemic view of her approach. This is a complex journey we’re all on together, shaping not only new roles but also new ways of thinking about design and language in this emerging and fascinating era.

The designer’s opportunity

Throughout her article, Perez encourages designers to take an active role in shaping both the interface and the underlying model. She argues that this dual focus is essential for creating AI products that truly serve people’s needs. Her perspective is a timely reminder that the future of design is about more than screens; it’s about shaping the intelligence that powers our digital experiences.

The article focuses on how we should develop great writing skills to craft great prompts and align the LLM’s behaviour with user input. The author rightly emphasises cross-collaboration with engineers and the importance of a “feedback loop” to refine the agent’s performance. This refinement can be done in different ways that we, as a collective, are only starting to explore and understand.

However, in this narrative, the model is placed at the centre, almost isolated: a brilliant mind, perhaps, but one without a body or environment.
While Perez guides us through optimizing the AI’s knowledge and thinking, in my opinion, focusing solely on this approach can lead us to overlook crucial system dynamics.

Consider customer service AI agents: designers often focus on refining response tone and troubleshooting capabilities, but at times we overlook critical system integration factors. The AI agent needs seamless connection with customer data systems, smooth handoffs to human agents, and adaptability to seasonal support demands and volume fluctuations. As Qian Yang et al. (2020) note, these system elements significantly impact user experience regardless of how well-crafted the prompts are.

The missing layer: designing for the whole system

AI agents, like any product, exist within ecosystems. They are actors in complex, evolving gardens. User workflows, organisational processes, and even societal norms shape these models’ outputs. Think of the healthcare sector: a medical AI deployed in a hospital never simply provides clinical recommendations in isolation. Its outputs are influenced by triage rules (giving priority to certain patients), documentation requirements (conforming to billing and legal standards), and cultural attitudes toward patient autonomy (different treatment options offered based on local medical practice norms).

The same model applied in different hospitals might produce different recommendations, not only due to its core capabilities but because the surrounding ecosystem shaped what questions were asked and how its outputs were interpreted and implemented.

Adapted from “If Multi-Agent Debate is the Answer, What is the Question?”

Their impact is not limited to how accurate their responses are or how well curated their prompts are, but extends to how they reshape work, impact trust, and introduce new ethical dilemmas. If we focus solely on the model, we may be ignoring problems that only become visible when the AI operates in real-world contexts.
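To make the customer-service example concrete, here is a minimal, purely illustrative sketch of the idea that the model call is just one node in a larger system. Every name in it (the CRM lookup, the confidence threshold, the queue-depth policy) is a hypothetical stand-in, not a real API: the point is that handoffs and data integration shape the user’s experience as much as the model’s reply does.

```python
from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    handled_by: str  # "ai" or "human"

def fetch_customer_record(customer_id: str) -> dict:
    # Stand-in for a CRM lookup; a real system would call a data service.
    return {"id": customer_id, "plan": "pro", "open_tickets": 2}

def model_answer(question: str, record: dict) -> tuple[str, float]:
    # Stand-in for the LLM call; returns a draft reply and a confidence score.
    confidence = 0.9 if "password" in question.lower() else 0.3
    return f"[{record['plan']} plan] Suggested fix for: {question}", confidence

def handle(question: str, customer_id: str, queue_depth: int) -> Reply:
    record = fetch_customer_record(customer_id)   # customer-data integration
    if queue_depth > 100:                         # volume-fluctuation policy
        return Reply("High demand: you are in the queue.", "human")
    draft, confidence = model_answer(question, record)
    if confidence < 0.5:                          # human-handoff threshold
        return Reply("Routing you to a support agent.", "human")
    return Reply(draft, "ai")
```

Even in this toy version, two of the three outcomes are decided by the surrounding system (queue load, handoff policy) rather than by the model, which is exactly the layer that prompt refinement alone cannot reach.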
Sometimes business priorities push against user needs, producing decisions that users can’t understand or even question. There is a real risk of creating negative effects that weren’t identified during testing.

A systems-thinking approach to AI product design

When we talk about “designing for the whole system, not just the model,” we’re advocating for a holistic, end-to-end approach to AI product design: one that recognizes how every stage of the AI lifecycle impacts user experience, trust, and long-term product value.

This mindset draws from systems thinking, which encourages us to look beyond isolated components (like the AI model itself) and instead see the interconnected web of data, processes, people, and policies that shape the final product. This perspective aligns with Rahwan et al.’s (2019) argument for treating AI as an ecological rather than purely technical phenomenon: we cannot understand AI behavior in isolation from the social, organizational, and physical environments in which it operates. In other words, machines increasingly operate with a high degree of autonomy in the same environments as humans, so understanding their behavior in context is essential.

For designers, this means shifting from a model-centric to an ecosystem-centric approach. It’s worth acknowledging that designers are sometimes boxed into interface work, without any system-level influence. As Yang et al. (2023) note in their comprehensive study of AI design practice, “designers frequently encounter organizational barriers that limit their ability to influence algorithmic decisions, despite being uniquely positioned to advocate for user needs within technical systems”. Even when designers are limited to interface-level work, we can still apply systems thinking through what Dove et al.
(2022) call “interface-mediated advocacy.” For instance, designers can document system friction points experienced by users, make visible the connections between interface decisions and broader organizational processes, and advocate for the “systemic touchpoints” Liao et al. (2024) identified: the key moments where users experience the consequences of upstream AI decisions. Designers who consistently frame interface challenges as system-level concerns gradually expand their influence beyond traditional UX boundaries, even in technically dominated organizations.

I can’t help but think: what if, as designers, we created a specific mapping methodology for AI interaction lifecycles? Perhaps something that visualizes data flows and “meaning flows”, analyzing how interpretations and decisions evolve as information moves between model, interface, and user contexts. How do users make sense of the output, and how does it change their behavior? This systems-thinking lens can help us reveal the friction points and ethical dilemmas that prompt engineering alone can’t address.

A real-world example: systems thinking in educational AI

Imagine deploying an AI-powered feedback tool for students. If you only design the model, you might optimize for grading accuracy. But if you were mapping the whole system, you’d consider:

- How is student data collected, and is it representative?
- How are feedback explanations presented so students understand and trust them?
- What happens if a student disagrees with the AI’s assessment?
- How is the system monitored for drift or bias as new cohorts use it?

There are so many questions and factors to consider, so where do we begin? Drawing from my ongoing research on this topic, here are some recommendations for product designers seeking to clarify their approach to AI systems design.

1. Adopt a “First Principles” approach to AI design

Above: Balancing people-first and technology-first thinking.
Illustration by Thoka Maer.

This means breaking down problems to their most basic elements and questioning every assumption, especially about who benefits from automation and why. Rather than accepting the status quo or relying on existing models, designers should start by examining the core needs and values at play: who gains from automating a process? This question is well stated in Google’s PAIR guidelines. Additionally, designers must consider transparency at different levels: what an expert user needs to understand about the AI’s reasoning may be very different from what a novice or a child needs.

2. Create system visualizations

Graphic by author.

Create simple visual diagrams that show the boundaries of an AI system and how it connects with users and other systems. These visuals help everyone understand what the AI is responsible for and what lies outside its control. It’s also important to highlight areas where the AI’s behavior might be unpredictable or uncertain, using clear markers or colors to show these zones. Sharing these diagrams with stakeholders makes complex AI systems easier to grasp, encourages open discussion, and helps the team agree on where human oversight is needed. This builds shared understanding and leads to better, more trustworthy AI products.

3. Practice “temporal design”

As designers, we need to consider how AI relationships evolve. Unlike static products, AI systems change through use, which calls for design patterns that anticipate and guide this evolution. For example, how might interfaces communicate the system’s growing understanding of user preferences without creating uncanny experiences? How do we design for the changing nature of trust as users become more familiar with AI capabilities and limitations?

Design implications: how to communicate learning without creating uncanny experiences.

The question that matters

Are you designing for a model, or are you designing for a system?
Because in the end, users don’t experience models; they experience systems. And it’s the quality of that system, not just the capabilities of the model, that will determine whether your AI product thrives or fails.

If we want to shape AI that truly serves people, let’s design not just for the brilliance of the agent, but for the complexity of the world it operates in. This shift goes from designing interfaces to designing relationships and environments, a much more holistic approach to AI product design where human and machine intelligence grow together.

References

1. Kuang, C., & Fabricant, R. (2019). User Friendly: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play. Macmillan.
2. Zhang, H., Cui, Z., & Wang, X. (2025). If Multi-Agent Debate Is the Answer, What Is the Question?
3. Gray, C. M. (2016). “It’s More of a Mindset Than a Method”: UX Practitioners’ Conception of Design Methods. CHI Conference on Human Factors in Computing Systems.
4. Yang, Q., Steinfeld, A., Rosé, C., & Zimmerman, J. (2020). Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. CHI Conference on Human Factors in Computing Systems (CHI ’20), Honolulu, HI, USA. https://doi.org/10.1145/3313831.3376301
5. Holmlid, S. (2009). Participative, Co-operative, Emancipatory: From Participatory Design to Service Design. First Nordic Conference on Service Design and Service Innovation.
6. Subramanian, H., Maher, M. L., & Mahajan, S. (2022). The Role of Design Thinking in AI Implementation: A Case Study Analysis. International Journal of Design.
7. Perez, P. Designers: AI needs context.
How UX teams should embrace data to… | by Paz Perez | UX Collective.
8. Clement, T. AI product design: Identifying skills gaps and how to close them. UX Collective.
9. Exploring UX Design Through the Lens of AI: A Novel Perspective. AI/LLM.

Beyond the Model: a systems approach to AI product design was originally published in UX Collective on Medium.