Using simulation models in UX research

Why it’s time we take behavior seriously

Users in an app behave like a complex system. The system’s behavior isn’t just the sum of its parts — it’s emergent. That means we can’t predict what the group will do by only studying individuals. Just like birds: each one may be predictable, but their flocking behavior isn’t something you’d guess from a single bird.

(Photo by James Wainscoat on Unsplash)

You’ve probably felt it. Your research gets ignored, your methods can’t answer the questions being asked, and your role feels more like a presentation decorator than a decision-maker. It’s not just you — something deeper is off.

Let’s be honest: UX research didn’t collapse — it was never solid to begin with. From the start, it’s been shaped by shallow interpretations, more focused on fitting into design teams than standing on solid methodological ground.

It borrowed the language of science, psychology, and behavioral research — but only the surface-level tools, never the depth. It borrowed from the social sciences — for example, surveys from psychology and sociology, qualitative approaches from ethnology — and also borrowed usability testing from industrial design. But in all cases, it left behind the frameworks, the theoretical grounding, and the hard questions that came with them.

Most of UX research sticks to surface-level methods. But beneath the surface lies a deeper set of tools from the social sciences — powerful, underused, and long overdue.

(Original photo by SIMON LEE on Unsplash, modified by the author)

What we got instead was something that looked analytical but rarely was. Something collaborative, friendly, and workshop-compatible — but not always meaningful. Something easy to sell to stakeholders — but hard to defend under scrutiny.

And now we’re living with the fallout. UX research didn’t fall from grace — it was never built right. A wave of layoffs has hit researchers harder than most, because the value of our work has been reduced to surface-level activities — often nothing more than fancy post-its, sticky walls, and shallow observations that sound replaceable — and in some cases, are.

UX research is stuck. We keep hitting the limits of our toolkit. We say no to questions we should be able to answer — not because the questions are bad, but because our methods can’t handle them. And when that happens, companies stop asking.

Take companies like Spotify. AI is now personalizing experiences with incredible precision, and in many cases replacing the need for human-driven personalization. Researchers are being laid off not because their role is obsolete, but because their toolkits haven’t kept pace. They could have stayed relevant — even in AI-heavy environments — if they had adapted, learned new methods, and shown how research can guide, challenge, or complement what AI alone can’t do.

That’s on us. As researchers, part of our job — whether or not we’re paid for it — is to explore new tools. Not just to learn new workshop formats or frameworks, but to actually extend what’s possible in research. If we don’t, we risk fading out while the rest of the org moves forward.

It’s not fair. But it’s real. And it means we need to push beyond traditional methods — not by inflating our value with vague storytelling, but by actually expanding our capabilities.

And thanks to UXR’s shallow borrowing from multiple disciplines without their depth, there’s now an entire world of underused analytical methods sitting just outside our field. One of the most overlooked? Simulation modeling.
These models let us simulate users — their behaviors, decisions, mental states, and more — and use those simulations to predict outcomes, test hypotheses, and validate assumptions. They treat users as complex, dynamic systems — not just averaged-out segments or simplified down to percentages and cherry-picked quotes.

What are simulation models?

Simulation models are simplified, programmable systems that help us test ideas about how users behave — without actually having to run a real study. Instead of asking users what they might do, you tell the model how they might act, and then simulate what happens when lots of users behave in those ways over time. These models are widely used in advanced social sciences — like sociology, political science, and behavioral economics — to study how complex systems evolve over time, especially when real-world testing is too slow, risky, or impractical.

In these models, users are often treated as “agents” — units that follow certain behavioral rules, for example “click if X is true” or “drop off if waiting more than 10 seconds.” You set these rules based on your assumptions, data, or existing research results.

The model then simulates thousands (or even millions) of interactions between users and the system. The goal isn’t to predict the future with precision — it’s to explore what might happen under different conditions, and whether your hypothesis holds up when scaled.

You can simulate behavior over time, track events, test flow changes, model lifecycle transitions, or observe group-level effects like tipping points (moments when a small change leads to a big, sudden shift in user behavior) and virality — all without ever needing a live A/B test or full rollout.
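To make the idea of rule-based agents concrete, here is a minimal sketch in Python (one possible tool among many; nothing here prescribes it). The single behavioral rule, the wait-time distribution, and the patience threshold are all invented assumptions that you would replace with your own data or hypotheses.

```python
import random

def simulate_flow(n_users=10_000, mean_wait=8.0, patience_mean=10.0, seed=42):
    """Simulate users who follow one rule: drop off if the wait exceeds their patience."""
    rng = random.Random(seed)
    converted = 0
    for _ in range(n_users):
        wait = rng.expovariate(1 / mean_wait)      # how long this user actually waits
        patience = rng.gauss(patience_mean, 3.0)   # this user's personal tolerance
        if wait <= patience:
            converted += 1
    return converted / n_users

# Compare two hypothetical flow variants before ever running a live test.
print("slow flow:", simulate_flow(mean_wait=12.0))
print("fast flow:", simulate_flow(mean_wait=6.0))
```

The absolute numbers mean nothing on their own; what matters is how sensitive the outcome is to the assumptions you feed in, and that is exactly the kind of question a simulation lets you ask before touching real users.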
Why models?

UX research, as it’s currently practiced, is split between two worlds — and both have serious problems.

In small companies

For teams with a limited user base, UXR is often out of reach. These teams usually rely on qualitative methods like interviews, open-ended surveys, or usability tests. While these give insights into how users think and behave, they come with major limitations. They don’t generalize. They don’t predict. And they’re not reliable for long-term decisions. On top of that, many small teams don’t even have stable access to their users — they can’t run constant tests or track behavior over time.

Models help in both cases. When you can’t run a study, a model lets you explore what would happen if your assumptions were true. You can test hypotheses, simulate user flows, and make better calls — without having to get users on the line.

In large companies

Even in companies that do run regular research, most of what we gather is biased.

Participation bias: Most data comes from users who agree to take part in research. But what about those who never respond? They might behave completely differently — and they’re invisible to your team.

Self-report bias: People don’t act the way they claim. Most of what users tell us in surveys or interviews doesn’t match what they actually do in the product.

Contextless testing: Research methods like first-click or five-second tests remove the task from its context. Sure, they test clarity or attention — but the results aren’t grounded in real-world usage. They show an ideal version of behavior, not what actually happens.

Models don’t fix all of this. But they give you a second layer of insight. One that’s testable, repeatable, and grounded in how behavior scales — not just what one user said in a session.

Where modeling comes in

Modeling becomes useful exactly where traditional UXR starts falling short — when you’re working with limited access, incomplete data, or questions that research alone can’t answer.

When you have a hypothesis and want to know how far from reality it is → A model lets you play it out. You can define what should happen if the hypothesis were true, simulate it, and then compare that to what’s actually happening. If the gap is huge, you know your assumption needs work.

When you’re relying on self-reported data and suspect it’s off → You can simulate what real behavior would likely look like, based on system constraints or observed trends. If your survey says 80% of users did X, but the model shows that’s barely possible, there’s probably overreporting or misunderstanding in your responses.

When individual behavior doesn’t tell you much about what happens at scale → Modeling helps you observe how individual user decisions scale into system-wide patterns. It shows you how small frictions add up, where bottlenecks form, or how drop-offs compound. This is where patterns like tipping points or hidden churn clusters show up.

When you need to compare multiple design or policy options → Instead of running UXR methods like first-click testing, five-second tests, or expensive quantitative usability studies and waiting weeks for results, you can simulate different flows and estimate their impact in advance. This includes metrics that usually require A/B testing — like conversion rates or pricing sensitivity. Modeling helps you explore which variants are worth testing at all, and where your efforts are likely to hit diminishing returns or breaking points — moments when users suddenly start behaving very differently because something pushed them just past their limit. Models won’t give you certainty, but they’ll narrow the field — quickly, and without exposing users to half-baked experiments.

When you just don’t have the user data you need → Maybe your product is new. Maybe your user base is too small for meaningful patterns. Either way, models let you test assumptions and make early decisions without waiting for months of real usage data.

Modeling won’t give you the truth. But it gives you a way to test how wrong (or right) your current assumptions are — and that alone makes it worth using.

In this NetLogo flocking model, each agent (like a bird) follows just a few simple rules — move toward others, avoid collisions, and align direction. From those basics, coordinated group movement emerges. Agent-Based Modeling (ABM) helps us uncover how individual decisions create collective patterns like this. (Animation created by the author running the NetLogo model)

Some models and how we could use them

Different questions call for different models. This table maps common UX research goals to the types of simulation or predictive models best suited to answering them. (Table created by the author)

Simulation modeling isn’t one-size-fits-all. Different types of models serve different purposes depending on the kind of question you’re trying to answer. Here’s how to pick the right approach — explained simply.

Predicting user behavior

Models available: Agent-Based Modeling (ABM), Discrete Event Simulation (DES)

These models help when you want to simulate what individual users do and how their actions play out over time.

This NetLogo fireflies model shows how individual agents — each flashing at their own pace — eventually sync up over time. It doesn’t happen instantly, so watch closely as their light pulses slowly align. This is another example of how simple local rules can produce global coordination through Agent-Based Modeling. (Animation created by the author running the NetLogo model)

Agent-Based Modeling (ABM)

You simulate thousands of individual “agents” — think of them as virtual users — each following a simple set of rules. They can behave differently from each other, and they can interact. As those interactions pile up, larger patterns (like herd behavior, virality, or group churn) start to emerge.
→ Best for understanding how user behaviors interact and lead to bigger system-wide outcomes.
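Here is a hedged sketch of what such an agent-based model can look like in plain Python. Each agent gets its own adoption threshold (an invented assumption), encounters a random handful of peers per step, and adopts once enough of them have. The aggregate adoption curve is the emergent part.

```python
import random

class UserAgent:
    """One simulated user with its own adoption threshold."""
    def __init__(self, rng):
        self.adopted = False
        # Share of encountered peers that must have adopted before this
        # agent adopts too (an invented, heterogeneous assumption).
        self.threshold = rng.uniform(0.05, 0.6)

def run_abm(n_agents=2_000, seed_adopters=40, encounters=10, steps=30, seed=1):
    rng = random.Random(seed)
    agents = [UserAgent(rng) for _ in range(n_agents)]
    for agent in rng.sample(agents, seed_adopters):
        agent.adopted = True

    history = []
    for _ in range(steps):
        for agent in agents:
            if agent.adopted:
                continue
            peers = rng.sample(agents, encounters)        # who this agent "sees" this step
            share = sum(p.adopted for p in peers) / encounters
            if share >= agent.threshold:                  # simple local rule
                agent.adopted = True
        history.append(sum(a.adopted for a in agents) / n_agents)
    return history

print([round(x, 2) for x in run_abm()])  # watch for a sudden, emergent tipping point
```

No single agent “decides” to create the tipping point; it emerges from thousands of small threshold rules interacting, which is exactly the kind of pattern you cannot see by interviewing users one at a time.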
Discrete Event Simulation (DES)

Here you model a process: users move step-by-step through events — for example, seeing an update prompt, clicking, being redirected, updating, and returning. Each step can have delays, wait times, or drop-offs.
→ Best for modeling processes like onboarding, checkout flows, or support ticket systems.
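Below is a small discrete event simulation sketch using the third-party SimPy library (one common Python choice, not the only one). The funnel steps, delays, and drop-off probabilities are made up purely for illustration.

```python
import random
import simpy

# Hypothetical funnel: prompt -> click -> redirect -> update -> return.
# (mean delay in seconds, probability of dropping off at that step) are invented.
STEPS = [("see prompt", 2, 0.30), ("click", 1, 0.10),
         ("redirect", 5, 0.15), ("update", 20, 0.05), ("return", 3, 0.02)]

def user(env, results, rng):
    for name, mean_delay, p_drop in STEPS:
        yield env.timeout(rng.expovariate(1 / mean_delay))  # time spent on this step
        if rng.random() < p_drop:                           # user abandons the flow here
            results[name] = results.get(name, 0) + 1
            return
    results["completed"] = results.get("completed", 0) + 1

rng = random.Random(7)
env = simpy.Environment()
results = {}
for _ in range(5_000):
    env.process(user(env, results, rng))
env.run()
print(results)  # where do users pile up and drop off?
```

Swapping in shared resources (SimPy's Resource objects) would let you model bottlenecks such as a support queue, so waiting time becomes an output of the model rather than an input.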
How users move through the app

Models available: System Dynamics, Markov Models

These models work with user states — like “new user,” “active user,” “churned user” — and model how users move between these states over time.

System Dynamics (SD)

This looks at the flow of large groups of users as they move through the system. It works well when you want to see how one group affects another — like how increasing active users affects support load or churn.
→ Best for long-term trends and feedback loops, like retention vs. burnout.
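System dynamics models are usually built in dedicated tools, but the core idea is just stocks and flows updated over time. Here is a deliberately tiny Python sketch with invented rates, including one feedback loop: the more active users there are, the more a hypothetical support team is overloaded, and the faster users churn.

```python
# Minimal stock-and-flow sketch: every rate below is an invented assumption.
signups_per_week = 500
activation_rate = 0.6        # share of new users who become active each week
base_churn = 0.02            # weekly churn when support is not overloaded
support_capacity = 4_000     # active users one support team can handle

new, active, churned = 0.0, 1_000.0, 0.0
for week in range(1, 53):
    overload = max(0.0, active / support_capacity - 1.0)   # feedback loop:
    churn_rate = base_churn * (1 + 2 * overload)            # overload drives churn up
    activating = activation_rate * new
    churning = churn_rate * active
    new = new + signups_per_week - activating
    active = active + activating - churning
    churned += churning
    if week % 13 == 0:
        print(f"week {week:2d}: active={active:7.0f}  weekly churn rate={churn_rate:.3f}")
```

The interesting behavior here (churn creeping up as the support team saturates) comes from the feedback loop, not from any single user's decision, which is the essence of system dynamics.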
Markov Models

These are good for modeling step-by-step state changes, where each next step depends only on where the user is now. For example: active → inactive → churned.
→ Best for lifecycle modeling or estimating time to churn.
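A minimal Markov-chain sketch: a transition matrix with invented weekly probabilities, applied repeatedly to a cohort that starts fully active.

```python
# Weekly transition probabilities between lifecycle states (invented numbers).
# Each row reads: "given the user is in this state now, where are they next week?"
states = ["active", "inactive", "churned"]
P = {
    "active":   {"active": 0.85, "inactive": 0.12, "churned": 0.03},
    "inactive": {"active": 0.20, "inactive": 0.60, "churned": 0.20},
    "churned":  {"active": 0.00, "inactive": 0.00, "churned": 1.00},  # absorbing state
}

dist = {"active": 1.0, "inactive": 0.0, "churned": 0.0}  # a cohort of brand-new active users
for week in range(1, 27):
    dist = {s: sum(dist[prev] * P[prev][s] for prev in states) for s in states}
    if week in (4, 13, 26):
        print(f"week {week:2d}: " + ", ".join(f"{s}={p:.2f}" for s, p in dist.items()))
```

The same matrix can give you expected time-to-churn analytically, but even this brute-force iteration already answers questions like “what share of this cohort is still active after six months, if these weekly rates hold?”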
How users influence each other

Models available: Network Models

These models are useful when what one user does depends on the people around them.

Network Modeling

You build a graph of users and the connections between them (friends, follows, referrals, etc.). Then you simulate how behaviors spread across that network — like sharing a coupon or copying someone’s playlist.
→ Best for peer influence, word-of-mouth, or viral spread modeling.
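Here is a sketch of network-based spread using the third-party NetworkX library. The graph is a generic small-world stand-in for a real social graph, and the per-exposure share probability is an invented assumption.

```python
import random
import networkx as nx

def simulate_spread(n_users=1_000, p_share=0.08, n_seeds=5, seed=3):
    """Cascade-style spread of a hypothetical 'share a coupon' behavior."""
    rng = random.Random(seed)
    G = nx.watts_strogatz_graph(n_users, k=8, p=0.1, seed=seed)  # stand-in social graph
    adopted = set(rng.sample(list(G.nodes), n_seeds))            # users who start the trend
    frontier = set(adopted)
    while frontier:
        next_frontier = set()
        for user in frontier:
            for friend in G.neighbors(user):
                # Each exposed friend passes it on with probability p_share (assumption).
                if friend not in adopted and rng.random() < p_share:
                    next_frontier.add(friend)
        adopted |= next_frontier
        frontier = next_frontier
    return len(adopted) / n_users

for p in (0.05, 0.10, 0.15):
    print(f"share probability {p:.2f} -> reached {simulate_spread(p_share=p):.0%} of users")
```

Sweeping the share probability shows where word-of-mouth fizzles out and where it tips into broad reach, a threshold you cannot see by asking users whether they “would share.”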
Predicting what users will do next

Models available: Machine Learning (ML)

When you already have user data and want to predict what might happen next.

Machine Learning

You train a model using past behavior — who clicked what, who churned, who converted — and the algorithm finds patterns you can use to predict future behavior.
→ Best for scoring users, prioritizing leads, or forecasting churn/conversion.

These models don’t try to explain why users act the way they do. They just tell you what’s likely to happen next.
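For completeness, a scikit-learn sketch of this predictive flavor of modeling. The behavioral features and the churn labels here are synthetic; in practice they would come from your product analytics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for behavioral logs: sessions/week, support tickets, days since signup.
rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.poisson(4, n),        # sessions per week
    rng.poisson(0.5, n),      # support tickets filed
    rng.integers(1, 365, n),  # days since signup
])
# Invented ground truth: low activity and many tickets make churn more likely.
logit = -1.5 - 0.4 * X[:, 0] + 0.9 * X[:, 1] + 0.002 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
print("churn risk for a low-activity user:", round(model.predict_proba([[1, 3, 200]])[0, 1], 2))
```

Notice that the model only scores who is likely to churn; explaining why they churn is still research work.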
Each of these modeling approaches offers something traditional UXR can’t: a way to test ideas at scale before they hit the real world. Quantitative UX research aims at the same thing — narrowing down hypotheses and running large-scale validation — but simulation lets us go further when access to users is limited or behavior is too complex to test directly.

What’s next: Where to go from here

In future articles, I’ll walk through how models like Agent-Based Modeling and System Dynamics work in practice — with real case studies. We’ll look at how to build a simple simulation from scratch and what kinds of research questions it can help answer.

If you’re unsure which type of model fits your case, want help setting one up, or just want to explore how this might work in your product or team, feel free to reach out. Also, if you’re interested in a more tailored and long-term approach, I’ve written about why we should adopt a person-oriented approach to UXR — and how models like ABM align with that. You can check out that article for a deeper theoretical foundation, as well as a separate case study where I applied this approach in practice. I’m happy to discuss use cases, guide you through the setup, or offer deeper consultation.

Email: talieh.kazemi.esfeh@gmail.com
LinkedIn

Using simulation models in UX research was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.