
Building trust in opaque systems
Why the better AI gets at conversation, the worse we get at questioning it

Illustration by author

How do we know when to trust what someone tells us? In-person conversations give us many subtle cues we might pick up on, but when they happen with an AI system designed to sound perfectly human, we lose any frame of reference we may have.

With every new model, conversational AI sounds more genuinely intelligent and human-like, so much so that every day millions of people chat with these systems as if talking to their most knowledgeable friend.

From a design perspective, they're very successful: they feel natural, authoritative and even empathetic. But this very naturalness becomes problematic, because it makes it hard to distinguish when outputs are true rather than merely plausible.

This creates exactly the setup for misplaced trust. Trust works best when paired with critical thinking, but the more we rely on these systems, the worse we get at it, ending up in an odd feedback loop that's surprisingly difficult to escape.

The illusion of understanding

Traditional software is straightforward: click this button, get that result. AI systems are something else entirely. They're unpredictable, generating new output based on their training data; ask the same question twice and you might get completely different wording, reasoning, or even different conclusions each time.

The way these systems think and speak in such human ways feels like magic to many users. Without understanding what's happening under the hood, it's easy to miss that those magical sentences are simply the most statistically probable chain of words, making these systems something closer to a glorified Magic 8 Ball.

Back in 2022, when ChatGPT opened to the public, I was admittedly mesmerised by it, and after it proved useful in a couple of real-world situations, I started reaching for it more and more, even for simple questions and tasks.

Then one day I was struggling with a presentation segment that felt flat compared to the rest and asked Claude for ideas on how to make it more compelling. We came up with a story I could reference, one I was already familiar with, but there was one detail that felt oddly specific, so I asked for the source.

Part of the conversation with Claude (screenshot by author)

You can imagine my surprise when Claude casually mentioned it had essentially fabricated that detail for emphasis.

How easily I could have accepted that made-up information genuinely unsettled me, and it became the catalyst for me to really try to understand what I was playing with. What I didn't know at the time was that this behaviour is exactly what these systems are designed to do: generate responses that sound right, regardless of whether they're actually true.

Human-like, but not human

The core problem with building trust in AI is that the end goal of these systems (utility) works directly against the transparency needed to establish genuine trust.

To maximise usefulness, AI needs to feel seamless and natural; nobody wants to talk to a robot, and its assistance should be almost invisible. We wouldn't consciously worry about the physics of speech during a conversation, so why should we think about AI mechanics? We ask a question, we get an answer.

But healthy scepticism requires transparency, which inevitably introduces friction. We should pause, question, verify, and think critically about the information we receive. We should treat these systems as the sophisticated tools they are rather than all-knowing beings.
The biggest players, however, seem to be solving for trust by leaning into illusion rather than transparency.

Claude thinking indicator (screenshot by author, Sept 2025)

One key technique is anthropomorphising the interface through language choices. The many "thinking" indicators that appear while the system is really just preparing a response are a deliberate attempt at building trust, and it works brilliantly: these human-like touches make users feel connected and understood.

However, giving AI qualities like thinking indicators, a conversational tone, personality, and empathy creates two subtle yet critical problems:

#1 Giving AI human-like qualities makes us lose the uncertainty signals that would normally help us detect when something is off. Humans naturally show what they don't know through hesitation, qualifying statements ("I think", "maybe"), or simply by admitting uncertainty. These are very helpful signals that tell us when to be more careful about trusting what someone is saying.

AI systems, however, rarely do this: they can sound equally confident whether they're giving you the population of Tokyo (which they probably know) or making up a detail about a case study (which they definitely don't know). That's why detecting a mistake or a lie in these cases can be extremely hard.

#2 On top of this, users who feel a deeper connection to the AI are more likely to assume it will perform better. So we end up trusting it based on how it feels rather than how well it actually works.

The industry calls this trust calibration: finding the right level of trust so that users rely on AI systems appropriately, in just the right amount given what those systems can actually do. This is no easy feat in general, but because AI often sounds confident while being opaque and inconsistent, getting the balance right is extremely challenging.

So how are companies currently attempting to solve this calibration problem?

The limits of current solutions

One much-discussed solution is explainability: turning an AI system's hidden logic into something humans can make sense of, helping users decide when to trust the output (and, more importantly, when not to).

Yet this information only appears spontaneously in scenarios like medical or financial advice, or when training data is limited. In more routine interactions (brainstorming, seeking advice) users would need to actively prompt the AI to reveal its reasoning, as I had to do with Claude.

Imagine constantly interrupting a conversation to ask someone where they heard something. The chat format creates an illusion of natural conversation that ends up discouraging the very critical thinking that explainability is meant to enable.

Recognising these challenges, companies implement various other guardrails: refusal behaviours for harmful tasks, contextual warnings for sensitive topics, or outright restriction of certain capabilities. These aim to prevent automation bias, our tendency to over-rely on automated systems.

These guardrails, though, have significant limitations. Not only are there known workarounds, but they fail to account for how these tools are actually used by millions of people with vastly different backgrounds and levels of technical literacy.

The contradiction becomes obvious when you notice where warnings actually appear.
ChatGPT's disclaimer that it "can make mistakes. Check important info" sits right below the input field, yet I wonder how many people actually see it, and of those who do, how many take that advice. After all that effort to anthropomorphise the interface and create connection, a small grey disclaimer hardly feels like genuine transparency.

Although tiny, Claude's disclaimer appears more contextually, within the last reply provided (screenshot by author, Sept 2025)

Companies invest heavily in making AI feel more human and trustworthy through conversational interfaces, while simultaneously expecting users to maintain critical distance through small warnings and occasional guardrails. The result is that these warnings become another form of false reassurance, allowing companies to claim plausible deniability while essentially paying lip service to transparency and trust.

Scaffolding over crutches

This reveals a fundamental flaw in the current approach: it asks users to bear the weight of responsible use while providing tools designed to discourage the very scepticism that responsible use requires. This not only contradicts established UX principles about designing for your users' actual capabilities and contexts, but also ignores how trust is actually formed.

Trust isn't built through one single intervention, but systematically, across many touchpoints. So how might we approach this problem differently?

Photo by Ricardo Gomez Angel on Unsplash

A first step, I believe, would be ditching the seamless approach and rethinking friction. What if, instead of treating transparency as friction to reduce, design treated it as a capability to build upon? Instead of hiding complexity to fast-track utility, interfaces could gradually build users' ability to work effectively with AI systems, eventually teaching them not only how to use these tools responsibly, but when to trust them as well.

As a parallel, think scaffolding versus crutches. Current AI systems function more like crutches: they provide so much support that users become dependent on them. Users lean on AI for answers without developing the skills to evaluate them, and much like actual crutches, this helps in the moment but prevents the underlying capability (critical thinking, in this case) from getting stronger over time.

Designing transparency as scaffolding

In a scaffolding model, by contrast, AI systems could be far more flexible and adaptable, surfacing transparency and guidance based on the user's developing skills and the stakes of the decision.

For example, we could imagine different modes. A learning mode could surface uncertainty more explicitly within responses: alerts prompting users to verify claims the AI cannot back up directly, or invitations to take answers with a grain of salt. This could happen in expandable sections so as not to intrude on the conversation flow, and as users interact with these components, the interface could gradually reduce the explicit prompts while maintaining the underlying safeguards.

Quick and dirty explorations of a learning mode (by author)

For high-stakes decisions, the interface could default to maximum transparency, for example requiring users to verify factual claims with external sources before accessing final outputs. Visual indicators could distinguish between trained knowledge, recent search results, and generated examples, helping users understand where information comes from.
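To make the idea a little more concrete, here is a minimal, hypothetical sketch in TypeScript of how such an interface might decide when to attach a verification prompt to a piece of a response, based on its provenance label and the current mode. The type names, modes and threshold are illustrative assumptions of mine, not an existing API or any vendor's actual implementation.

```typescript
// Hypothetical provenance labels for parts of an AI response.
type Provenance = "trained-knowledge" | "search-result" | "generated-example";

// Assumed interaction modes, following the scaffolding idea above.
type Mode = "learning" | "standard" | "high-stakes";

interface ResponseSegment {
  text: string;
  provenance: Provenance;
  confidence: number; // 0..1, as reported or estimated by the system
}

// Decide whether the UI should attach a "verify this" prompt to a segment.
// Learning mode prompts more aggressively; high-stakes mode requires
// verification for anything that is not backed by a cited search result.
function needsVerificationPrompt(segment: ResponseSegment, mode: Mode): boolean {
  if (mode === "high-stakes") {
    return segment.provenance !== "search-result";
  }
  if (mode === "learning") {
    return segment.provenance === "generated-example" || segment.confidence < 0.7;
  }
  // Standard mode: only flag content the model itself labels as generated.
  return segment.provenance === "generated-example";
}

// Example usage: a fabricated case-study detail, like the one in my Claude
// anecdote, would carry a "generated-example" label and be flagged.
const segment: ResponseSegment = {
  text: "In the case study, engagement rose by 40% after the redesign.",
  provenance: "generated-example",
  confidence: 0.55,
};

console.log(needsVerificationPrompt(segment, "learning")); // true
```

The point of the sketch is not the specific rules but the shape of the design: provenance travels with the content, and the interface, not the user's vigilance alone, decides when to slow things down.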
This approach would treat AI as temporary support that builds user capabilities rather than replacing them. Instead of optimising for immediate task completion, scaffolding design would foster long-term competence by helping users develop verification habits and critical thinking skills.

Google's Gemini offers inline tips while images are being generated and then persists them on screen. This type of content is clearly distinguishable from the rest of the conversation and provides useful, contextual information based on the task the user is performing (screenshot by author, Sept 2025)

A trade-off worth making

Much of this goes against conventional product design principles around maximising ease of use. Adding these steps and indicators might seem like deliberate obstacles to user engagement because they are, but that's the point.

The friction introduced here would serve a different purpose than arbitrary barriers: it is protective and educational rather than obstructive. If designed mindfully, friction can help users treat AI tools as scaffolding rather than crutches, by developing the judgment skills needed to work safely with these systems.

That conversation with Claude taught me something crucial about the gap between how these systems are presented and what they actually are. We face a choice: immediate utility that undermines our critical thinking, or accepting some friction as the price of building people up rather than making them dependent, and of keeping our ability to think independently. The path forward isn't avoiding AI, but demanding better design that teaches us to use these tools wisely rather than depending on them entirely.

Footnotes

I'm aware that my example here is a pretty silly one compared to the amount of misinformation, bad advice and plainly incorrect tidbits people are potentially exposed to every day through these interactions. But aha moments work in mysterious ways.

Suggested reads

- Co-constructing intent with AI agents by Teno Liu
- The Psychology Of Trust In A World Where Products Keep Breaking Promises by Mehekk Bassi
- Designing for control in AI UX by Rob Chappell

Building trust in opaque systems was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.