Words Matter: Are Language Barriers Driving Quiet Failures in AI?
Author(s): Kris Naleszkiewicz

Originally published on Towards AI.

The AI revolution is upon us, transforming how we work, live, and interact with the world.

Yup. We know. We've all heard.

The media loves to cover spectacular successes and failures. But what about the quiet failures? The stalled projects. The initiatives that never quite get off the ground. Not because the technology doesn't work, but because something more human gets in the way.

Is this a familiar situation? You're discussing an AI solution with a client. They're excited. You're excited. The initial meetings go great. "Let's bring in more stakeholders!" they say. Soon, you've got the infrastructure team involved, five additional use cases under consideration, and another business executive at the table. The energy is palpable. Everyone sees the potential.

And then the fighting starts. Or maybe not fighting exactly, but something more subtle. Resistance. Friction. Suddenly, a project that had everyone thrilled hits a wall. Why? Everyone was excited. Everyone saw the value. What changed?

The answer might surprise you: it's language.

Not Python or Java or SQL, but the everyday language we use to talk about AI. We have no shortage of technical challenges, but the most unexpected roadblocks to AI adoption often stem from a fundamental shift in how we need to work together. AI isn't just the "new electricity" of our age; it's forcing unprecedented collaboration between groups that previously operated in comfortable silos.

When straightforward terms like "performance," "explainability," and "risk" carry such different meanings across teams, it's no wonder some AI projects struggle to gain traction. These concepts form the foundation for discussing, evaluating, and implementing AI systems, but their meanings shift depending on who's using them. This linguistic flexibility isn't just a communication challenge; it's a window into deeper questions about professional identity, authority, and the changing nature of expertise in an AI-augmented workplace. As we introduce increasingly complex technical terminology around AI, these fundamental translation gaps only widen, creating invisible barriers that technical solutions alone cannot address.

Setting the Stage

We have all heard "AI is the new electricity," but what that comparison misses is that when electricity transformed manufacturing, it didn't just change how things were powered; it fundamentally restructured how people worked together.

The same thing is happening with AI, but more broadly. Electricity mainly required engineers and operators to collaborate. AI? It's forcing everyone to work together in unprecedented ways.

AI Enthusiasm Slipping Away. Image generated in DALL-E by author.

Data scientists need domain experts to understand the problems they're solving. Business leaders need technical teams to understand the possibilities and limitations. Front-line workers need to collaborate with both groups to ensure solutions work in the real world.

And here's the kicker: none of these groups is particularly good at talking to the others. Not because they don't want to, but because they've never had to, at least not at this depth.

When Silos Crumble

Think about traditional technology implementations. You had clear handoffs: business teams defined requirements, technical teams built solutions, and users learned to adapt.
Everyone stayed in their lane, spoke their own language, and things mostly worked out.

AI doesn't play that game.

When data scientists build a model, they need to understand the business context, not just surface-level requirements. When business teams deploy AI solutions, they need to understand more than just features and benefits; they need to grasp concepts like model drift and edge cases. And users? They're not just learning new interfaces; they're learning to collaborate with AI systems in ways that fundamentally change how they work.

This isn't just cross-functional collaboration; it's forced interdependence. And it's causing friction in unexpected places.

LendAssist: An Illustrative Example

Let's introduce LendAssist, an LLM-based mortgage lending assistant that we will use to illustrate this new reality.

On paper, it's straightforward: an AI system designed to streamline mortgage lending decisions, reduce processing time, and improve accuracy. LendAssist's struggles highlight a critical challenge in AI adoption: seemingly straightforward terms can have radically different meanings for different stakeholders, leading to miscommunication and misunderstanding.

What constitutes "performance" for the data scientist building the product might be completely different for the loan officer working with the product or the customer interacting with it. Similarly, "explainability" can have varying levels of depth and complexity depending on the audience. And "risk" can encompass a variety of issues and concerns, from technical failures to ethical dilemmas and job displacement.

In the following sections, we'll explore these three key areas where language barriers arise.

The Expertise Paradox in AI Adoption

Before we dive into specific challenges with LendAssist, let's discuss the expertise paradox, a fundamental tension that underlies them all.

When LendAssist was first introduced, something unexpected happened. The most resistance didn't come from technophobes or change-resistant employees; it came from the experienced loan officers and underwriters. The experts whose knowledge the system was designed to augment became its biggest skeptics.

Why? The rapid rise of AI presents a unique challenge for experts in traditional fields. It's like suddenly finding yourself in a world where the game's rules have changed, and your hard-earned expertise might not translate as seamlessly as you'd hoped.

This expertise paradox is a psychological and organizational hurdle that often gets overlooked in the excitement of AI adoption. Traditional tech leaders feel threatened by the need to start over as learners. Subject matter experts struggle with AI systems that challenge their domain expertise. There is a tension between deep knowledge of traditional systems and the need to adapt to AI-driven approaches.

Organizations often face a delicate balancing act. They need to leverage their existing experts' valuable experience while embracing AI's transformative potential. This creates tension and uncertainty as teams grapple with integrating traditional knowledge with AI capabilities.

Through my work with AI implementations, I've noticed a consistent pattern in how experts respond to this challenge. It typically manifests as three competing pressures I've started mapping out to help teams understand what's happening.

Maintaining Credibility: "I still know what I'm doing"

Experts feel intense pressure to demonstrate that their knowledge remains relevant and valuable.
I've watched seasoned loan officers, for instance, struggle to show how their years of experience still matter when an AI system seems to make decisions in milliseconds.

Embracing Change: "I need to adapt to AI"

At the same time, these experts recognize they need to evolve. This isn't just about learning new tools; it's about fundamentally rethinking how they apply their expertise. I've seen loan officers transform from decision-makers to decision interpreters, but this shift rarely comes easily.

Preserving Value: "My experience matters"

Perhaps most importantly, experts need to find ways to show how their experience enhances AI capabilities rather than being replaced by them. The most successful transitions I've observed happen when experts can clearly see how their knowledge makes the AI better, not obsolete.

The key to successful AI adoption is finding a balance between these three pressures. Experts need to acknowledge the limitations of their existing knowledge, embrace the learning process, and find ways to leverage AI to enhance their expertise rather than viewing it as a threat.

Despite these challenges, there are inspiring examples of experts successfully navigating the expertise paradox. These individuals embrace AI as a tool to augment their expertise and guide others in adapting to AI-driven approaches.

GenAI Rollouts by Maturity. (McKinsey, 2025)

This could explain a puzzling trend in AI adoption. A McKinsey survey completed in November 2024, published in January 2025, and discussed in "Superagency in the Workplace: Empowering people to unlock AI's full potential" found that while one-quarter of executives have defined a GenAI roadmap, just over half remain stuck in the "draft being refined" stage. The technical capabilities exist, but organizations struggle with the human side of implementation. As technology continues evolving at breakneck speed, roadmaps must be built to evolve, but we should recognize that many of the barriers aren't technical at all.

These invisible psychological and organizational traps repeatedly derail even the most promising AI initiatives.

Performance: A Multifaceted Challenge

The data science team is ecstatic. LendAssist's new fraud detection model boasts a 98% accuracy rate in their meticulously crafted testing environment. Champagne corks pop, high-fives are exchanged, and LinkedIn posts are drafted. But the celebration is short-lived. The operations team pushes back, overwhelmed by a 30% increase in false positives that clog their workflows. Meanwhile, the IT infrastructure team grapples with the model's insatiable appetite for computing resources. And the business leaders? They're left wondering why those key performance indicators (KPIs) haven't budged an inch.

Welcome to the performance paradox of AI adoption, where impressive technical achievements often clash with the messy realities of real-world implementation.

"Performance" in AI is a chameleon, adapting its meaning depending on who's using the word. To truly understand this multifaceted challenge, we need to dissect performance through the lens of different stakeholders:

Business Performance: The language of executives and shareholders focuses on the bottom line. Does LendAssist increase revenue? Does it reduce costs? Does it improve customer satisfaction and retention? Does it boost market share?

Technical Performance: This is the domain of data scientists and engineers who are focused on metrics and algorithms. How accurate is LendAssist's risk assessment model? What are its precision and recall? How does it compare to traditional credit scoring methods on AUC and F1-score?

Operational Performance: This is the realm of IT and operations teams concerned with utilization, efficiency, and scalability. How fast does LendAssist process loan applications? How much computing power does it consume? Can it handle peak loads without crashing? How easily does it integrate with existing systems?

Human Performance: This is the often-overlooked dimension, focusing on the impact of AI on human workers. Does LendAssist make loan officers more productive? Does it reduce errors and improve decision-making? Does it enhance job satisfaction or create anxiety and resistance?

The sketch below makes this divergence concrete with a single made-up example.
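To see how these vocabularies can diverge on the very same model, here is a minimal sketch with entirely made-up numbers (LendAssist is illustrative, so nothing here is its real output), showing how a fraud model can post roughly 98% accuracy while still burying an operations team in false alarms:

```python
# A minimal sketch of why "98% accuracy" and "a flood of false positives"
# can both be true at once. All numbers are hypothetical.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Imagine 1,000 loan applications, 20 of them fraudulent (a 2% base rate).
y_true = [1] * 20 + [0] * 980

# A model that catches all 20 frauds but also mislabels 15 legitimate
# applications still scores ~98.5% accuracy on this imbalanced data.
y_pred = [1] * 20 + [1] * 15 + [0] * 965

print(f"Accuracy:  {accuracy_score(y_true, y_pred):.3f}")   # ~0.985 -- the data scientist's headline
print(f"Precision: {precision_score(y_true, y_pred):.3f}")  # ~0.571 -- 43% of flags are false alarms
print(f"Recall:    {recall_score(y_true, y_pred):.3f}")     # 1.000 -- no fraud slips through
print(f"F1 score:  {f1_score(y_true, y_pred):.3f}")         # ~0.727 -- the balanced view
```

Every stakeholder is quoting a real number from the same confusion matrix; they are simply quoting different ones, which is how teams end up talking past each other in perfectly good faith.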
But performance challenges are just the beginning. When different groups can't even agree on what good performance means, how do they explain their decisions to each other or, more importantly, to customers? This brings us to an even thornier challenge: the crisis of explainability.

Explainability: The Black Box Dilemma

A loan officer sits across from a client who's just been denied a mortgage by LendAssist. The client, understandably bewildered, asks, "Why?" The loan officer, with 20 years of experience explaining such decisions, finds herself staring blankly at the screen, unable to provide a clear answer. This isn't just about a declined mortgage; it's about a fundamental shift in professional authority, a moment where human expertise collides with the opacity of AI.

Explainable AI (XAI) is no longer a luxury; it's required to maintain trust, ensure responsible AI development, and navigate the evolving landscape of professional expertise. However, explainability itself has layers of understanding for different stakeholders, too:

Technical Explainability. Challenge: "Our model shows high feature importance for these variables." This might satisfy data scientists, but it leaves business users and clients in the dark. How does LendAssist's technical team explain the model's risk assessment to the data science team in a way that is both technically sound and understandable?

Process Explainability. Challenge: "But how does this translate to our existing underwriting workflow?" Integrating AI into established processes requires explaining how it interacts with human decision-making. How does the data science team explain LendAssist's integration into the loan approval process to the loan officers and underwriters, clarifying how it augments their existing expertise?

Decision Explainability. Challenge: "How do we explain this to the customer?" Building trust with clients requires clear, understandable explanations of AI-driven decisions. How do loan officers explain LendAssist's loan denial decision to the client in a transparent and empathetic way, without resorting to technical jargon?

Impact Explainability. Challenge: "What does this mean for our business and regulatory compliance?" Understanding the broader implications of AI decisions is crucial for responsible adoption. How do executives explain LendAssist's impact on loan origination volume, risk mitigation, and compliance to stakeholders and regulators in an informative and persuasive way?

Explainability isn't just about understanding; it's about authority. When professionals can't explain why decisions are made in their own domain, they lose not just control but their role as knowledge authorities. This can lead to resistance, fear of obsolescence, and difficulty integrating AI into existing workflows. The sketch below illustrates how wide the gap between the technical view and a customer-facing explanation can be.
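As a toy illustration of the distance between technical and decision explainability, here is a minimal sketch, with hypothetical feature names and contribution values rather than any real XAI library's output, that translates a model's signed feature contributions into plain-language reasons a loan officer could actually say out loud:

```python
# A sketch of the translation gap: the same model output, rendered for two
# audiences. Feature names and values are hypothetical, not LendAssist's API.

# What the data science team sees: signed feature contributions
# (SHAP-style values) for one denied application.
contributions = {
    "debt_to_income_ratio": -0.42,
    "recent_credit_inquiries": -0.18,
    "employment_tenure_months": +0.09,
}

# What the loan officer needs: plain-language reasons for each feature.
plain_language = {
    "debt_to_income_ratio": "your monthly debt payments are high relative to your income",
    "recent_credit_inquiries": "you applied for new credit several times recently",
    "employment_tenure_months": "your stable employment history counted in your favor",
}

print("Technical view:", contributions)

# Render the customer-facing view, ordered from most negative contribution up.
print("\nCustomer-facing view:")
for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    direction = "worked against the application" if value < 0 else "helped"
    print(f"- {plain_language[feature]} ({direction})")
```

The technical view is precise but unusable across a desk; the translated view trades precision for something the loan officer can stand behind, which is exactly the authority question raised above.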
Risk: Navigating Uncertainty

The CTO champions LendAssist as the future of lending, painting a picture of streamlined workflows and data-driven decisions. The compliance team, however, sees looming regulatory disasters, haunted by visions of biased algorithms and data breaches. Middle managers envision organizational chaos, with confused employees and disrupted workflows. Loan officers on the front lines of client interaction fear professional extinction, being replaced by an emotionless algorithm that spits out loan approvals and denials with cold, hard efficiency.

Same technology, radically different risk landscapes. However, these surface-level conflicts mask a deeper pattern that reveals how organizations and individuals process the fundamental changes AI brings.

The Hidden Psychology of Risk When Talking About AI

We can break down this complex risk perception into four distinct levels:

Level 1: "What if it doesn't work?" (Technical Risk). This is the most immediate and obvious concern. Will LendAssist's AI models be accurate and reliable? Will the system be secure against cyberattacks? Will it comply with relevant regulations? But beneath these technical anxieties lies a deeper fear: losing control over familiar processes. When compliance officers obsess over LendAssist's error rates, they are often expressing anxiety about shifting from rule-based to probability-based decision-making. They're grappling with the uncertainty inherent in AI systems, where outcomes aren't always predictable or easily explained.

Level 2: "What if it works too well?" (Operational Risk). This is where things get interesting. As AI proves its capabilities, concerns shift from technical failures to operational disruptions. How will LendAssist impact the daily work of loan officers and underwriters? Will it disrupt existing processes and create confusion? Will it lead to job losses? But the real fear here is more personal: will AI erode the value of human skills and experience? When loan officers worry about LendAssist processing applications too quickly, they're really asking, "Will speed make my experience irrelevant?" They're grappling with the potential for AI to diminish their role and authority in the lending process.

Level 3: "What if it works differently than we expect?" (Strategic Risk). This level delves into the broader implications of AI adoption. Will LendAssist have unintended consequences? Will it disrupt the competitive landscape? Will it create new ethical dilemmas? But the underlying fear is about professional identity. When managers resist LendAssist's recommendations, they are often protecting their identity as decision-makers more than questioning the AI's judgment. They're grappling with the potential for AI to redefine their roles and responsibilities, challenging their authority and expertise.

Level 4: "What if it changes who we are?" (Identity Risk). This is the deepest and most existential level of risk perception. Will LendAssist fundamentally change how we work and interact with each other? Will it alter our understanding of expertise and professional identity? Will it reshape our values and beliefs about the role of technology in our lives? This is where the fear of obsolescence truly takes hold. When senior underwriters label LendAssist "too risky," they're expressing fear about transitioning from decision-makers to decision-validators. They're grappling with the potential for AI to transform their sense of self-worth and professional purpose.

How technical and identity risks become intertwined makes AI risk assessment particularly challenging. When a loan officer says, "LendAssist's risk models aren't reliable enough," they might be expressing fear of losing their ability to make judgment calls, or anxiety about their changing role in the organization. The more organizations focus on addressing technical risks, the more they might inadvertently amplify identity risks by suggesting that human judgment is secondary to AI capabilities. As AI systems like LendAssist become more capable, they don't just present technical risks; they force us to reconsider what it means to be an expert in an AI-augmented world.

These layered challenges might seem insurmountable when viewed through a purely technical lens. After all, how do you solve a technical problem when the real issue lies in professional identity? How do you address performance concerns when different stakeholders define success in fundamentally different ways?

What I've found is that acknowledging these language barriers is the first crucial step toward overcoming them. When we recognize that resistance to AI adoption often stems from communication gaps rather than technological limitations, we open up new paths forward.

The Path Forward: A Practical Perspective

Once you recognize these language barriers, they become surprisingly manageable. We're not just dealing with technical challenges; we're dealing with translation challenges. We need to become multilingual in the different ways our stakeholders talk about and understand AI.

The organizations I've seen succeed with AI adoption aren't just technically sophisticated; they're linguistically sophisticated. They create a shared vocabulary that respects different perspectives. They recognize expertise transitions as a core part of implementation and build bridges between technical and professional languages. They value communication skills as much as technical skills.

Conclusion

This isn't just another factor to consider in AI adoption; it's often the factor determining go or no-go decisions.

The good news? While technical challenges typically require significant resources, language barriers can be addressed through awareness and intentional communication. We're all figuring this out together, but recognizing how language shapes AI adoption has been one of the most potent insights. It's changing how I approach projects, how I work with stakeholders, and, most importantly, how I help organizations navigate the fundamental changes AI brings to professional expertise.

The choice isn't between technical excellence and human understanding; it's about building bridges between them. And sometimes, those bridges start with something as simple as recognizing that we might mean different things when we say "performance," "explainability," or "risk."

Further Reading and Citations

"Why AI Projects Fail and How They Can Succeed" (RAND): By some estimates, more than 80 percent of AI projects fail, twice the failure rate of information technology projects. www.rand.org

"Keep Your AI Projects on Track" (Harvard Business Review): AI, and especially its newest star, generative AI, is today a central theme in corporate boardrooms. hbr.org

"Superagency in the Workplace: Empowering people to unlock AI's full potential" (McKinsey): Almost all companies invest in AI, but just 1% believe they are at maturity. www.mckinsey.com