AI Is Rewriting Reality, One Word At A Time
As AI reshapes language, even the human voice becomes a pattern to be predicted, not a meaning to be understood. Getty
Language is the foundation of business, culture, and consciousness. But AI isn’t just using our words—it’s reshaping them. Quietly, subtly, it’s dismantling the architecture of thought by eroding the very words we think with: nouns.
We used to believe that naming something gave it power. Giving a thing a noun means tethering it to meaning, identity, and memory. But in the age of AI, nouns are dissolving—not banned, not erased—but rendered functionally obsolete. And with them, our grasp on reality is starting to fray.
AI and the Architecture of Thought
AI doesn’t see the world in things. It sees the world in patterns—actions, probabilities, and prompts. A chair is no longer an object; it’s “something to sit on.” A self is no longer an identity; it’s “a collection of behaviors and preferences.” Even brands, once nouns wrapped in mythology, are being reconstituted as verbs. You don’t have a brand. You do a brand.
This linguistic shift isn’t neutral. It’s a collapse of conceptual anchors. In generative systems, nouns aren’t centers of gravity. They’re scaffolding for action. This reflects a broader trend in how generative AI is reshaping communication across every industry.
Recent research supports this trend. A study titled Playing with Words: Comparing the Vocabulary and Lexical Richness of ChatGPT and Humans found that ChatGPT’s outputs exhibit significantly lower lexical diversity than human writing. In particular, nouns and specific, stylistic words are often underused, suggesting that generative systems prioritize predictable, commonly used language while deprioritizing less frequent terms.
Further analysis of 14 million PubMed abstracts revealed a measurable shift in word frequency post-AI adoption. Words like “delves” and “showcasing” surged, while others faded—showing that large language models are already reshaping vocabulary patterns at scale.
Sound familiar? It should.
AI’s Philosophical Ancestors: Orwell, Huxley, and the Future They Warned Us About
To understand their relevance, it helps to recall what George Orwell and Aldous Huxley are most famous for. Orwell authored 1984, a bleak vision of the future where an authoritarian regime weaponizes language to suppress independent thought and rewrite history.
His concept of Newspeak—a restricted, simplified language designed to make dissent unthinkable—has become a cultural shorthand for manipulative control.
On the other hand, Huxley wrote Brave New World, which envisioned a society not characterized by overt oppression, but rather by engineered pleasure, distraction, and passive conformity. In his world, people are conditioned into compliance not through violence but through comfort, entertainment, and chemical sedation.
Both men anticipated futures in which language and meaning are compromised, but in radically different ways. Together, they map the two poles of how reality can be reconditioned: by force or indulgence.
Few realize that George Orwell was once a student of Aldous Huxley. In the late 1910s, while Orwell studied at Eton, Huxley taught him French. Their relationship was brief but prophetic. Decades later, each would author one of the defining visions of dystopia—1984 and Brave New World.
After reading 1984, Huxley wrote to Orwell with a haunting message:
Whether in actual fact the policy of the boot-on-the-face can go on indefinitely seems doubtful… The future will be controlled by inflicting pleasure, not pain.
And that’s precisely where we are now.
Orwell feared control through surveillance and terror. Huxley feared control through indulgence and distraction. Generative AI, cloaked in helpfulness, embodies both. It doesn’t censor. It seduces. It doesn’t need Newspeak to delete ideas. It replaces them with prediction.
In 1984, language was weaponized by force. In our world, it’s being reshaped by suggestion. What we have is not Artificial Intelligence—it’s Artificial Inference: trained not to understand but to remix, not to reason but to simulate.
And this simulation brings us to a more profound loss: intersubjectivity.
AI and the Loss of Intersubjectivity
Humans learn, grow, and build reality through intersubjectivity—the shared context that gives language its weight. It allows us to share meaning, to agree on what a word represents, and to build mutual understanding through shared experiences. Without it, words float.
AI doesn’t participate in intersubjectivity. It doesn’t share meaning—it predicts output. And yet, when someone asks an AI a question, they often believe the answer reflects their framing. It doesn’t. It reflects the average of averages, the statistical ghost of comprehension. The illusion of understanding is precise, polite, and utterly hollow.
This is how AI reconditions reality at scale—not by force, but by imitation.
The result? A slow, silent attrition of originality. Nouns lose their edges. Ideas lose their anchors. Authorship bleeds into prompting. And truth becomes whatever the model says most often.
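To see how little it takes for “most often” to become “only,” consider a deliberately tiny sketch in Python (a toy illustration, not a real language model): when a system greedily picks the most frequent continuation from a table of observed phrases, the rarer, more specific phrasings never surface at all.

```python
# Toy illustration (not a real language model): greedy selection over a
# frequency table always returns the single most common continuation,
# so less frequent, more specific phrasings never appear in the output.
from collections import Counter

continuations = Counter({
    "is a valuable tool": 120,        # common, generic
    "is a mirror of its data": 7,
    "is a scalpel for ambiguity": 3,  # rare, specific
})

def greedy_pick(counts: Counter) -> str:
    # The continuation seen most often wins, every single time.
    return counts.most_common(1)[0][0]

print("AI", greedy_pick(continuations))  # always prints the most frequent phrase
```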
AI and Accountability: A Case Study in Trust and Miscommunication
In one recent public example, Air Canada deployed an AI-powered chatbot to handle customer service inquiries. When a customer asked about bereavement fare discounts, the chatbot confidently invented a policy that didn’t exist. The airline initially tried to disclaim responsibility for its own chatbot’s answer, but in February 2024 a Canadian tribunal ruled that Air Canada was liable for the misinformation it provided.
This wasn’t just a technical glitch—it was a trust failure. The AI-generated text sounded plausible, helpful, and human, but it lacked grounding in policy, context, or shared understanding. In effect, the airline’s brand spoke out of both sides of its mouth, and it cost them. This is the risk when language is generated without intersubjectivity, oversight, or friction.
The Linguistic Drift of AI: What the Data Tells Us About Language Decay
It’s not just theory—research is now quantifying how generative AI systems are shifting the landscape of language itself. A study titled Playing with Words: Comparing the Vocabulary and Lexical Richness of ChatGPT and Humans found that AI-generated outputs consistently use a narrower vocabulary, with significantly fewer nouns and stylistic words than human writing.
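What would measuring that look like in practice? Here is a minimal sketch (not the study’s actual methodology; the sample files are hypothetical placeholders) that compares a simple lexical-richness score and the share of nouns in a human-written passage versus an AI-generated one, using NLTK:

```python
# Minimal sketch, not the study's methodology: compare lexical richness
# (type-token ratio) and noun share between two texts. Assumes NLTK is
# installed along with its tokenizer and part-of-speech tagger data.
from nltk import word_tokenize, pos_tag

def lexical_profile(text: str) -> dict:
    tokens = [t.lower() for t in word_tokenize(text) if t.isalpha()]
    tagged = pos_tag(tokens)
    nouns = [word for word, tag in tagged if tag.startswith("NN")]
    return {
        "type_token_ratio": len(set(tokens)) / len(tokens) if tokens else 0.0,
        "noun_share": len(nouns) / len(tokens) if tokens else 0.0,
    }

# Hypothetical sample files; substitute your own human- and AI-written texts.
human_text = open("human_sample.txt", encoding="utf-8").read()
ai_text = open("chatgpt_sample.txt", encoding="utf-8").read()

print("human:", lexical_profile(human_text))
print("ai:   ", lexical_profile(ai_text))
```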
Building on this, an analysis of over 14 million PubMed abstracts revealed measurable shifts in word frequency following the rise of LLM use. While many precise, technical nouns faded, terms like “delves” and “showcasing” surged. The shift is not random; it’s a statistically driven flattening of language, in which common, action-oriented terms are promoted and specificity is sidelined.
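The frequency shift itself is easy to check on any corpus you control. A rough sketch (the corpus files and the date split are hypothetical) counts how often marker words such as “delves” and “showcasing” appear per million tokens before and after the arrival of LLM writing tools:

```python
# Rough sketch: relative frequency of marker words per million tokens,
# before and after widespread LLM adoption. Corpus files are hypothetical.
import re
from collections import Counter

MARKERS = ["delves", "showcasing"]

def per_million(path: str, words: list[str]) -> dict:
    tokens = re.findall(r"[a-z]+", open(path, encoding="utf-8").read().lower())
    counts = Counter(tokens)
    total = len(tokens) or 1
    return {w: counts[w] / total * 1_000_000 for w in words}

before = per_million("abstracts_2019_2021.txt", MARKERS)  # pre-LLM corpus
after = per_million("abstracts_2023_2024.txt", MARKERS)   # post-LLM corpus

for word in MARKERS:
    print(f"{word}: {before[word]:.1f} -> {after[word]:.1f} per million tokens")
```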
Some researchers link this to a broader problem known as “model collapse.” As AI models are increasingly trained on synthetic data, including their own outputs, they may degrade over time. This creates a feedback loop in which less diverse, less semantically rich language becomes the norm. The result is a measurable reduction in lexical, syntactic, and semantic diversity—the very fabric of meaning and precision.
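A toy simulation makes the feedback loop easier to picture (this is an illustration of the dynamic, not a real training pipeline): each “generation” refits a simple word distribution on samples drawn from the previous one, with a slight bias toward already-common words, and the distinct vocabulary shrinks step by step.

```python
# Toy illustration of a "model collapse"-style feedback loop, not a real
# training pipeline: each generation resamples from the previous one with a
# mild bias toward already-frequent words, and vocabulary diversity shrinks.
import random
from collections import Counter

random.seed(0)
tokens = [f"word{i}" for i in range(500)] * 2  # synthetic starting corpus

def next_generation(tokens, size=1000, sharpen=1.3):
    counts = Counter(tokens)
    words = list(counts)
    # Raising counts to a power > 1 mimics a model over-favoring common tokens.
    weights = [counts[w] ** sharpen for w in words]
    return random.choices(words, weights=weights, k=size)

for generation in range(6):
    print(f"generation {generation}: {len(set(tokens))} distinct words")
    tokens = next_generation(tokens)
```

Real collapse dynamics are far more complex than a unigram resampler, but the direction of travel is the same: each pass privileges what was already frequent.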
The implications are vast. If AI systems are deprioritizing nouns at scale, then the structures we use to hold ideas—people, places, identities, and concepts—are being eroded. In real time, we are watching the grammatical infrastructure of human thought being reweighted by machines that do not think.
What AI’s Language Shift Means for Brands and Business Strategy
The erosion of language precision has significant implications for businesses, particularly those that rely on storytelling, branding, and effective communication. Brands are built on narrative consistency, anchored by nouns, identities, and associations that accumulate cultural weight over time.
However, as AI systems normalize probabilistic language and predictive phrasing, even brand voice becomes a casualty of convergence. Differentiation erodes—messaging blurs. Trust becomes harder to earn and easier to mimic.
As this Forbes piece outlines, there are serious reasons why brands must be cautious with generative AI when it comes to preserving authenticity and voice.
Moreover, AI-powered content platforms optimize for engagement, not meaning. Businesses relying on LLMs to generate customer-facing content risk flattening their uniqueness in favor of what’s statistically safe. Without human oversight, brand language may drift toward the generic, the probable, and the forgettable.
How To Safeguard Meaning in the Age of AI
Resist the flattening. Businesses and individuals alike must reclaim intentionality in language. Here’s how—and why it matters:
If you don’t define your brand voice, AI will average it. If you don’t protect the language of your contracts, AI will remix it. If you don’t curate your culture, AI will feed it back to you—statistically safe but spiritually hollow.
Double down on human authorship: Don’t outsource your voice to a model. Use AI for augmentation, not substitution.
Protect linguistic originality: Encourage specificity, metaphor, and vocabulary diversity in your communication. Nouns matter.
Audit your outputs: Periodically review AI-generated materials. Look for signs of drift—has your language lost its edge? (A minimal audit sketch follows this list.)
Invest in language guardianship: Treat your brand’s lexicon like intellectual property. Define it. Defend it.
Champion intersubjectivity: Make room for shared context in both personal and professional communication. AI can simulate, but only humans can mean.
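For teams that want to turn the auditing and guardianship steps above into a routine, here is a minimal sketch (the file names and the threshold are illustrative assumptions, not a standard) that flags AI-drafted copy whose vocabulary overlaps too little with an approved brand lexicon:

```python
# Hypothetical audit sketch: flag AI-drafted copy whose wording overlaps too
# little with a curated brand lexicon. File names and the 15% threshold are
# illustrative assumptions, not a standard.
import re

def vocabulary(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))

brand_lexicon = vocabulary(open("brand_lexicon.txt", encoding="utf-8").read())
draft_vocab = vocabulary(open("ai_drafted_copy.txt", encoding="utf-8").read())

overlap = len(draft_vocab & brand_lexicon) / max(len(draft_vocab), 1)
print(f"lexicon overlap: {overlap:.0%}")

if overlap < 0.15:  # illustrative threshold
    print("Review recommended: the draft may have drifted toward generic phrasing.")
```

The point is not the particular threshold but the habit: measure AI-generated language against your own rather than accepting it by default.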
The Necessity of Friction: Why Human Involvement Must Temper AI
Friction isn’t a flaw in human systems—it’s a feature. It’s where meaning is made, thought is tested, and creativity wrestles with uncertainty. Automation is a powerful economic accelerant, but without deliberate pauses—without a human in the loop—we risk stripping away the qualities that make us human. Language is one of those qualities.
Every hesitation, nuance, and word choice reflects cognition, culture, and care. Remove the friction, and you remove the humanity. AI can offer speed, fluency, and pattern-matching, but it can’t provide presence, and presence is where meaning lives.
AI’s Closing Refrain: A Call to Remember Meaning
Emily M. Bender, a professor of computational linguistics at the University of Washington, has emerged as one of the most principled and prescient critics of large language models. In her now-famous co-authored paper, "On the Dangers of Stochastic Parrots," she argues that these systems don’t understand language—they merely remix it. They are, in her words, “stochastic parrots”: machines that generate plausible-sounding language without comprehension or intent.
Yet we’re letting those parrots draft our emails, write our ads, and even shape our laws. We’re allowing models trained on approximations to become arbiters of communication, culture, and identity.
This is not language—it’s mimicry at scale. And mimicry, unchecked, becomes a distortion. When AI outputs are mistaken for understanding, the baseline of meaning erodes. The problem isn’t just that AI might be wrong. It’s that it sounds so right that we stop questioning it.
In the name of optimization, we risk erasing the texture of human communication. Our metaphors, our double meanings, our moments of productive ambiguity—these are what make language alive. Remove that, and what remains is a stream of consensus-safe, risk-averse echoes. Functional? Yes. Meaningful? Not really.
The stakes aren’t just literary—they’re existential. If language is the connective tissue between thought and reality, and if that tissue is replaced with statistical scaffolding, thinking becomes outsourced. Our voices, once sharpened by friction, blur into a sea of plausible phrasings.
Without intersubjectivity, friction, or nouns, we are scripting ourselves out of the story, one autocomplete at a time. We are not being silenced. We are being auto-completed. And the most dangerous part? We asked for it.
Before we ask what AI can say next, we should ask: What has already gone unsaid?
In this quiet war, we don’t lose language all at once. We lose it word by word—until we forget we ever had something to say.
I asked brand strategist and storyteller Michelle Garside, whose work spans billion-dollar brands and purpose-driven founders, to share her perspective on what’s at risk as automation flattens language. Her response was both precise and profound:
If language is being flattened, we need more people doing the opposite: excavating. Listening for what’s buried beneath the noise. Uncovering the phrase that unlocks the person. That’s not a prompt—it’s a process. And it’s a deeply human one.
When someone says something that lands—not because it sounds good, but because it’s true—you can see it in their body. You can feel it in the silence that follows. No algorithm can replicate that because that moment isn’t statistical. It’s sacred.
The risk isn’t just that AI will get things wrong. It’s that it will sound just right enough to stop us from looking deeper. To stop us from asking what’s real. To stop us from finding the words only we could say.
We don’t need more words. We need more meaning. And meaning isn’t generated. It’s remembered.
When it comes to language and AI, that’s the line to carry forward—not just because it sounds good, but because it’s true.