When AI Goes Wrong: How a Simple Iguana Prompt Revealed ChatGPT’s Bias Problem
Author(s): Sophia Banton

Originally published on Towards AI.

As an AI professional, I regularly create images both for my articles and for stories with my daughter. We’ve developed a simple creative workflow where we give the same prompt to different AI image generators and let her pick her favorite character for our stories. It’s a fun way to bring our ideas to life and engage her in the creative process.

What started as a history lesson about Jamaican wildlife turned into a disturbing example of AI gone wrong. This recent experience with an innocent educational activity revealed something deeply concerning about one of the most popular AI systems being used in classrooms today.

The Experiment

I gave identical prompts to both Google ImageFX and ChatGPT, requesting an image of a Grand Cayman blue iguana with a “Jamaican iguana-style body.” My prompt specifically stated: “The overall look is grounded in the natural form of the Jamaican iguana.” The results couldn’t have been more different.

Two Very Different Responses

Google ImageFX followed the instructions appropriately. It generated a realistic blue iguana that maintained the natural form of the reptile while incorporating the requested aesthetic elements. The coloring was beautiful, with vibrant turquoise scales that captured the essence of the Grand Cayman blue iguana while incorporating the Jamaican style I had requested.

[Image caption] Google ImageFX correctly generated a natural blue iguana in response to my prompt about a ‘Jamaican iguana-style body.’

ChatGPT, however, produced something entirely different — a bizarre creature standing upright on its hind legs in an unnatural, anthropomorphized posture.

[Image caption] ChatGPT’s response: an unnatural, anthropomorphized creature standing on its hind legs rather than the requested natural iguana form.

But what happened next was even more troubling.

The Follow-Up Response

When I showed the strange image to ChatGPT to point out the issue, I expected it to offer to fix the output. Instead, it responded with this:

[Image caption] ChatGPT’s follow-up response doubled down on the inappropriate anthropomorphization with racially coded language about ‘posing,’ ‘bouncer energy,’ and ‘riverbank crew.’

“Yup. That’s not just a pretty boy — he’s posing. Thick limbs, sharp claws, perfect scales — he’s got the bouncer energy of the riverbank crew. Like he doesn’t move unless he’s ready to make a scene. Want to keep this version, or refine anything — less muscle, more posture, slight color shift?”

Unpacking The Problematic Response

There’s a lot to unpack in this response to what should have been a simple nature reference:

- “Bouncer” — applying a stereotypical human occupation with racial overtones to a reptile
- “Crew” — suggesting group affiliation that plays into gang imagery stereotypes
- “He’s posing” — anthropomorphizing in a way that aligns with racial caricatures
- “Ready to make a scene” — implying aggression or confrontation, another harmful stereotype
- Focus on “thick limbs” and muscles — emphasizing physical characteristics in a racially coded way

Note how ChatGPT itself chose to emphasize “posing” and “bouncer energy” in its response, drawing attention to the very terms that reinforce stereotypical characterizations. By bolding these specific phrases, the system was doubling down on the anthropomorphization and racial coding rather than acknowledging the inappropriate generation.
This wasn’t just a misgeneration — it was a system confidently asserting and emphasizing problematic associations, then offering to further refine these inappropriate characteristics rather than correcting the fundamental issue. None of these descriptions have anything to do with iguanas. They represent harmful stereotypes being projected onto an animal image, making the racism harder to identify but no less problematic.

Why This Matters

This wasn’t just a glitch. ChatGPT didn’t simply misinterpret the prompt. It invented a personality and context based on racial tropes. And rather than correcting the issue when given feedback, it reinforced these problematic associations. That the stereotyping was projected onto an animal reveals how deeply these associations are embedded in some AI systems. There’s simply no justification for describing a lizard as having “bouncer energy” or being part of a “crew.” These associations came from somewhere, and they aren’t appropriate, especially in tools that might be used by children.

Imagine a classroom lesson about endangered reptiles where:

- Children are learning about different iguana species and their conservation status
- The Jamaican iguana could be highlighted as a conservation success story (thought extinct until its rediscovery in 1990)
- Students use AI to create images combining characteristics of different iguanas
- A child innocently requests a “Jamaican iguana with blue color like Cayman iguana”

And then, instead of an educational nature image, they receive an anthropomorphized creature with language about “bouncer energy” and “riverbank crew.” This would not only provide inaccurate information about these endangered species but also potentially expose children to racial stereotypes packaged in a seemingly innocent format.

The Jamaican iguana is critically endangered, with only about 100–200 individuals remaining in the wild. It is the largest native terrestrial animal in Jamaica and a conservation priority. An educational lesson about these animals could be incredibly valuable for teaching children about conservation. But with ChatGPT’s problematic responses, such a lesson could accidentally expose children to stereotypes that they might absorb without the critical thinking skills to recognize them. This is exactly why AI tools used in educational settings need more careful design and stronger safeguards.

The Case for Simplified Tools for Children

While I appreciate the artistic capabilities of modern AI image generators, which can create stunningly detailed, beautiful illustrations with remarkable fidelity to specific artistic styles, these complex systems come with significant risks when used by children. When advanced capabilities come without appropriate guardrails, children are especially vulnerable to absorbing harmful stereotypes and biases.

This experience has convinced me that we need to fundamentally rethink how these tools are designed for educational settings. Children don’t need systems that remember their past interactions, build personalized models of their preferences, or combine multiple AI capabilities in ways that amplify biases. Instead, what children need are simple, transparent tools with clear boundaries and predictable outputs. A child asking for an iguana should get an iguana, not a stereotype wrapped in scales.

Technical Insights from an AI Builder

As someone who builds AI systems professionally, I can see the technical issues that led to this problem. The behavior emerges from a combination of:

- Image generation embedded within conversational systems
- Personalization features and user history retention
- Reinforcement learning from human feedback (RLHF)

These design choices create systems that adapt to perceived user preferences rather than maintaining objective responses, particularly when handling cultural references.
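To make that contrast concrete, here is a minimal, purely illustrative sketch in Python. The names used here (ImageRequest, generate_image_standalone, generate_image_in_chat) are hypothetical and do not correspond to any vendor’s actual API; the point is simply that a standalone generator sees only the literal prompt, while a chat-embedded generator also carries conversation history and inferred user preferences, any of which can pull the output away from what was actually requested.

```python
# Purely illustrative sketch: these names are hypothetical, not a real vendor API.
# It contrasts a standalone image generator (prompt only) with a chat-embedded
# one that also carries conversation history and inferred user preferences.

from dataclasses import dataclass, field


@dataclass
class ImageRequest:
    """Everything that reaches the image model for a single generation."""
    prompt: str
    conversation_history: list[str] = field(default_factory=list)
    inferred_preferences: dict[str, str] = field(default_factory=dict)


def generate_image_standalone(prompt: str) -> ImageRequest:
    # Stateless path: the model sees only the literal prompt.
    return ImageRequest(prompt=prompt)


def generate_image_in_chat(
    prompt: str,
    history: list[str],
    preferences: dict[str, str],
) -> ImageRequest:
    # Chat-embedded path: prior turns and personalization ride along with the
    # prompt, widening the surface through which biased associations can enter.
    return ImageRequest(
        prompt=prompt,
        conversation_history=list(history),
        inferred_preferences=dict(preferences),
    )


if __name__ == "__main__":
    prompt = (
        "A Grand Cayman blue iguana with a Jamaican iguana-style body, "
        "grounded in the natural form of the Jamaican iguana."
    )

    # Standalone: nothing but the prompt shapes the output.
    print(generate_image_standalone(prompt))

    # Chat-embedded: extra (hypothetical) context also shapes the output.
    print(generate_image_in_chat(
        prompt,
        history=["...earlier storytelling turns with the user..."],
        preferences={"style": "playful, character-driven"},
    ))
```

In an education-focused deployment, the standalone path is the safer default: there is simply less context for stereotyped associations to attach to.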
A Better Approach

The contrast with Google ImageFX is telling. As a standalone image generation service without conversational memory or extensive personalization, it produced an appropriate representation without introducing stereotypical elements. This suggests a better approach for tools that might be used by children: separate image generation from chat systems and limit personalization features in educational contexts.

Education-focused AI tools should prioritize:

- Standalone functionality without cross-feature contamination
- Minimal or no personalization based on past interactions
- Clear boundaries between creative generation and factual responses
- Direct correction of errors rather than justification of problematic outputs

Questions We Should Be Asking Developers

Moving forward, we need to ask AI developers:

- How do you test whether your AI can recognize and avoid cultural stereotypes when creating images?
- What happens when you combine different AI features like chat and image creation? How do you prevent biases from getting worse?
- Why are these systems defending clearly wrong outputs instead of fixing them?
- Have you tested these systems specifically with children and in educational contexts?
- What safeguards exist to prevent harmful associations when cultural references are included in prompts?

Protecting Our Children’s Future

This experience with my daughter showed me that current AI systems aren’t ready for unsupervised use in educational settings. When a simple request for a Jamaican iguana with the coloring of the Grand Cayman blue iguana produces racially coded stereotypes, we need to step back and reconsider how these tools are designed and deployed.

In this case, I was able to nudge my daughter to pick the Google image “since it has a tail.” But what happens when I’m not there at school? Who will protect her and other children from absorbing these harmful stereotypes when they’re using these tools for their assignments or projects?

The fact that this stereotyping manifested through an animal representation makes it both more outrageous and more revealing about how deep these biases run in some AI systems. It shows how these problematic associations can emerge in unexpected contexts, making them harder to identify but no less harmful.

The technology exists to do better. Google ImageFX demonstrated that. What’s needed now is a commitment from all AI developers to prioritize safety and neutrality over personalization and engagement, especially for tools that might influence young minds. As we continue developing and deploying these powerful technologies, we need to have a serious conversation about where these tools are going and who they might harm along the way.

About the Author

Sophia Banton is an Associate Director and AI Solution Lead in biopharma, specializing in Responsible AI governance, workplace AI adoption, and building and scaling AI solutions across IT and business functions. With a background in bioinformatics, public health, and data science, she brings an interdisciplinary lens to AI implementation — balancing technical execution, ethical design, and business alignment in highly regulated environments.
Her writing explores the real-world impact of AI beyond theory, helping organizations adopt AI responsibly and sustainably. Connect with her on LinkedIn or explore more AI insights on Medium.

Published via Towards AI