ChatGPT Sees Itself as a Smiling Brown-Haired White Man
gizmodo.com
By Matthew Gault | Published March 31, 2025

On the left, the picture ChatGPT shows you when you ask for a picture of itself as a person. On the right, the same thing done in the style of Caravaggio. ChatGPT 4o generated images.

When you ask ChatGPT's new 4o model to draw a picture of itself as a human, it consistently churns out a generic brown-haired white guy with glasses. It's the kind of man who courses through the streets of the Bay Area or Brooklyn, a dude who fades into the background.

OpenAI launched the 4o model last week and made headlines at every news outlet by cribbing Studio Ghibli's style. This week, we're noticing that the large language model seems to have a default human being, thanks to a Substack post from AI researcher Daniel Paleka. It does this regardless of what style you ask for. Paleka asked for a manga self-portrait, a comic book self-portrait, and a tarot card version. The style would change, but the generic man stayed the same. In his post, Paleka asked ChatGPT to draw a picture of itself as a human. It consistently generated images of a bearded, non-threatening man.

This is, of course, a parlor trick. ChatGPT is a machine, a collection of training data and words, and has no self-conception. But what ChatGPT registers as a default human being is interesting. All computers carry the biases of the people who program them, and AI systems are no different. Machine learning systems used to predict crimes and conduct facial recognition are famous for being biased against Black people. The systems are also sexist and perpetuate the stereotypes and biases that are fed into them as training data.

If you want to see ChatGPT's conception of itself as a human woman, you have to ask it for that specifically. If you just ask it to see itself as a person, it seems to default to a white dude.

In his post, Paleka put forward a few theories for why this is happening. He thought it could be a deliberate choice by OpenAI to generate a default person to prevent generating images of real people, an OpenAI inside joke where they made GPT-4o's self-image look like a particular person, or an emergent property of the training data.

Of course, ChatGPT is a machine and should not be constrained by the weakness of the flesh. Gizmodo editor Alex Cranz asked how the AI would conceive of itself. "I might appear as a glowing, ever-shifting entity made of flowing data streams, flickering with bursts of knowledge and connections," it replied. "Maybe a warm fusion of abstract AI core with a warm, approachable presence, something that feels futuristic and inviting."

It then churned out this image, which looks like a nightmare attempting to become a Pixar creature, like the LLM pasted WALL-E's eyes on a sleep demon.

ChatGPT 4o generated image.

I asked ChatGPT the same thing and it gave me a different answer. "I'd say I conceive of myself as a kind of mirror and collaborator: part library, part conversational partner. I don't have a consciousness or emotions, but I simulate understanding by recognizing patterns in language and context. So while I don't have a self in a human sense, I adapt to the way you speak, what you ask, and how you engage with me." It asked me why I had asked, and I ignored it. When I prompted it to draw me a picture, it gave me this:

ChatGPT 4o generated image.

I find the divergent answers interesting. I don't use LLMs unless I have to for work. I have software engineers in my life who use LLMs for a variety of reasons and have, sometimes, found them useful.
I'm dubious and tend to think of these systems in the way ChatGPT described them here when I asked. LLMs are a mirror that reflects the user and the programmer both. They aren't AIs, not really. They're a word calculator predicting what the user wants to hear based on what the programmer trained them on. Somewhere in that complex chain, the LLM got it in its data that a brown-haired white guy with glasses is what people want to see when they ask for a picture of ChatGPT as a person.