
How Those Studio Ghibli Memes Are a Sign of OpenAI's Trump-Era Shift
time.com
If you're wondering why social media is filled with Studio Ghibli-style memes all of a sudden, there are several answers to that question.

The most obvious one is that OpenAI dropped an update to ChatGPT on Tuesday that allows users to generate better images using the 4o version of the model. OpenAI has long offered image generation tools, but this one felt like a significant evolution: users say it is far better than other AI image generators at accurately following text prompts, and that it makes much higher-fidelity images.

But that's not the only reason for the deluge of memes in the style of the Japanese animation house. Alongside the ChatGPT update, OpenAI also relaxed several of its rules on the types of images users can generate with its AI tools, a change CEO Sam Altman said represents "a new high-water mark for us in allowing creative freedom." Among those changes: allowing users to generate images of adult public figures for the first time, and reducing the likelihood that ChatGPT would reject users' prompts, even if they risked being offensive.

"People are going to create some really amazing stuff and some stuff that may offend people," Altman said in a post on X. "What we'd like to aim for is that the tool doesn't create offensive stuff unless you want it to, in which case within reason it does."

Users quickly began making the most of the policy change, sharing "Ghiblified" images of 9/11, Adolf Hitler, and the murder of George Floyd. The official White House account on X even shared a Studio Ghibli-style image of an ICE officer detaining an alleged illegal immigrant.

In one sense, the pivot has been a long time coming. OpenAI began its decade-long life as a research lab that kept its tools under strict lock and key; when it did release early chatbots and image generation models, they had strict content filters that aimed to prevent misuse. But for years it has been widening the accessibility of its tools in an approach it calls "iterative deployment." The release of ChatGPT in November 2022 was the most popular example of this strategy, which the company believes is necessary to help society adapt to the changes AI is bringing.

Still, in another sense, the change to OpenAI's model behavior policies has a more recent proximate cause: the 2024 election of President Donald Trump, and the cultural shift that has accompanied the new administration.

Trump and his allies have been highly critical of what they see as the censorship of free speech online by large tech companies. Many conservatives have drawn parallels between the longstanding practice of content moderation on social media and the more recent strategy, by AI companies including OpenAI, of limiting the kinds of content that generative AI models are allowed to create. "ChatGPT has woke programmed into its bones," Elon Musk posted on X in December.

Like most big companies, OpenAI is trying hard to build ties with the Trump White House. The company scored an early win when, on the second day of his presidency, Trump stood beside Altman and announced a large investment in the datacenters that OpenAI believes will be necessary to train the next generation of AI systems. But OpenAI is still in a delicate position. Musk, Trump's billionaire backer and advisor, has a famous dislike of Altman. The pair cofounded OpenAI together back in 2015, but after a failed attempt to become CEO, Musk quit in a huff. He is now suing Altman and OpenAI, claiming that they reneged on OpenAI's founding mission to develop AI as a non-profit.
With Musk operating from the White House and also leading a rival AI company, xAI, it is especially vital for OpenAI's business prospects to cultivate positive ties where possible with the Trump administration.

Earlier in March, OpenAI submitted a document laying out recommendations for the new administration's tech policy. It was a shift in tone from earlier missives by the company. OpenAI's "freedom-focused" policy proposals, "taken together, can strengthen America's lead on AI and in so doing, unlock economic growth, lock in American competitiveness, and protect our national security," the document said. It called on the Trump administration to exempt OpenAI, and the rest of the private sector, from 781 state-level laws proposing to regulate AI, which it said risked bogging down innovation. In return, OpenAI said, industry could provide the U.S. government with "learnings and access" from AI companies, and would ensure the U.S. retained its leadership position ahead of China in the AI race.

Alongside the release of this week's new ChatGPT update, OpenAI doubled down on what it said were policies intended to give users more freedom, within bounds, to create whatever they want with its AI tools. "We're shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm," Joanne Jang, OpenAI's head of model behavior, said in a blog post. "The goal is to embrace humility: recognizing how much we don't know, and positioning ourselves to adapt as we learn."

Jang gave several examples of things that were previously disallowed, but to which OpenAI was now opening its doors. Tools could now be used to generate images of public figures, Jang wrote, although OpenAI would create an opt-out list allowing people to decide for themselves whether they wanted ChatGPT to be able to generate images of them. Children, she wrote, would be subject to stronger protections and tighter guardrails.

"Offensive" content, Jang wrote, using quotation marks, would also receive a rethink under OpenAI's new policies. Uses that might be seen as offensive by some, but which didn't cause real-world harm, would be increasingly permitted. "Without clear guidelines, the model previously refused requests like 'make this person's eyes look more Asian' or 'make this person heavier,' unintentionally implying these attributes were inherently offensive," Jang wrote, suggesting that such prompts would be allowed in the future.

OpenAI's tools previously flat-out rejected attempts by users to generate hate symbols like swastikas. In the blog post, Jang said the company recognized, however, that these symbols could also sometimes appear in "genuinely educational or cultural contexts." The company would move to a strategy of applying "technical methods," she wrote, to "better identify and refuse harmful misuse" without banning such imagery completely. AI lab employees, she wrote, should not be "the arbiters of what people should and shouldn't be allowed to create."