Anthropic CEO Dario Amodei Says AI Models Hallucinate Less Than Humans: Report
Photo Credit: Anthropic Anthropic’s lawyer was recently forced to apologise after Claude made a citation error
Highlights
Anthropic also released new Claude 4 AI models at the event
Amodei had previously said that AGI could arrive as early as 2026
Anthropic has released several papers on ways AI models can be grounded
Anthropic CEO Dario Amodei reportedly said that artificial intelligence (AI) models hallucinate less than humans. As per the report, the statement was made by the CEO at the company's inaugural Code With Claude event on Thursday. During the event, the San Francisco-based AI firm released two new Claude 4 models, as well as multiple new capabilities, including improved memory and tool use. Amodei reportedly also suggested that while critics are trying to find roadblocks for AI, "they are nowhere to be seen."

Anthropic CEO Downplays AI Hallucinations

TechCrunch reports that Amodei made the comment during a press briefing, while explaining why hallucinations are not a barrier to AI reaching artificial general intelligence (AGI). Answering a question from the publication, the CEO reportedly said, "It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways."

Amodei reportedly added that TV broadcasters, politicians, and professionals in other fields make mistakes regularly, so AI making mistakes does not detract from its intelligence. However, the CEO reportedly acknowledged that AI models confidently delivering untrue responses is a problem.

Earlier this month, Anthropic's lawyer was forced to apologise in a courtroom after its Claude chatbot added an incorrect citation to a filing, according to a Bloomberg report. The incident occurred during the AI firm's ongoing legal battle with music publishers over alleged copyright infringement of the lyrics of at least 500 songs.

In an October 2024 paper, Amodei claimed that Anthropic might achieve AGI as early as 2026. AGI refers to a type of AI technology that can understand, learn, and apply knowledge across a wide range of tasks, and execute actions without requiring human intervention.
As part of that vision, Anthropic released Claude Opus 4 and Claude Sonnet 4 during the developer conference. These models bring major improvements in coding, tool use, and writing. Claude Sonnet 4 scored 72.7 percent on the SWE-bench benchmark, achieving state-of-the-art performance in code writing.
Akash Dutta
Akash Dutta is a Senior Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club - Chelsea, watching movies and anime, and sharing passionate opinions on food.