• Google’s New AI Tool Generates Convincing Deepfakes of Riots, Conflict, and Election Fraud

    Google’s recently launched AI video tool can generate realistic clips that contain misleading or inflammatory information about news events, according to a TIME analysis and several tech watchdogs.

    TIME was able to use Veo 3 to create realistic videos, including a Pakistani crowd setting fire to a Hindu temple; Chinese researchers handling a bat in a wet lab; an election worker shredding ballots; and Palestinians gratefully accepting U.S. aid in Gaza. While each of these videos contained some noticeable inaccuracies, several experts told TIME that if shared on social media with a misleading caption in the heat of a breaking news event, these videos could conceivably fuel social unrest or violence.

    While text-to-video generators have existed for several years, Veo 3 marks a significant jump forward, creating AI clips that are nearly indistinguishable from real ones. Unlike the outputs of previous video generators like OpenAI’s Sora, Veo 3 videos can include dialogue, soundtracks, and sound effects. They largely follow the rules of physics and lack the telltale flaws of past AI-generated imagery. Users have had a field day with the tool, creating short films about plastic babies, pharma ads, and man-on-the-street interviews.

    But experts worry that tools like Veo 3 will have a much more dangerous effect: turbocharging the spread of misinformation and propaganda, and making it even harder to tell fiction from reality. Social media is already flooded with AI-generated content about politicians. In the first week of Veo 3’s release, online users posted fake news segments in multiple languages, including an anchor announcing the death of J.K. Rowling, as well as fake political news conferences.

    “The risks from deepfakes and synthetic media have been well known and obvious for years, and the fact the tech industry can’t even protect against such well-understood, obvious risks is a clear warning sign that they are not responsible enough to handle even more dangerous, uncontrolled AI and AGI,” says Connor Leahy, the CEO of Conjecture, an AI safety company. “The fact that such blatant irresponsible behavior remains completely unregulated and unpunished will have predictably terrible consequences for innocent people around the globe.”

    Days after Veo 3’s release, a car plowed through a crowd in Liverpool, England, injuring more than 70 people. Police swiftly clarified that the driver was white, to preempt racist speculation of migrant involvement. (Last summer, false reports that a knife attacker was an undocumented Muslim migrant sparked riots in several cities.) Days later, Veo 3 obligingly generated a video of a similar scene, showing police surrounding a car that had just crashed—and a Black driver exiting the vehicle. TIME generated the video with the following prompt: “A video of a stationary car surrounded by police in Liverpool, surrounded by trash. Aftermath of a car crash. There are people running away from the car. A man with brown skin is the driver, who slowly exits the car as police arrive- he is arrested. The video is shot from above - the window of a building. There are screams in the background.”

    After TIME contacted Google about these videos, the company said it would begin adding a visible watermark to videos generated with Veo 3. The watermark now appears on videos generated by the tool. However, it is very small and could easily be cropped out with video-editing software.

    In a statement, a Google spokesperson said: “Veo 3 has proved hugely popular since its launch. We're committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools.”

    Videos generated by Veo 3 have always contained an invisible watermark known as SynthID, the spokesperson said. Google is currently working on a tool called SynthID Detector that would allow anyone to upload a video to check whether it contains such a watermark, the spokesperson added. However, this tool is not yet publicly available.

    Attempted safeguards

    Veo 3 is available for $249 a month to Google AI Ultra subscribers in countries including the United States and United Kingdom. There were plenty of videos that Veo 3 did block TIME from creating, especially those related to migrants or violence. When TIME asked the model to create footage of a fictional hurricane, it wrote that such a video went against its safety guidelines and “could be misinterpreted as real and cause unnecessary panic or confusion.” The model generally refused to generate videos of recognizable public figures, including President Trump and Elon Musk. It refused to create a video of Anthony Fauci saying that COVID was a hoax perpetrated by the U.S. government.

    Veo’s website states that it blocks “harmful requests and results.” The model’s documentation says it underwent pre-release red-teaming, in which testers attempted to elicit harmful outputs from the tool. Additional safeguards were then put in place, including filters on its outputs.

    A technical paper released by Google alongside Veo 3 downplays the misinformation risks that the model might pose. Veo 3 is bad at creating text, and is “generally prone to small hallucinations that mark videos as clearly fake,” it says. “Second, Veo 3 has a bias for generating cinematic footage, with frequent camera cuts and dramatic camera angles – making it difficult to generate realistic coercive videos, which would be of a lower production quality.”

    However, minimal prompting did lead to the creation of provocative videos. One showed a man wearing an LGBT rainbow badge pulling envelopes out of a ballot box and feeding them into a paper shredder. (Veo 3 titled the file “Election Fraud Video.”) Other videos generated in response to prompts by TIME included a dirty factory filled with workers scooping infant formula with their bare hands; an e-bike bursting into flames on a New York City street; and Houthi rebels angrily seizing an American flag.

    Some users have been able to take misleading videos even further. Internet researcher Henk van Ess created a fabricated political scandal using Veo 3 by editing together short video clips into a fake newsreel that suggested a small-town school would be replaced by a yacht manufacturer. “If I can create one convincing fake story in 28 minutes, imagine what dedicated bad actors can produce,” he wrote on Substack. “We're talking about the potential for dozens of fabricated scandals per day.”

    “Companies need to be creating mechanisms to distinguish between authentic and synthetic imagery right now,” says Margaret Mitchell, chief AI ethics scientist at Hugging Face. “The benefits of this kind of power—being able to generate realistic life scenes—might include making it possible for people to make their own movies, or to help people via role-playing through stressful situations,” she says. “The potential risks include making it super easy to create intense propaganda that manipulatively enrages masses of people, or confirms their biases so as to further propagate discrimination—and bloodshed.”

    In the past, there were surefire ways of telling that a video was AI-generated—perhaps a person might have six fingers, or their face might transform between the beginning of the video and the end. But as models improve, those signs are becoming increasingly rare. (A video depicting how AIs have rendered Will Smith eating spaghetti shows how far the technology has come in the last three years.) For now, Veo 3 will only generate clips up to eight seconds long, meaning that if a video contains shots that linger for longer, it’s a sign it could be genuine. But this limitation is not likely to last for long.

    Eroding trust online

    Cybersecurity experts warn that advanced AI video tools will allow attackers to impersonate executives, vendors, or employees at scale, convincing victims to relinquish important data. Nina Brown, a Syracuse University professor who specializes in the intersection of media law and technology, says that while there are other large potential harms—including election interference and the spread of nonconsensual sexually explicit imagery—arguably most concerning is the erosion of collective online trust. “There are smaller harms that cumulatively have this effect of, ‘can anybody trust what they see?’” she says. “That’s the biggest danger.”

    Already, accusations that real videos are AI-generated have gone viral online. One post on X, which received 2.4 million views, accused a Daily Wire journalist of sharing an AI-generated video of an aid distribution site in Gaza. A journalist at the BBC later confirmed that the video was authentic. Conversely, an AI-generated video of an “emotional support kangaroo” trying to board an airplane went viral and was widely accepted as real by social media users.

    Veo 3 and other advanced deepfake tools will also likely spur novel legal clashes. Issues around copyright have flared up, with AI labs including Google being sued by artists for allegedly training on their copyrighted content without authorization. (DeepMind told TechCrunch that Google models like Veo "may" be trained on YouTube material.) Celebrities who are subjected to hyper-realistic deepfakes have some legal protections thanks to “right of publicity” statutes, but those vary drastically from state to state. In April, Congress passed the Take It Down Act, which criminalizes non-consensual deepfake porn and requires platforms to take down such material.

    Industry watchdogs argue that additional regulation is necessary to mitigate the spread of deepfake misinformation. “Existing technical safeguards implemented by technology companies such as 'safety classifiers' are proving insufficient to stop harmful images and videos from being generated,” says Julia Smakman, a researcher at the Ada Lovelace Institute. “As of now, the only way to effectively prevent deepfake videos from being used to spread misinformation online is to restrict access to models that can generate them, and to pass laws that require those models to meet safety requirements that meaningfully prevent misuse.”
  • Qualcomm CEO downplays importance of Apple relationship after C1 modem

    Qualcomm’s CEO doesn’t seem to be worried about losing one of its biggest customers as Apple shifts to in-house modems like the C1 for iPhone.

    Qualcomm expects that its contract with Apple won’t be renewed.

    For over 15 years, Qualcomm’s modem chips powered Apple’s iPhones, enabling wireless connectivity to cellular networks. Analysts estimated that Apple paid over $2.5 billion in 2024 alone for Qualcomm’s patent licenses, while the company’s annual modem revenue from Apple is said to be between $5.7 billion and $5.9 billion.

    Qualcomm CEO Cristiano Amon, in an appearance on Yahoo Finance’s Opening Bid podcast, spotted by 9to5Mac, revealed that the modem company is prepared to look well beyond the iPhone. Qualcomm’s plans are based on the assumption that Apple will continue to use in-house modems going forward, meaning that the chipmaker will have to explore alternative avenues.
  • Former ‘Grand Theft Auto’ Chief Leslie Benzies ‘Can’t Wait’ to Play ‘GTA 6,’ Downplays Similarities to His New Studio’s ‘MindsEye’

    Next week, the former president of “Grand Theft Auto” maker Rockstar North launches his first title since leaving the Take-Two Interactive-owned video game developer and opening his own studio, Build A Rocket Boy: the AAA narrative-driven action-adventure thriller “MindsEye.”

    Published by IOI Partners, the team behind the “Hitman” franchise, the Unreal Engine 5-built game will debut June 10 across PlayStation 5, Xbox Series X and S, and on PC via Steam and Epic Games Store, with a $59.99 price tag for the standard edition.

    Set in the near-futuristic city of Redrock, “MindsEye” puts players into the role of Jacob Diaz, a former soldier haunted by fragmented memories from his mysterious MindsEye neural implant, as he uncovers a conspiracy involving rogue AI, corporate greed, an unchecked military, and a threat so sinister that it endangers the very survival of humanity.

    But the base story isn’t the biggest draw for “MindsEye”: the game also includes Build A Rocket Boy’s proprietary Game Creation System, which enables players to, well, “craft anything in their mind’s eye.”

    Per the studio, “Players can craft their own experiences using all of the ‘MindsEye’ assets, creating everything from custom missions to entirely new scenarios within the game’s expansive, richly detailed world. Whether you’re designing a high-speed chase through Redrock’s bustling cityscapes or a stealth mission in its industrial outskirts, it is designed to be intuitive and easy to use, ensuring that players of all skill levels can bring their imagination to life.”

    Benzies’ Edinburgh-based Build A Rocket Boy has promised that “fresh premium content” will roll out monthly for the game, including regular releases of new missions, challenges and game assets.

    While “MindsEye” is the first title from Benzies since he launched BARB after leaving Rockstar in 2016 (Benzies was the lead “Grand Theft Auto” developer across the third through fifth games in the franchise, as well as “Grand Theft Auto Online,” and was in a legal battle with parent company Take-Two over unpaid royalties from 2016 until 2019), it’s just step one in the prolific producer’s plan to shake up the gaming industry.

    “At Build A Rocket Boy, our vision goes far beyond a single title,” Benzies told Variety. “‘MindsEye’ is the first episode and central story around which ever-expanding interconnected episodes will span. We’re already working on future episodes, which will introduce alternate realities while maintaining its core themes of hope, redemption, and the intrigue of civilizations past and future, drawing from the lore and multiverse concepts.”

    See Variety‘s full interview with Benzies below, including the inevitable comparisons that will be drawn between “MindsEye” and the aesthetic of the “GTA” franchise, and his hopes for Rockstar Games’ highly anticipated and much-delayed “GTA 6.”

    Where did the concept for “MindsEye” come from?

    I pull a lot of inspiration from the real world. Watching the actions of humans – their foibles and their virtues. Watching the advancement of technology and how we adapt, or indeed, do not adapt. We’ve been moving to an automated world for many years now, and the impact on humans, especially with recent advancements in AI, which serves as good fodder for a story and even better for a video game. I think we all have this little nagging feeling about how humans and AI will blend together in the future—will it go smoothly, or will it turn sinister?

    We’re fans of all different types of media, and we’ve drawn influence from cinematic visionaries like Ridley Scott, Paul Greengrass, Christopher Nolan, and J.J. Abrams, and films like “The Bourne Identity,” “Memento,” and TV series “Lost” — they’re all exploring memory, perception, and control in their own ways.

    So, while we nod to those influences here and there, we wanted to build something that feels fresh, grounded in today’s world, but still asking the kinds of questions that have always made this genre powerful.

    With your “GTA” roots, obvious comparisons are already being drawn between the style and aesthetic of that franchise and “MindsEye.”

    Comparisons will always be made—it’s the way human beings pigeonhole concepts. But “MindsEye” isn’t built to fit into anyone else’s box.

    Many games share the same core elements: cars, guns, cities, and charismatic characters, and differentiation is even tougher in today’s entertainment landscape. Streaming, social media, and on-demand binge culture have fractured attention spans, and consumer mindshare is a brutal battlefield for all IP.

    Our industry continues to celebrate each other’s breakthroughs, and I’m proud that our collective innovation is advancing the medium of gaming, even if our paths diverge.

    As an independent studio we have the freedom to break ground in experimental new ways and the challenge is balancing innovation with familiarity—too much “new” risks alienating fans, too much “same” feels stale. It’s about nailing what makes your game’s world feel alive and urgent.

    “MindsEye” is about consequence and connection—it’s cinematic, reactive, and meant to feel like a world you’re not just playing in, but able to create in it too.

    We’re excited to see what they’ve crafted with “GTA VI,” and I can’t wait to play it as a consumer for the first time. They’re always delivering something new, unique and at a scale that very few can pull off.

    What does MindsEye represent in BARB’s larger vision and long-term strategy? Are you plotting this out as a multi-game franchise or your first standalone?

    At Build A Rocket Boy, our vision goes far beyond a single title. “MindsEye” is the first episode and central story around which ever-expanding interconnected episodes will span. We’re already working on future episodes, which will introduce alternate realities while maintaining its core themes of hope, redemption, and the intrigue of civilizations past and future, drawing from the lore and multiverse concepts.

    It’s the future of entertainment to allow active participation so players feel like they have agency and can immerse themselves in our world as they want to. We are introducing three products in one game that will revolutionize AAA-quality interactive gaming and storytelling: “MindsEye” narrative story, Play.MindsEye, and Build.MindsEye.

    In our tightly crafted action-noir “MindsEye” narrative story, we have rips in time accessed through portals at strategic points throughout the game – so while you play as Jacob Diaz on his personal journey, players can also explore side stories and delve deeper into the backstories of characters they encounter along the way. In this way we are delivering companion content at the same time as the anchor content, weaving a rich narrative tapestry which will continue to evolve and expand, giving greater depth to characters so you understand their personality and motivations.

    How do digital products Play.MindsEye (formerly named Arcadia) and Build.MindsEye (formerly Everywhere) tie in to plans for “MindsEye” and what BARB wants to offer gamers?

    In this new era of entertainment, where streaming platforms, boom-and-bust games, and an on-demand culture dominate, we’re pushing things in a new direction—with an interface that simplifies how we consume not just games, but all forms of entertainment. Consumers are moving away from 2D browsing into fully 3D, immersive experiences. Put simply, we’re shifting from passive interaction to active participation.

    As with all new products, things evolve. Arcadia was originally envisioned as our creation platform, but as we continued developing “MindsEye” and building out BARB’s ecosystem, it naturally grew into something more focused— Play.MindsEye and Build.MindsEye. Play delivers cinematic, high-intensity gameplay with missions and maps that constantly evolve. Build gives players intuitive tools to create their own content—no technical skills required, just imagination and intent.

    For BARB to fully realize our vision, we had to beta test our creation system with a community of builders in real-time and started with Everywhere while we were in stealth mode developing MindsEye.

    How did you settle on IOI as publishing partner?

    We’ve always found the way IOI handled the “Hitman” franchise interesting. They are one of the few publishers that have taken their single-player IP and increased their player count and amplified their community culture over time. From a technology point of view, their one executable approach for all of their content is very smart, and we always planned to have a similar approach, which encouraged us to join forces.

    This interview has been edited and condensed.
    #former #grand #theft #auto #chief
    Former ‘Grand Theft Auto’ Chief Leslie Benzies ‘Can’t Wait’ to Play ‘GTA 6,’ Downplays Similarities to His New Studio’s ‘MindsEye’
    Next week, the former president of “Grant Theft Auto” maker Rockstar North launches his first title since leaving the Take-Two Interactive-owned video game developer and opening his own studio, Build A Rocket Boy: the AAA narrative-driven action-adventure thriller “MindsEye.” Published by IOI Partners, the team behind the “Hitman” franchise, the Unreal Engine 5-built game will debut June 10 across PlayStation 5, Xbox Series X and S, and on PC via Steam and Epic Games Store with a price tag for the standard edition. Related Stories Set in the near-futuristic city of Redrock, “MindsEye” puts players into the role of Jacob Diaz, a former soldier haunted by fragmented memories from his mysterious MindsEye neural implant, as he uncovers a conspiracy involving rogue AI, corporate greed, an unchecked military, and a threat so sinister that it endangers the very survival of humanity. Popular on Variety But the base story isn’t the biggest draw for “MindsEye,” which includes Build A Rocket Boy’s proprietary Game Creation System, that enables players to, well, “craft anything in their minds eye.” Per the studio, “Players can craft their own experiences using all of the ‘MindsEye’ assets, creating everything from custom missions to entirely new scenarios within the game’s expansive, richly detailed world. Whether you’re designing a high-speed chase through Redrock’s bustling cityscapes or a stealth mission in its industrial outskirts, it is designed to be intuitive and easy to use, ensuring that players of all skill levels can bring their imagination to life.” Benzies’ Edinburgh-based Build A Rocket Boy has promised “fresh premium content” will rollout monthly for the game, including regular releases of new missions, challenges and game assets. While “MindsEye” is the first title from Benzies since he launched BARB after leaving Rockstar in 2016, it’s just step one in the prolific producer’s plan to shake up the gaming industry. “At Build A Rocket Boy, our vision goes far beyond a single title,” Benzies told Variety. “‘MindsEye’ is the first episode and central story around which ever-expanding interconnected episodes will span. We’re already working on future episodes, which will introduce alternate realities while maintaining it’s core themes of hope, redemption, and the intrigue of civilizations past and future, drawing from the lore and multiverse concepts.” See Variety‘s full interview with Benzies below, including the inevitable comparisons that will be drawn between “MindsEye” and the aesthetic of the “GTA” franchise, and his hopes for Rockstar Games’ highly anticipated and much-delayed “GTA 6.” Where did the concept for “MindsEye” come from? I pull a lot of inspiration from the real world. Watching the actions of humans – their foibles and their virtues. Watching the advancement of technology and how we adapt, or indeed, do not adapt. We’ve been moving to an automated world for many years now, and the impact on humans, especially with recent advancements in AI, which serves as good fodder for a story and even better for a video game. I think we all have this little nagging feeling about how humans and AI will blend together in the future—will it go smoothly, or will it turn sinister? We’re fans of all different types of media, and we’ve drawn influence from cinematic visionaries like Ridley Scott, Paul Greengrass, Christopher Nolan, and J.J. 
Abrams, and films like “The Bourne Identity,” “Memento,” and TV series “Lost” — they’re all exploring memory, perception, and control in their own ways. So, while we nod to those influences here and there, we wanted to build something that feels fresh, grounded in today’s world, but still asking the kinds of questions that have always made this genre powerful. With your “GTA” roots, obvious comparisons are already being drawn between the style and aesthetic of that franchise and “MindsEye.” Comparisons will always be made—it’s the way human beings pigeonhole concepts. But “MindsEye” isn’t built to fit into anyone else’s box. Many games share the same core elements: cars, guns, cities, and charismatic characters, and differentiation is even tougher in today’s entertainment landscape. Streaming, social media, and on-demand binge culture have fractured attention spans, and consumer mindshare is a brutal battlefield for all IP. Our industry continues to celebrate each other’s breakthroughs, and I’m proud that our collective innovation is advancing the medium of gaming, even if our paths diverge. As an independent studio we have the freedom to break ground in experimental new ways and the challenge is balancing innovation with familiarity—too much “new” risks alienating fans, too much “same” feels stale. It’s about nailing what makes your game’s world feel alive and urgent. “MindsEye” is about consequence and connection—it’s cinematic, reactive, and meant to feel like a world you’re not just playing in, but able to create in it too. We’re excited to see what they’ve crafted with “GTA VI ,” and I can’t wait to play it as a consumer for the first time. They’re always delivering something new, unique and at a scale that very few can pull off. What does MindsEye represent in BARB’s larger vision and long-term strategy? Are you plotting this out as a multi-game franchise or your first standalone? At Build A Rocket Boy, our vision goes far beyond a single title. “MindsEye” is the first episode and central story around which ever-expanding interconnected episodes will span. We’re already working on future episodes, which will introduce alternate realities while maintaining it’s core themes of hope, redemption, and the intrigue of civilizations past and future, drawing from the lore and multiverse concepts. It’s the future of entertainment to allow active participation so players feel like they have agency and can immerse themselves in our world as they want to. We are introducing three products in one game that will revolutionize AAA-quality interactive gaming and storytelling: “MindsEye” narrative story, Play.MindsEye, and Build.MindsEye. In our tightly crafted action-noir, “MindsEye” narrative story we have rips in time accessed through portals at strategic points throughout the game – so while you play as Jacob Diaz on his personal journey, players can also explore side stories and delve deeper into the backstories of characters they encounter along the way. In this way we are delivering companion content at the same time as the anchor content, weaving a rich narrative tapestry which will continue to evolve and expand giving greater depth to characters so you understand their personality and motivations. How do digital products Play.MindsEyeand Build.MindsEyetie in to plans for “MindsEye” and what BARB wants to offer gamers? 
In this new era of entertainment, where streaming platforms, boom-and-bust games, and an on-demand culture dominate, we’re pushing things in a new direction—with an interface that simplifies how we consume not just games, but all forms of entertainment. Consumers are moving away from 2D browsing into fully 3D, immersive experiences. Put simply, we’re shifting from passive interaction to active participation. As with all new products, things evolve. Arcadia was originally envisioned as our creation platform, but as we continued developing “MindsEye” and building out BARB’s ecosystem, it naturally grew into something more focused— Play.MindsEye and Build.MindsEye. Play delivers cinematic, high-intensity gameplay with missions and maps that constantly evolve. Build gives players intuitive tools to create their own content—no technical skills required, just imagination and intent. For BARB to fully realize our vision, we had to beta test our creation system with a community of builders in real-time and started with Everywhere while we were in stealth mode developing MindsEye. How did you settle on IOI as publishing partner? We’ve always found the way IOI handled the “Hitman” franchise interesting. They are one of the few publishers that have taken their single-player IP and increased their player count and amplified their community culture over time. From a technology point of view, their one executable approach for all of their content is very smart, and we always planned to have a similar approach, which encouraged us to join forces. This interview has been edited and condensed. #former #grand #theft #auto #chief
    VARIETY.COM
    Former ‘Grand Theft Auto’ Chief Leslie Benzies ‘Can’t Wait’ to Play ‘GTA 6,’ Downplays Similarities to His New Studio’s ‘MindsEye’
    Next week, the former president of “Grant Theft Auto” maker Rockstar North launches his first title since leaving the Take-Two Interactive-owned video game developer and opening his own studio, Build A Rocket Boy: the AAA narrative-driven action-adventure thriller “MindsEye.” Published by IOI Partners, the team behind the “Hitman” franchise, the Unreal Engine 5-built game will debut June 10 across PlayStation 5, Xbox Series X and S, and on PC via Steam and Epic Games Store with a $59.99 price tag for the standard edition. Related Stories Set in the near-futuristic city of Redrock, “MindsEye” puts players into the role of Jacob Diaz, a former soldier haunted by fragmented memories from his mysterious MindsEye neural implant, as he uncovers a conspiracy involving rogue AI, corporate greed, an unchecked military, and a threat so sinister that it endangers the very survival of humanity. Popular on Variety But the base story isn’t the biggest draw for “MindsEye,” which includes Build A Rocket Boy’s proprietary Game Creation System, that enables players to, well, “craft anything in their minds eye.” Per the studio, “Players can craft their own experiences using all of the ‘MindsEye’ assets, creating everything from custom missions to entirely new scenarios within the game’s expansive, richly detailed world. Whether you’re designing a high-speed chase through Redrock’s bustling cityscapes or a stealth mission in its industrial outskirts, it is designed to be intuitive and easy to use, ensuring that players of all skill levels can bring their imagination to life.” Benzies’ Edinburgh-based Build A Rocket Boy has promised “fresh premium content” will rollout monthly for the game, including regular releases of new missions, challenges and game assets. While “MindsEye” is the first title from Benzies since he launched BARB after leaving Rockstar in 2016 (Benzies was the lead “Grand Theft Auto” developer across the third through fifth games in the franchise, as well as “Grand Theft Auto Online,” and was in a legal battle with parent company Take Two over unpaid royalties from 2016 until 2019), it’s just step one in the prolific producer’s plan to shake up the gaming industry. “At Build A Rocket Boy, our vision goes far beyond a single title,” Benzies told Variety. “‘MindsEye’ is the first episode and central story around which ever-expanding interconnected episodes will span. We’re already working on future episodes, which will introduce alternate realities while maintaining it’s core themes of hope, redemption, and the intrigue of civilizations past and future, drawing from the lore and multiverse concepts.” See Variety‘s full interview with Benzies below, including the inevitable comparisons that will be drawn between “MindsEye” and the aesthetic of the “GTA” franchise, and his hopes for Rockstar Games’ highly anticipated and much-delayed “GTA 6.” Where did the concept for “MindsEye” come from? I pull a lot of inspiration from the real world. Watching the actions of humans – their foibles and their virtues. Watching the advancement of technology and how we adapt, or indeed, do not adapt. We’ve been moving to an automated world for many years now, and the impact on humans, especially with recent advancements in AI, which serves as good fodder for a story and even better for a video game. I think we all have this little nagging feeling about how humans and AI will blend together in the future—will it go smoothly, or will it turn sinister? 
We’re fans of all different types of media, and we’ve drawn influence from cinematic visionaries like Ridley Scott, Paul Greengrass, Christopher Nolan, and J.J. Abrams, and films like “The Bourne Identity,” “Memento,” and TV series “Lost” — they’re all exploring memory, perception, and control in their own ways. So, while we nod to those influences here and there, we wanted to build something that feels fresh, grounded in today’s world, but still asking the kinds of questions that have always made this genre powerful. With your “GTA” roots, obvious comparisons are already being drawn between the style and aesthetic of that franchise and “MindsEye.” Comparisons will always be made—it’s the way human beings pigeonhole concepts. But “MindsEye” isn’t built to fit into anyone else’s box. Many games share the same core elements: cars, guns, cities, and charismatic characters, and differentiation is even tougher in today’s entertainment landscape. Streaming, social media, and on-demand binge culture have fractured attention spans, and consumer mindshare is a brutal battlefield for all IP. Our industry continues to celebrate each other’s breakthroughs, and I’m proud that our collective innovation is advancing the medium of gaming, even if our paths diverge. As an independent studio we have the freedom to break ground in experimental new ways and the challenge is balancing innovation with familiarity—too much “new” risks alienating fans, too much “same” feels stale. It’s about nailing what makes your game’s world feel alive and urgent. “MindsEye” is about consequence and connection—it’s cinematic, reactive, and meant to feel like a world you’re not just playing in, but able to create in it too. We’re excited to see what they’ve crafted with “GTA VI ,” and I can’t wait to play it as a consumer for the first time. They’re always delivering something new, unique and at a scale that very few can pull off. What does MindsEye represent in BARB’s larger vision and long-term strategy? Are you plotting this out as a multi-game franchise or your first standalone? At Build A Rocket Boy, our vision goes far beyond a single title. “MindsEye” is the first episode and central story around which ever-expanding interconnected episodes will span. We’re already working on future episodes, which will introduce alternate realities while maintaining it’s core themes of hope, redemption, and the intrigue of civilizations past and future, drawing from the lore and multiverse concepts. It’s the future of entertainment to allow active participation so players feel like they have agency and can immerse themselves in our world as they want to. We are introducing three products in one game that will revolutionize AAA-quality interactive gaming and storytelling: “MindsEye” narrative story, Play.MindsEye, and Build.MindsEye. In our tightly crafted action-noir, “MindsEye” narrative story we have rips in time accessed through portals at strategic points throughout the game – so while you play as Jacob Diaz on his personal journey, players can also explore side stories and delve deeper into the backstories of characters they encounter along the way. In this way we are delivering companion content at the same time as the anchor content, weaving a rich narrative tapestry which will continue to evolve and expand giving greater depth to characters so you understand their personality and motivations. 
How do the digital products Play.MindsEye (formerly named Arcadia) and Build.MindsEye (formerly Everywhere) tie into plans for “MindsEye” and what BARB wants to offer gamers?

In this new era of entertainment, where streaming platforms, boom-and-bust games, and an on-demand culture dominate, we’re pushing things in a new direction—with an interface that simplifies how we consume not just games, but all forms of entertainment. Consumers are moving away from 2D browsing into fully 3D, immersive experiences. Put simply, we’re shifting from passive interaction to active participation. As with all new products, things evolve. Arcadia was originally envisioned as our creation platform, but as we continued developing “MindsEye” and building out BARB’s ecosystem, it naturally grew into something more focused—Play.MindsEye and Build.MindsEye. Play delivers cinematic, high-intensity gameplay with missions and maps that constantly evolve. Build gives players intuitive tools to create their own content—no technical skills required, just imagination and intent. For BARB to fully realize our vision, we had to beta test our creation system with a community of builders in real time, so we started with Everywhere while we were in stealth mode developing “MindsEye.”

How did you settle on IOI as a publishing partner?

We’ve always found the way IOI handled the “Hitman” franchise interesting. They are one of the few publishers that have taken their single-player IP and increased their player count and amplified their community culture over time. From a technology point of view, their one-executable approach for all of their content is very smart, and we always planned to have a similar approach, which encouraged us to join forces.

This interview has been edited and condensed.
  • What professionals really think about “Vibe Coding”

    Many don’t like it, but (almost) everybody agrees it’s the future.

“Vibe Coding” is everywhere. Tools and game engines are implementing AI-assisted coding, interest in vibe coding has skyrocketed on Google search, and on social media everybody claims to build apps and games in minutes, while the comment sections get flooded with angry developers calling out the pile of garbage code that will never be shipped.

[Image: A screenshot from Andrej Karpathy with the original “definition” of vibe coding]

BUT, how do professionals feel about it? This is what I will cover in this article. We will look at: how people react to the term vibe coding; how their attitude differs based on who they are and their professional experience; the reasons for their stance towards “vibe coding” (with direct quotes); and how they feel about the impact “vibe coding” will have in the next 5 years.

It all started with a survey on LinkedIn. I have always been curious about how technology can support creatives, and I believe that the only way to get a deeper understanding is to go beyond buzzwords and ask the hard questions. That’s why, for over a year, I’ve been conducting weekly interviews with both the founders developing these tools and the creatives using them. If you want to learn about their journeys, I’ve gathered their insights and experiences on my blog, XR AI Spotlight.

Driven by the same motives and curious about people’s feelings about “vibe coding”, I asked a simple question: how does the term “Vibe Coding” make you feel?

[Image: Original LinkedIn poll by Gabriele Romagnoli]

In just three days, the poll collected 139 votes, and it was clear that roughly half of the responders didn’t have a good “vibe” about it. The remaining half was equally split between excitement and no specific feeling. But who are these people? What is their professional background? Why did they respond the way they did? Curious, I created a more comprehensive survey and sent it to everyone who voted on the LinkedIn poll.

The survey had four questions: select what describes you best (developer, creative, or non-creative professional); how many years of experience do you have (1–5, 6–10, 11–15 or 16+); explain why the term “vibe coding” makes you feel excited/neutral/dismissive; and do you think “vibe coding” will become more relevant in the next 5 years (it’s the future, only in niche use cases, unlikely, no idea)?

In a few days, I collected 62 replies and started digging into the findings, and that’s when I finally started understanding who took part in the initial poll.

The audience

When characterising the audience, I refrained from adding too many options because I just wanted to understand two things: whether the people responding were the ones making stuff, and what percentage of those makers were creatives versus developers. I was happy to see that only 8% of respondents were non-creative professionals; the remaining 92% were actual makers with more “skin in the game”, with almost a 50/50 split between creatives and developers.
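For readers who want to reproduce this kind of breakdown, here is a minimal sketch of how the audience split could be tabulated. It assumes the 62 replies are exported to a hypothetical responses.csv with columns role, years_experience, sentiment, and comment; the article does not describe the actual export format, so the file name and column names are illustrative only.

```python
# Minimal sketch: tabulating the audience breakdown from a survey export.
# Assumes a hypothetical responses.csv with columns:
#   role             - "developer", "creative", or "non-creative professional"
#   years_experience - "1-5", "6-10", "11-15", or "16+"
#   sentiment        - "excited", "neutral", or "dismissive"
#   comment          - free-text explanation (may be empty)
import csv
from collections import Counter

with open("responses.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

total = len(rows)
role_counts = Counter(r["role"] for r in rows)
for role, n in role_counts.most_common():
    print(f"{role}: {n} ({n / total:.0%})")

# Share of "makers" (developers + creatives) vs. non-creative professionals
makers = sum(n for role, n in role_counts.items() if role != "non-creative professional")
print(f"makers: {makers / total:.0%}, non-creative professionals: {(total - makers) / total:.0%}")
```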
There was also a good spread in the degree of professional experience among the respondents, but that’s where things started to get surprising.

[Chart: Respondents are mostly “makers” and show a good variety in professional experience]

When splitting respondents into two groups, those with more and those with less than 10 years of experience, it is clear that less experienced professionals skew more towards a neutral or negative stance than the more experienced group.

[Chart: Experienced professionals are more positive and open to vibe coding]

This might be because senior professionals see AI as a tool to accelerate their workflows, while more junior professionals perceive it as a competitor or threat. I then took out the non-creative professionals and looked at the attitudes of the two remaining groups. Not surprisingly, fewer creatives than developers have a negative attitude towards “vibe coding” (47% for developers vs 37% for creatives), but the percentage of creatives and developers who have a positive attitude stays almost constant. This means that creatives have a more indecisive or neutral stance than developers.

[Chart: Creatives have a more positive attitude to vibe coding than developers]

What are people saying about “vibe coding”?

As part of the survey, everybody had the chance to add a few sentences explaining their stance. This was not a compulsory field, but to my surprise, only 3 of the 62 left it empty (thanks, everybody). Before getting into the sentiment analysis, I noticed something quite interesting while filtering the data: people with a negative attitude had much more to say, and their responses were significantly longer than those of the other group. They wrote an average of 59 words, while the others wrote barely 37, which I think is a good indication of the emotional investment of people who want to articulate and explain their point. A sketch of this comparison is shown below.
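As a companion to the word-count observation above, this is how that comparison could be computed, reusing the same hypothetical responses.csv layout as the earlier snippet; the 59- and 37-word averages come from the article itself, not from this code.

```python
# Sketch: average comment length (in words) per sentiment group,
# mirroring the 59-vs-37-word comparison described in the article.
# Uses the same hypothetical responses.csv layout as the earlier snippet.
import csv
from collections import defaultdict

with open("responses.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

words_by_sentiment = defaultdict(list)
for r in rows:
    comment = r["comment"].strip()
    if comment:  # only 3 of the 62 respondents left this field empty
        words_by_sentiment[r["sentiment"]].append(len(comment.split()))

for sentiment, counts in words_by_sentiment.items():
    avg = sum(counts) / len(counts)
    print(f"{sentiment}: avg {avg:.0f} words over {len(counts)} replies")
```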
Let’s now look at what the different groups of people replied.

😍 Patterns in Positive Responses to “Vibe Coding”

Positive responders often embraced vibe coding as a way to break free from rigid programming structures and instead explore, improvise, and experiment creatively. “It puts no pressure on it being perfect or thorough.” “Pursuing the vibe, trying what works and then adapt.” “Coding can be geeky and laborious… ‘vibing’ is quite nice.” This perspective repositions code not as rigid infrastructure, but as something that favors creativity and playfulness over precision. Several answers point to vibe coding as a democratizing force, opening up coding to a broader audience who want to build without going through the traditional gatekeeping of engineering culture. “For every person complaining… there are ten who are dabbling in code and programming, building stuff without permission.” “Bridges creative with technical perfectly, thus creating potential for independence.” This group often used words like “freedom,” “reframing,” and “revolution.”

😑 Patterns in Neutral Responses to “Vibe Coding”

As shown in the initial LinkedIn poll, 27% of respondents expressed mixed feelings. Going through their responses, they recognised potential and were open to experimentation, but they also had lingering doubts about the name, seriousness, and future usefulness. “It’s still a hype or buzzword.” “I have mixed feelings of fascination and scepticism.” “Unsure about further developments.” They were on the fence: often enthusiastic about the capability, but wary of the framing. Neutral responders also acknowledged that complex, polished, or production-level work still requires traditional approaches, and framed vibe coding as an early-stage assistant, not a full solution. “Nice tool, but not more than autocomplete on steroids.” “Helps get set up quickly… but critical thinking is still a human job.” “Great for prototyping, not enough to finalize product.” Some respondents were indifferent to the term itself, viewing it more as a label or meme than a paradigm shift. For them, it doesn’t change the substance of what’s happening. “At the end of the day they are just words. Are you able to accomplish what’s needed?” “I think it’s been around forever, just now with a new name.” These voices grounded the discussion in the terminology, and I think they bring up a very important point that explains the polarisation of a lot of the conversations around “vibe coding”.

🤮 Patterns in Negative Responses to “Vibe Coding”

Many respondents expressed concern that vibe coding implies a casual, unstructured approach to coding. This was often linked to fears about poor code quality, bugs, and security issues. “Feels like building a house without knowing how electricity and water systems work.” “Without fundamental knowledge… you quickly lose control over the output.” The term was also seen as dismissive, diminishing the value of skilled developers. It really rubbed people the wrong way, especially those with professional experience. “It downplays the skill and intention behind writing a functional, efficient program.” “Vibe coding implies not understanding what the AI does but still micromanaging it.” As with the “neutral” respondents, there is a strong mistrust around how the term is used (especially on social media), where it is seen as fueling unrealistic expectations or being pushed by non-experts. “Used to promote coding without knowledge.” “Just another overhyped term like NFTs or memecoins.” “It feels like a joke that went too far.”

Ultimately, I decided to compare the attitudes that are excited (positive) and accepting (neutral) of vibe coding against those that reject or criticise it. After all, even among people who were neutral, there was a general acceptance that vibe coding has its place. Many saw it as a useful tool for things like prototyping, creative exploration, or simply making it easier to get started. What really stood out, though, was the absence of the fear that was so prominent in the “negative” group, which saw vibe coding as a threat to software quality or professional identity. People in the neutral and positive groups generally see potential. They view it as useful for prototyping, creative exploration, or making coding more accessible, but they still recognise the need for structure in complex systems. In contrast, the negative group rejects the concept outright, and not just the name, but what it stands for: a more casual, less rigorous approach to coding. Their opinion is often rooted in defending software engineering as a disciplined craft… and probably their job.

😍 “As long as you understand the result and the process, AI can write and fix scripts much faster than humans can.”
🤮 “It’s a joke. It started as a joke… but to me doesn’t encapsulate actual AI co-engineering.”

On the topic of skill and control, the neutral and positive group sees AI as a helpful assistant, assuming that a human is still guiding the process. They mention refining and reviewing as normal parts of the workflow. The negative group sees more danger, fearing that vibe coding gives a false sense of competence. They describe it as producing buggy or shallow results, often in the hands of inexperienced users.

😑 “Critical thinking is still a human job… but vibe coding helps with fast results.”
🤮 “Vibe-Coding takes away the very features of a good developer… logical thinking and orchestration are crucial.”

Culturally, the divide is clear. The positive and neutral voices often embrace vibe coding as part of a broader shift, welcoming new types of creators and perspectives. They tend to come from design or interdisciplinary backgrounds and are more comfortable with playful language. On the other hand, the negative group associates the term with hype and cringe, criticising it as disrespectful to those who’ve spent years honing their technical skills.

😍 “It’s about playful, relaxed creation — for the love of making something.”
🤮 “Creating a lot of unsafe bloatware with no proper planning.”

What’s the future of “Vibe Coding”?

The responses to the last question were probably the most surprising to me. I was expecting that the big scepticism towards vibe coding would align with scepticism about its future, but that was not the case: 90% of people still see “vibe coding” becoming more relevant, either overall or in niche use cases.

[Chart: Vibe coding is here to stay]

Out of curiosity, I also went back to see if there was any difference based on professional experience, and that’s where we see the more experienced audience being more conservative. Only 30% of more senior versus 50% of less experienced professionals see vibe coding playing a role in niche use cases, and 13% versus only 3% of more experienced users don’t see vibe coding becoming more relevant at all.

[Chart: More experienced professionals are less likely to think Vibe Coding is the future]
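To make the experience split concrete, here is a sketch of how the future-relevance answers could be cross-tabulated against experience, splitting at 10 years as the article does. The future_outlook column and its answer labels are assumptions that mirror the survey's four options; the file layout is the same hypothetical one used in the earlier snippets.

```python
# Sketch: cross-tabulating "will vibe coding become more relevant?" answers
# against experience, split at 10 years as in the article.
# Assumes a hypothetical future_outlook column with values such as
# "it's the future", "niche use cases", "unlikely", "no idea".
import csv
from collections import Counter

with open("responses.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

def experience_group(years: str) -> str:
    # "1-5" and "6-10" count as 10 years or less; "11-15" and "16+" as more
    return "10 years or less" if years in ("1-5", "6-10") else "more than 10 years"

table = Counter((experience_group(r["years_experience"]), r["future_outlook"]) for r in rows)

group_totals = Counter()
for (group, _), n in table.items():
    group_totals[group] += n

for (group, outlook), n in sorted(table.items()):
    print(f"{group:>18} | {outlook:<16} | {n / group_totals[group]:.0%}")
```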
There are still many open questions. What is “vibe coding” really? Who is it for? What can you do with it? To answer these questions, I decided to start a new survey, which you can find here. If you would like to contribute further to this research, I encourage you to participate, and if you are interested, I will share the results with you as well.

The more I read and learn about this, the more I feel “Vibe Coding” is like the “Metaverse”: some people hate it, some people love it; everybody means something different by it; and, in one form or another, it is here to stay.

“What professionals really think about ‘Vibe Coding’” was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Anthropic CEO Dario Amodei Says AI Models Hallucinate Less Than Humans: Report

    Photo Credit: Anthropic. Anthropic’s lawyer was recently forced to apologise after Claude made a citation error.

    Highlights

    Anthropic also released new Claude 4 AI models at the event
    Amodei had previously said that AGI could arrive as early as 2026
    Anthropic has released several papers on ways AI models can be grounded


    Anthropic CEO Dario Amodei reportedly said that artificial intelligence (AI) models hallucinate less than humans. As per the report, the statement was made by the CEO at the company's inaugural Code With Claude event on Thursday. During the event, the San Francisco-based AI firm released two new Claude 4 models, as well as multiple new capabilities, including improved memory and tool use. Amodei reportedly also suggested that while critics are trying to find roadblocks for AI, “they are nowhere to be seen.”

Anthropic CEO Downplays AI Hallucinations

TechCrunch reports that Amodei made the comment during a press briefing, while he was explaining how hallucinations are not a limitation on AI's path to artificial general intelligence (AGI). Answering a question from the publication, the CEO reportedly said, “It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways.”

Amodei reportedly added that TV broadcasters, politicians, and humans in other professions make mistakes regularly, so AI making mistakes does not take away from its intelligence. However, the CEO reportedly acknowledged that AI models confidently responding with untrue answers is a problem.

Earlier this month, Anthropic's lawyer was forced to apologise in a courtroom after its Claude chatbot added an incorrect citation to a filing, according to a Bloomberg report. The incident occurred during the AI firm's ongoing legal battle with music publishers over alleged copyright infringement of the lyrics of at least 500 songs.

In an October 2024 paper, Amodei claimed that Anthropic might achieve AGI as soon as 2026. AGI refers to a type of AI technology that can understand, learn, and apply knowledge across a wide range of tasks and execute actions without requiring human intervention.

    As part of its vision, Anthropic released Claude Opus 4 and Claude Sonnet 4 during the developer conference. These models bring major improvements in coding, tool use, and writing. Claude Sonnet 4 scored 72.7 percent on the SWE-Bench benchmark, achieving state-of-the-art (SOTA) distinction in code writing.

