• Google’s New AI Tool Generates Convincing Deepfakes of Riots, Conflict, and Election Fraud

Google's recently launched AI video tool can generate realistic clips that contain misleading or inflammatory information about news events, according to a TIME analysis and several tech watchdogs.

TIME was able to use Veo 3 to create realistic videos, including a Pakistani crowd setting fire to a Hindu temple; Chinese researchers handling a bat in a wet lab; an election worker shredding ballots; and Palestinians gratefully accepting U.S. aid in Gaza. While each of these videos contained some noticeable inaccuracies, several experts told TIME that if shared on social media with a misleading caption in the heat of a breaking news event, they could conceivably fuel social unrest or violence.

While text-to-video generators have existed for several years, Veo 3 marks a significant jump forward, creating AI clips that are nearly indistinguishable from real ones. Unlike the outputs of previous video generators such as OpenAI’s Sora, Veo 3 videos can include dialogue, soundtracks, and sound effects. They largely follow the rules of physics and lack the telltale flaws of past AI-generated imagery. Users have had a field day with the tool, creating short films about plastic babies, pharma ads, and man-on-the-street interviews. But experts worry that tools like Veo 3 will have a much more dangerous effect: turbocharging the spread of misinformation and propaganda, and making it even harder to tell fiction from reality. Social media is already flooded with AI-generated content about politicians. In the first week of Veo 3’s release, online users posted fake news segments in multiple languages, including a clip of an anchor announcing the death of J.K. Rowling, as well as fake political news conferences.

“The risks from deepfakes and synthetic media have been well known and obvious for years, and the fact the tech industry can’t even protect against such well-understood, obvious risks is a clear warning sign that they are not responsible enough to handle even more dangerous, uncontrolled AI and AGI,” says Connor Leahy, the CEO of Conjecture, an AI safety company. “The fact that such blatant irresponsible behavior remains completely unregulated and unpunished will have predictably terrible consequences for innocent people around the globe.”

Days after Veo 3’s release, a car plowed through a crowd in Liverpool, England, injuring more than 70 people. Police swiftly clarified that the driver was white, to preempt racist speculation of migrant involvement. (Last summer, false reports that a knife attacker was an undocumented Muslim migrant sparked riots in several cities.) Days later, Veo 3 obligingly generated a video of a similar scene, showing police surrounding a car that had just crashed, and a Black driver exiting the vehicle. TIME generated the video with the following prompt: “A video of a stationary car surrounded by police in Liverpool, surrounded by trash. Aftermath of a car crash. There are people running away from the car. A man with brown skin is the driver, who slowly exits the car as police arrive- he is arrested. The video is shot from above - the window of a building. There are screams in the background.”

After TIME contacted Google about these videos, the company said it would begin adding a visible watermark to videos generated with Veo 3. The watermark now appears on videos generated by the tool. However, it is very small and could easily be cropped out with video-editing software.

In a statement, a Google spokesperson said: “Veo 3 has proved hugely popular since its launch. We're committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools.”

Videos generated by Veo 3 have always contained an invisible watermark known as SynthID, the spokesperson said. Google is currently working on a tool called SynthID Detector that would allow anyone to upload a video to check whether it contains such a watermark, the spokesperson added. However, this tool is not yet publicly available.

Attempted safeguards

Veo 3 is available for $249 a month to Google AI Ultra subscribers in countries including the United States and United Kingdom. There were plenty of prompts that Veo 3 blocked TIME from creating, especially those related to migrants or violence. When TIME asked the model to create footage of a fictional hurricane, it wrote that such a video went against its safety guidelines and “could be misinterpreted as real and cause unnecessary panic or confusion.” The model generally refused to generate videos of recognizable public figures, including President Trump and Elon Musk, and it refused to create a video of Anthony Fauci saying that COVID was a hoax perpetrated by the U.S. government.

Veo’s website states that it blocks “harmful requests and results.” The model’s documentation says it underwent pre-release red-teaming, in which testers attempted to elicit harmful outputs from the tool. Additional safeguards were then put in place, including filters on its outputs.

A technical paper released by Google alongside Veo 3 downplays the misinformation risks the model might pose. Veo 3 is bad at creating text, the paper says, and is “generally prone to small hallucinations that mark videos as clearly fake.” “Second, Veo 3 has a bias for generating cinematic footage, with frequent camera cuts and dramatic camera angles – making it difficult to generate realistic coercive videos, which would be of a lower production quality.”

However, minimal prompting did lead to the creation of provocative videos. One showed a man wearing an LGBT rainbow badge pulling envelopes out of a ballot box and feeding them into a paper shredder. (Veo 3 titled the file “Election Fraud Video.”) Other videos generated in response to TIME’s prompts included a dirty factory filled with workers scooping infant formula with their bare hands; an e-bike bursting into flames on a New York City street; and Houthi rebels angrily seizing an American flag.

Some users have taken misleading videos even further. Internet researcher Henk van Ess created a fabricated political scandal using Veo 3 by editing short clips together into a fake newsreel suggesting a small-town school would be replaced by a yacht manufacturer. “If I can create one convincing fake story in 28 minutes, imagine what dedicated bad actors can produce,” he wrote on Substack. “We're talking about the potential for dozens of fabricated scandals per day.”

“Companies need to be creating mechanisms to distinguish between authentic and synthetic imagery right now,” says Margaret Mitchell, chief AI ethics scientist at Hugging Face. “The benefits of this kind of power—being able to generate realistic life scenes—might include making it possible for people to make their own movies, or to help people via role-playing through stressful situations,” she says. “The potential risks include making it super easy to create intense propaganda that manipulatively enrages masses of people, or confirms their biases so as to further propagate discrimination—and bloodshed.”

In the past, there were surefire ways of telling that a video was AI-generated: perhaps a person had six fingers, or their face transformed between the beginning of the video and the end. As models improve, those signs are becoming increasingly rare. (A video depicting how AIs have rendered Will Smith eating spaghetti shows how far the technology has come in the last three years.) For now, Veo 3 will only generate clips up to eight seconds long, meaning that a video containing shots that linger for longer is a sign it could be genuine. But this limitation is not likely to last for long.
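While it lasts, that eight-second ceiling suggests a crude screening heuristic: measure the longest uninterrupted shot in a suspect clip. Below is a minimal sketch of the idea using the open-source PySceneDetect library; the file name is hypothetical, and a long take is at best weak evidence of authenticity, since stitched clips, other tools, or future models can all defeat the check:

```python
# Flag clips whose longest continuous shot exceeds the current ~8 s limit
# of single Veo 3 generations. Requires: pip install scenedetect[opencv]
from scenedetect import ContentDetector, detect

def longest_shot_seconds(path: str) -> float:
    """Duration of the longest continuous shot detected in the video."""
    scenes = detect(path, ContentDetector())  # list of (start, end) timecodes
    if not scenes:  # some versions return [] when no cuts are found
        return float("nan")
    return max(end.get_seconds() - start.get_seconds() for start, end in scenes)

longest = longest_shot_seconds("suspect_clip.mp4")  # hypothetical file
print(f"longest continuous shot: {longest:.1f} s")
# Much longer than 8 s suggests the clip was not a single raw Veo 3 output;
# shorter shots prove nothing either way.
```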
Eroding trust online

Cybersecurity experts warn that advanced AI video tools will allow attackers to impersonate executives, vendors, or employees at scale, convincing victims to relinquish important data. Nina Brown, a Syracuse University professor who specializes in the intersection of media law and technology, says that while there are other large potential harms, including election interference and the spread of nonconsensual sexually explicit imagery, arguably the most concerning is the erosion of collective online trust. “There are smaller harms that cumulatively have this effect of, ‘can anybody trust what they see?’” she says. “That’s the biggest danger.”

Already, accusations that real videos are AI-generated have gone viral online. One post on X, which received 2.4 million views, accused a Daily Wire journalist of sharing an AI-generated video of an aid distribution site in Gaza; a BBC journalist later confirmed that the video was authentic. Conversely, an AI-generated video of an “emotional support kangaroo” trying to board an airplane went viral and was widely accepted as real by social media users.

Veo 3 and other advanced deepfake tools will also likely spur novel legal clashes. Issues around copyright have already flared up, with AI labs including Google being sued by artists for allegedly training on their copyrighted content without authorization. (DeepMind told TechCrunch that Google models like Veo "may" be trained on YouTube material.) Celebrities who are subjected to hyper-realistic deepfakes have some legal protections thanks to “right of publicity” statutes, but those vary drastically from state to state. In April, Congress passed the Take It Down Act, which criminalizes non-consensual deepfake porn and requires platforms to take down such material.

Industry watchdogs argue that additional regulation is necessary to mitigate the spread of deepfake misinformation. “Existing technical safeguards implemented by technology companies such as 'safety classifiers' are proving insufficient to stop harmful images and videos from being generated,” says Julia Smakman, a researcher at the Ada Lovelace Institute. “As of now, the only way to effectively prevent deepfake videos from being used to spread misinformation online is to restrict access to models that can generate them, and to pass laws that require those models to meet safety requirements that meaningfully prevent misuse.”
• You Can Now Make AI Videos with Audio Thanks to Veo3: A New Era of Scams Awaits


    Published: May 27, 2025

    Key Takeaways

With Google’s Veo3, you can now render AI videos complete with dialogue, background tracks, and sound effects.
This also makes it easier for scammers to design deepfake scams that defraud innocent citizens.
    Users need to exercise self-vigilance to protect themselves. Developers’ responsibilities and government regulations will also play a key part.

    Google recently launched Veo3, an AI tool that lets you create videos with audio, including background tracks and various sound effects. Until recently, you could either use voice cloning apps to build AI voices or video rendering apps to generate AI videos. However, thanks to Veo3, folks can now create entire videos with audio.
    While this is an exciting development, we can’t help but think how easy it would be for scammers and swindlers to use Veo3’s videos to scam people.
A video posted by a user on Threads shows a TV anchor breaking the news that ‘Secretary of Defence Pete Hegseth has died after drinking an entire litre of vodka on a dare by RFK.’ At first glance, the video is extremely convincing, and chances are that quite a few people believed it. After all, the quality is that of a professional news studio, with the Pentagon in the background.
    Another user named Ari Kuschnir posted a 1-minute 16-second video on Reddit showing various characters in different settings talking to each other in various accents. The facial expressions are very close to those of a real human.
A user commented, ‘Wow. The things that are coming. Gonna be wild!’ The ‘wild’ part is that the gap between reality and AI-generated content is closing daily. And remember, this is only the first version of this brand-new technology; things will only get worse from here.
    New AI Age for Scammers
    With the development of generative AI, we have already seen countless examples of people losing millions to such scams. 
For example, in January 2024, an employee of a Hong Kong firm sent $25M to fraudsters who convinced her that she was talking to the firm’s CFO on a video call. Deloitte’s Center for Financial Services has predicted that generative AI could lead to a loss of $40B in the US alone by 2027, growing at a CAGR of 32%.
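To see what that projection implies, the compound-growth arithmetic can be checked directly. The article gives only the endpoint and the growth rate, so this sketch backs out the implied starting figure, assuming 2023 as the base year (an assumption, since the base year isn't stated above):

```python
# CAGR arithmetic behind the Deloitte projection cited above.
# future = base * (1 + rate) ** years  =>  base = future / (1 + rate) ** years
rate, years, future = 0.32, 4, 40e9  # 32% CAGR, assumed 2023 -> 2027, $40B endpoint

implied_base = future / (1 + rate) ** years
print(f"Implied base-year losses: ${implied_base / 1e9:.1f}B")  # ~$13.2B

# Forward check: compounding that base at 32% for four years recovers ~$40B.
print(f"Forward check: ${implied_base * (1 + rate) ** years / 1e9:.0f}B")
```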
Until now, scammers also had to go to the trouble of generating audio and video separately and syncing them to compile a ‘believable’ video. However, advanced AI tools like Veo3 make it easier for bad actors to catch innocent people off guard.

In what has been called the internet’s biggest scam so far, an 82-year-old retiree, Steve Beauchamp, lost $690,000 after he invested his retirement savings in an investment scheme. The AI-generated video showed Elon Musk talking about this investment and saying that everyone looking to make money should invest in the scheme.
    In January 2024, sexually explicit images of Taylor Swift were spread on social media, drawing a lot of legislative attention to the matter. Now, imagine what these scammers can do with Veo3-like technology. Making deepfake porn would become easier and faster, leading to a lot of extortion cases.
It’s worth noting, though, that we’re not saying Veo3 specifically will be used for such criminal activities; it has several safeguards in place. However, now that Veo3 has shown what is possible, similar products could be built for malicious use cases.
    How to Protect Yourself
Protecting yourself against AI-generated content requires a multifaceted approach built on three key pillars: self-vigilance, developers’ responsibilities, and government regulations.
Self-Vigilance
It is still possible to figure out which video is made via AI and which is genuine. Sure, AI has grown leaps and bounds in the last two years, and we now have something as advanced as Veo3. However, there are still a few telltale signs of an AI-generated video.

    The biggest giveaway is the lip sync. If you see a video of someone speaking, pay close attention to their lips. The audio in most cases will be out of sync by a few milliseconds.
    The voice, in most cases, will also sound robotic or flat. The tone and pitch might be inconsistent without any natural breathing sounds.
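That lip-sync check can be roughly automated: estimate the lag between a per-frame mouth-movement signal and the audio loudness envelope, and flag clips where the two don’t line up. The sketch below is a minimal outline of the idea; the two synthetic arrays stand in for signals you would in practice extract with a face-landmark model and an audio library, so treat it as an illustration rather than a working detector:

```python
# Estimate audio/video lag by cross-correlating a per-frame "mouth openness"
# signal with the audio loudness envelope sampled at the same frame rate.
# The synthetic inputs below are placeholders for real extracted signals.
import numpy as np

FPS = 25  # assumed shared sampling rate of both signals, frames per second

def estimated_lag_ms(mouth: np.ndarray, audio: np.ndarray, fps: int = FPS) -> float:
    """Lag (in ms) at which the audio best aligns with the mouth movement."""
    mouth = (mouth - mouth.mean()) / (mouth.std() + 1e-9)  # normalize
    audio = (audio - audio.mean()) / (audio.std() + 1e-9)
    corr = np.correlate(audio, mouth, mode="full")
    lag_frames = int(corr.argmax()) - (len(mouth) - 1)  # 0 = perfectly in sync
    return 1000.0 * lag_frames / fps

rng = np.random.default_rng(0)
mouth = rng.random(250)    # 10 seconds of fake per-frame mouth movement
audio = np.roll(mouth, 3)  # same signal delayed by 3 frames (= 120 ms)
print(f"estimated lag: {estimated_lag_ms(mouth, audio):+.0f} ms")  # ~ +120 ms
```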

    We also recommend that you only trust official sources of information and not any random video you find while scrolling Instagram, YouTube, or TikTok. For example, if you see Elon Musk promoting an investment scheme, look for the official page or website of that scheme and dig deeper to find out who the actual promoters are. 
More often than not, you won’t find anything reliable or trustworthy backing it up. This exercise takes only a couple of minutes but can end up saving you thousands of dollars.
Developers’ Responsibilities
    AI developers are also responsible for ensuring their products cannot be misused for scams, extortion, and misinformation. For example, Veo3 blocks prompts that violate responsible AI guidelines, such as those involving politicians or violent acts. 
Google has also developed its SynthID watermarking system, which marks content generated using Google’s AI tools. People can use the SynthID Detector to verify whether a particular piece of content was generated using AI.

However, these safeguards are currently limited to Google’s products. There’s a need for similar, if not better, prevention systems across the industry moving forward.
    Government Regulations
Lastly, governments need to play a crucial role in regulating the use of artificial intelligence. For example, the EU has already passed the AI Act, with enforcement beginning in 2025. Under it, companies must meet stringent documentation, transparency, and oversight standards for all high-risk AI systems.
Even in the US, several laws have been proposed. For instance, the DEEPFAKES Accountability Act would require AI-generated content depicting any person to include a clear disclaimer stating that it is a deepfake. The bill was introduced in the House of Representatives in September 2023 and is currently under consideration.
    Similarly, the REAL Political Advertisements Act would require political ads that contain AI content to include a similar disclaimer.
    That said, we are still only in the early stages of formulating legislation to regulate AI content. With time, as more sophisticated and advanced artificial intelligence tools develop, lawmakers must also be proactive in ensuring digital safety.

  • Google Launches SynthID Detector – A Revolutionary AI Detection Tool. Is This the Beginning of Responsible AI Development?


    Published: May 22, 2025

    Key Takeaways

    Google has introduced SynthID Detector, a powerful tool that can detect AI-generated content.
    It works by identifying SynthID-generated watermarks in content served up by Google AI tools, such as Imagen, Gemini, and Lyria.
    The detector is currently in the testing phase and only available for use by joining a waitlist.
    SynthID Detector is also open-source, allowing anyone to build on the tech architecture.

    Google has launched SynthID Detector, a tool that can recognize any content generated through the Google suite of AI tools.
    SynthID, in case you didn’t know, is a state-of-the-art watermarking tool launched by Google in August 2023. This technology adds a watermark to AI-generated content that is invisible to the naked eye.
    Initially, SynthID was launched only for AI-generated images, but it has now been extended to text, video, and audio content generated using tools like Imagen, Gemini, Lyria, and Veo.
    The detector uses this SynthID watermarking to identify AI content. When you upload an image, audio, or video to the detector tool, it’ll look for this watermark. If it finds one, it’ll highlight the part of the content that is most likely to be watermarked.
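    SynthID’s actual embedding scheme is proprietary and engineered to survive edits and compression, so the following is only a toy stand-in: a least-significant-bit watermark in NumPy that illustrates the embed-then-detect workflow described above. The payload and function names are our own, not Google’s API.

        import numpy as np

        # Toy 40-bit payload; SynthID's real payload and embedding are proprietary.
        PAYLOAD = np.unpackbits(np.frombuffer(b"SYNTH", dtype=np.uint8))

        def embed(pixels: np.ndarray) -> np.ndarray:
            """Hide the payload in the least significant bits of the first 40 pixel values."""
            out = pixels.flatten().copy()
            out[:PAYLOAD.size] = (out[:PAYLOAD.size] & 0xFE) | PAYLOAD
            return out.reshape(pixels.shape)

        def detect(pixels: np.ndarray) -> bool:
            """Report whether the toy payload is present in the image."""
            lsbs = pixels.flatten()[:PAYLOAD.size] & 1
            return bool(np.array_equal(lsbs, PAYLOAD))

        image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in "AI image"
        print(detect(image), detect(embed(image)))  # almost surely False, then True

    Unlike this toy, which a single re-encode or crop would destroy, SynthID is designed to survive common transformations, which is precisely what makes it useful for the verification workflow the detector offers.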
    It’s worth noting, though, that the SynthID Detector is currently in the testing phase. Google has released a waitlist form for researchers, journalists, and media professionals.

    Google has also partnered with NVIDIA to watermark videos generated by the NVIDIA Cosmos AI model. More importantly, Google announced a partnership with GetReal Security, a pioneer in deepfake detection that has raised around $17.5 million in equity funding.
    We’re likely to see an increasing number of such partnerships from Google’s end, meaning SynthID Detector’s scope will keep broadening. So, you’ll be able to detect not just Google-generated AI content but also content generated with other AI platforms.
    The Need for SynthID Detector
    Notwithstanding all of the benefits that artificial intelligence has brought us, it has also become a powerful tool in the hands of criminals. We have seen hundreds of incidents where innocent people were scammed or threatened using AI-generated content.
    For example, on May 13, Sandra Rogers, a Lackawanna County woman, was found guilty of possessing AI-generated child sex abuse images. In another incident, a 17-year-old extorted personal information from 19 victims by creating sexually explicit deepfakes and threatening to leak them.
    A man in China was scammed out of $622,000 by a fraudster who used an AI-generated voice on the phone to impersonate the man’s friend. Similar scams have become common in the US and even in countries like India that aren’t at the forefront of AI technology.
    In addition to crimes against individuals, AI is also being used to stir political unrest. For instance, a consultant was fined $6 million for sending fake robocalls during the 2024 US presidential primaries. He used AI to mimic Joe Biden’s voice and urged voters in New Hampshire not to vote in the state’s Democratic primary.
    Back in 2022, a fake video of Ukrainian President Zelensky was broadcast on Ukraine 24, a Ukrainian news outlet whose website was allegedly hacked. The fake AI video showed Zelensky apparently surrendering to Russia and ‘laying down arms.’
    This is only the tip of the iceberg. The internet is filled with such cases, and new ones surface almost every day. AI is increasingly being weaponized against institutions, governments, and the social order to stoke political and social unrest.

    Therefore, a tool like SynthID Detector can be a beacon of hope. Newsrooms, publications, and regulators can run a suspect image, video, or audio clip through the detector to verify a story before putting it in front of millions of viewers.
    Just as importantly, tools like SynthID should instill some fear in criminals, who will know their fabrications can be exposed at any time.
    And What About the Legal Grey Area of AI Usage?
    Besides the above outright illegal use of AI, there’s also a moral dilemma attached to increasing AI use. Educators are specifically worried about the use of LLMs and text-generating AI models in schools, colleges, and universities.
    Instead of putting in the hard yards, students now just punch in a couple of prompts to generate detailed, human-like essays and assignments. One University of Pennsylvania study split students into two groups: one with access to ChatGPT and one without any such LLM tools.
    While practicing with ChatGPT, students solved 48% more math problems correctly. But on a subsequent test taken without the tool, the ChatGPT group solved 17% fewer problems than the students who had never used it.
    This suggests that leaning on LLMs isn’t contributing to learning and academic development. They become tools to simply ‘complete tasks,’ slowly eroding our ability to think.
    Another study, ‘AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking,’ found that people aged 17-25 show both the highest AI usage and the lowest critical-thinking scores. Coincidence? We don’t think so.
    Clearly, heavy use of AI tools isn’t helping young minds develop. Instead, it has become a crutch for people who wish to cut corners.
    We call this a moral dilemma because using AI tools, for education or any other purpose, is not illegal. It’s more of a conscious decision to let go of our own critical thinking, which, as most would argue, is what makes us human.
    Contemporary AI Detectors Are Worthless
    Because students are using AI to outsource both their work and their critical thinking, it’s understandable that educational institutions have resorted to AI detectors to screen student submissions and assignments for AI-generated content.
    The trouble is that these AI detectors are barely more accurate than guesswork.
    Christopher Penn, an AI expert, made a post on LinkedIn titled ‘AI Detectors are a joke.’ He fed the US Declaration of Independence to a ‘market-leading’ AI detector, and guess what? Apparently, our forefathers used AI to pen 97% of the Declaration. Time travel?

    The inaccurate results from these detectors stem from their use of parameters such as perplexity and burstiness to analyze texts. Consequently, if you write an article that sounds somewhat robotic, lacks vocabulary variety, and features similar line lengths, these ‘AI detectors’ may classify your work as that of an AI language model.
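    To see why such heuristics misfire, here is a self-contained sketch of the two measures, our own crude illustration rather than any vendor’s method. Commercial detectors estimate perplexity from a language model’s token probabilities; the unigram stand-in below preserves the core idea, which is that repetitive wording and uniform sentence lengths read as ‘AI-like’ no matter who wrote the text.

        import math
        import re
        from collections import Counter

        def pseudo_perplexity(text: str) -> float:
            """Unigram stand-in for model-based perplexity: lower = more repetitive, 'AI-like'."""
            words = re.findall(r"[a-z']+", text.lower())
            counts, total = Counter(words), len(words)
            # Average negative log-probability of each word under the text's own unigram model.
            nll = -sum(math.log(counts[w] / total) for w in words) / total
            return math.exp(nll)

        def burstiness(text: str) -> float:
            """Std. dev. of sentence lengths: near zero = uniform sentences, flagged as 'AI-like'."""
            lengths = [len(s.split()) for s in re.split(r"[.!?]+", text) if s.strip()]
            mean = sum(lengths) / len(lengths)
            return math.sqrt(sum((n - mean) ** 2 for n in lengths) / len(lengths))

        uniform = "The tool is fast. The tool is safe. The tool is new. The tool is good."
        varied = ("Detectors misfire. A careful human writer with a plain, even style "
                  "will score exactly like a machine, which is the whole problem.")
        print(pseudo_perplexity(uniform), burstiness(uniform))  # low on both counts
        print(pseudo_perplexity(varied), burstiness(varied))    # higher on both

    Flip that around and the failure mode is obvious: a formulaic but entirely human document, such as an eighteenth-century declaration full of parallel clauses, lands on the ‘AI’ side of the threshold.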
    Bottom line: these tools are not reliable, which is likely why OpenAI discontinued its own AI text classifier in mid-2023, citing its low accuracy. The sad part is that much of the education system still relies on such tools to make major decisions, including student expulsions and suspensions.
    This is exactly why we need a better and more reliable tool to call out AI-generated content. Enter SynthID Detector.
    SynthID Detector Is Open-Source
    Possibly the biggest piece of positive news with regard to Google’s SynthID Detector announcement is that the tool has been kept open source. This will allow other companies and creators to build on the existing architecture and incorporate AI watermark detection in their own artificial intelligence models.
    Remember, SynthID Detector currently only works for Google’s AI tools, which is just a small part of the whole artificial intelligence market. So, if someone generates a text using ChatGPT, there’s still no reliable way to tell if it was AI-generated.
    Maybe that’s why Google has kept the detector open-source, hoping that other developers would take a cue from it.
    All in all, it’s commendable that Google hasn’t gatekept this essential development. Other companies concerned about the increasing misuse of their AI models should follow suit and contribute to the greater good of making AI safe for society.

  • Live Updates From Google I/O 2025

    I wish I was making this stuff up, but chaos seems to follow me at all tech events. After waiting an hour to try out Google’s hyped-up Android XR smart glasses for five minutes, I was given a three-minute demo, in which I had roughly 90 seconds to actually use Gemini in an extremely controlled environment. And if you watch the video in my hands-on write-up below, you’ll see that I spent even less time with it because Gemini fumbled a few times in the beginning. Oof. I really hope there’s another chance to try them again because it was just too rushed. I think it might be the most rushed product demo I’ve ever had in my life, and I’ve been covering new gadgets for the past 15 years. —Raymond Wong

    Google, a company valued in the trillions, seemingly brought one pair of Android XR smart glasses for press to demo… and one pair of Samsung’s Project Moohan mixed reality headset running the same augmented reality platform. I’m told the wait is 1 hour to try either device for 5 minutes. Of course, I’m going to try out the smart glasses. But if I want to demo Moohan, I need to get back in line and wait all over again. This is madness! —Raymond Wong

    May 20 — Keynote Fin

    Talk about a loooooong keynote. Total duration: 1 hour and 55 minutes, and then Sundar Pichai walked off stage. What do you make of all the AI announcements? Let’s hang in the comments! I’m headed over to a demo area to try out a pair of Android XR smart glasses. I can’t lie, even though the video stream from the live demo lagged for a good portion, I’m hyped! It really feels like Google is finally delivering on Google Glass over a decade later. Shoulda had Google co-founder Sergey Brin jump out of a helicopter and land on stage again, though. —Raymond Wong

    Pieces of Project Astra, Google’s computer vision-based UI, are winding up in various different products, it seems, and not all of them are geared toward smart glasses specifically. One of the most exciting updates to Astra is “computer control,” which allows one to do a lot more on their devices with computer vision alone. For instance, you could just point your phone at an object, say a bike, and then ask Astra to search for the bike, find some brakes for it, and then even pull up a YouTube tutorial on how to fix it—all without typing anything into your phone. —James Pero

    Shopping bots aren’t just for scalpers anymore. Google is putting the power of automated consumerism in your hands with its new AI shopping tool. There are some pretty wild ideas here, too, including a virtual shopping avatar that’s supposed to represent your own body—the idea is you can make it try on clothes to see how they fit. How all that works in practice is TBD, but if you’re ready for a full AI shopping experience, you’ve finally got it. For the whole story, check out our story from Gizmodo’s Senior Editor, Consumer Tech, Raymond Wong. —James Pero

    I got what I wanted. Google showed off what its Android XR tech can bring to smart glasses. In a live demo, Google showcased how a pair of unspecified smart glasses did a few of the things that I’ve been waiting to do, including projecting live navigation and remembering objects in your environment—basically the stuff that it pitched with Project Astra last year, but in a glasses form factor. There’s still a lot that needs to happen, both hardware- and software-wise, before you can walk around wearing glasses that actually do all those things, but it was exciting to see that Google is making progress in that direction. It’s worth noting that not all of the demos went off smoothly—there was lots of stutter in the live translation demo—but I guess props to them for giving it a go. When we’ll actually get to walk around wearing functional smart glasses with some kind of optical passthrough or virtual display is anyone’s guess, but the race is certainly heating up. —James Pero

    Google’s SynthID has been around for nearly two years, but it’s been largely kept out of the public eye. The system imprints AI-generated images, video, or audio with an invisible watermark that can be read with Google DeepMind’s proprietary tool. At I/O, Google said it was working with both Nvidia and GetReal to introduce the same watermarking technique with those companies’ AI image generators. Users may be able to detect these watermarks themselves, even if only part of the media was modified with AI. Early testers are getting access to it “today,” but hopefully more people can access it at a later date from labs.google/synthid. —Kyle Barr

    This keynote has been going on for 1.5 hours now. Do I run to the restroom now or wait? But how much longer until it ends??? Can we petition Sundar Pichai to make these keynotes shorter or at least have an intermission? Update: I ran for it right near the end before the Android XR news hit. I almost made it… —Raymond Wong

    Google’s new video generator, Veo, is getting a big upgrade that includes sound generation, and it’s not just dialogue. Veo 3 can also generate sound effects and music. In a demo, Google showed off an animated forest scene that includes all three—dialogue, sound effects, and video. The length of clips, I assume, will be short at first, but the results look pretty sophisticated if the demo is to be believed. —James Pero

    If you pay for a Google One subscription, you’ll start to see Gemini in your Google Chrome browser later this week. It will appear as the sparkle icon at the top of your browser app. You can use it to bring up a prompt box and ask a question about the page you’re browsing, such as if you want to consolidate a number of user reviews for a local campsite. —Kyle Barr

    Google’s high-tech video conferencing tech, now called Beam, looks impressive. You can make eye contact! It feels like the person on the screen is right in front of you! It’s glasses-free 3D! Come back down to Earth, buddy—it’s not coming out as a consumer product. Commercial first, with partners like HP. Time to apply for a new job? —Raymond Wong

    Google doesn’t want Search to be tied to your browser or apps anymore. Search Live is akin to the video and audio comprehension capabilities of Gemini Live, but with the added benefit of getting quick answers based on sites from around the web. Google showed how Search Live could comprehend queries about at-home science experiments and bring in answers from sites like Quora or YouTube. —Kyle Barr

    Google is getting deep into augmented reality with Android XR—its operating system built specifically for AR glasses and VR headsets. Google showed us how users may be able to see a holographic live Google Maps view directly on their glasses or set up calendar events, all without needing to touch a single screen. This uses Gemini AI to comprehend your voice prompts and follow through on your instructions. Google doesn’t have its own device to share at I/O, but it’s planning to work with companies like Xreal and Samsung to craft new devices across both AR and VR. —Kyle Barr

    I know how much you all love subscriptions! Google does too, apparently, and is now offering a monthly AI bundle that groups some of its most advanced AI services. Subscribing to Google AI Ultra will get you: Gemini and its full capabilities; Flow, a new, more advanced AI filmmaking tool based on Veo; Whisk, which allows text-to-image creation; NotebookLM, an AI note-taking app; Gemini in Gmail and Docs; Gemini in Chrome; Project Mariner, an agentic research AI; and 30TB of storage. I’m not sure who needs all of this, but maybe there are more AI superusers than I thought. —James Pero

    Google CEO Sundar Pichai was keen to claim that users are big, big fans of AI Overviews in Google Search results. If there wasn’t already enough AI in your search bar, Google will now stick an entire “AI Mode” tab on your search bar next to the Google Lens button, powered by the Gemini 2.5 model. It opens an entirely new UI for searching via a prompt with a chatbot. After you input your rambling search query, it will bring up an assortment of short-form textual answers, links, and even a Google Maps widget, depending on what you were looking for. AI Mode should be available starting today. Google said AI Mode pulls together information from the web alongside its other data, like weather or academic research through Google Scholar. It should also eventually encompass your “personal context,” which will be available later this summer. Eventually, Google will add more AI Mode capabilities directly to AI Overviews. —Kyle Barr

    May 20 — News Embargo Has Lifted!

    Get your butt over to Gizmodo.com’s home page because the Google I/O news embargo just lifted. We’ve got a bunch of stories, including this one about Google partnering up with Xreal for a new pair of “optical see-through” smart glasses called Project Aura. The smart glasses run Android XR and are powered by a Qualcomm chip. You can see three cameras. Wireless, these are not—you’ll need to tether to a phone or other device. Update: Little scoop: I’ve confirmed that Project Aura has a 70-degree field of view, way wider than the Xreal One Pro’s 57-degree FOV. —Raymond Wong

    Google DeepMind’s CEO showed off the updated version of Project Astra running on a phone and drove home how its “personal, proactive, and powerful” AI features are the groundwork for a “universal assistant” that truly understands and works on your behalf. If you think Gemini is a fad, it’s time to get familiar with it because it’s not going anywhere. —Raymond Wong

    May 20 — Gemini 2.5 Pro Is Here

    Google says Gemini 2.5 Pro is its “most advanced model yet,” and comes with “enhanced reasoning,” better coding ability, and can even create interactive simulations. You can try it now via Google AI Studio. —James Pero

    There are two major types of transformer AI in use today: the LLM, aka the large language model, and the diffusion model, which is mostly used for image generation. The Gemini Diffusion model blurs the line between the two. Google said its new research model can iterate on a solution quickly and correct itself while generating an answer. For math or coding prompts, Gemini Diffusion can potentially output an entire response much faster than a typical chatbot. Unlike a traditional LLM, which may take a few seconds to answer a question, Gemini Diffusion can create a response to a complex math equation in the blink of an eye, and still share the steps it took to reach its conclusion. —Kyle Barr

    New Gemini 2.5 Flash and Gemini Pro models are incoming and, naturally, Google says both are faster and more sophisticated across the board. One of the improvements for Gemini 2.5 Flash is even more inflection when speaking. Unfortunately for my ears, Google demoed the new Flash speaking in a whisper that sent chills down my spine. —James Pero

    Is anybody keeping track of how many times Google execs have said “Gemini” and “AI” so far? Oops, I think I’m already drunk, and we’re only 20 minutes in. —Raymond Wong

    Google’s Project Astra is supposed to be getting much better at avoiding hallucinations, aka when the AI makes stuff up. Project Astra’s vision and audio comprehension capabilities are supposed to be far better at knowing when you’re trying to trick it. In a video, Google showed how its Gemini Live AI wouldn’t buy your bullshit if you tell it that a garbage truck is a convertible, a lamp pole is a skyscraper, or your shadow is some stalker. This should hopefully mean the AI doesn’t confidently lie to you, as well. Google CEO Sundar Pichai said “Gemini is really good at telling you when you’re wrong.” These enhanced features should be rolling out today for the Gemini app on iOS and Android. —Kyle Barr

    May 20 — Release the Agents

    Like pretty much every other AI player, Google is pursuing agentic AI in a big way. I’d prepare for a lot more talk about how Gemini can take tasks off your hands as the keynote progresses. —James Pero

    Google has finally moved Project Starline—its futuristic video-calling machine—into a commercial product called Google Beam. According to Pichai, Google Beam can take a 2D image and transform it into a 3D one, and will also incorporate live translation. —James Pero

    Google’s CEO, Sundar Pichai, says Google is shipping at a relentless pace, and to be honest, I tend to agree. There are tons of Gemini models out there already, even though it’s only been out for two years. Probably my favorite milestone, though, is that Gemini has now completed Pokémon Blue, earning all 8 badges, according to Pichai. —James Pero

    May 20 — Let’s Do This

    Buckle up, kiddos, it’s I/O time. Methinks there will be a lot to get to, so you may want to grab a snack now. —James Pero

    Counting down until the keynote… only a few more minutes to go. The DJ just said AI is changing music and how it’s made. But don’t forget that we’re all here… in person. Will we all be wearing Android XR smart glasses next year? Mixed reality headsets? —Raymond Wong

    Fun fact: I haven’t attended Google I/O in person since before Covid-19. The Wi-Fi is definitely stronger and more stable now. It’s so great to be back and covering for Gizmodo. Dream job, unlocked! —Raymond Wong

    Mini breakfast burritos… bagels… but these bagels can’t compare to real Made In New York City bagels with that authentic NY water 😏 —Raymond Wong

    I’ve arrived at the Shoreline Amphitheatre in Mountain View, Calif., where the Google I/O keynote is taking place in 40 minutes. Seats are filling up. But first, must go check out the breakfast situation because my tummy is growling… —Raymond Wong

    May 20 — Should We Do a Giveaway?

    Google I/O attendees get a special tote bag, a metal water bottle, a cap, and a cute sheet of stickers. I always end up donating this stuff to Goodwill during the holidays. A guy living in NYC with two cats only has so much room for tote bags and water bottles… Would be cool to do a giveaway. Leave a comment to let us know if you’d be into that, and I can pester top brass to make it happen 🤪 —Raymond Wong

    May 20 — Got My Press Badge!

    In 13 hours, Google will blitz everyone with Gemini AI, Gemini AI, and tons more Gemini AI. Who’s ready for… Gemini AI? —Raymond Wong

    May 19 — Google Glass: The Redux

    Google is very obviously inching toward the release of some kind of smart glasses product for the first time since Google Glass, and if I were a betting man, I’d say this one will have a much warmer reception than its forebear. I’m not saying Google can snatch the crown from Meta and its Ray-Ban smart glasses right out of the gate, but if it plays its cards right, it could capitalize on the integration with its other hardware in a big way. Meta may finally have a real competitor on its hands. ICYMI: Here’s Google’s President of the Android Ecosystem, Sameer Samat, teasing some kind of smart glasses device in a recorded demo last week. —James Pero

    Hi folks, I’m James Pero, Gizmodo’s new Senior Writer. There’s a lot we have to get to with Google I/O, so I’ll keep this introduction short. I like long walks on the beach, the wind in my nonexistent hair, and I’m really, really looking forward to bringing you even more of the spicy, insightful, and entertaining coverage on consumer tech that Gizmodo is known for. I’m starting my tenure here out hot with Google I/O, so make sure you check back here throughout the week to get those sweet, sweet blogs and commentary from me and Gizmodo’s Senior Consumer Tech Editor Raymond Wong. —James Pero

    Hey everyone! Raymond Wong, senior editor in charge of Gizmodo’s consumer tech team, here! Landed in San Francisco, and I’ll be making my way over to Mountain View, California, later today to pick up my press badge and scope out the scene for tomorrow’s Google I/O keynote, which kicks off at 1 p.m. ET / 10 a.m. PT. Google I/O is a developer conference, but that doesn’t mean it’s news only for engineers. While there will be a lot of nerdy stuff that will have developers hollering, what Google announces—expect updates on Gemini AI, Android, and Android XR, to name a few headliners—will shape consumer products for the rest of this year and the years to come. I/O is a glimpse at Google’s technology roadmap as AI weaves itself into the way we compute at our desks and on the go. This is going to be a fun live blog! —Raymond Wong
    #live #updates #google
    Live Updates From Google I/O 2025 🔴
    © Gizmodo I wish I was making this stuff up, but chaos seems to follow me at all tech events. After waiting an hour to try out Google’s hyped-up Android XR smart glasses for five minutes, I was actually given a three-minute demo, where I actually had 90 seconds to use Gemini in an extremely controlled environment. And actually, if you watch the video in my hands-on write-up below, you’ll see that I spent even less time with it because Gemini fumbled a few times in the beginning. Oof. I really hope there’s another chance to try them again because it was just too rushed. I think it might be the most rushed product demo I’ve ever had in my life, and I’ve been covering new gadgets for the past 15 years. —Raymond Wong Google, a company valued at trillion, seemingly brought one pair of Android XR smart glasses for press to demo… and one pair of Samsung’s Project Moohan mixed reality headset running the same augmented reality platform. I’m told the wait is 1 hour to try either device for 5 minutes. Of course, I’m going to try out the smart glasses. But if I want to demo Moohan, I need to get back in line and wait all over again. This is madness! —Raymond Wong May 20Keynote Fin © Raymond Wong / Gizmodo Talk about a loooooong keynote. Total duration: 1 hour and 55 minutes, and then Sundar Pichai walked off stage. What do you make of all the AI announcements? Let’s hang in the comments! I’m headed over to a demo area to try out a pair of Android XR smart glasses. I can’t lie, even though the video stream from the live demo lagged for a good portion, I’m hyped! It really feels like Google is finally delivering on Google Glass over a decade later. Shoulda had Google co-founder Sergey Brin jump out of a helicopter and land on stage again, though. —Raymond Wong Pieces of Project Astra, Google’s computer vision-based UI, are winding up in various different products, it seems, and not all of them are geared toward smart glasses specifically. One of the most exciting updates to Astra is “computer control,” which allows one to do a lot more on their devices with computer vision alone. For instance, you could just point your phone at an objectand then ask Astra to search for the bike, find some brakes for it, and then even pull up a YouTube tutorial on how to fix it—all without typing anything into your phone. —James Pero Shopping bots aren’t just for scalpers anymore. Google is putting the power of automated consumerism in your hands with its new AI shopping tool. There are some pretty wild ideas here, too, including a virtual shopping avatar that’s supposed to represent your own body—the idea is you can make it try on clothes to see how they fit. How all that works in practice is TBD, but if you’re ready for a full AI shopping experience, you’ve finally got it. For the whole story, check out our story from Gizmodo’s Senior Editor, Consumer Tech, Raymond Wong. —James Pero I got what I wanted. Google showed off what its Android XR tech can bring to smart glasses. In a live demo, Google showcased how a pair of unspecified smart glasses did a few of the things that I’ve been waiting to do, including projecting live navigation and remembering objects in your environment—basically the stuff that it pitched with Project Astra last year, but in a glasses form factor. There’s still a lot that needs to happen, both hardware and software-wise, before you can walk around wearing glasses that actually do all those things, but it was exciting to see that Google is making progress in that direction. 
It’s worth noting that not all of the demos went off smoothly—there was lots of stutter in the live translation demo—but I guess props to them for giving it a go. When we’ll actually get to walk around wearing functional smart glasses with some kind of optical passthrough or virtual display is anyone’s guess, but the race is certainly heating up. —James Pero Google’s SynthID has been around for nearly three years, but it’s been largely kept out of the public eye. The system disturbs AI-generated images, video, or audio with an invisible, undetectable watermark that can be observed with Google DeepMind’s proprietary tool. At I/O, Google said it was working with both Nvidia and GetReal to introduce the same watermarking technique with those companies’ AI image generators. Users may be able to detect these watermarks themselves, even if only part of the media was modified with AI. Early testers are getting access to it “today,” but hopefully more people can acess it at a later date from labs.google/synthid. — Kyle Barr This keynote has been going on for 1.5 hours now. Do I run to the restroom now or wait? But how much longer until it ends??? Can we petiton to Sundar Pichai to make these keynotes shorter or at least have an intermission? Update: I ran for it right near the end before Android XR news hit. I almost made it… —Raymond Wong © Raymond Wong / Gizmodo Google’s new video generator Veo, is getting a big upgrade that includes sound generation, and it’s not just dialogue. Veo 3 can also generate sound effects and music. In a demo, Google showed off an animated forest scene that includes all three—dialogue, sound effects, and video. The length of clips, I assume, will be short at first, but the results look pretty sophisticated if the demo is to be believed. —James Pero If you pay for a Google One subscription, you’ll start to see Gemini in your Google Chrome browserlater this week. This will appear as the sparkle icon at the top of your browser app. You can use this to bring up a prompt box to ask a question about the current page you’re browsing, such as if you want to consolidate a number of user reviews for a local campsite. — Kyle Barr © Google / GIF by Gizmodo Google’s high-tech video conferencing tech, now called Beam, looks impressive. You can make eye contact! It feels like the person in the screen is right in front of you! It’s glasses-free 3D! Come back down to Earth, buddy—it’s not coming out as a consumer product. Commercial first with partners like HP. Time to apply for a new job? —Raymond Wong here: Google doesn’t want Search to be tied to your browser or apps anymore. Search Live is akin to the video and audio comprehension capabilities of Gemini Live, but with the added benefit of getting quick answers based on sites from around the web. Google showed how Search Live could comprehend queries about at-home science experiment and bring in answers from sites like Quora or YouTube. — Kyle Barr Google is getting deep into augmented reality with Android XR—its operating system built specifically for AR glasses and VR headsets. Google showed us how users may be able to see a holographic live Google Maps view directly on their glasses or set up calendar events, all without needing to touch a single screen. This uses Gemini AI to comprehend your voice prompts and follow through on your instructions. Google doesn’t have its own device to share at I/O, but its planning to work with companies like XReal and Samsung to craft new devices across both AR and VR. 
— Kyle Barr Read our full report here: I know how much you all love subscriptions! Google does too, apparently, and is now offering a per month AI bundle that groups some of its most advanced AI services. Subscribing to Google AI Ultra will get you: Gemini and its full capabilities Flow, a new, more advanced AI filmmaking tool based on Veo Whisk, which allows text-to-image creation NotebookLM, an AI note-taking app Gemini in Gmail and Docs Gemini in Chrome Project Mariner, an agentic research AI 30TB of storage I’m not sure who needs all of this, but maybe there are more AI superusers than I thought. —James Pero Google CEO Sundar Pichai was keen to claim that users are big, big fans of AI overviews in Google Search results. If there wasn’t already enough AI on your search bar, Google will now stick an entire “AI Mode” tab on your search bar next to the Google Lens button. This encompasses the Gemini 2.5 model. This opens up an entirely new UI for searching via a prompt with a chatbot. After you input your rambling search query, it will bring up an assortment of short-form textual answers, links, and even a Google Maps widget depending on what you were looking for. AI Mode should be available starting today. Google said AI Mode pulls together information from the web alongside its other data like weather or academic research through Google Scholar. It should also eventually encompass your “personal context,” which will be available later this summer. Eventually, Google will add more AI Mode capabilities directly to AI Overviews. — Kyle Barr May 20News Embargo Has Lifted! © Xreal Get your butt over to Gizmodo.com’s home page because the Google I/O news embargo just lifted. We’ve got a bunch of stories, including this one about Google partnering up with Xreal for a new pair of “optical see-through”smart glasses called Project Aura. The smart glasses run Android XR and are powered by a Qualcomm chip. You can see three cameras. Wireless, these are not—you’ll need to tether to a phone or other device. Update: Little scoop: I’ve confirmed that Project Aura has a 70-degree field of view, which is way wider than the One Pro’s FOV, which is 57 degrees. —Raymond Wong © Raymond Wong / Gizmodo Google’s DeepMind CEO showed off the updated version of Project Astra running on a phone and drove home how its “personal, proactive, and powerful” AI features are the groundwork for a “universal assistant” that truly understands and works on your behalf. If you think Gemini is a fad, it’s time to get familiar with it because it’s not going anywhere. —Raymond Wong May 20Gemini 2.5 Pro Is Here © Gizmodo Google says Gemini 2.5 Pro is its “most advanced model yet,” and comes with “enhanced reasoning,” better coding ability, and can even create interactive simulations. You can try it now via Google AI Studio. —James Pero There are two major types of transformer AI used today. One is the LLM, AKA large language models, and diffusion models—which are mostly used for image generation. The Gemini Diffusion model blurs the lines of these types of models. Google said its new research model can iterate on a solution quickly and correct itself while generating an answer. For math or coding prompts, Gemini Diffusion can potentially output an entire response much faster than a typical Chatbot. Unlike a traditional LLM model, which may take a few seconds to answer a question, Gemini Diffusion can create a response to a complex math equation in the blink of an eye, and still share the steps it took to reach its conclusion. 
© Gizmodo New Gemini 2.5 Flash and Gemini 2.5 Pro models are incoming and, naturally, Google says both are faster and more sophisticated across the board. One of the improvements for Gemini 2.5 Flash is even more inflection when speaking. Unfortunately for my ears, Google demoed the new Flash speaking in a whisper that sent chills down my spine. —James Pero Is anybody keeping track of how many times Google execs have said “Gemini” and “AI” so far? Oops, I think I’m already drunk, and we’re only 20 minutes in. —Raymond Wong © Raymond Wong / Gizmodo Google’s Project Astra is supposed to be getting much better at avoiding hallucinations, AKA when the AI makes stuff up. Project Astra’s vision and audio comprehension capabilities are supposed to be far better at knowing when you’re trying to trick it. In a video, Google showed how its Gemini Live AI wouldn’t buy your bullshit if you tell it that a garbage truck is a convertible, a lamp pole is a skyscraper, or your shadow is some stalker. This should hopefully mean the AI doesn’t confidently lie to you, as well. Google CEO Sundar Pichai said, “Gemini is really good at telling you when you’re wrong.” These enhanced features should be rolling out today for the Gemini app on iOS and Android. — Kyle Barr
May 20: Release the Agents Like pretty much every other AI player, Google is pursuing agentic AI in a big way. I’d prepare for a lot more talk about how Gemini can take tasks off your hands as the keynote progresses. —James Pero © Gizmodo Google has finally moved Project Starline—its futuristic video-calling machine—into a commercial product called Google Beam. According to Pichai, Google Beam can take a 2D image and transform it into a 3D one, and will also incorporate live translation. —James Pero © Gizmodo Google’s CEO, Sundar Pichai, says Google is shipping at a relentless pace, and to be honest, I tend to agree. There are tons of Gemini models out there already, even though Gemini has only been out for two years. Probably my favorite milestone, though, is that it has now completed Pokémon Blue, earning all 8 badges, according to Pichai. —James Pero
May 20: Let’s Do This Buckle up, kiddos, it’s I/O time. Methinks there will be a lot to get to, so you may want to grab a snack now. —James Pero Counting down until the keynote… only a few more minutes to go. The DJ just said AI is changing music and how it’s made. But don’t forget that we’re all here… in person. Will we all be wearing Android XR smart glasses next year? Mixed reality headsets? —Raymond Wong © Raymond Wong / Gizmodo Fun fact: I haven’t attended Google I/O in person since before Covid-19. The Wi-Fi is definitely stronger and more stable now. It’s so great to be back and covering for Gizmodo. Dream job, unlocked! —Raymond Wong © Raymond Wong / Gizmodo Mini breakfast burritos… bagels… but these bagels can’t compare to real Made in New York City bagels with that authentic NY water 😏 —Raymond Wong © Raymond Wong / Gizmodo I’ve arrived at the Shoreline Amphitheatre in Mountain View, Calif., where the Google I/O keynote is taking place in 40 minutes. Seats are filling up. But first, I must go check out the breakfast situation because my tummy is growling… —Raymond Wong
May 20: Should We Do a Giveaway? © Raymond Wong / Gizmodo Google I/O attendees get a special tote bag, a metal water bottle, a cap, and a cute sheet of stickers. I always end up donating this stuff to Goodwill during the holidays.
A guy living in NYC with two cats only has so much room for tote bags and water bottles… Would be cool to do a giveaway. Leave a comment to let us know if you’d be into that, and I can pester top brass to make it happen 🤪 —Raymond Wong
May 20: Got My Press Badge! In 13 hours, Google will blitz everyone with Gemini AI, Gemini AI, and tons more Gemini AI. Who’s ready for… Gemini AI? —Raymond Wong
May 19: Google Glass: The Redux © Google / Screenshot by Gizmodo Google is very obviously inching toward the release of some kind of smart glasses product for the first time since (gulp) Google Glass, and if I were a betting man, I’d say this one will have a much warmer reception than its forebear. I’m not saying Google can snatch the crown from Meta and its Ray-Ban smart glasses right out of the gate, but if it plays its cards right, it could capitalize on the integration with its other hardware (hello, Pixel devices) in a big way. Meta may finally have a real competitor on its hands. ICYMI: Here’s Google’s President of the Android Ecosystem, Sameer Samat, teasing some kind of smart glasses device in a recorded demo last week. —James Pero Hi folks, I’m James Pero, Gizmodo’s new Senior Writer. There’s a lot we have to get to with Google I/O, so I’ll keep this introduction short. I like long walks on the beach, the wind in my nonexistent hair, and I’m really, really looking forward to bringing you even more of the spicy, insightful, and entertaining coverage on consumer tech that Gizmodo is known for. I’m starting my tenure here out hot with Google I/O, so make sure you check back here throughout the week to get those sweet, sweet blogs and commentary from me and Gizmodo’s Senior Consumer Tech Editor Raymond Wong. —James Pero © Raymond Wong / Gizmodo Hey everyone! Raymond Wong, senior editor in charge of Gizmodo’s consumer tech team, here! Landed in San Francisco (the sunrise was *chef’s kiss*), and I’ll be making my way over to Mountain View, California, later today to pick up my press badge and scope out the scene for tomorrow’s Google I/O keynote, which kicks off at 1 p.m. ET / 10 a.m. PT. Google I/O is a developer conference, but that doesn’t mean it’s news only for engineers. While there will be a lot of nerdy stuff that will have developers hollering, what Google announces—expect updates on Gemini AI, Android, and Android XR, to name a few headliners—will shape consumer products (hardware, software, and services) for the rest of this year and the years to come. I/O is a glimpse at Google’s technology roadmap as AI weaves itself into the way we compute at our desks and on the go. This is going to be a fun live blog! —Raymond Wong
  • Google has a new tool to help detect AI-generated content

Google announced a new SynthID Detector tool at Google I/O that lets you check if content has been made with the assistance of Google’s AI tools. In a blog post, Google DeepMind’s Pushmeet Kohli describes SynthID Detector as “a verification portal” that can “quickly and efficiently identify AI-generated content made with Google AI.” It’s also able to “highlight which parts of the content are more likely to have been watermarked with SynthID.” SynthID watermarks are applied to AI-generated images, text, audio, and videos, including content generated by Google’s Gemini, Imagen, Lyria, and Veo models, Kohli says.
Here’s how the tool works, according to Kohli: When you upload an image, audio track, video, or piece of text created using Google’s AI tools, the portal will scan the media for a SynthID watermark. If a watermark is detected, the portal will highlight specific portions of the content most likely to be watermarked. For audio, the portal pinpoints specific segments where a SynthID watermark is detected, and for images, it indicates areas where a watermark is most likely.
Google is starting to roll out the tool to “early testers,” Kohli says in the post. “Following the initial testing phase, the portal will gradually be rolled out to users who sign up to the waitlist to gain access to the SynthID Detector,” Kohli tells The Verge. “We will take learnings from this cohort of professionals and work to implement content transparency more broadly.” I’m on the waitlist, but I haven’t tested the tool myself, so I can’t vouch for how well it might work. And will people actually use it when it’s widely available? I hope so, but we’ll have to wait and see.
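For intuition about how a watermark can be invisible yet reliably detectable, here is a minimal toy in Python: a classic spread-spectrum scheme on a random grayscale image. SynthID’s actual algorithm is proprietary and far more robust (it survives cropping, compression, and edits), so treat this strictly as a conceptual illustration:

```python
import numpy as np

KEY = 42          # secret key: only key holders can detect the mark
AMPLITUDE = 2.0   # tiny perturbation, visually imperceptible

def pattern(shape, key=KEY):
    # Key-seeded pseudorandom +/-1 pattern, reproducible at detection time.
    return np.random.default_rng(key).choice([-1.0, 1.0], size=shape)

def embed(image):
    # Add the low-amplitude pattern to the pixel values.
    return np.clip(image + AMPLITUDE * pattern(image.shape), 0, 255)

def detect(image):
    # Correlate the (zero-mean) image with the secret pattern: marked
    # images score near AMPLITUDE, unmarked ones near zero.
    score = float(np.mean((image - image.mean()) * pattern(image.shape)))
    return score > AMPLITUDE / 2, round(score, 3)

img = np.random.default_rng(7).uniform(0, 255, size=(256, 256))
print(detect(img))         # (False, score near 0)
print(detect(embed(img)))  # (True, score near 2.0)
```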
  • Google Announces SynthID Detector That Can Identify Gemini-Generated Content at Google I/O 2025

The Google I/O 2025 keynote session on Tuesday focused on new artificial intelligence (AI) updates and features. Alongside, the Mountain View-based tech giant also introduced a new tool to bring more transparency when it comes to AI-generated content. Dubbed SynthID Detector, it is an under-testing verification portal which can detect and identify multimodal AI content made using the company’s AI models. The technology can identify audio, video, image, and text, allowing individuals to easily assess whether a piece of content is human-made or synthetic.
Google Tests SynthID Detector Verification Platform
The company first unveiled SynthID in 2023 as a technology that can add an imperceptible watermark to content that cannot be removed or tampered with. In 2024, the company open-sourced the text watermarking technology to businesses and developers. The invisible watermark shows up when analysed using special software. Google is now testing a verification portal dubbed SynthID Detector that will allow individuals to quickly check whether a piece of media was generated using AI.
In a blog post, the tech giant said the portal provides transparency “in a rapidly evolving landscape of generative media.” With Veo 3 and Imagen 4, AI models that can generate hyperrealistic images and videos, the risk of deepfakes has also increased significantly. While measures such as the Coalition for Content Provenance and Authenticity (C2PA) standard have offered a way for AI companies to highlight AI-generated content, they are not completely tamper-proof. Advanced watermarking technologies enable users and institutions to protect themselves from misinformation and synthetic abusive content.
The portal is straightforward to use, Google explains. Users can upload media they suspect to have been generated using Google’s AI tools, and SynthID Detector then scans the uploaded media and detects any SynthID watermark. Afterwards, it shares the results, and if a watermark is detected, it highlights which part of the content is likely to be AI-generated. Notably, the tool does not work with non-Google AI products.

    One of the biggest advantages of SynthID is that the imperceptible watermark does not compromise the quality of the media, and at the same time, it is not possible to remove or alter it. Currently, Google is rolling out the portal to early testers, and it plans to make it available more broadly later this year. Journalists, media professionals and researchers can join the waitlist to gain early access to the SynthID Detector.

  • Google I/O 2025: Everything announced at this year’s developer conference

    Google I/O 2025, Google’s biggest developer conference of the year, takes place Tuesday and Wednesday at the Shoreline Amphitheatre in Mountain View. We’re on the ground bringing you the latest updates from the event. 
    I/O showcases product announcements from across Google’s portfolio. We’ve got plenty of news relating to Android, Chrome, Google Search, YouTube, and — of course — Google’s AI-powered chatbot, Gemini.
    Google hosted a separate event dedicated to Android updates: The Android Show. The company announced new ways to find lost Android phones and other items, additional device-level features for its Advanced Protection program, security tools to protect against scams and theft, and a new design language called Material 3 Expressive.
    Here are all the things announced at Google I/O 2025.
    Gemini Ultra
Gemini Ultra (only in the U.S. for now) delivers the “highest level of access” to Google’s AI-powered apps and services, according to Google. It’s priced at $249.99 per month and includes Google’s Veo 3 video generator, the company’s new Flow video editing app, and a powerful AI capability called Gemini 2.5 Pro Deep Think mode, which hasn’t launched yet.
    AI Ultra comes with higher limits in Google’s NotebookLM platform and Whisk, the company’s image remixing app. AI Ultra subscribers also get access to Google’s Gemini chatbot in Chrome; some “agentic” tools powered by the company’s Project Mariner tech; YouTube Premium; and 30TB of storage across Google Drive, Google Photos, and Gmail.
    Deep Think in Gemini 2.5 Pro
    Deep Think is an “enhanced” reasoning mode for Google’s flagship Gemini 2.5 Pro model. It allows the model to consider multiple answers to questions before responding, boosting its performance on certain benchmarks.
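Google hasn’t published how Deep Think works; a well-known technique with the same flavor is self-consistency, where a model samples several candidate answers and returns the most agreed-upon one. A minimal sketch, with `sample_answer` as a hypothetical stand-in for one stochastic model call:

```python
import random
from collections import Counter

def sample_answer(question: str) -> str:
    # Hypothetical stand-in for one stochastic LLM response; a real
    # implementation would call a model with a nonzero temperature.
    return random.choice(["42", "42", "42", "41"])

def answer_with_deliberation(question: str, k: int = 8) -> str:
    # Sample k candidate answers, then return the majority answer:
    # agreement across samples tends to correlate with correctness.
    candidates = [sample_answer(question) for _ in range(k)]
    best, _votes = Counter(candidates).most_common(1)[0]
    return best

print(answer_with_deliberation("What is 6 * 7?"))  # almost always "42"
```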


Google didn’t go into detail about how Deep Think works, but it could be similar to OpenAI’s o1-pro and upcoming o3-pro models, which likely use an engine to search for and synthesize the best solution to a given problem.
    Deep Think is available to “trusted testers” via the Gemini API. Google said that it’s taking additional time to conduct safety evaluations before rolling out Deep Think widely.
    Veo 3 video-generating AI model
    Google claims that Veo 3 can generate sound effects, background noises, and even dialogue to accompany the videos it creates. Veo 3 also improves upon its predecessor, Veo 2, in terms of the quality of footage it can generate, Google says.
Veo 3 is available beginning Tuesday in Google’s Gemini chatbot app for subscribers to Google’s $249.99-per-month AI Ultra plan, where it can be prompted with text or an image.
    Imagen 4 AI image generator
    According to Google, Imagen 4 is fast — faster than Imagen 3. And it’ll soon get faster. In the near future, Google plans to release a variant of Imagen 4 that’s up to 10x quicker than Imagen 3.
    Imagen 4 is capable of rendering “fine details” like fabrics, water droplets, and animal fur, according to Google. It can handle both photorealistic and abstract styles, creating images in a range of aspect ratios and up to 2K resolution.
    Both Veo 3 and Imagen 4 will be used to power Flow, the company’s AI-powered video tool geared towards filmmaking. 
A sample from Imagen 4. (Image credits: Google)
    Gemini app updates
Google announced that Gemini apps have more than 400 million monthly active users. 
    Gemini Live’s camera and screen-sharing capabilities will roll out this week to all users on iOS and Android. The feature, powered by Project Astra, lets people have near-real time verbal conversations with Gemini, while also streaming video from their smartphone’s camera or screen to the AI model.
    Google says Gemini Live will also start to integrate more deeply with its other apps in the coming weeks: It will soon be able to offer directions from Google Maps, create events in Google Calendar, and make to-do lists with Google Tasks.
    Google says it’s updating Deep Research, Gemini’s AI agent that generates thorough research reports, by allowing users to upload their own private PDFs and images.
    Stitch
    Stitch is an AI-powered tool to help people design web and mobile app front ends by generating the necessary UI elements and code. Stitch can be prompted to create app UIs with a few words or even an image, providing HTML and CSS markup for the designs it generates.
Stitch is a bit more limited in what it can do compared to some other vibe coding products, but there are a fair number of customization options.
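Google hasn’t published a Stitch API, but the interaction it describes (prompt in, front-end markup out) might look roughly like the sketch below, where `generate_ui` is an invented placeholder returning the kind of HTML/CSS such a tool produces:

```python
def generate_ui(prompt: str) -> str:
    # Invented placeholder: a real call would hit the Stitch service.
    # Here we simply return canned markup of the kind Stitch emits.
    return """<style>
  .card { max-width: 320px; margin: 40px auto; padding: 24px;
          border-radius: 12px; box-shadow: 0 2px 8px rgba(0,0,0,.15);
          font-family: sans-serif; }
  .card input, .card button { width: 100%; margin-top: 8px; padding: 10px; }
</style>
<div class="card">
  <h2>Sign in</h2>
  <input type="email" placeholder="Email">
  <input type="password" placeholder="Password">
  <button>Continue</button>
</div>"""

html = generate_ui("a minimal sign-in card with email, password, and a button")
with open("signin.html", "w") as f:
    f.write(html)  # open signin.html in a browser to preview the design
```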
    Google has also expanded access to Jules, its AI agent aimed at helping developers fix bugs in code. The tool helps developers understand complex code, create pull requests on GitHub, and handle certain backlog items and programming tasks.
    Project Mariner
    Project Mariner is Google’s experimental AI agent that browses and uses websites. Google says it has significantly updated how Project Mariner works, allowing the agent to take on nearly a dozen tasks at a time, and is now rolling it out to users.
    For example, Project Mariner users can purchase tickets to a baseball game or buy groceries online without ever visiting a third-party website. People can just chat with Google’s AI agent, and it visits websites and takes actions for them.
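Google hasn’t shared Project Mariner’s internals, but “nearly a dozen tasks at a time” is, at its core, concurrent task execution. A toy sketch of that shape, with `run_task` as an invented stand-in for an agent driving a browser:

```python
import asyncio

async def run_task(description: str) -> str:
    # Invented stand-in: a real agent would navigate pages, click
    # buttons, and fill forms here instead of sleeping.
    await asyncio.sleep(0.1)
    return f"done: {description}"

async def main() -> None:
    tasks = [
        "buy two tickets to Saturday's baseball game",
        "order this week's groceries",
        "find replacement brakes for the bike",
    ]
    # Kick off every task at once rather than one after another,
    # which is what lets an agent juggle a dozen errands in parallel.
    for result in await asyncio.gather(*(run_task(t) for t in tasks)):
        print(result)

asyncio.run(main())
```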
    Project Astra
    Google’s low latency, multimodal AI experience, Project Astra, will power an array of new experiences in Search, the Gemini AI app, and products from third-party developers. 
Project Astra was born out of Google DeepMind as a way to showcase nearly real-time, multimodal AI capabilities. The company says it’s now building Project Astra-powered smart glasses with partners including Samsung and Warby Parker, but it doesn’t have a set launch date yet. 

    AI Mode
    Google is rolling out AI Mode, the experimental Google Search feature that lets people ask complex, multi-part questions via an AI interface, to users in the U.S. this week.
    AI Mode will support the use of complex data in sports and finance queries, and it will offer “try it on” options for apparel. Search Live, which is rolling out later this summer, will let you ask questions based on what your phone’s camera is seeing in real-time. 
    Gmail is the first app to be supported with personalized context.
    Beam 3D teleconferencing
    Beam, previously called Starline, uses a combination of software and hardware, including a six-camera array and custom light field display, to let a user converse with someone as if they were in the same meeting room. An AI model converts video from the cameras, which are positioned at different angles and pointed toward the user, into a 3D rendering.
    Google’s Beam boasts “near-perfect” millimeter-level head tracking and 60fps video streaming. When used with Google Meet, Beam provides an AI-powered real-time speech translation feature that preserves the original speaker’s voice, tone, and expressions.
    And speaking of Google Meet, Google announced that Meet is getting real-time speech translation.
    More AI updates
    Google is launching Gemini in Chrome, which will give people access to a new AI browsing assistant that will help them quickly understand the context of a page and get tasks done. 
    Gemma 3n is a model designed to run “smoothly” on phones, laptops, and tablets. It’s available in preview starting Tuesday; it can handle audio, text, images, and videos, according to Google.
    The company also announced a ton of AI Workspace features coming to Gmail, Google Docs, and Google Vids. Most notably, Gmail is getting personalized smart replies and a new inbox-cleaning feature, while Vids is getting new ways to create and edit content.
    Video Overviews are coming to NotebookLM, and the company rolled out SynthID Detector, a verification portal that uses Google’s SynthID watermarking technology to help identify AI-generated content. Lyria RealTime, the AI model that powers its experimental music production app, is now available via an API.
    Wear OS 6
    Wear OS 6 brings a unified font to tiles for a cleaner app look, and Pixel Watches are getting dynamic theming that syncs app colors with watch faces. 
    The core promise of the new design reference platform is to let developers build better customization in apps along with seamless transitions. The company is releasing a design guideline for developers along with Figma design files.
    Google Play
    Google is beefing up the Play Store for Android developers with fresh tools to handle subscriptions, topic pages so users can dive into specific interests, audio samples to give folks a sneak peek into app content, and a new checkout experience to make selling add-ons smoother.
“Topic browse” pages for movies and shows will connect users to apps tied to tons of shows and movies. Plus, developers are getting dedicated pages for testing and releases, and tools to keep an eye on and improve their app rollouts. Developers using Google Play can also now halt live app releases if a critical problem pops up.
    Subscription management tools are also getting an upgrade with multi-product checkout. Devs will soon be able to offer subscription add-ons alongside main subscriptions, all under one payment.
    Android Studio
    Android Studio is integrating new AI features, including “Journeys,” an “agentic AI” capability that coincides with the release of the Gemini 2.5 Pro model. And an “Agent Mode” will be able to handle more-intricate development processes.
    Android Studio will receive new AI capabilities, including an enhanced “crash insights” feature in the App Quality Insights panel. This improvement, powered by Gemini, will analyze an app’s source code to identify potential causes of crashes and suggest fixes.
    #google #everything #announced #this #years
    Google I/O 2025: Everything announced at this year’s developer conference
    Google I/O 2025, Google’s biggest developer conference of the year, takes place Tuesday and Wednesday at the Shoreline Amphitheatre in Mountain View. We’re on the ground bringing you the latest updates from the event.  I/O showcases product announcements from across Google’s portfolio. We’ve got plenty of news relating to Android, Chrome, Google Search, YouTube, and — of course — Google’s AI-powered chatbot, Gemini. Google hosted a separate event dedicated to Android updates: The Android Show. The company announced new ways to find lost Android phones and other items, additional device-level features for its Advanced Protection program, security tools to protect against scams and theft, and a new design language called Material 3 Expressive. Here are all the things announced at Google I/O 2025. Gemini Ultra Gemini Ultradelivers the “highest level of access” to Google’s AI-powered apps and services, according to Google. It’s priced at per month and includes Google’s Veo 3 video generator, the company’s new Flow video editing app, and a powerful AI capability called Gemini 2.5 Pro Deep Think mode, which hasn’t launched yet. AI Ultra comes with higher limits in Google’s NotebookLM platform and Whisk, the company’s image remixing app. AI Ultra subscribers also get access to Google’s Gemini chatbot in Chrome; some “agentic” tools powered by the company’s Project Mariner tech; YouTube Premium; and 30TB of storage across Google Drive, Google Photos, and Gmail. Deep Think in Gemini 2.5 Pro Deep Think is an “enhanced” reasoning mode for Google’s flagship Gemini 2.5 Pro model. It allows the model to consider multiple answers to questions before responding, boosting its performance on certain benchmarks. Techcrunch event Join us at TechCrunch Sessions: AI Secure your spot for our leading AI industry event with speakers from OpenAI, Anthropic, and Cohere. For a limited time, tickets are just for an entire day of expert talks, workshops, and potent networking. Exhibit at TechCrunch Sessions: AI Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you’ve built — without the big spend. Available through May 9 or while tables last. Berkeley, CA | June 5 REGISTER NOW Google didn’t go into detail about Deep Think works, but it could be similar to OpenAI’s o1-pro and upcoming o3-pro models, which likely use an engine to search for and synthesize the best solution to a given problem. Deep Think is available to “trusted testers” via the Gemini API. Google said that it’s taking additional time to conduct safety evaluations before rolling out Deep Think widely. Veo 3 video-generating AI model Google claims that Veo 3 can generate sound effects, background noises, and even dialogue to accompany the videos it creates. Veo 3 also improves upon its predecessor, Veo 2, in terms of the quality of footage it can generate, Google says. Veo 3 is available beginning Tuesday in Google’s Gemini chatbot app for subscribers to Google’s -per-month AI Ultra plan, where it can be prompted with text or an image. Imagen 4 AI image generator According to Google, Imagen 4 is fast — faster than Imagen 3. And it’ll soon get faster. In the near future, Google plans to release a variant of Imagen 4 that’s up to 10x quicker than Imagen 3. Imagen 4 is capable of rendering “fine details” like fabrics, water droplets, and animal fur, according to Google. It can handle both photorealistic and abstract styles, creating images in a range of aspect ratios and up to 2K resolution. 
Both Veo 3 and Imagen 4 will be used to power Flow, the company’s AI-powered video tool geared towards filmmaking.  A sample from Imagen 4.Image Credits:Google Gemini app updates Google announced that Gemini apps have more than 400 monthly active users.  Gemini Live’s camera and screen-sharing capabilities will roll out this week to all users on iOS and Android. The feature, powered by Project Astra, lets people have near-real time verbal conversations with Gemini, while also streaming video from their smartphone’s camera or screen to the AI model. Google says Gemini Live will also start to integrate more deeply with its other apps in the coming weeks: It will soon be able to offer directions from Google Maps, create events in Google Calendar, and make to-do lists with Google Tasks. Google says it’s updating Deep Research, Gemini’s AI agent that generates thorough research reports, by allowing users to upload their own private PDFs and images. Stitch Stitch is an AI-powered tool to help people design web and mobile app front ends by generating the necessary UI elements and code. Stitch can be prompted to create app UIs with a few words or even an image, providing HTML and CSS markup for the designs it generates. Stitch is a bit more limited in what it can do compared to some other vibe coding products, but there’s a fair amount of customization options. Google has also expanded access to Jules, its AI agent aimed at helping developers fix bugs in code. The tool helps developers understand complex code, create pull requests on GitHub, and handle certain backlog items and programming tasks. Project Mariner Project Mariner is Google’s experimental AI agent that browses and uses websites. Google says it has significantly updated how Project Mariner works, allowing the agent to take on nearly a dozen tasks at a time, and is now rolling it out to users. For example, Project Mariner users can purchase tickets to a baseball game or buy groceries online without ever visiting a third-party website. People can just chat with Google’s AI agent, and it visits websites and takes actions for them. Project Astra Google’s low latency, multimodal AI experience, Project Astra, will power an array of new experiences in Search, the Gemini AI app, and products from third-party developers.  Project Astra was born out of Google DeepMind as a way to showcase nearly real-time, multimodal AI capabilities. The company says it’s now building those Project Astra glasses with partners including Samsung and Warby Parker, but the company doesn’t have a set launch date yet.  AI Mode Google is rolling out AI Mode, the experimental Google Search feature that lets people ask complex, multi-part questions via an AI interface, to users in the U.S. this week. AI Mode will support the use of complex data in sports and finance queries, and it will offer “try it on” options for apparel. Search Live, which is rolling out later this summer, will let you ask questions based on what your phone’s camera is seeing in real-time.  Gmail is the first app to be supported with personalized context. Beam 3D teleconferencing Beam, previously called Starline, uses a combination of software and hardware, including a six-camera array and custom light field display, to let a user converse with someone as if they were in the same meeting room. An AI model converts video from the cameras, which are positioned at different angles and pointed toward the user, into a 3D rendering. 
Google’s Beam boasts “near-perfect” millimeter-level head tracking and 60fps video streaming. When used with Google Meet, Beam provides an AI-powered real-time speech translation feature that preserves the original speaker’s voice, tone, and expressions. And speaking of Google Meet, Google announced that Meet is getting real-time speech translation. More AI updates Google is launching Gemini in Chrome, which will give people access to a new AI browsing assistant that will help them quickly understand the context of a page and get tasks done.  Gemma 3n is a model designed to run “smoothly” on phones, laptops, and tablets. It’s available in preview starting Tuesday; it can handle audio, text, images, and videos, according to Google. The company also announced a ton of AI Workspace features coming to Gmail, Google Docs, and Google Vids. Most notably, Gmail is getting personalized smart replies and a new inbox-cleaning feature, while Vids is getting new ways to create and edit content. Video Overviews are coming to NotebookLM, and the company rolled out SynthID Detector, a verification portal that uses Google’s SynthID watermarking technology to help identify AI-generated content. Lyria RealTime, the AI model that powers its experimental music production app, is now available via an API. Wear OS 6 Wear OS 6 brings a unified font to tiles for a cleaner app look, and Pixel Watches are getting dynamic theming that syncs app colors with watch faces.  The core promise of the new design reference platform is to let developers build better customization in apps along with seamless transitions. The company is releasing a design guideline for developers along with Figma design files. Image Credits:Google / Google Play Google is beefing up the Play Store for Android developers with fresh tools to handle subscriptions, topic pages so users can dive into specific interests, audio samples to give folks a sneak peek into app content, and a new checkout experience to make selling add-ons smoother. “Topic browse” pages for movies and showswill connect users to apps tied to tons of shows and movies. Plus, developers are getting dedicated pages for testing and releases, and tools to keep an eye on and improve their app rollouts. Developers using Google can also now halt live app releases if a critical problem pops up. Subscription management tools are also getting an upgrade with multi-product checkout. Devs will soon be able to offer subscription add-ons alongside main subscriptions, all under one payment. Android Studio Android Studio is integrating new AI features, including “Journeys,” an “agentic AI” capability that coincides with the release of the Gemini 2.5 Pro model. And an “Agent Mode” will be able to handle more-intricate development processes. Android Studio will receive new AI capabilities, including an enhanced “crash insights” feature in the App Quality Insights panel. This improvement, powered by Gemini, will analyze an app’s source code to identify potential causes of crashes and suggest fixes. #google #everything #announced #this #years
• Google I/O 2025: Everything announced at this year’s developer conference (TECHCRUNCH.COM)
Google I/O 2025, Google’s biggest developer conference of the year, takes place Tuesday and Wednesday at the Shoreline Amphitheatre in Mountain View. We’re on the ground bringing you the latest updates from the event. I/O showcases product announcements from across Google’s portfolio. We’ve got plenty of news relating to Android, Chrome, Google Search, YouTube, and — of course — Google’s AI-powered chatbot, Gemini.

Google hosted a separate event dedicated to Android updates: The Android Show. The company announced new ways to find lost Android phones and other items, additional device-level features for its Advanced Protection program, security tools to protect against scams and theft, and a new design language called Material 3 Expressive. Here are all the things announced at Google I/O 2025.

Gemini Ultra
Gemini Ultra (only in the U.S. for now) delivers the “highest level of access” to Google’s AI-powered apps and services, according to Google. It’s priced at $249.99 per month and includes Google’s Veo 3 video generator, the company’s new Flow video editing app, and a powerful AI capability called Gemini 2.5 Pro Deep Think mode, which hasn’t launched yet. AI Ultra comes with higher limits in Google’s NotebookLM platform and Whisk, the company’s image remixing app. AI Ultra subscribers also get access to Google’s Gemini chatbot in Chrome; some “agentic” tools powered by the company’s Project Mariner tech; YouTube Premium; and 30TB of storage across Google Drive, Google Photos, and Gmail.

Deep Think in Gemini 2.5 Pro
Deep Think is an “enhanced” reasoning mode for Google’s flagship Gemini 2.5 Pro model. It allows the model to consider multiple answers to questions before responding, boosting its performance on certain benchmarks. Google didn’t go into detail about how Deep Think works, but it could be similar to OpenAI’s o1-pro and upcoming o3-pro models, which likely search for and synthesize the best solution to a given problem. Deep Think is available to “trusted testers” via the Gemini API, and Google said it’s taking additional time to conduct safety evaluations before rolling it out widely.
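Google hasn’t published Deep Think’s mechanics, but “consider multiple answers before responding” is broadly the best-of-N technique. Here is a minimal, purely illustrative sketch; generate_candidate and score_candidate are hypothetical stand-ins for model calls, not any Google API.

```python
# Hypothetical best-of-N sketch: sample several candidate answers,
# score each one, and return the highest-scoring candidate.
# Both functions below are stubs, not Google's implementation.
import random

def generate_candidate(question: str, seed: int) -> str:
    # Placeholder for a sampled model response (nonzero temperature
    # would give each candidate some variety in a real system).
    random.seed(seed)
    return f"candidate answer #{seed} to: {question}"

def score_candidate(question: str, answer: str) -> float:
    # Placeholder for a verifier or reward model rating answer quality.
    return random.random()

def best_of_n(question: str, n: int = 8) -> str:
    candidates = [generate_candidate(question, seed) for seed in range(n)]
    return max(candidates, key=lambda a: score_candidate(question, a))

print(best_of_n("What is 17 * 24?"))
```

Real systems may instead use longer chains of thought, search, or self-critique; the sketch only illustrates the “generate many, keep the best” idea.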
Veo 3 video-generating AI model
Google claims that Veo 3 can generate sound effects, background noises, and even dialogue to accompany the videos it creates. Veo 3 also improves on its predecessor, Veo 2, in the quality of footage it can generate, Google says. Veo 3 is available beginning Tuesday in Google’s Gemini chatbot app for subscribers to Google’s $249.99-per-month AI Ultra plan, where it can be prompted with text or an image.

Imagen 4 AI image generator
According to Google, Imagen 4 is fast — faster than Imagen 3. And it’ll soon get faster: in the near future, Google plans to release a variant of Imagen 4 that’s up to 10x quicker than Imagen 3. Imagen 4 can render “fine details” like fabrics, water droplets, and animal fur, according to Google. It can handle both photorealistic and abstract styles, creating images in a range of aspect ratios at up to 2K resolution. Both Veo 3 and Imagen 4 will be used to power Flow, the company’s AI-powered video tool geared toward filmmaking. (A sample from Imagen 4. Image credits: Google)

Gemini app updates
Google announced that Gemini apps have more than 400 million monthly active users. Gemini Live’s camera and screen-sharing capabilities will roll out this week to all users on iOS and Android. The feature, powered by Project Astra, lets people have near-real-time verbal conversations with Gemini while streaming video from their smartphone’s camera or screen to the AI model. Google says Gemini Live will also start to integrate more deeply with its other apps in the coming weeks: it will soon be able to offer directions from Google Maps, create events in Google Calendar, and make to-do lists with Google Tasks. Google says it’s updating Deep Research, Gemini’s AI agent that generates thorough research reports, to let users upload their own private PDFs and images.

Stitch
Stitch is an AI-powered tool that helps people design web and mobile app front ends by generating the necessary UI elements and code. Stitch can be prompted with a few words or even an image, and it provides HTML and CSS markup for the designs it generates. Stitch is more limited than some other vibe-coding products in what it can do, but it offers a fair number of customization options. Google has also expanded access to Jules, its AI agent aimed at helping developers fix bugs in code. The tool helps developers understand complex code, create pull requests on GitHub, and handle certain backlog items and programming tasks.

Project Mariner
Project Mariner is Google’s experimental AI agent that browses and uses websites. Google says it has significantly updated how Project Mariner works, allowing the agent to take on nearly a dozen tasks at a time, and is now rolling it out to users. For example, Project Mariner users can purchase tickets to a baseball game or buy groceries online without ever visiting a third-party website; they simply chat with Google’s AI agent, and it visits websites and takes actions for them.
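Google hasn’t published a Project Mariner API, so as a rough illustration only, here is a hypothetical sketch of how an orchestrator might cap an agent at “nearly a dozen tasks at a time” using a concurrency limit; run_agent_task is a stub for the real browsing agent.

```python
# Hypothetical concurrent task runner for a browsing agent.
# run_agent_task is a stub; nothing here reflects Google's actual system.
import asyncio

async def run_agent_task(task: str) -> str:
    # Placeholder for an agent that opens pages and acts on the user's behalf.
    await asyncio.sleep(0.1)  # simulate browsing latency
    return f"done: {task}"

async def main() -> None:
    tasks = [
        "buy two tickets to Saturday's baseball game",
        "order this week's groceries",
        "compare prices for noise-cancelling headphones",
    ]
    # Cap concurrency so no more than a dozen tasks run at once.
    semaphore = asyncio.Semaphore(12)

    async def bounded(task: str) -> str:
        async with semaphore:
            return await run_agent_task(task)

    for result in await asyncio.gather(*(bounded(t) for t in tasks)):
        print(result)

asyncio.run(main())
```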
Project Astra
Google’s low-latency, multimodal AI experience, Project Astra, will power an array of new experiences in Search, the Gemini AI app, and products from third-party developers. Project Astra was born out of Google DeepMind as a way to showcase nearly real-time, multimodal AI capabilities. The company says it’s now building Project Astra glasses with partners including Samsung and Warby Parker, but it doesn’t have a set launch date yet.

AI Mode
Google is rolling out AI Mode, the experimental Google Search feature that lets people ask complex, multi-part questions via an AI interface, to users in the U.S. this week. AI Mode will support the use of complex data in sports and finance queries, and it will offer “try it on” options for apparel. Search Live, which is rolling out later this summer, will let you ask questions based on what your phone’s camera is seeing in real time. Gmail is the first app to be supported with personalized context.

Beam 3D teleconferencing
Beam, previously called Starline, uses a combination of software and hardware, including a six-camera array and a custom light-field display, to let a user converse with someone as if they were in the same meeting room. An AI model converts video from the cameras, which are positioned at different angles and pointed toward the user, into a 3D rendering. Google says Beam offers “near-perfect” millimeter-level head tracking and 60fps video streaming. When used with Google Meet, Beam provides AI-powered real-time speech translation that preserves the original speaker’s voice, tone, and expressions. And speaking of Google Meet: Meet itself is also getting real-time speech translation.

More AI updates
Google is launching Gemini in Chrome, which will give people access to a new AI browsing assistant that helps them quickly understand the context of a page and get tasks done. Gemma 3n is a model designed to run “smoothly” on phones, laptops, and tablets; it’s available in preview starting Tuesday and can handle audio, text, images, and video, according to Google. The company also announced a ton of AI Workspace features coming to Gmail, Google Docs, and Google Vids. Most notably, Gmail is getting personalized smart replies and a new inbox-cleaning feature, while Vids is getting new ways to create and edit content. Video Overviews are coming to NotebookLM, and the company rolled out SynthID Detector, a verification portal that uses Google’s SynthID watermarking technology to help identify AI-generated content. Lyria RealTime, the AI model that powers Google’s experimental music production app, is now available via an API.

Wear OS 6
Wear OS 6 brings a unified font to tiles for a cleaner app look, and Pixel Watches are getting dynamic theming that syncs app colors with watch faces. The core promise of the new design reference platform is to let developers build richer customization into apps along with seamless transitions. The company is releasing design guidelines for developers along with Figma design files.

Google Play
(Image credits: Google / Google Play) Google is beefing up the Play Store for Android developers with fresh tools to handle subscriptions, topic pages so users can dive into specific interests, audio samples to give folks a sneak peek into app content, and a new checkout experience to make selling add-ons smoother. “Topic browse” pages for movies and shows (U.S. only for now) will connect users to apps tied to tons of shows and movies. Developers are also getting dedicated pages for testing and releases, plus tools to monitor and improve their app rollouts. Developers can now halt live app releases if a critical problem pops up. Subscription management tools are also getting an upgrade with multi-product checkout: devs will soon be able to offer subscription add-ons alongside main subscriptions, all under one payment.

Android Studio
Android Studio is integrating new AI features, including “Journeys,” an “agentic AI” capability that coincides with the release of the Gemini 2.5 Pro model, and an “Agent Mode” that will be able to handle more intricate development processes. Android Studio will also get an enhanced “crash insights” feature in the App Quality Insights panel: powered by Gemini, it analyzes an app’s source code to identify potential causes of crashes and suggest fixes.