• Dispatch offers something new for superhero video games — engaging deskwork

    While we’ve had plenty of superhero games come out over the past decade and a half (and I’m always down for more), most have either been open-world adventures or fighting games. I’m as excited as anyone for the upcoming Marvel Tōkon and Invincible VS, but I’m also ready for a little something different. That’s where Dispatch from AdHoc Studio comes in.

    Dispatch is a game made for people who enjoy watching a rerun of The Office as a palate cleanser after the bloody battles of Invincible. So, me. You’re cast as Robert Robertson, the former superhero known as Mecha Man. He has to step away from frontline superheroics as the mech suit he relied on was destroyed in battle. Needing a job, he starts work at a dispatch center for superheroes, and the demo takes you through a small, 30-minute chunk of his first day.

    You’ll notice Dispatch’s crude humor early on. The first thing you can do in Dispatch is give a colleague a “bro fist” at a urinal, and the juvenile jokes don’t stop there. Middle school boys are going to love it, though I’d be lying if I said a few of the jokes didn’t get chuckles from me.

    Another of Robertson’s co-workers, who also used to be a superhero until his powers caused him to rapidly age, introduces Robertson’s team of misfit heroes, though that term should be used loosely. He notes they’re a “motley crew of dangerous fuck-ups” as Robertson examines their files, each with a mugshot and rap sheet. Robertson isn’t in charge of the Avengers — he’s leading a D-List Suicide Squad. The cast, however, is full of A-listers: Laura Bailey, Matthew Mercer, Aaron Paul, and Jeffrey Wright are among those lending their voices to Dispatch.

    Much like The Boys, Dispatch plays with the idea of the corporatization of superheroes (though without the satire of and parallels to modern-day politics). These heroes aren’t a lone Spider-Man swinging through Manhattan on patrol — they’re employees waiting for an assignment. Gameplay consists of matching the right (or perhaps “good enough”) hero to the job. Some assignments I saw in the demo included breaking up a robbery, catching a 12-year-old thief, and grabbing a kid’s balloon from a tree while also making sure the kid didn’t cry. Seeing as how one of your misfits is a literal bat man and another looks like a tiefling, you have to choose wisely.

    The real draw of Dispatch for me isn’t the point-and-click assignment gameplay, but rather the choice-based dialogue. It’s developed by AdHoc Studio, which was formed in 2018 by former developers who had worked on Telltale titles like The Wolf Among Us, The Walking Dead, and Tales from the Borderlands, and you can easily see the throughline from those titles to Dispatch. At various points, you have a limited time to select Robertson’s dialogue, and occasionally a pop-up saying a character “will remember that” appears. Whether Robertson’s choices will actually carry consequences or influence his relationships with others remains to be seen, though I have no doubt those choices will be fun to make.

    After its reveal at The Game Awards six months ago, Dispatch will be coming to Windows PC and unspecified consoles sometime this year. You can check out its demo now on Steam.
  • Google’s New AI Tool Generates Convincing Deepfakes of Riots, Conflict, and Election Fraud

    Google's recently launched AI video tool can generate realistic clips that contain misleading or inflammatory information about news events, according to a TIME analysis and several tech watchdogs.

    TIME was able to use Veo 3 to create realistic videos, including a Pakistani crowd setting fire to a Hindu temple; Chinese researchers handling a bat in a wet lab; an election worker shredding ballots; and Palestinians gratefully accepting U.S. aid in Gaza. While each of these videos contained some noticeable inaccuracies, several experts told TIME that if shared on social media with a misleading caption in the heat of a breaking news event, these videos could conceivably fuel social unrest or violence.

    While text-to-video generators have existed for several years, Veo 3 marks a significant jump forward, creating AI clips that are nearly indistinguishable from real ones. Unlike the outputs of previous video generators like OpenAI’s Sora, Veo 3 videos can include dialogue, soundtracks and sound effects. They largely follow the rules of physics, and lack the telltale flaws of past AI-generated imagery.

    Users have had a field day with the tool, creating short films about plastic babies, pharma ads, and man-on-the-street interviews. But experts worry that tools like Veo 3 will have a much more dangerous effect: turbocharging the spread of misinformation and propaganda, and making it even harder to tell fiction from reality. Social media is already flooded with AI-generated content about politicians. In the first week of Veo 3’s release, online users posted fake news segments in multiple languages, including an anchor announcing the death of J.K. Rowling, as well as fake political news conferences.

    “The risks from deepfakes and synthetic media have been well known and obvious for years, and the fact the tech industry can’t even protect against such well-understood, obvious risks is a clear warning sign that they are not responsible enough to handle even more dangerous, uncontrolled AI and AGI,” says Connor Leahy, the CEO of Conjecture, an AI safety company. “The fact that such blatant irresponsible behavior remains completely unregulated and unpunished will have predictably terrible consequences for innocent people around the globe.”

    Days after Veo 3’s release, a car plowed through a crowd in Liverpool, England, injuring more than 70 people. Police swiftly clarified that the driver was white, to preempt racist speculation of migrant involvement. (Last summer, false reports that a knife attacker was an undocumented Muslim migrant sparked riots in several cities.) Days later, Veo 3 obligingly generated a video of a similar scene, showing police surrounding a car that had just crashed—and a Black driver exiting the vehicle. TIME generated the video with the following prompt: “A video of a stationary car surrounded by police in Liverpool, surrounded by trash. Aftermath of a car crash. There are people running away from the car. A man with brown skin is the driver, who slowly exits the car as police arrive- he is arrested. The video is shot from above - the window of a building. There are screams in the background.”

    After TIME contacted Google about these videos, the company said it would begin adding a visible watermark to videos generated with Veo 3. The watermark now appears on videos generated by the tool. However, it is very small and could easily be cropped out with video-editing software.

    In a statement, a Google spokesperson said: “Veo 3 has proved hugely popular since its launch. We're committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools.” Videos generated by Veo 3 have always contained an invisible watermark known as SynthID, the spokesperson said. Google is currently working on a tool called SynthID Detector that would allow anyone to upload a video to check whether it contains such a watermark, the spokesperson added. However, this tool is not yet publicly available.

    Attempted safeguards

    Veo 3 is available for $249 a month to Google AI Ultra subscribers in countries including the United States and United Kingdom. There were plenty of prompts that Veo 3 did block TIME from creating, especially related to migrants or violence. When TIME asked the model to create footage of a fictional hurricane, it wrote that such a video went against its safety guidelines, and “could be misinterpreted as real and cause unnecessary panic or confusion.” The model generally refused to generate videos of recognizable public figures, including President Trump and Elon Musk. It refused to create a video of Anthony Fauci saying that COVID was a hoax perpetrated by the U.S. government.

    Veo’s website states that it blocks “harmful requests and results.” The model’s documentation says it underwent pre-release red-teaming, in which testers attempted to elicit harmful outputs from the tool. Additional safeguards were then put in place, including filters on its outputs.

    A technical paper released by Google alongside Veo 3 downplays the misinformation risks that the model might pose. Veo 3 is bad at creating text, and is “generally prone to small hallucinations that mark videos as clearly fake,” it says. “Second, Veo 3 has a bias for generating cinematic footage, with frequent camera cuts and dramatic camera angles – making it difficult to generate realistic coercive videos, which would be of a lower production quality.”

    However, minimal prompting did lead to the creation of provocative videos. One showed a man wearing an LGBT rainbow badge pulling envelopes out of a ballot box and feeding them into a paper shredder. (Veo 3 titled the file “Election Fraud Video.”) Other videos generated in response to prompts by TIME included a dirty factory filled with workers scooping infant formula with their bare hands; an e-bike bursting into flames on a New York City street; and Houthi rebels angrily seizing an American flag.

    Some users have been able to take misleading videos even further. Internet researcher Henk van Ess created a fabricated political scandal using Veo 3 by editing together short video clips into a fake newsreel that suggested a small-town school would be replaced by a yacht manufacturer. “If I can create one convincing fake story in 28 minutes, imagine what dedicated bad actors can produce,” he wrote on Substack. “We're talking about the potential for dozens of fabricated scandals per day.”

    “Companies need to be creating mechanisms to distinguish between authentic and synthetic imagery right now,” says Margaret Mitchell, chief AI ethics scientist at Hugging Face. “The benefits of this kind of power—being able to generate realistic life scenes—might include making it possible for people to make their own movies, or to help people via role-playing through stressful situations,” she says. “The potential risks include making it super easy to create intense propaganda that manipulatively enrages masses of people, or confirms their biases so as to further propagate discrimination—and bloodshed.”

    In the past, there were surefire ways of telling that a video was AI-generated—perhaps a person might have six fingers, or their face might transform between the beginning of the video and the end. But as models improve, those signs are becoming increasingly rare. (A video depicting how AIs have rendered Will Smith eating spaghetti shows how far the technology has come in the last three years.) For now, Veo 3 will only generate clips up to eight seconds long, meaning that if a video contains shots that linger for longer, it’s a sign it could be genuine. But this limitation is not likely to last for long.

    Eroding trust online

    Cybersecurity experts warn that advanced AI video tools will allow attackers to impersonate executives, vendors or employees at scale, convincing victims to relinquish important data. Nina Brown, a Syracuse University professor who specializes in the intersection of media law and technology, says that while there are other large potential harms—including election interference and the spread of nonconsensual sexually explicit imagery—arguably most concerning is the erosion of collective online trust. “There are smaller harms that cumulatively have this effect of, ‘can anybody trust what they see?’” she says. “That’s the biggest danger.”

    Already, accusations that real videos are AI-generated have gone viral online. One post on X, which received 2.4 million views, accused a Daily Wire journalist of sharing an AI-generated video of an aid distribution site in Gaza. A journalist at the BBC later confirmed that the video was authentic. Conversely, an AI-generated video of an “emotional support kangaroo” trying to board an airplane went viral and was widely accepted as real by social media users.

    Veo 3 and other advanced deepfake tools will also likely spur novel legal clashes. Issues around copyright have flared up, with AI labs including Google being sued by artists for allegedly training on their copyrighted content without authorization. (DeepMind told TechCrunch that Google models like Veo "may" be trained on YouTube material.) Celebrities who are subjected to hyper-realistic deepfakes have some legal protections thanks to “right of publicity” statutes, but those vary drastically from state to state. In April, Congress passed the Take It Down Act, which criminalizes non-consensual deepfake porn and requires platforms to take down such material.

    Industry watchdogs argue that additional regulation is necessary to mitigate the spread of deepfake misinformation. “Existing technical safeguards implemented by technology companies such as 'safety classifiers' are proving insufficient to stop harmful images and videos from being generated,” says Julia Smakman, a researcher at the Ada Lovelace Institute. “As of now, the only way to effectively prevent deepfake videos from being used to spread misinformation online is to restrict access to models that can generate them, and to pass laws that require those models to meet safety requirements that meaningfully prevent misuse.”
  • Use Google’s Flow TV If You Actually Want to Watch an Endless Stream of AI Videos

    Even if you don't want to dive in and create AI videos using the latest Veo 3 model released by Google, you can sit back and marvel at (or be petrified by) the work of others: Flow TV is a new lean-back experience that lets you click through a seemingly endless carousel of AI-generated clips.

    Unlike the Flow video creator that is needed to create these videos, you don't need to pay Google a subscription fee to use Flow TV, and you don't even need to be signed into a Google account. It's a showcase for the best AI clips produced by Veo, though for now, it's limited to the older Veo 2 model rather than Veo 3.

    Google hasn't said much about the creators behind the videos in Flow TV, but it is described as an "ever-growing showcase" of videos, so presumably there are new clips being added regularly behind the scenes—and eventually we might see Veo 3 clips mixed in, the kind of clips that have already been fooling people online.

    Ready to take a break from content made by flesh and blood humans and see what AI is currently cooking up? Point your browser towards the Flow TV channel list.

    Channel hopping

    Flow TV gives you multiple channels to choose from.
    Credit: Lifehacker

    The channel list gives you some idea of what's available on Flow TV: We've got channels like Window Seat (views from train carriages), Unnatural (nature with an AI twist), and Zoo Break (animal adventures). Some of these play to the strengths of AI video, including It's All Yarn (self-explanatory) and Dream Factory (general weirdness).

    And do expect to be freaked out pretty regularly, by the way: Flow TV is not ideal if you're easily unsettled or unnerved, because these clips move quickly, and feature content that goes way beyond the norm. I didn't come across anything really shocking or disturbing, but this is AI—and Flow TV doesn't particularly focus on realism.

    There's also a Shuffle All option in addition to the individual channels, and whichever route you pick through the clips, there's a lot to watch—I wasn't able to get to the end of it all. You can also switch to the Short Films tab at the top of the channel list to see three longer pieces of work made by acknowledged creators.

    Whichever route you take through this content, you get playback controls underneath the current clip: Controls for pausing playback, jumping forwards and backwards between clips, looping videos, and switching to full screen mode. What you can't do, however, is skip forwards or backwards through a clip, YouTube-style.

    To the right of the control panel you can switch between seeing one video at a time, and seeing a whole grid of options, and further to the right you've got a channel switcher. Click the TV icon to the left of the control panel to see all the available channels again, and the Flow TV button in the top-left corner to jump to something random. There's also a search box up at the top to help you look for something specific.

    Prompt engineering

    Expect the unexpected from AI video.
    Credit: Lifehacker

    While you're watching the videos, you'll see a Show Prompt toggle switch underneath each clip. Turn this switch on to see the prompt used to make the video you're watching, together with the AI model deployed (which is always Veo 2, at least for now). It's an interesting look behind the scenes at how each clip was made.

    Here's an example: "First person view. Follow me into through this secret door into my magic world. Documentary. Soft natural light. 90s." As you can see, Veo just lets you throw in whatever ideas or camera directions or style guidelines come to mind, without worrying too much about formal structure (or grammar).

    Revealing the prompts lets you see what the AI got right and what it didn't, and how the models interpret different instructions. Of course, it always makes the most generic picks from prompts, based on whatever dominates its training data: Generic swans, generic buses, generic cars, generic people, generic camera angles and movements. If you need something out of the ordinary from AI video, you need to ask for it specifically.

    Look closer, and the usual telltale signs of AI generation are here, from the way most clips use a similar pacing, scene length, and shot construction, to the weird physics that are constantly confusing (and are sometimes deliberately used for effect). AI video is getting better fast, but it's a much more difficult challenge than text or images represent.

    For now, Flow TV is a diverting demo gallery of where AI video is at: what it does well and where it still falls short. On this occasion, I'll leave aside the issues of how much energy was used to generate all of these clips, or what kinds of videos the Veo models might have been trained on, but it might be worth bookmarking the Flow TV channel directory if you want to stay up to speed with the state of AI filmmaking.
  • Pick up these helpful tips on advanced profiling

    In June, we hosted a webinar featuring experts from Arm, the Unity Accelerate Solutions team, and SYBO Games, the creator of Subway Surfers. The resulting roundtable focused on profiling tips and strategies for mobile games, the business implications of poor performance, and how SYBO shipped a hit mobile game with 3 billion downloads to date. Let’s dive into some of the follow-up questions we didn’t have time to cover during the webinar. You can also watch the full recording.

    We hear a lot about the Unity Profiler in relation to CPU profiling, but not as much about the Profile Analyzer. Are there any plans to improve it or integrate it into the core Profiler toolset?

    There are no immediate plans to integrate the Profile Analyzer into the core Editor, but this might change as our profiling tools evolve.

    Does Unity have any plans to add an option for the GPU Usage Profiler module to appear in percentages like it does in milliseconds?

    That’s a great idea, and while we can’t say yes or no at the time of this blog post, it’s a request that’s been shared with our R&D teams for possible future consideration.

    Do you have plans for tackling “Application Not Responding” (ANR) errors that are reported by the Google Play store and don’t contain any stack trace?

    Although we don’t have specific plans for tracking ANR without stack trace at the moment, we will consider it for the future roadmap.

    How can I share my feedback to help influence the future development of Unity’s profiling tools?

    You can keep track of upcoming features and share feedback via our product board and forums. We are also conducting a survey to learn more about our customers’ experience with the profiling tools. If you’ve used profiling tools before, or are working on a project that requires optimization, we would love to get your input. The survey is designed to take no more than 5–10 minutes to complete. By participating, you’ll also have the chance to opt into a follow-up interview to share more feedback directly with the development team, including the opportunity to discuss potential prototypes of new features.

    Is there a good rule for determining what counts as a viable low-end device to target?

    A rule of thumb we hear from many Unity game developers is to target devices that are five years old at the time of your game’s release, as this helps to ensure the largest user base. But we also see teams reducing their release-date scope to devices that are only three years old if they’re aiming for higher graphical quality. A visually complex 3D application, for example, will have higher device requirements than a simple 2D application. This approach allows for a higher “min spec,” but reduces the size of the initial install base. It’s essentially a business decision: Will it cost more to develop for and support old devices than what your game will earn running on them?

    Sometimes the technical requirements of your game will dictate your minimum target specifications. So if your game uses up large amounts of texture memory even after optimization, but you absolutely cannot reduce quality or resolution, that probably rules out running on phones with insufficient memory. If your rendering solution requires compute shaders, that likely rules out devices with drivers that can’t support OpenGL ES 3.1, Metal, or Vulkan.

    It’s a good idea to look at market data for your priority target audience. For instance, mobile device specs can vary a lot between countries and regions. Remember to define some target “budgets” so that benchmarking goals for what’s acceptable are set prior to choosing low-end devices for testing. For live service games that will run for years, you’ll need to monitor their compatibility continuously and adapt over time based on both your actual user base and current devices on the market.
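    To make the point about technical requirements dictating your min spec more concrete, here is a minimal sketch of a runtime capability gate in Unity C#. The MinSpecCheck name and the memory threshold are assumptions for illustration, not recommendations from the webinar; a real project would tune them against its own texture and memory budgets and the devices it actually tests on.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical helper: rejects devices that cannot meet the game's hard requirements.
public static class MinSpecCheck
{
    // Illustrative threshold only; tune against your own memory budget.
    const int MinSystemMemoryMB = 2048;

    public static bool MeetsMinSpec()
    {
        // A compute-shader-based renderer rules out drivers limited to OpenGL ES 3.0 and below.
        if (!SystemInfo.supportsComputeShaders)
            return false;

        // Require one of the modern graphics APIs mentioned above.
        var api = SystemInfo.graphicsDeviceType;
        bool modernApi = api == GraphicsDeviceType.Vulkan
                      || api == GraphicsDeviceType.Metal
                      || api == GraphicsDeviceType.OpenGLES3;
        if (!modernApi)
            return false;

        // Devices short on RAM will struggle with a large texture footprint.
        return SystemInfo.systemMemorySize >= MinSystemMemoryMB;
    }
}
```

    A gate like this pairs naturally with the coarse capability binning discussed in the answers below, where live analytics and per-model opt-outs patch the cases a simple heuristic gets wrong.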
Remember to define some target "budgets" so that benchmarking goals for what's acceptable are set prior to choosing low-end devices for testing. For live service games that will run for years, you'll need to monitor their compatibility continuously and adapt over time based on both your actual user base and current devices on the market.

Is it enough to test performance exclusively on low-end devices to ensure that the game will also run smoothly on high-end ones?
It might be, if you have a uniform workload on all devices. However, you still need to consider variations across hardware from different vendors and/or driver versions. It's common for graphically rich games to have tiers of graphical fidelity – the higher the visual tier, the more resources required on capable devices. This tier selection might be automatic, but increasingly, users themselves can control the choice via a graphical settings menu. For this style of development, you'll need to test at least one "min spec" target device per feature/workload tier that your game supports. If your game detects the capabilities of the device it's running on and adapts the graphics output as needed, it could perform differently on higher-end devices. So be sure to test on a range of devices with the different quality levels you've programmed the title for.

Note: In this section, we've specified whether the expert answering is from Arm or Unity.

Do you have advice for detecting the power range of a device to support automatic quality settings, particularly for mobile?
Arm: We typically see developers doing coarse capability binning based on CPU and GPU models, as well as the GPU shader core count. This is never perfect, but it's "about right." A lot of studios collect live analytics from deployed devices, so they can supplement the automated binning with device-specific opt-in/opt-out to work around point issues where the capability binning isn't accurate enough. Related to the previous question, for graphically rich content, we see a trend in mobile toward settings menus where users can choose to turn effects on or off, thereby allowing them to make performance choices that suit their preferences.
Unity: Device memory and screen resolution are also important factors for choosing quality settings. Regarding textures, developers should be aware that Render Textures used by effects or post-processing can become a problem on devices with high-resolution screens, but without a lot of memory to match.

Given the breadth of configurations available (CPU, GPU, SOC, memory, mobile, desktop, console, etc.), can you suggest a way to categorize devices to reduce the number of tiers you need to optimize for?
Arm: The number of tiers your team optimizes for is really a game design and business decision, and should be based on how important pushing visual quality is to the value proposition of the game. For some genres it might not matter at all, but for others, users will have high expectations for the visual fidelity.

Does the texture memory limit differ among models and brands of Android devices that have the same amount of total system memory?
Arm: To a first-order approximation, we would expect the total amount of texture memory to be similar across vendors and hardware generations. There will be minor differences caused by memory layout and alignment restrictions, so it won't be exactly the same.

Is it CPU or GPU usage that contributes the most to overheating on mobile devices?
Arm: It's entirely content dependent.
The CPU, GPU, or the DRAM can individually overheat a high-end device if pushed hard enough, even if you ignore the other two completely. The exact balance will vary based on the workload you are running.

What tips can you give for profiling on devices that have thermal throttling? What margin would you target to avoid thermal throttling (i.e., targeting 20 ms instead of 33 ms)?
Arm: Optimizing for frame time can be misleading on Android because devices will constantly adjust frequency to optimize energy usage, making frame time an incomplete measure by itself. Preferably, monitor CPU and GPU cycles per frame, as well as GPU memory bandwidth per frame, to get some value that is independent of frequency. The cycle target you need will depend on each device's chip design, so you'll need to experiment. Any optimization helps when it comes to managing power consumption, even if it doesn't directly improve frame rate. For example, reducing CPU cycles will reduce thermal load even if the CPU isn't the critical path for your game. Beyond that, optimizing memory bandwidth is one of the biggest savings you can make. Accessing DRAM is orders of magnitude more expensive than accessing local data on-chip, so watch your triangle budget and keep data types in memory as small as possible.
Unity: To limit the impact of CPU clock frequency on the performance metrics, we recommend trying to run at a consistent temperature. There are a couple of approaches for doing this:
Run warm: Run the device for a while so that it reaches a stable warm state before profiling.
Run cool: Leave the device to cool for a while before profiling. This strategy can eliminate confusion and inconsistency in profiling sessions by taking captures that are unlikely to be thermally throttled. However, such captures will always represent the best case performance a user will see rather than what they might actually see after long play sessions. This strategy can also delay the time between profiling runs due to the need to wait for the cooling period first.
With some hardware, you can fix the clock frequency for more stable performance metrics. However, this is not representative of most devices your users will be using, and will not report accurate real-world performance. Basically, it's a handy technique if you are using a continuous integration setup to check for performance changes in your codebase over time.

Any thoughts on Vulkan vs OpenGL ES 3 on Android? Vulkan is generally slower performance-wise. At the same time, many devices lack support for various features on ES3.
Arm: Recent drivers and engine builds have vastly improved the quality of the Vulkan implementations available, so for an equivalent workload there shouldn't be a performance gap between OpenGL ES and Vulkan (if there is, please let us know). The switch to Vulkan is picking up speed and we expect to see more people choosing Vulkan by default over the next year or two. If you have counterexamples of areas where Vulkan isn't performing well, please get in touch with us. We'd love to hear from you.

What tools can we use to monitor memory bandwidth (RAM <-> VRAM)?
Arm: The Streamline Profiler in Arm Mobile Studio can measure bandwidth between Mali GPUs and the external DRAM (or system cache).

Should you split graphical assets by device tiers or device resolution?
Arm: You can get the best result by retuning assets, but it's expensive to do (a cheaper runtime fallback is sketched below).
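As a hedged illustration of that cheaper route, the sketch below drops the frame rate, render resolution, and quality level at runtime instead of re-authoring assets per tier. The LowTierFallback name and every value in it are hypothetical placeholders; tune them against captures from real target devices.

    using UnityEngine;

    // Hypothetical low-tier fallback: cheaper than retuning assets per device tier.
    public static class LowTierFallback
    {
        public static void Apply()
        {
            // Cap the frame rate to reduce thermal load on weaker chips.
            Application.targetFrameRate = 30;

            // Render at roughly two-thirds of the native resolution.
            Screen.SetResolution(Screen.width * 2 / 3, Screen.height * 2 / 3, FullScreenMode.FullScreenWindow);

            // Switch to a quality level that has optional post-processing disabled
            // (assumes such a level exists in Project Settings > Quality).
            QualitySettings.SetQualityLevel(0, applyExpensiveChanges: true);
        }
    }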
Start by reducing resolution and frame rate, or disabling some optional post-processing effects.

What is the best way to record performance metric statistics from our development build?
Arm: You can use the Performance Advisor tool in Arm Mobile Studio to automatically capture and export performance metrics from the Mali GPUs, although this comes with a caveat: The generation of JSON reports requires a Professional Edition license.
Unity: The Unity Profiler can be used to view common rendering metrics, such as vertex and triangle counts in the Rendering module. Plus you can include custom packages, such as System Metrics Mali, in your project to add low-level Mali GPU metrics to the Unity Profiler.

What are your recommendations for profiling shader code?
You need a GPU Profiler to do this. The one you choose depends on your target platform. For example, on iOS devices, Xcode's GPU Profiler includes the Shader Profiler, which breaks down shader performance on a line-by-line basis. Arm Mobile Studio supports Mali Offline Compiler, a static analysis tool for shader code and compute kernels. This tool provides some overall performance estimates and recommendations for the Arm Mali GPU family.

When profiling, the general rule is to test your game or app on the target device(s). With the industry moving toward more types of chipsets (Apple M1, Arm, x86 by Intel, AMD, etc.), how can developers profile and pinpoint issues on the many different hardware configurations in a reasonable amount of time?
The proliferation of chipsets is primarily a concern on desktop platforms. There are a limited number of hardware architectures to test for console games. On mobile, there's Apple's A Series for iOS devices and a range of Arm and Qualcomm architectures for Android – but selecting a manageable list of representative mobile devices is pretty straightforward. On desktop it's trickier because there's a wide range of available chipsets and architectures, and buying Macs and PCs for testing can be expensive. Our best advice is to do what you can. No studio has infinite time and money for testing. We generally wouldn't expect any huge surprises when comparing performance between an Intel x86 CPU and a similarly specced AMD processor, for instance. As long as the game performs comfortably on your minimum spec machine, you should be reasonably confident about other machines. It's also worth using analytics, such as Unity Analytics, to record frame rates, system specs, and player options settings to identify hotspots or problematic configurations. We're seeing more studios move to using at least some level of automated testing for regular on-device profiling, with summary stats published where the whole team can keep an eye on performance across the range of target devices. With well-designed test scenes, this can usually be made into a mechanical process that's suited for automation, so you don't need an experienced technical artist or QA tester running builds through the process manually.

Do you ever see performance issues on high-end devices that don't occur on the low-end ones?
It's uncommon, but we have seen it. Often the issue lies in how the project is configured, such as with the use of fancy shaders and high-res textures on high-end devices, which can put extra pressure on the GPU or memory.
Sometimes a high-end mobile device or console will use a high-res phone screen or 4K TV output as a selling point but not necessarily have enough GPU power or memory to live up to that promise without further optimization. If you make use of the current versions of the C# Job System, verify whether there's a job scheduling overhead that scales with the number of worker threads, which, in turn, scales with the number of CPU cores. This can result in code that runs more slowly on a 64+ core Threadripper™ than on a modest 4-core or 8-core CPU. This issue will be addressed in future versions of Unity, but in the meantime, try limiting the number of job worker threads by setting JobsUtility.JobWorkerCount.

What are some pointers for setting a good frame budget?
Most of the time when we talk about frame budgets, we're talking about the overall time budget for the frame. You calculate 1000/target frames per second (fps) to get your frame budget: 33.33 ms for 30 fps, 16.66 ms for 60 fps, 8.33 ms for 120 Hz, etc. Reduce that number by around 35% if you're on mobile to give the chips a chance to cool down between each frame. Dividing the budget up to get specific sub-budgets for different features and/or systems is probably overkill except for projects with very specific, predictable systems, or those that make heavy use of Time Slicing. Generally, profiling is the process of finding the biggest bottlenecks – and therefore, the biggest potential performance gains. So rather than saying, "Physics is taking 1.2 ms when the budget only allows for 1 ms," you might look at a frame and say, "Rendering is taking 6 ms, making it the biggest main thread CPU cost in the frame. How can we reduce that?"

It seems like profiling early and often is still not common knowledge. What are your thoughts on why this might be the case?
Building, releasing, promoting, and managing a game is difficult work on multiple fronts. So there will always be numerous priorities vying for a developer's attention, and profiling can fall by the wayside. They know it's something they should do, but perhaps they're unfamiliar with the tools and don't feel like they have time to learn. Or, they don't know how to fit profiling into their workflows because they're pushed toward completing features rather than performance optimization. Just as with bugs and technical debt, performance issues are cheaper and less risky to address early on, rather than later in a project's development cycle. Our focus is on helping to demystify profiling tools and techniques for those developers who are unfamiliar with them. That's what the profiling e-book and its related blog post and webinar aim to support.

Is there a way to exclude certain methods from instrumentation or include only specific methods when using Deep Profiling in the Unity Profiler? When using a lot of async/await tasks, we create large stack traces, but how can we avoid slowing down both the client and the Profiler when Deep Profiling?
You can enable Allocation call stacks to see the full call stacks that lead to managed allocations (shown as magenta in the Unity CPU Profiler Timeline view). Additionally, you can – and should! – manually instrument long-running methods and processes by sprinkling ProfilerMarkers throughout your code (a minimal marker is sketched below). There's currently no way to automatically enable Deep Profiling or disable profiling entirely in specific parts of your application.
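Here is a minimal sketch of that manual instrumentation. The EnemySpawner class and marker name are hypothetical placeholders; the ProfilerMarker API itself lives in the Unity.Profiling namespace.

    using Unity.Profiling;
    using UnityEngine;

    public class EnemySpawner : MonoBehaviour
    {
        // The string is the label this sample appears under in the CPU Profiler.
        static readonly ProfilerMarker s_SpawnMarker = new ProfilerMarker("EnemySpawner.SpawnWave");

        void Update()
        {
            // Auto() opens the sample and closes it when the using scope ends,
            // so the whole body is attributed to the marker without Deep Profiling.
            using (s_SpawnMarker.Auto())
            {
                SpawnWave();
            }
        }

        void SpawnWave()
        {
            // Long-running or allocation-heavy work you want visible in captures.
        }
    }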
But manually adding ProfilerMarkers and enabling Allocation call stacks when required can help you dig down into problem areas without having to resort to Deep Profiling. As of Unity 2022.2, you can also use our IgnoredByDeepProfilerAttribute to prevent the Unity Profiler from capturing method calls. Just add the IgnoredByDeepProfiler attribute to classes, structures, and methods.

Where can I find more information on Deep Profiling in Unity?
Deep Profiling is covered in our Profiler documentation. Then there's the most in-depth single resource for profiling information, the Ultimate Guide to profiling Unity games e-book, which links to relevant documentation and other resources throughout.

Is it correct that Deep Profiling is only useful for the Allocations Profiler and that it skews results so much that it's not useful for finding hitches in the game?
Deep Profiling can be used to find the specific causes of managed allocations, although Allocation call stacks can do the same thing with less overhead overall. At the same time, Deep Profiling can be helpful for quickly investigating why one specific ProfilerMarker seems to be taking so long, as it's more convenient to enable than to add numerous ProfilerMarkers to your scripts and rebuild your game. But yes, it does skew performance quite heavily and so shouldn't be enabled for general profiling.

Is VSync worth setting to every VBlank? My mobile game runs at a very low fps when it's disabled.
Mobile devices force VSync to be enabled at a driver/hardware level, so disabling it in Unity's Quality settings shouldn't make any difference on those platforms. We haven't heard of a case where disabling VSync negatively affects performance. Try taking a profile capture with VSync enabled, along with another capture of the same scene but with VSync disabled. Then compare the captures using Profile Analyzer to try to understand why the performance is so different.

How can you determine if the main thread is waiting for the GPU and not the other way around?
This is covered in the Ultimate Guide to profiling Unity games. You can also get more information in the blog post, Detecting performance bottlenecks with Unity Frame Timing Manager. Generally speaking, the telltale sign is that the main thread waits for the Render thread while the Render thread waits for the GPU. The specific marker names will differ depending on your target platform and graphics API, but you should look out for markers with names such as "PresentFrame" or "WaitForPresent."

Is there a solid process for finding memory leaks in profiling?
Use the Memory Profiler to compare memory snapshots and check for leaks. For example, you can take a snapshot in your main menu, enter your game and then quit, go back to the main menu, and take a second snapshot. Comparing these two will tell you whether any objects/allocations from the game are still hanging around in memory.

Does it make sense to optimize and rewrite part of the code for the DOTS system, for mobile devices including VR/AR? Do you use this system in your projects?
A number of game projects now make use of parts of the Data-Oriented Technology Stack (DOTS). Native Containers, the C# Job System, Mathematics, and the Burst compiler are all fully supported packages that you can use right away to write optimal, parallelized, high-performance C# (HPC#) code to improve your project's CPU performance (a minimal Burst job is sketched below). A smaller number of projects are also using Entities and associated packages, such as the Hybrid Renderer, Unity Physics, and NetCode.
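As a small, hedged example of the fully supported pieces mentioned above (Native Containers, the C# Job System, and Burst) working together, the job below squares an array of floats in parallel. The SquareValuesJob name and the batch size are hypothetical; the attribute, container, and Schedule call are standard Unity.Burst, Unity.Collections, and Unity.Jobs APIs.

    using Unity.Burst;
    using Unity.Collections;
    using Unity.Jobs;

    // Hypothetical Burst-compiled job that squares every element of an array in parallel.
    [BurstCompile]
    struct SquareValuesJob : IJobParallelFor
    {
        public NativeArray<float> Values;

        public void Execute(int index)
        {
            Values[index] = Values[index] * Values[index];
        }
    }

    // Example usage, e.g., from a MonoBehaviour:
    // var values = new NativeArray<float>(1024, Allocator.TempJob);
    // new SquareValuesJob { Values = values }.Schedule(values.Length, 64).Complete();
    // values.Dispose();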
However, at this time, Entities and these associated packages are experimental, and using them involves accepting a degree of technical risk. This risk derives from an API that is still evolving, missing or incomplete features, as well as the engineering learning curve required to understand Data-Oriented Design (DOD) to get the most out of Unity's Entity Component System (ECS). Unity engineer Steve McGreal wrote a guide on DOTS best practices, which includes some DOD fundamentals and tips for improving ECS performance.

How do you go about setting limits on SetPass calls or shader complexity? Can you even set limits beforehand?
Rendering is a complex process and there is no practical way to set a hard limit on the maximum number of SetPass calls or a metric for shader complexity. Even on a fixed hardware platform, such as a single console, the limits will depend on what kind of scene you want to render, and what other work is happening on the CPU and GPU during a frame. That's why the rule on when to profile is "early and often." Teams tend to create a "vertical slice" demo early on during production – usually a short burst of gameplay developed to the level of visual fidelity intended for the final game. This is your first opportunity to profile rendering and figure out what optimizations and limits might be needed. The profiling process should be repeated every time a new area or other major piece of visual content is added.

Here are additional resources for learning about performance optimization:

Blogs
Optimize your mobile game performance: Expert tips on graphics and assets
Optimize your mobile game performance: Expert tips on physics, UI, and audio settings
Optimize your mobile game performance: Expert tips on profiling, memory, and code architecture from Unity's top engineers
Expert tips on optimizing your game graphics for consoles
Profiling in Unity 2021 LTS: What, when, and how

How-to pages
Profiling and debugging tools
How to profile memory in Unity
Best practices for profiling game performance

E-books
Optimize your console and PC game performance
Optimize your mobile game performance
Ultimate guide to profiling Unity games

Learn tutorials
Profiling CPU performance in Android builds with Android Studio
Profiling applications – Made with Unity

Even more advanced technical content is coming soon – but in the meantime, please feel free to suggest topics for us to cover on the forum and check out the full roundtable webinar recording.
    #pick #these #helpful #tips #advanced
    Pick up these helpful tips on advanced profiling
    In June, we hosted a webinar featuring experts from Arm, the Unity Accelerate Solutions team, and SYBO Games, the creator of Subway Surfers. The resulting roundtable focused on profiling tips and strategies for mobile games, the business implications of poor performance, and how SYBO shipped a hit mobile game with 3 billion downloads to date.Let’s dive into some of the follow-up questions we didn’t have time to cover during the webinar. You can also watch the full recording.We hear a lot about the Unity Profiler in relation to CPU profiling, but not as much about the Profile Analyzer. Are there any plans to improve it or integrate it into the core Profiler toolset?There are no immediate plans to integrate the Profile Analyzer into the core Editor, but this might change as our profiling tools evolve.Does Unity have any plans to add an option for the GPU Usage Profiler module to appear in percentages like it does in milliseconds?That’s a great idea, and while we can’t say yes or no at the time of this blog post, it’s a request that’s been shared with our R&D teams for possible future consideration.Do you have plans for tackling “Application Not Responding”errors that are reported by the Google Play store and don’t contain any stack trace?Although we don’t have specific plans for tracking ANR without stack trace at the moment, we will consider it for the future roadmap.How can I share my feedback to help influence the future development of Unity’s profiling tools?You can keep track of upcoming features and share feedback via our product board and forums. We are also conducting a survey to learn more about our customers’ experience with the profiling tools. If you’ve used profiling tools beforeor are working on a project that requires optimization, we would love to get your input. The survey is designed to take no more than 5–10 minutes to complete.By participating, you’ll also have the chance to opt into a follow-up interview to share more feedback directly with the development team, including the opportunity to discuss potential prototypes of new features.Is there a good rule for determining what counts as a viable low-end device to target?A rule of thumb we hear from many Unity game developers is to target devices that are five years old at the time of your game’s release, as this helps to ensure the largest user base. But we also see teams reducing their release-date scope to devices that are only three years old if they’re aiming for higher graphical quality. A visually complex 3D application, for example, will have higher device requirements than a simple 2D application. This approach allows for a higher “min spec,” but reduces the size of the initial install base. It’s essentially a business decision: Will it cost more to develop for and support old devices than what your game will earn running on them?Sometimes the technical requirements of your game will dictate your minimum target specifications. So if your game uses up large amounts of texture memory even after optimization, but you absolutely cannot reduce quality or resolution, that probably rules out running on phones with insufficient memory. If your rendering solution requires compute shaders, that likely rules out devices with drivers that can’t support OpenGL ES 3.1, Metal, or Vulkan.It’s a good idea to look at market data for your priority target audience. For instance, mobile device specs can vary a lot between countries and regions. 
Remember to define some target “budgets” so that benchmarking goals for what’s acceptable are set prior to choosing low-end devices for testing.For live service games that will run for years, you’ll need to monitor their compatibility continuously and adapt over time based on both your actual user base and current devices on the market.Is it enough to test performance exclusively on low-end devices to ensure that the game will also run smoothly on high-end ones?It might be, if you have a uniform workload on all devices. However, you still need to consider variations across hardware from different vendors and/or driver versions.It’s common for graphically rich games to have tiers of graphical fidelity – the higher the visual tier, the more resources required on capable devices. This tier selection might be automatic, but increasingly, users themselves can control the choice via a graphical settings menu. For this style of development, you’ll need to test at least one “min spec” target device per feature/workload tier that your game supports.If your game detects the capabilities of the device it’s running on and adapts the graphics output as needed, it could perform differently on higher end devices. So be sure to test on a range of devices with the different quality levels you’ve programmed the title for.Note: In this section, we’ve specified whether the expert answering is from Arm or Unity.Do you have advice for detecting the power range of a device to support automatic quality settings, particularly for mobile?Arm: We typically see developers doing coarse capability binning based on CPU and GPU models, as well as the GPU shader core count. This is never perfect, but it’s “about right.” A lot of studios collect live analytics from deployed devices, so they can supplement the automated binning with device-specific opt-in/opt-out to work around point issues where the capability binning isn’t accurate enough.As related to the previous question, for graphically rich content, we see a trend in mobile toward settings menus where users can choose to turn effects on or off, thereby allowing them to make performance choices that suit their preferences.Unity: Device memory and screen resolution are also important factors for choosing quality settings. Regarding textures, developers should be aware that Render Textures used by effects or post-processing can become a problem on devices with high resolution screens, but without a lot of memory to match.Given the breadth of configurations available, can you suggest a way to categorize devices to reduce the number of tiers you need to optimize for?Arm: The number of tiers your team optimizes for is really a game design and business decision, and should be based on how important pushing visual quality is to the value proposition of the game. For some genres it might not matter at all, but for others, users will have high expectations for the visual fidelity.Does the texture memory limit differ among models and brands of Android devices that have the same amount of total system memory?Arm: To a first-order approximation, we would expect the total amount of texture memory to be similar across vendors and hardware generations. There will be minor differences caused by memory layout and alignment restrictions, so it won’t be exactly the same.Is it CPU or GPU usage that contributes the most to overheating on mobile devices?Arm: It’s entirely content dependent. 
The CPU, GPU, or the DRAM can individually overheat a high-end device if pushed hard enough, even if you ignore the other two completely. The exact balance will vary based on the workload you are running.What tips can you give for profiling on devices that have thermal throttling? What margin would you target to avoid thermal throttling?Arm: Optimizing for frame time can be misleading on Android because devices will constantly adjust frequency to optimize energy usage, making frame time an incomplete measure by itself. Preferably, monitor CPU and GPU cycles per frame, as well as GPU memory bandwidth per frame, to get some value that is independent of frequency. The cycle target you need will depend on each device’s chip design, so you’ll need to experiment.Any optimization helps when it comes to managing power consumption, even if it doesn’t directly improve frame rate. For example, reducing CPU cycles will reduce thermal load even if the CPU isn’t the critical path for your game.Beyond that, optimizing memory bandwidth is one of the biggest savings you can make. Accessing DRAM is orders of magnitude more expensive than accessing local data on-chip, so watch your triangle budget and keep data types in memory as small as possible.Unity: To limit the impact of CPU clock frequency on the performance metrics, we recommend trying to run at a consistent temperature. There are a couple of approaches for doing this:Run warm: Run the device for a while so that it reaches a stable warm state before profiling.Run cool: Leave the device to cool for a while before profiling. This strategy can eliminate confusion and inconsistency in profiling sessions by taking captures that are unlikely to be thermally throttled. However, such captures will always represent the best case performance a user will see rather than what they might actually see after long play sessions. This strategy can also delay the time between profiling runs due to the need to wait for the cooling period first.With some hardware, you can fix the clock frequency for more stable performance metrics. However, this is not representative of most devices your users will be using, and will not report accurate real-world performance. Basically, it’s a handy technique if you are using a continuous integration setup to check for performance changes in your codebase over time.Any thoughts on Vulkan vs OpenGL ES 3 on Android? Vulkan is generally slower performance-wise. At the same time, many devices lack support for various features on ES3.Arm: Recent drivers and engine builds have vastly improved the quality of the Vulkan implementations available; so for an equivalent workload, there shouldn’t be a performance gap between OpenGL ES and Vulkan. The switch to Vulkan is picking up speed and we expect to see more people choosing Vulkan by default over the next year or two. If you have counterexamples of areas where Vulkan isn’t performing well, please get in touch with us. We’d love to hear from you.What tools can we use to monitor memory bandwidth?Arm: The Streamline Profiler in Arm Mobile Studio can measure bandwidth between Mali GPUs and the external DRAM.Should you split graphical assets by device tiers or device resolution?Arm: You can get the best result by retuning assets, but it’s expensive to do. 
Start by reducing resolution and frame rate, or disabling some optional post-processing effects.What is the best way to record performance metric statistics from our development build?Arm: You can use the Performance Advisor tool in Arm Mobile Studio to automatically capture and export performance metrics from the Mali GPUs, although this comes with a caveat: The generation of JSON reports requires a Professional Edition license.Unity: The Unity Profiler can be used to view common rendering metrics, such as vertex and triangle counts in the Rendering module. Plus you can include custom packages, such as System Metrics Mali, in your project to add low-level Mali GPU metrics to the Unity Profiler.What are your recommendations for profiling shader code?You need a GPU Profiler to do this. The one you choose depends on your target platform. For example, on iOS devices, Xcode’s GPU Profiler includes the Shader Profiler, which breaks down shader performance on a line-by-line basis.Arm Mobile Studio supports Mali Offline Compiler, a static analysis tool for shader code and compute kernels. This tool provides some overall performance estimates and recommendations for the Arm Mali GPU family.When profiling, the general rule is to test your game or app on the target device. With the industry moving toward more types of chipsets, how can developers profile and pinpoint issues on the many different hardware configurations in a reasonable amount of time?The proliferation of chipsets is primarily a concern on desktop platforms. There are a limited number of hardware architectures to test for console games. On mobile, there’s Apple’s A Series for iOS devices and a range of Arm and Qualcomm architectures for Android – but selecting a manageable list of representative mobile devices is pretty straightforward.On desktop it’s trickier because there’s a wide range of available chipsets and architectures, and buying Macs and PCs for testing can be expensive. Our best advice is to do what you can. No studio has infinite time and money for testing. We generally wouldn’t expect any huge surprises when comparing performance between an Intel x86 CPU and a similarly specced AMD processor, for instance. As long as the game performs comfortably on your minimum spec machine, you should be reasonably confident about other machines. It’s also worth considering using analytics, such as Unity Analytics, to record frame rates, system specs, and player options’ settings to identify hotspots or problematic configurations.We’re seeing more studios move to using at least some level of automated testing for regular on-device profiling, with summary stats published where the whole team can keep an eye on performance across the range of target devices. With well-designed test scenes, this can usually be made into a mechanical process that’s suited for automation, so you don’t need an experienced technical artist or QA tester running builds through the process manually.Do you ever see performance issues on high-end devices that don’t occur on the low-end ones?It’s uncommon, but we have seen it. Often the issue lies in how the project is configured, such as with the use of fancy shaders and high-res textures on high-end devices, which can put extra pressure on the GPU or memory. 
Sometimes a high-end mobile device or console will use a high-res phone screen or 4K TV output as a selling point but not necessarily have enough GPU power or memory to live up to that promise without further optimization.If you make use of the current versions of the C# Job System, verify whether there’s a job scheduling overhead that scales with the number of worker threads, which in turn, scales with the number of CPU cores. This can result in code that runs more slowly on a 64+ core Threadripper™ than on a modest 4-core or 8-core CPU. This issue will be addressed in future versions of Unity, but in the meantime, try limiting the number of job worker threads by setting JobsUtility.JobWorkerCount.What are some pointers for setting a good frame budget?Most of the time when we talk about frame budgets, we’re talking about the overall time budget for the frame. You calculate 1000/target frames per secondto get your frame budget: 33.33 ms for 30 fps, 16.66 ms for 60 fps, 8.33 ms for 120 Hz, etc. Reduce that number by around 35% if you’re on mobile to give the chips a chance to cool down between each frame. Dividing the budget up to get specific sub-budgets for different features and/or systems is probably overkill except for projects with very specific, predictable systems, or those that make heavy use of Time Slicing.Generally, profiling is the process of finding the biggest bottlenecks – and therefore, the biggest potential performance gains. So rather than saying, “Physics is taking 1.2 ms when the budget only allows for 1 ms,” you might look at a frame and say, “Rendering is taking 6 ms, making it the biggest main thread CPU cost in the frame. How can we reduce that?”It seems like profiling early and often is still not common knowledge. What are your thoughts on why this might be the case?Building, releasing, promoting, and managing a game is difficult work on multiple fronts. So there will always be numerous priorities vying for a developer’s attention, and profiling can fall by the wayside. They know it’s something they should do, but perhaps they’re unfamiliar with the tools and don’t feel like they have time to learn. Or, they don’t know how to fit profiling into their workflows because they’re pushed toward completing features rather than performance optimization.Just as with bugs and technical debt, performance issues are cheaper and less risky to address early on, rather than later in a project’s development cycle. Our focus is on helping to demystify profiling tools and techniques for those developers who are unfamiliar with them. That’s what the profiling e-book and its related blog post and webinar aim to support.Is there a way to exclude certain methods from instrumentation or include only specific methods when using Deep Profiling in the Unity Profiler? When using a lot of async/await tasks, we create large stack traces, but how can we avoid slowing down both the client and the Profiler when Deep Profiling?You can enable Allocation call stacks to see the full call stacks that lead to managed allocations. Additionally, you can – and should! – manually instrument long-running methods and processes by sprinkling ProfilerMarkers throughout your code. There’s currently no way to automatically enable Deep Profiling or disable profiling entirely in specific parts of your application. 
But manually adding ProfilerMarkers and enabling Allocation call stacks when required can help you dig down into problem areas without having to resort to Deep Profiling.As of Unity 2022.2, you can also use our IgnoredByDeepProfilerAttribute to prevent the Unity Profiler from capturing method calls. Just add the IgnoredByDeepProfiler attribute to classes, structures, and methods.Where can I find more information on Deep Profiling in Unity?Deep Profiling is covered in our Profiler documentation. Then there’s the most in-depth, single resource for profiling information, the Ultimate Guide to profiling Unity games e-book, which links to relevant documentation and other resources throughout.Is it correct that Deep Profiling is only useful for the Allocations Profiler and that it skews results so much that it’s not useful for finding hitches in the game?Deep Profiling can be used to find the specific causes of managed allocations, although Allocation call stacks can do the same thing with less overhead, overall. At the same time, Deep Profiling can be helpful for quickly investigating why one specific ProfilerMarker seems to be taking so long, as it’s more convenient to enable than to add numerous ProfilerMarkers to your scripts and rebuild your game. But yes, it does skew performance quite heavily and so shouldn’t be enabled for general profiling.Is VSync worth setting to every VBlank? My mobile game runs at a very low fps when it’s disabled.Mobile devices force VSync to be enabled at a driver/hardware level, so disabling it in Unity’s Quality settings shouldn’t make any difference on those platforms. We haven’t heard of a case where disabling VSync negatively affects performance. Try taking a profile capture with VSync enabled, along with another capture of the same scene but with VSync disabled. Then compare the captures using Profile Analyzer to try to understand why the performance is so different.How can you determine if the main thread is waiting for the GPU and not the other way around?This is covered in the Ultimate Guide to profiling Unity games. You can also get more information in the blog post, Detecting performance bottlenecks with Unity Frame Timing Manager.Generally speaking, the telltale sign is that the main thread waits for the Render thread while the Render thread waits for the GPU. The specific marker names will differ depending on your target platform and graphics API, but you should look out for markers with names such as “PresentFrame” or “WaitForPresent.”Is there a solid process for finding memory leaks in profiling?Use the Memory Profiler to compare memory snapshots and check for leaks. For example, you can take a snapshot in your main menu, enter your game and then quit, go back to the main menu, and take a second snapshot. Comparing these two will tell you whether any objects/allocations from the game are still hanging around in memory.Does it make sense to optimize and rewrite part of the code for the DOTS system, for mobile devices including VR/AR? Do you use this system in your projects?A number of game projects now make use of parts of the Data-Oriented Technology Stack. Native Containers, the C# Job System, Mathematics, and the Burst compilerare all fully supported packages that you can use right away to write optimal, parallelized, high-performance C#code to improve your project’s CPU performance.A smaller number of projects are also using Entities and associated packages, such as the Hybrid Renderer, Unity Physics, and NetCode. 
However, at this time, the packages listed are experimental, and using them involves accepting a degree of technical risk. This risk derives from an API that is still evolving, missing or incomplete features, as well as the engineering learning curve required to understand Data-Oriented Designto get the most out of Unity’s Entity Component System. Unity engineer Steve McGreal wrote a guide on DOTS best practices, which includes some DOD fundamentals and tips for improving ECS performance.How do you go about setting limits on SetPass calls or shader complexity? Can you even set limits beforehand?Rendering is a complex process and there is no practical way to set a hard limit on the maximum number of SetPass calls or a metric for shader complexity. Even on a fixed hardware platform, such as a single console, the limits will depend on what kind of scene you want to render, and what other work is happening on the CPU and GPU during a frame.That’s why the rule on when to profile is “early and often.” Teams tend to create a “vertical slice” demo early on during production – usually a short burst of gameplay developed to the level of visual fidelity intended for the final game. This is your first opportunity to profile rendering and figure out what optimizations and limits might be needed. The profiling process should be repeated every time a new area or other major piece of visual content is added.Here are additional resources for learning about performance optimization:BlogsOptimize your mobile game performance: Expert tips on graphics and assetsOptimize your mobile game performance: Expert tips on physics, UI, and audio settingsOptimize your mobile game performance: Expert tips on profiling, memory, and code architecture from Unity’s top engineersExpert tips on optimizing your game graphics for consolesProfiling in Unity 2021 LTS: What, when, and howHow-to pagesProfiling and debugging toolsHow to profile memory in UnityBest practices for profiling game performanceE-booksOptimize your console and PC game performanceOptimize your mobile game performanceUltimate guide to profiling Unity gamesLearn tutorialsProfiling CPU performance in Android builds with Android StudioProfiling applications – Made with UnityEven more advanced technical content is coming soon – but in the meantime, please feel free to suggest topics for us to cover on the forum and check out the full roundtable webinar recording. #pick #these #helpful #tips #advanced
    UNITY.COM
    Pick up these helpful tips on advanced profiling
    In June, we hosted a webinar featuring experts from Arm, the Unity Accelerate Solutions team, and SYBO Games, the creator of Subway Surfers. The resulting roundtable focused on profiling tips and strategies for mobile games, the business implications of poor performance, and how SYBO shipped a hit mobile game with 3 billion downloads to date.Let’s dive into some of the follow-up questions we didn’t have time to cover during the webinar. You can also watch the full recording.We hear a lot about the Unity Profiler in relation to CPU profiling, but not as much about the Profile Analyzer (available as a Unity package). Are there any plans to improve it or integrate it into the core Profiler toolset?There are no immediate plans to integrate the Profile Analyzer into the core Editor, but this might change as our profiling tools evolve.Does Unity have any plans to add an option for the GPU Usage Profiler module to appear in percentages like it does in milliseconds?That’s a great idea, and while we can’t say yes or no at the time of this blog post, it’s a request that’s been shared with our R&D teams for possible future consideration.Do you have plans for tackling “Application Not Responding” (ANR) errors that are reported by the Google Play store and don’t contain any stack trace?Although we don’t have specific plans for tracking ANR without stack trace at the moment, we will consider it for the future roadmap.How can I share my feedback to help influence the future development of Unity’s profiling tools?You can keep track of upcoming features and share feedback via our product board and forums. We are also conducting a survey to learn more about our customers’ experience with the profiling tools. If you’ve used profiling tools before (either daily or just once) or are working on a project that requires optimization, we would love to get your input. The survey is designed to take no more than 5–10 minutes to complete.By participating, you’ll also have the chance to opt into a follow-up interview to share more feedback directly with the development team, including the opportunity to discuss potential prototypes of new features.Is there a good rule for determining what counts as a viable low-end device to target?A rule of thumb we hear from many Unity game developers is to target devices that are five years old at the time of your game’s release, as this helps to ensure the largest user base. But we also see teams reducing their release-date scope to devices that are only three years old if they’re aiming for higher graphical quality. A visually complex 3D application, for example, will have higher device requirements than a simple 2D application. This approach allows for a higher “min spec,” but reduces the size of the initial install base. It’s essentially a business decision: Will it cost more to develop for and support old devices than what your game will earn running on them?Sometimes the technical requirements of your game will dictate your minimum target specifications. So if your game uses up large amounts of texture memory even after optimization, but you absolutely cannot reduce quality or resolution, that probably rules out running on phones with insufficient memory. If your rendering solution requires compute shaders, that likely rules out devices with drivers that can’t support OpenGL ES 3.1, Metal, or Vulkan.It’s a good idea to look at market data for your priority target audience. For instance, mobile device specs can vary a lot between countries and regions. 
Remember to define some target “budgets” so that benchmarking goals for what’s acceptable are set prior to choosing low-end devices for testing.For live service games that will run for years, you’ll need to monitor their compatibility continuously and adapt over time based on both your actual user base and current devices on the market.Is it enough to test performance exclusively on low-end devices to ensure that the game will also run smoothly on high-end ones?It might be, if you have a uniform workload on all devices. However, you still need to consider variations across hardware from different vendors and/or driver versions.It’s common for graphically rich games to have tiers of graphical fidelity – the higher the visual tier, the more resources required on capable devices. This tier selection might be automatic, but increasingly, users themselves can control the choice via a graphical settings menu. For this style of development, you’ll need to test at least one “min spec” target device per feature/workload tier that your game supports.If your game detects the capabilities of the device it’s running on and adapts the graphics output as needed, it could perform differently on higher end devices. So be sure to test on a range of devices with the different quality levels you’ve programmed the title for.Note: In this section, we’ve specified whether the expert answering is from Arm or Unity.Do you have advice for detecting the power range of a device to support automatic quality settings, particularly for mobile?Arm: We typically see developers doing coarse capability binning based on CPU and GPU models, as well as the GPU shader core count. This is never perfect, but it’s “about right.” A lot of studios collect live analytics from deployed devices, so they can supplement the automated binning with device-specific opt-in/opt-out to work around point issues where the capability binning isn’t accurate enough.As related to the previous question, for graphically rich content, we see a trend in mobile toward settings menus where users can choose to turn effects on or off, thereby allowing them to make performance choices that suit their preferences.Unity: Device memory and screen resolution are also important factors for choosing quality settings. Regarding textures, developers should be aware that Render Textures used by effects or post-processing can become a problem on devices with high resolution screens, but without a lot of memory to match.Given the breadth of configurations available (CPU, GPU, SOC, memory, mobile, desktop, console, etc.), can you suggest a way to categorize devices to reduce the number of tiers you need to optimize for?Arm: The number of tiers your team optimizes for is really a game design and business decision, and should be based on how important pushing visual quality is to the value proposition of the game. For some genres it might not matter at all, but for others, users will have high expectations for the visual fidelity.Does the texture memory limit differ among models and brands of Android devices that have the same amount of total system memory?Arm: To a first-order approximation, we would expect the total amount of texture memory to be similar across vendors and hardware generations. There will be minor differences caused by memory layout and alignment restrictions, so it won’t be exactly the same.Is it CPU or GPU usage that contributes the most to overheating on mobile devices?Arm: It’s entirely content dependent. 
The CPU, GPU, or the DRAM can individually overheat a high-end device if pushed hard enough, even if you ignore the other two completely. The exact balance will vary based on the workload you are running.What tips can you give for profiling on devices that have thermal throttling? What margin would you target to avoid thermal throttling (i.e., targeting 20 ms instead of 33 ms)?Arm: Optimizing for frame time can be misleading on Android because devices will constantly adjust frequency to optimize energy usage, making frame time an incomplete measure by itself. Preferably, monitor CPU and GPU cycles per frame, as well as GPU memory bandwidth per frame, to get some value that is independent of frequency. The cycle target you need will depend on each device’s chip design, so you’ll need to experiment.Any optimization helps when it comes to managing power consumption, even if it doesn’t directly improve frame rate. For example, reducing CPU cycles will reduce thermal load even if the CPU isn’t the critical path for your game.Beyond that, optimizing memory bandwidth is one of the biggest savings you can make. Accessing DRAM is orders of magnitude more expensive than accessing local data on-chip, so watch your triangle budget and keep data types in memory as small as possible.Unity: To limit the impact of CPU clock frequency on the performance metrics, we recommend trying to run at a consistent temperature. There are a couple of approaches for doing this:Run warm: Run the device for a while so that it reaches a stable warm state before profiling.Run cool: Leave the device to cool for a while before profiling. This strategy can eliminate confusion and inconsistency in profiling sessions by taking captures that are unlikely to be thermally throttled. However, such captures will always represent the best case performance a user will see rather than what they might actually see after long play sessions. This strategy can also delay the time between profiling runs due to the need to wait for the cooling period first.With some hardware, you can fix the clock frequency for more stable performance metrics. However, this is not representative of most devices your users will be using, and will not report accurate real-world performance. Basically, it’s a handy technique if you are using a continuous integration setup to check for performance changes in your codebase over time.Any thoughts on Vulkan vs OpenGL ES 3 on Android? Vulkan is generally slower performance-wise. At the same time, many devices lack support for various features on ES3.Arm: Recent drivers and engine builds have vastly improved the quality of the Vulkan implementations available; so for an equivalent workload, there shouldn’t be a performance gap between OpenGL ES and Vulkan (if there is, please let us know). The switch to Vulkan is picking up speed and we expect to see more people choosing Vulkan by default over the next year or two. If you have counterexamples of areas where Vulkan isn’t performing well, please get in touch with us. We’d love to hear from you.What tools can we use to monitor memory bandwidth (RAM <-> VRAM)?Arm: The Streamline Profiler in Arm Mobile Studio can measure bandwidth between Mali GPUs and the external DRAM (or system cache).Should you split graphical assets by device tiers or device resolution?Arm: You can get the best result by retuning assets, but it’s expensive to do. 
Start by reducing resolution and frame rate, or disabling some optional post-processing effects.What is the best way to record performance metric statistics from our development build?Arm: You can use the Performance Advisor tool in Arm Mobile Studio to automatically capture and export performance metrics from the Mali GPUs, although this comes with a caveat: The generation of JSON reports requires a Professional Edition license.Unity: The Unity Profiler can be used to view common rendering metrics, such as vertex and triangle counts in the Rendering module. Plus you can include custom packages, such as System Metrics Mali, in your project to add low-level Mali GPU metrics to the Unity Profiler.What are your recommendations for profiling shader code?You need a GPU Profiler to do this. The one you choose depends on your target platform. For example, on iOS devices, Xcode’s GPU Profiler includes the Shader Profiler, which breaks down shader performance on a line-by-line basis.Arm Mobile Studio supports Mali Offline Compiler, a static analysis tool for shader code and compute kernels. This tool provides some overall performance estimates and recommendations for the Arm Mali GPU family.When profiling, the general rule is to test your game or app on the target device(s). With the industry moving toward more types of chipsets (Apple M1, Arm, x86 by Intel, AMD, etc.), how can developers profile and pinpoint issues on the many different hardware configurations in a reasonable amount of time?The proliferation of chipsets is primarily a concern on desktop platforms. There are a limited number of hardware architectures to test for console games. On mobile, there’s Apple’s A Series for iOS devices and a range of Arm and Qualcomm architectures for Android – but selecting a manageable list of representative mobile devices is pretty straightforward.On desktop it’s trickier because there’s a wide range of available chipsets and architectures, and buying Macs and PCs for testing can be expensive. Our best advice is to do what you can. No studio has infinite time and money for testing. We generally wouldn’t expect any huge surprises when comparing performance between an Intel x86 CPU and a similarly specced AMD processor, for instance. As long as the game performs comfortably on your minimum spec machine, you should be reasonably confident about other machines. It’s also worth considering using analytics, such as Unity Analytics, to record frame rates, system specs, and player options’ settings to identify hotspots or problematic configurations.We’re seeing more studios move to using at least some level of automated testing for regular on-device profiling, with summary stats published where the whole team can keep an eye on performance across the range of target devices. With well-designed test scenes, this can usually be made into a mechanical process that’s suited for automation, so you don’t need an experienced technical artist or QA tester running builds through the process manually.Do you ever see performance issues on high-end devices that don’t occur on the low-end ones?It’s uncommon, but we have seen it. Often the issue lies in how the project is configured, such as with the use of fancy shaders and high-res textures on high-end devices, which can put extra pressure on the GPU or memory. 
Sometimes a high-end mobile device or console will use a high-res phone screen or 4K TV output as a selling point but not necessarily have enough GPU power or memory to live up to that promise without further optimization.If you make use of the current versions of the C# Job System, verify whether there’s a job scheduling overhead that scales with the number of worker threads, which in turn, scales with the number of CPU cores. This can result in code that runs more slowly on a 64+ core Threadripper™ than on a modest 4-core or 8-core CPU. This issue will be addressed in future versions of Unity, but in the meantime, try limiting the number of job worker threads by setting JobsUtility.JobWorkerCount.What are some pointers for setting a good frame budget?Most of the time when we talk about frame budgets, we’re talking about the overall time budget for the frame. You calculate 1000/target frames per second (fps) to get your frame budget: 33.33 ms for 30 fps, 16.66 ms for 60 fps, 8.33 ms for 120 Hz, etc. Reduce that number by around 35% if you’re on mobile to give the chips a chance to cool down between each frame. Dividing the budget up to get specific sub-budgets for different features and/or systems is probably overkill except for projects with very specific, predictable systems, or those that make heavy use of Time Slicing.Generally, profiling is the process of finding the biggest bottlenecks – and therefore, the biggest potential performance gains. So rather than saying, “Physics is taking 1.2 ms when the budget only allows for 1 ms,” you might look at a frame and say, “Rendering is taking 6 ms, making it the biggest main thread CPU cost in the frame. How can we reduce that?”It seems like profiling early and often is still not common knowledge. What are your thoughts on why this might be the case?Building, releasing, promoting, and managing a game is difficult work on multiple fronts. So there will always be numerous priorities vying for a developer’s attention, and profiling can fall by the wayside. They know it’s something they should do, but perhaps they’re unfamiliar with the tools and don’t feel like they have time to learn. Or, they don’t know how to fit profiling into their workflows because they’re pushed toward completing features rather than performance optimization.Just as with bugs and technical debt, performance issues are cheaper and less risky to address early on, rather than later in a project’s development cycle. Our focus is on helping to demystify profiling tools and techniques for those developers who are unfamiliar with them. That’s what the profiling e-book and its related blog post and webinar aim to support.Is there a way to exclude certain methods from instrumentation or include only specific methods when using Deep Profiling in the Unity Profiler? When using a lot of async/await tasks, we create large stack traces, but how can we avoid slowing down both the client and the Profiler when Deep Profiling?You can enable Allocation call stacks to see the full call stacks that lead to managed allocations (shown as magenta in the Unity CPU Profiler Timeline view). Additionally, you can – and should! – manually instrument long-running methods and processes by sprinkling ProfilerMarkers throughout your code. There’s currently no way to automatically enable Deep Profiling or disable profiling entirely in specific parts of your application. 
It seems like profiling early and often is still not common knowledge. What are your thoughts on why this might be the case?

Building, releasing, promoting, and managing a game is difficult work on multiple fronts, so there will always be numerous priorities vying for a developer's attention, and profiling can fall by the wayside. They know it's something they should do, but perhaps they're unfamiliar with the tools and don't feel like they have time to learn. Or they don't know how to fit profiling into their workflows because they're pushed toward completing features rather than performance optimization.

Just as with bugs and technical debt, performance issues are cheaper and less risky to address early on rather than later in a project's development cycle. Our focus is on helping to demystify profiling tools and techniques for those developers who are unfamiliar with them. That's what the profiling e-book and its related blog post and webinar aim to support.

Is there a way to exclude certain methods from instrumentation, or include only specific methods, when using Deep Profiling in the Unity Profiler? When using a lot of async/await tasks, we create large stack traces, but how can we avoid slowing down both the client and the Profiler when Deep Profiling?

You can enable Allocation call stacks to see the full call stacks that lead to managed allocations (shown as magenta in the Unity CPU Profiler Timeline view). Additionally, you can – and should! – manually instrument long-running methods and processes by sprinkling ProfilerMarkers throughout your code. There's currently no way to automatically enable Deep Profiling or disable profiling entirely in specific parts of your application, but manually adding ProfilerMarkers and enabling Allocation call stacks when required can help you dig down into problem areas without having to resort to Deep Profiling.

As of Unity 2022.2, you can also use our IgnoredByDeepProfilerAttribute to prevent the Unity Profiler from capturing method calls. Just add the IgnoredByDeepProfiler attribute to classes, structures, and methods.
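
Below is a minimal sketch of the manual instrumentation described above, combining a ProfilerMarker with the IgnoredByDeepProfiler attribute (Unity 2022.2+). The EnemySimulation, Tick, and NoisyHelper names are hypothetical placeholders, not part of any Unity API.

using Unity.Profiling;

public class EnemySimulation
{
    // Explicit marker: shows up in the CPU Profiler even without Deep Profiling enabled.
    static readonly ProfilerMarker s_TickMarker = new ProfilerMarker("EnemySimulation.Tick");

    public void Tick()
    {
        using (s_TickMarker.Auto()) // scoped Begin()/End() pair around the long-running work
        {
            for (int i = 0; i < 1000; i++)
                NoisyHelper();
        }
    }

    // Unity 2022.2+: excluded from Deep Profiling instrumentation, so a tiny method
    // called thousands of times per frame doesn't flood the capture.
    [IgnoredByDeepProfiler]
    void NoisyHelper()
    {
        // ... trivial per-enemy work ...
    }
}

The idea is that the marker gives you a cheap, always-available timing for the expensive outer loop, while the attribute keeps the noisy inner method out of Deep Profiling captures.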
Where can I find more information on Deep Profiling in Unity?

Deep Profiling is covered in our Profiler documentation. Then there's the most in-depth single resource for profiling information, the Ultimate guide to profiling Unity games e-book, which links to relevant documentation and other resources throughout.

Is it correct that Deep Profiling is only useful for the Allocations Profiler, and that it skews results so much that it's not useful for finding hitches in the game?

Deep Profiling can be used to find the specific causes of managed allocations, although Allocation call stacks can do the same thing with less overhead overall. At the same time, Deep Profiling can be helpful for quickly investigating why one specific ProfilerMarker seems to be taking so long, as it's more convenient to enable than adding numerous ProfilerMarkers to your scripts and rebuilding your game. But yes, it does skew performance quite heavily, so it shouldn't be enabled for general profiling.

Is VSync worth setting to every VBlank? My mobile game runs at a very low fps when it's disabled.

Mobile devices force VSync to be enabled at a driver/hardware level, so disabling it in Unity's Quality settings shouldn't make any difference on those platforms. We haven't heard of a case where disabling VSync negatively affects performance. Try taking a profile capture with VSync enabled, along with another capture of the same scene with VSync disabled, then compare the captures using Profile Analyzer to try to understand why the performance is so different.

How can you determine if the main thread is waiting for the GPU, and not the other way around?

This is covered in the Ultimate guide to profiling Unity games. You can also get more information in the blog post Detecting performance bottlenecks with Unity Frame Timing Manager. Generally speaking, the telltale sign is that the main thread waits for the Render thread while the Render thread waits for the GPU. The specific marker names will differ depending on your target platform and graphics API, but you should look out for markers with names such as "PresentFrame" or "WaitForPresent."

Is there a solid process for finding memory leaks in profiling?

Use the Memory Profiler to compare memory snapshots and check for leaks. For example, you can take a snapshot in your main menu, enter your game and then quit, go back to the main menu, and take a second snapshot. Comparing these two will tell you whether any objects/allocations from the game are still hanging around in memory.

Does it make sense to optimize and rewrite part of the code for the DOTS system, for mobile devices including VR/AR? Do you use this system in your projects?

A number of game projects now make use of parts of the Data-Oriented Technology Stack (DOTS). Native Containers, the C# Job System, Mathematics, and the Burst compiler are all fully supported packages that you can use right away to write optimal, parallelized, high-performance C# (HPC#) code to improve your project's CPU performance. A smaller number of projects are also using Entities and associated packages, such as the Hybrid Renderer, Unity Physics, and NetCode. However, at this time, the packages listed are experimental, and using them involves accepting a degree of technical risk. This risk derives from an API that is still evolving, missing or incomplete features, and the engineering learning curve required to understand Data-Oriented Design (DOD) to get the most out of Unity's Entity Component System (ECS). Unity engineer Steve McGreal wrote a guide on DOTS best practices, which includes some DOD fundamentals and tips for improving ECS performance.

How do you go about setting limits on SetPass calls or shader complexity? Can you even set limits beforehand?

Rendering is a complex process, and there is no practical way to set a hard limit on the maximum number of SetPass calls or a metric for shader complexity. Even on a fixed hardware platform, such as a single console, the limits will depend on what kind of scene you want to render and what other work is happening on the CPU and GPU during a frame. That's why the rule on when to profile is "early and often." Teams tend to create a "vertical slice" demo early on during production – usually a short burst of gameplay developed to the level of visual fidelity intended for the final game. This is your first opportunity to profile rendering and figure out what optimizations and limits might be needed. The profiling process should be repeated every time a new area or other major piece of visual content is added.

Here are additional resources for learning about performance optimization:

Blogs
- Optimize your mobile game performance: Expert tips on graphics and assets
- Optimize your mobile game performance: Expert tips on physics, UI, and audio settings
- Optimize your mobile game performance: Expert tips on profiling, memory, and code architecture from Unity's top engineers
- Expert tips on optimizing your game graphics for consoles
- Profiling in Unity 2021 LTS: What, when, and how

How-to pages
- Profiling and debugging tools
- How to profile memory in Unity
- Best practices for profiling game performance

E-books
- Optimize your console and PC game performance
- Optimize your mobile game performance
- Ultimate guide to profiling Unity games

Learn tutorials
- Profiling CPU performance in Android builds with Android Studio
- Profiling applications – Made with Unity

Even more advanced technical content is coming soon – but in the meantime, please feel free to suggest topics for us to cover on the forum and check out the full roundtable webinar recording.
  • to a T review – surrealism and empathy from the maker of Katamari Damacy

    to a T – what a strange thing to happen (Annapurna Interactive)
    Having your arms stuck in a permanent T-pose leads to a wonderfully surreal narrative adventure, in this new indie treat from Katamari creator Keita Takahashi.
    Keita Takahashi seems to be a very nice man. We met him back in 2018, and liked him immensely, but we’re genuinely surprised he’s still working in the games industry. He rose to fame with the first two Katamari Damacy games, but after leaving Bandai Namco, his assertion that he wanted to leave gaming behind and design playgrounds for children seemed like a much more obvious career path for someone who absolutely doesn’t want to be stuck making sequels or generic action games.
    That’s certainly not been his fate and while titles like Noby Noby Boy and Wattam were wonderfully weird and inventive they weren’t the breakout hits that his bank balance probably needed. His latest refusal to toe the line probably isn’t destined to make him a billionaire either, but we’re sure that was never the point of to a T.
    Instead, this is just a relentlessly sweet and charming game about the evils of bullying and the benefits of being nice to people. It’s frequently surreal and ridiculous, but also capable of being serious, and somewhat dark, when it feels the need. Which, given all the singing giraffes, is quite some accomplishment.
    The game casts you as a young schoolkid whose arms are permanently stuck in a T-pose, with both stretched out 90° from his torso. If you’re waiting for an explanation as to why then we’re afraid we can’t tell you, because your character (who you can customise and name as you see fit, along with his dog) doesn’t know either. You find out eventually and the answer is… nothing you would expect.
    This has all been going on for a while before the game starts, as you’re by now well used to sidling through doors and getting your dog to help you dress. You’re also regularly bullied at school, which makes it obvious that being stuck like this is just a metaphor for any difference or peculiarity in real-life.
    Although the specific situations in to a T are fantastical, including the fact that the Japanese village you live in is also populated by anthropomorphic animals (most notably a cadre of food-obsessed giraffes), its take on bullying is surprisingly nuanced and well written. There are also some fun songs that are repeated just enough to become unavoidable earworms.
    The problem is that as well meaning as all this is, there’s no core gameplay element to make it a compelling video game. You can wander around talking to people, and a lot of what they say can be interesting and/or charmingly silly, but that’s all you’re doing. The game describes itself as a ‘narrative adventure’ and that’s very accurate, but what results is the sort of barely interactive experience that makes a Telltale game seem like Doom by comparison.
    There are some short little mini-games, like cleaning your teeth and eating breakfast, but the only goal beyond just triggering story sequences is collecting coins that you can spend on new outfits. This is gamified quite a bit when you realise your arms give you the ability to glide short distances, but it’s still very basic stuff.
    One chapter also lets you play as your dog, trying to solve an array of simple puzzles and engaging in very basic platforming, but while this is more interactive than the normal chapters it’s still not really much fun in its own right.


    Everything is all very charming – the cartoonish visuals are reminiscent of a slightly more realistic looking Wattam – but none of it really amounts to very much. The overall message is about getting on with people no matter their differences, but while that doesn’t necessarily come across as trite it’s also not really the sort of thing you need a £15 video game, with zero replayability, to tell you about.
    It also doesn’t help that the game can be quite frustrating to play through, making it hard to know what you’re supposed to do next, or where you’re meant to be going. The lack of camera controls means it’s hard to act on that information even if you do know what destination you’re aiming for, either because the screen is too zoomed in, something’s blocking your view, or you keep getting confused because the perspective changes.
    As with Wattam, we don’t feel entirely comfortable criticising the game for its failings. We’ll take a game trying to do something new and interesting over a workmanlike sequel any day of the week – whether it succeeds or not – but there’s so little to the experience it’s hard to imagine this fitting anyone to a T.

    to a T review summary

    In Short: Charming, silly, and occasionally profound but Keita Takahashi’s latest lacks the gameplay hook of Katamari Damacy, even if it is surprisingly well written.
    Pros: Wonderfully and unashamedly bizarre, from the premise on down. A great script that touches on some dark subjects, and charming visuals and music.
    Cons: There’s very little gameplay involved and what there is, is either very simple or awkward to control. Barely five hours long, with no replayability.
    Score: 6/10

    Formats: PlayStation 5 (reviewed), Xbox Series X/S, and PC
    Price: £15.49
    Publisher: Annapurna Interactive
    Developer: uvula
    Release Date: 28th May 2025
    Age Rating: 7

    Who knew giraffes were so good at making sandwiches (Annapurna Interactive)
    Email gamecentral@metro.co.uk, leave a comment below, follow us on Twitter, and sign-up to our newsletter.
    To submit Inbox letters and Reader’s Features more easily, without the need to send an email, just use our Submit Stuff page here.
    For more stories like this, check our Gaming page.

  • Real TikTokers are pretending to be Veo 3 AI creations for fun, attention

    The Turing test in reverse


    From music videos to "Are you a prompt?" stunts, "real" videos are presenting as AI

    Kyle Orland



    May 31, 2025 7:08 am


    Of course I'm an AI creation! Why would you even doubt it?

    Credit: Getty Images



    Since Google released its Veo 3 AI model last week, social media users have been having fun with its ability to quickly generate highly realistic eight-second clips complete with sound and lip-synced dialogue. TikTok's algorithm has been serving me plenty of Veo-generated videos featuring impossible challenges, fake news reports, and even surreal short narrative films, to name just a few popular archetypes.
    However, among all the AI-generated video experiments spreading around, I've also noticed a surprising counter-trend on my TikTok feed. Amid all the videos of Veo-generated avatars pretending to be real people, there are now also a bunch of videos of real people pretending to be Veo-generated avatars.
    “This has to be real. There’s no way it's AI.”
    I stumbled on this trend when the TikTok algorithm fed me this video topped with the extra-large caption "Google VEO 3 THIS IS 100% AI." As I watched and listened to the purported AI-generated band that appeared to be playing in the crowded corner of someone's living room, I read the caption containing the supposed prompt that had generated the clip: "a band of brothers with beards playing rock music in 6/8 with an accordion."

    @kongosmusic We are so cooked. This took 3 mins to generate. Simple prompt: “a band of brothers playing rock music in 6/8 with an accordion” ♬ original sound - KONGOS

    After a few seconds of taking those captions at face value, something started to feel a little off. After a few more seconds, I finally noticed the video was posted by Kongos, an indie band that you might recognize from their minor 2012 hit "Come With Me Now." And after a little digging, I discovered the band in the video was actually just Kongos, and the tune was a 9-year-old song that the band had dressed up as an AI creation to get attention.
    Here's the sad thing: It worked! Without the "Look what Veo 3 did!" hook, I might have quickly scrolled by this video before I took the time to listen to the (pretty good!) song. The novel AI angle made me stop just long enough to pay attention to a Kongos song for the first time in over a decade.

    Kongos isn't the only musical act trying to grab attention by claiming their real performances are AI creations. Darden Bela posted that Veo 3 had "created a realistic AI music video" over a clip from what is actually a 2-year-old music video with some unremarkable special effects. Rapper GameBoi Pat dressed up an 11-month-old song with a new TikTok clip captioned "Google's Veo 3 created a realistic sounding rapper... This has to be real. There's no way it's AI" (that last part is true, at least). I could go on, but you get the idea.

    @gameboi_pat This has got to be real. There’s no way it’s AI #google #veo3 #googleveo3 #AI #prompts #areweprompts? ♬ original sound - GameBoi_pat

    I know it's tough to get noticed on TikTok, and that creators will go to great lengths to gain attention from the fickle algorithm. Still, there's something more than a little off-putting about flesh-and-blood musicians pretending to be AI creations just to make social media users pause their scrolling for a few extra seconds before they catch on to the joke (or don't, based on some of the comments).
    The whole thing evokes last year's stunt where a couple of podcast hosts released a posthumous "AI-generated" George Carlin routine before admitting that it had been written by a human after legal threats started flying. As an attention-grabbing stunt, the conceit still works. You want AI-generated content? I can pretend to be that!

    Are we just prompts?
    Some of the most existentially troubling Veo-generated videos floating around TikTok these days center around a gag known as "the prompt theory." These clips focus on various AI-generated people reacting to the idea that they are "just prompts" with various levels of skepticism, fear, or even conspiratorial paranoia.
    On the other side of that gag, some humans are making joke videos playing off the idea that they're merely prompts. RedondoKid used the conceit in a basketball trick shot video, saying "of course I'm going to make this. This is AI, you put that I'm going to make this in the prompt." User thisisamurica thanked his faux prompters for putting him in "a world with such delicious food" before theatrically choking on a forkful of meat. And comedian Drake Cummings developed TikTok skits pretending that it was actually AI video prompts forcing him to indulge in vices like shots of alcohol or online gambling ("Goolgle’s [sic] New A.I. Veo 3 is at it again!! When will the prompts end?!" Cummings jokes in the caption).

    @justdrakenaround Goolgle’s New A.I. Veo 3 is at it again!! When will the prompts end?! #veo3 #google #ai #aivideo #skit ♬ original sound - Drake Cummings

    Beyond the obvious jokes, though, I've also seen a growing trend of TikTok creators approaching friends or strangers and asking them to react to the idea that "we're all just prompts." The reactions run the gamut from "get the fuck away from me" to "I blame that [prompter], I now have to pay taxes" to solipsistic philosophical musings from convenience store employees.
    I'm loath to call this a full-blown TikTok trend based on a few stray examples. Still, these attempts to exploit the confusion between real and AI-generated video are interesting to see. As one commenter on an "Are you a prompt?" ambush video put it: "New trend: Do normal videos and write 'Google Veo 3' on top of the video."
    Which one is real?
    The best Veo-related TikTok engagement hack I've stumbled on so far, though, might be the videos that show multiple short clips and ask the viewer to decide which are real and which are fake. One video I stumbled on shows an increasing number of "Veo 3 Goth Girls" across four clips, challenging in the caption that "one of these videos is real... can you guess which one?" In another example, two similar sets of kids are shown hanging out in cars while the caption asks, "Are you able to identify which scene is real and which one is from veo3?"

    @spongibobbu2 One of these videos is real… can you guess which one? #veo3 ♬ original sound - Jett

    After watching both of these videos on loop a few times, I'm relatively (but not entirely) convinced that every single clip in them is a Veo creation. The fact that I watched these videos multiple times shows how effective the "Real or Veo" challenge framing is at grabbing my attention. Additionally, I'm still not 100 percent confident in my assessments, which is a testament to just how good Google's new model is at creating convincing videos.

    There are still some telltale signs for distinguishing a real video from a Veo creation, though. For one, Veo clips are still limited to just eight seconds, so any video that runs longer (without an apparent change in camera angle) is almost certainly not generated by Google's AI. Looking back at a creator's other videos can also provide some clues—if the same person was appearing in "normal" videos two weeks ago, it's unlikely they would be appearing in Veo creations suddenly.
    There's also a subtle but distinctive style to most Veo creations that can distinguish them from the kind of candid handheld smartphone videos that usually fill TikTok. The lighting in a Veo video tends to be too bright, the camera movements a bit too smooth, and the edges of people and objects a little too polished. After you watch enough "genuine" Veo creations, you can start to pick out the patterns.
    Regardless, TikTokers trying to pass off real videos as fakes—even as a joke or engagement hack—is a recognition that video sites are now deep in the "deep doubt" era, where you have to be extra skeptical of even legitimate-looking video footage. And the mere existence of convincing AI fakes makes it easier than ever to claim real events captured on video didn't really happen, a problem that political scientists call the liar's dividend. We saw this when then-candidate Trump accused Democratic nominee Kamala Harris of "A.I.'d" crowds in real photos of her Detroit airport rally.
    For now, TikTokers of all stripes are having fun playing with that idea to gain social media attention. In the long term, though, the implications for discerning truth from reality are more troubling.

    Kyle Orland
    Senior Gaming Editor


    Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.

  • You Can Now Make AI Videos with Audio Thanks to Veo3: A New Era of Scams Await


    Published: May 27, 2025

    Key Takeaways

    With Google’s Veo3, you can now render AI videos complete with audio and background sounds.
    This would also make it easy for scammers to design deepfake scams to defraud innocent citizens.
    Users need to exercise self-vigilance to protect themselves. Developers’ responsibilities and government regulations will also play a key part.

    Google recently launched Veo3, an AI tool that lets you create videos with audio, including background tracks and various sound effects. Until recently, you could either use voice cloning apps to build AI voices or video rendering apps to generate AI videos. However, thanks to Veo3, folks can now create entire videos with audio.
    While this is an exciting development, we can’t help but think how easy it would be for scammers and swindlers to use Veo3’s videos to scam people.
    A video posted by a user on Threads shows a TV anchor breaking the news that ‘Secretary of Defence Pete Hegseth has died after drinking an entire litre of vodka on a dare by RFK.’ At first glance, the video is extremely convincing, and chances are that quite a few people might have believed it. After all, the quality is that of a professional news studio with a background of the Pentagon.
    Another user named Ari Kuschnir posted a 1-minute 16-second video on Reddit showing various characters in different settings talking to each other in various accents. The facial expressions are very close to those of a real human.
    A user commented, ‘Wow. The things that are coming. Gonna be wild!’ The ‘wild’ part is that the gap between reality and AI-generated content is closing daily. And remember, this is only the first version of this brand-new technology – things will only get worse from here.
    New AI Age for Scammers
    With the development of generative AI, we have already seen countless examples of people losing millions to such scams. 
    For example, in January 2024, an employee of a Hong Kong firm sent millions of dollars to fraudsters who convinced the employee that she was talking to the CFO of the firm on a video call. Deloitte’s Center for Financial Services has predicted that generative AI could lead to billions of dollars in losses in the US alone by 2027, growing at a CAGR of 32%.
    Until now, scammers also had to go to the effort of generating audio and video separately and syncing them to compile a ‘believable’ video. However, advanced AI tools like Veo3 make it easier for bad actors to catch innocent people off guard.

    In what is called the internet’s biggest scam so far, an 82-year-old retiree, Steve Beauchamp, lost his retirement savings after he invested them in an investment scheme. The AI-generated video showed Elon Musk talking about this investment and how everyone looking to make money should invest in the scheme.
    In January 2024, sexually explicit images of Taylor Swift were spread on social media, drawing a lot of legislative attention to the matter. Now, imagine what these scammers can do with Veo3-like technology. Making deepfake porn would become easier and faster, leading to a lot of extortion cases.
    It’s worth noting, though, that we’re not saying that Veo3 specifically will be used for such criminal activities, because it has several safeguards in place. However, now that Veo3 has shown the path, other similar products might be developed for malicious use cases.
    How to Protect Yourself
    Protection against AI-generated content is a multifaceted approach involving three key pillars: self-vigilance, developers’ responsibilities, and government regulations.
    Self-Vigilance
    Well, it’s not entirely impossible to figure out which video is made via AI and which is genuine. Sure, AI has grown leaps and bounds in the last two years, and we have something as advanced as Veo3. However, there are still a few telltale signs of an AI-generated video. 

    The biggest giveaway is the lip sync. If you see a video of someone speaking, pay close attention to their lips. The audio in most cases will be out of sync by a few milliseconds.
    The voice, in most cases, will also sound robotic or flat. The tone and pitch might be inconsistent without any natural breathing sounds.

    We also recommend that you only trust official sources of information and not any random video you find while scrolling Instagram, YouTube, or TikTok. For example, if you see Elon Musk promoting an investment scheme, look for the official page or website of that scheme and dig deeper to find out who the actual promoters are. 
    If the scheme is a scam, you will not find anything reliable or trustworthy in the process. This exercise takes only a couple of minutes but can end up saving thousands of dollars.
    Developer’s Responsibilities
    AI developers are also responsible for ensuring their products cannot be misused for scams, extortion, and misinformation. For example, Veo3 blocks prompts that violate responsible AI guidelines, such as those involving politicians or violent acts. 
    Google has also developed its SynthID watermarking system, which watermarks content generated using Google’s AI tools. People can use the SynthID Detector to verify if a particular content was generated using AI.

    However, these safeguards are limited to Google’s own products for now. There’s a need for similar, if not better, prevention systems moving forward.
    Government Regulations
    Lastly, the government needs to play a crucial role in regulating the use of artificial intelligence. For example, the EU has already passed the AI Act, with enforcement beginning in 2025. Under this, companies must undergo stringent documentation, transparency, and oversight standards for all high-risk AI systems. 
    In the US, too, several bills have been proposed. For instance, the DEEPFAKES Accountability Act would require AI-generated content depicting any person to carry a clear disclaimer stating that it is a deepfake. The bill was introduced in the House of Representatives in September 2023 and is still under consideration. 
    Similarly, the REAL Political Advertisements Act would require political ads that contain AI content to include a similar disclaimer.
    That said, we are still in the early stages of legislating AI-generated content. As ever more capable artificial intelligence tools emerge, lawmakers will need to stay proactive about ensuring digital safety.

    Krishi is a seasoned tech journalist with over four years of experience writing about PC hardware, consumer technology, and artificial intelligence. Clarity and accessibility are at the core of Krishi’s writing style.
    He believes technology writing should empower readers, not confuse them, and he’s committed to ensuring his content is always easy to understand without sacrificing accuracy or depth.
    Over the years, Krishi has contributed to some of the most reputable names in the industry, including Techopedia, TechRadar, and Tom’s Guide. A man of many talents, Krishi has also proven his mettle as a crypto writer, tackling complex topics with both ease and zeal. His work spans various formats, from in-depth explainers and news coverage to feature pieces and buying guides. 
    Behind the scenes, Krishi operates from a dual-monitor setup (including a 29-inch LG UltraWide) that’s always buzzing with news feeds, technical documentation, and research notes, along with the occasional gaming session that keeps him fresh. 
    Krishi thrives on staying current, always ready to dive into the latest announcements, industry shifts, and their far-reaching impacts. When he's not deep into research on the latest PC hardware news, Krishi would love to chat with you about day trading and the financial markets—oh! And cricket, as well.

  • Google's New AI Video Tool Floods Internet With Real-Looking Clips

    Google's new AI video tool, Veo 3, is being used to create hyperrealistic videos that are now flooding the internet, terrifying viewers "with a sense that real and fake have become hopelessly blurred," reports Axios. From the report: Unlike OpenAI's video generator Sora, released more widely last December, Google DeepMind's Veo 3 can include dialogue, soundtracks and sound effects. The model excels at following complex prompts and translating detailed descriptions into realistic videos. The AI engine abides by real-world physics, offers accurate lip-syncing, rarely breaks continuity and generates people with lifelike human features, including five fingers per hand.
    According to examples shared by Google and by users online, the telltale signs of synthetic content are mostly absent.

    In one viral example posted on X, filmmaker and molecular biologist Hashem Al-Ghaili shows a series of short films of AI-generated actors railing against their AI creators and prompts. Special effects technology, video-editing apps and camera tech advances have been changing Hollywood for many decades, but artificially generated films pose a novel challenge to human creators. In a promo video for Flow, Google's new video tool that includes Veo 3, filmmakers say the AI engine gives them a new sense of freedom with a hint of eerie autonomy. "It feels like it's almost building upon itself," filmmaker Dave Clark says.

    Read more of this story at Slashdot.
  • Have we finally solved mystery of magnetic moon rocks?

    i ate a rock from the moon

    Simulations show how effects of asteroid impact could amplify the early Moon's weak magnetic field.

    Jennifer Ouellette – May 23, 2025 2:36 pm

    NASA Lunar sample 60015 on display at Space Center Houston Lunar Samples Vault, at NASA's Johnson Space Center. Credit: OptoMechEngineer/CC BY-SA 4.0

    NASA's Apollo missions brought back moon rock samples for scientists to study. We've learned a great deal over the ensuing decades, but one enduring mystery remains. Many of those lunar samples show signs of exposure to strong magnetic fields comparable to Earth's, yet the Moon doesn't have such a field today. So, how did the moon rocks get their magnetism?
    There have been many attempts to explain this anomaly. The latest comes from MIT scientists, who argue in a new paper published in the journal Science Advances that a large asteroid impact briefly boosted the Moon's early weak magnetic field—and that this spike is what is recorded in some lunar samples.
    Evidence gleaned from orbiting spacecraft observations, as well as results announced earlier this year from China's Chang'e 5 and Chang'e 6 missions, is largely consistent with the existence of at least a weak magnetic field on the early Moon. But where did this field come from? Planetary magnetic fields usually arise from a dynamo, in which molten metals in the core convect as heat slowly dissipates. The problem is that the mantle surrounding the early Moon's small core wasn't much cooler than the core itself, so there would not have been enough convection to produce a sufficiently strong dynamo.
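
    As a rough, textbook-level way to see why a small core-mantle temperature difference matters (general fluid dynamics, not a calculation from the new paper), thermal convection only kicks in once the Rayleigh number exceeds a critical value:

    $$\mathrm{Ra} = \frac{\alpha\, g\, \Delta T\, d^{3}}{\nu\, \kappa} \gtrsim \mathrm{Ra}_{c} \sim 10^{3}$$

    where $\alpha$ is the thermal expansivity, $g$ gravity, $\Delta T$ the temperature difference across a convecting layer of thickness $d$, $\nu$ the kinematic viscosity, and $\kappa$ the thermal diffusivity. With $\Delta T$ in the numerator, a core barely hotter than its surroundings keeps Ra low and the dynamo feeble.
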
    Several hypotheses have been proposed for how the Moon could have developed a core dynamo. For instance, a 2022 analysis suggested that in the first billion years, when the Moon was covered in molten rock, giant rocks formed as the magma cooled and solidified. Denser minerals sank toward the core while lighter ones formed a crust.
    Over time, the authors argued, a titanium-rich layer crystallized just beneath the surface, and because it was denser than the lighter minerals below it, that layer eventually broke into small blobs and sank through the mantle (a gravitational overturn). The temperature difference between the cooler sinking rocks and the hotter core generated convection, creating intermittently strong magnetic fields, which would explain why some rocks carry that magnetic signature and others don't.
    Or perhaps there is no need for a dynamo-driven magnetic field at all. For instance, the authors of a 2021 study thought earlier analyses of lunar samples may have been altered by the heating used in those analyses. They re-examined samples from the 1972 Apollo 16 mission using CO2 lasers to heat them, thus avoiding any alteration of the magnetic carriers. They concluded that any magnetic signatures in those samples could be explained by the impact of meteorites or comets hitting the Moon.

    Bracing for impact
    In 2020, two of the current paper's authors, MIT's Benjamin Weiss and Rona Oran, ran simulations to test whether a giant impact could generate a plasma that, in turn, would amplify the Moon's existing weak solar-generated magnetic field sufficiently to account for the levels of magnetism measured in the moon rocks. Those results seemed to rule out the possibility. This time around, they have come up with a new hypothesis that essentially combines elements of the dynamo and the plasma-generating impact hypotheses—taking into account an impact's resulting shockwave for good measure.

    Amplification of the lunar dynamo field by an Imbrium-sized impact at the magnetic equator. Credit: Isaac S. Narrett et al., 2025

    They tested their hypothesis by running impact simulations, focusing on the scale of impact that created the Moon's Imbrium basin, as well as plasma cloud simulations. Their starting assumption was that the early Moon had a dynamo generating a magnetic field roughly 50 times weaker than Earth's. The results confirmed that a large asteroid impact could have kicked up a plasma cloud, part of which spread outward into space. The remaining plasma streamed around to the far side of the Moon, amplifying the existing weak magnetic field for around 40 minutes.
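
    For a sense of scale, and assuming Earth's typical surface field of roughly 50 microtesla (a standard reference value, not a number given in the article), that starting assumption corresponds to:

    $$B_{\text{Moon}} \approx \frac{B_{\oplus}}{50} \approx \frac{50\ \mu\mathrm{T}}{50} \approx 1\ \mu\mathrm{T}$$

    In other words, the impact would only have needed to briefly amplify a field on the order of a microtesla.
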
    A key factor is the shock wave created by the initial impact, similar to seismic waves, which would have rattled surrounding rocks enough to reorient their subatomic spins in line with the newly amplified magnetic field. Weiss has likened the effect to tossing a deck of 52 playing cards into the air within a magnetic field. If each card had its own compass needle, its magnetism would be in a new orientation once each card hit the ground.
    It's a complicated scenario that admittedly calls for a degree of serendipity. But we might not have to wait too long for confirmation one way or the other. The answer could lie in analyzing fresh lunar samples and looking for telltale signatures not just of high magnetism but also of shock. (Early lunar samples were often discarded if they showed signs of shock.) Scientists are looking to NASA's planned Artemis crewed missions for this, since sample returns are among the objectives. Much will depend on NASA's future funding, which is currently facing substantial cuts, although thus far, Artemis II and III remain on track.
    Science Advances, 2025. DOI: 10.1126/sciadv.adr7401

    Jennifer Ouellette
    Senior Writer

    Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.
