• Real TikTokers are pretending to be Veo 3 AI creations for fun, attention

    The Turing test in reverse


    From music videos to "Are you a prompt?" stunts, "real" videos are presenting as AI

    Kyle Orland



    May 31, 2025 7:08 am


    Of course I'm an AI creation! Why would you even doubt it?

    Credit: Getty Images



    Since Google released its Veo 3 AI model last week, social media users have been having fun with its ability to quickly generate highly realistic eight-second clips complete with sound and lip-synced dialogue. TikTok's algorithm has been serving me plenty of Veo-generated videos featuring impossible challenges, fake news reports, and even surreal short narrative films, to name just a few popular archetypes.
    However, among all the AI-generated video experiments spreading around, I've also noticed a surprising counter-trend on my TikTok feed. Amid all the videos of Veo-generated avatars pretending to be real people, there are now also a bunch of videos of real people pretending to be Veo-generated avatars.
    “This has to be real. There’s no way it's AI.”
    I stumbled on this trend when the TikTok algorithm fed me this video topped with the extra-large caption "Google VEO 3 THIS IS 100% AI." As I watched and listened to the purported AI-generated band that appeared to be playing in the crowded corner of someone's living room, I read the caption containing the supposed prompt that had generated the clip: "a band of brothers with beards playing rock music in 6/8 with an accordion."

    @kongosmusic We are so cooked. This took 3 mins to generate. Simple prompt: “a band of brothers playing rock music in 6/8 with an accordion” ♬ original sound - KONGOS

    After a few seconds of taking those captions at face value, something started to feel a little off. After a few more seconds, I finally noticed the video was posted by Kongos, an indie band that you might recognize from their minor 2012 hit "Come With Me Now." And after a little digging, I discovered the band in the video was actually just Kongos, and the tune was a 9-year-old song that the band had dressed up as an AI creation to get attention.
    Here's the sad thing: It worked! Without the "Look what Veo 3 did!" hook, I might have quickly scrolled by this video before I took the time to listen to the (pretty good!) song. The novel AI angle made me stop just long enough to pay attention to a Kongos song for the first time in over a decade.

    Kongos isn't the only musical act trying to grab attention by claiming their real performances are AI creations. Darden Bela posted that Veo 3 had "created a realistic AI music video" over a clip from what is actually a 2-year-old music video with some unremarkable special effects. Rapper GameBoi Pat dressed up an 11-month-old song with a new TikTok clip captioned "Google's Veo 3 created a realistic sounding rapper... This has to be real. There's no way it's AI" (that last part is true, at least). I could go on, but you get the idea.

    @gameboi_pat This has got to be real. There’s no way it’s AI #google #veo3 #googleveo3 #AI #prompts #areweprompts? ♬ original sound - GameBoi_pat

    I know it's tough to get noticed on TikTok, and that creators will go to great lengths to gain attention from the fickle algorithm. Still, there's something more than a little off-putting about flesh-and-blood musicians pretending to be AI creations just to make social media users pause their scrolling for a few extra seconds before they catch on to the joke (or don't, based on some of the comments).
    The whole thing evokes last year's stunt where a couple of podcast hosts released a posthumous "AI-generated" George Carlin routine before admitting that it had been written by a human after legal threats started flying. As an attention-grabbing stunt, the conceit still works. You want AI-generated content? I can pretend to be that!

    Are we just prompts?
    Some of the most existentially troubling Veo-generated videos floating around TikTok these days center around a gag known as "the prompt theory." These clips focus on various AI-generated people reacting to the idea that they are "just prompts" with various levels of skepticism, fear, or even conspiratorial paranoia.
    On the other side of that gag, some humans are making joke videos playing off the idea that they're merely prompts. RedondoKid used the conceit in a basketball trick shot video, saying "of course I'm going to make this. This is AI, you put that I'm going to make this in the prompt." User thisisamurica thanked his faux prompters for putting him in "a world with such delicious food" before theatrically choking on a forkful of meat. And comedian Drake Cummings developed TikTok skits pretending that it was actually AI video prompts forcing him to indulge in vices like shots of alcohol or online gambling ("Goolgle’s [sic] New A.I. Veo 3 is at it again!! When will the prompts end?!" Cummings jokes in the caption).

    @justdrakenaround Goolgle’s New A.I. Veo 3 is at it again!! When will the prompts end?! #veo3 #google #ai #aivideo #skit ♬ original sound - Drake Cummings

    Beyond the obvious jokes, though, I've also seen a growing trend of TikTok creators approaching friends or strangers and asking them to react to the idea that "we're all just prompts." The reactions run the gamut from "get the fuck away from me" to "I blame that [prompter], I now have to pay taxes" to solipsistic philosophical musings from convenience store employees.
    I'm loath to call this a full-blown TikTok trend based on a few stray examples. Still, these attempts to exploit the confusion between real and AI-generated video are interesting to see. As one commenter on an "Are you a prompt?" ambush video put it: "New trend: Do normal videos and write 'Google Veo 3' on top of the video."
    Which one is real?
    The best Veo-related TikTok engagement hack I've stumbled on so far, though, might be the videos that show multiple short clips and ask the viewer to decide which are real and which are fake. One video I stumbled on shows an increasing number of "Veo 3 Goth Girls" across four clips, challenging in the caption that "one of these videos is real... can you guess which one?" In another example, two similar sets of kids are shown hanging out in cars while the caption asks, "Are you able to identify which scene is real and which one is from veo3?"

    @spongibobbu2 One of these videos is real… can you guess which one? #veo3 ♬ original sound - Jett

    After watching both of these videos on loop a few times, I'm relatively (but not entirely) convinced that every single clip in them is a Veo creation. The fact that I watched these videos multiple times shows how effective the "Real or Veo" challenge framing is at grabbing my attention. And the fact that I'm still not 100 percent confident in my assessments is a testament to just how good Google's new model is at creating convincing videos.

    There are still some telltale signs for distinguishing a real video from a Veo creation, though. For one, Veo clips are still limited to just eight seconds, so any video that runs longer (without an apparent change in camera angle) is almost certainly not generated by Google's AI. Looking back at a creator's other videos can also provide some clues—if the same person was appearing in "normal" videos two weeks ago, it's unlikely they would suddenly be appearing in Veo creations.
    There's also a subtle but distinctive style to most Veo creations that can distinguish them from the kind of candid handheld smartphone videos that usually fill TikTok. The lighting in a Veo video tends to be too bright, the camera movements a bit too smooth, and the edges of people and objects a little too polished. After you watch enough "genuine" Veo creations, you can start to pick out the patterns.
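    To make those rules of thumb concrete, here is a toy sketch in Python. The function and its inputs are hypothetical illustrations of the heuristics above, not any real TikTok or Veo API; the stylistic cues (lighting, motion smoothness) can't be captured from metadata alone and would still need human or model review.

```python
# Toy heuristic, not a real detector: it only encodes the metadata-level
# rules of thumb from the article. All field names here are invented.

def plausibly_veo(duration_s: float, cuts: int,
                  creator_has_older_normal_videos: bool) -> bool:
    """Return True if a clip can't be ruled out as Veo-generated."""
    # Veo clips are capped at eight seconds, so a longer single shot
    # (no camera-angle changes) is almost certainly not Veo output.
    if duration_s > 8 and cuts == 0:
        return False
    # A creator with a back catalog of ordinary videos is unlikely to
    # have suddenly become an AI avatar.
    if creator_has_older_normal_videos:
        return False
    # Otherwise metadata alone can't settle it; style cues would need
    # closer inspection.
    return True

print(plausibly_veo(12.0, 0, False))  # long single shot: ruled out
print(plausibly_veo(7.5, 0, False))   # short clip, no history: can't rule out
```

The point of the sketch is how weak these signals are individually: only the duration cap gives a hard exclusion, and only in one direction.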
    Regardless, TikTokers trying to pass off real videos as fakes—even as a joke or engagement hack—is a recognition that video sites are now deep in the "deep doubt" era, where you have to be extra skeptical of even legitimate-looking video footage. And the mere existence of convincing AI fakes makes it easier than ever to claim real events captured on video didn't really happen, a problem that political scientists call the liar's dividend. We saw this when then-candidate Trump accused Democratic nominee Kamala Harris of "A.I.'d" crowds in real photos of her Detroit airport rally.
    For now, TikTokers of all stripes are having fun playing with that idea to gain social media attention. In the long term, though, the implications for discerning truth from reality are more troubling.

    Kyle Orland
    Senior Gaming Editor


    Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

  • Apple Settles Claim for Siri Eavesdropping

    May 20, 2025 | 3 min read

    Is Your Tech Listening? Apple Settles Claim for Siri Eavesdropping

    Apple is paying millions over claims that Siri secretly recorded private chats and fed targeted ads

    By Deni Ellis Béchard, edited by Dean Visser

    Credit: Artur Widak/NurPhoto via Getty Images

    Sex, drug deals and doctor visits: according to allegations, Apple’s Siri eavesdropped on these and much more—on people’s iPhones, HomePods and Apple Watches—and used the content to target advertisements on users’ devices. Despite having denied selling our pillow talk to marketers, Apple just cut a multimillion-dollar check to settle a lawsuit in which plaintiffs reported eerie coincidences: discussing Air Jordan sneakers and immediately seeing ads for them; mentioning Olive Garden only to be served pasta commercials; talking privately with a doctor about a surgical procedure before seeing a promo for that very treatment. In early May the settlement administrator opened a claims website, allowing U.S. owners of every Siri-enabled gadget bought between September 2014 and December 2024 to request a payout of up to 20 bucks per affected device—enough for a drink and a wary glance at your phone.

    The lawsuit, Lopez v. Apple, dates back to July 2019, when the Guardian published the allegations of an anonymous whistleblower—an Apple subcontractor whose job was to listen to Siri recordings to determine if the voice-activated assistant was being correctly triggered. The whistleblower claimed that accidental Siri activations routinely captured sensitive audio. Despite Apple’s promises that Siri listens only when invited, background noises could switch it on. The contractor said user location and contact information accompanied recordings.

    Apple had never explicitly told users that humans might review their Siri requests, and within a week of the Guardian report, the company halted the program. The first Lopez v. Apple complaint was filed in August 2019, and two weeks later Apple issued a public apology in which it promised to make human review opt-in-only and to stop retaining audio by default. That apology was framed to allay customer concerns—not as an admission of wrongdoing. Apple denied all allegations in the lawsuit, which is common in class-action settlements in U.S. courts.

    If the situation sounds familiar, your memory works. In 2018 Amazon’s Alexa recorded a married couple’s conversation about hardwood floors and sent it to one of the husband’s employees. Amazon blamed an unlikely chain of misheard cues—basically, it came down to Alexa butt-dialing someone with living room chatter. The following year Bloomberg reported that Amazon had thousands of workers transcribing clips to fine-tune the assistant. Later Google faced similar allegations. The pattern was clear: robots needed to be trained to make sure that they were hearing voice commands correctly, and this training needed to come from humans who, in the process, inevitably heard things they shouldn’t via consumer gadgets. Even TVs were implicated: in 2015 Samsung warned owners not to discuss secrets near its smart sets because voice commands were sent to unnamed third parties, a disclaimer that could have been written by George Orwell.

    This isn’t tin-foil-hat territory. A 2019 survey found that 55 percent of Americans believe their phones listen to them to collect data for targeted ads, and a 2023 poll pushed the number north of 60 percent. In the U.K., a 2021 poll found two thirds of adults had noticed an ad that they felt was tied to a recent real-life chat. But psychologists say this perception of “conversation-related ad creep” often relies on a feedback loop driven by confirmation bias: we ignore the thousands of ads that form a constant backdrop to our lives but build a campfire legend from the one time we mentioned “fire” and an app tried to sell us tiki torches. The result is a low-grade cultural fear, with people placing masking tape on device mics and TikTokers begging Siri to stop stalking them. Knowing how ravenous tech companies are for data, people can hardly be blamed for this attitude.

    As for Apple, which once put “What happens on your iPhone, stays on your iPhone” on a Las Vegas billboard, the settlement doesn’t force it to admit fault—but it lands a dent in the company's titanium halo: if the Cupertino, Calif.–based company can’t keep a lid on hot-mic moments, who can?
    #apple #settles #claim #siri #eavesdropping
    Apple Settles Claim for Siri Eavesdropping
    May 20, 20253 min readIs Your Tech Listening? Apple Settles Claim for Siri EavesdroppingApple is paying million over claims that Siri secretly recorded private chats and fed targeted adsBy Deni Ellis Béchard edited by Dean Visser Artur Widak/NurPhoto via Getty ImagesSex, drug deals and doctor visits: according to allegations, Apple’s Siri eavesdropped on these and much more—on people’s iPhones, HomePods and Apple Watches—and used the content to target advertisements on users’ devices. Despite having denied selling our pillow talk to marketers, Apple just cut a -million check to settle a lawsuit in which plaintiffs reported eerie coincidences: discussing Air Jordan sneakers and immediately seeing ads for them; mentioning Olive Garden only to be served pasta commercials; talking privately with a doctor about a surgical procedure before seeing a promo for that very treatment. In early May the settlement administrator opened a claims website, allowing U.S. owners of every Siri-enabled gadget bought between September 2014 and December 2024to request a payout of up to 20 bucks per affected device—enough for a drink and a wary glance at your phone.The lawsuit, Lopez v. Apple, dates back to July 2019, when the Guardian published the allegations of an anonymous whistleblower—an Apple subcontractor whose job was to listen to Siri recordings to determine if the voice-activated assistant was being correctly triggered. The whistleblower claimed that accidental Siri activations routinely captured sensitive audio. Despite Apple’s promises that Siri listens only when invited, background noisescould switch it on. The contractor said user location and contact information accompanied recordings.Apple had never explicitly told users that humans might review their Siri requests, and within a week of the Guardian report, the company halted the program. The first Lopez v. 
Apple complaint was filed in August 2019, and two weeks later Apple issued a public apology in which it promised to make human review opt-in-only and to stop retaining audio by default. That apology was framed to allay customer concerns—not as an admission of wrongdoing. Apple denied all allegations in the lawsuit, which is common in class-action settlements in U.S. courts.On supporting science journalismIf you're enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.If the situation sounds familiar, your memory works. In 2018 Amazon’s Alexa recorded a married couple’s conversation about hardwood floors and sent it to one of the husband’s employees. Amazon blamed an unlikely chain of misheard cues—basically, it came down to Alexa butt-dialing someone with living room chatter. The following year Bloomberg reported that Amazon had thousands of workers transcribing clips to fine-tune the assistant. Later Google faced similar allegations. The pattern was clear: robots needed to be trained to make sure that they were hearing voice commands correctly, and this training needed to come from humans who, in the process, inevitably heard things they shouldn’t via consumer gadgets. Even TVs were implicated: in 2015 Samsung warned owners not to discuss secrets near its smart sets because voice commands were sent to unnamed third parties, a disclaimer that could have been written by George Orwell.This isn’t tin-foil-hat territory. A 2019 survey found that 55 percent of Americans believe their phones listen to them to collect data for targeted ads, and a 2023 poll pushed the number north of 60 percent. In the U.K., a 2021 poll found two thirds of adults had noticed an ad that they felt was tied to a recent real-life chat. 
But psychologists say this perception of “conversation-related ad creep” often relies on a feedback loop driven by confirmation bias: we ignore the thousands of ads that form a constant backdrop to our lives but build a campfire legend from the one time we mentioned “fire,” and an app tried to sell us tiki torches. The result is a low-grade cultural fear, with people placing masking tape on device mics and TikTokers begging Siri to stop stalking them. Knowing how ravenous tech companies are for data, people can hardly be blamed for this attitude.As for Apple, which once put “What happens on your iPhone, stays on your iPhone” on a Las Vegas billboard, the settlement doesn’t force it to admit fault—but lands a dent in its titanium halo: If the Cupertino, Calif.–based company can’t keep a lid on hot-mic moments, who can? #apple #settles #claim #siri #eavesdropping
    WWW.SCIENTIFICAMERICAN.COM
    Apple Settles Claim for Siri Eavesdropping
    May 20, 2025 | 3 min read

    Is Your Tech Listening? Apple Settles Claim for Siri Eavesdropping

    Apple is paying $95 million over claims that Siri secretly recorded private chats and fed targeted ads

    By Deni Ellis Béchard, edited by Dean Visser

    Credit: Artur Widak/NurPhoto via Getty Images

Sex, drug deals and doctor visits: according to allegations, Apple’s Siri eavesdropped on these and much more—on people’s iPhones, HomePods and Apple Watches—and used the content to target advertisements on users’ devices. Despite having denied selling our pillow talk to marketers, Apple just cut a $95-million check to settle a lawsuit in which plaintiffs reported eerie coincidences: discussing Air Jordan sneakers and immediately seeing ads for them; mentioning Olive Garden only to be served pasta commercials; talking privately with a doctor about a surgical procedure before seeing a promo for that very treatment. In early May the settlement administrator opened a claims website, allowing U.S. owners of every Siri-enabled gadget bought between September 2014 and December 2024 (essentially the lifespan of “Hey, Siri”) to request a payout of up to 20 bucks per affected device—enough for a drink and a wary glance at your phone.

The lawsuit, Lopez v. Apple, dates back to July 2019, when the Guardian published the allegations of an anonymous whistleblower—an Apple subcontractor whose job was to listen to Siri recordings to determine if the voice-activated assistant was being correctly triggered. The whistleblower claimed that accidental Siri activations routinely captured sensitive audio. Despite Apple’s promises that Siri listens only when invited, background noises (often just the sound of a zipper, according to the whistleblower) could switch it on. The contractor said user location and contact information accompanied recordings.

Apple had never explicitly told users that humans might review their Siri requests, and within a week of the Guardian report, the company halted the program. The first Lopez v. Apple complaint was filed in August 2019, and two weeks later Apple issued a public apology in which it promised to make human review opt-in-only and to stop retaining audio by default. That apology was framed to allay customer concerns—not as an admission of wrongdoing. Apple denied all allegations in the lawsuit, which is common in class-action settlements in U.S. courts.

If the situation sounds familiar, your memory works. In 2018 Amazon’s Alexa recorded a married couple’s conversation about hardwood floors and sent it to one of the husband’s employees. Amazon blamed an unlikely chain of misheard cues—basically, it came down to Alexa butt-dialing someone with living room chatter. The following year Bloomberg reported that Amazon had thousands of workers transcribing clips to fine-tune the assistant. Later Google faced similar allegations. The pattern was clear: robots needed to be trained to make sure that they were hearing voice commands correctly, and this training needed to come from humans who, in the process, inevitably heard things they shouldn’t via consumer gadgets. Even TVs were implicated: in 2015 Samsung warned owners not to discuss secrets near its smart sets because voice commands were sent to unnamed third parties, a disclaimer that could have been written by George Orwell.

This isn’t tin-foil-hat territory. A 2019 survey found that 55 percent of Americans believe their phones listen to them to collect data for targeted ads, and a 2023 poll pushed the number north of 60 percent. In the U.K., a 2021 poll found two thirds of adults had noticed an ad that they felt was tied to a recent real-life chat. But psychologists say this perception of “conversation-related ad creep” often relies on a feedback loop driven by confirmation bias: we ignore the thousands of ads that form a constant backdrop to our lives but build a campfire legend from the one time we mentioned “fire” and an app tried to sell us tiki torches. The result is a low-grade cultural fear, with people placing masking tape on device mics and TikTokers begging Siri to stop stalking them. Knowing how ravenous tech companies are for data, people can hardly be blamed for this attitude.

As for Apple, which once put “What happens on your iPhone, stays on your iPhone” on a Las Vegas billboard, the settlement doesn’t force it to admit fault—but lands a dent in its titanium halo: If the Cupertino, Calif.–based company can’t keep a lid on hot-mic moments, who can?

(Asked for comment by Scientific American, Apple shared information on the settlement and emphasized its commitment to privacy. And Amazon reiterated its commitment to privacy, writing, “Access to internal services is highly controlled, and is only granted to a limited number of employees who require these services to train and improve the service.” Samsung and Google had not responded to requests for comment by the time of publication.)
  • You're probably not going to speak to a glitching AI bot on your next job interview

    AI-powered video interviews are likely to become more common as companies seek to streamline and automate early hiring stages.

    amperespy/Getty Images

    May 17, 2025


    TikTokers are posting clips of interviews with glitching AI bots.
    These cases are rare and likely staged, professors told Business Insider.
    AI interviews are on the rise, and glitches can erode trust in the hiring process.

    TikTok videos of glitchy AI interviews have gone viral in recent weeks. One user, who goes by Freddie, posted a video on May 3 of an AI assistant named "Catherine Appleton" glitching and spewing gibberish during his job interview. As of Thursday, his video had 8.8 million views. "Should I email them? I was expecting a real human," he wrote in the caption.

    Another TikTok user named Ken shared a clip of her interview, in which the AI assistant repeated the phrase "vertical bar pilates" on loop. Neither responded to requests for comment from Business Insider.

    @its_ken04 It was genuinely so creepy and weird. Please stop trying to be lazy and have AI try to do YOUR JOB!!! It gave me the creeps so bad #fyp ♬ original sound - Its Ken
    Your next job interview probably won't involve a glitching AI bot

    Yes, the viral TikToks are creepy. But they're probably not your future.

    "The TikTok videos showcasing glitches or malfunctions are likely either doctored or represent rare, isolated incidents," said Sriram Iyer, an adjunct senior lecturer at the National University of Singapore Business School. They "should not be considered a common phenomenon," he added.

    Tan Hong Ming, the deputy head and senior lecturer in the department of analytics and operations at NUS Business School, said social media "tends to amplify things." "It can make something appear far more common than it actually is through repetition and viral sharing," he said.

    Tan, who also serves as lead advisor to a Singapore-based AI recruitment firm, said the looping audio is "likely dramatized or re-enacted to drive engagement and shares." He said he has not come across this specific glitch in AI interviews, but occasional breakdowns aren't surprising. Many companies are using AI-powered recruitment tools, which are often "wrappers around the same core models or APIs." Some of them may not use the latest or most stable versions, which could explain why similar glitches show up across platforms, he said.

    Unaizah Obaidellah, a senior lecturer specializing in AI at Malaysia's University of Malaya, said insufficient or irrelevant data could also be a culprit. If the bots are not trained with enough relevant examples, their quality suffers. She added that the incidents portrayed in the videos could reflect the larger race to deploy AI faster than we're ready for, which is "quite worrying."

    AI interviews on the rise

    Emily DeJeu, an assistant professor at Carnegie Mellon University's Tepper School of Business who specializes in AI communication and etiquette, told BI earlier this week that AI-powered video interviews are likely to become more common as companies seek to streamline and automate early hiring stages. Any time technology promises to save time and money and make everything faster, "we by default pursue it — there's a kind of inevitability to it," she said.

    Despite what the TikToks might suggest, candidates aren't necessarily turned off by bots, said Iyer, who has worked in HR tech for 20 years.

    What to do if your interview bot glitches

    Glitches during AI interviews aren't just awkward. "Glitches chip away at trust and can make the hiring process feel impersonal or even unfair," said Tan, especially if companies are not upfront about conducting an AI interview. "They undermine the candidate's experience," he said, adding that employers need to "build in strong fallback options" and monitor these tools closely in real-world settings. "Otherwise, what feels like a time-saving solution could quietly become a systemic problem," he added.

    For candidates, the key is not to panic. If an AI bot malfunctions mid-interview, Tan recommends emailing the hiring manager with a screenshot or recording of what happened. "Most should offer a redo, assuming the candidate isn't already put off by the idea of being interviewed by a bot in the first place," he said.

    Unaizah, from the University of Malaya, said candidates can also request feedback from the HR team on their interview performance. If there's clear evidence the interview wasn't properly assessed — or wasn't reviewed by a human — ask for an in-person interview, if possible, she said.

    "If all fails or your gut feeling says otherwise, perhaps it's best to look for other companies," said Unaizah. "Target companies that prioritize human-centered hiring."

    WWW.BUSINESSINSIDER.COM