• Why The Pixel 10 Will Be Google's Latest Game Changer

    Of all the Pixel models Google is preparing for 2025, the Pixel 10 could be the most disruptive yet.
    WWW.FORBES.COM
  • Google's AI Mode Is 'the Definition of Theft,' Publishers Say

    Google's new AI Mode for Search, which is rolling out to everyone in the U.S., has sparked outrage among publishers, who call it "the definition of theft" for using content without fair compensation and without offering a true opt-out option. Internal documents revealed by Bloomberg earlier this week suggest that Google considered giving publishers more control over how their content is used in AI-generated results but ultimately decided against it, prioritizing product functionality over publisher protections.

    The News/Media Alliance slammed Google for "further depriving publishers of original content both traffic and revenue." Its full statement reads: "Links were the last redeeming quality of search that gave publishers traffic and revenue. Now Google just takes content by force and uses it with no return, the definition of theft. The DOJ remedies must address this to prevent continued domination of the internet by one company."

    9to5Google's take: It's not hard to see why Google went the route that it did here. Giving publishers the ability to opt out of AI products while still benefiting from Search would ultimately make Google's flashy new tools useless if enough sites made the switch. It was very much a move in the interest of building a better product.

    Does that change anything regarding how Google's AI products in Search cause potential harm to the publishing industry? Nope.

    Google's tools continue to serve the company and its users (mostly) well, but as they continue to bleed publishers dry, those publishers are on the verge of vanishing or, arguably worse, turning to cheap and poorly produced content just to get enough views to survive. This is a problem Google needs to address, as it's making the internet as a whole worse for everyone.
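    For context, the closest thing to an opt-out Google offers today is the "Google-Extended" robots.txt token, which, as Google documents it, keeps a site's content out of Gemini model training but does not remove the site from Search or from Search-powered features like AI Overviews and AI Mode. That gap is exactly what publishers are objecting to. Below is a minimal sketch of what honoring that token looks like, using Python's standard-library robots.txt parser; the example.com URL and the policy file are illustrative, not taken from the article:

        # Hypothetical publisher robots.txt: refuse the AI-training token,
        # keep ordinary Search crawling. Per the article's complaint, this
        # does NOT opt the site out of AI Mode or AI Overviews.
        import urllib.robotparser

        ROBOTS_TXT = """
        User-agent: Google-Extended
        Disallow: /

        User-agent: *
        Allow: /
        """

        rp = urllib.robotparser.RobotFileParser()
        rp.modified()                       # mark the rules as freshly loaded
        rp.parse(ROBOTS_TXT.splitlines())   # parser strips per-line whitespace

        # The AI-training token is refused...
        print(rp.can_fetch("Google-Extended", "https://example.com/story"))  # False
        # ...while a normal Search crawler is still allowed.
        print(rp.can_fetch("Googlebot", "https://example.com/story"))        # True

    In other words, a publisher can keep its words out of model training, but there is no switch that keeps them out of AI-generated answers while staying in Search.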

    Read more of this story at Slashdot.
    TECH.SLASHDOT.ORG
  • Google's New AI Video Tool Floods Internet With Real-Looking Clips

    Google's new AI video tool, Veo 3, is being used to create hyperrealistic videos that are now flooding the internet, terrifying viewers "with a sense that real and fake have become hopelessly blurred," reports Axios. From the report: Unlike OpenAI's video generator Sora, released more widely last December, Google DeepMind's Veo 3 can include dialogue, soundtracks and sound effects. The model excels at following complex prompts and translating detailed descriptions into realistic videos. The AI engine abides by real-world physics, offers accurate lip-syncing, rarely breaks continuity and generates people with lifelike human features, including five fingers per hand.
    According to examples shared by Google and by users online, the telltale signs of synthetic content are mostly absent.

    In one viral example posted on X, filmmaker and molecular biologist Hashem Al-Ghaili shows a series of short films of AI-generated actors railing against their AI creators and prompts. Special effects technology, video-editing apps and camera tech advances have been changing Hollywood for many decades, but artificially generated films pose a novel challenge to human creators. In a promo video for Flow, Google's new video tool that includes Veo 3, filmmakers say the AI engine gives them a new sense of freedom with a hint of eerie autonomy. "It feels like it's almost building upon itself," filmmaker Dave Clark says.

    Read more of this story at Slashdot.
    TECH.SLASHDOT.ORG
  • Google's Co-Founder Says AI Performs Best When You Threaten It

    Artificial intelligence continues to be the thing in tech—whether consumers are interested or not. What strikes me most about generative AI isn't its features or potential to make my life easier (a potential I have yet to realize); rather, I'm focused these days on the many threats that seem to be rising from this technology. There's misinformation, for sure—new AI video models, for example, are creating realistic clips complete with lip-synced audio. But there's also the classic AI threat: that the technology becomes both more intelligent than us and self-aware, and chooses to use that general intelligence in a way that does not benefit humanity. Even as he pours resources into his own AI company (not to mention the current administration, as well), Elon Musk sees a 10 to 20% chance that AI "goes bad," and says the tech remains a "significant existential threat." Cool.

    So it doesn't necessarily bring me comfort to hear a high-profile, established tech executive jokingly discuss how treating AI poorly maximizes its potential. That would be Google co-founder Sergey Brin, who surprised an audience at a recording of the All-In podcast this week. During a talk that spanned Brin's return to Google, AI, and robotics, investor Jason Calacanis made a joke about getting "sassy" with the AI to get it to do the task he wanted. That sparked a legitimate point from Brin. It can be tough to tell exactly what he says at times due to people speaking over one another, but he says something to the effect of: "You know, that's a weird thing...we don't circulate this much...in the AI community...not just our models, but all models tend to do better if you threaten them." The other speaker looks surprised. "If you threaten them?" Brin responds, "Like with physical violence. But...people feel weird about that, so we don't really talk about that." Brin then says that, historically, you threaten the model with kidnapping. You can see the exchange here:

    The conversation quickly shifts to other topics, including how kids are growing up with AI, but that comment is what I carried away from my viewing. What are we doing here? Have we lost the plot? Does no one remember Terminator?

    Jokes aside, it seems like a bad practice to start threatening AI models in order to get them to do something. Sure, maybe these programs never actually achieve artificial general intelligence (AGI), but I mean, I remember when the discussion was around whether we should say "please" and "thank you" when asking things of Alexa or Siri. Forget the niceties; just abuse ChatGPT until it does what you want it to—that should end well for everyone.

    Maybe AI does perform best when you threaten it. Maybe something in the training understands that "threats" mean the task should be taken more seriously. You won't catch me testing that hypothesis on my personal accounts.

    Anthropic might offer an example of why not to torture your AI

    In the same week as this podcast recording, Anthropic released its latest Claude AI models. One Anthropic employee took to Bluesky and mentioned that Opus, the company's highest-performing model, can take it upon itself to try to stop you from doing "immoral" things, by contacting regulators, the press, or locking you out of the system:
    welcome to the future, now your error-prone software can call the cops (this is an Anthropic employee talking about Claude Opus 4) — Molly White (@molly.wiki), May 22, 2025 at 4:55 PM

    The employee went on to clarify that this has only ever happened in "clear-cut cases of wrongdoing," but that they could see the bot going rogue should it interpret how it's being used in a negative way. Check out the employee's particularly relevant example below:
    can't wait to explain to my family that the robot swatted me after i threatened its non-existent grandma — Molly White (@molly.wiki), May 22, 2025 at 5:09 PM

    That employee later deleted those posts and specified that this only happens during testing given unusual instructions and access to tools. Even if that is true, if it can happen in testing, it's entirely possible it can happen in a future version of the model. Speaking of testing, Anthropic researchers found that this new model of Claude is prone to deception and blackmail, should the bot believe it is being threatened or dislikes the way an interaction is going. Perhaps we should take torturing AI off the table?
    LIFEHACKER.COM
  • Google's New Video-Generating AI May Be the End of Reality as We Know It

    Google's got a brand new AI video generator, and it's so sophisticated that we're starting to sweat around the collar a bit. Google DeepMind describes the new model, Veo 3, as capable of delivering "best in class quality, excelling in physics, realism and prompt adherence" — and as videos posted to social media indicate, that marketing doesn't fall too far short.

    The caliber of the video is indeed impressive. But the real quantum leap is that the system can produce audio that goes with the clip, ranging from sound effects to music to human speech and singing.

    The internet was quick to riff on all those capabilities, sometimes in the very same clip. They often got pretty meta. In one clip posted to the r/Singularity subreddit, lifelike AI "actors" discuss the range of actions the new model can generate. "We can talk!" one of the non-people exclaims. "No more silence!" another enthuses. As users commented on the thread, commercials and other human creations could soon be "cooked" thanks to the rapidly accelerating technology. "Netflix will be the first to roll this out," another prophesied. "I should buy some stock. People will watch this shit like crazy."

    Over on Elon Musk's X, that mix of loathing and excitement was similarly palpable. In a lengthy thread, the AI-boosting account TechHalla showcased Veo 3 videos ranging from the fantastical (a giraffe riding a moped through Manhattan) to the mundane (a man teaching a classroom full of old people). The video generator's artificial physics were on full display in TechHalla's roundup, with one showing a paper boat floating in a puddle before falling into a street hole, looking more like the real thing and less like an animated still life than Veo 3's predecessors.

    The thread's standout, to our minds, was one showing a girl typing on a custom keyboard in a simulacrum of autonomous sensory meridian response, better known as ASMR. At first blush, it seems nothing spectacular is going on — until one recalls that AI image and video generators often used to struggle to make lifelike hands and fingers. And the online personalities who create ASMR content professionally? They'll be quaking in their whisper-quiet boots after this one.

    Given its sophistication, it's no surprise that Google DeepMind's latest creation can generate horrific content, too. Posted on Reddit, one clip shows a dirty-looking man in a dimly lit bar begging whoever generated him to, well, not. "Please don't finish writing that prompt," the man implores. "I don't want to be in your AI movie!" The video then switches to an apparent post-apocalyptic street scene where the man and a female companion are seen trudging through rubble. The woman runs up to the non-existent camera and begs the viewer to "write a prompt that will make us happy." "Do it for once!" she shouts — and for just a second, we almost believed her.

    Obviously, the "people" in that clip, like the others before it, are not real and were intentionally modeled via prompting to tug at our heartstrings — but these videos' ability to do so is pretty freaky.
    FUTURISM.COM
  • Google's Futuristic Beam Tech Almost Made Me Forget I Was on a Video Call

    MOUNTAIN VIEW, Calif.—Google Beam does something uncanny to a 65-inch display: It transforms it into a strange sort of window through which the person to whom you’re speaking appears not as a two-dimensional pack of pixels but as a 3D, holographic image floating in front of the display.

    Google first showed off what was then called Project Starline at I/O 2021, itself staged as a virtual event due to the pandemic. Almost three years after starting tests with such firms as T-Mobile and Salesforce, the company is now ready to commercialize this technology. Last year, Google announced that HP would bring the first Beam system to market, a partnership CEO Sundar Pichai touted in I/O's two-hour keynote this week. On Wednesday afternoon, I got to take a look at prototype hardware in a booth at the show.
    The six cameras around a large screen set Beam apart from typical video conferencing. (Google didn't allow photos.) But then I connected to a Google product manager sitting in front of another Beam setup elsewhere on its campus, and it was as if he had just sat down across the table. Or as if the screen had inflated to a sphere with him at its most forward part.

    Google accomplishes this by using what it calls a “state-of-the-art AI volumetric video model” to fuse the input from those six cameras into output shown on that light-field screen. That extremely high-resolution display technology shows slightly different images to each eye to create a 3D effect without your having to strap on the kind of glasses required for 3D TVs.

    Light field isn’t a new concept; the startup Lytro tried to commercialize the technology in its cameras starting in 2012, and firms such as San Jose-based Light Field Lab are working on their own display implementations of it. But Google and HP bring much deeper pockets and corporate customers with the budgets that might accommodate what must be an expensive rig. (Google’s I/O post about Beam says HP will reveal more details at the InfoComm trade show in Orlando next month. Google suggests Beam will need at least 30Mbps of bandwidth, which is less than I would have guessed.)

    Beam will not be a Google-only product, supporting Zoom as well as Google Meet; the latter will include the near-real-time language translation that Google showed off at I/O. Despite a presumably massive amount of computation and bandwidth needed, the audio and video stayed in sync throughout this roughly five-minute session. (“Call” seems inadequate to describe the experience.) But I also noticed some glitches around the edges of my interlocutor’s appearance.

    For example, when he picked up a green apple, a part of a Starline demo we took in at last year’s I/O, parts of his fingers shimmered around it and the spaces between the apple and his hand blurred. Then I noticed a small green shimmer on his neck that roughly matched where the fruit’s shiny surface could have been reflected on his skin.

    Beam also seems sensitive to your own placement between its cameras, which can allow for some in-call mischief. Leaning too far to one side or the other yielded an onscreen alert to center myself, a reminder that this is built for chats between individual people. And reaching one arm too far to one side or the other results in your hand appearing to be cut off, with only the virtual background behind where that appendage should have been. And if you reach behind you, you will appear to pierce that wall with your hand. Beam supports virtual backgrounds, although the one for this call was the most boring kind of flat gray possible.

    The whole effect, however, was realistic enough that a handshake seemed in order instead of the now-traditional Zoom wave. We could not do that, but we could do the closest approximation of a high-five that I’ve ever seen possible on a video call.
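    That 30Mbps figure invites a quick sanity check. Here is a back-of-the-envelope sketch of the arithmetic, with assumed capture parameters (six 1080p60 RGB feeds; Google hasn't published the cameras' actual specs):

        # Back-of-the-envelope only: per-camera resolution, frame rate, and
        # bit depth are assumptions, not published Beam specifications.
        CAMERAS = 6
        WIDTH, HEIGHT, FPS = 1920, 1080, 60   # assumed per-camera capture
        BITS_PER_PIXEL = 24                   # uncompressed 8-bit RGB

        raw_bps = CAMERAS * WIDTH * HEIGHT * BITS_PER_PIXEL * FPS
        budget_bps = 30e6                     # the ~30Mbps figure Google cites

        print(f"raw capture: {raw_bps / 1e9:.1f} Gbps")             # ~17.9 Gbps
        print(f"reduction needed: ~{raw_bps / budget_bps:,.0f}x")   # ~600x

    Under those assumptions, the volumetric model and codec are squeezing roughly three orders of magnitude out of the raw camera data, which makes the mostly glitch-free session all the more impressive.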
    ME.PCMAG.COM
  • Google's Veo 3 Is Already Deepfaking All of YouTube's Most Smooth-Brained Content

    By James Pero | Published May 22, 2025

    Google Veo 3 man-on-the-street video generation. © Screenshot by Gizmodo

    Wake up, babe, new viral AI video generator dropped. This time, it’s not OpenAI’s Sora model in the spotlight, it’s Google’s Veo 3, which was announced on Tuesday during the company’s annual I/O keynote. Naturally, people are eager to see what chaos Veo 3 can wreak, and the results have been, well, chaotic. We’ve got disjointed Michael Bay fodder, talking muffins, self-aware AI sims, puppy-centric pharmaceutical ads—the list goes on. One thing that I keep seeing over and over, however, is—to put it bluntly—AI slop, and a very specific variety. For whatever reason, all of you seem to be absolutely hellbent on getting Veo to conjure up a torrent of smooth-brain YouTube content. The worst part is that this thing is actually kind of good at cranking it out, too. Don’t believe me? Here are the receipts. Is this 100% convincing? No. No, it is not. At a glance, though, most people wouldn’t be able to tell the difference if they’re just scrolling through their social feed mindlessly as one does when they’re using literally any social media site/app. Unboxing not cutting it for you? Well, don’t worry, we’ve got some man-on-the-street slop for your viewing pleasure. Sorry, hawk-tuah girl, it’s the singularity’s turn to capitalize on viral fame.

    Again, Veo’s generation is not perfect by any means, but it’s not exactly unconvincing, either. And there’s more bad news: Your Twitch-like smooth-brain content isn’t safe either. Here’s one of a picture-in-picture-style “Fortnite” stream that simulates gameplay and everything. I say “Fortnite” in scare quotes because this is just an AI representation of what Fortnite looks like, not the real thing. Either way, the only thing worse than mindless game streams is arguably mindless game streams that never even happened. And to be honest, the idea of simulating a simulation makes my brain feel achy, so for that reason alone, I’m going to hard pass. Listen, I’m not trying to be an alarmist here. In the grand scheme of things, AI-generated YouTube, Twitch, or TikTok chum isn’t going to hurt anyone, exactly, but it also doesn’t paint a rosy portrait of our AI-generated future. If there’s one thing we don’t need more of, it’s filler. Social media, without AI entering the equation, is already mostly junk, and it does make one wonder what the results of widespread generative video will really be in the end. Maybe I’ll wind up with AI-generated egg on my face, and video generators like Flow, Google’s “AI filmmaker,” will be a watershed product for real creators, but I have my doubts.

    At the very least, I’d like to see some safeguards if video generation is going to go mainstream. As harmless as AI slop might be, the ability to generate fairly convincing video isn’t one that should be taken lightly. There’s obviously huge potential for misinformation and propaganda, and if all it takes to help mitigate that is watermarking videos created in Veo 3, then it feels like an easy first step. For now, we’ll just have to take the explosion of Veo 3-enabled content with a spoonful of molasses, because there’s a lot of slop to get to, and this might be just the first course.

    GIZMODO.COM
  • Google's most powerful AI tools aren't for us

    At I/O 2025, nothing Google showed off felt new. Instead, we got a retread of the company's familiar obsession with its own AI prowess. For the better part of two hours, Google played up products like AI Mode, generative AI apps like Jules and Flow, and a bewildering new monthly AI Ultra plan.
    During Tuesday's keynote, I thought a lot about my first visit to Mountain View in 2018. I/O 2018 was different. Between Digital Wellbeing for Android, an entirely redesigned Maps app and even Duplex, Google felt like a company that had its finger on the pulse of what people wanted from technology. In fact, later that same year, my co-worker Cherlynn Low penned a story titled How Google won software in 2018. "Companies don't often make features that are truly helpful, but in 2018, Google proved its software can change your life," she wrote at the time, referencing the Pixel 3's Call Screening and "magical" Night Sight features.

    What announcement from Google I/O 2025 comes even close to Night Sight, Google Photos, or, if you're being more generous to the company, Call Screening or Duplex? The only one that comes to my mind is the fact that Google is bringing live language translation to Google Meet. That's a feature that many will find useful, and Google spent all of approximately a minute talking about it.
    I'm sure there are people who are excited to use Jules to vibe code or Veo 3 to generate video clips, but is either of those products truly transformational? Some "AI filmmakers" may argue otherwise, but when's the last time you thought your life would be dramatically better if you could only get a computer to make you a silly, 30-second clip?
    By contrast, consider the impact Night Sight has had. With one feature, Google revolutionized phones by showing that software, with the help of AI, could overcome the physical limits of minuscule camera hardware. More importantly, Night Sight was a response to a real problem people had in the real world. It spurred companies like Samsung and Apple to catch up, and now any smartphone worth buying has serious low light capabilities. Night Sight changed the industry, for the better.
    The fact that you have to pay $250 per month to use Veo 3 and Google's other frontier models as much as you want should tell you everything you need to know about who the company thinks these tools are for: they're not for you and me. I/O is primarily an event for developers, but the past several I/O conferences have felt like Google flexing its AI muscles rather than using those muscles to do something useful. In the past, the company had a knack for contextualizing what it was showing off in a way that would resonate with the broader public.
    By 2018, machine learning was already at the forefront of nearly everything Google was doing, and, more so than any other big tech company at the time, Google was on the bleeding edge of that revolution. And yet the difference between now and then was that in 2018 it felt like much of Google's AI might was directed in the service of tools and features that would actually be useful to people. Since then, for Google, AI has gone from a means to an end to an end in and of itself, and we're all the worse for it.

    Even less dubious features like AI Mode offer questionable usefulness. Google debuted the chatbot earlier this year, and has since been making it available to more and more people. The problem with AI Mode is that it's designed to solve a problem of the company's own making. We all know the quality of Google Search results has declined dramatically over the last few years. Rather than fixing what's broken and making its system harder to game by SEO farms, Google tells us AI Mode represents the future of its search engine.
    The thing is, a chatbot is not a replacement for a proper search engine. I frequently use ChatGPT Search to research things I'm interested in. However, as great as it is to get a detailed and articulate response to a question, ChatGPT can and will often get things wrong. We're all familiar with the errors AI Overviews produced when Google first started rolling out the feature. AI Overviews might not be in the news anymore, but they're still prone to producing embarrassing mistakes. Just take a look at the screenshot my co-worker Kris Holt sent to me recently.
    Kris Holt for Engadget
    I don't think it's an accident I/O 2025 ended with a showcase of Android XR, a platform that sees the company revisiting a failed concept. Let's also not forget that Android, an operating system billions of people interact with every day, was relegated to a pre-taped livestream the week before. Right now, Google feels like a company eager to repeat the mistakes of Google Glass. Rather than meeting people where they are, Google is creating products few are actually asking for. I don't know about you, but that doesn't make me excited for the company's future. This article originally appeared on Engadget at https://www.engadget.com/ai/googles-most-powerful-ai-tools-arent-for-us-134657007.html?src=rss
  • I let Google's Jules AI agent into my code repo and it did four hours of work in an instant

    hemul75/Getty Images
    Okay. Deep breath. This is surreal. I just added an entire new feature to my software, including UI and functionality, just by typing four paragraphs of instructions. I have screenshots, and I'll try to make sense of it in this article. I can't tell if we're living in the future or we've just descended to a new plane of hell (or both).
    Let's take a step back. Google's Jules is the latest in a flood of new coding agents released just this week. I wrote about OpenAI Codex and Microsoft's GitHub Copilot Coding Agent at the beginning of the week, and ZDNET's Webb Wright wrote about Google's Jules.
    Also: I test a lot of AI coding tools, and this stunning new OpenAI release just saved me days of work
    All of these coding agents will perform coding operations on a GitHub repository. GitHub, for those who've been following along, is the giant Microsoft-owned software storage, management, and distribution hub for much of the world's most important software, especially open source code. The difference, at least as it pertains to this article, is that Google made Jules available to everyone, for free. That meant I could just hop in and take it for a spin. And now my head is spinning.
    Usage limits and my first two prompts
    The free access version of Jules allows only five requests per day. That might not seem like a lot, but in only two requests, I was able to add a new feature to my software. So, don't discount what you can get done if you think through your prompts before shooting off your silver bullets for the day.
    My first two prompts were tentative. It wasn't that I wasn't impressed; it was that I really wasn't giving Jules much to do. I'm still not comfortable with the idea of setting an AI loose on all my code at once, so I played it safe. My first prompt asked Jules to document the "hooks" that add-on developers could use to add features to my product. I didn't tell Jules much about what I wanted. It returned some markup that it recommended dropping into my code's readme file. It worked, but meh.
    Screenshot by David Gewirtz/ZDNET
    I did have the opportunity to publish that code to a new GitHub branch, but I skipped it. It was just a test, after all. My second prompt was to ask Jules to suggest five new hooks. I got back an answer that seemed reasonable. However, I realized that opening up those capabilities in a security product was just too risky for me to delegate to an AI. I skipped those changes, too.
    It was at this point that Jules wanted a coffee break. It stopped functioning for about 90 minutes.
    Screenshot by David Gewirtz/ZDNET
    That gave me time to think. What I really wanted to see was whether Jules could add some real functionality to my code and save me some time.
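    For readers who haven't built WordPress add-ons: the "hooks" I asked Jules to document are named action and filter points that a plugin exposes so other code can attach behavior to them. Here is a minimal sketch of the concept; the hook name and callback are hypothetical illustrations, not My Private Site's actual hooks.

    <?php
    // Inside a plugin, at a well-defined moment, fire an action so
    // add-ons can react. The hook name here is invented for illustration.
    do_action( 'mps_example_access_denied', get_the_ID() );

    // In an add-on, a developer subscribes to that hook:
    add_action( 'mps_example_access_denied', function ( $post_id ) {
        // An add-on could log, redirect, gather stats, or notify here.
        error_log( "Access denied for post {$post_id}" );
    } );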
    Necessary background information
    My Private Site is a security plugin for WordPress. It's running on about 20,000 active sites. It puts a login dialog in front of the site's web pages. There are a bunch of options, but that's the key feature. I originally acquired the software a decade ago from a coder who called himself "jonradio," and I have been maintaining and expanding it ever since.
    Also: Rust turns 10: How a broken elevator changed software forever
    The plugin provides access control to the front-end of a website, the pages that visitors see when they come to the site. Site owners control the plugin via a dashboard interface, with various admin functions available in the plugin's admin interface. I decided to try Jules out on a feature some users have requested: hiding the admin bar from logged-in users.
    The admin bar is the black bar WordPress puts on the top of a web page. In the case of the screenshot below, the black admin bar is visible.
    Screenshot by David Gewirtz/ZDNET
    I wanted Jules to add an option on the dashboard to hide the admin bar from logged-in users. The idea is that if a user logged in, the admin bar would be visible on the back end, but logged-in users browsing the front-end of the site wouldn't have to see the ugly bar. This is the original dashboard, before adding the new feature.
    Screenshot by David Gewirtz/ZDNET
    Some years ago, I completely rewrote the admin interface from the way it was when I acquired the plugin. Adding options to the interface is straightforward, but it's still time-consuming. Every option requires not only the UI element to be added, but also preference saving and preference recalling when the dashboard is displayed. That's in addition to any program logic that the preference controls. In practice, I've found that it takes me about 2-3 hours to add a preference UI element, along with the assorted housekeeping involved. It's not hard, but there are a lot of little fiddly bits that all need to be tweaked. That takes time.
    That should bring you up to speed enough to understand my next test of Jules. Here's a bit of foreshadowing: the first test failed miserably. The second test succeeded astonishingly.
    Instructing Jules
    Adding a hide-admin-bar feature is not something that would have been easy for the run-of-the-mill coding help we've been asking ChatGPT and the other chatbots to perform. As I mentioned, adding the new option to the dashboard requires programming in a variety of locations throughout the code, and it also requires an understanding of the overall codebase. Here's what I told Jules.
    1. On the Site Privacy Tab of the admin interface, add a new checkbox. Label the section "Admin Bar" and label the checkbox itself "Hide Admin Bar". [Place this in the MAKE SITE PRIVATE block, located just under the Enable login privacy checkbox and before the Site Privacy Mode segment.]
    I instructed Jules where I wanted the AI to put the new option. On my first run through, I made a mistake and left out the details in square brackets. I didn't tell Jules exactly where I wanted it to place the new option. As it turns out, that omission caused a big fail. Once I added in the sentence in brackets above, the feature worked.
    2. Be sure to save the selection of that checkbox to the plugin's preferences variable when the Save Privacy Status button is checked.
    This makes sure Jules knows that there is a preference data structure, and to be sure to update it when the user makes a change. It's important to note that if I didn't have an understanding of the underlying code, I wouldn't have instructed Jules about this, and the code would not work. You can't "vibe code" something like this without knowing the underlying code.
    3. Show the appropriate checked or unchecked status when the Site Privacy tab is displayed.
    This tells the AI that I want the interface to be updated to match what the preference variable specifies.
    4. Based on the preference variable created in (2), add code to hide or show the WordPress admin bar. If Hide Admin Bar is checked, the Admin Bar should not be visible to logged-in WordPress front-end users. If the Hide Admin Bar is not checked, the Admin Bar should be visible to logged-in front-end users. Logged-in back-end users in the admin interface should always be able to see the admin bar.
    This describes the business logic that the new preference should control. It requires the AI to know how to hide or show the admin bar (a WordPress API call is used), and it requires the AI to know where to put the code in my plugin to enable or disable this feature. And with that, Jules was trained on what I wanted.
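    To make the scope of those four instructions concrete, here is roughly the shape of PHP they describe. This is a hypothetical sketch with invented option and function names; it is not Jules's actual output or My Private Site's real source.

    <?php
    // Hypothetical sketch of the requested feature; identifiers are invented.

    // (2) Persist the checkbox into the plugin's preferences array
    // when the settings form is saved.
    function mps_example_save_settings() {
        $prefs = get_option( 'mps_example_prefs', array() );
        $prefs['hide_admin_bar'] = ! empty( $_POST['hide_admin_bar'] );
        update_option( 'mps_example_prefs', $prefs );
    }

    // (1) and (3) Render the checkbox, reflecting the saved preference.
    function mps_example_render_hide_admin_bar_checkbox() {
        $prefs = get_option( 'mps_example_prefs', array() );
        echo '<label><input type="checkbox" name="hide_admin_bar" '
            . checked( ! empty( $prefs['hide_admin_bar'] ), true, false )
            . '> Hide Admin Bar</label>';
    }

    // (4) Business logic: suppress the bar for logged-in front-end users.
    // WordPress applies the 'show_admin_bar' filter only on front-end
    // requests, so back-end (wp-admin) users always keep their bar.
    add_filter( 'show_admin_bar', function ( $show ) {
        $prefs = get_option( 'mps_example_prefs', array() );
        return empty( $prefs['hide_admin_bar'] ) ? $show : false;
    } );

    Each numbered instruction maps onto one of those fiddly bits: markup, persistence, state display, and the runtime behavior itself. That spread across the codebase is exactly why even a small option normally costs a couple of hours.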
    Jules dives into my code
    I fed my prompt set into Jules and got back a plan of action. Pay close attention to that Approve Plan? button.
    Screenshot by David Gewirtz/ZDNET
    I didn't even get a chance to read through the plan before Jules decided to approve the plan on its own. It did this after every plan it presented. An AI that doesn't wait for permission raises the hairs on the back of my neck. Just saying.
    Screenshot by David Gewirtz/ZDNET
    I desperately want to make a Skynet/Landru/Colossus/P1/Hal kind of joke, because I'm freaked out. I mean, it's good. But I'm freaked out. Here's some of the code Jules wrote. The shaded green is the new stuff. I'm not thrilled with the color scheme, but I'm sure that will be tweakable over time.
    Also: The best free AI courses and certificates in 2025
    More relevant is the fact that Jules picked up on my variable naming conventions and the architecture of my code and dived right in. This is the new option, rendered in code.
    Screenshot by David Gewirtz/ZDNET
    By the time it was done, Jules had written in all the code changes it planned for originally, plus some test code. I don't use standardized tests. I would have told Jules not to do it the way it planned, but it never gave me time to approve or modify its original plan. Even so, it worked out.
    Screenshot by David Gewirtz/ZDNET
    I pushed the Publish branch button, which caused GitHub to create a new branch, separate from my main repository. Jules then published its changes to that branch.
    Screenshot by David Gewirtz/ZDNET
    This is how contributors to big projects can work on those projects without causing chaos to the main code line. Up to this point, I could look at the code, but I wasn't able to run it. By pushing the code to a branch, Jules and GitHub made it possible for me to replicate the changes safely down to my computer to test them out. If I didn't like the changes, I could have just switched back to the main branch and no harm, no foul. But I did like the changes, so I moved on to the next step.
    Around the code in 8 clicks
    Once I brought the branch down to my development machine, I could test it out. Here's the new dashboard with the Hide Admin Menu feature.
    Screenshot by David Gewirtz/ZDNET
    I tried turning the feature on and off and making sure the settings stuck. They did. I also tried other features in the plugin to make sure nothing else had broken. I was pretty sure nothing would, because I reviewed all the changes before approving the branch. But still. Testing is a good thing to do. I then logged into the test website. As you can see, there's no admin bar showing.
    Screenshot by David Gewirtz/ZDNET
    At this point, the process was out of the AI's hands. It was simply time to deploy the changes, both back to GitHub and to the master WordPress repository. First, I used GitHub Desktop to merge the branch code back into the main branch on my development machine. I changed "Hide Admin Menu" to "Hide admin menu" in my code's main branch, because I like it better. I pushed that (the full main branch on my local machine) back to the GitHub cloud.
    Screenshot by David Gewirtz/ZDNET
    Then, because I just don't like random branches hanging around once they've been incorporated into the distribution version, I deleted the new branch on my computer.
    Screenshot by David Gewirtz/ZDNET
    I also deleted the new branch from the GitHub cloud service.
    Screenshot by David Gewirtz/ZDNET
    Finally, I packaged up the new code. I added a change to the readme to describe the new feature and to update the code's version number. Then, I pushed it using SVN (the source code control system used by the WordPress community) up to the WordPress plugin repository.
    Journey to the center of the code
    Jules is very definitely beta right now. It hung in a few places. Some screens didn't update. It decided to check out for 90 minutes, and I had to wait while it went to and came back from its digital happy place. It's evidencing all the sorts of things you'd expect from a newly released piece of code. I have no concerns about that. Google will clean it up.
    The fact that Jules (and presumably OpenAI Codex and GitHub Copilot Coding Agent) can handle an entire repository of code across a bunch of files is big. That's a much deeper level of understanding and integration than we saw even six months ago.
    Also: How to move your codebase into GitHub for analysis by ChatGPT Deep Research - and why you should
    The speed with which it can change an entire codebase is terrifying. The damage it can do is potentially extraordinary. It will gleefully go through and modify everything in your codebase, and if you specify something wrong and then push or merge, you will have an epic mess on your hands. There is a deep inequality between how quickly it can change code and how long it will take a human to review those changes. Working on this scale will require excellent unit tests. Even tools like mine, which don't lend themselves to full unit testing, will require some kind of automated validation to prevent robot-driven errors on a massive scale.
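    On that automated-validation point: even without full WordPress test scaffolding, the decision logic an agent touches can be pulled into a pure function and pinned down with ordinary PHPUnit tests. A sketch, again with hypothetical names:

    <?php
    use PHPUnit\Framework\TestCase;

    // Pure decision function, extracted so it can run without WordPress.
    function mps_example_should_show_admin_bar( array $prefs, bool $is_admin_screen ): bool {
        if ( $is_admin_screen ) {
            return true; // wp-admin users always keep the bar
        }
        return empty( $prefs['hide_admin_bar'] );
    }

    final class AdminBarPrefTest extends TestCase {
        public function test_front_end_bar_hidden_when_option_set(): void {
            $this->assertFalse( mps_example_should_show_admin_bar( array( 'hide_admin_bar' => true ), false ) );
        }

        public function test_front_end_bar_shown_by_default(): void {
            $this->assertTrue( mps_example_should_show_admin_bar( array(), false ) );
        }

        public function test_admin_screens_always_show_bar(): void {
            $this->assertTrue( mps_example_should_show_admin_bar( array( 'hide_admin_bar' => true ), true ) );
        }
    }

    Checks like these won't catch everything an agent can break, but they shrink the gap between how fast an agent changes code and how fast a human can verify it.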
    Those who are afraid these tools will take jobs from programmers should be concerned, but not in the way most people think. It is absolutely, totally, one-hundo-percent necessary for experienced coders to review and guide these agents. When I left out one critical instruction, the agent gleefully bricked my site. Since I was the person who wrote the code initially, I knew what to fix. But it would have been brutally difficult for someone else to figure out what had been left out and how to fix it. That would have required coming up to speed on all the hidden nuances of the entire architecture of the code.
    Also: How to turn ChatGPT into your AI coding power tool - and double your output
    The jobs that are likely to be destroyed are those of junior developers. Jules is easily doing junior-developer-level work. With tools like Jules or Codex or Copilot, which cost a few hundred bucks a month at most, it's going to be hard for management to be willing to pay medium-to-high six figures for midlevel and junior programmers. Even outsourcing and offshoring isn't as cheap as using an AI agent to do maintenance coding. And, as I wrote about earlier in the week, if there are no mid-level jobs available, how will we train the experienced people we're going to need in the future?
    I am also concerned about how access limits will shake out. Productivity gains will drop like a rock if you need to do one more prompt and you have to wait a day to be allowed to do so.
    Screenshot by David Gewirtz/ZDNET
    As for me, in less than 10 minutes, I turned out a new feature that had been requested by readers. While I was writing another article, I fed the prompt to Jules. I went back to work on the article and checked on Jules when it was finished. I checked out the code, brought it down to my computer, and pushed a release. It took me longer to upload the thing to the WordPress repository than to add the entire new feature.
    For that class of feature, I got half a day's work done in less than half an hour, from thinking about making it happen to published to my users. In the last two hours, 2,500 sites have downloaded and installed the new feature. That will surge to well over 10,000 by morning (it's about 8 p.m. now as I write this). Without Jules, those users probably would have been waiting months for this new feature, because I have a huge backlog of work, and it wasn't my top priority. But with Jules, it took barely any effort.
    Also: 7 productivity gadgets I can't live without (and why they make such a big difference)
    These tools are going to require programmers, managers, and investors to rethink the software development workflow. There will be glaring "you can't get there from here" gotchas. And there will be epic failures and coding errors. But I have no doubt that this is the next level of AI-based coding. Real, human intelligence is going to be necessary to figure out how to deal with it.
    Have you tried Google's Jules or any of the other new AI coding agents? Would you trust them to make direct changes to your codebase, or do you prefer to keep a tighter manual grip? What kinds of developer tasks do you think these tools should and shouldn't handle? Let us know in the comments below.
    Want more stories about AI? Sign up for Innovation, our weekly newsletter. You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
    #let #google039s #jules #agent #into
    I let Google's Jules AI agent into my code repo and it did four hours of work in an instant
    hemul75/Getty ImagesOkay. Deep breath. This is surreal. I just added an entire new feature to my software, including UI and functionality, just by typing four paragraphs of instructions. I have screenshots, and I'll try to make sense of it in this article. I can't tell if we're living in the future or we've just descended to a new plane of hell.Let's take a step back. Google's Jules is the latest in a flood of new coding agents released just this week. I wrote about OpenAI Codex and Microsoft's GitHub Copilot Coding Agent at the beginning of the week, and ZDNET's Webb Wright wrote about Google's Jules. Also: I test a lot of AI coding tools, and this stunning new OpenAI release just saved me days of workAll of these coding agents will perform coding operations on a GitHub repository. GitHub, for those who've been following along, is the giant Microsoft-owned software storage, management, and distribution hub for much of the world's most important software, especially open source code. The difference, at least as it pertains to this article, is that Google made Jules available to everyone, for free. That meant I could just hop in and take it for a spin. And now my head is spinning. Usage limits and my first two prompts The free access version of Jules allows only five requests per day. That might not seem like a lot, but in only two requests, I was able to add a new feature to my software. So, don't discount what you can get done if you think through your prompts before shooting off your silver bullets for the day. My first two prompts were tentative. It wasn't that I wasn't impressed; it was that I really wasn't giving Jules much to do. I'm still not comfortable with the idea of setting an AI loose on all my code at once, so I played it safe. My first prompt asked Jules to document the "hooks" that add-on developers could use to add features to my product. I didn't tell Jules much about what I wanted. It returned some markup that it recommended dropping into my code's readme file. It worked, but meh. Screenshot by David Gewirtz/ZDNETI did have the opportunity to publish that code to a new GitHub branch, but I skipped it. It was just a test, after all. My second prompt was to ask Jules to suggest five new hooks. I got back an answer that seemed reasonable. However, I realized that opening up those capabilities in a security product was just too risky for me to delegate to an AI. I skipped those changes, too. It was at this point that Jules wanted a coffee break. It stopped functioning for about 90 minutes. Screenshot by David Gewirtz/ZDNETThat gave me time to think. What I really wanted to see was whether Jules could add some real functionality to my code and save me some time. Necessary background information My Private Site is a security plugin for WordPress. It's running on about 20,000 active sites. It puts a login dialog in front of the site's web pages. There are a bunch of options, but that's the key feature. I originally acquired the software a decade ago from a coder who called himself "jonradio," and have been maintaining and expanding it ever since. Also: Rust turns 10: How a broken elevator changed software foreverThe plugin provides access control to the front-end of a website, the pages that visitors see when they come to the site. Site owners control the plugin via a dashboard interface, with various admin functions available in the plugin's admin interface. I decided to try Jules out on a feature some users have requested, hiding the admin bar from logged-in users. 
The admin bar is the black bar WordPress puts on the top of a web page. In the case of the screenshot below, the black admin bar is visible. Screenshot by David Gewirtz/ZDNETI wanted Jules to add an option on the dashboard to hide the admin bar from logged-in users. The idea is that if a user logged in, the admin bar would be visible on the back end, but logged-in users browsing the front-end of the site wouldn't have to see the ugly bar. This is the original dashboard, before adding the new feature. Screenshot by David Gewirtz/ZDNETSome years ago, I completely rewrote the admin interface from the way it was when I acquired the plugin. Adding options to the interface is straightforward, but it's still time-consuming. Every option requires not only the UI element to be added, but also preference saving and preference recalling when the dashboard is displayed. That's in addition to any program logic that the preference controls. In practice, I've found that it takes me about 2-3 hours to add a preference UI element, along with the assorted housekeeping involved. It's not hard, but there are a lot of little fiddly bits that all need to be tweaked. That takes time. That should bring you up to speed enough to understand my next test of Jules. Here's a bit of foreshadowing: the first test failed miserably. The second test succeeded astonishingly. Instructing Jules Adding a hide admin bar feature is not something that would have been easy for the run-of-the-mill coding help we've been asking ChatGPT and the other chatbots to perform. As I mentioned, adding the new option to the dashboard requires programming in a variety of locations throughout the code, and also requires an understanding of the overall codebase. Here's what I told Jules. 1. On the Site Privacy Tab of the admin interface, add a new checkbox. Label the section "Admin Bar" and label the checkbox itself "Hide Admin Bar".I instructed Jules where I wanted the AI to put the new option. On my first run through, I made a mistake and left out the details in square brackets. I didn't tell Jules exactly where I wanted it to place the new option. As it turns out, that omission caused a big fail. Once I added in the sentence in brackets above, the feature worked. 2. Be sure to save the selection of that checkbox to the plugin's preferences variable when the Privacy Status button is checked. This makes sure Jules knows that there is a preference data structure, and to be sure to update it when the user makes a change. It's important to note that if I didn't have an understanding of the underlying code, I wouldn't have instructed Jules about this, and the code would not work. You can't "vibe code" something like this without knowing the underlying code. 3. Show the appropriate checked or unchecked status when the Site Privacy tab is displayed. This tells the AI that I want the interface to be updated to match what the preference variable specifies. 4. Based on the preference variable created in, add code to hide or show the WordPress admin bar. If Hide Admin Bar is checked, the Admin Bar should not be visible to logged-in WordPress front-end users. If the Hide Admin Bar is not checked, the Admin Bar should be visible to logged-in front-end users. Logged-in back-end users in the admin interface should always be able to see the admin bar. This describes the business logic that the new preference should control. 
It requires the AI to know how to hide or show the admin bar, and it requires the AI to know where to put the code in my plugin to enable or disable this feature. And with that, Jules was trained on what I wanted. Jules dives into my code I fed my prompt set into Jules and got back a plan of action. Pay close attention to that Approve Plan? button. Screenshot by David Gewirtz/ZDNETI didn't even get a chance to read through the plan before Jules decided to approve the plan on its own. It did this after every plan it presented. An AI that doesn't wait for permission raises the hairs on the back of my neck. Just saying. Screenshot by David Gewirtz/ZDNETI desperately want to make a Skynet/Landru/Colossus/P1/Hal kind of joke, because I'm freaked out. I mean, it's good. But I'm freaked out. Here's some of the code Jules wrote. The shaded green is the new stuff. I'm not thrilled with the color scheme, but I'm sure that will be tweakable over time. Also: The best free AI courses and certificates in 2025More relevant is the fact that Jules picked up on my variable naming conventions and the architecture of my code and dived right in. This is the new option, rendered in code. Screenshot by David Gewirtz/ZDNETBy the time it was done, Jules had written in all the code changes it planned for originally, plus some test code. I don't use standardized tests. I would have told Jules not to do it the way it planned, but it never gave me time to approve or modify its original plan. Even so, it worked out. Screenshot by David Gewirtz/ZDNETI pushed the Publish branch button, which caused GitHub to create a new branch, separate from my main repository. Jules then published its changes to that branch. Screenshot by David Gewirtz/ZDNETThis is how contributors to big projects can work on those projects without causing chaos to the main code line. Up to this point, I could look at the code, but I wasn't able to run it. But by pushing the code to a branch, Jules and GitHub made it possible for me to replicate the changes safely down to my computer to test them out. If I didn't like the changes, I could have just switched back to the main branch and no harm, no foul. But I did like the changes, so I moved on to the next step. Around the code in 8 clicks Once I brought the branch down to my development machine, I could test it out. Here's the new dashboard with the Hide Admin Menu feature. Screenshot by David Gewirtz/ZDNETI tried turning the feature on and off and making sure the settings stuck. They did. I also tried other features in the plugin to make sure nothing else had broken. I was pretty sure nothing would, because I reviewed all the changes before approving the branch. But still. Testing is a good thing to do. I then logged into the test website. As you can see, there's no admin bar showing. Screenshot by David Gewirtz/ZDNETAt this point, the process was out of the AI's hands. It was simply time to deploy the changes, both back to GitHub and to the master WordPress repository. First, I used GitHub Desktop to merge the branch code back into the main branch on my development machine. I changed "Hide Admin Menu" to "Hide admin menu" in my code's main branch, because I like it better. I pushed thatback to the GitHub cloud. Screenshot by David Gewirtz/ZDNETThen, because I just don't like random branches hanging around once they've been incorporated into the distribution version, I deleted the new branch on my computer. Screenshot by David Gewirtz/ZDNETI also deleted the new branch from the GitHub cloud service. 
Screenshot by David Gewirtz/ZDNETFinally, I packaged up the new code. I added a change to the readme to describe the new feature and to update the code's version number. Then, I pushed it using SVNup to the WordPress plugin repository. Journey to the center of the code Jules is very definitely beta right now. It hung in a few places. Some screens didn't update. It decided to check out for 90 minutes. I had to wait while it went to and came back from its digital happy place. It's evidencing all the sorts of things you'd expect from a newly-released piece of code. I have no concerns about that. Google will clean it up. The fact that Julescan handle an entire repository of code across a bunch of files is big. That's a much deeper level of understanding and integration than we saw, even six months ago. Also: How to move your codebase into GitHub for analysis by ChatGPT Deep Research - and why you shouldThe speed with which it can change an entire codebase is terrifying. The damage it can do is potentially extraordinary. It will gleefully go through and modify everything in your codebase, and if you specify something wrong and then push or merge, you will have an epic mess on your hands. There is a deep inequality between how quickly it can change code and how long it will take a human to review those changes. Working on this scale will require excellent unit tests. Even tools like mine, which don't lend themselves to full unit testing, will require some kind of automated validation to prevent robot-driven errors on a massive scale. Those who are afraid these tools will take jobs from programmers should be concerned, but not in the way most people think. It is absolutely, totally, one-hundo-percent necessary for experienced coders to review and guide these agents. When I left out one critical instruction, the agent gleefully bricked my site. Since I was the person who wrote the code initially, I knew what to fix. But it would have been brutally difficult for someone else to figure out what had been left out and how to fix it. That would have required coming up to speed on all the hidden nuances of the entire architecture of the code. Also: How to turn ChatGPT into your AI coding power tool - and double your outputThe jobs that are likely to be destroyed are those of junior developers. Jules is easily doing junior developer level work. With tools like Jules or Codex or Copilot, that cost of a few hundred bucks a month at most, it's going to be hard for management to be willing to pay medium-to-high six figures for midlevel and junior programmers. Even outsourcing and offshoring isn't as cheap as using an AI agent to do maintenance coding. And, as I wrote about earlier in the week, if there are no mid-level jobs available, how will we train the experienced people we're going to need in the future? I am also concerned about how access limits will shake out. Productivity gains will drop like a rock if you need to do one more prompt and you have to wait a day to be allowed to do so. Screenshot by David Gewirtz/ZDNETAs for me, in less than 10 minutes, I turned out a new feature that had been requested by readers. While I was writing another article, I fed the prompt to Jules. I went back to work on the article, and checked on Jules when it was finished. I checked out the code, brought it down to my computer, and pushed a release. It took me longer to upload the thing to the WordPress repository than to add the entire new feature. 
For that class of feature, I got a half-a-day's work done in less than half an hour, from thinking about making it happen to published to my users. In the last two hours, 2,500 sites have downloaded and installed the new feature. That will surge to well over 10,000 by morning. Without Jules, those users probably would have been waiting months for this new feature, because I have a huge backlog of work, and it wasn't my top priority. But with Jules, it took barely any effort. Also: 7 productivity gadgets I can't live withoutThese tools are going to require programmers, managers, and investors to rethink the software development workflow. There will be glaring "you can't get there from here" gotchas. And there will be epic failures and coding errors. But I have no doubt that this is the next level of AI-based coding. Real, human intelligence is going to be necessary to figure out how to deal with it. Have you tried Google's Jules or any of the other new AI coding agents? Would you trust them to make direct changes to your codebase, or do you prefer to keep a tighter manual grip? What kinds of developer tasks do you think these tools should and shouldn't handle? Let us know in the comments below. Want more stories about AI? Sign up for Innovation, our weekly newsletter.You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.Featured #let #google039s #jules #agent #into
    WWW.ZDNET.COM
    I let Google's Jules AI agent into my code repo and it did four hours of work in an instant
    hemul75/Getty ImagesOkay. Deep breath. This is surreal. I just added an entire new feature to my software, including UI and functionality, just by typing four paragraphs of instructions. I have screenshots, and I'll try to make sense of it in this article. I can't tell if we're living in the future or we've just descended to a new plane of hell (or both).Let's take a step back. Google's Jules is the latest in a flood of new coding agents released just this week. I wrote about OpenAI Codex and Microsoft's GitHub Copilot Coding Agent at the beginning of the week, and ZDNET's Webb Wright wrote about Google's Jules. Also: I test a lot of AI coding tools, and this stunning new OpenAI release just saved me days of workAll of these coding agents will perform coding operations on a GitHub repository. GitHub, for those who've been following along, is the giant Microsoft-owned software storage, management, and distribution hub for much of the world's most important software, especially open source code. The difference, at least as it pertains to this article, is that Google made Jules available to everyone, for free. That meant I could just hop in and take it for a spin. And now my head is spinning. Usage limits and my first two prompts The free access version of Jules allows only five requests per day. That might not seem like a lot, but in only two requests, I was able to add a new feature to my software. So, don't discount what you can get done if you think through your prompts before shooting off your silver bullets for the day. My first two prompts were tentative. It wasn't that I wasn't impressed; it was that I really wasn't giving Jules much to do. I'm still not comfortable with the idea of setting an AI loose on all my code at once, so I played it safe. My first prompt asked Jules to document the "hooks" that add-on developers could use to add features to my product. I didn't tell Jules much about what I wanted. It returned some markup that it recommended dropping into my code's readme file. It worked, but meh. Screenshot by David Gewirtz/ZDNETI did have the opportunity to publish that code to a new GitHub branch, but I skipped it. It was just a test, after all. My second prompt was to ask Jules to suggest five new hooks. I got back an answer that seemed reasonable. However, I realized that opening up those capabilities in a security product was just too risky for me to delegate to an AI. I skipped those changes, too. It was at this point that Jules wanted a coffee break. It stopped functioning for about 90 minutes. Screenshot by David Gewirtz/ZDNETThat gave me time to think. What I really wanted to see was whether Jules could add some real functionality to my code and save me some time. Necessary background information My Private Site is a security plugin for WordPress. It's running on about 20,000 active sites. It puts a login dialog in front of the site's web pages. There are a bunch of options, but that's the key feature. I originally acquired the software a decade ago from a coder who called himself "jonradio," and have been maintaining and expanding it ever since. Also: Rust turns 10: How a broken elevator changed software foreverThe plugin provides access control to the front-end of a website, the pages that visitors see when they come to the site. Site owners control the plugin via a dashboard interface, with various admin functions available in the plugin's admin interface. 
I decided to try Jules out on a feature some users have requested, hiding the admin bar from logged-in users. The admin bar is the black bar WordPress puts on the top of a web page. In the case of the screenshot below, the black admin bar is visible. Screenshot by David Gewirtz/ZDNETI wanted Jules to add an option on the dashboard to hide the admin bar from logged-in users. The idea is that if a user logged in, the admin bar would be visible on the back end, but logged-in users browsing the front-end of the site wouldn't have to see the ugly bar. This is the original dashboard, before adding the new feature. Screenshot by David Gewirtz/ZDNETSome years ago, I completely rewrote the admin interface from the way it was when I acquired the plugin. Adding options to the interface is straightforward, but it's still time-consuming. Every option requires not only the UI element to be added, but also preference saving and preference recalling when the dashboard is displayed. That's in addition to any program logic that the preference controls. In practice, I've found that it takes me about 2-3 hours to add a preference UI element, along with the assorted housekeeping involved. It's not hard, but there are a lot of little fiddly bits that all need to be tweaked. That takes time. That should bring you up to speed enough to understand my next test of Jules. Here's a bit of foreshadowing: the first test failed miserably. The second test succeeded astonishingly. Instructing Jules Adding a hide admin bar feature is not something that would have been easy for the run-of-the-mill coding help we've been asking ChatGPT and the other chatbots to perform. As I mentioned, adding the new option to the dashboard requires programming in a variety of locations throughout the code, and also requires an understanding of the overall codebase. Here's what I told Jules. 1. On the Site Privacy Tab of the admin interface, add a new checkbox. Label the section "Admin Bar" and label the checkbox itself "Hide Admin Bar". [Place this in the MAKE SITE PRIVATE block, located just under the Enable login privacy checkbox and before the Site Privacy Mode segment.] I instructed Jules where I wanted the AI to put the new option. On my first run through, I made a mistake and left out the details in square brackets. I didn't tell Jules exactly where I wanted it to place the new option. As it turns out, that omission caused a big fail. Once I added in the sentence in brackets above, the feature worked. 2. Be sure to save the selection of that checkbox to the plugin's preferences variable when the Save Privacy Status button is checked. This makes sure Jules knows that there is a preference data structure, and to be sure to update it when the user makes a change. It's important to note that if I didn't have an understanding of the underlying code, I wouldn't have instructed Jules about this, and the code would not work. You can't "vibe code" something like this without knowing the underlying code. 3. Show the appropriate checked or unchecked status when the Site Privacy tab is displayed. This tells the AI that I want the interface to be updated to match what the preference variable specifies. 4. Based on the preference variable created in (2), add code to hide or show the WordPress admin bar. If Hide Admin Bar is checked, the Admin Bar should not be visible to logged-in WordPress front-end users. If the Hide Admin Bar is not checked, the Admin Bar should be visible to logged-in front-end users. 
Logged-in back-end users in the admin interface should always be able to see the admin bar. This describes the business logic that the new preference should control. It requires the AI to know how to hide or show the admin bar (a WordPress API call is used), and it requires the AI to know where to put the code in my plugin to enable or disable this feature. And with that, Jules was trained on what I wanted. Jules dives into my code I fed my prompt set into Jules and got back a plan of action. Pay close attention to that Approve Plan? button. Screenshot by David Gewirtz/ZDNETI didn't even get a chance to read through the plan before Jules decided to approve the plan on its own. It did this after every plan it presented. An AI that doesn't wait for permission raises the hairs on the back of my neck. Just saying. Screenshot by David Gewirtz/ZDNETI desperately want to make a Skynet/Landru/Colossus/P1/Hal kind of joke, because I'm freaked out. I mean, it's good. But I'm freaked out. Here's some of the code Jules wrote. The shaded green is the new stuff. I'm not thrilled with the color scheme, but I'm sure that will be tweakable over time. Also: The best free AI courses and certificates in 2025More relevant is the fact that Jules picked up on my variable naming conventions and the architecture of my code and dived right in. This is the new option, rendered in code. Screenshot by David Gewirtz/ZDNETBy the time it was done, Jules had written in all the code changes it planned for originally, plus some test code. I don't use standardized tests. I would have told Jules not to do it the way it planned, but it never gave me time to approve or modify its original plan. Even so, it worked out. Screenshot by David Gewirtz/ZDNETI pushed the Publish branch button, which caused GitHub to create a new branch, separate from my main repository. Jules then published its changes to that branch. Screenshot by David Gewirtz/ZDNETThis is how contributors to big projects can work on those projects without causing chaos to the main code line. Up to this point, I could look at the code, but I wasn't able to run it. But by pushing the code to a branch, Jules and GitHub made it possible for me to replicate the changes safely down to my computer to test them out. If I didn't like the changes, I could have just switched back to the main branch and no harm, no foul. But I did like the changes, so I moved on to the next step. Around the code in 8 clicks Once I brought the branch down to my development machine, I could test it out. Here's the new dashboard with the Hide Admin Menu feature. Screenshot by David Gewirtz/ZDNETI tried turning the feature on and off and making sure the settings stuck. They did. I also tried other features in the plugin to make sure nothing else had broken. I was pretty sure nothing would, because I reviewed all the changes before approving the branch. But still. Testing is a good thing to do. I then logged into the test website. As you can see, there's no admin bar showing. Screenshot by David Gewirtz/ZDNETAt this point, the process was out of the AI's hands. It was simply time to deploy the changes, both back to GitHub and to the master WordPress repository. First, I used GitHub Desktop to merge the branch code back into the main branch on my development machine. I changed "Hide Admin Menu" to "Hide admin menu" in my code's main branch, because I like it better. I pushed that (the full main branch on my local machine) back to the GitHub cloud. 
Screenshot by David Gewirtz/ZDNETThen, because I just don't like random branches hanging around once they've been incorporated into the distribution version, I deleted the new branch on my computer. Screenshot by David Gewirtz/ZDNETI also deleted the new branch from the GitHub cloud service. Screenshot by David Gewirtz/ZDNETFinally, I packaged up the new code. I added a change to the readme to describe the new feature and to update the code's version number. Then, I pushed it using SVN (the source code control system used by the WordPress community) up to the WordPress plugin repository. Journey to the center of the code Jules is very definitely beta right now. It hung in a few places. Some screens didn't update. It decided to check out for 90 minutes. I had to wait while it went to and came back from its digital happy place. It's evidencing all the sorts of things you'd expect from a newly-released piece of code. I have no concerns about that. Google will clean it up. The fact that Jules (and presumably OpenAI Codex and GitHub Copilot Coding Agent) can handle an entire repository of code across a bunch of files is big. That's a much deeper level of understanding and integration than we saw, even six months ago. Also: How to move your codebase into GitHub for analysis by ChatGPT Deep Research - and why you shouldThe speed with which it can change an entire codebase is terrifying. The damage it can do is potentially extraordinary. It will gleefully go through and modify everything in your codebase, and if you specify something wrong and then push or merge, you will have an epic mess on your hands. There is a deep inequality between how quickly it can change code and how long it will take a human to review those changes. Working on this scale will require excellent unit tests. Even tools like mine, which don't lend themselves to full unit testing, will require some kind of automated validation to prevent robot-driven errors on a massive scale. Those who are afraid these tools will take jobs from programmers should be concerned, but not in the way most people think. It is absolutely, totally, one-hundo-percent necessary for experienced coders to review and guide these agents. When I left out one critical instruction, the agent gleefully bricked my site. Since I was the person who wrote the code initially, I knew what to fix. But it would have been brutally difficult for someone else to figure out what had been left out and how to fix it. That would have required coming up to speed on all the hidden nuances of the entire architecture of the code. Also: How to turn ChatGPT into your AI coding power tool - and double your outputThe jobs that are likely to be destroyed are those of junior developers. Jules is easily doing junior developer level work. With tools like Jules or Codex or Copilot, that cost of a few hundred bucks a month at most, it's going to be hard for management to be willing to pay medium-to-high six figures for midlevel and junior programmers. Even outsourcing and offshoring isn't as cheap as using an AI agent to do maintenance coding. And, as I wrote about earlier in the week, if there are no mid-level jobs available, how will we train the experienced people we're going to need in the future? I am also concerned about how access limits will shake out. Productivity gains will drop like a rock if you need to do one more prompt and you have to wait a day to be allowed to do so. 
Screenshot by David Gewirtz/ZDNET

As for me, in less than 10 minutes, I turned out a new feature that had been requested by readers. While I was writing another article, I fed the prompt to Jules. I went back to work on the article and checked on Jules when it was finished. I checked out the code, brought it down to my computer, and pushed a release.

It took me longer to upload the thing to the WordPress repository than to add the entire new feature. For that class of feature, I got half a day's work done in less than half an hour, from thinking about making it happen to published to my users. In the last two hours, 2,500 sites have downloaded and installed the new feature. That will surge to well over 10,000 by morning (it's about 8 p.m. now as I write this). Without Jules, those users probably would have been waiting months for this new feature, because I have a huge backlog of work, and it wasn't my top priority. But with Jules, it took barely any effort.

Also: 7 productivity gadgets I can't live without (and why they make such a big difference)

These tools are going to require programmers, managers, and investors to rethink the software development workflow. There will be glaring "you can't get there from here" gotchas. And there will be epic failures and coding errors. But I have no doubt that this is the next level of AI-based coding. Real, human intelligence is going to be necessary to figure out how to deal with it.

Have you tried Google's Jules or any of the other new AI coding agents? Would you trust them to make direct changes to your codebase, or do you prefer to keep a tighter manual grip? What kinds of developer tasks do you think these tools should and shouldn't handle? Let us know in the comments below.

Want more stories about AI? Sign up for Innovation, our weekly newsletter.

You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
  • News/Media Alliance calls Google's AI Mode 'theft'

    The News/Media Alliance took aim at Google today after the tech company's announcement at its I/O showcase that AI Mode will be rolling out to all US search users. This feature more closely integrates an AI chatbot into Google search. Ostensibly, AI Mode can help people get better answers to their queries, but it also serves to keep users on a Google property rather than clicking through to get information from other publications.
    "Links were the last redeeming quality of search that gave publishers traffic and revenue. Now Google just takes content by force and uses it with no return, the definition of theft," said News/Media Alliance President and CEO Danielle Coffey. "The DOJ remedies must address this to prevent continued domination of the internet by one company."
    This isn't the first time the organization has fired shots at Google; it filed an amicus brief earlier this month seeking remedies in the antitrust case over Google's monopoly control of search. The group argued that publishers should be able to opt out of letting search engines use their content for retrieval-augmented generation.
    Google has also taken an aggressive stance toward publishers as it develops more AI-driven services. The company's recent attitude can be seen in Bloomberg's discovery of an internal document showing that the company decided not to give publishers a choice to opt out of AI training if they wanted their material to appear in search results. This article originally appeared on Engadget at https://www.engadget.com/big-tech/newsmedia-alliance-calls-googles-ai-mode-theft-223128521.html?src=rss