• Inside Mark Zuckerberg’s AI hiring spree

AI researchers have recently been asking themselves a version of the question, “Is that really Zuck?” As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new “superintelligence” AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit’s work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone. For those who do agree to hear his pitch (amazingly, not all of them do), Zuckerberg highlights the latitude they’ll have to make risky bets, the scale of Meta’s products, and the money he’s prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta’s headquarters, where I’m told the desks have already been rearranged for the incoming team.

Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I’ve covered before, hiring the best AI researchers is like hiring star basketball players: there are very few of them, and you have to pay up. Case in point: Zuckerberg basically just paid 14 Instagrams to hire away Scale AI CEO Alexandr Wang. It’s easily the most expensive hire of all time, dwarfing the billions that Google spent to rehire Noam Shazeer and his core team from Character.AI (a deal Zuckerberg passed on). “Opportunities of this magnitude often come at a cost,” Wang wrote in his note to employees this week. “In this instance, that cost is my departure.”

Zuckerberg’s recruiting spree is already starting to rattle his competitors. The day before his offer deadline for some senior OpenAI employees, Sam Altman dropped an essay proclaiming that “before anything else, we are a superintelligence research company.” And after Zuckerberg tried to hire DeepMind CTO Koray Kavukcuoglu, Kavukcuoglu was given a larger SVP title and now reports directly to Google CEO Sundar Pichai. I expect Wang to have the title of “chief AI officer” at Meta when the new lab is announced. Jack Rae, a principal researcher from DeepMind who has signed on, will lead pre-training.

Meta certainly needs a reset. According to my sources, Llama has fallen so far behind that Meta’s product teams have recently discussed using AI models from other companies (although that is highly unlikely to happen). Meta’s internal coding tool for engineers, however, is already using Claude. While Meta’s existing AI researchers have good reason to be looking over their shoulders, Zuckerberg’s $14.3 billion investment in Scale is making many longtime employees, or Scaliens, quite wealthy. They were popping champagne in the office this morning. Then, Wang held his last all-hands meeting to say goodbye and cried. He didn’t mention what he would be doing at Meta. I expect his new team will be unveiled within the next few weeks, after Zuckerberg gets a critical number of members to officially sign on.

Tim Cook. Getty Images / The Verge

Apple’s AI problem

Apple is accustomed to being on top of the tech industry, and for good reason: the company has enjoyed a nearly unrivaled run of dominance. After spending time at Apple HQ this week for WWDC, I’m not sure that its leaders appreciate the meteorite that is heading their way. The hubris they display suggests they don’t understand how AI is fundamentally changing how people use and build software.

Heading into the keynote on Monday, everyone knew not to expect the revamped Siri that had been promised the previous year. Apple, to its credit, acknowledged that it dropped the ball there, and it sounds like a large language model rebuild of Siri is very much underway and coming in 2026.

The AI industry moves much faster than Apple’s release schedule, though. By the time Siri is perhaps good enough to keep pace, it will have to contend with the lock-in that OpenAI and others are building through their memory features. Apple and OpenAI are currently partners, but both companies want to ultimately control the interface for interacting with AI, which puts them on a collision course. Apple’s decision to let developers use its own on-device foundation models for free in their apps sounds strategically smart, but unfortunately, the models look far from leading. Apple ran its own benchmarks, which aren’t impressive, and has confirmed a measly context window of 4,096 tokens. It’s also saying that the models will be updated alongside its operating systems — a snail’s pace compared to how quickly AI companies move. I’d be surprised if any serious developers use these Apple models, although I can see them being helpful to indie devs who are just getting started and don’t want to spend on the leading cloud models. I don’t think most people care about the privacy angle that Apple is claiming as a differentiator; they are already sharing their darkest secrets with ChatGPT and other assistants.

Some of the new Apple Intelligence features I demoed this week were impressive, such as live language translation for calls. Mostly, I came away with the impression that the company is heavily leaning on its ChatGPT partnership as a stopgap until Apple Intelligence and Siri are both where they need to be.

AI probably isn’t a near-term risk to Apple’s business. No one has shipped anything close to the contextually aware Siri that was demoed at last year’s WWDC. People will continue to buy Apple hardware for a long time, even after Sam Altman and Jony Ive announce their first AI device for ChatGPT next year. AR glasses aren’t going mainstream anytime soon either, although we can expect to see more eyewear from Meta, Google, and Snap over the coming year. In aggregate, these AI-powered devices could begin to siphon away engagement from the iPhone, but I don’t see people fully replacing their smartphones for a long time. The bigger question after this week is whether Apple has what it takes to rise to the occasion and culturally reset itself for the AI era. I would have loved to hear Tim Cook address this issue directly, but the only interview he did for WWDC was a cover story in Variety about the company’s new F1 movie.

Elsewhere

AI agents are coming. I recently caught up with Databricks CEO Ali Ghodsi ahead of his company’s annual developer conference this week in San Francisco. Given Databricks’ position, he has a unique, bird’s-eye view of where things are headed for AI. He doesn’t envision a near-term future where AI agents completely automate real-world tasks, but he does predict a wave of startups over the next year that will come close to completing actions in areas such as travel booking. He thinks humans will need (and want) to approve what an agent does before it goes off and completes a task. “We have most of the airplanes flying automated, and we still want pilots in there.”

Buyouts are the new normal at Google. That much is clear after this week’s rollout of the “voluntary exit program” in core engineering, the Search organization, and some other divisions. In his internal memo, Search SVP Nick Fox was clear that management thinks buyouts have been successful in other parts of the company that have tried them. In a separate memo I saw, engineering exec Jen Fitzpatrick called the buyouts an “opportunity to create internal mobility and fresh growth opportunities.” Google appears to be attempting a cultural reset, which will be a challenging task for a company of its size. We’ll see if it can pull it off.

Evan Spiegel wants help with AR glasses. I doubt that his announcement that consumer glasses are coming next year was solely aimed at AR developers. Telegraphing the plan and announcing that Snap has spent $3 billion on hardware to date feels more aimed at potential partners that want to make a bigger glasses play, such as Google. A strategic investment could help insulate Snap from the pain of the stock market. A full acquisition may not be off the table, either. When he was recently asked if he’d be open to a sale, Spiegel didn’t shut it down like he always has, but instead said he’d “consider anything” that helps the company “create the next computing platform.”

Link list

More to click on:

If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting. As always, I welcome your feedback, especially if you’re an AI researcher fielding a juicy job offer. You can respond here or ping me securely on Signal. Thanks for subscribing.
  • Google Announces Live Translation Yet Again, This Time in Google Meet

It's that time again, for Google to announce that real-time translation has come to one of its communication apps. This time, it's Google Meet, which can translate between English and Spanish as you speak in a video call. If that sounds familiar, it's because it's not the first time Google has announced something like this.

Google Translate has had features that let you speak to someone in another language in real time for a while. For example, back in 2019, there was a real-time translation feature called Interpreter Mode built into Google Assistant. It's also been possible on Pixel phones for a while (and even Samsung phones). Most of these, however, have been either text-to-text or speech-to-text. You can use the Google Translate app for a speech-to-speech experience, but like with Google Assistant's Interpreter Mode, that only works in person.

So, what's different here? Well, during its I/O keynote, Google demoed two users in a video chat speaking in their native languages. Google Meet then translates and speaks the translation back in a relatively human-sounding voice. This new feature is available now for Google Workspace subscribers (plans start at $7/month), but unfortunately, it's not in the free version. On the plus side, additional languages are promised to start coming out in just a few weeks.

While I haven't tested it out yet, it does seem to be a more convenient way to access a feature that you might otherwise have to hack together with another tab, or by opening your phone and holding it up to a speaker. Plus, it can be a bit more natural to hear translations spoken out for you, rather than having to rely on translated captions. I do wonder whether it can keep up with the natural speed and flow of a conversation, though—nobody likes to feel interrupted.
  • Google’s ‘world-model’ bet: building the AI operating layer before Microsoft captures the UI

    After three hours at Google’s I/O 2025 event last week in Silicon Valley, it became increasingly clear: Google is rallying its formidable AI efforts – prominently branded under the Gemini name but encompassing a diverse range of underlying model architectures and research – with laser focus. It is releasing a slew of innovations and technologies around it, then integrating them into products at a breathtaking pace.
    Beyond headline-grabbing features, Google laid out a bolder ambition: an operating system for the AI age – not the disk-booting kind, but a logic layer every app could tap – a “world model” meant to power a universal assistant that understands our physical surroundings, and reasons and acts on our behalf. It’s a strategic offensive that many observers may have missed amid the bamboozlement of features. 
    On one hand, it’s a high-stakes strategy to leapfrog entrenched competitors. But on the other, as Google pours billions into this moonshot, a critical question looms: Can Google’s brilliance in AI research and technology translate into products faster than its rivals, whose own brilliance lies in packaging AI into immediately accessible and commercially potent products? Can Google out-maneuver a laser-focused Microsoft, fend off OpenAI’s vertical hardware dreams, and, crucially, keep its own search empire alive in the disruptive currents of AI?
    Google is already pursuing this future at dizzying scale. Pichai told I/O that the company now processes 480 trillion tokens a month – 50x more than a year ago – and almost 5x more than the 100 trillion tokens a month that Microsoft’s Satya Nadella said his company processed. This momentum is also reflected in developer adoption, with Pichai saying that over 7 million developers are now building with the Gemini API, representing a five-fold increase since the last I/O, while Gemini usage on Vertex AI has surged more than 40 times. And unit costs keep falling as Gemini 2.5 models and the Ironwood TPU squeeze more performance from each watt and dollar. AI Mode and AI Overviews are the live test beds where Google tunes latency, quality, and future ad formats as it shifts search into an AI-first era.
    Source: Google I/O 2025
    Google’s doubling down on what it calls “a world model” – an AI it aims to imbue with a deep understanding of real-world dynamics – and with it a vision for a universal assistant powered by Google, not other companies, creates another big tension: How much control does Google want over this all-knowing assistant, built upon its crown jewel of search? Does it primarily want to leverage it first for itself, to save a multibillion-dollar search business that depends on owning the starting point and avoiding disruption by OpenAI? Or will Google fully open its foundational AI for other developers and companies to leverage – another segment representing a significant portion of its business, engaging over 20 million developers, more than any other company?
    It has sometimes stopped short of a radical focus on building these core products for others with the same clarity as its nemesis, Microsoft. That’s because it keeps a lot of core functionality reserved for its cherished search engine. That said, Google is making significant efforts to provide developer access wherever possible. A telling example is Project Mariner. Google could have embedded the agentic browser-automation features directly inside Chrome, giving consumers an immediate showcase under Google’s full control. However, Google followed up by saying Mariner’s computer-use capabilities would be released via the Gemini API more broadly “this summer.” This signals that external access is coming for any rival that wants comparable automation. In fact, Google said partners Automation Anywhere and UiPath were already building with it.
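    To make the developer-access point concrete, here is a minimal sketch of what “building with the Gemini API” looks like today: an ordinary text-generation call, not the Mariner-style computer-use tooling, which Google says arrives later. This is an illustrative example rather than anything shown at I/O; it assumes the google-genai Python SDK, a placeholder API key, and a model ID that may differ from whatever Google currently serves.

```python
# Minimal sketch of a plain Gemini API call (assumes the google-genai Python SDK:
# pip install google-genai). The model ID below is illustrative; check Google's
# current model list before relying on it.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key, not a real credential

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed/illustrative model ID
    contents="In two sentences, explain what a 'world model' means in AI research.",
)
print(response.text)  # plain text of the model's reply
```

    The point is less the specific SDK than the shape of the on-ramp: the same API surface that Google says will later expose Mariner-style computer use is already a one-call integration for text, which is why the 7-million-developer figure matters strategically.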
    Google’s grand design: the ‘world model’ and universal assistant
    The clearest articulation of Google’s grand design came from Demis Hassabis, CEO of Google DeepMind, during the I/O keynote. He stated Google continued to “double down” on efforts towards artificial general intelligence. While Gemini was already “the best multimodal model,” Hassabis explained, Google is working hard to “extend it to become what we call a world model. That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does.” 
    This concept of a “world model,” as articulated by Hassabis, is about creating AI that learns the underlying principles of how the world works – simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. An early but significant indicator of this direction, easily overlooked by those not steeped in foundational AI research, is Google DeepMind’s work on models like Genie 2. This research shows how to generate interactive, two-dimensional game environments and playable worlds from varied prompts like images or text. It offers a glimpse at an AI that can simulate and understand dynamic systems.
    Hassabis has developed this concept of a “world model” and its manifestation as a “universal AI assistant” in several talks since late 2024, and it was presented at I/O most comprehensively – with CEO Sundar Pichai and Gemini lead Josh Woodward echoing the vision on the same stage. Speaking about the Gemini app, Google’s equivalent to OpenAI’s ChatGPT, Hassabis declared, “This is our ultimate vision for the Gemini app, to transform it into a universal AI assistant, an AI that’s personal, proactive, and powerful, and one of our key milestones on the road to AGI.” 
    This vision was made tangible through I/O demonstrations. Google demoed a new app called Flow – a drag-and-drop filmmaking canvas that preserves character and camera consistency – that leverages Veo 3, the new model that layers physics-aware video and native audio. To Hassabis, that pairing is early proof that “world-model understanding is already leaking into creative tooling.” For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that “AI systems will need world models to operate effectively.”
    CEO Sundar Pichai reinforced this, citing Project Astra, which “explores the future capabilities of a universal AI assistant that can understand the world around you.” These Astra capabilities, like live video understanding and screen sharing, are now integrated into Gemini Live. Josh Woodward, who leads Google Labs and the Gemini App, detailed the app’s goal to be the “most personal, proactive, and powerful AI assistant.” He showcased how “personal context” enables Gemini to anticipate needs, like providing personalized exam quizzes or custom explainer videos using analogies a user understands – capabilities that form the core intelligence of the assistant Google envisions. Google also quietly previewed Gemini Diffusion, signalling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. Google is stuffing these capabilities into a crowded toolkit: AI Studio and Firebase Studio are core starting points for developers, while Vertex AI remains the enterprise on-ramp.
    The strategic stakes: defending search, courting developers amid an AI arms race
    This colossal undertaking is driven by Google’s massive R&D capabilities but also by strategic necessity. In the enterprise software landscape, Microsoft has a formidable hold, a Fortune 500 Chief AI Officer told VentureBeat, and it reassures customers with its full commitment to Copilot tooling. The executive requested anonymity because of the sensitivity of commenting on the intense competition between the AI cloud providers. Microsoft’s dominance in Office 365 productivity applications will be exceptionally hard to dislodge through direct feature-for-feature competition, the executive said.
    Google’s path to potential leadership – its “end-run” around Microsoft’s enterprise hold – lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly “universal AI assistant” powered by a comprehensive world model, it could become the new indispensable layer – the effective operating system – for how users and businesses interact with technology. As Pichai mused with podcaster David Friedberg shortly before I/O, that means awareness of physical surroundings. And so AR glasses, Pichai said, “maybe that’s the next leap…that’s what’s exciting for me.”
    But this AI offensive is a race against multiple clocks. First, the multibillion-dollar search-ads engine that funds Google must be protected even as it is reinvented. The U.S. Department of Justice’s monopolization ruling still hangs over Google – divestiture of Chrome has been floated as the leading remedy. And in Europe, the Digital Markets Act as well as emerging copyright-liability lawsuits could hem in how freely Gemini crawls or displays the open web.
    Finally, execution speed matters. Google has been criticized for moving slowly in past years. But over the past 12 months, it became clear Google had been working patiently on multiple fronts, and that work has paid off with faster growth than rivals. The challenge of successfully navigating this AI transition at massive scale is immense, as evidenced by the recent Bloomberg report detailing how even a tech titan like Apple is grappling with significant setbacks and internal reorganizations in its AI initiatives. This industry-wide difficulty underscores the high stakes for all players. While Pichai lacks the showmanship of some rivals, the long list of enterprise customer testimonials Google paraded at its Cloud Next event last month – about actual AI deployments – underscores a leader who lets sustained product cadence and enterprise wins speak for themselves. 
    At the same time, focused competitors advance. Microsoft’s enterprise march continues. Its Build conference showcased Microsoft 365 Copilot as the “UI for AI,” Azure AI Foundry as a “production line for intelligence,” and Copilot Studio for sophisticated agent-building, with impressive low-code workflow demos. Nadella’s “open agentic web” vision offers businesses a pragmatic AI adoption path, allowing selective integration of AI tech – whether it be Google’s or another competitor’s – within a Microsoft-centric framework.
    OpenAI, meanwhile, is way out ahead with the consumer reach of its ChatGPT product, with recent references by the company to having 600 million monthly users and 800 million weekly users. This compares to the Gemini app’s 400 million monthly users. And in December, OpenAI launched a full-blown search offering, and is reportedly planning an ad offering – posing what could be an existential threat to Google’s search model. Beyond making leading models, OpenAI is making a provocative vertical play with its reported multibillion-dollar acquisition of Jony Ive’s IO, pledging to move “beyond these legacy products” – and hinting that it was launching a hardware product that would attempt to disrupt AI just like the iPhone disrupted mobile. While any of this may potentially disrupt Google’s next-gen personal computing ambitions, it’s also true that OpenAI’s ability to build a deep moat like Apple did with the iPhone may be limited in an AI era increasingly defined by open protocols and easier model interchangeability.
    Internally, Google navigates its vast ecosystem. As Jeanine Banks, Google’s VP of Developer X, told VentureBeat, serving Google’s diverse global developer community means “it’s not a one size fits all,” leading to a rich but sometimes complex array of tools – AI Studio, Vertex AI, Firebase Studio, numerous APIs.
    Meanwhile, Amazon is pressing from another flank: Bedrock already hosts Anthropic, Meta, Mistral and Cohere models, giving AWS customers a pragmatic, multi-model default.
    For enterprise decision-makers: navigating Google’s ‘world model’ future
    Google’s audacious bid to build the foundational intelligence for the AI age presents enterprise leaders with compelling opportunities and critical considerations:

    Move now or retrofit later: Falling a release cycle behind could force costly rewrites when assistant-first interfaces become default.
    Tap into revolutionary potential: For organizations seeking to embrace the most powerful AI, leveraging Google’s “world model” research, multimodal capabilities, and the AGI trajectory promised by Google offers a path to potentially significant innovation.
    Prepare for a new interaction paradigm: Success for Google’s “universal assistant” would mean a primary new interface for services and data. Enterprises should strategize for integration via APIs and agentic frameworks for context-aware delivery.
    Factor in the long game: Aligning with Google’s vision is a long-term commitment. The full “world model” and AGI are potentially distant horizons. Decision-makers must balance this with immediate needs and platform complexities.
    Contrast with focused alternatives: Pragmatic solutions from Microsoft offer tangible enterprise productivity now. Disruptive hardware-AI from OpenAI/IO presents another distinct path. A diversified strategy, leveraging the best of each, often makes sense, especially with the increasingly open agentic web allowing for such flexibility.

    These complex choices and real-world AI adoption strategies will be central to discussions at VentureBeat’s Transform 2025 next month. The leading independent event brings enterprise technical decision-makers together with leaders from pioneering companies to share firsthand experiences on platform choices – Google, Microsoft, and beyond – and navigating AI deployment, all curated by the VentureBeat editorial team. With limited seating, early registration is encouraged.
    Google’s defining offensive: shaping the future or strategic overreach?
    Google’s I/O spectacle was a strong statement: Google signalled that it intends to architect and operate the foundational intelligence of the AI-driven future. Its pursuit of a “world model” and its AGI ambitions aim to redefine computing, outflank competitors, and secure its dominance. The audacity is compelling; the technological promise is immense.
    The big question is execution and timing. Can Google innovate and integrate its vast technologies into a cohesive, compelling experience faster than rivals solidify their positions? Can it do so while transforming search and navigating regulatory challenges? And can it do so while focused so broadly on both consumers and business – an agenda that is arguably much broader than that of its key competitors?
    The next few years will be pivotal. If Google delivers on its “world model” vision, it may usher in an era of personalized, ambient intelligence, effectively becoming the new operational layer for our digital lives. If not, its grand ambition could be a cautionary tale of a giant reaching for everything, only to find the future defined by others who aimed more specifically, more quickly. 

    Daily insights on business use cases with VB Daily
    If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.
    Read our Privacy Policy

    Thanks for subscribing. Check out more VB newsletters here.

    An error occured.
    #googles #worldmodel #bet #building #operating
    Google’s ‘world-model’ bet: building the AI operating layer before Microsoft captures the UI
    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More After three hours at Google’s I/O 2025 event last week in Silicon Valley, it became increasingly clear: Google is rallying its formidable AI efforts – prominently branded under the Gemini name but encompassing a diverse range of underlying model architectures and research – with laser focus. It is releasing a slew of innovations and technologies around it, then integrating them into products at a breathtaking pace. Beyond headline-grabbing features, Google laid out a bolder ambition: an operating system for the AI age – not the disk-booting kind, but a logic layer every app could tap – a “world model” meant to power a universal assistant that understands our physical surroundings, and reasons and acts on our behalf. It’s a strategic offensive that many observers may have missed amid the bamboozlement of features.  On one hand, it’s a high-stakes strategy to leapfrog entrenched competitors. But on the other, as Google pours billions into this moonshot, a critical question looms: Can Google’s brilliance in AI research and technology translate into products faster than its rivals, whose edge has its own brilliance: packaging AI into immediately accessible and commercially potent products? Can Google out-maneuver a laser-focused Microsoft, fend off OpenAI’s vertical hardware dreams, and, crucially, keep its own search empire alive in the disruptive currents of AI? Google is already pursuing this future at dizzying scale. Pichai told I/O that the company now processes 480 trillion tokens a month – 50× more than a year ago – and almost 5x more than the 100 trillion tokens a month that Microsoft’s Satya Nadella said his company processed. This momentum is also reflected in developer adoption, with Pichai saying that over 7 million developers are now building with the Gemini API, representing a five-fold increase since the last I/O, while Gemini usage on Vertex AI has surged more than 40 times. And unit costs keep falling as Gemini 2.5 models and the Ironwood TPU squeeze more performance from each watt and dollar. AI Modeand AI Overviewsare the live test beds where Google tunes latency, quality, and future ad formats as it shifts search into an AI-first era. Source: Google I/O 20025 Google’s doubling-down on what it calls “a world model” – an AI it aims to imbue with a deep understanding of real-world dynamics – and with it a vision for a universal assistant – one powered by Google, and not other companies – creates another big tension: How much control does Google want over this all-knowing assistant, built upon its crown jewel of search? Does it primarily want to leverage it first for itself, to save its billion search business that depends on owning the starting point and avoiding disruption by OpenAI? Or will Google fully open its foundational AI for other developers and companies to leverage – another  segment representing a significant portion of its business, engaging over 20 million developers, more than any other company?  It has sometimes stopped short of a radical focus on building these core products for others with the same clarity as its nemesis, Microsoft. That’s because it keeps a lot of core functionality reserved for its cherished search engine. That said, Google is making significant efforts to provide developer access wherever possible. A telling example is Project Mariner. 
Google could have embedded the agentic browser-automation features directly inside Chrome, giving consumers an immediate showcase under Google’s full control. However, Google followed up by saying Mariner’s computer-use capabilities would be released via the Gemini API more broadly “this summer.” This signals that external access is coming for any rival that wants comparable automation. In fact, Google said partners Automation Anywhere and UiPath were already building with it. Google’s grand design: the ‘world model’ and universal assistant The clearest articulation of Google’s grand design came from Demis Hassabis, CEO of Google DeepMind, during the I/O keynote. He stated Google continued to “double down” on efforts towards artificial general intelligence. While Gemini was already “the best multimodal model,” Hassabis explained, Google is working hard to “extend it to become what we call a world model. That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does.”  This concept of ‘a world model,’ as articulated by Hassabis, is about creating AI that learns the underlying principles of how the world works – simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. An early, perhaps easily overlooked by those not steeped in foundational AI research, yet significant indicator of this direction is Google DeepMind’s work on models like Genie 2. This research shows how to generate interactive, two-dimensional game environments and playable worlds from varied prompts like images or text. It offers a glimpse at an AI that can simulate and understand dynamic systems. Hassabis has developed this concept of a “world model” and its manifestation as a “universal AI assistant” in several talks since late 2024, and it was presented at I/O most comprehensively – with CEO Sundar Pichai and Gemini lead Josh Woodward echoing the vision on the same stage.Speaking about the Gemini app, Google’s equivalent to OpenAI’s ChatGPT, Hassabis declared, “This is our ultimate vision for the Gemini app, to transform it into a universal AI assistant, an AI that’s personal, proactive, and powerful, and one of our key milestones on the road to AGI.”  This vision was made tangible through I/O demonstrations. Google demoed a new app called Flow – a drag-and-drop filmmaking canvas that preserves character and camera consistency – that leverages Veo 3, the new model that layers physics-aware video and native audio. To Hassabis, that pairing is early proof that ‘world-model understanding is already leaking into creative tooling.’ For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that ‘AI systems will need world models to operate effectively.” CEO Sundar Pichai reinforced this, citing Project Astra which “explores the future capabilities of a universal AI assistant that can understand the world around you.” These Astra capabilities, like live video understanding and screen sharing, are now integrated into Gemini Live. Josh Woodward, who leads Google Labs and the Gemini App, detailed the app’s goal to be the “most personal, proactive, and powerful AI assistant.” He showcased how “personal context”enables Gemini to anticipate needs, like providing personalized exam quizzes or custom explainer videos using analogies a user understandsform the core intelligence. 
Google also quietly previewed Gemini Diffusion, signalling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. Google is stuffing these capabilities into a crowded toolkit: AI Studio and Firebase Studio are core starting points for developers, while Vertex AI remains the enterprise on-ramp. The strategic stakes: defending search, courting developers amid an AI arms race This colossal undertaking is driven by Google’s massive R&D capabilities but also by strategic necessity. In the enterprise software landscape, Microsoft has a formidable hold, a Fortune 500 Chief AI Officer told VentureBeat, reassuring customers with its full commitment to tooling Copilot. The executive requested anonymity because of the sensitivity of commenting on the intense competition between the AI cloud providers. Microsoft’s dominance in Office 365 productivity applications will be exceptionally hard to dislodge through direct feature-for-feature competition, the executive said. Google’s path to potential leadership – its “end-run” around Microsoft’s enterprise hold – lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly “universal AI assistant” powered by a comprehensive world model, it could become the new indispensable layer – the effective operating system – for how users and businesses interact with technology. As Pichai mused with podcaster David Friedberg shortly before I/O, that means awareness of physical surroundings. And so AR glasses, Pichai said, “maybe that’s the next leap…that’s what’s exciting for me.” But this AI offensive is a race against multiple clocks. First, the billion search-ads engine that funds Google must be protected even as it is reinvented. The U.S. Department of Justice’s monopolization ruling still hangs over Google – divestiture of Chrome has been floated as the leading remedy. And in Europe, the Digital Markets Act as well as emerging copyright-liability lawsuits could hem in how freely Gemini crawls or displays the open web. Finally, execution speed matters. Google has been criticized for moving slowly in past years. But over the past 12 months, it became clear Google had been working patiently on multiple fronts, and that it has paid off with faster growth than rivals. The challenge of successfully navigating this AI transition at massive scale is immense, as evidenced by the recent Bloomberg report detailing how even a tech titan like Apple is grappling with significant setbacks and internal reorganizations in its AI initiatives. This industry-wide difficulty underscores the high stakes for all players. While Pichai lacks the showmanship of some rivals, the long list of enterprise customer testimonials Google paraded at its Cloud Next event last month – about actual AI deployments – underscores a leader who lets sustained product cadence and enterprise wins speak for themselves.  At the same time, focused competitors advance. Microsoft’s enterprise march continues. Its Build conference showcased Microsoft 365 Copilot as the “UI for AI,” Azure AI Foundry as a “production line for intelligence,” and Copilot Studio for sophisticated agent-building, with impressive low-code workflow demos. Nadella’s “open agentic web” visionoffers businesses a pragmatic AI adoption path, allowing selective integration of AI tech – whether it be Google’s or another competitor’s – within a Microsoft-centric framework. 
OpenAI, meanwhile, is way out ahead in consumer reach with ChatGPT; the company has recently cited 600 million monthly and 800 million weekly users, against the Gemini app's 400 million monthly users. In December, OpenAI launched a full-blown search offering, and it is reportedly planning an ad offering – posing what could be an existential threat to Google's search model. Beyond making leading models, OpenAI is making a provocative vertical play with its reported $6.5 billion acquisition of Jony Ive's IO, pledging to move "beyond these legacy products" – and hinting that it will launch a hardware product that attempts to disrupt AI just as the iPhone disrupted mobile. While any of this may disrupt Google's next-gen personal computing ambitions, it's also true that OpenAI's ability to build a deep moat like Apple did with the iPhone may be limited in an AI era increasingly defined by open protocols (like MCP) and easier model interchangeability.

Internally, Google navigates its vast ecosystem. As Jeanine Banks, Google's VP of Developer X, told VentureBeat, serving Google's diverse global developer community means "it's not a one size fits all," leading to a rich but sometimes complex array of tools – AI Studio, Vertex AI, Firebase Studio, numerous APIs. Meanwhile, Amazon is pressing from another flank: Bedrock already hosts Anthropic, Meta, Mistral, and Cohere models, giving AWS customers a pragmatic, multi-model default.

For enterprise decision-makers: navigating Google's 'world model' future

Google's audacious bid to build the foundational intelligence for the AI age presents enterprise leaders with compelling opportunities and critical considerations:

Move now or retrofit later: Falling a release cycle behind could force costly rewrites when assistant-first interfaces become the default.

Tap into revolutionary potential: For organizations seeking to embrace the most powerful AI, leveraging Google's "world model" research, its multimodal capabilities (like Veo 3 and Imagen 4, showcased by Woodward at I/O), and the AGI trajectory Google promises offers a path to potentially significant innovation.

Prepare for a new interaction paradigm: Success for Google's "universal assistant" would mean a primary new interface for services and data. Enterprises should strategize for integration via APIs and agentic frameworks for context-aware delivery.

Factor in the long game (and its risks): Aligning with Google's vision is a long-term commitment. The full "world model" and AGI are potentially distant horizons; decision-makers must balance this with immediate needs and platform complexities.

Contrast with focused alternatives: Pragmatic solutions from Microsoft offer tangible enterprise productivity now, and disruptive hardware-AI from OpenAI/IO presents another distinct path. A diversified strategy, leveraging the best of each, often makes sense, especially with the increasingly open agentic web allowing for such flexibility.

These complex choices and real-world AI adoption strategies will be central to discussions at VentureBeat's Transform 2025 next month. The leading independent event brings enterprise technical decision-makers together with leaders from pioneering companies to share firsthand experiences on platform choices – Google, Microsoft, and beyond – and on navigating AI deployment, all curated by the VentureBeat editorial team. With limited seating, early registration is encouraged.

Google's defining offensive: shaping the future or strategic overreach?
Google's I/O spectacle was a strong statement: the company signalled that it intends to architect and operate the foundational intelligence of the AI-driven future. Its pursuit of a "world model" and its AGI ambitions aim to redefine computing, outflank competitors, and secure its dominance. The audacity is compelling; the technological promise is immense. The big question is execution and timing. Can Google innovate and integrate its vast technologies into a cohesive, compelling experience faster than rivals solidify their positions? Can it do so while transforming search and navigating regulatory challenges? And can it do so while focusing so broadly on both consumers and businesses – an agenda that is arguably much broader than that of its key competitors? The next few years will be pivotal. If Google delivers on its "world model" vision, it may usher in an era of personalized, ambient intelligence, effectively becoming the new operational layer for our digital lives. If not, its grand ambition could become a cautionary tale of a giant reaching for everything, only to find the future defined by others who aimed more specifically, and more quickly.
  • Experimental Micron PCIe 6.0 SSD hits a massive 30.25 GB/s, but it's not ready for your rig yet

    Forward-looking: PCIe 5.0 SSDs are fast, but they're kind of old news now – they're everywhere and have lost some of their "wow" factor. This year, though, Micron shook things up with a sneak peek at what's next: a prototype PCIe 6.0 SSD. What makes it special is its potential to hit a jaw-dropping 30.25 GB/s in sequential read and write speeds – double the throughput of today's fastest consumer SSDs.
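    As a rough sanity check on that figure – back-of-the-envelope arithmetic, not something from the article – PCIe doubles its per-lane transfer rate each generation, so a four-lane Gen 6 link tops out around twice what a Gen 5 x4 drive can do. The numbers below ignore FLIT/FEC and protocol overhead, which shave a few percent off in practice.

```python
# Rough PCIe link-bandwidth math (approximate; ignores encoding/FLIT overhead).
def pcie_gbytes_per_s(transfer_rate_gt_s: float, lanes: int = 4) -> float:
    """Approximate one-direction bandwidth in GB/s for a PCIe link."""
    return transfer_rate_gt_s * lanes / 8  # ~1 bit of payload per transfer, bits -> bytes

print(pcie_gbytes_per_s(32))  # PCIe 5.0 x4 -> ~16 GB/s ceiling (fastest drives do ~14.5)
print(pcie_gbytes_per_s(64))  # PCIe 6.0 x4 -> ~32 GB/s ceiling, so 30.25 GB/s nearly saturates it
```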
    It all sounds great, as long as you're not expecting to pop one into your gaming rig anytime soon. Dubbed the Micron 9650 Pro, the SSD is still very much in the test-bench phase. It was spotted by Tom's Hardware at Computex 2025, housed in a chunky metal enclosure and far from the familiar M.2 form factor.
    In fact, it appeared to be connected to a PCIe 6.0 expansion card, held down with what looked like sticky tape.

    Unfortunately, Micron isn't targeting your desktop just yet. The 9650 Pro is more of a data center and AI platform play right now. It was showcased at Astera Labs' booth, where it was helping demonstrate next-gen PCIe 6.0 switches and bandwidth-matching software.
    These switches allow devices like GPUs and SSDs to communicate directly with each other, skipping the CPU entirely - something that's becoming increasingly crucial in high-performance AI workflows.
    Also read: The Inner Workings of PCI Express

    The catch here is that no CPUs officially support PCIe 6.0 yet, and PCI-SIG certification for Gen 6 devices isn't expected until late 2025.
    That puts the 9650 Pro firmly in the "cool tech demo" category, at least for now. Until the ecosystem that includes motherboards, CPUs, and certification bodies catches up, don't expect it to land in your build anytime soon.

    What was demoed at Computex is currently in the EVT3 (Engineering Validation Test 3) stage, meaning it has already gone through two rounds of hardware tuning and is now being used to fine-tune performance and compatibility.
    From here, it still needs to pass through Design Validation Testing (DVT) and Production Validation Testing (PVT) before anything close to a commercial release becomes a reality.
    This latest showcase follows an earlier Micron and Astera Labs demo at DesignCon, where they showed real-world PCIe 6.0 performance hitting 27 GB/s.
    Image credit: Tom's Hardware
  • Google has a massive mobile opportunity, and it's partly thanks to Apple

    An Android presentation at Google I/O 2025. Image credit: Google

    Google's announcements at its I/O developer conference this week had analysts bullish on its AI.
    AI features could be a "Trojan horse" for Google's Android products, Bank of America analysts wrote.
    Apple's AI mess has given Google a major mobile opportunity.

    Google's phones, tablets, and, yes, XR glasses are all about to be supercharged by AI. Google needs to seize this moment. Bank of America analysts this week even called Google's slew of new AI announcements a "Trojan horse" for its device business.

    For years, Apple's iOS and Google's Android have battled it out. Apple leads in the US in phone sales, though it still trails Android globally. The two have also gradually converged; iOS has become more customizable, while Android has become cleaner and easier to use. As hardware upgrades have slowed in recent years, the focus has shifted to the smarts inside the device.

    That could be a big problem for Apple. Its AI rollouts have proven lackluster with users, while more enticing promised features have been delayed. The company is reportedly trying to rebuild Siri entirely using large language models. Right now, it's still behind Google and OpenAI, and that gap continues to widen.

    During Google's I/O conference this week, the search giant bombarded us with new AI features. Perhaps the best example was a particularly grabby demo of Google's "Project Astra" assistant helping someone fix their bike by searching through the bike manual, pulling up a YouTube video, and calling a bike shop to see if certain supplies were in stock. It was, of course, a highly polished promotional video, but it made Siri look generations behind.

    "It has long been the case that the best way to bring products to the consumer market is via devices, and that seems truer than ever," wrote Ben Thompson, analyst and Stratechery author, in an I/O dispatch this week. "Android is probably going to be the most important canvas for shipping a lot of these capabilities," he added.
    Google's golden opportunity

    Apple has done a good job of locking users into its ecosystem with iMessage blue bubbles, features like FaceTime, and peripherals like the Apple Watch that require an iPhone to use. Google's Pixel phone line, meanwhile, remains a rounding error when compared to global smartphone shipments. That's less of a problem when Google has huge partners like Samsung that bring all of its AI features to billions of Android users globally.

    While iPhone users will get some of these new features through Google's iOS apps, it's clear that the "universal assistant" the company is building will only see its full potential on Android. Perhaps this could finally get iOS users to make the switch.

    "We're seeing diminishing returns on a hardware upgrade cycle, which means we're now really focused on the software upgrade cycle," Bernstein senior analyst Mark Shmulik told Business Insider. Without major changes by Apple, Shmulik said he sees the gap in capabilities between Android and iOS only widening. "If it widens to the point where someone with an iPhone says, 'Well my phone can't do that,' does it finally cause that switching event from what everyone has always considered this incredible lock-in from Apple?" Shmulik said.

    Beyond smartphones

    Internally, Google has been preparing for this moment. The company merged its Pixel, Chrome, and Android teams last year to capitalize on the AI opportunity. "We are going to be very fast-moving to not miss this opportunity," Google's Android chief Sameer Samat told BI at last year's I/O. "It's a once-in-a-generation moment to reinvent what phones can do. We are going to seize that moment."

    A year on, Google appears to be doing just that. Much of what the company demoed this week is either rolling out to devices imminently or in the coming weeks. Google still faces the challenge that its relationships with partners like Samsung have come with the express promise that Google won't give its home-grown devices preferential treatment. So, if Google decides to double down on its Pixel phones at the expense of its partners, it could step into a business land mine.

    Of course, Google needs to think about more than smartphones. Its renewed bet on XR glasses is a bet on what might be the next-generation computing platform. Meta is already selling its own augmented reality glasses, and Apple is now doubling down on its efforts to get its own smart glasses out by the end of 2026, Bloomberg reported. Google this week demoed glasses that have a visual overlay to instantly provide information to wearers, which Meta's glasses lack and Apple's first version will reportedly also not have.

    The success of Meta's glasses so far is no doubt encouraging news for Google, as a new era of AI devices is ushered in. Now it's poised to get ahead by leveraging its AI chops, and Apple might give it the exact opening it's waited more than a decade for. "I don't know about an open goal," said Shmulik of Apple, "but it does feel like they've earned themselves a penalty kick."

    Have something to share? Contact this reporter via email at hlangley@businessinsider.com or Signal at 628-228-1836. Use a personal email address and a nonwork device; here's our guide to sharing information securely.
  • Sorry, Google and OpenAI: The future of AI hardware remains murky

    2026 may still be more than seven months away, but it’s already shaping up as the year of consumer AI hardware. Or at least the year of a flurry of high-stakes attempts to put generative AI at the heart of new kinds of devices—several of which were in the news this week.

    Let’s review. On Tuesday, at its I/O developer conference keynote, Google demonstrated smart glasses powered by its Android XR platform and announced that eyewear makers Warby Parker and Gentle Monster would be selling products based on it. The next day, OpenAI unveiled its $6.5 billion acquisition of Jony Ive’s startup IO, which will put the Apple design legend at the center of the ChatGPT maker’s quest to build devices around its AI. And on Thursday, Bloomberg’s Mark Gurman reported that Apple hopes to release its own Siri-enhanced smart glasses. In theory, all these players may have products on the market by the end of next year.

    What I didn’t get from these developments was any new degree of confidence that anyone has figured out how to produce AI gadgets that vast numbers of real people will find indispensable. When and how that could happen remains murky—in certain respects, more than ever.

    To be fair, none of this week’s news involved products that are ready to be judged in full. Only Google has something ready to demonstrate in public at all: Here’s Janko Roettgers’s report on his I/O experience with prototype Android XR glasses built by Samsung. That the company has already made a fair amount of progress is only fitting given that Android XR scratches the same itch the company has had since it unveiled its ill-fated Google Glass a dozen years ago. It’s just that the available technologies—including Google’s Gemini LLM—have come a long, long way.

    Unlike the weird, downright alien-looking Glass, Google’s Android XR prototype resembles a slightly chunky pair of conventional glasses. It uses a conversational voice interface and a transparent mini-display that floats on your view of your surroundings. Google says that shipping products will have “all-day” battery life, a claim, vague though it is, that Glass could never make. But some of the usage scenarios that the company is showing off, such as real-time translation and mapping directions, are the same ones it once envisioned Glass enabling.

    The market’s rejection of Glass was so resounding that one of the few things people remember about the product is that its fans were seen as creepy, privacy-invading glassholes. Enough has happened since then—including the success of Meta’s smart Ray-Bans—that Android XR eyewear surely has a far better shot at acceptance. But as demoed at I/O, the floating screen came off as a roadblock between the user and the real world. Worst case, it might simply be a new, frictionless form of screen addiction that further distracts us from human contact.

    Meanwhile, the video announcement of OpenAI and IO’s merger was as polished as a Jony Ive-designed product—San Francisco has rarely looked so invitingly lustrous—but didn’t even try to offer details about their work in progress. Altman and Ive smothered each other in praise and talked about reinventing computing. Absent any specifics, Altman’s assessment of one of Ive’s prototypes sounded like runaway enthusiasm at best and Barnumesque puffery at worst.

    Reporting on an OpenAI staff meeting regarding the news, The Wall Street Journal’s Berber Jin provided some additional tidbits about the OpenAI device. Mostly, they involved what it isn’t—such as a phone or glasses. It might not even be a wearable, at least on a full-time basis: According to Jin, the product will be “able to rest in one’s pocket or on one’s desk” and complement an iPhone and MacBook Pro without supplanting them.

    Whatever this thing is, Jin cites Altman predicting that it will sell 100 million units faster than any product before it. In 2007, by contrast, Apple forecast selling a more modest 10 million iPhones in the phone’s first full year on the market—a challenging goal at the time, though the company surpassed it.

    Now, discounting the possibility of something transformative emerging from OpenAI-IO would be foolish. Ive, after all, may have played a leading role in creating more landmark tech products than anyone else alive. Altman runs the company that gave us the most significant one of the past decade. But Ive rhapsodizing over their working relationship in the video isn’t any more promising a sign than him rhapsodizing over the solid gold Apple Watch was in 2015. And Altman, the biggest investor in Humane’s doomed AI Pin, doesn’t seem to have learned one of the most obvious lessons of that fiasco: Until you have a product in the market, it’s better to tamp down expectations than stoke them.

    You can’t accuse Apple of hyping any smart glasses it might release in 2026. It hasn’t publicly acknowledged their existence, and won’t until their arrival is much closer. If anything, the company may be hypersensitive to the downsides of premature promotion. Almost a year ago, it began trumpeting a new AI-infused version of Siri—one it clearly didn’t have working at the time, and still hasn’t released. After that embarrassing mishap, silencing the skeptics will require shipping stuff, not previewing what might be ahead. Even companies that aren’t presently trying to earn back their AI cred should take note and avoid repeating Apple’s mistake.

    I do believe AI demands that we rethink how computers work from the ground up. I also hope the smartphone doesn’t turn out to be the last must-have device, because if it were, that would be awfully boring. Maybe the best metric of success is hitting Apple’s 10-million-units-per-year goal for the original iPhone—which, perhaps coincidentally, is the same one set by EssilorLuxottica, the manufacturer of Meta’s smart Ray-Bans. If anything released next year gets there, it might be the landmark AI gizmo we haven’t yet seen. And if nothing does, we can safely declare that 2026 wasn’t the year of consumer AI hardware after all.

    You’ve been reading Plugged In, Fast Company’s weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you—or if you’re reading it on FastCompany.com—you can check out previous issues and sign up to get it yourself every Friday morning. I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I’m also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard.

    More top tech stories from Fast Company

    How Google is rethinking search in an AI-filled world: Google execs Liz Reid and Nick Fox explain how the company is rethinking everything from search results to advertising and personalization. Read More →

    Roku is doing more than ever, but focus is still its secret ingredient: The company that set out to make streaming simple has come a long way since 2008. Yet its current business all connects back to the original mission, says CEO Anthony Wood. Read More →

    Gen Z is willing to sell their personal data—for just a month: A new app, Verb.AI, wants to pay the generation that’s most laissez-faire on digital privacy for their scrolling time. Read More →

    Forget return-to-office. Hybrid now means human plus AI: As AI evolves, businesses should use the technology to complement, not replace, human workers. Read More →

    It turns out TikTok’s viral clear phone is just plastic. Meet the ‘Methaphone’: Millions were fooled by a clip of a see-through phone. Its creator says it’s not tech—it’s a tool to break phone addiction. Read More →

    4 free Coursera courses to jump-start your AI journey: See what all the AI fuss is about without spending a dime. Read More →
    #sorry #google #openai #future #hardware
    Sorry, Google and OpenAI: The future of AI hardware remains murky
2026 may still be more than seven months away, but it’s already shaping up as the year of consumer AI hardware. Or at least the year of a flurry of high-stakes attempts to put generative AI at the heart of new kinds of devices—several of which were in the news this week. Let’s review.
On Tuesday, at its I/O developer conference keynote, Google demonstrated smart glasses powered by its Android XR platform and announced that eyewear makers Warby Parker and Gentle Monster would be selling products based on it. The next day, OpenAI unveiled its $6.5 billion acquisition of Jony Ive’s startup IO, which will put the Apple design legend at the center of the ChatGPT maker’s quest to build devices around its AI. And on Thursday, Bloomberg’s Mark Gurman reported that Apple hopes to release its own Siri-enhanced smart glasses. In theory, all these players may have products on the market by the end of next year.
What I didn’t get from these developments was any new degree of confidence that anyone has figured out how to produce AI gadgets that vast numbers of real people will find indispensable. When and how that could happen remains murky—in certain respects, more than ever.
To be fair, none of this week’s news involved products that are ready to be judged in full. Only Google has something ready to demonstrate in public at all: Here’s Janko Roettgers’s report on his I/O experience with prototype Android XR glasses built by Samsung. That the company has already made a fair amount of progress is only fitting given that Android XR scratches the same itch the company has had since it unveiled its ill-fated Google Glass a dozen years ago. It’s just that the available technologies—including Google’s Gemini LLM—have come a long, long way.
Unlike the weird, downright alien-looking Glass, Google’s Android XR prototype resembles a slightly chunky pair of conventional glasses. It uses a conversational voice interface and a transparent mini-display that floats on your view of your surroundings. Google says that shipping products will have “all-day” battery life, a claim, vague though it is, that Glass could never make. But some of the usage scenarios that the company is showing off, such as real-time translation and mapping directions, are the same ones it once envisioned Glass enabling.
The market’s rejection of Glass was so resounding that one of the few things people remember about the product is that its fans were seen as creepy, privacy-invading glassholes. Enough has happened since then—including the success of Meta’s smart Ray-Bans—that Android XR eyewear surely has a far better shot at acceptance. But as demoed at I/O, the floating screen came off as a roadblock between the user and the real world. Worst case, it might simply be a new, frictionless form of screen addiction that further distracts us from human contact.
Meanwhile, the video announcement of OpenAI and IO’s merger was as polished as a Jony Ive-designed product—San Francisco has rarely looked so invitingly lustrous—but didn’t even try to offer details about their work in progress. Altman and Ive smothered each other in praise and talked about reinventing computing. Absent any specifics, Altman’s assessment of one of Ive’s prototypes (“The coolest piece of technology that the world will have ever seen”) sounded like runaway enthusiasm at best and Barnumesque puffery at worst.
Reporting on an OpenAI staff meeting regarding the news, The Wall Street Journal’s Berber Jin provided some additional tidbits about the OpenAI device. Mostly, they involved what it isn’t—such as a phone or glasses. It might not even be a wearable, at least on a full-time basis: According to Jin, the product will be “able to rest in one’s pocket or on one’s desk” and complement an iPhone and MacBook Pro without supplanting them. Whatever this thing is, Jin cites Altman predicting that it will sell 100 million units faster than any product before it. In 2007, by contrast, Apple forecast selling a more modest 10 million iPhones in the phone’s first full year on the market—a challenging goal at the time, though the company surpassed it.
Now, discounting the possibility of something transformative emerging from OpenAI-IO would be foolish. Ive, after all, may have played a leading role in creating more landmark tech products than anyone else alive. Altman runs the company that gave us the most significant one of the past decade. But Ive rhapsodizing over their working relationship in the video isn’t any more promising a sign than him rhapsodizing over the $10,000 solid gold Apple Watch was in 2015. And Altman, the biggest investor in Humane’s doomed AI Pin, doesn’t seem to have learned one of the most obvious lessons of that fiasco: Until you have a product in the market, it’s better to tamp down expectations than stoke them.
You can’t accuse Apple of hyping any smart glasses it might release in 2026. It hasn’t publicly acknowledged their existence, and won’t until their arrival is much closer. If anything, the company may be hypersensitive to the downsides of premature promotion. Almost a year ago, it began trumpeting a new AI-infused version of Siri—one it clearly didn’t have working at the time, and still hasn’t released. After that embarrassing mishap, silencing the skeptics will require shipping stuff, not previewing what might be ahead. Even companies that aren’t presently trying to earn back their AI cred should take note and avoid repeating Apple’s mistake.
I do believe AI demands that we rethink how computers work from the ground up. I also hope the smartphone doesn’t turn out to be the last must-have device, because if it were, that would be awfully boring. Maybe the best metric of success is hitting Apple’s 10-million-units-per-year goal for the original iPhone—which, perhaps coincidentally, is the same one set by EssilorLuxottica, the manufacturer of Meta’s smart Ray-Bans. If anything released next year gets there, it might be the landmark AI gizmo we haven’t yet seen. And if nothing does, we can safely declare that 2026 wasn’t the year of consumer AI hardware after all.
You’ve been reading Plugged In, Fast Company’s weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you—or if you’re reading it on FastCompany.com—you can check out previous issues and sign up to get it yourself every Friday morning. I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I’m also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard.
More top tech stories from Fast Company:
How Google is rethinking search in an AI-filled world: Google execs Liz Reid and Nick Fox explain how the company is rethinking everything from search results to advertising and personalization.
Roku is doing more than ever, but focus is still its secret ingredient: The company that set out to make streaming simple has come a long way since 2008. Yet its current business all connects back to the original mission, says CEO Anthony Wood.
Gen Z is willing to sell their personal data—for just $50 a month: A new app, Verb.AI, wants to pay the generation that’s most laissez-faire on digital privacy for their scrolling time.
Forget return-to-office. Hybrid now means human plus AI: As AI evolves, businesses should use the technology to complement, not replace, human workers.
It turns out TikTok’s viral clear phone is just plastic. Meet the ‘Methaphone’: Millions were fooled by a clip of a see-through phone. Its creator says it’s not tech—it’s a tool to break phone addiction.
4 free Coursera courses to jump-start your AI journey: See what all the AI fuss is about without spending a dime.
  • Andor – Season 2: Mohen Leo (Production VFX Supervisor), TJ Falls (Production VFX Producer) and Scott Pritchard (ILM VFX Supervisor)

    Interviews

Andor – Season 2: Mohen Leo, TJ Falls and Scott Pritchard
By Vincent Frei - 22/05/2025

In 2023, Mohen Leo, TJ Falls, and Scott Pritchard offered an in-depth look at the visual effects of Andor’s first season. Now, the trio returns to share insights into their work on the second—and final—season of this critically acclaimed series.
    Tony Gilroy is known for his detailed approach to storytelling. Can you talk about how your collaboration with him evolved throughout the production of Andor? How does he influence the VFX decisions and the overall tone of the series?
    Mohen Leo: Our history with Tony, from Rogue One through the first season of Andor, had built a strong foundation of mutual trust. For Season 2, he involved VFX from the earliest story discussions, sharing outlines and inviting our ideas for key sequences. His priority is always to keep the show feeling grounded, ensuring that visual effects serve the story’s core and never become extraneous spectacle that might distract from the narrative.
    TJ Falls: Tony is a master storyteller. As Mohen mentioned, we have a great history with Tony from Rogue One and through Season 1 of Andor. We had a great rapport with Tony, and he had implicit trust in us. We began prepping Season 2 while we were in post for Season 1. We were having ongoing conversations with Tony and Production Designer Luke Hull as we were completing work for S1 and planning out how we would progress into Season 2. We wanted to keep the show grounded and gritty while amping up the action and urgency. Tony had a lot of story to cover in 12 episodes. The time jumps between the story arcs were something we discussed early on, and the need to be able to not only justify the time jumps but also to provide the audience with a visual bridge to tell the stories that happened off-screen.
    Tony would look to us to guide and use our institutional knowledge of Star Wars to help keep him honest within the universe. He, similarly, challenged us to maintain our focus and ensure that the visual tone of the series serviced the story.
    Tony Gilroy and Genevieve O’Reilly on the set of Lucasfilm’s ANDOR Season 2, exclusively on Disney+. Photo by Des Willie. ©2024 Lucasfilm Ltd. & TM. All Rights Reserved.
As you’ve returned for Season 2, have there been any significant changes or new challenges compared to the first season? How has the production evolved in terms of VFX and storytelling?
The return of nearly all key creatives from Season 1, both internally and at our VFX vendors, was a massive advantage. This continuity built immediate trust and an efficient shorthand. It made everyone comfortable to be more ambitious, allowing us to significantly expand the scope and complexity of the visual effects for Season 2.
We had all new directors this season. The rest of the core creative and production teams stayed consistent from Season 1. We worked to keep the creative process as seamless as we could from Season 1 while working with the new directors, adapting to their process, and incorporating the individual skills and ideas they brought to the table.
This season we were able to work on location much more than on Season 1. That provided us with a great opportunity to build out the connective tissue between real-world constraints and the virtual world we were creating. In the case of Senate Plaza in Coruscant, we also had to stay consistent with what had previously been established, so that was a fun challenge.

How did you go about dividing the workload between the various VFX studios?
I can give an answer, but probably better if TJ does.
We were very specific about how we divided the work on this series. We started, as we usually do, with a detailed breakdown of work for the 12 episodes. Mohen and I then discussed a logical split based on type of work, specific elements, and areas of commonality for particular environments. While cost is always a consideration, we focused our vendor casting around the creative strengths of the studios we were partnering with on the project.
ILM is in the DNA of Star Wars, so we knew we’d want to be working with them on some of the most complex work. We chose ILM for the opening TIE Avenger hangar sequence and subsequent escape. We utilized ILM for work in every episode, including the CG KX/K2 work, but their main focus was on Coruscant, and they had substantial work in the ninth episode for the big Senate escape sequence. Hybride’s chief focus was on Palmo Plaza and the Ghorman environments. They dealt with everything Ghorman on the ground, from the street extensions and the truck crash through the Ghorman massacre, sharing shots with ILM on the KX work. For Scanline VFX, we identified three primary areas of focus: the work on Mina Rau, Chandrila, and Yavin.

The TIE Fighter sequence in Season 2 is a standout moment. Can you walk us through the VFX process for that particular sequence? What were some of the technical challenges you faced, and how did you work to make it as intense and realistic as possible?
This is a sequence I’m particularly proud of as VFX played a central role in the sequence coming together from start to finish. We were intimately involved from the initial conversations of the idea for the sequence. Mohen created digital storyboards and we pitched ideas for the sequence to Tony Gilroy. Once we had a sense of the creative brief, we started working with Luke Hull and the art department on the physical hangar set and brought it into previz for virtual scouting. With Jen Kitching we had a virtual camera set up that allowed us to virtually use the camera and lenses we would have on our shoot. We blocked out shots with Ariel Kleiman and Christophe Nuyens. This went back through previz and techviz so we could meticulously chart out our plan for the shoot.
    Keeping with our ethos of grounding everything in reality, we wanted to use as much of the practical set as possible. We needed to be sure our handoffs between physical and virtual were seamless – Luke Murphy, our SFX Supervisor, worked closely with us in planning elements and practical effects to be used on the day. Over the course of the shoot, we also had the challenge of the flashing red alarm that goes off once the TIE Avenger crashes into the ceiling. We established the look of the red alarm with Christophe and the lighting team, and then needed to work out the timing. For that, we collaborated with editor John Gilroy to ensure we knew precisely when each alarm beat would flash. Once we had all the pieces, we turned the sequence over to Scott Pritchard and ILM to execute the work.

Scott Pritchard: This sequence was split between our London and Vancouver studios, with London taking everything inside the hangar, and Vancouver handling the exterior shots after Cassian blasts through the hangar door. We started from a strong foundation thanks to two factors: the amazing hangar set and TIE Avenger prop; and having full sequence previs. The hangar set was built to about two-thirds of its overall length, which our environments team extended, adding the hangar doors at the end and also a view to the exterior environment. Extending the hangar was most of the work in the sequence up until the TIE starts moving, where we switched to our CG TIE. As with Season 1, we used a blend of physical SFX work for the pyro effects, augmenting with CG sparks. As TJ mentioned, the hangar’s red warning lighting was a challenge as it had to pulse in a regular tempo throughout the edit. Only the close-up shots of Cassian in the cockpit had practical red lighting, so complex lighting and comp work were required to achieve a consistent look throughout the sequence. ILM London’s compositing supervisor, Claudio Bassi, pitched the idea that as the TIE hit various sections of the ceiling, it would knock out the ceiling lights, progressively darkening the hangar. It was a great motif that helped heighten the tension as we get towards the moment where Cassian faces the range trooper.
    Once we cut to outside the hangar, ILM Vancouver took the reins. The exterior weather conditions were briefed to us as ‘polar night’ – it’s never entirely dark, instead there’s a consistent low-level ambient light. This was a challenge as we had to consider the overall tonal range of each shot and make sure there was enough contrast to guide the viewer’s eye to where it needed to be, not just on individual shots but looking at eye-trace as one shot cut to another. A key moment is when Cassian fires rockets into an ice arch, leading to its collapse. The ice could very easily look like rock, so we needed to see the light from the rocket’s explosions scattered inside the ice. It required detailed work in both lighting and comp to get to the right look. Again, as the ice arch starts to collapse and the two chase TIE Advanced ships get taken out, it needed careful balancing work to make sure viewers could read the situation and the action in each shot.
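The alarm cadence described above is essentially a frame-timing problem. As a minimal, purely illustrative sketch (all values invented, not production code), here is how the flash frames for a regularly pulsing warning light could be laid out against the edit’s frame rate, so that practical lighting, CG lighting, and comp all reference the same beats:

```python
# Illustrative sketch: list the frame numbers on which a regularly pulsing
# warning light should flash, given the cut length, frame rate, and tempo.
# All numbers below are hypothetical examples.

def alarm_flash_frames(duration_frames: int, fps: float, pulses_per_minute: float) -> list[int]:
    """Return the frame indices where an alarm pulse lands."""
    frames_per_pulse = (60.0 / pulses_per_minute) * fps
    frames = []
    t = 0.0
    while t < duration_frames:
        frames.append(round(t))
        t += frames_per_pulse
    return frames

if __name__ == "__main__":
    # Hypothetical numbers: a 30-second cut at 24 fps with 40 pulses per minute.
    flashes = alarm_flash_frames(duration_frames=30 * 24, fps=24, pulses_per_minute=40)
    print(flashes[:10])  # first ten flash frames
```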
The world-building in Andor is impressive, especially with iconic locations like Coruscant and Yavin. How did you approach creating these environments and ensuring they felt as authentic as possible to the Star Wars universe?
Our approach to world-building in Andor relied on a close collaboration between the VFX team and Luke Hull, the production designer, along with his art department. This partnership was established in Season 1 and continued for Season 2. Having worked on many Star Wars projects over the decades, VFX was often able to provide inspiration and references for art department designs.
    For example, for locations like Yavin and Coruscant, VFX provided the art department with existing 3D assets: the Yavin temple model from Rogue One and the Coruscant city layout around the Senate from the Prequel films. The Coruscant model, in particular, involved some ‘digital archaeology.’ The data was stored on tapes from around 2001 and consisted of NURBS models in an older Softimage file format. To make them usable, we had to acquire old Softimage 2010 and XSI licenses, install them on a Windows 7 PC, and then convert the data to the FBX format that current software can read.
    Supplying these original layouts to the art department enabled them to create their new designs and integrate our real-world shooting locations while maintaining consistency with the worlds seen in previous Star Wars productions. Given that Andor is set approximately twenty years after the Prequels, we also had the opportunity to update and adjust layouts and designs to reflect that time difference and realize the specific creative vision Luke Hull and Tony Gilroy had for the show.
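For illustration only, here is a rough sketch of what a batch pass of that kind of ‘digital archaeology’ could look like as a script. Every path and the converter command name (legacy_scene_to_fbx) are hypothetical stand-ins; the actual conversion, as described above, ran through old Softimage 2010/XSI installs on a Windows 7 machine.

```python
# Minimal sketch of a batch legacy-conversion pass (all names hypothetical).
# It pairs each legacy Softimage scene with an FBX output path and hands the
# actual conversion to a placeholder command.

import subprocess
from pathlib import Path

LEGACY_ROOT = Path("restored_tapes/coruscant_2001")  # hypothetical location
OUTPUT_ROOT = Path("converted/coruscant_fbx")

def convert_all(legacy_root: Path, output_root: Path) -> None:
    for scene in sorted(legacy_root.rglob("*.scn")):
        out = output_root / scene.relative_to(legacy_root).with_suffix(".fbx")
        out.parent.mkdir(parents=True, exist_ok=True)
        # Placeholder converter invocation; in practice this step would run
        # inside the old Softimage/XSI install itself.
        subprocess.run(["legacy_scene_to_fbx", str(scene), str(out)], check=True)

if __name__ == "__main__":
    convert_all(LEGACY_ROOT, OUTPUT_ROOT)
```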

StageCraft technology is a huge part of the production. How did you use it to bring these complex environments, like Coruscant and Yavin, to life? What are the main benefits and limitations of using StageCraft for these settings?
Our use of StageCraft for Season 2 was similar to that on Season 1. We used it to create the exterior views through the windows of the Safehouse on Coruscant. As with our work for the Chandrillan Embassy in Season 1, we created four different times of day/weather conditions. One key difference was that the foreground buildings were much closer to the Safehouse, so we devised three projection points, which would ensure that the perspective of the exterior was correct for each room. On set we retained a large amount of flexibility with our content. We had our own video feed from the unit cameras, and we were able to selectively isolate and grade sections of the city based on their view through the camera. Working in context like this meant that we could make any final tweaks while each shot was being set up and rehearsed.
While we were shooting a scene set at night, the lighting team rigged a series of lights running above the windows that, when triggered, would flash in sequence, casting a moving light along the floor and walls of the set, as if from a moving car above. I thought we could use the LED wall to do something similar from below, catching highlights on the metal pipework that ran across the ceiling. During a break in shooting, I hatched a plan with colour operator Melissa Goddard and brain bar supervisor Ben Brown, and we came up with a moving rectangular section on the LED wall which matched the practical lights for speed, intensity and colour temperature. We set up two buttons on our iPad to trigger the ‘light’ to move in either direction. We demoed the idea to the DP after lunch, who loved it, and so when it came to shoot, he could either call for a car above from the practical lights, or a car below from the LEDs.
Just to clarify – the Coruscant Safehouse set was the only application of Stagecraft LED screens in Season 2. All other Coruscant scenes relied on urban location photography or stage sets with traditional blue screen extensions.
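As a purely hypothetical sketch of the moving ‘car pass’ light described above (none of these values or parameter names come from the production), the band’s position per frame can be driven by a handful of parameters that are then matched by eye to the practical rig:

```python
# Illustrative sketch: a rectangular patch that travels across LED wall content
# at a chosen speed, direction, intensity, and colour temperature.
# All numbers are invented for the example.

from dataclasses import dataclass

@dataclass
class CarPass:
    wall_width_px: int         # horizontal resolution of the LED wall content
    band_width_px: int         # width of the bright rectangle
    speed_px_per_frame: float  # how fast the band travels
    direction: int             # +1 = left-to-right, -1 = right-to-left
    intensity: float           # 0..1 brightness multiplier
    colour_temp_k: float       # colour temperature to match the practical lights

    def band_left_edge(self, frame: int) -> float:
        """Left edge of the band on a given frame, starting just off-screen."""
        start = -self.band_width_px if self.direction > 0 else self.wall_width_px
        return start + self.direction * self.speed_px_per_frame * frame

if __name__ == "__main__":
    # Hypothetical settings for one pass triggered in a given direction.
    car = CarPass(wall_width_px=7680, band_width_px=900,
                  speed_px_per_frame=120.0, direction=+1,
                  intensity=0.7, colour_temp_k=4300.0)
    for frame in range(0, 70, 10):
        print(frame, round(car.band_left_edge(frame)))
```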
    The various Yavin locations were achieved primarily with large backlot sets at Longcross Studios. A huge set of the airfield, temple entrance and partial temple interior was extended by Scanline VFX, led by Sue Rowe, in post, creating the iconic temple exterior from A New Hope. VFX also added flying and parked spaceships, and augmented the surrounding forest to feel more tropical.

Andor blends CG with actual real-world locations. Can you share how you balanced these two elements, especially when creating large-scale environments or specific landscapes that felt grounded in reality?
A great example of this is the environment around the Senate. The plates for this were shot in the City of Arts & Sciences in Valencia. Blending the distinctive Calatrava architecture with well-known Star Wars buildings like the Senate was an amazing challenge; it wasn’t immediately clear how the two could sit alongside each other. Our Vancouver team, led by Tania Richard, did an incredible job taking motifs and details from the Valencia buildings and incorporating them into the Senate building on both large and small scales, but still contiguous with the overall Senate design. The production team was ingenious in how they used each of the Valencia buildings to represent many locations around the Senate and the surrounding areas. For example, the Science Museum was used for the walkway where Cassian shoots Kloris, the main entrance to the Senate, and the interior of the Senate Atrium. It was a major challenge ensuring that all those locations were represented across the larger environment, so viewers understood the geography of the scene, but also blended with the design language of their immediate surroundings.
Everything in the Senate Plaza had a purpose. When planning the overall layout of the Plaza, we considered aspects such as how far Senators would realistically walk from their transports to the Senate entrance. When extending the Plaza beyond the extents of the City of Arts & Sciences, we used Calatrava architecture from elsewhere. The bridge just in front of the Senatorial Office Building is based on a Calatrava-designed bridge in my home city of Dublin. As we reach the furthest extents of the Senate Plaza, we begin blending in more traditional Coruscant architecture so as to soften the transition to the far background.

Coruscant is such a pivotal location in Star Wars. How did you approach creating such a vast, densely populated urban environment? What were the key visual cues that made it feel alive and realistic?
Our approach to Coruscant in Season 2 built upon what we established in the first season: primarily, shooting in real-world city locations whenever feasible. The stunning Calatrava architecture at Valencia’s City of Arts and Sciences, for instance, served as the foundation for the Senate exterior and other affluent districts. For the city’s grittier neighborhoods, we filmed in urban environments in London, like the Barbican and areas around Twickenham Stadium.
    Filming in these actual city locations provided a strong, realistic basis for the cinematography, lighting, and overall mood of each environment. This remained true even when VFX later modified large portions of the frame with Star Wars architecture. This methodology gave the director and DP confidence on set that their vision would carry through to the final shot. Our art department and VFX concept artists then created numerous paintovers based on plates and location photography, offering clear visual guides for transforming each real location into its Coruscant counterpart during post-production. For the broader cityscapes, we took direct inspiration from 3D street maps of cities such as Tokyo, New York, and Hong Kong. We would exaggerate the scale and replace existing buildings with our Coruscant designs while preserving the fundamental urban patterns.
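As a toy illustration of that idea (invented data and asset names, not the production pipeline), the transformation can be thought of as keeping each real building’s position, so the street pattern survives, while exaggerating vertical scale and substituting a Coruscant-style asset:

```python
# Toy sketch: preserve the real city's street pattern by reusing each
# building's footprint location, but exaggerate height and swap the real
# building for a Coruscant-style asset picked by size. All names invented.

import random

def coruscantify(buildings, height_exaggeration=8.0, seed=7):
    """buildings: list of dicts with 'centroid' (x, y) and 'height_m'."""
    rng = random.Random(seed)
    out = []
    for b in buildings:
        asset = "megatower" if b["height_m"] > 150 else rng.choice(["block_a", "block_b", "spire"])
        out.append({
            "centroid": b["centroid"],                        # street pattern preserved
            "height_m": b["height_m"] * height_exaggeration,  # exaggerated scale
            "asset": asset,                                   # hypothetical asset names
        })
    return out

if __name__ == "__main__":
    sample = [{"centroid": (0, 0), "height_m": 40}, {"centroid": (120, 65), "height_m": 210}]
    print(coruscantify(sample))
```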

When it comes to creating environments like Yavin, which has a very natural, jungle-like aesthetic, how do you ensure the VFX stays true to the organic feel of the location while still maintaining the science-fiction elements of Star Wars?
Nearly all of the Yavin jungle scenes were shot in a large wooded area that is part of Longcross Studios. The greens and art departments did an amazing job augmenting the natural forest with tropical plants and vines.
K-2SO is a beloved character, and his return is highly anticipated. What can you tell us about the process of bringing him back to life with VFX in Season 2? What new challenges did this bring compared to his original appearance?
We had already updated a regular KX droid for the scene on Niamos in Season 1, so much of the work to update the asset to the latest pipeline requirements had already been done. We now needed to switch over to the textures & shaders specific to K2, and give them the same updates. Unique to Season 2 was that there were a number of scenes involving both a practical and a digital K2 – when he gets crushed on Ghorman in episode 8, and then ‘rebooted’ on Yavin in episode 9. The practical props were a lot more beaten up than our hero asset, so we made bespoke variants to match the practical droid in each sequence. Additionally, for the reboot sequence on Yavin, we realised pretty quickly that the extreme movements meant that we were seeing into areas that previously had not required much detail – for instance, underneath his shoulder armour. We came up with a shoulder joint design that allowed for the required movement while also staying mechanically correct. When we next see him in Episode 10, a year has passed, and he is now the K-2SO as we know him from Rogue One.

K-2SO has a unique design, particularly in his facial expressions and movement. How did you approach animating him for Season 2, and were there any specific changes or updates made to his character model or animation?
Following Rogue One, Mohen made detailed records of the lessons learned from creating K-2SO, and he kindly shared these notes with us early on in the show. They were incredibly helpful in tuning the fine details of the animation. Our animation team, led by Mathieu Vig, did a superb job of identifying the nuances of Alan’s performance and making sure they came across. There were plenty of pitfalls to avoid – for instance, the curve to his upper back meant that it was very easy for his neck to look hyperextended. We also had to be very careful with his eyes: as they’re sources of light, they could very easily look cartoonish if they moved around too much. Dialling in just the right amount of eye movement was crucial to a good performance.
As the eyes also had several separate emissive and reflective components, they required delicate balancing in the comp on a per-shot basis. Luckily, we had great reference from Rogue One to be able to dial in the eyes to suit both the lighting of a shot and its performance details. One Rogue One shot in particular, where he says ‘Your behavior, Jyn Erso, is continually unexpected’, was a particularly good reference for how we could balance the lights in his eyes to, in effect, enlarge his pupils, and give him a softer expression.
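A minimal sketch of that per-shot balancing idea, assuming the eye is delivered as separate passes (the names and weights here are invented for illustration, not ILM’s actual comp setup):

```python
# Illustrative sketch: the final eye as a weighted sum of its separate passes,
# with gains tuned per shot. All inputs and weights are made up for the example.

import numpy as np

def balance_eyes(base, emissive, reflection, emissive_gain=1.0, reflection_gain=1.0):
    """All inputs are float32 RGB images of shape (H, W, 3)."""
    return base + emissive_gain * emissive + reflection_gain * reflection

if __name__ == "__main__":
    h, w = 4, 4  # tiny stand-in images
    base = np.zeros((h, w, 3), np.float32)
    emissive = np.full((h, w, 3), 0.2, np.float32)
    reflection = np.full((h, w, 3), 0.05, np.float32)
    # A "softer expression" shot might lift the emissive gain slightly.
    print(balance_eyes(base, emissive, reflection, emissive_gain=1.3, reflection_gain=0.8)[0, 0])
```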
    K-2SO also represented my first opportunity to work with ILM’s new studio in Mumbai. Amongst other shots, they took on the ‘hallway fight’ sequence in Episode 12 where K2 dispatches Heert and his troopers, and they did a fantastic job from animation right through to final comp.
K-2SO’s interactions with the live-action actors are key to his character. How did you work with the actors to ensure his presence felt as real and integrated as possible on screen, especially in terms of timing and reactions?
Alan Tudyk truly defined K-2SO in Rogue One, so his return for Andor Season 2 was absolutely critical to us. He was on set for every one of K2’s shots, performing on stilts and in a performance capture suit. This approach was vital because it gave Alan complete ownership of the character’s physical performance and, crucially, allowed for spontaneous, genuine interactions with the other actors, particularly Diego Luna. Witnessing Alan and Diego reunite on camera was fantastic; that unique chemistry and humor we loved in Rogue One was instantly palpable.
    In post-production, our VFX animators then meticulously translated every nuance of Alan’s on-set performance to the digital K-2SO model. It’s a detailed process that still requires artistic expertise. For instance, K2’s facial structure is largely static, so direct translation of Alan’s facial expressions isn’t always possible. In these cases, our animators found creative solutions – translating a specific facial cue from Alan into a subtle head tilt or a particular eye movement for K2, always ensuring the final animation remained true to the intent and spirit of Alan’s original performance.

Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint?
The Plaza sequence in episode 8, which runs for about 23 minutes, stands out as particularly memorable – both for its challenges and its rewarding outcome. Just preparing for it was a daunting task. Its successful execution hinged on incredibly tight collaboration between numerous departments: stunts, creature effects, special effects, the camera department, our tireless greenscreens crew, and of course, VFX. The stunts team, under Marc Mailley, drove the choreography of all the action.
    Our On-Set VFX Supervisor, Marcus Dryden, was instrumental. He worked hand-in-glove with the director, DP, and assistant directors to ensure we meticulously captured all the necessary elements. This included everything from crowd replication plates and practical effects elements to the performances of stunt teams and creature actors, plus all the crucial on-set data. The shoot for this sequence alone took over three weeks.
Hybride, under the leadership of Joseph Kasparian and Olivier Beaulieu, then completed the environments, added the blaster fire, and augmented the special effects in post-production, with ILM contributing the KX droids that wreak havoc in the plaza.
I agree with Mohen here: for me, the Ghorman Plaza episode is the most rewarding to have worked on. It required us to weave our work into that of so many other departments – stunts, sfx, costume – to name just a few. When we received the plates, to see the quality of the work that had gone into the photography alone was inspirational for me and the ILM crew. It’s gratifying to be part of a team where you know that everyone involved is on top of their game. And of course all that is underpinned by writing of that calibre from Tony Gilroy and his team – it just draws everything together.
    From a pure design viewpoint, I’m also very proud of the work that Tania Richard and her ILM Vancouver crew did for the Senate shots. As I mentioned before, it was a hugely challenging environment not just logistically, but also in bringing together two very distinctive architectural languages, and they made them work in tandem beautifully.

Looking back on the project, what aspects of the visual effects are you most proud of?
I’m incredibly proud of this entire season. The seamless collaboration we had between Visual Effects and every other department made the work, while challenging, an absolute joy to execute. Almost all of the department heads returned from the first season, which gave us a shorthand, as we started the show with implicit trust and understanding of what we were looking to achieve.
The work is beautiful, and the commitment of our crew and vendors has been unwavering. I’m most proud of the effort and care that each individual person contributed to the show and the fact that we went into the project with a common goal and were, as a team, able to showcase the vision that we, and Tony, had for the series.
I’m really proud of the deep integration of the visual effects – not just visually, but fundamentally within the filmmaking process and storytelling. Tony invited VFX to be a key participant in shaping the story, from early story drafts through to the final color grade. Despite the scale and spectacle of many sequences, the VFX always feel purposeful, supporting the narrative and characters rather than distracting from them.
This was significantly bolstered by the return of a large number of key creatives from Season 1, both within the production and at our VFX vendors. That shared experience and established understanding of Tony’s vision for Andor were invaluable in making the VFX an organic part of the show.
I could not be prouder of the entire ILM team for everything they brought to their work on the show. Working across three sites, Andor was a truly global effort, and I particularly enjoyed how each site took complete ownership of their work. It was a privilege working with all of them and contributing to such an exceptional series.

VFX progression frame from Lucasfilm’s ANDOR Season 2, exclusively on Disney+. Photo courtesy of Lucasfilm. ©2025 Lucasfilm Ltd. & TM. All Rights Reserved.
How long have you worked on this show?
This show has been an unbelievable journey. Season 2 alone was nearly 3 years. We wrapped Season 2 in January of 2025. We started prepping Season 2 in February 2022, while we were still in post for Season 1.
I officially started working on Season 1 early in 2019 while it was still being developed. So that’s 6 years of time working on Andor. Mohen and I both also worked on Rogue One, so if you factor in the movie, which was shooting in 2015, that’s nearly ten years of work within this part of the Star Wars universe.
I started on the project during early development in the summer of 2019 and finished in December of 2024.
I started on Season 1 in September 2020 and finished up on Season 2 in December 2024.
What’s the VFX shots count?
We had a grand total of 4,124 shots over the course of our 12 episodes. Outside of Industrial Light & Magic, which oversaw the show, we also partnered with Hybride, Scanline, Soho VFX, and Midas VFX.
What is your next project?
You’ll have to wait and see!
Unfortunately, I can’t say just yet either!
    A big thanks for your time.
WANT TO KNOW MORE?
ILM: Dedicated page about Andor – Season 2 on ILM website.
    © Vincent Frei – The Art of VFX – 2025
When we received the plates, to see the quality of the work that had gone into the photography alone was inspirational for me and the ILM crew. It’s gratifying to be part of a team where you know that everyone involved is on top of their game. And of course all that is underpinned by writing of that calibre from Tony Gilroy and his team – it just draws everything together. From a pure design viewpoint, I’m also very proud of the work that Tania Richard and her ILM Vancouver crew did for the Senate shots. As I mentioned before, it was a hugely challenging environment not just logistically, but also in bringing together two very distinctive architectural languages, and they made them work in tandem beautifully. Looking back on the project, what aspects of the visual effects are you most proud of?: I’m incredibly proud of this entire season. The seamless collaboration we had between Visual Effects and every other department made the work, while challenging, an absolute joy to execute. Almost all of the department heads returned from the first season, which provided a shorthand shortcut as we started the show with implicit trust and understanding of what we were looking to achieve. The work is beautiful, and the commitment of our crew and vendors has been unwavering. I’m most proud of the effort and care that each individual person contributed to the show and the fact that we went into the project with a common goal and were, as a team, able to showcase the vision that we, and Tony, had for the series.: I’m really proud of the deep integration of the visual effects – not just visually, but fundamentally within the filmmaking process and storytelling. Tony invited VFX to be a key participant in shaping the story, from early story drafts through to the final color grade. Despite the scale and spectacle of many sequences, the VFX always feel purposeful, supporting the narrative and characters rather than distracting from them. This was significantly bolstered by the return of a large number of key creatives from Season 1, both within the production and at our VFX vendors. That shared experience and established understanding of Tony’s vision for Andor were invaluable in making the VFX an organic part of the show.: I could not be prouder of the entire ILM team for everything they brought to their work on the show. Working across three sites, Andor was a truly global effort, and I particularly enjoyed how each site took complete ownership of their work. It was a privilege working with all of them and contributing to such an exceptional series. VFX progression frame Lucasfilm’s ANDOR Season 2, exclusively on Disney+. Photo courtesy of Lucasfilm. ©2025 Lucasfilm Ltd. & TM. All Rights Reserved. How long have you worked on this show?: This show has been an unbelievable journey. Season 2 alone was nearly 3 years. We wrapped Season 2 in January of 2025. We started prepping Season 2 in February 2022, while we were still in post for Season 1. I officially started working on Season 1 early in 2019 while it was still being developed. So that’s 6 years of time working on Andor. Mohen and I both also worked on Rogue One, so if you factor in the movie, which was shooting in 2015, that’s nearly ten years of work within this part of the Star Wars universe.: I started on the project during early development in the summer of 2019 and finished in December of 2024.: I started on Season 1 in September 2020 and finished up on Season 2 in December 2024. 
What’s the VFX shots count?: We had a grand total of 4,124 shots over the course of our 12 episodes. Outside of Industrial Light & Magic, which oversaw the show, we also partnered with Hybride, Scanline, Soho VFX, and Midas VFX. What is your next project?: You’ll have to wait and see!: Unfortunately, I can’t say just yet either! A big thanks for your time. WANT TO KNOW MORE?ILM: Dedicated page about Andor – Season 2 on ILM website. © Vincent Frei – The Art of VFX – 2025 #andor #season #mohen #leo #production
    WWW.ARTOFVFX.COM
    Andor – Season 2: Mohen Leo (Production VFX Supervisor), TJ Falls (Production VFX Producer) and Scott Pritchard (ILM VFX Supervisor)
    Interviews Andor – Season 2: Mohen Leo (Production VFX Supervisor), TJ Falls (Production VFX Producer) and Scott Pritchard (ILM VFX Supervisor) By Vincent Frei - 22/05/2025 In 2023, Mohen Leo (Production VFX Supervisor), TJ Falls (Production VFX Producer), and Scott Pritchard (ILM VFX Supervisor) offered an in-depth look at the visual effects of Andor’s first season. Now, the trio returns to share insights into their work on the second—and final—season of this critically acclaimed series. Tony Gilroy is known for his detailed approach to storytelling. Can you talk about how your collaboration with him evolved throughout the production of Andor? How does he influence the VFX decisions and the overall tone of the series? Mohen Leo (ML): Our history with Tony, from Rogue One through the first season of Andor, had built a strong foundation of mutual trust. For Season 2, he involved VFX from the earliest story discussions, sharing outlines and inviting our ideas for key sequences. His priority is always to keep the show feeling grounded, ensuring that visual effects serve the story’s core and never become extraneous spectacle that might distract from the narrative. TJ Falls (TJ): Tony is a master storyteller. As Mohen mentioned, we have a great history with Tony from Rogue One and through Season 1 of Andor. We had a great rapport with Tony, and he had implicit trust in us. We began prepping Season 2 while we were in post for Season 1. We were having ongoing conversations with Tony and Production Designer Luke Hull as we were completing work for S1 and planning out how we would progress into Season 2. We wanted to keep the show grounded and gritty while amping up the action and urgency. Tony had a lot of story to cover in 12 episodes. The time jumps between the story arcs were something we discussed early on, and the need to be able to not only justify the time jumps but also to provide the audience with a visual bridge to tell the stories that happened off-screen. Tony would look to us to guide and use our institutional knowledge of Star Wars to help keep him honest within the universe. He, similarly, challenged us to maintain our focus and ensure that the visual tone of the series serviced the story. Tony Gilroy and Genevieve O’Reilly on the set of Lucasfilm’s ANDOR Season 2, exclusively on Disney+. Photo by Des Willie. ©2024 Lucasfilm Ltd. & TM. All Rights Reserved. As you’ve returned for Season 2, have there been any significant changes or new challenges compared to the first season? How has the production evolved in terms of VFX and storytelling? (ML): The return of nearly all key creatives from Season 1, both internally and at our VFX vendors, was a massive advantage. This continuity built immediate trust and an efficient shorthand. It made everyone comfortable to be more ambitious, allowing us to significantly expand the scope and complexity of the visual effects for Season 2. (TJ): We had all new directors this season. The rest of the core creative and production teams stayed consistent from Season 1. We worked to keep the creative process as seamless from Season 1 as we could while working with the new directors and adapting to their process while incorporating their individual skills and ideas that they brought to the table. This season we were able to work on location much more than on Season 1. That provided us with a great opportunity to build out the connective tissue between real world constraints and the virtual world we were creating. 
In the case of the Senate Plaza on Coruscant, we also had to stay consistent with what had previously been established, so that was a fun challenge. How did you go about dividing the workload between the various VFX studios? (ML): I can give an answer, but probably better if TJ does. (TJ): We were very specific about how we divided the work on this series. We started, as we usually do, with a detailed breakdown of work for the 12 episodes. Mohen and I then discussed a logical split based on type of work, specific elements, and areas of commonality for particular environments. While cost is always a consideration, we focused our vendor casting around the creative strengths of the studios we were partnering with on the project. ILM is in the DNA of Star Wars, so we knew we’d want to be working with them on some of the most complex work. We chose ILM for the opening TIE Avenger hangar sequence and subsequent escape. We utilized ILM for work in every episode, including the CG KX/K2 work, but their main focus was on Coruscant, and they had substantial work in the ninth episode for the big Senate escape sequence. Hybride’s chief focus was on Palmo Plaza and the Ghorman environments. They dealt with everything Ghorman on the ground from the street extensions and the truck crash, through the Ghorman massacre, sharing shots with ILM with the KX work. For Scanline VFX, we identified three primary areas of focus: the work on Mina Rau, Chandrila, and Yavin. The TIE Fighter sequence in Season 2 is a standout moment. Can you walk us through the VFX process for that particular sequence? What were some of the technical challenges you faced, and how did you work to make it as intense and realistic as possible? (TJ): This is a sequence I’m particularly proud of as VFX played a central role in the sequence coming together from start to finish. We were intimately involved from the initial conversations of the idea for the sequence. Mohen created digital storyboards and we pitched ideas for the sequence to Tony Gilroy. Once we had a sense of the creative brief, we started working with Luke Hull (Production Designer) and the art department on the physical hangar set and brought it into previz for virtual scouting. With Jen Kitching (our Previz Supervisor from The Third Floor) we had a virtual camera set up that allowed us to virtually use the camera and lenses we would have on our shoot. We blocked out shots with Ariel Kleiman (Director) and Christophe Nuyens (the DoP). This went back through previz and techviz so we could meticulously chart out our plan for the shoot. Keeping with our ethos of grounding everything in reality, we wanted to use as much of the practical set as possible. We needed to be sure our handoffs between physical and virtual were seamless – Luke Murphy, our SFX Supervisor, worked closely with us in planning elements and practical effects to be used on the day. Over the course of the shoot, we also had the challenge of the flashing red alarm that goes off once the TIE Avenger crashes into the ceiling. We established the look of the red alarm with Christophe and the lighting team, and then needed to work out the timing. For that, we collaborated with editor John Gilroy to ensure we knew precisely when each alarm beat would flash. Once we had all the pieces, we turned the sequence over to Scott Pritchard and ILM to execute the work.
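An editorial aside to make that alarm-sync workflow concrete: the sketch below shows the kind of timing math involved in pinning a regular pulse to editorial frames. The 24 fps rate, the 1.5-second pulse interval, and the helper itself are illustrative assumptions, not production values from the show.

```python
import math

# Hypothetical helper: given a shot's cut range in frames, return the frames on
# which a regularly pulsing alarm should flash. Treating the pulse as a global
# clock keeps the tempo consistent across every shot in the cut.
def alarm_flash_frames(cut_in, cut_out, fps=24.0, pulse_interval_s=1.5, phase_s=0.0):
    frames_per_pulse = pulse_interval_s * fps          # e.g. 36 frames at 24 fps
    k = math.ceil((cut_in - phase_s * fps) / frames_per_pulse)
    flashes = []
    while True:
        frame = round(phase_s * fps + k * frames_per_pulse)
        if frame > cut_out:
            break
        flashes.append(frame)
        k += 1
    return flashes

# A shot cut in at frame 100 and out at frame 220 would flash on these frames:
print(alarm_flash_frames(100, 220))   # [108, 144, 180, 216]
```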
Scott Pritchard (SP): This sequence was split between our London and Vancouver studios, with London taking everything inside the hangar, and Vancouver handling the exterior shots after Cassian blasts through the hangar door. We started from a strong foundation thanks to two factors: the amazing hangar set and TIE Avenger prop; and having full sequence previs. The hangar set was built about 2/3 of its overall length (as much as could be built on the soundstage), which our environments team extended, adding the hangar doors at the end and also a view to the exterior environment. Extending the hangar was most of the work in the sequence up until the TIE starts moving, where we switched to our CG TIE. As with Season 1, we used a blend of physical SFX work for the pyro effects, augmenting with CG sparks. As TJ mentioned, the hangar’s red warning lighting was a challenge as it had to pulse in a regular tempo throughout the edit. Only the close-up shots of Cassian in the cockpit had practical red lighting, so complex lighting and comp work were required to achieve a consistent look throughout the sequence. ILM London’s compositing supervisor, Claudio Bassi, pitched the idea that as the TIE hit various sections of the ceiling, it would knock out the ceiling lights, progressively darkening the hangar. It was a great motif that helped heighten the tension as we get towards the moment where Cassian faces the range trooper. Once we cut to outside the hangar, ILM Vancouver took the reins. The exterior weather conditions were briefed to us as ‘polar night’ – it’s never entirely dark, instead there’s a consistent low-level ambient light. This was a challenge as we had to consider the overall tonal range of each shot and make sure there was enough contrast to guide the viewer’s eye to where it needed to be, not just on individual shots but looking at eye-trace as one shot cut to another. A key moment is when Cassian fires rockets into an ice arch, leading to its collapse. The ice could very easily look like rock, so we needed to see the light from the rocket’s explosions scattered inside the ice. It required detailed work in both lighting and comp to get to the right look. Again, as the ice arch starts to collapse and the two chase TIE Advanced ships get taken out, it needed careful balancing work to make sure viewers could read the situation and the action in each shot. The world-building in Andor is impressive, especially with iconic locations like Coruscant and Yavin. How did you approach creating these environments and ensuring they felt as authentic as possible to the Star Wars universe? (ML): Our approach to world-building in Andor relied on a close collaboration between the VFX team and Luke Hull, the production designer, along with his art department. This partnership was established in Season 1 and continued for Season 2. Having worked on many Star Wars projects over the decades, VFX was often able to provide inspiration and references for art department designs. For example, for locations like Yavin and Coruscant, VFX provided the art department with existing 3D assets: the Yavin temple model from Rogue One and the Coruscant city layout around the Senate from the Prequel films. The Coruscant model, in particular, involved some ‘digital archaeology.’ The data was stored on tapes from around 2001 and consisted of NURBS models in an older Softimage file format. 
To make them usable, we had to acquire old Softimage 2010 and XSI licenses, install them on a Windows 7 PC, and then convert the data to the FBX format that current software can read. Supplying these original layouts to the art department enabled them to create their new designs and integrate our real-world shooting locations while maintaining consistency with the worlds seen in previous Star Wars productions. Given that Andor is set approximately twenty years after the Prequels, we also had the opportunity to update and adjust layouts and designs to reflect that time difference and realize the specific creative vision Luke Hull and Tony Gilroy had for the show. StageCraft technology is a huge part of the production. How did you use it to bring these complex environments, like Coruscant and Yavin, to life? What are the main benefits and limitations of using StageCraft for these settings? (SP): Our use of StageCraft for Season 2 was similar to that on Season 1. We used it to create the exterior views through the windows of the Safehouse on Coruscant. As with our work for the Chandrillan Embassy in Season 1, we created four different times of day/weather conditions. One key difference was that the foreground buildings were much closer to the Safehouse, so we devised three projection points (one for each room of the Safehouse), which would ensure that the perspective of the exterior was correct for each room. On set we retained a large amount of flexibility with our content. We had our own video feed from the unit cameras, and we were able to selectively isolate and grade sections of the city based on their view through the camera. Working in context like this meant that we could make any final tweaks while each shot was being set up and rehearsed. While we were shooting a scene set at night, the lighting team rigged a series of lights running above the windows that, when triggered, would flash in sequence, casting a moving light along the floor and walls of the set, as if from a moving car above. I thought we could use the LED wall to do something similar from below, catching highlights on the metal pipework that ran across the ceiling. During a break in shooting, I hatched a plan with colour operator Melissa Goddard, brain bar supervisor Ben Brown, and we came up with a moving rectangular section on the LED wall which matched the practical lights for speed, intensity and colour temperature. We set up two buttons on our iPad to trigger the ‘light’ to move in either direction. We demoed the idea to the DP after lunch, who loved it, and so when it came to shoot, he could either call from a car above from the practical lights, or a car below from the LEDs. (ML): Just to clarify – the Coruscant Safehouse set was the only application of Stagecraft LED screens in Season 2. All other Coruscant scenes relied on urban location photography or stage sets with traditional blue screen extensions. The various Yavin locations were achieved primarily with large backlot sets at Longcross Studios. A huge set of the airfield, temple entrance and partial temple interior was extended by Scanline VFX, led by Sue Rowe, in post, creating the iconic temple exterior from A New Hope. VFX also added flying and parked spaceships, and augmented the surrounding forest to feel more tropical. Andor blends CG with actual real-world locations. Can you share how you balanced these two elements, especially when creating large-scale environments or specific landscapes that felt grounded in reality? 
(SP): A great example of this is the environment around the Senate. The plates for this were shot in the City of Arts & Sciences in Valencia. Blending the distinctive Calatrava architecture with well-known Star Wars buildings like the Senate was an amazing challenge; it wasn’t immediately clear how the two could sit alongside each other. Our Vancouver team, led by Tania Richard, did an incredible job taking motifs and details from the Valencia buildings and incorporating them into the Senate building on both large and small scales, but still contiguous with the overall Senate design. The production team was ingenious in how they used each of the Valencia buildings to represent many locations around the Senate and the surrounding areas. For example, the Science Museum was used for the walkway where Cassian shoots Kloris (Mon’s driver), the main entrance to the Senate, and the interior of the Senate Atrium (where Ghorman Senator Oran is arrested). It was a major challenge ensuring that all those locations were represented across the larger environment, so viewers understood the geography of the scene, but also blended with the design language of their immediate surroundings. Everything in the Senate Plaza had a purpose. When planning the overall layout of the Plaza, we considered aspects such as how far Senators would realistically walk from their transports to the Senate entrance. When extending the Plaza beyond the extents of the City of Arts & Sciences, we used Calatrava architecture from elsewhere. The bridge just in front of the Senatorial Office Building is based on a Calatrava-designed bridge in my home city of Dublin. As we reach the furthest extents of the Senate Plaza, we begin blending in more traditional Coruscant architecture so as to soften the transition to the far background. Coruscant is such a pivotal location in Star Wars. How did you approach creating such a vast, densely populated urban environment? What were the key visual cues that made it feel alive and realistic? (ML): Our approach to Coruscant in Season 2 built upon what we established in the first season: primarily, shooting in real-world city locations whenever feasible. The stunning Calatrava architecture at Valencia’s City of Arts and Sciences, for instance, served as the foundation for the Senate exterior and other affluent districts. For the city’s grittier neighborhoods, we filmed in urban environments in London, like the Barbican and areas around Twickenham Stadium. Filming in these actual city locations provided a strong, realistic basis for the cinematography, lighting, and overall mood of each environment. This remained true even when VFX later modified large portions of the frame with Star Wars architecture. This methodology gave the director and DP confidence on set that their vision would carry through to the final shot. Our art department and VFX concept artists then created numerous paintovers based on plates and location photography, offering clear visual guides for transforming each real location into its Coruscant counterpart during post-production. For the broader cityscapes, we took direct inspiration from 3D street maps of cities such as Tokyo, New York, and Hong Kong. We would exaggerate the scale and replace existing buildings with our Coruscant designs while preserving the fundamental urban patterns.
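As a toy illustration of that last point (not a production tool): the sketch below keeps each building's real map position so the underlying street pattern survives, while exaggerating its vertical scale and swapping in a sci-fi asset chosen by footprint size. The data layout, scale factor, and asset names are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Building:
    x: float            # footprint centre in metres, taken from a real 3D street map
    y: float
    footprint_m2: float
    height_m: float

def to_megacity(buildings, height_scale=8.0):
    """Return (position, exaggerated height, asset name) tuples for a set-dressing pass."""
    dressed = []
    for b in buildings:
        if b.footprint_m2 > 2000:
            asset = "coruscant_tower_large"    # hypothetical asset library names
        elif b.footprint_m2 > 500:
            asset = "coruscant_tower_medium"
        else:
            asset = "coruscant_infill_spire"
        # Positions are preserved; only the vertical scale is pushed.
        dressed.append(((b.x, b.y), b.height_m * height_scale, asset))
    return dressed

blocks = [Building(0, 0, 2500, 40), Building(80, 15, 600, 25), Building(130, -40, 180, 12)]
for pos, height, asset in to_megacity(blocks):
    print(pos, round(height), asset)
```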
When it comes to creating environments like Yavin, which has a very natural, jungle-like aesthetic, how do you ensure the VFX stays true to the organic feel of the location while still maintaining the science-fiction elements of Star Wars? (ML): Nearly all of the Yavin jungle scenes were shot in a large wooded area that is part of Longcross Studios. The greens and art departments did an amazing job augmenting the natural forest with tropical plants and vines. The scenes featuring the two rebel factions in the clearing were captured almost entirely in-camera, with VFX primarily adding blaster fire, augmenting the crashed ship, and painting out equipment. Only the shots of the TIE Avenger landing and taking off, as well as the giant creature snatching the two rebels, featured significant CG elements. The key elements connecting these practical locations back to the Yavin established in A New Hope and Rogue One were the iconic temples. The establishing shots approaching the main temple in episode 7 utilized plate photography from South America, which had been shot for another Disney project but ultimately not used. Other aerial shots, such as the U-Wing flying above the jungle in episode 12, were fully computer-generated by ILM. K-2SO is a beloved character, and his return is highly anticipated. What can you tell us about the process of bringing him back to life with VFX in Season 2? What new challenges did this bring compared to his original appearance? (SP): We had already updated a regular KX droid for the scene on Niamos in Season 1, so much of the work to update the asset to the latest pipeline requirements had already been done. We now needed to switch over to the textures & shaders specific to K2, and give them the same updates. Unique to Series 2 was that there were a number of scenes involving both a practical and a digital K2 – when he gets crushed on Ghorman in episode 8, and then ‘rebooted’ on Yavin in episode 9. The practical props were a lot more beaten up than our hero asset, so we made bespoke variants to match the practical droid in each sequence. Additionally, for the reboot sequence on Yavin, we realised pretty quickly that the extreme movements meant that we were seeing into areas that previously had not required much detail – for instance, underneath his shoulder armour. We came up with a shoulder joint design that allowed for the required movement while also staying mechanically correct. When we next see him in Episode 10, a year has passed, and he is now the K-2SO as we know him from Rogue One. K-2SO has a unique design, particularly in his facial expressions and movement. How did you approach animating him for Season 2, and were there any specific changes or updates made to his character model or animation? (SP): Following Rogue One, Mohen made detailed records of the takeaways learned from creating K-2SO, and he kindly shared these notes with us early on in the show. They were incredibly helpful in tuning the fine details of the animation. Our animation team, led by Mathieu Vig, did a superb job of identifying the nuances of Alan’s performance and making sure they came across. There were plenty of pitfalls to avoid – for instance, the curve to his upper back meant that it was very easy for his neck to look hyperextended. We also had to be very careful with his eyes, as they’re sources of light, they could very easily look cartoonish if they moved around too much. Dialling in just the right amount of eye movement was crucial to a good performance. 
As the eyes also had several separate emissive and reflective components, they required delicate balancing in the comp on a per-shot basis. Luckily, we had great reference from Rogue One to be able to dial in the eyes to suit both the lighting of a shot but also its performance details. One Rogue One shot in particular, where he says ‘Your behavior, Jyn Erso, is continually unexpected’, was a particularly good reference for how we could balance the lights in his eyes to, in effect, enlarge his pupils, and give him a softer expression. K-2SO also represented my first opportunity to work with ILM’s new studio in Mumbai. Amongst other shots, they took on the ‘hallway fight’ sequence in Episode 12 where K2 dispatches Heert and his troopers, and they did a fantastic job from animation right through to final comp. K-2SO’s interactions with the live-action actors are key to his character. How did you work with the actors to ensure his presence felt as real and integrated as possible on screen, especially in terms of timing and reactions? (ML): Alan Tudyk truly defined K-2SO in Rogue One, so his return for Andor Season 2 was absolutely critical to us. He was on set for every one of K2’s shots, performing on stilts and in a performance capture suit. This approach was vital because it gave Alan complete ownership of the character’s physical performance and, crucially, allowed for spontaneous, genuine interactions with the other actors, particularly Diego Luna. Witnessing Alan and Diego reunite on camera was fantastic; that unique chemistry and humor we loved in Rogue One was instantly palpable. In post-production, our VFX animators then meticulously translated every nuance of Alan’s on-set performance to the digital K-2SO model. It’s a detailed process that still requires artistic expertise. For instance, K2’s facial structure is largely static, so direct translation of Alan’s facial expressions isn’t always possible. In these cases, our animators found creative solutions – translating a specific facial cue from Alan into a subtle head tilt or a particular eye movement for K2, always ensuring the final animation remained true to the intent and spirit of Alan’s original performance. Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint? (ML): The Plaza sequence in episode 8, which runs for about 23 minutes, stands out as particularly memorable – both for its challenges and its rewarding outcome. Just preparing for it was a daunting task. Its successful execution hinged on incredibly tight collaboration between numerous departments: stunts, creature effects, special effects, the camera department, our tireless greenscreens crew, and of course, VFX. The stunts team, under Marc Mailley, drove the choreography of all the action. Our On-Set VFX Supervisor, Marcus Dryden, was instrumental. He worked hand-in-glove with the director, DP, and assistant directors to ensure we meticulously captured all the necessary elements. This included everything from crowd replication plates and practical effects elements to the performances of stunt teams and creature actors, plus all the crucial on-set data. The shoot for this sequence alone took over three weeks. Hybride, under the leadership of Joseph Kasparian and Olivier Beaulieu, then completed the environments, added the blaster fire, and augmented the special effects in post-production, with ILM contributing the KX droids that wreak havoc in the plaza. 
(SP): I agree with Mohen here; for me, the Ghorman Plaza episode is the most rewarding to have worked on. It required us to weave our work into that of so many other departments – stunts, sfx, costume – to name just a few. When we received the plates, to see the quality of the work that had gone into the photography alone was inspirational for me and the ILM crew. It’s gratifying to be part of a team where you know that everyone involved is on top of their game. And of course all that is underpinned by writing of that calibre from Tony Gilroy and his team – it just draws everything together. From a pure design viewpoint, I’m also very proud of the work that Tania Richard and her ILM Vancouver crew did for the Senate shots. As I mentioned before, it was a hugely challenging environment not just logistically, but also in bringing together two very distinctive architectural languages, and they made them work in tandem beautifully. Looking back on the project, what aspects of the visual effects are you most proud of? (TJ): I’m incredibly proud of this entire season. The seamless collaboration we had between Visual Effects and every other department made the work, while challenging, an absolute joy to execute. Almost all of the department heads returned from the first season, which provided a shorthand as we started the show with implicit trust and understanding of what we were looking to achieve. The work is beautiful, and the commitment of our crew and vendors has been unwavering. I’m most proud of the effort and care that each individual person contributed to the show and the fact that we went into the project with a common goal and were, as a team, able to showcase the vision that we, and Tony, had for the series. (ML): I’m really proud of the deep integration of the visual effects – not just visually, but fundamentally within the filmmaking process and storytelling. Tony invited VFX to be a key participant in shaping the story, from early story drafts through to the final color grade. Despite the scale and spectacle of many sequences, the VFX always feel purposeful, supporting the narrative and characters rather than distracting from them. This was significantly bolstered by the return of a large number of key creatives from Season 1, both within the production and at our VFX vendors. That shared experience and established understanding of Tony’s vision for Andor were invaluable in making the VFX an organic part of the show. (SP): I could not be prouder of the entire ILM team for everything they brought to their work on the show. Working across three sites, Andor was a truly global effort, and I particularly enjoyed how each site took complete ownership of their work. It was a privilege working with all of them and contributing to such an exceptional series. VFX progression frame Lucasfilm’s ANDOR Season 2, exclusively on Disney+. Photo courtesy of Lucasfilm. ©2025 Lucasfilm Ltd. & TM. All Rights Reserved. How long have you worked on this show? (TJ): This show has been an unbelievable journey. Season 2 alone was nearly 3 years. We wrapped Season 2 in January of 2025. We started prepping Season 2 in February 2022, while we were still in post for Season 1. I officially started working on Season 1 early in 2019 while it was still being developed. So that’s 6 years of time working on Andor. Mohen and I both also worked on Rogue One, so if you factor in the movie, which was shooting in 2015, that’s nearly ten years of work within this part of the Star Wars universe.
(ML): I started on the project during early development in the summer of 2019 and finished in December of 2024. (SP): I started on Season 1 in September 2020 and finished up on Season 2 in December 2024. What’s the VFX shot count? (TJ): We had a grand total of 4,124 shots over the course of our 12 episodes. Outside of Industrial Light & Magic, which oversaw the show, we also partnered with Hybride, Scanline, Soho VFX, and Midas VFX. What is your next project? (TJ): You’ll have to wait and see! (SP): Unfortunately, I can’t say just yet either! A big thanks for your time. WANT TO KNOW MORE? ILM: Dedicated page about Andor – Season 2 on ILM website. © Vincent Frei – The Art of VFX – 2025
  • Noctua and Pulsar create gaming mouse with built-in fan for sweaty hands

    In context: Noctua is mostly known for its fans, CPU heatsinks, and other cooling products for computing devices. The Austrian company also cooperates with third-party peripheral and GPU manufacturers, though its latest partnership is likely the most unusual so far.
    Noctua is putting a 4x4 cm (1.57 x 1.57 inches) fan inside a competitive gaming mouse made by Pulsar Gaming Gears. The Korean peripheral manufacturer announced the oddity ahead of Computex, promising that the new mouse would be demoed during the computer hardware show held in Taipei, Taiwan.
    Noctua is well-known for the "exceptional" cooling performance of its fans, Pulsar stated, while gamers – and competitive gamers in particular – are prone to sweating during esports events. The Asian manufacturer is therefore equipping its pre-existing Feinmann gaming mouse with a tiny Noctua fan, so that gamers can be comfortable even in the heat of the most ferocious (e)battles.
    Pulsar didn't have to reinvent the wheel, as the Feinmann already includes a very light shell riddled with holes. The Feinmann F01 is an ultra-lightweight gaming mouse weighing just 46g, providing all the features a competitive player would expect, including an 8,000Hz polling rate, a 32,000 DPI sensor, a "fast" 8K docking charger, and more.
    Thanks to the newly embedded Noctua fan, gamers buying the new Feinmann model can expect their hands to be constantly cool. Even users with particularly sweaty grips should enjoy a more comfortable gaming experience. We would very much like to test Pulsar's statements while replaying Doom Eternal's DLC 1 during summer months, just to be absolutely sure it really works the way the manufacturer says.

    Some Computex attendees say the Noctua-powered Feinmann mouse is indeed comfortable and the additional air flow keeps palms cool. Pulsar's product doesn't appear to be just a gimmick, though the specs are clearly being affected by the new fan. The mouse weight is now a bit higher (65g), while battery life should be around 10-11 hours.
    Pulsar said the mouse is still a prototype, so battery life and other specs are "preliminary." Modern wireless mice can go on for hundreds of hours on a single charge, so we're curious to know how the final product will turn out. The standard version of the Feinmann F01 Gaming Mouse is currently on sale at $180, so we expect the new model will cost more than that.
  • Google still doesn't have much to show for Android XR

    When Google unveiled Android XR last year, it seemed like a clear response to Apple's Vision Pro: It was a plan for a true mixed reality platform that could easily hop between AR, VR and smart glasses like Meta's Ray-Bans. At Google I/O 2025 today, Google announced the second developer preview for Android XR, and it also showed off a bit more about how it could work in headsets and smart glasses. It'll likely be a while before we see Android XR devices in action, though, as Google also revealed Samsung's Project Moohan headset will arrive later this year. Additionally, Xreal is also building Project Aura, a pair of tethered smart glasses powered by the platform.
    Update: Google demoed prototype Android XR smart glasses at I/O with live translation, which Engadget's Karissa Bell called "lightweight, but with a limited field of view." Google isn't planning to sell those devices, but it is partnering with Warby Parker and Gentle Monster to provide frames for future smart glasses. 
    Basically, there really isn't much to get excited about just yet. It's clear that Google is working hard to catch up with both Apple and Meta, which actually have XR products on the shelves already. Given that Google tends to kill its ambitious projects with a swiftness — just take a look at Google Glass, Cardboard and Daydream, which were all early stabs at AR and VR — it's hard to put much faith in the future of Android XR. Is the availability of much better XR hardware enough to make the platform a success? At this point, it's just too tough to tell.
    For now, though, it looks like Google is aiming to deliver all of the features you'd expect with Android XR. Its second developer preview adds the ability to play 180-degree and 360-degree immersive videos, bring hand-tracking into apps and support dynamic refresh rates (which could seriously help battery life). As expected, Google is also making it easier to integrate its Gemini AI into Android XR apps, something the company promised when it first announced the platform last year.
    In a series of pre-rendered videos, Google showed off the ideal ways to use Gemini in smart glasses and headsets. If your glasses have a built-in display (something Meta's Ray-Bans don't offer yet), you could see a small Google Map to give you directions, message friends while you're prepping dinner or take a picture while dancing with your partner at sunset (seriously). All I can say is: "Cool demo, bro." Get back to us when this is all working in headsets and glasses we can actually wear.
    Update 5/21, 2:45PM ET: This story has been updated with references to Google's XR prototype glasses. This article originally appeared on Engadget at https://www.engadget.com/ar-vr/google-still-doesnt-have-much-to-show-for-android-xr-174529434.html?src=rss
  • Live Updates From Google I/O 2025

    I wish I was making this stuff up, but chaos seems to follow me at all tech events. After waiting an hour to try out Google’s hyped-up Android XR smart glasses for five minutes, I was actually given a three-minute demo, where I actually had 90 seconds to use Gemini in an extremely controlled environment. And actually, if you watch the video in my hands-on write-up below, you’ll see that I spent even less time with it because Gemini fumbled a few times in the beginning. Oof. I really hope there’s another chance to try them again because it was just too rushed. I think it might be the most rushed product demo I’ve ever had in my life, and I’ve been covering new gadgets for the past 15 years. —Raymond Wong
    Google, a company valued in the trillions, seemingly brought one pair of Android XR smart glasses for press to demo… and one pair of Samsung’s Project Moohan mixed reality headset running the same augmented reality platform. I’m told the wait is 1 hour to try either device for 5 minutes. Of course, I’m going to try out the smart glasses. But if I want to demo Moohan, I need to get back in line and wait all over again. This is madness! —Raymond Wong
    May 20: Keynote Fin
    Talk about a loooooong keynote. Total duration: 1 hour and 55 minutes, and then Sundar Pichai walked off stage. What do you make of all the AI announcements? Let’s hang in the comments! I’m headed over to a demo area to try out a pair of Android XR smart glasses. I can’t lie, even though the video stream from the live demo lagged for a good portion, I’m hyped! It really feels like Google is finally delivering on Google Glass over a decade later. Shoulda had Google co-founder Sergey Brin jump out of a helicopter and land on stage again, though. —Raymond Wong
    Pieces of Project Astra, Google’s computer vision-based UI, are winding up in various different products, it seems, and not all of them are geared toward smart glasses specifically. One of the most exciting updates to Astra is “computer control,” which allows one to do a lot more on their devices with computer vision alone. For instance, you could just point your phone at an object, such as a bike, and then ask Astra to search for the bike, find some brakes for it, and then even pull up a YouTube tutorial on how to fix it—all without typing anything into your phone. —James Pero
    Shopping bots aren’t just for scalpers anymore. Google is putting the power of automated consumerism in your hands with its new AI shopping tool. There are some pretty wild ideas here, too, including a virtual shopping avatar that’s supposed to represent your own body—the idea is you can make it try on clothes to see how they fit. How all that works in practice is TBD, but if you’re ready for a full AI shopping experience, you’ve finally got it. For the whole story, check out our story from Gizmodo’s Senior Editor, Consumer Tech, Raymond Wong. —James Pero
    I got what I wanted. Google showed off what its Android XR tech can bring to smart glasses. In a live demo, Google showcased how a pair of unspecified smart glasses did a few of the things that I’ve been waiting to do, including projecting live navigation and remembering objects in your environment—basically the stuff that it pitched with Project Astra last year, but in a glasses form factor. There’s still a lot that needs to happen, both hardware and software-wise, before you can walk around wearing glasses that actually do all those things, but it was exciting to see that Google is making progress in that direction.
It’s worth noting that not all of the demos went off smoothly—there was lots of stutter in the live translation demo—but I guess props to them for giving it a go. When we’ll actually get to walk around wearing functional smart glasses with some kind of optical passthrough or virtual display is anyone’s guess, but the race is certainly heating up. —James Pero Google’s SynthID has been around for nearly three years, but it’s been largely kept out of the public eye. The system disturbs AI-generated images, video, or audio with an invisible, undetectable watermark that can be observed with Google DeepMind’s proprietary tool. At I/O, Google said it was working with both Nvidia and GetReal to introduce the same watermarking technique with those companies’ AI image generators. Users may be able to detect these watermarks themselves, even if only part of the media was modified with AI. Early testers are getting access to it “today,” but hopefully more people can acess it at a later date from labs.google/synthid. — Kyle Barr This keynote has been going on for 1.5 hours now. Do I run to the restroom now or wait? But how much longer until it ends??? Can we petiton to Sundar Pichai to make these keynotes shorter or at least have an intermission? Update: I ran for it right near the end before Android XR news hit. I almost made it… —Raymond Wong © Raymond Wong / Gizmodo Google’s new video generator Veo, is getting a big upgrade that includes sound generation, and it’s not just dialogue. Veo 3 can also generate sound effects and music. In a demo, Google showed off an animated forest scene that includes all three—dialogue, sound effects, and video. The length of clips, I assume, will be short at first, but the results look pretty sophisticated if the demo is to be believed. —James Pero If you pay for a Google One subscription, you’ll start to see Gemini in your Google Chrome browserlater this week. This will appear as the sparkle icon at the top of your browser app. You can use this to bring up a prompt box to ask a question about the current page you’re browsing, such as if you want to consolidate a number of user reviews for a local campsite. — Kyle Barr © Google / GIF by Gizmodo Google’s high-tech video conferencing tech, now called Beam, looks impressive. You can make eye contact! It feels like the person in the screen is right in front of you! It’s glasses-free 3D! Come back down to Earth, buddy—it’s not coming out as a consumer product. Commercial first with partners like HP. Time to apply for a new job? —Raymond Wong here: Google doesn’t want Search to be tied to your browser or apps anymore. Search Live is akin to the video and audio comprehension capabilities of Gemini Live, but with the added benefit of getting quick answers based on sites from around the web. Google showed how Search Live could comprehend queries about at-home science experiment and bring in answers from sites like Quora or YouTube. — Kyle Barr Google is getting deep into augmented reality with Android XR—its operating system built specifically for AR glasses and VR headsets. Google showed us how users may be able to see a holographic live Google Maps view directly on their glasses or set up calendar events, all without needing to touch a single screen. This uses Gemini AI to comprehend your voice prompts and follow through on your instructions. Google doesn’t have its own device to share at I/O, but its planning to work with companies like XReal and Samsung to craft new devices across both AR and VR. 
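    Google hasn’t said how SynthID works under the hood, but the general idea of an invisible, statistically detectable watermark is easy to illustrate. The sketch below is a toy illustration of that general idea, not Google’s method; the function names and numbers are hypothetical. It embeds a faint pseudo-random pattern keyed by a secret seed and later detects it by correlating against that same pattern.

```python
# Toy invisible-watermark demo: embed a faint, seed-keyed pattern, then detect
# it by correlation. A generic illustration only; NOT SynthID's actual algorithm.
import numpy as np

def embed_watermark(image: np.ndarray, seed: int, strength: float = 2.0) -> np.ndarray:
    """Add a low-amplitude +/- `strength` pattern derived from `seed`."""
    rng = np.random.default_rng(seed)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + strength * pattern, 0, 255)

def detect_watermark(image: np.ndarray, seed: int, threshold: float = 1.0) -> bool:
    """Correlate the image against the secret pattern; a high score means 'marked'."""
    rng = np.random.default_rng(seed)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    score = float(np.mean((image - image.mean()) * pattern))
    return score > threshold

photo = np.random.default_rng(42).uniform(0, 255, size=(256, 256))  # stand-in for a generated image
marked = embed_watermark(photo, seed=1234)

print(detect_watermark(marked, seed=1234))   # True: watermark present
print(detect_watermark(photo, seed=1234))    # False: clean image
print(float(np.abs(marked - photo).max()))   # at most 2.0 per pixel, imperceptible
```

    A production system also has to survive cropping, compression, and re-encoding, which this naive per-pixel correlation would not; that robustness is where the proprietary work lies.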
    This keynote has been going on for 1.5 hours now. Do I run to the restroom now or wait? But how much longer until it ends??? Can we petition Sundar Pichai to make these keynotes shorter, or at least add an intermission? Update: I ran for it right near the end, before the Android XR news hit. I almost made it… —Raymond Wong

    Google’s new video generator, Veo, is getting a big upgrade that includes sound generation, and it’s not just dialogue. Veo 3 can also generate sound effects and music. In a demo, Google showed off an animated forest scene that included all three—dialogue, sound effects, and music. The length of clips, I assume, will be short at first, but the results look pretty sophisticated if the demo is to be believed. —James Pero

    If you pay for a Google One subscription, you’ll start to see Gemini in your Google Chrome browser (and—judging by this developer conference—everywhere else) later this week. It will appear as the sparkle icon at the top of your browser app. You can use it to bring up a prompt box and ask a question about the current page you’re browsing, such as if you want to consolidate a bunch of user reviews for a local campsite. — Kyle Barr

    Google’s high-tech video conferencing tech, now called Beam, looks impressive. You can make eye contact! It feels like the person on the screen is right in front of you! It’s glasses-free 3D! Come back down to Earth, buddy—it’s not coming out as a consumer product. Commercial first, with partners like HP. Time to apply for a new job? —Raymond Wong

    Google doesn’t want Search to be tied to your browser or apps anymore. Search Live is akin to the video and audio comprehension capabilities of Gemini Live, but with the added benefit of getting quick answers based on sites from around the web. Google showed how Search Live could comprehend queries about an at-home science experiment and bring in answers from sites like Quora or YouTube. — Kyle Barr

    Google is getting deep into augmented reality with Android XR—its operating system built specifically for AR glasses and VR headsets. Google showed us how users may be able to see a holographic live Google Maps view directly on their glasses or set up calendar events, all without needing to touch a single screen. This uses Gemini AI to comprehend your voice prompts and follow through on your instructions. Google doesn’t have its own device to share at I/O, but it’s planning to work with companies like Xreal and Samsung to craft new devices across both AR and VR. — Kyle Barr

    I know how much you all love subscriptions! Google does too, apparently, and is now offering a $250-per-month AI bundle that groups together some of its most advanced AI services. Subscribing to Google AI Ultra gets you: Gemini and its full capabilities; Flow, a new, more advanced AI filmmaking tool based on Veo; Whisk, which allows text-to-image creation; NotebookLM, an AI note-taking app; Gemini in Gmail and Docs; Gemini in Chrome; Project Mariner, an agentic research AI; and 30TB of storage. I’m not sure who needs all of this, but maybe there are more AI superusers than I thought. —James Pero

    Google CEO Sundar Pichai was keen to claim that users are big, big fans of AI Overviews in Google Search results. If there wasn’t already enough AI in your search bar, Google will now stick an entire “AI Mode” tab next to the Google Lens button, powered by the Gemini 2.5 model. It opens up an entirely new UI for searching via a prompt with a chatbot. After you input your rambling search query, it will bring up an assortment of short-form textual answers, links, and even a Google Maps widget, depending on what you were looking for. AI Mode should be available starting today. Google said AI Mode pulls together information from the web alongside its other data, like weather or academic research through Google Scholar. It should also eventually encompass your “personal context,” which will be available later this summer. Eventually, Google will add more AI Mode capabilities directly to AI Overviews. — Kyle Barr

    May 20: News Embargo Has Lifted!

    Get your butt over to Gizmodo.com’s home page because the Google I/O news embargo just lifted. We’ve got a bunch of stories, including one about Google partnering with Xreal on a new pair of “optical see-through” (OST) smart glasses called Project Aura. The smart glasses run Android XR and are powered by a Qualcomm chip. You can see three cameras. Wireless, these are not—you’ll need to tether to a phone or other device. Update: Little scoop: I’ve confirmed that Project Aura has a 70-degree field of view, which is way wider than the One Pro’s 57-degree FOV. —Raymond Wong

    Google’s DeepMind CEO showed off the updated version of Project Astra running on a phone and drove home how its “personal, proactive, and powerful” AI features are the groundwork for a “universal assistant” that truly understands and works on your behalf. If you think Gemini is a fad, it’s time to get familiar with it, because it’s not going anywhere. —Raymond Wong

    May 20: Gemini 2.5 Pro Is Here

    Google says Gemini 2.5 Pro is its “most advanced model yet,” with “enhanced reasoning,” better coding ability, and the capacity to create interactive simulations. You can try it now via Google AI Studio. —James Pero

    There are two major types of transformer-based generative AI in use today: LLMs, AKA large language models, and diffusion models, which are mostly used for image generation. The Gemini Diffusion model blurs the line between the two. Google said its new research model can iterate on a solution quickly and correct itself while generating an answer. For math or coding prompts, Gemini Diffusion can potentially output an entire response much faster than a typical chatbot. Unlike a traditional LLM, which may take a few seconds to answer a question, Gemini Diffusion can create a response to a complex math equation in the blink of an eye, and still share the steps it took to reach its conclusion. — Kyle Barr
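    To make that autoregressive-versus-diffusion contrast concrete, here’s a toy sketch. It’s a conceptual illustration only, not how Gemini Diffusion is actually implemented: the autoregressive decoder emits one token per sequential step, while the diffusion-style decoder starts from a fully masked draft and refines every position in parallel over a handful of passes.

```python
# Toy contrast: autoregressive decoding emits one token per sequential step,
# while diffusion-style decoding refines the whole answer in parallel passes.
# Purely conceptual; not Gemini Diffusion's actual implementation.
import random

ANSWER = "the derivative of x^2 is 2x".split()   # stand-in for a model's final answer

def autoregressive_decode(answer):
    """Emit one token at a time; each token waits on all the tokens before it."""
    out = []
    for token in answer:
        out.append(token)
    return " ".join(out), len(answer)            # len(answer) sequential steps

def diffusion_style_decode(answer, passes=3, seed=0):
    """Start from an all-masked draft and 'denoise' every position in parallel."""
    rng = random.Random(seed)
    draft = ["[MASK]"] * len(answer)
    for step in range(1, passes + 1):
        for i, token in enumerate(answer):       # conceptually one parallel pass
            if draft[i] == "[MASK]" and rng.random() < step / passes:
                draft[i] = token                 # this position is now "denoised"
    return " ".join(draft), passes               # only `passes` sequential steps

print(autoregressive_decode(ANSWER))   # 6 sequential steps for a 6-token answer
print(diffusion_style_decode(ANSWER))  # the same answer after 3 refinement passes
```

    The speed argument falls out of the step counts: latency scales with the number of sequential passes, so a few parallel refinement passes can beat hundreds of one-token-at-a-time steps on a long answer.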
    New Gemini 2.5 Flash and Gemini Pro models are incoming and, naturally, Google says both are faster and more sophisticated across the board. One of the improvements for Gemini 2.5 Flash is even more inflection when speaking. Unfortunately for my ears, Google demoed the new Flash speaking in a whisper that sent chills down my spine. —James Pero

    Is anybody keeping track of how many times Google execs have said “Gemini” and “AI” so far? Oops, I think I’m already drunk, and we’re only 20 minutes in. —Raymond Wong

    Google’s Project Astra is supposed to be getting much better at avoiding hallucinations, AKA when the AI makes stuff up. Project Astra’s vision and audio comprehension capabilities are supposed to be far better at knowing when you’re trying to trick it. In a video, Google showed how its Gemini Live AI wouldn’t buy your bullshit if you tell it that a garbage truck is a convertible, a lamp pole is a skyscraper, or your shadow is some stalker. This should hopefully mean the AI doesn’t confidently lie to you, either. Google CEO Sundar Pichai said “Gemini is really good at telling you when you’re wrong.” These enhanced features should be rolling out today for the Gemini app on iOS and Android. — Kyle Barr

    May 20: Release the Agents

    Like pretty much every other AI player, Google is pursuing agentic AI in a big way. I’d prepare for a lot more talk about how Gemini can take tasks off your hands as the keynote progresses. —James Pero

    Google has finally moved Project Starline—its futuristic video-calling machine—into a commercial product called Google Beam. According to Pichai, Google Beam can take a 2D image and transform it into a 3D one, and it will also incorporate live translation. —James Pero

    Google’s CEO, Sundar Pichai, says Google is shipping at a relentless pace, and to be honest, I tend to agree. There are tons of Gemini models out there already, even though Gemini has only been out for two years. Probably my favorite milestone, though, is that it has now completed Pokémon Blue, earning all eight badges, according to Pichai. —James Pero

    May 20: Let’s Do This

    Buckle up, kiddos, it’s I/O time. Methinks there will be a lot to get to, so you may want to grab a snack now. —James Pero

    Counting down until the keynote… only a few more minutes to go. The DJ just said AI is changing music and how it’s made. But don’t forget that we’re all here… in person. Will we all be wearing Android XR smart glasses next year? Mixed reality headsets? —Raymond Wong

    Fun fact: I haven’t attended Google I/O in person since before Covid-19. The Wi-Fi is definitely stronger and more stable now. It’s so great to be back and covering for Gizmodo. Dream job, unlocked! —Raymond Wong

    Mini breakfast burritos… bagels… but these bagels can’t compare to real Made in New York City bagels with that authentic NY water 😏 —Raymond Wong

    I’ve arrived at the Shoreline Amphitheatre in Mountain View, Calif., where the Google I/O keynote is taking place in 40 minutes. Seats are filling up. But first, I must go check out the breakfast situation because my tummy is growling… —Raymond Wong

    May 20: Should We Do a Giveaway?

    Google I/O attendees get a special tote bag, a metal water bottle, a cap, and a cute sheet of stickers. I always end up donating this stuff to Goodwill during the holidays. A guy living in NYC with two cats only has so much room for tote bags and water bottles… It would be cool to do a giveaway. Leave a comment to let us know if you’d be into that, and I can pester top brass to make it happen 🤪 —Raymond Wong

    May 20: Got My Press Badge!

    In 13 hours, Google will blitz everyone with Gemini AI, Gemini AI, and tons more Gemini AI. Who’s ready for… Gemini AI? —Raymond Wong

    May 19: Google Glass: The Redux

    Google is very obviously inching toward the release of some kind of smart glasses product for the first time since (gulp) Google Glass, and if I were a betting man, I’d say this one will have a much warmer reception than its forebear. I’m not saying Google can snatch the crown from Meta and its Ray-Ban smart glasses right out of the gate, but if it plays its cards right, it could capitalize on the integration with its other hardware (hello, Pixel devices) in a big way. Meta may finally have a real competitor on its hands. ICYMI: Here’s Google’s President of the Android Ecosystem, Sameer Samat, teasing some kind of smart glasses device in a recorded demo last week. —James Pero

    Hi folks, I’m James Pero, Gizmodo’s new Senior Writer. There’s a lot we have to get to with Google I/O, so I’ll keep this introduction short. I like long walks on the beach, the wind in my nonexistent hair, and I’m really, really looking forward to bringing you even more of the spicy, insightful, and entertaining coverage on consumer tech that Gizmodo is known for. I’m starting my tenure here out hot with Google I/O, so make sure you check back throughout the week to get those sweet, sweet blogs and commentary from me and Gizmodo’s Senior Consumer Tech Editor Raymond Wong. —James Pero

    Hey everyone! Raymond Wong, senior editor in charge of Gizmodo’s consumer tech team, here! Landed in San Francisco (the sunrise was *chef’s kiss*), and I’ll be making my way over to Mountain View, California, later today to pick up my press badge and scope out the scene for tomorrow’s Google I/O keynote, which kicks off at 1 p.m. ET / 10 a.m. PT. Google I/O is a developer conference, but that doesn’t mean it’s news only for engineers. While there will be a lot of nerdy stuff that will have developers hollering, what Google announces—expect updates on Gemini AI, Android, and Android XR, to name a few headliners—will shape consumer products (hardware, software, and services) for the rest of this year and the years to come. I/O is a glimpse at Google’s technology roadmap as AI weaves itself into how we compute at our desks and on the go. This is going to be a fun live blog! —Raymond Wong