• Christian Marclay explores a universe of thresholds in his latest single-channel montage of film clips

    Doors (2022)
    Christian Marclay
    Institute of Contemporary Art Boston
    Through September 1, 2025
    Brooklyn Museum
    Through April 12, 2026

    On the screen, a movie clip plays of a character entering through a door only to exit through another. It cuts to another clip of someone else doing the same thing, over and over, all sourced from a panoply of Western cinema. The audience, sitting for an unknown amount of time, watches this shape-shifting protagonist from different cultural periods come and go as the film endlessly loops.

    So goes Christian Marclay’s latest single-channel film, Doors (2022), currently exhibited for the first time in the United States at the Institute of Contemporary Art Boston. (It also premieres June 13 at the Brooklyn Museum, where it will run through April 12, 2026.) Assembled over ten years, the film is a dizzying feat: a carefully crafted montage of film clips revolving around the simple premise of someone entering through one door and exiting through another. In the exhibition, Marclay writes, “Doors are fascinating objects, rich with symbolism.” Here, he shows hundreds of them, examining through film how the simple act of moving through a threshold, multiplied endlessly, creates a profoundly new reading of what that threshold signifies.
    On paper, this may sound like an extremely jarring experience. But Marclay—a visual artist, composer, and DJ whose previous works such as The Clock (2010) involved similar mega-montages of disparate film clips—has a sensitive touch. The sequences feel incredibly smooth, the montage carefully constructed to mimic continuity as closely as possible. This is even more impressive when one imagines the constraints that a door’s movement imposes; it must open and close in a certain direction, with particular types of hinges or means of swinging. It makes the seamlessness of the film all the more fascinating to dissect. When a tiny wooden doorframe cuts to a large double steel door, my brain had no issue at all registering a sense of continued motion through the frame—a form of cinematic magic.
    Christian Marclay, Doors (still), 2022. Single-channel video projection (color and black-and-white; 55:00 minutes on continuous loop).
    Watching the clips, I found no discernible metanarrative—simply movement through doors. Nevertheless, Marclay is a master of controlling tone. Though the relentlessness of watching the loops does create an overall feeling of tension that the film is clearly playing on, there are often moments of levity that interrupt, giving visitors a chance to breathe. The pacing, too, swings from a person rushing in and out to a slow stroll between doors in a corridor. It leaves one musing on just how ubiquitous this simple action is, and how mutable these simple acts of pulling a door and stepping inside can be. Sometimes mundane, sometimes thrilling, sometimes in anticipation, sometimes in search—Doors invites us to reflect on our own interaction with these objects, and with the very act of stepping through a doorframe.

    Much of the experience rests on the soundscape and music, which is equally—if not more heavily—important in creating the transition across clips. Marclay’s previous work leaned heavily on his interest in aural media; this added dimension only enriches Doors and elevates it beyond a formal visual study of clips that match each other. The film bleeds music from one scene to another, sometimes prematurely, to make believable the movement of one character across multiple movies. This overlap of sounds is essentially an echo of the space we left behind and are entering into. We as the audience almost believe—even if just for a second—that the transition is real.
    The effect is powerful and calls to mind several references. No doubt Doors owes some degree of inspiration to the lineage of surrealist art, perhaps in the work of Magritte or Duchamp. Those steeped in architecture may think of Bernard Tschumi’s Manhattan Transcripts, whose transcriptions of events, spaces, and movements similarly both shatter and call attention to simple spatial sequences. One may also be reminded of the work of the Situationist International, particularly the psychogeography of Guy Debord. I confess that my first thought was the (in my view) equally famous door-chase scene in Monsters, Inc. But regardless of what corollaries one may conjure, Doors has a wholly unique feel. It is simple and singular in constructing its webbed world.
    Installation view, Christian Marclay: Doors, the Institute of Contemporary Art/Boston, 2025. (Mel Taing)

    But what exactly are we to take away from this world? In an interview with Artforum, Marclay declares, “I’m building in people’s minds an architecture in which to get lost.” The film evokes a certain act of labyrinthine mapping—or perhaps a mode of perpetual resetting. I began to imagine this almost as a non-Euclidean enfilade of sorts, where each room invites you to quickly grasp a new environment and then, just as quickly, anticipate what may lie in the next. With the understanding that you can’t backtrack, and the unpredictability of the next door taking you anywhere, the film holds you in total suspense. The production of new spaces and new architecture is activated all at once in the moment someone steps into a new doorway.

    All of this is without even mentioning the chosen films themselves. There is a degree to which the pop-culture element of Marclay’s work makes certain moments click—I can’t help but laugh as I watch Adam Sandler in Punch-Drunk Love exit a door and emerge as Bette Davis in All About Eve. But to a degree, I also see the references as secondary, and certainly unneeded to understand the visceral experience Marclay crafts. It helps that, aside from a couple of jarring character movements or one-off spoken jokes, the movement is repetitive and universal.
    Doors runs on a continuous loop. I sat watching for just under an hour before convincing myself that I would never find any appropriate or correct time to leave. Instead, I could sit endlessly and reflect on each character movement, each new reveal of a room. Is the door the most important architectural element in creating space? Marclay makes a strong case for it with this piece.
    Harish Krishnamoorthy is an architectural and urban designer based in Cambridge, Massachusetts, and Bangalore, India. He is an editor at PAIRS.
    WWW.ARCHPAPER.COM
  • Microsoft and Google pursue differing AI agent approaches in M365 and Workspace

    Microsoft and Google are taking distinctive approaches with AI agents in their productivity suites, and enterprises need to account for the differences when formulating digital labor strategies, analysts said.

    In recent months, both companies have announced a dizzying array of new agents aimed at extracting value from corporate documents and maximizing efficiency. The tech giants have dropped numerous hints about where they’re headed with AI agents in their respective office suites, Microsoft 365 and Google Workspace.

    Microsoft is reshaping its Copilot assistant as a series of tools to create, tap into, and act on insights at individual and organizational levels. The Microsoft 365 roadmap lists hundreds of specialized AI tools under development to automate work for functions such as HR and accounting. The company is also developing smaller AI models to carry out specific functions.

    Google is going the opposite way, with its large language model Gemini at the heart of Workspace. Google offers tools that include Gems, which let workers create simple custom agents to automate tasks such as customer service, and Agentspace in Google Cloud, for building more complex custom agents for collaboration and workflow management. At the recent Google I/O developer conference, the company added real-time speech translation to Google Meet.

    “For both, the goal is to bring usable and practical productivity and efficiency capabilities to work tools,” said Liz Miller, vice president and principal analyst at Constellation Research.

    But the differing AI agent strategies are heavily rooted in each company’s philosophical approaches to productivity. Although Microsoft has long encouraged customers to move from its traditional “perpetual-license” Office suite to the Microsoft 365 subscription-based model, M365 notably retains the familiar desktop apps. Google Workspace, on the other hand, has always been cloud-based.

    Microsoft users are typically a bit more tethered to traditional enterprise work styles, while Google has always been the “cloud-first darling for smaller organizations that still crave real-time collaboration,” Miller said.

    When it comes to the generative AI models being integrated into the two office suites, “Google’s Gemini models are beating out the models being deployed by Microsoft,” Miller said. “But as Microsoft expands its model ‘inventory’ in use across M365, this could change.”

    Microsoft has an advantage, as many desktop users live in Outlook or Word. The intelligence Copilot can bring from CRM software is readily available, while that integration is more complex in the cloud-native Google Workspace.

    “Microsoft still has an edge in a foundational understanding of work and the capacity to extend Copilot connections across applications as expansive as the Office suite through to Dynamics, giving AI a greater opportunity to be present in the spaces and presentation layers where workers enjoy working,” Miller said.

    Microsoft’s Copilot Agents and Google’s Gems and Agentspace are in their early stages, but there have been positive developments, said J.P. Gownder, a vice president and principal analyst on Forrester’s Future of Work team.

    Microsoft recently adopted Google’s A2A protocol, which makes it easier for users of both productivity suites to collaborate and unlock value from stagnant data sitting on other platforms. “That should be a win for interoperability,” Gownder said.
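    The interoperability point rests on A2A’s discovery model: an agent publishes a machine-readable “agent card” describing what it can do, and a client reads the card to decide what work to delegate. The sketch below is illustrative only—the card fields are simplified from the public A2A spec, and the endpoints and skill IDs are hypothetical, not real Microsoft or Google APIs.

    ```python
    # Illustrative A2A-style agent discovery (simplified; hypothetical
    # endpoints and skill IDs, not actual Copilot or Gemini interfaces).

    copilot_card = {
        "name": "M365 Copilot Agent",
        "url": "https://example.com/copilot/a2a",  # hypothetical endpoint
        "skills": [{"id": "summarize-doc"}, {"id": "draft-email"}],
    }

    workspace_card = {
        "name": "Workspace Gem",
        "url": "https://example.com/gem/a2a",      # hypothetical endpoint
        "skills": [{"id": "summarize-doc"}, {"id": "schedule-meeting"}],
    }

    def shared_skills(card_a: dict, card_b: dict) -> set:
        """Skills both agents advertise -- tasks either suite could delegate
        to the other once they speak the same protocol."""
        ids = lambda card: {s["id"] for s in card["skills"]}
        return ids(card_a) & ids(card_b)

    print(shared_skills(copilot_card, workspace_card))  # {'summarize-doc'}
    ```

    In practice, discovery is the easy half; the harder half is agreeing on task and message formats, which is exactly what a shared protocol is meant to settle.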

    But most companies that are Microsoft shops have years or decades of digital assets that hold them back from considering Google, he said. For example, Excel macros, pivot tables, and customizations cannot be easily or automatically migrated to Google Sheets, he said.

    “As early as this market is, I don’t think it’s fair to rank either player — Microsoft or Google — as being the leader; both of them are constructing new ecosystems to support the growth of agentic AI,” Gownder said.

    Most Microsoft Office users have moved to M365, but AI is helping Google make inroads into larger organizations, especially among enterprises that are newer and less oriented toward legacy Microsoft products, said Jack Gold, principal analyst at J. Gold Associates.

    Technologies like A2A blur the line between on-premises and cloud productivity. As a result, “Google Workspace is no longer perceived as inferior, as it had been in the past,” Gold said.

    And for budget-constrained enterprises, the value of AI agent features is not the only consideration. “There is also the cost equation at work here, as Google seems to have a much more transparent cost structure than Microsoft with all of its user classes and discounts,” Gold said.

    Microsoft does not include Copilot in its M365 subscriptions, which vary in price depending on the type of customer. The Copilot business subscriptions range from $30 per user per month for M365 Copilot to $200 per month for 25,000 messages for Copilot Studio, which is also available under a pay-as-you-go model. Google has flat subscription pricing for Workspace, starting at $14 per user per month for business plans with Gemini included.
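    The list prices cited in the article make the cost gap easy to put in concrete terms. A back-of-the-envelope comparison, using a hypothetical 500-seat organization and ignoring the discounts and user classes the analysts mention:

    ```python
    # Rough annual-cost comparison using the per-seat list prices cited
    # in the article: $30/user/month for the M365 Copilot add-on vs.
    # $14/user/month for a Workspace business plan with Gemini included.
    # The 500-seat org is hypothetical; real quotes vary with discounts.

    def annual_cost(users: int, per_user_per_month: float) -> float:
        """Simple seats x rate x 12 months; no tiering or discounts."""
        return users * per_user_per_month * 12

    users = 500
    copilot = annual_cost(users, 30)    # M365 Copilot add-on
    workspace = annual_cost(users, 14)  # Workspace w/ Gemini included

    print(f"M365 Copilot: ${copilot:,.0f}/yr")   # $180,000/yr
    print(f"Workspace:    ${workspace:,.0f}/yr") # $84,000/yr
    ```

    Note the comparison is not apples-to-apples: the Copilot figure is an add-on over an existing M365 subscription, while the Workspace figure is a full plan, which only widens the gap in practice.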
    WWW.COMPUTERWORLD.COM
    Microsoft and Google pursue differing AI agent approaches in M365 and Workspace
    Microsoft and Google are taking distinctive approaches with AI agents in their productivity suites, and enterprises need to account for the differences when formulating digital labor strategies, analysts said. In recent months, both companies have announced a dizzying array of new agents aimed at extracting value from corporate documents and maximizing efficiency. The tech giants have dropped numerous hints about where they’re headed with AI agents in their respective office suites, Microsoft 365 and Google Workspace.
    Microsoft is reshaping its Copilot assistant as a series of tools to create, tap into, and act on insights at individual and organizational levels. The Microsoft 365 roadmap lists hundreds of specialized AI tools under development to automate work for functions such as HR and accounting. The company is also developing smaller AI models to carry out specific functions.
    Google is going the opposite way, with its large language model Gemini at the heart of Workspace. Google offers tools that include Gems, which let workers create simple custom agents that automate tasks such as customer service, and Agentspace in Google Cloud for building more complex custom agents for collaboration and workflow management. At the recent Google I/O developer conference, the company added real-time speech translation to Google Meet.
    “For both, the goal is to bring usable and practical productivity and efficiency capabilities to work tools,” said Liz Miller, vice president and principal analyst at Constellation Research. But the differing AI agent strategies are heavily rooted in each company’s philosophical approach to productivity. Although Microsoft has long encouraged customers to move from its traditional perpetual-license Office suite to the subscription-based Microsoft 365 model, M365 notably retains the familiar desktop apps. Google Workspace, on the other hand, has always been cloud-based.
    Microsoft users are typically a bit more tethered to traditional enterprise work styles, while Google has always been the “cloud-first darling for smaller organizations that still crave real-time collaboration,” Miller said. When it comes to the generative AI models being integrated into the two office suites, “Google’s Gemini models are beating out the models being deployed by Microsoft,” Miller said. “But as Microsoft expands its model ‘inventory’ in use across M365, this could change.”
    Microsoft has an advantage in that many desktop users live in Outlook or Word. The intelligence Copilot can bring from CRM software is readily available, while that integration is more complex in the cloud-native Google Workspace. “Microsoft still has an edge in a foundational understanding of work and the capacity to extend Copilot connections across applications as expansive as the Office suite through to Dynamics, giving AI a greater opportunity to be present in the spaces and presentation layers where workers enjoy working,” Miller said.
    Microsoft’s Copilot Agents and Google’s Gems and Agentspace are in their early stages, but there have been positive developments, said J.P. Gownder, a vice president and principal analyst on Forrester’s Future of Work team. Microsoft recently adopted Google’s A2A protocol, which makes it easier for users of both productivity suites to collaborate and unlock value from stagnant data sitting on other platforms. “That should be a win for interoperability,” Gownder said. But most companies that are Microsoft shops have years or decades of digital assets that hold them back from considering Google, he said. For example, Excel macros, pivot tables, and customizations cannot be easily or automatically migrated to Google Sheets. “As early as this market is, I don’t think it’s fair to rank either player — Microsoft or Google — as being the leader; both of them are constructing new ecosystems to support the growth of agentic AI,” Gownder said.
    Most Microsoft Office users have moved to M365, but AI is helping Google make inroads into larger organizations, especially among enterprises that are newer and less oriented toward legacy Microsoft products, said Jack Gold, principal analyst at J. Gold Associates. Technologies like A2A blur the line between on-premises and cloud productivity. As a result, “Google Workspace is no longer perceived as inferior, as it had been in the past,” Gold said.
    And for budget-constrained enterprises, the value of AI agent features is not the only consideration. “There is also the cost equation at work here, as Google seems to have a much more transparent cost structure than Microsoft with all of its user classes and discounts,” Gold said. Microsoft does not include Copilot in its M365 subscriptions, which vary in price depending on the type of customer. Copilot business subscriptions range from $30 per user per month for M365 Copilot to $200 per month for 25,000 messages in Copilot Studio, which is also available under a pay-as-you-go model. Google has flat subscription pricing for Workspace, starting at $14 per user per month for business plans with Gemini included.
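    To make the cost contrast concrete, here is a rough back-of-the-envelope sketch using only the list prices quoted above. The seat count is hypothetical, the comparison is not apples-to-apples (Copilot is an add-on on top of an M365 subscription, while Gemini is bundled into the Workspace price), and real enterprise deals involve discounts and user classes:

    ```python
    # Rough annual AI licensing comparison using the list prices quoted
    # in the article. Seat count is hypothetical; actual enterprise
    # pricing varies by tier, user class, and negotiated discounts.

    SEATS = 500  # hypothetical organization size

    # Microsoft: M365 Copilot is a $30/user/month add-on on top of the
    # base M365 subscription (base suite cost not included here).
    copilot_per_seat_month = 30.00
    copilot_annual = SEATS * copilot_per_seat_month * 12

    # Google: Gemini is included in Workspace business plans starting
    # at $14/user/month (this price covers the whole suite).
    workspace_per_seat_month = 14.00
    workspace_annual = SEATS * workspace_per_seat_month * 12

    print(f"M365 Copilot add-on, {SEATS} seats: ${copilot_annual:,.0f}/yr")
    print(f"Workspace with Gemini, {SEATS} seats: ${workspace_annual:,.0f}/yr")
    ```

    Even as a sketch, it illustrates Gold's point: the Workspace figure is a flat, all-in number, while the Microsoft figure is only one line item in a larger, tiered bill.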
  • How Dell Technologies Is Building the Engines of AI Factories With NVIDIA Blackwell

    Over a century ago, Henry Ford pioneered the mass production of cars and engines to provide transportation at an affordable price. Today, the technology industry manufactures the engines for a new kind of factory — those that produce intelligence.
    As companies and countries increasingly focus on AI, moving from experimentation to implementation, the demand for AI technologies continues to grow exponentially. Leading system builders are racing to ramp up production of these servers – the engines of AI factories – to meet the world’s exploding demand for intelligence and growth.
    Dell Technologies is a leader in this renaissance. Dell and NVIDIA have partnered for decades and continue to push the pace of innovation. In its last earnings call, Dell projected that its AI server business will grow at least $15 billion this year.
    “We’re on a mission to bring AI to millions of customers around the world,” said Michael Dell, chairman and chief executive officer, Dell Technologies, in a recent announcement at Dell Technologies World. “With the Dell AI Factory with NVIDIA, enterprises can manage the entire AI lifecycle across use cases, from training to deployment, at any scale.”
    The latest Dell AI servers, powered by NVIDIA Blackwell, offer up to 50x more AI reasoning inference output and 5x improvement in throughput compared with the Hopper platform. Customers use them to generate tokens for new AI applications that will help solve some of the world’s biggest challenges, from disease prevention to advanced manufacturing.
    Dell servers with NVIDIA GB200 are shipping at scale to a variety of customers, including CoreWeave’s new NVIDIA GB200 NVL72 system. One of Dell’s U.S. factories can ship thousands of NVIDIA Blackwell GPUs to customers in a week. It’s why Dell was chosen by one of its largest customers to deploy 100,000 NVIDIA GPUs in just six weeks.
    But how is an AI server made? We visited a facility to find out.

    Building the Engines of Intelligence
    We visited one of Dell’s U.S. facilities that builds the most compute-dense NVIDIA Blackwell generation servers ever manufactured.
    Modern automobile engines have more than 200 major components and take three to seven years to roll out to market. NVIDIA GB200 NVL72 servers have 1.2 million parts and were designed just a year ago.
    Amid a forest of racks, grouped by phases of assembly, Dell employees quickly slide in GB200 trays and NVLink Switch networking trays, then test the systems. The company said its ability to engineer the compute, network and storage assembly under one roof and fine-tune, deploy and integrate complete systems is a powerful differentiator. Speed also matters: the Dell team can build, test, ship – test again on site at a customer location – and turn over a rack in 24 hours.
    The servers are destined for state-of-the-art data centers that require a dizzying quantity of cables, pipes and hoses to operate. One data center can have 27,000 miles of network cable — enough to wrap around the Earth. It can pack about six miles of water pipes, 77 miles of rubber hoses, and is capable of circulating 100,000 gallons of water per minute for cooling.
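    The "wrap around the Earth" claim holds up arithmetically; a quick sanity check against the Earth's equatorial circumference (roughly 24,901 miles):

    ```python
    # Sanity-check the cabling figure quoted above against the Earth's
    # equatorial circumference (~24,901 miles).
    cable_miles = 27_000
    earth_circumference_miles = 24_901

    wraps = cable_miles / earth_circumference_miles
    print(f"{cable_miles:,} miles of cable is about {wraps:.2f} "
          f"trips around the equator")
    ```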
    With new AI factories being announced each week – the European Union has plans for seven AI factories, while India, Japan, Saudi Arabia, the UAE and Norway are also developing them – the demand for these engines of intelligence will only grow in the months and years ahead.
    BLOGS.NVIDIA.COM
  • NCSU CIO Marc Hoit Talks Fed Funding Limbo, AI’s Role in Shrinking Talent Pool

    Shane Snider, Senior Writer, InformationWeek | May 30, 2025 | 4 Min Read | Image: Paul Hamilton via Alamy Stock
    When Marc Hoit came to North Carolina State University (NCSU) in 2008 to take on the role of vice chancellor for information technology and chief information officer, the school and its IT operation looked much different. Hoit had left his role as interim CIO at the University of Florida, where he was also a professional in structural engineering. At the time, NC State had an IT staff of about 210 and an annual budget of $34 million. Fast forward to 2025, and Hoit now oversees a bigger department with a budget of $72 million. NCSU has a total of 39,603 students.
    Aside from taking care of the university’s massive IT needs, Hoit’s department must also lend a hand to research initiatives and academic computing needs. Before Hoit’s arrival, those functions were handled by separate departments. The administration decided to merge the functions under one CIO. “They wanted a lot of the IT to be centralized,” Hoit says in a live interview with InformationWeek. “We had a lot of pieces and had to decide how much we could centralize … It balanced out nicely.”
    That unified approach would prove to be beneficial, especially as technology was advancing at an unprecedented pace. While many find the pace of innovation dizzying, Hoit has a different viewpoint. “Really, the pace of the fundamentals has not rapidly changed,” he says. “Networking is networking. You have to ask: Do I need fiber instead of copper? Do I need bigger servers? Do I need to change routing protocols? Those are the operational pieces that make it work. You have to change, but the high-level strategy stays the same. We need to register students … we need to make that easier. We need to give them classes. We need to give them grades … those needs are consistent.”
    The Trump Effect
    The Trump administration’s rapid cost-cutting measures hit research universities especially hard. Just this week, the attorneys general of 16 states filed a lawsuit to block the administration from making massive federal funding cuts for research. And earlier this month, 13 US universities sued to block Trump’s cuts to research funding by the National Science Foundation. Cuts from the National Institutes of Health (NIH) and the US Department of Energy also sought to cap funds for research.
    Hoit says people may want to see less government spending but may not realize that the university already picks up a substantial share of the costs for those research projects. “We’ll have to adjust and figure out what to do, and that may mean that grants that paid for some expensive equipment … the university will have to pick up those on its own. And that might be difficult to accomplish.”
    Hoit says NCSU is in a somewhat better position because its research funding is more spread out than at some public institutions. “If you were a big NIH grant recipient with a medical school and a lot of money coming from grants, you probably got hit harder. In our case, we have a very interesting portfolio with a broader mix of funding. And we have a lot of industry funding and partnerships.”
    The Trump administration’s aggressive tariff policies have also impacted universities, which must attempt to budget for hardware needs without knowing the ultimate impact of the trade war. On Wednesday, the US Court of International Trade halted the administration's sweeping tariffs on goods imported from foreign nations. But legal experts warn that the block may be temporary, as the administration expects to appeal and use other potential workarounds.
    Hoit says the university learned lessons from the first Trump administration. “The writing was kind of on the wall then,” he says. “But a lot of the vendors are trying their best to manufacture in the US or to manufacture in lower-tariff countries and to move out of the problematic ones.”
    He said the COVID-19 pandemic was also a learning opportunity for dealing with massive supply chain disruptions. “[The pandemic] taught us that the supply chain that we relied on to be super-fast, integrated and efficient … you can’t really rely on that.”
    Shrinking Talent Pool and an AI Solution
    According to the National Center for Education Statistics (NCES), colleges and universities saw a 15% drop in enrollment between 2010 and 2021. NCSU has largely bucked that trend because of explosive growth in the Research Triangle Park area of the state. But the drop in higher education ambition has created another problem for IT leaders in general: a shrinking talent pool. That’s true at the university level as well. AI could help bridge the talent gap but could cause interest to dwindle in certain tech careers.
    “I keep telling my civil engineering peers that the world is changing,” Hoit says. “If you can write code that gives you the formulas and process steps in order to build a bridge, why do I need an engineer? Why don’t I just feed that to AI and let it build it? When I started teaching, I would tell people, go be a civil engineer … you’ll have a career for life. In the last three years, I’ve started thinking, ‘Hmm … how many civil engineers are we really going to need?’”
    About the Author
    Shane Snider, Senior Writer, InformationWeek
    Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
    WWW.INFORMATIONWEEK.COM
  • Google’s ‘world-model’ bet: building the AI operating layer before Microsoft captures the UI


    After three hours at Google’s I/O 2025 event last week in Silicon Valley, it became increasingly clear: Google is rallying its formidable AI efforts – prominently branded under the Gemini name but encompassing a diverse range of underlying model architectures and research – with laser focus. It is releasing a slew of innovations and technologies around it, then integrating them into products at a breathtaking pace.
    Beyond headline-grabbing features, Google laid out a bolder ambition: an operating system for the AI age – not the disk-booting kind, but a logic layer every app could tap – a “world model” meant to power a universal assistant that understands our physical surroundings, and reasons and acts on our behalf. It’s a strategic offensive that many observers may have missed amid the bamboozlement of features. 
    On one hand, it’s a high-stakes strategy to leapfrog entrenched competitors. But on the other, as Google pours billions into this moonshot, a critical question looms: Can Google’s brilliance in AI research and technology translate into products faster than its rivals, whose edge has its own brilliance: packaging AI into immediately accessible and commercially potent products? Can Google out-maneuver a laser-focused Microsoft, fend off OpenAI’s vertical hardware dreams, and, crucially, keep its own search empire alive in the disruptive currents of AI?
    Google is already pursuing this future at dizzying scale. Pichai told I/O that the company now processes 480 trillion tokens a month – 50x more than a year ago – and almost 5x more than the 100 trillion tokens a month that Microsoft’s Satya Nadella said his company processed. This momentum is also reflected in developer adoption, with Pichai saying that over 7 million developers are now building with the Gemini API, a five-fold increase since the last I/O, while Gemini usage on Vertex AI has surged more than 40 times. And unit costs keep falling as Gemini 2.5 models and the Ironwood TPU squeeze more performance from each watt and dollar. AI Mode and AI Overviews are the live test beds where Google tunes latency, quality, and future ad formats as it shifts search into an AI-first era.
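    Those keynote figures are internally consistent; a quick cross-check of the numbers as reported (none independently verified):

    ```python
    # Cross-check the token-volume figures quoted from the I/O keynote.
    google_tokens_per_month = 480e12      # 480 trillion tokens/month today
    microsoft_tokens_per_month = 100e12   # 100 trillion, per Nadella

    # "50x more than a year ago" implies roughly this volume last year:
    google_year_ago = google_tokens_per_month / 50

    ratio = google_tokens_per_month / microsoft_tokens_per_month
    print(f"Google a year ago: ~{google_year_ago / 1e12:.1f}T tokens/month")
    print(f"Google vs Microsoft today: {ratio:.1f}x")
    ```

    The 480T-to-100T ratio works out to 4.8x, matching the article's "almost 5x" characterization.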
    Source: Google I/O 2025
    Google’s doubling down on what it calls “a world model” – an AI it aims to imbue with a deep understanding of real-world dynamics – and with it a vision for a universal assistant, one powered by Google and not other companies, creates another big tension: How much control does Google want over this all-knowing assistant, built upon its crown jewel of search? Does it primarily want to leverage it first for itself, to save its $200 billion search business, which depends on owning the starting point and avoiding disruption by OpenAI? Or will Google fully open its foundational AI for other developers and companies to leverage – a segment representing a significant portion of its business, engaging over 20 million developers, more than any other company?
    It has sometimes stopped short of a radical focus on building these core products for others with the same clarity as its nemesis, Microsoft. That’s because it keeps a lot of core functionality reserved for its cherished search engine. That said, Google is making significant efforts to provide developer access wherever possible. A telling example is Project Mariner. Google could have embedded the agentic browser-automation features directly inside Chrome, giving consumers an immediate showcase under Google’s full control. However, Google followed up by saying Mariner’s computer-use capabilities would be released via the Gemini API more broadly “this summer.” This signals that external access is coming for any rival that wants comparable automation. In fact, Google said partners Automation Anywhere and UiPath were already building with it.
    Google’s grand design: the ‘world model’ and universal assistant
    The clearest articulation of Google’s grand design came from Demis Hassabis, CEO of Google DeepMind, during the I/O keynote. He stated Google continued to “double down” on efforts toward artificial general intelligence (AGI). While Gemini was already “the best multimodal model,” Hassabis explained, Google is working hard to “extend it to become what we call a world model. That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does.”
    This concept of “a world model,” as articulated by Hassabis, is about creating AI that learns the underlying principles of how the world works – simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. An early and significant indicator of this direction – perhaps easily overlooked by those not steeped in foundational AI research – is Google DeepMind’s work on models like Genie 2. This research shows how to generate interactive, two-dimensional game environments and playable worlds from varied prompts like images or text. It offers a glimpse of an AI that can simulate and understand dynamic systems.
    Hassabis has developed this concept of a “world model” and its manifestation as a “universal AI assistant” in several talks since late 2024, and it was presented most comprehensively at I/O – with CEO Sundar Pichai and Gemini lead Josh Woodward echoing the vision on the same stage. (While other AI leaders, including Microsoft’s Satya Nadella, OpenAI’s Sam Altman, and xAI’s Elon Musk, have all discussed “world models,” Google most comprehensively ties this foundational concept to its near-term strategic thrust: the universal AI assistant.) Speaking about the Gemini app, Google’s equivalent to OpenAI’s ChatGPT, Hassabis declared, “This is our ultimate vision for the Gemini app, to transform it into a universal AI assistant, an AI that’s personal, proactive, and powerful, and one of our key milestones on the road to AGI.”
    This vision was made tangible through I/O demonstrations. Google demoed a new app called Flow – a drag-and-drop filmmaking canvas that preserves character and camera consistency – that leverages Veo 3, the new model that layers physics-aware video and native audio. To Hassabis, that pairing is early proof that “world-model understanding is already leaking into creative tooling.” For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that “AI systems will need world models to operate effectively.”
    CEO Sundar Pichai reinforced this, citing Project Astra, which “explores the future capabilities of a universal AI assistant that can understand the world around you.” These Astra capabilities, like live video understanding and screen sharing, are now integrated into Gemini Live. Josh Woodward, who leads Google Labs and the Gemini app, detailed the app’s goal to be the “most personal, proactive, and powerful AI assistant.” He showcased how “personal context” (connecting search history, and soon Gmail and Calendar) enables Gemini to anticipate needs, like providing personalized exam quizzes or custom explainer videos using analogies a user understands (e.g., thermodynamics explained via cycling). Google also quietly previewed Gemini Diffusion, signalling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. Google is stuffing these capabilities into a crowded toolkit: AI Studio and Firebase Studio are core starting points for developers, while Vertex AI remains the enterprise on-ramp.
    The strategic stakes: defending search, courting developers amid an AI arms race
    This colossal undertaking is driven by Google’s massive R&D capabilities, but also by strategic necessity. In the enterprise software landscape, Microsoft has a formidable hold, a Fortune 500 Chief AI Officer told VentureBeat, reassuring customers with its full commitment to its Copilot tooling. The executive requested anonymity because of the sensitivity of commenting on the intense competition between the AI cloud providers. Microsoft’s dominance in Office 365 productivity applications will be exceptionally hard to dislodge through direct feature-for-feature competition, the executive said.
    Google’s path to potential leadership – its “end-run” around Microsoft’s enterprise hold – lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly “universal AI assistant” powered by a comprehensive world model, it could become the new indispensable layer – the effective operating system – for how users and businesses interact with technology. As Pichai mused with podcaster David Friedberg shortly before I/O, that means awareness of physical surroundings. And so AR glasses, Pichai said, “maybe that’s the next leap…that’s what’s exciting for me.”
    But this AI offensive is a race against multiple clocks. First, the $200 billion search-ads engine that funds Google must be protected even as it is reinvented. The U.S. Department of Justice’s monopolization ruling still hangs over Google – divestiture of Chrome has been floated as the leading remedy. And in Europe, the Digital Markets Act as well as emerging copyright-liability lawsuits could hem in how freely Gemini crawls or displays the open web.
    Finally, execution speed matters. Google has been criticized for moving slowly in recent years. But over the past 12 months, it became clear that Google had been working patiently on multiple fronts, and that work has paid off with faster growth than rivals’. The challenge of navigating this AI transition at massive scale is immense, as evidenced by the recent Bloomberg report detailing how even a tech titan like Apple is grappling with significant setbacks and internal reorganizations in its AI initiatives. This industry-wide difficulty underscores the high stakes for all players. While Pichai lacks the showmanship of some rivals, the long list of enterprise customer testimonials Google paraded at its Cloud Next event last month – about actual AI deployments – underscores a leader who lets sustained product cadence and enterprise wins speak for themselves.
    At the same time, focused competitors advance. Microsoft’s enterprise march continues. Its Build conference showcased Microsoft 365 Copilot as the “UI for AI,” Azure AI Foundry as a “production line for intelligence,” and Copilot Studio for sophisticated agent-building, with impressive low-code workflow demos. Nadella’s “open agentic web” vision offers businesses a pragmatic AI adoption path, allowing selective integration of AI tech – whether it be Google’s or another competitor’s – within a Microsoft-centric framework.
    OpenAI, meanwhile, is way out ahead in consumer reach with its ChatGPT product, with the company recently citing 600 million monthly users and 800 million weekly users, compared with the Gemini app’s 400 million monthly users. And in December, OpenAI launched a full-blown search offering, and is reportedly planning an ad offering – posing what could be an existential threat to Google’s search model. Beyond making leading models, OpenAI is making a provocative vertical play with its reported $6.5 billion acquisition of Jony Ive’s IO, pledging to move “beyond these legacy products” – and hinting at a hardware product that would attempt to disrupt the AI era just as the iPhone disrupted mobile. While any of this may disrupt Google’s next-gen personal computing ambitions, it’s also true that OpenAI’s ability to build a deep moat like Apple did with the iPhone may be limited in an AI era increasingly defined by open protocols and easier model interchangeability.
    Internally, Google navigates its vast ecosystem. As Jeanine Banks, Google’s VP of Developer X, told VentureBeat, serving Google’s diverse global developer community means “it’s not a one size fits all,” leading to a rich but sometimes complex array of tools – AI Studio, Vertex AI, Firebase Studio, numerous APIs.
    Meanwhile, Amazon is pressing from another flank: Bedrock already hosts Anthropic, Meta, Mistral and Cohere models, giving AWS customers a pragmatic, multi-model default.
    For enterprise decision-makers: navigating Google’s ‘world model’ future
    Google’s audacious bid to build the foundational intelligence for the AI age presents enterprise leaders with compelling opportunities and critical considerations:

    Move now or retrofit later: Falling a release cycle behind could force costly rewrites when assistant-first interfaces become default.
    Tap into revolutionary potential: For organizations seeking to embrace the most powerful AI, leveraging Google’s “world model” research, multimodal capabilities, and the AGI trajectory promised by Google offers a path to potentially significant innovation.
    Prepare for a new interaction paradigm: Success for Google’s “universal assistant” would mean a primary new interface for services and data. Enterprises should strategize for integration via APIs and agentic frameworks for context-aware delivery.
    Factor in the long game: Aligning with Google’s vision is a long-term commitment. The full “world model” and AGI are potentially distant horizons. Decision-makers must balance this with immediate needs and platform complexities.
    Contrast with focused alternatives: Pragmatic solutions from Microsoft offer tangible enterprise productivity now. Disruptive hardware-AI from OpenAI/IO presents another distinct path. A diversified strategy, leveraging the best of each, often makes sense, especially with the increasingly open agentic web allowing for such flexibility.

    These complex choices and real-world AI adoption strategies will be central to discussions at VentureBeat’s Transform 2025 next month. The leading independent event brings enterprise technical decision-makers together with leaders from pioneering companies to share firsthand experiences on platform choices – Google, Microsoft, and beyond – and navigating AI deployment, all curated by the VentureBeat editorial team. With limited seating, early registration is encouraged.
    Google’s defining offensive: shaping the future or strategic overreach?
    Google’s I/O spectacle was a strong statement: Google signalled that it intends to architect and operate the foundational intelligence of the AI-driven future. Its pursuit of a “world model” and its AGI ambitions aim to redefine computing, outflank competitors, and secure its dominance. The audacity is compelling; the technological promise is immense.
    The big question is execution and timing. Can Google innovate and integrate its vast technologies into a cohesive, compelling experience faster than rivals solidify their positions? Can it do so while transforming search and navigating regulatory challenges? And can it do so while focused so broadly on both consumers and business – an agenda that is arguably much broader than that of its key competitors?
    The next few years will be pivotal. If Google delivers on its “world model” vision, it may usher in an era of personalized, ambient intelligence, effectively becoming the new operational layer for our digital lives. If not, its grand ambition could be a cautionary tale of a giant reaching for everything, only to find the future defined by others who aimed more specifically, more quickly. 

    Google’s ‘world-model’ bet: building the AI operating layer before Microsoft captures the UI
    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More After three hours at Google’s I/O 2025 event last week in Silicon Valley, it became increasingly clear: Google is rallying its formidable AI efforts – prominently branded under the Gemini name but encompassing a diverse range of underlying model architectures and research – with laser focus. It is releasing a slew of innovations and technologies around it, then integrating them into products at a breathtaking pace. Beyond headline-grabbing features, Google laid out a bolder ambition: an operating system for the AI age – not the disk-booting kind, but a logic layer every app could tap – a “world model” meant to power a universal assistant that understands our physical surroundings, and reasons and acts on our behalf. It’s a strategic offensive that many observers may have missed amid the bamboozlement of features.  On one hand, it’s a high-stakes strategy to leapfrog entrenched competitors. But on the other, as Google pours billions into this moonshot, a critical question looms: Can Google’s brilliance in AI research and technology translate into products faster than its rivals, whose edge has its own brilliance: packaging AI into immediately accessible and commercially potent products? Can Google out-maneuver a laser-focused Microsoft, fend off OpenAI’s vertical hardware dreams, and, crucially, keep its own search empire alive in the disruptive currents of AI? Google is already pursuing this future at dizzying scale. Pichai told I/O that the company now processes 480 trillion tokens a month – 50× more than a year ago – and almost 5x more than the 100 trillion tokens a month that Microsoft’s Satya Nadella said his company processed. 
This momentum is also reflected in developer adoption, with Pichai saying that over 7 million developers are now building with the Gemini API, representing a five-fold increase since the last I/O, while Gemini usage on Vertex AI has surged more than 40 times. And unit costs keep falling as Gemini 2.5 models and the Ironwood TPU squeeze more performance from each watt and dollar. AI Modeand AI Overviewsare the live test beds where Google tunes latency, quality, and future ad formats as it shifts search into an AI-first era. Source: Google I/O 20025 Google’s doubling-down on what it calls “a world model” – an AI it aims to imbue with a deep understanding of real-world dynamics – and with it a vision for a universal assistant – one powered by Google, and not other companies – creates another big tension: How much control does Google want over this all-knowing assistant, built upon its crown jewel of search? Does it primarily want to leverage it first for itself, to save its billion search business that depends on owning the starting point and avoiding disruption by OpenAI? Or will Google fully open its foundational AI for other developers and companies to leverage – another  segment representing a significant portion of its business, engaging over 20 million developers, more than any other company?  It has sometimes stopped short of a radical focus on building these core products for others with the same clarity as its nemesis, Microsoft. That’s because it keeps a lot of core functionality reserved for its cherished search engine. That said, Google is making significant efforts to provide developer access wherever possible. A telling example is Project Mariner. Google could have embedded the agentic browser-automation features directly inside Chrome, giving consumers an immediate showcase under Google’s full control. 
However, Google followed up by saying Mariner’s computer-use capabilities would be released via the Gemini API more broadly “this summer.” This signals that external access is coming for any rival that wants comparable automation. In fact, Google said partners Automation Anywhere and UiPath were already building with it. Google’s grand design: the ‘world model’ and universal assistant The clearest articulation of Google’s grand design came from Demis Hassabis, CEO of Google DeepMind, during the I/O keynote. He stated Google continued to “double down” on efforts towards artificial general intelligence. While Gemini was already “the best multimodal model,” Hassabis explained, Google is working hard to “extend it to become what we call a world model. That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does.”  This concept of ‘a world model,’ as articulated by Hassabis, is about creating AI that learns the underlying principles of how the world works – simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. An early, perhaps easily overlooked by those not steeped in foundational AI research, yet significant indicator of this direction is Google DeepMind’s work on models like Genie 2. This research shows how to generate interactive, two-dimensional game environments and playable worlds from varied prompts like images or text. It offers a glimpse at an AI that can simulate and understand dynamic systems. 
Hassabis has developed this concept of a “world model” and its manifestation as a “universal AI assistant” in several talks since late 2024, and it was presented at I/O most comprehensively – with CEO Sundar Pichai and Gemini lead Josh Woodward echoing the vision on the same stage.Speaking about the Gemini app, Google’s equivalent to OpenAI’s ChatGPT, Hassabis declared, “This is our ultimate vision for the Gemini app, to transform it into a universal AI assistant, an AI that’s personal, proactive, and powerful, and one of our key milestones on the road to AGI.”  This vision was made tangible through I/O demonstrations. Google demoed a new app called Flow – a drag-and-drop filmmaking canvas that preserves character and camera consistency – that leverages Veo 3, the new model that layers physics-aware video and native audio. To Hassabis, that pairing is early proof that ‘world-model understanding is already leaking into creative tooling.’ For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that ‘AI systems will need world models to operate effectively.” CEO Sundar Pichai reinforced this, citing Project Astra which “explores the future capabilities of a universal AI assistant that can understand the world around you.” These Astra capabilities, like live video understanding and screen sharing, are now integrated into Gemini Live. Josh Woodward, who leads Google Labs and the Gemini App, detailed the app’s goal to be the “most personal, proactive, and powerful AI assistant.” He showcased how “personal context”enables Gemini to anticipate needs, like providing personalized exam quizzes or custom explainer videos using analogies a user understandsform the core intelligence. Google also quietly previewed Gemini Diffusion, signalling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. 
Google is stuffing these capabilities into a crowded toolkit: AI Studio and Firebase Studio are core starting points for developers, while Vertex AI remains the enterprise on-ramp. The strategic stakes: defending search, courting developers amid an AI arms race This colossal undertaking is driven by Google’s massive R&D capabilities but also by strategic necessity. In the enterprise software landscape, Microsoft has a formidable hold, a Fortune 500 Chief AI Officer told VentureBeat, reassuring customers with its full commitment to tooling Copilot. The executive requested anonymity because of the sensitivity of commenting on the intense competition between the AI cloud providers. Microsoft’s dominance in Office 365 productivity applications will be exceptionally hard to dislodge through direct feature-for-feature competition, the executive said. Google’s path to potential leadership – its “end-run” around Microsoft’s enterprise hold – lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly “universal AI assistant” powered by a comprehensive world model, it could become the new indispensable layer – the effective operating system – for how users and businesses interact with technology. As Pichai mused with podcaster David Friedberg shortly before I/O, that means awareness of physical surroundings. And so AR glasses, Pichai said, “maybe that’s the next leap…that’s what’s exciting for me.” But this AI offensive is a race against multiple clocks. First, the billion search-ads engine that funds Google must be protected even as it is reinvented. The U.S. Department of Justice’s monopolization ruling still hangs over Google – divestiture of Chrome has been floated as the leading remedy. And in Europe, the Digital Markets Act as well as emerging copyright-liability lawsuits could hem in how freely Gemini crawls or displays the open web. Finally, execution speed matters. 
Google has been criticized for moving slowly in past years. But over the past 12 months, it became clear Google had been working patiently on multiple fronts, and that it has paid off with faster growth than rivals. The challenge of successfully navigating this AI transition at massive scale is immense, as evidenced by the recent Bloomberg report detailing how even a tech titan like Apple is grappling with significant setbacks and internal reorganizations in its AI initiatives. This industry-wide difficulty underscores the high stakes for all players. While Pichai lacks the showmanship of some rivals, the long list of enterprise customer testimonials Google paraded at its Cloud Next event last month – about actual AI deployments – underscores a leader who lets sustained product cadence and enterprise wins speak for themselves.  At the same time, focused competitors advance. Microsoft’s enterprise march continues. Its Build conference showcased Microsoft 365 Copilot as the “UI for AI,” Azure AI Foundry as a “production line for intelligence,” and Copilot Studio for sophisticated agent-building, with impressive low-code workflow demos. Nadella’s “open agentic web” visionoffers businesses a pragmatic AI adoption path, allowing selective integration of AI tech – whether it be Google’s or another competitor’s – within a Microsoft-centric framework. OpenAI, meanwhile, is way out ahead with the consumer reach of its ChatGPT product, with recent references by the company to having 600 million monthly users, and 800 million weekly users. This compares to the Gemini app’s 400 million monthly users. And in December, OpenAI launched a full-blown search offering, and is reportedly planning an ad offering – posing what could be an existential threat to Google’s search model. 
Beyond making leading models, OpenAI is making a provocative vertical play with its reported billion acquisition of Jony Ive’s IO, pledging to move “beyond these legacy products” – and hinting that it was launching a hardware product that would attempt to disrupt AI just like the iPhone disrupted mobile. While any of this may potentially disrupt Google’s next-gen personal computing ambitions, it’s also true that OpenAI’s ability to build a deep moat like Apple did with the iPhone may be limited in an AI era increasingly defined by open protocolsand easier model interchangeability. Internally, Google navigates its vast ecosystem. As Jeanine Banks, Google’s VP of Developer X, told VentureBeat serving Google’s diverse global developer community means “it’s not a one size fits all,” leading to a rich but sometimes complex array of tools – AI Studio, Vertex AI, Firebase Studio, numerous APIs. Meanwhile, Amazon is pressing from another flank: Bedrock already hosts Anthropic, Meta, Mistral and Cohere models, giving AWS customers a pragmatic, multi-model default. For enterprise decision-makers: navigating Google’s ‘world model’ future Google’s audacious bid to build the foundational intelligence for the AI age presents enterprise leaders with compelling opportunities and critical considerations: Move now or retrofit later: Falling a release cycle behind could force costly rewrites when assistant-first interfaces become default. Tap into revolutionary potential: For organizations seeking to embrace the most powerful AI, leveraging Google’s “world model” research, multimodal capabilities, and the AGI trajectory promised by Google offers a path to potentially significant innovation. Prepare for a new interaction paradigm: Success for Google’s “universal assistant” would mean a primary new interface for services and data. Enterprises should strategize for integration via APIs and agentic frameworks for context-aware delivery. 
Factor in the long game: Aligning with Google’s vision is a long-term commitment. The full “world model” and AGI are potentially distant horizons. Decision-makers must balance this with immediate needs and platform complexities. Contrast with focused alternatives: Pragmatic solutions from Microsoft offer tangible enterprise productivity now. Disruptive hardware-AI from OpenAI/IO presents another distinct path. A diversified strategy, leveraging the best of each, often makes sense, especially with the increasingly open agentic web allowing for such flexibility. These complex choices and real-world AI adoption strategies will be central to discussions at VentureBeat’s Transform 2025 next month. The leading independent event brings enterprise technical decision-makers together with leaders from pioneering companies to share firsthand experiences on platform choices – Google, Microsoft, and beyond – and navigating AI deployment, all curated by the VentureBeat editorial team. With limited seating, early registration is encouraged. Google’s defining offensive: shaping the future or strategic overreach? Google’s I/O spectacle was a strong statement: Google signalled that it intends to architect and operate the foundational intelligence of the AI-driven future. Its pursuit of a “world model” and its AGI ambitions aim to redefine computing, outflank competitors, and secure its dominance. The audacity is compelling; the technological promise is immense. The big question is execution and timing. Can Google innovate and integrate its vast technologies into a cohesive, compelling experience faster than rivals solidify their positions? Can it do so while transforming search and navigating regulatory challenges? And can it do so while focused so broadly on both consumers and business – an agenda that is arguably much broader than that of its key competitors? The next few years will be pivotal. 
If Google delivers on its “world model” vision, it may usher in an era of personalized, ambient intelligence, effectively becoming the new operational layer for our digital lives. If not, its grand ambition could be a cautionary tale of a giant reaching for everything, only to find the future defined by others who aimed more specifically, more quickly.  Daily insights on business use cases with VB Daily If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI. Read our Privacy Policy Thanks for subscribing. Check out more VB newsletters here. An error occured. #googles #worldmodel #bet #building #operating
    VENTUREBEAT.COM
    Google’s ‘world-model’ bet: building the AI operating layer before Microsoft captures the UI
    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More After three hours at Google’s I/O 2025 event last week in Silicon Valley, it became increasingly clear: Google is rallying its formidable AI efforts – prominently branded under the Gemini name but encompassing a diverse range of underlying model architectures and research – with laser focus. It is releasing a slew of innovations and technologies around it, then integrating them into products at a breathtaking pace. Beyond headline-grabbing features, Google laid out a bolder ambition: an operating system for the AI age – not the disk-booting kind, but a logic layer every app could tap – a “world model” meant to power a universal assistant that understands our physical surroundings, and reasons and acts on our behalf. It’s a strategic offensive that many observers may have missed amid the bamboozlement of features.  On one hand, it’s a high-stakes strategy to leapfrog entrenched competitors. But on the other, as Google pours billions into this moonshot, a critical question looms: Can Google’s brilliance in AI research and technology translate into products faster than its rivals, whose edge has its own brilliance: packaging AI into immediately accessible and commercially potent products? Can Google out-maneuver a laser-focused Microsoft, fend off OpenAI’s vertical hardware dreams, and, crucially, keep its own search empire alive in the disruptive currents of AI? Google is already pursuing this future at dizzying scale. Pichai told I/O that the company now processes 480 trillion tokens a month – 50× more than a year ago – and almost 5x more than the 100 trillion tokens a month that Microsoft’s Satya Nadella said his company processed. 
This momentum is also reflected in developer adoption, with Pichai saying that over 7 million developers are now building with the Gemini API, representing a five-fold increase since the last I/O, while Gemini usage on Vertex AI has surged more than 40 times. And unit costs keep falling as Gemini 2.5 models and the Ironwood TPU squeeze more performance from each watt and dollar. AI Mode (rolling out in the U.S.) and AI Overviews (already serving 1.5 billion users monthly) are the live test beds where Google tunes latency, quality, and future ad formats as it shifts search into an AI-first era. Source: Google I/O 20025 Google’s doubling-down on what it calls “a world model” – an AI it aims to imbue with a deep understanding of real-world dynamics – and with it a vision for a universal assistant – one powered by Google, and not other companies – creates another big tension: How much control does Google want over this all-knowing assistant, built upon its crown jewel of search? Does it primarily want to leverage it first for itself, to save its $200 billion search business that depends on owning the starting point and avoiding disruption by OpenAI? Or will Google fully open its foundational AI for other developers and companies to leverage – another  segment representing a significant portion of its business, engaging over 20 million developers, more than any other company?  It has sometimes stopped short of a radical focus on building these core products for others with the same clarity as its nemesis, Microsoft. That’s because it keeps a lot of core functionality reserved for its cherished search engine. That said, Google is making significant efforts to provide developer access wherever possible. A telling example is Project Mariner. Google could have embedded the agentic browser-automation features directly inside Chrome, giving consumers an immediate showcase under Google’s full control. 
However, Google followed up by saying Mariner’s computer-use capabilities would be released via the Gemini API more broadly “this summer.” This signals that external access is coming for any rival that wants comparable automation. In fact, Google said partners Automation Anywhere and UiPath were already building with it.

Google’s grand design: the ‘world model’ and universal assistant

The clearest articulation of Google’s grand design came from Demis Hassabis, CEO of Google DeepMind, during the I/O keynote. He said Google continued to “double down” on efforts toward artificial general intelligence (AGI). While Gemini was already “the best multimodal model,” Hassabis explained, Google is working hard to “extend it to become what we call a world model. That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does.” This concept of a ‘world model,’ as articulated by Hassabis, is about creating AI that learns the underlying principles of how the world works – simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. An early and significant indicator of this direction, though one easily overlooked by those not steeped in foundational AI research, is Google DeepMind’s work on models like Genie 2. This research shows how to generate interactive, two-dimensional game environments and playable worlds from varied prompts like images or text, offering a glimpse of an AI that can simulate and understand dynamic systems. Hassabis has developed this concept of a “world model” and its manifestation as a “universal AI assistant” in several talks since late 2024, and it was presented most comprehensively at I/O – with CEO Sundar Pichai and Gemini lead Josh Woodward echoing the vision on the same stage.
(While other AI leaders, including Microsoft’s Satya Nadella, OpenAI’s Sam Altman, and xAI’s Elon Musk, have all discussed ‘world models,’ Google ties this foundational concept most comprehensively to its near-term strategic thrust: the ‘universal AI assistant.’) Speaking about the Gemini app, Google’s equivalent to OpenAI’s ChatGPT, Hassabis declared, “This is our ultimate vision for the Gemini app, to transform it into a universal AI assistant, an AI that’s personal, proactive, and powerful, and one of our key milestones on the road to AGI.” This vision was made tangible through I/O demonstrations. Google demoed a new app called Flow – a drag-and-drop filmmaking canvas that preserves character and camera consistency – that leverages Veo 3, the new model that layers physics-aware video and native audio. To Hassabis, that pairing is early proof that “world-model understanding is already leaking into creative tooling.” For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that “AI systems will need world models to operate effectively.” CEO Sundar Pichai reinforced this, citing Project Astra, which “explores the future capabilities of a universal AI assistant that can understand the world around you.” These Astra capabilities, like live video understanding and screen sharing, are now integrated into Gemini Live. Josh Woodward, who leads Google Labs and the Gemini app, detailed the app’s goal to be the “most personal, proactive, and powerful AI assistant.” He showcased how “personal context” (connecting search history, and soon Gmail and Calendar) enables Gemini to anticipate needs, like providing personalized exam quizzes or custom explainer videos using analogies a user understands (e.g., thermodynamics explained via cycling).
This, Woodward emphasized, is “where we’re headed with Gemini,” enabled by the Gemini 2.5 Pro model allowing users to “think things into existence.” The new developer tools unveiled at I/O are building blocks. Gemini 2.5 Pro with “Deep Think” and the hyper-efficient 2.5 Flash (now with native audio and URL context grounding via the Gemini API) form the core intelligence. Google also quietly previewed Gemini Diffusion, signaling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. Google is packing these capabilities into a crowded toolkit: AI Studio and Firebase Studio are core starting points for developers, while Vertex AI remains the enterprise on-ramp.

The strategic stakes: defending search, courting developers amid an AI arms race

This colossal undertaking is driven by Google’s massive R&D capabilities but also by strategic necessity. In the enterprise software landscape, Microsoft has a formidable hold, a Fortune 500 chief AI officer told VentureBeat, reassuring customers with its full commitment to tooling Copilot. (The executive requested anonymity because of the sensitivity of commenting on the intense competition among AI cloud providers.) Microsoft’s dominance in Office 365 productivity applications will be exceptionally hard to dislodge through direct feature-for-feature competition, the executive said. Google’s path to potential leadership – its “end-run” around Microsoft’s enterprise hold – lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly “universal AI assistant” powered by a comprehensive world model, it could become the new indispensable layer – the effective operating system – for how users and businesses interact with technology. As Pichai mused with podcaster David Friedberg shortly before I/O, that means awareness of physical surroundings.
And so AR glasses, Pichai said: “maybe that’s the next leap…that’s what’s exciting for me.” But this AI offensive is a race against multiple clocks. First, the $200 billion search-ads engine that funds Google must be protected even as it is reinvented. The U.S. Department of Justice’s monopolization ruling still hangs over Google – divestiture of Chrome has been floated as the leading remedy. And in Europe, the Digital Markets Act as well as emerging copyright-liability lawsuits could hem in how freely Gemini crawls or displays the open web. Finally, execution speed matters. Google has been criticized for moving slowly in past years. But over the past 12 months it became clear that Google had been working patiently on multiple fronts, and that the patience has paid off with faster growth than rivals. The challenge of navigating this AI transition at massive scale is immense, as evidenced by a recent Bloomberg report detailing how even a tech titan like Apple is grappling with significant setbacks and internal reorganizations in its AI initiatives. This industry-wide difficulty underscores the high stakes for all players. While Pichai lacks the showmanship of some rivals, the long list of enterprise customer testimonials Google paraded at its Cloud Next event last month – about actual AI deployments – underscores a leader who lets sustained product cadence and enterprise wins speak for themselves. At the same time, focused competitors advance. Microsoft’s enterprise march continues. Its Build conference showcased Microsoft 365 Copilot as the “UI for AI,” Azure AI Foundry as a “production line for intelligence,” and Copilot Studio for sophisticated agent-building, with impressive low-code workflow demos (Microsoft Build keynote, Miti Joshi at 22:52, Kadesha Kerr at 51:26).
Nadella’s “open agentic web” vision (NLWeb, MCP) offers businesses a pragmatic AI adoption path, allowing selective integration of AI tech – whether Google’s or another competitor’s – within a Microsoft-centric framework. OpenAI, meanwhile, is far ahead in consumer reach with ChatGPT; the company has recently cited 600 million monthly users and 800 million weekly users, compared with the Gemini app’s 400 million monthly users. And in December, OpenAI launched a full-blown search offering, and it is reportedly planning an ad offering – posing what could be an existential threat to Google’s search model. Beyond making leading models, OpenAI is making a provocative vertical play with its reported $6.5 billion acquisition of Jony Ive’s IO, pledging to move “beyond these legacy products” – and hinting at a hardware product that would attempt to disrupt AI the way the iPhone disrupted mobile. While any of this may disrupt Google’s next-gen personal computing ambitions, it’s also true that OpenAI’s ability to build a deep moat like Apple did with the iPhone may be limited in an AI era increasingly defined by open protocols (like MCP) and easier model interchangeability. Internally, Google navigates its vast ecosystem. As Jeanine Banks, Google’s VP of Developer X, told VentureBeat, serving Google’s diverse global developer community means “it’s not a one size fits all,” leading to a rich but sometimes complex array of tools – AI Studio, Vertex AI, Firebase Studio, numerous APIs. Meanwhile, Amazon is pressing from another flank: Bedrock already hosts Anthropic, Meta, Mistral, and Cohere models, giving AWS customers a pragmatic, multi-model default.
For enterprise decision-makers: navigating Google’s ‘world model’ future

Google’s audacious bid to build the foundational intelligence for the AI age presents enterprise leaders with compelling opportunities and critical considerations:

Move now or retrofit later: Falling a release cycle behind could force costly rewrites when assistant-first interfaces become the default.

Tap into revolutionary potential: For organizations seeking to embrace the most powerful AI, leveraging Google’s “world model” research, multimodal capabilities (like Veo 3 and Imagen 4, showcased by Woodward at I/O), and the AGI trajectory Google promises offers a path to potentially significant innovation.

Prepare for a new interaction paradigm: Success for Google’s “universal assistant” would mean a primary new interface for services and data. Enterprises should strategize for integration via APIs and agentic frameworks for context-aware delivery.

Factor in the long game (and its risks): Aligning with Google’s vision is a long-term commitment. The full “world model” and AGI are potentially distant horizons, and decision-makers must balance this against immediate needs and platform complexities.

Contrast with focused alternatives: Pragmatic solutions from Microsoft offer tangible enterprise productivity now, while disruptive hardware-AI from OpenAI/IO presents another distinct path. A diversified strategy, leveraging the best of each, often makes sense, especially with the increasingly open agentic web allowing for such flexibility.

These complex choices and real-world AI adoption strategies will be central to discussions at VentureBeat’s Transform 2025 next month.
Google’s defining offensive: shaping the future or strategic overreach?

Google’s I/O spectacle was a strong statement: Google signaled that it intends to architect and operate the foundational intelligence of the AI-driven future. Its pursuit of a “world model” and its AGI ambitions aim to redefine computing, outflank competitors, and secure its dominance. The audacity is compelling; the technological promise is immense. The big question is execution and timing. Can Google innovate and integrate its vast technologies into a cohesive, compelling experience faster than rivals solidify their positions? Can it do so while transforming search and navigating regulatory challenges? And can it do so while focused so broadly on both consumers and businesses – an agenda arguably much broader than that of its key competitors? The next few years will be pivotal. If Google delivers on its “world model” vision, it may usher in an era of personalized, ambient intelligence, effectively becoming the new operational layer for our digital lives. If not, its grand ambition could become a cautionary tale of a giant reaching for everything, only to find the future defined by others who aimed more specifically, more quickly.
  • The best new product offerings from NYCxDESIGN 2025

    We came, we saw, we conquered. From Long Island City to DUMBO, Greenpoint, Chelsea, Tribeca, Nomad, and Soho, design took over New York this past week for NYCxDESIGN. As the widespread agenda can attest, it was a buzzy and busy week in celebration of design.

    This year, the week coincided with both ICFF and Shelter, which made its debut. If two fairs didn’t present enough design to see, there was also a dizzying array of exhibitions, gatherings, and talks, including AN Interior’s own 10th anniversary party held at Salvatori’s showroom. Brooklyn had a stronger showing than in past years: The programming officially kicked off in Williamsburg and then celebrated its closing night in DUMBO, a newly designated design district. Of all the latest products presented, the following stood out for their visual concept, craftsmanship, attention to production, and longevity. Below are the latest releases pulled from both fairs as well as the many showroom and gallery activations throughout the city that were well worth traversing boroughs to check out in person.
    The Arcora by HEAKO Studio
    Arcora and Himalaya Lunar by HEAKO Studio
    These refined yet playful lights from HEAKO Studio were on view at Shelter. In addition to the standing Oblique Glow light, which balances on a skyscraper-inspired steel base, the Himalaya Lunar and the Arcora are the latest lighting from the New York–based studio. The former is a white stone affixed to an L-shaped brass pipe, finished by hand. The latter continues the geometric language with a curved aluminum body built around an illuminating globe; it can serve as a sconce or a tabletop lamp.

    A wood and leather chair on view at OUTSIDE/IN
    Reflect by Tanuvi Hegde
    Presented at OUTSIDE/IN by Lyle Gallery and Hello Human, Reflect is a chair designed for the fidgety, stimulated, and anxious. Brooklyn-based furniture designer and architect Tanuvi Hegde uses cherry wood with a hand-stitched leather cushion to craft seating with a steel ball embedded in the armrest for fidgeting. Reflect is part of Hegde’s thesis, “Exhibit (A): Furniture for the Anxious Being,” which explores how furniture can respond to emotions and mental health.
    The CMPT collection resolves compact living
    CMPT by Lichen and Karimoku Furniture
    Design platform and showroom Lichen collaborated with Karimoku Furniture at ICFF. In addition to reintroducing the ZE sofa from Karimoku’s archive, the duo launched a new collection, CMPT, that combines the latter’s craftsmanship with the former’s New York sensibilities. Designed for practicality, storage, and the limits of compact spaces, the collection begins with the Apple Box, chestnut cubes that can be stacked atop one another to create shifting consoles or compartments. Each modular box is held together by an exposed wooden peg. The collection, elegantly simple, is designed to grow with its owners throughout their lives.
    Read more about the latest product releases that caught AN’s eye on aninteriormag.com.
  • The Elden Ring movie director already made the best video game-brained movie

    A rumored Elden Ring movie became a little more of a reality on Thursday night when Bandai Namco announced that Alex Garland (Civil War, Ex Machina) was set to direct a film adaptation of the FromSoftware action role-playing game for indie-studio darling A24. George R.R. Martin, who provided game director Hidetaka Miyazaki with a murky amount of mythological foundation for the original game, will serve as a producer on the film.

    Garland might look like an odd choice for Elden Ring based on his filmography; the writer-director has never made a fantasy epic, nor has he orchestrated the kind of medieval combat that would make him an obvious choice to bring Miyazaki’s tough-as-hell boss fights to live action. But Garland’s “gamer cred” is indisputable and an understanding of play is core to much of his work. Hot take time: I’d say his 2012 film Dredd is the greatest video game movie that isn’t actually based on a video game ever made.

    Starting out as a novelist before pivoting to screenwriting and directing, Garland has made his gaming inspirations known throughout his career. He has said that his time outrunning zombie dogs in Resident Evil was the direct inspiration for the fast zombies in 28 Days Later, which he wrote for director Danny Boyle. When he and Boyle teamed up to adapt Garland’s own novel, The Beach, the collaboration resulted in the closest thing we will ever get to Leonardo DiCaprio’s Banjo-Kazooie movie.

    In 2005, riding high off 28 Days Later’s success, Garland was tasked by Microsoft with adapting Halo into a feature film — a project that stalled out and sat on a shelf for so long that streaming television was invented and Halo became a decent Paramount Plus show instead. He also went on to collaborate on actual video games: He worked with Ninja Theory and Bandai Namco on 2010’s Enslaved: Odyssey to the West, and he served as a story supervisor on 2013’s DmC: Devil May Cry. At some point around that time, he played and fell hard for The Last of Us.

    Garland’s gaming tastes are all over the map — in 2020 he aggressively kept up an Animal Crossing island like the rest of us — but his visible influences veer toward the AAA action experience. His adaptation of Jeff VanderMeer’s Annihilation has the pace and encounters of an open-world game. His FX show Devs seems right up the alley of anyone looking for Deus Ex or Control vibes. Both Civil War and his 2025 film Warfare bring audiences closer to the kind of tactical military action that we rarely see in movies, but that is all over multiplayer shooters. But for my money, his off-the-leash translation of video game aesthetics and experience in cinematic form happened with Dredd.

    Written and produced by Garland and technically directed by Pete Travis, Dredd drops the classic 2000 AD comic antihero, played by The Boys’ Karl Urban, into a The Raid-esque action scenario: To stop a violent drug lord, the Judge must blast his way through 200 stories of a highly barricaded Mega-City One high-rise. Between the slo-mo effects induced by the illicit drug Slo-Mo and the psychic abilities of Dredd’s sidekick Cassandra, Dredd is a dizzying array of action beats that plunges viewers into a bullet hell without resorting to any gimmicky first-person shooting.

    By all accounts, the making of Dredd was a fraught experience for all involved, with the studio losing enough faith in Travis that Garland remained on set for the entire shoot and supervised the edit. Urban even claims Garland “actually directed the movie.” When you see it, that makes sense — even the Slo-Mo effects feel specifically like a bullet-time mechanic rather than a complete acid trip.

    Will Garland make a great Elden Ring movie? What does that even look like? The good news is he’s probably been thinking about it for years, as a fan of FromSoft games. In interviews over the years, the filmmaker has cited Dark Souls as a particular favorite franchise, and even offered an explanation for why an adaptation would be such a challenge.

    “The Dark Souls games seem to have this embedded poetry in them,” Garland told GameSpot in 2020. “You’ll be wandering around and find some weird bit of dialogue with some sort of broken song with a bit of armor outside a doorway and it feels like you’ve drifted into some existential dream. That’s what I really love about Dark Souls. These spaces are so imaginative and they seem to flow into each other and flow out of each other. It’s very dreamlike. I can’t imagine how that would [translate]. The quality that makes Dark Souls special is probably unique to video games.”

    The joy Garland finds in Dark Souls games isn’t far off from what Elden Ring offers him as a director — in the end, a successful adaptation will ride on mood and pace and some wicked fights. That’s what Dredd nails, even without a game as actual source material. Dredd broods without relying on too much exposition. Cassandra’s ethereal psychic powers thread a bit of innocence and whimsy into a heavy-metal dystopia. The action is brutal to the point that it often feels like a horror movie. 

    “Elden Ring from the guy who brought us Dredd” makes a lot of sense. Now to find an actor with eight arms…
    #elden #ring #movie #director #already
    The Elden Ring movie director already made the best video game-brained movie
    A rumored Elden Ring movie became a little more of a reality on Thursday night when Bandai Namco announced that Alex Garlandwas set to direct a film adaptation of the FromSoftware action role-playing-game for indie-studio darling A24. George R.R. Martin, who provided game director Hidetaka Miyazaki with a murky amount of mythological foundation for the original game, will serve as a producer on the film. Garland might look like an odd choice for Elden Ring based on his filmography; the writer-director has never made a fantasy epic, nor has he orchestrated the kind of medieval combat that would make him an obvious choice to bring Miyazaki’s tough-as-hell boss fights to live action. But Garland’s “gamer cred” is indisputable and an understanding of play is core to much of his work. Hot take time: I’d say his 2012 film Dredd is the greatest video game movie that isn’t actually based on a video game ever made. Starting out as a novelist before pivoting to screenwriting and directing, Garland has made his gaming inspirations known throughout his career. He has said that his time outrunning zombie dogs in Resident Evil was the direct inspiration for the fast zombies in 28 Days Later, which he wrote for director Danny Boyle. When he and Boyle teamed up to adapt Garland’s own novel, The Beach, the collaboration resulted in the closest thing we will ever get to Leonardo DiCaprio’s Banjo-Kazooie movie. In 2005, riding high off 28 Days Later’s success, Garland was tasked by Microsoft with adapting Halo into a feature film — a project that stalled out and sat on a shelf for so long that streaming television was invented and Halo became a decent Paramount Plus show instead. He also went on to collaborate on actual video games: He worked with Ninja Theory and Bandai Namco on 2010’s Enslaved: Odyssey to the West, and he served as a story supervisor on 2013’s DmC: Devil May Cry. 
At some point around that time, he played and fell hard for The Last of Us.Garland’s gaming tastes are all over the map — in 2020 he aggressively kept up an Animal Crossing island like the rest of us — but his visible influences veer toward the AAA action experience. His adaptation of Jeff VanderMeer’s Annihilation has the pace and encounters of an open-world game. His FX show Devs seems right up the alley of anyone looking for Deus Ex or Control vibes. Both Civil War and his 2025 film Warfare bring audiences closer to the kind of tactical military action that we rarely see in movies, but that is all over multiplayer shooters. But for my money, his off-the-leash translation of video game aesthetics and experience in cinematic form happened with Dredd. Written and produced by Garland and technically directed by Pete Travis, Dredd drops the classic 2000 AD comic antihero, played by The Boys’ Karl Urban, into a The Raid-esque action scenario: To stop a violent drug lord, the Judge must blast his way through 200 stories of a highly barricaded Mega-City One high-rise. Between the slo-mo effects induced by the illicit drugand the psychic abilities of Dredd’s sidekick Cassandra, Dredd is a dizzying array of action beats that plunges viewers into a bullet hell without resorting to any gimmicky first-person shooting. By all accounts, the making of Dredd was a fraught experience for all involved, with the studio losing enough faith in Travis that Garland remained on set for the entire shoot and supervised the edit. Urban even claims Garland “actually directed the movie.” When you see it, that makes sense — even the Slo-Mo effects feel specifically like a bullet-time mechanic rather than a complete acid trip. Will Garland make a great Elden Ring movie? What does that even look like? The good news is he’s probably been thinking about it for years, as a fan of FromSoft games. 
In interviews over the years, the filmmaker has cited Dark Souls as a particular favorite franchise, and even offered an explanation for why an adaptation would be such a challenge. “The Dark Souls games seem to have this embedded poetry in them,” Garland told Gamespot in 2020. “You’ll be wandering around and find some weird bit of dialogue with some sort of broken song with a bit of armor outside a doorway and it feels like you’ve drifted into some existential dream. That’s what I really love about Dark Souls. These spaces are so imaginative and they seem to flow into each other and flow out of each other. It’s very dreamlikeI can’t imagine how that would. The quality that makes Dark Souls special is probably unique to video games.” The joy Garland finds in Dark Souls games isn’t far off from what Elden Ring offers him as a director — in the end, a successful adaptation will ride on mood and pace and some wicked fights. That’s what Dredd nails, even without a game as actual source material. Dredd broods without relying on too much exposition. Cassandra’s ethereal psychic powers thread a bit of innocence and whimsy into a heavy-metal dystopia. The action is brutal to the point that it often feels like a horror movie.  “Elden Ring from the guy who brought us Dredd” makes a lot of sense. Now to find an actor with eight arms… #elden #ring #movie #director #already
    WWW.POLYGON.COM
    The Elden Ring movie director already made the best video game-brained movie
    A rumored Elden Ring movie became a little more of a reality on Thursday night when Bandai Namco announced that Alex Garland (Civil War, Ex Machina) was set to direct a film adaptation of the FromSoftware action role-playing game for indie-studio darling A24. George R.R. Martin, who provided game director Hidetaka Miyazaki with a murky amount of mythological foundation for the original game, will serve as a producer on the film. Garland might look like an odd choice for Elden Ring based on his filmography; the writer-director has never made a fantasy epic, nor has he orchestrated the kind of medieval combat that would make him an obvious choice to bring Miyazaki’s tough-as-hell boss fights to live action. But Garland’s “gamer cred” is indisputable, and an understanding of play is core to much of his work. Hot take time: I’d say his 2012 film Dredd is the greatest video game movie that isn’t actually based on a video game ever made. Starting out as a novelist before pivoting to screenwriting and directing, Garland has made his gaming inspirations known throughout his career. He has said that his time outrunning zombie dogs in Resident Evil was the direct inspiration for the fast zombies in 28 Days Later, which he wrote for director Danny Boyle. When he and Boyle teamed up to adapt Garland’s own novel, The Beach, the collaboration resulted in the closest thing we will ever get to Leonardo DiCaprio’s Banjo-Kazooie movie. In 2005, riding high off 28 Days Later’s success, Garland was tasked by Microsoft with adapting Halo into a feature film — a project that stalled out and sat on a shelf for so long that streaming television was invented and Halo became a decent Paramount Plus show instead. He also went on to collaborate on actual video games: He worked with Ninja Theory and Bandai Namco on 2010’s Enslaved: Odyssey to the West, and he served as a story supervisor on 2013’s DmC: Devil May Cry. 
At some point around that time, he played and fell hard for The Last of Us. (In fact, Garland thinks TLOU is better than 28 Days Later, but hey, none of us are right about everything.) Garland’s gaming tastes are all over the map — in 2020 he aggressively kept up an Animal Crossing island like the rest of us — but his visible influences veer toward the AAA action experience. His adaptation of Jeff VanderMeer’s Annihilation has the pace and encounters of an open-world game. His FX show Devs seems right up the alley of anyone looking for Deus Ex or Control vibes. Both Civil War and his 2025 film Warfare bring audiences closer to the kind of tactical military action that we rarely see in movies, but that is all over multiplayer shooters. But for my money, his off-the-leash translation of video game aesthetics and experience in cinematic form happened with Dredd. Written and produced by Garland and technically directed by Pete Travis (Vantage Point), Dredd drops the classic 2000 AD comic antihero, played by The Boys’ Karl Urban, into a The Raid-esque action scenario: To stop a violent drug lord (Lena Headey), the Judge must blast his way through 200 stories of a highly barricaded Mega-City One high-rise. Between the slo-mo effects induced by the illicit drug (appropriately named “Slo-Mo”) and the psychic abilities of Dredd’s sidekick Cassandra, Dredd is a dizzying array of action beats that plunges viewers into a bullet hell without resorting to any gimmicky first-person shooting. By all accounts, the making of Dredd was a fraught experience for all involved, with the studio losing enough faith in Travis that Garland remained on set for the entire shoot and supervised the edit. Urban even claims Garland “actually directed the movie.” When you see it, that makes sense — even the Slo-Mo effects feel specifically like a bullet-time mechanic rather than a complete acid trip. Will Garland make a great Elden Ring movie? What does that even look like? 
The good news is he’s probably been thinking about it for years, as a fan of FromSoft games. In interviews over the years, the filmmaker has cited Dark Souls as a particular favorite franchise, and even offered an explanation for why an adaptation would be such a challenge. “The Dark Souls games seem to have this embedded poetry in them,” Garland told GameSpot in 2020. “You’ll be wandering around and find some weird bit of dialogue with some sort of broken song with a bit of armor outside a doorway and it feels like you’ve drifted into some existential dream. That’s what I really love about Dark Souls. These spaces are so imaginative and they seem to flow into each other and flow out of each other. It’s very dreamlike […] I can’t imagine how that would [be adapted]. The quality that makes Dark Souls special is probably unique to video games.” The joy Garland finds in Dark Souls games isn’t far off from what Elden Ring offers him as a director — in the end, a successful adaptation will ride on mood and pace and some wicked fights. That’s what Dredd nails, even without a game as actual source material. Dredd broods without relying on too much exposition. Cassandra’s ethereal psychic powers thread a bit of innocence and whimsy into a heavy-metal dystopia. The action is brutal to the point that it often feels like a horror movie (a style Garland pushed to even more gut-wrenching, realistic extremes in Warfare). “Elden Ring from the guy who brought us Dredd” makes a lot of sense. Now to find an actor with eight arms…
  • Sloths once came in a dizzying array of sizes. Here’s why

    The sloth family tree once sported a dizzying array of branches, body sizes and lifestyles, from small and limber tree climbers to lumbering bear-sized landlubbers. 
    Why sloth body size was once so diverse, while today’s sloths are limited to just two diminutive tree-dwellers, has been a long-standing question. Scientists have proposed that sloths’ body size might be linked to a wide variety of factors: habitat preferences, diets, changes in global temperature, or pressure from large predators or humans.

    WWW.SCIENCENEWS.ORG
  • Giant ground sloths evolved three different times for the same reason

    Ancient sloths came in a variety of sizes (Image: Diego Barletta)
    A cooling, drying climate turned sloths into giants – before humans potentially drove the huge animals to extinction.
    Today’s sloths are small, famously sluggish herbivores that move through the tropical canopies of rainforests. But for tens of millions of years, South America was home to a dizzying diversity of sloths. Many were ground-dwelling giants, with some behemoths approaching 5 tonnes in weight.
    That staggering size range is of particular interest to Alberto Boscaini at the University of Buenos Aires in Argentina and his colleagues.
    “Body size correlates with everything in the biological traits of an animal,” says Boscaini. “This was a promising way of studying [sloth] evolution.”
    Boscaini and his colleagues compiled data on the physical features, DNA and proteins of 67 extinct and living sloth genera – groups of closely related species – to develop a family tree showing their evolutionary relationships.


    The researchers then took this evolutionary history, which covered a span of 35 million years, and added information about each sloth’s habitat, diet and lifestyle. They also studied trends in body-size evolution, making body mass estimates of 49 of the ancient and modern sloth groups.
    The results suggest sloth body-size evolution was heavily influenced by climatic and habitat changes. For instance, some sloth genera began living in trees – similar to today’s sloths – and shrank in body size as they did so.
    Meanwhile, three different lineages of sloths independently evolved elephantine proportions – and it seems they did this within the last several million years, as the planet cooled and the growth of the Andes mountains made South America more arid.
    “Gigantism is more closely associated with cold and dry climates,” says team member Daniel Casali at the University of São Paulo, Brazil.
    Many of these diverse sloths disappeared during two stages: one around 12,000 years ago and the other around 6000 years ago, says Boscaini.
    “This matches with the expansion of Homo sapiens, first over the entire American supercontinent, and later in the Caribbean,” he says — which is where some giant sloths lived. Notably, the only surviving sloth species live in trees so are much harder for humans to hunt than massive ground sloths.

    The idea that humans were the death blow for ancient megafauna is well-supported, says Thaís Rabito Pansani at the University of New Mexico, who wasn’t involved in the study.
    “However, in science, we need several lines of evidence to reinforce our hypotheses, especially in unresolved and highly debated issues such as the extinction of megafauna,” she says. The new evidence shores up this story.
    “Sloths were thriving for most of their history,” says Casali. “[The findings] teach us how a very successful [group] can become so vulnerable very quickly.”
    Journal reference: Science, DOI: 10.1126/science.adu0704
    WWW.NEWSCIENTIST.COM
CGShares https://cgshares.com