• NVIDIA CEO Drops the Blueprint for Europe’s AI Boom

    At GTC Paris — held alongside VivaTech, Europe’s largest tech event — NVIDIA founder and CEO Jensen Huang delivered a clear message: Europe isn’t just adopting AI — it’s building it.
    “We now have a new industry, an AI industry, and it’s now part of the new infrastructure, called intelligence infrastructure, that will be used by every country, every society,” Huang said, addressing an audience gathered online and at the iconic Dôme de Paris.
    From exponential inference growth to quantum breakthroughs, and from infrastructure to industry, agentic AI to robotics, Huang outlined how the region is laying the groundwork for an AI-powered future.

    A New Industrial Revolution
    At the heart of this transformation, Huang explained, are systems like GB200 NVL72 — “one giant GPU” and NVIDIA’s most powerful AI platform yet — now in full production and powering everything from sovereign models to quantum computing.
    “This machine was designed to be a thinking machine, a thinking machine, in the sense that it reasons, it plans, it spends a lot of time talking to itself,” Huang said, walking the audience through the size and scale of these machines and their performance.
    At GTC Paris, Huang showed audience members the innards of some of NVIDIA’s latest hardware.
    There’s more coming, with Huang saying NVIDIA’s partners are now producing 1,000 GB200 systems a week, “and this is just the beginning.” He walked the audience through the lineup of available systems, from the tiny NVIDIA DGX Spark to rack-mounted RTX PRO Servers.
    Huang explained that NVIDIA is working to help countries use technologies like these to build both AI infrastructure — services built for third parties to use and innovate on — and AI factories, which companies build for their own use, to generate revenue.
    NVIDIA is partnering with European governments, telcos and cloud providers to deploy NVIDIA technologies across the region. NVIDIA is also expanding its network of technology centers across Europe — including new hubs in Finland, Germany, Spain, Italy and the U.K. — to accelerate skills development and quantum growth.
    Quantum Meets Classical
    Europe’s quantum ambitions just got a boost.
    The NVIDIA CUDA-Q platform is live on Denmark’s Gefion supercomputer, opening new possibilities for hybrid AI and quantum engineering. In addition, Huang announced that CUDA-Q is now available on NVIDIA Grace Blackwell systems.
    Across the continent, NVIDIA is partnering with supercomputing centers and quantum hardware builders to advance hybrid quantum-AI research and accelerate quantum error correction.
    “Quantum computing is reaching an inflection point,” Huang said. “We are within reach of being able to apply quantum computing, quantum classical computing, in areas that can solve some interesting problems in the coming years.”
    Sovereign Models, Smarter Agents
    European developers want more control over their models. Enter NVIDIA Nemotron, designed to help build large language models tuned to local needs.
    “And so now you know that you have access to an enhanced open model that is still open, that is top of the leader chart,” Huang said.
    These models will be coming to Perplexity, a reasoning search engine, enabling secure, multilingual AI deployment across Europe.
    “You can now ask and get questions answered in the language, in the culture, in the sensibility of your country,” Huang said.
    Huang explained how NVIDIA is helping countries across Europe build AI infrastructure.
    Every company will build its own agents, Huang said. To help create those agents, Huang introduced a suite of agentic AI blueprints, including an Agentic AI Safety blueprint for enterprises and governments.
    The new NVIDIA NeMo Agent toolkit and NVIDIA AI Blueprint for building data flywheels further accelerate the development of safe, high-performing AI agents.
    To help deploy these agents, NVIDIA is partnering with European governments, telcos and cloud providers to deploy the DGX Cloud Lepton platform across the region, providing instant access to accelerated computing capacity.
    “One model architecture, one deployment, and you can run it anywhere,” Huang said, adding that Lepton is now integrated with Hugging Face, giving developers direct access to global compute.
    The Industrial Cloud Goes Live
    AI isn’t just virtual. It’s powering physical systems, too, sparking a new industrial revolution.
    “We’re working on industrial AI with one company after another,” Huang said, describing work to build digital twins based on the NVIDIA Omniverse platform with companies across the continent.
    Huang explained that everything he showed during his keynote was “computer simulation, not animation” and that it looks beautiful because “it turns out the world is beautiful, and it turns out math is beautiful.”
    To further this work, Huang announced NVIDIA is launching the world’s first industrial AI cloud — to be built in Germany — to help Europe’s manufacturers simulate, automate and optimize at scale.
    “Soon, everything that moves will be robotic,” Huang said. “And the car is the next one.”
    NVIDIA DRIVE, NVIDIA’s full-stack AV platform, is now in production to accelerate the large-scale deployment of safe, intelligent transportation.
    And to show what’s coming next, Huang was joined on stage by Grek, a pint-sized robot, as Huang talked about how NVIDIA partnered with DeepMind and Disney to build Newton, the world’s most advanced physics training engine for robotics.
    The Next Wave
    The next wave of AI has begun — and it’s exponential, Huang explained.
    “We have physical robots, and we have information robots. We call them agents,” Huang said. “The technology necessary to teach a robot to manipulate, to simulate — and of course, the manifestation of an incredible robot — is now right in front of us.”
    This new era of AI is being driven by a surge in inference workloads. “The number of people using inference has gone from 8 million to 800 million — 100x in just a couple of years,” Huang said.
    To meet this demand, Huang emphasized the need for a new kind of computer: “We need a special computer designed for thinking, designed for reasoning. And that’s what Blackwell is — a thinking machine.”
    Huang and Grek, as he explained how AI is driving advancements in robotics.
    These Blackwell-powered systems will live in a new class of data centers — AI factories — built to generate tokens, the raw material of modern intelligence.
    “These AI factories are going to generate tokens,” Huang said, turning to Grek with a smile. “And these tokens are going to become your food, little Grek.”
    With that, the keynote closed on a bold vision: a future powered by sovereign infrastructure, agentic AI, robotics — and exponential inference — all built in partnership with Europe.
    Watch the NVIDIA GTC Paris keynote from Huang at VivaTech and explore GTC Paris sessions.
  • Would you switch browsers for a chatbot?

    Hi, friends! Welcome to Installer No. 87, your guide to the best and Verge-iest stuff in the world. This week, I’ve been reading about Sabrina Carpenter and Khaby Lame and intimacy coordinators, finally making a dent in Barbarians at the Gate, watching all the Ben Schwartz and Friends I can find on YouTube, planning my days with the new Finalist beta, recklessly installing all the Apple developer betas after WWDC, thoroughly enjoying Dakota Johnson’s current press tour, and trying to clear all my inboxes before I go on parental leave. It’s… going.
    I also have for you a much-awaited new browser, a surprise update to a great photo editor, a neat trailer for a meh-looking movie, a classic Steve Jobs speech, and much more. Slightly shorter issue this week, sorry; there’s just a lot going on, but I didn’t want to leave y’all hanging entirely. Oh, and: we’ll be off next week, for Juneteenth, vacation, and general summer chaos reasons. We’ll be back in full force after that, though! Let’s get into it.
    The Drop
    Dia. I know there are a lot of Arc fans here in the Installerverse, and I know you, like me, will have a lot of feelings about the company’s new and extremely AI-focused browser. Personally, I don’t see leaving Arc anytime soon, but there are some really fascinating ideas in Dia already.
    Snapseed 3.0. I completely forgot Snapseed even existed, and now here’s a really nice update with a bunch of new editing tools and a nice new redesign! As straightforward photo editors go, this is one of the better ones. The new version is only on iOS right now, but I assume it’s heading to Android shortly.
    “I Tried To Make Something In America.” I was first turned onto the story of the Smarter Scrubber by a great Search Engine episode, and this is a great companion to the story about what it really takes to bring manufacturing back to the US. And why it’s hard to justify.
    That link, and the trailer, will only do anything for you if you have a newer iPhone. But even if you don’t care about the movie, the trailer — which actually buzzes in sync with the car’s rumbles and revs — is just really, really cool.
    Android 16. You can’t get the cool, colorful new look just yet or the desktop mode I am extremely excited about — there’s a lot of good stuff in Android 16, but most of it is coming later. Still, Live Updates look good, and there’s some helpful accessibility stuff, as well.
    The Infinite Machine Olto. I am such a sucker for any kind of futuristic-looking electric scooter, and this one really hits the sweet spot. Part moped, part e-bike, all Blade Runner vibes. If it wasn’t then I would’ve probably ordered one already.
    The Fujifilm X-E5. I kept wondering why Fujifilm didn’t just make, like, a hundred different great-looking cameras at every imaginable price, because everyone wants a camera this cool. Well, here we are! It’s a spin on the X100VI but with interchangeable lenses and a few power-user features. All my photographer friends are going to want this.
    Call Her Alex. I confess I’m no Call Her Daddy diehard, but I found this two-part doc on Alex Cooper really interesting. Cooper’s story is all about understanding people, the internet, and what it means to feel connected now. It’s all very low-stakes and somehow also existential? It’s only two parts; you should watch it.
    “Steve Jobs - 2005 Stanford Commencement Address.” For the 20th anniversary of Jobs’ famous speech, the Steve Jobs Archive put together a big package of stories, notes, and other materials around the speech. Plus, a newly high-def version of the video. This one’s always worth the 15 minutes.
    Dune: Awakening. Dune has ascended to the rare territory of “I will check out anything from this franchise, ever, no questions asked.” This game is big on open-world survival and ornithopters, too, so it’s even more my kind of thing. And it’s apparently punishingly difficult in spots.
    Crowdsourced
    Here’s what the Installer community is into this week.
    I want to know what you’re into right now as well! Email installer@theverge.com or message me on Signal — @davidpierce.11 — with your recommendations for anything and everything, and we’ll feature some of our favorites here every week. For even more great recommendations, check out the replies to this post on Threads and this post on Bluesky.
    “I had tried the paper planner in the leather Paper Republic journal but since have moved onto the Remarkable Paper Pro color e-ink device, which takes everything you like about paper but makes it editable and color coded. Combine this with a Remarkable planner in PDF format off of Etsy and you are golden.” — Jason
    “I started reading a manga series from content creator Cory Kenshin called Monsters We Make. So far, I love it. Already preordered Vol. 2.” — Rob
    “I recently went down the third-party controller rabbit hole after my trusty adapted Xbox One controller finally kicked the bucket, and I wanted something I could use across my PC, phone, handheld, Switch, etc. I’ve been playing with the GameSir Cyclone 2 for a few weeks, and it feels really deluxe. The thumbsticks are impossibly smooth and accurate thanks to its TMR joysticks. The face buttons took a second for my brain to adjust to; the short travel distance initially registered as mushy, but once I stopped trying to pound the buttons like I was at the arcade, I found the subtle mechanical click super satisfying.” — Sam
    “The Apple TV Plus miniseries Long Way Home. It’s Ewan McGregor and Charley Boorman’s fourth Long Way series. This time they are touring some European countries on vintage bikes that they fixed, and it’s such a light-hearted show from two really down-to-earth humans. Connecting with other people in different cultures and seeing their journey is such a treat!” — Esmael
    “Podcast recommendation: Devil and the Deep Blue Sea by Christianity Today. A deep dive into the Satanic Panic of the ’80s and ’90s.” — Drew
    “Splatoon 3 and the new How to Train Your Dragon.” — Aaron
    “I can’t put Mario Kart World down. When I get tired of the intense Knockout Tour mode, I go to Free Roam and try to knock out P-Switch challenges, some of which are really tough! I’m obsessed.” — Dave
    “Fable, a cool app for finding books with virtual book clubs. It’s the closest to a more cozy online bookstore with more honest reviews. I just wish you could click on the author’s name to see their other books.” — Astrid
    “This is the Summer Games Fest week, and there are a TON of game demos to try out on Steam. One that has caught my attention / play time the most is Wildgate. It’s a team-based spaceship shooter where ship crews battle and try to escape with a powerful artifact.” — Sean
    “Battlefront 2 is back for some reason. Still looks great.” — Ian
    Signing off
    I have long been fascinated by weather forecasting. I recommend Andrew Blum’s book, The Weather Machine, to people all the time, as a way to understand both how we learned to predict the weather and why it’s a literally culture-changing thing to be able to do so. And if you want to make yourself so, so angry, there’s a whole chunk of Michael Lewis’s book, The Fifth Risk, about how a bunch of companies managed to basically privatize forecasts… based on government data. The weather is a huge business, an extremely powerful political force, and even more important to our way of life than we realize. And we’re really good at predicting the weather!
    I’ve also been hearing for years that weather forecasting is a perfect use for AI. It’s all about vast quantities of historical data, tiny fluctuations in readings, and finding patterns that often don’t want to be found. So, of course, as soon as I read my colleague Justine Calma’s story about a new Google project called Weather Lab, I spent the next hour poking through the data to see how well DeepMind managed to predict and track recent storms. It’s deeply wonky stuff, but it’s cool to see Big Tech trying to figure out Mother Nature — and almost getting it right. Almost.
    See you next week!
It’s deeply wonky stuff, but it’s cool to see Big Tech trying to figure out Mother Nature — and almost getting it right. Almost.See you next week!See More: #would #you #switch #browsers #chatbot
    WWW.THEVERGE.COM
    Would you switch browsers for a chatbot?
    Hi, friends! Welcome to Installer No. 87, your guide to the best and Verge-iest stuff in the world. (If you’re new here, welcome, happy It’s Officially Too Hot Now Week, and also you can read all the old editions at the Installer homepage.) This week, I’ve been reading about Sabrina Carpenter and Khaby Lame and intimacy coordinators, finally making a dent in Barbarians at the Gate, watching all the Ben Schwartz and Friends I can find on YouTube, planning my days with the new Finalist beta, recklessly installing all the Apple developer betas after WWDC, thoroughly enjoying Dakota Johnson’s current press tour, and trying to clear all my inboxes before I go on parental leave. It’s… going.

    I also have for you a much-awaited new browser, a surprise update to a great photo editor, a neat trailer for a meh-looking movie, a classic Steve Jobs speech, and much more. Slightly shorter issue this week, sorry; there’s just a lot going on, but I didn’t want to leave y’all hanging entirely. Oh, and: we’ll be off next week, for Juneteenth, vacation, and general summer chaos reasons. We’ll be back in full force after that, though! Let’s get into it.

    (As always, the best part of Installer is your ideas and tips. What do you want to know more about? What awesome tricks do you know that everyone else should? What app should everyone be using? Tell me everything: installer@theverge.com. And if you know someone else who might enjoy Installer, forward it to them and tell them to subscribe here.)

    The Drop

    Dia. I know there are a lot of Arc fans here in the Installerverse, and I know you, like me, will have a lot of feelings about the company’s new and extremely AI-focused browser. Personally, I don’t see leaving Arc anytime soon, but there are some really fascinating ideas (and nice design touches) in Dia already.

    Snapseed 3.0. I completely forgot Snapseed even existed, and now here’s a really nice update with a bunch of new editing tools and a nice new redesign! As straightforward photo editors go, this is one of the better ones. The new version is only on iOS right now, but I assume it’s heading to Android shortly.

    “I Tried To Make Something In America.” I was first turned onto the story of the Smarter Scrubber by a great Search Engine episode, and this is a great companion to the story about what it really takes to bring manufacturing back to the US. And why it’s hard to justify.

    That link, and the trailer, will only do anything for you if you have a newer iPhone. But even if you don’t care about the movie, the trailer — which actually buzzes in sync with the car’s rumbles and revs — is just really, really cool.

    Android 16. You can’t get the cool, colorful new look just yet or the desktop mode I am extremely excited about — there’s a lot of good stuff in Android 16, but most of it is coming later. Still, Live Updates look good, and there’s some helpful accessibility stuff, as well.

    The Infinite Machine Olto. I am such a sucker for any kind of futuristic-looking electric scooter, and this one really hits the sweet spot. Part moped, part e-bike, all Blade Runner vibes. If it wasn’t $3,500, then I would’ve probably ordered one already.

    The Fujifilm X-E5. I kept wondering why Fujifilm didn’t just make, like, a hundred different great-looking cameras at every imaginable price, because everyone wants a camera this cool. Well, here we are! It’s a spin on the X100VI but with interchangeable lenses and a few power-user features. All my photographer friends are going to want this.

    Call Her Alex. I confess I’m no Call Her Daddy diehard, but I found this two-part doc on Alex Cooper really interesting. Cooper’s story is all about understanding people, the internet, and what it means to feel connected now. It’s all very low-stakes and somehow also existential? It’s only two parts; you should watch it.

    “Steve Jobs - 2005 Stanford Commencement Address.” For the 20th anniversary of Jobs’ famous (and genuinely fabulous) speech, the Steve Jobs Archive put together a big package of stories, notes, and other materials around the speech. Plus, a newly high-def version of the video. This one’s always worth the 15 minutes.

    Dune: Awakening. Dune has ascended to the rare territory of “I will check out anything from this franchise, ever, no questions asked.” This game is big on open-world survival and ornithopters, too, so it’s even more my kind of thing. And it’s apparently punishingly difficult in spots.

    Crowdsourced

    Here’s what the Installer community is into this week. I want to know what you’re into right now as well! Email installer@theverge.com or message me on Signal — @davidpierce.11 — with your recommendations for anything and everything, and we’ll feature some of our favorites here every week. For even more great recommendations, check out the replies to this post on Threads and this post on Bluesky.

    “I had tried the paper planner in the leather Paper Republic journal but since have moved onto the Remarkable Paper Pro color e-ink device which takes everything you like about paper but makes it editable and color coded. Combine this with a Remarkable planner in PDF format off of Etsy and you are golden.” — Jason

    “I started reading a manga series from content creator Cory Kenshin called Monsters We Make. So far, I love it. Already preordered Vol. 2.” — Rob

    “I recently went down the third party controller rabbit hole after my trusty adapted Xbox One controller finally kicked the bucket, and I wanted something I could use across my PC, phone, handheld, Switch, etc. I’ve been playing with the GameSir Cyclone 2 for a few weeks, and it feels really deluxe. The thumbsticks are impossibly smooth and accurate thanks to its TMR joysticks. The face buttons took a second for my brain to adjust to; the short travel distance initially registered as mushy, but once I stopped trying to pound the buttons like I was at the arcade, I found the subtle mechanical click super satisfying.” — Sam

    “The Apple TV Plus miniseries Long Way Home. It’s Ewan McGregor and Charley Boorman’s fourth Long Way series. This time they are touring some European countries on vintage bikes that they fixed, and it’s such a light-hearted show from two really down to earth humans. Connecting with other people in different cultures and seeing their journey is such a treat!” — Esmael

    “Podcast recommendation: Devil and the Deep Blue Sea by Christianity Today. A deep dive into the Satanic Panic of the 80’s and 90’s.” — Drew

    “Splatoon 3 (the free Switch 2 update) and the new How to Train Your Dragon.” — Aaron

    “I can’t put Mario Kart World down. When I get tired of the intense Knockout Tour mode I go to Free Roam and try to knock out P-Switch challenges, some of which are really tough! I’m obsessed.” — Dave

    “Fable, a cool app for finding books with virtual book clubs. It’s the closest to a more cozy online bookstore with more honest reviews. I just wish you could click on the author’s name to see their other books.” — Astrid

    “This is the Summer Games Fest week (formerly E3, RIP) and there are a TON of game demos to try out on Steam. One that has caught my attention / play time the most is Wildgate. It’s a team based spaceship shooter where ship crews battle and try to escape with a powerful artifact.” — Sean

    “Battlefront 2 is back for some reason. Still looks great.” — Ian

    Signing off

    I have long been fascinated by weather forecasting. I recommend Andrew Blum’s book, The Weather Machine, to people all the time, as a way to understand both how we learned to predict the weather and why it’s a literally culture-changing thing to be able to do so. And if you want to make yourself so, so angry, there’s a whole chunk of Michael Lewis’s book, The Fifth Risk, about how a bunch of companies managed to basically privatize forecasts… based on government data. The weather is a huge business, an extremely powerful political force, and even more important to our way of life than we realize. And we’re really good at predicting the weather!

    I’ve also been hearing for years that weather forecasting is a perfect use for AI. It’s all about vast quantities of historical data, tiny fluctuations in readings, and finding patterns that often don’t want to be found. So, of course, as soon as I read my colleague Justine Calma’s story about a new Google project called Weather Lab, I spent the next hour poking through the data to see how well DeepMind managed to predict and track recent storms. It’s deeply wonky stuff, but it’s cool to see Big Tech trying to figure out Mother Nature — and almost getting it right. Almost.

    See you next week!
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose.
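    The “smart pooling” step can be pictured as a learned soft selection over height bins: for every BEV cell, a learned score decides which height’s feature should dominate. Here is a minimal NumPy sketch of that idea; the array shapes, the softmax weighting, and the function names are illustrative assumptions, not the paper’s actual implementation:

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pool_heights(feats, scores):
    """Soft selection over the vertical dimension.

    feats:  (H, N, C) features at H height bins for N BEV cells
    scores: (H, N)    learned importance logits per height bin
    Returns a (N, C) BEV feature map: each cell is a convex
    combination of its height-bin features, dominated by the
    bin with the highest score (e.g. facade vs. rooftop edge).
    """
    w = softmax(scores, axis=0)            # weights sum to 1 per cell
    return (w[:, :, None] * feats).sum(axis=0)
```

    In a trained model the logits would come from a network head, so the pooling stays differentiable and can be learned from the final pose loss alone, consistent with the weak supervision described above.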

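    The final matching-and-pose step lends itself to a compact sketch. Below is a weighted 2D Procrustes (Kabsch-style) solver in NumPy that recovers a 3-DoF pose from already-matched BEV/aerial point pairs; the function name and the confidence-weighting scheme are illustrative assumptions rather than code from the paper:

```python
import numpy as np

def procrustes_2d(ground_pts, aerial_pts, weights=None):
    """Rigid 2D alignment (rotation + translation, no scale) via SVD.

    ground_pts, aerial_pts: (N, 2) matched point pairs
    weights:                (N,) optional match confidences
    Returns (yaw_radians, translation) mapping ground points
    onto their aerial counterparts: a ≈ R @ g + t.
    """
    if weights is None:
        weights = np.ones(len(ground_pts))
    w = weights / weights.sum()
    cg = (w[:, None] * ground_pts).sum(axis=0)   # weighted centroids
    ca = (w[:, None] * aerial_pts).sum(axis=0)
    Gc, Ac = ground_pts - cg, aerial_pts - ca
    H = (w[:, None] * Gc).T @ Ac                 # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = ca - R @ cg
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return yaw, t
```

    Because this closed-form solver is differentiable almost everywhere, a pose-only loss can propagate gradients back through the match weights, which is one plausible way the feature matching can be trained with nothing but the final camera pose as supervision.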
    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project.
    Jean-marc Mommessin is an AI business executive. He leads and accelerates growth for AI-powered solutions and started a computer vision company in 2006. He is a recognized speaker at AI conferences and has an MBA from Stanford.
    WWW.MARKTECHPOST.COM
    EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments
    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization during CVPR 2025 Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset. Key Takeaways: Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task. Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map. Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models. Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. 
It achieves this using only the final camera pose as a supervisory signal. Challenge: Seeing the World from Two Different Angles The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings. FG2: Matching Fine-Grained Features The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map. Here’s a breakdown of their innovative pipeline: Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment. Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view. 
Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose. Unprecedented Performance and Interpretability The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research. Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremenly valuable for building trust in safety-critical autonomous systems. “A Clearer Path” for Autonomous Navigation The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them. Check out the Paper. All credit for this research goes to the researchers of this project. 
  • Inside Mark Zuckerberg’s AI hiring spree

AI researchers have recently been asking themselves a version of the question, “Is that really Zuck?” As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new “superintelligence” AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit’s work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone. For those who do agree to hear his pitch (amazingly, not all of them do), Zuckerberg highlights the latitude they’ll have to make risky bets, the scale of Meta’s products, and the money he’s prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta’s headquarters, where I’m told the desks have already been rearranged for the incoming team.

Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I’ve covered before, hiring the best AI researcher is like hiring a star basketball player: there are very few of them, and you have to pay up. Case in point: Zuckerberg basically just paid 14 Instagrams to hire away Scale AI CEO Alexandr Wang. It’s easily the most expensive hire of all time, dwarfing the billions that Google spent to rehire Noam Shazeer and his core team from Character.AI (a deal Zuckerberg passed on). “Opportunities of this magnitude often come at a cost,” Wang wrote in his note to employees this week. “In this instance, that cost is my departure.”

Zuckerberg’s recruiting spree is already starting to rattle his competitors. The day before his offer deadline for some senior OpenAI employees, Sam Altman dropped an essay proclaiming that “before anything else, we are a superintelligence research company.” And after Zuckerberg tried to hire DeepMind CTO Koray Kavukcuoglu, he was given a larger SVP title and now reports directly to Google CEO Sundar Pichai.
I expect Wang to have the title of “chief AI officer” at Meta when the new lab is announced. Jack Rae, a principal researcher from DeepMind who has signed on, will lead pre-training. Meta certainly needs a reset. According to my sources, Llama has fallen so far behind that Meta’s product teams have recently discussed using AI models from other companies (although that is highly unlikely to happen). Meta’s internal coding tool for engineers, however, is already using Claude. While Meta’s existing AI researchers have good reason to be looking over their shoulders, Zuckerberg’s $14.3 billion investment in Scale is making many longtime employees, or Scaliens, quite wealthy. They were popping champagne in the office this morning. Then, Wang held his last all-hands meeting to say goodbye and cried. He didn’t mention what he would be doing at Meta. I expect his new team will be unveiled within the next few weeks after Zuckerberg gets a critical number of members to officially sign on.

Tim Cook. Getty Images / The Verge

Apple’s AI problem

Apple is accustomed to being on top of the tech industry, and for good reason: the company has enjoyed a nearly unrivaled run of dominance. After spending time at Apple HQ this week for WWDC, I’m not sure that its leaders appreciate the meteorite that is heading their way. The hubris they display suggests they don’t understand how AI is fundamentally changing how people use and build software.

Heading into the keynote on Monday, everyone knew not to expect the revamped Siri that had been promised the previous year. Apple, to its credit, acknowledged that it dropped the ball there, and it sounds like a large language model rebuild of Siri is very much underway and coming in 2026.

The AI industry moves much faster than Apple’s release schedule, though. By the time Siri is perhaps good enough to keep pace, it will have to contend with the lock-in that OpenAI and others are building through their memory features.
Apple and OpenAI are currently partners, but both companies want to ultimately control the interface for interacting with AI, which puts them on a collision course. Apple’s decision to let developers use its own, on-device foundational models for free in their apps sounds strategically smart, but unfortunately, the models look far from leading. Apple ran its own benchmarks, which aren’t impressive, and has confirmed a measly context window of 4,096 tokens. It’s also saying that the models will be updated alongside its operating systems — a snail’s pace compared to how quickly AI companies move. I’d be surprised if any serious developers use these Apple models, although I can see them being helpful to indie devs who are just getting started and don’t want to spend on the leading cloud models. I don’t think most people care about the privacy angle that Apple is claiming as a differentiator; they are already sharing their darkest secrets with ChatGPT and other assistants.

Some of the new Apple Intelligence features I demoed this week were impressive, such as live language translation for calls. Mostly, I came away with the impression that the company is heavily leaning on its ChatGPT partnership as a stopgap until Apple Intelligence and Siri are both where they need to be.

AI probably isn’t a near-term risk to Apple’s business. No one has shipped anything close to the contextually aware Siri that was demoed at last year’s WWDC. People will continue to buy Apple hardware for a long time, even after Sam Altman and Jony Ive announce their first AI device for ChatGPT next year. AR glasses aren’t going mainstream anytime soon either, although we can expect to see more eyewear from Meta, Google, and Snap over the coming year. In aggregate, these AI-powered devices could begin to siphon away engagement from the iPhone, but I don’t see people fully replacing their smartphones for a long time.
The bigger question after this week is whether Apple has what it takes to rise to the occasion and culturally reset itself for the AI era. I would have loved to hear Tim Cook address this issue directly, but the only interview he did for WWDC was a cover story in Variety about the company’s new F1 movie.

Elsewhere

AI agents are coming. I recently caught up with Databricks CEO Ali Ghodsi ahead of his company’s annual developer conference this week in San Francisco. Given Databricks’ position, he has a unique, bird’s-eye view of where things are headed for AI. He doesn’t envision a near-term future where AI agents completely automate real-world tasks, but he does predict a wave of startups over the next year that will come close to completing actions in areas such as travel booking. He thinks humans will need (and want) to approve what an agent does before it goes off and completes a task. “We have most of the airplanes flying automated, and we still want pilots in there.”

Buyouts are the new normal at Google. That much is clear after this week’s rollout of the “voluntary exit program” in core engineering, the Search organization, and some other divisions. In his internal memo, Search SVP Nick Fox was clear that management thinks buyouts have been successful in other parts of the company that have tried them. In a separate memo I saw, engineering exec Jen Fitzpatrick called the buyouts an “opportunity to create internal mobility and fresh growth opportunities.” Google appears to be attempting a cultural reset, which will be a challenging task for a company of its size. We’ll see if it can pull it off.

Evan Spiegel wants help with AR glasses. I doubt that his announcement that consumer glasses are coming next year was solely aimed at AR developers. Telegraphing the plan and announcing that Snap has spent $3 billion on hardware to date feels more aimed at potential partners that want to make a bigger glasses play, such as Google.
A strategic investment could help insulate Snap from the pain of the stock market. A full acquisition may not be off the table, either. When he was recently asked if he’d be open to a sale, Spiegel didn’t shut it down like he always has, but instead said he’d “consider anything” that helps the company “create the next computing platform.”
  • The Download: gambling with humanity’s future, and the FDA under Trump

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Tech billionaires are making a risky bet with humanity’s future

    Sam Altman, Jeff Bezos, Elon Musk, and others may have slightly different goals, but their grand visions for the next decade and beyond are remarkably similar. They include aligning AI with the interests of humanity; creating an artificial superintelligence that will solve all the world’s most pressing problems; merging with that superintelligence to achieve immortality; establishing a permanent, self-sustaining colony on Mars; and, ultimately, spreading out across the cosmos.

    Three features play a central role in powering these visions, says Adam Becker, a science writer and astrophysicist: an unshakable certainty that technology can solve any problem, a belief in the necessity of perpetual growth, and a quasi-religious obsession with transcending our physical and biological limits.

    In his timely new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, Becker reveals how these fantastical visions conceal a darker agenda. Read the full story.

    —Bryan Gardiner

    This story is from the next print edition of MIT Technology Review, which explores power—who has it, and who wants it. It’s set to go live on Wednesday June 25, so subscribe & save 25% to read it and get a copy of the issue when it lands!

    Here’s what food and drug regulation might look like under the Trump administration

    Earlier this week, two new leaders of the US Food and Drug Administration published a list of priorities for the agency. Both Marty Makary and Vinay Prasad are controversial figures in the science community. They were generally highly respected academics until the covid pandemic, when their contrarian opinions on masking, vaccines, and lockdowns turned many of their colleagues off them.

    Given all this, along with recent mass firings of FDA employees, lots of people were pretty anxious to see what this list might include—and what we might expect the future of food and drug regulation in the US to look like. So let’s dive into the pair’s plans for new investigations, speedy approvals, and the “unleashing” of AI.

    —Jessica Hamzelou

    This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

    1 NASA is investigating leaks on the ISS
    It’s postponed launching private astronauts to the station while it evaluates.
    + Its core component has been springing small air leaks for months.
    + Meanwhile, this Chinese probe is en route to a near-Earth asteroid.

    2 Undocumented migrants are using social media to warn of ICE raids
    The DIY networks are anonymously reporting police presences across LA.
    + Platforms’ relationships with protest activism have changed drastically.

    3 Google’s AI Overviews is hallucinating about the fatal Air India crash
    It incorrectly stated that it involved an Airbus plane, not a Boeing 787.
    + Why Google’s AI Overviews gets things wrong.

    4 Chinese engineers are sneaking suitcases of hard drives into the country
    To covertly train advanced AI models.
    + The US is cracking down on Huawei’s ability to produce chips.
    + What the US-China AI race overlooks.

    5 The National Hurricane Center is joining forces with DeepMind
    It’s the first time the center has used AI to predict nature’s worst storms.
    + Here’s what we know about hurricanes and climate change.

    6 OpenAI is working on a product with toymaker Mattel
    AI-powered Barbies?!
    + Nothing is safe from the creep of AI, not even playtime.
    + OpenAI has ambitions to reach billions of users.

    7 Chatbots posing as licensed therapists may be breaking the law
    Digital rights organizations have filed a complaint to the FTC.
    + How do you teach an AI model to give therapy?

    8 Major companies are abandoning their climate commitments
    But some experts argue this may not be entirely bad.
    + Google, Amazon and the problem with Big Tech’s climate claims.

    9 Vibe coding is shaking up software engineering
    Even though AI-generated code is inherently unreliable.
    + What is vibe coding, exactly?

    10 TikTok really loves hotdogs
    And who can blame it?

    Quote of the day

    “It kind of jams two years of work into two months.”

    —Andrew Butcher, president of the Maine Connectivity Authority, tells Ars Technica why it’s so difficult to meet the Trump administration’s new plans to increase broadband access in certain states.

    One more thing

    The surprising barrier that keeps us from building the housing we need

    It’s a tough time to try and buy a home in America. From the beginning of the pandemic to early 2024, US home prices rose by 47%. In large swaths of the country, buying a home is no longer a possibility even for those with middle-class incomes. For many, that marks the end of an American dream built around owning a house. Over the same time, rents have gone up 26%.

    The reason for the current rise in the cost of housing is clear to most economists: a lack of supply. Simply put, we don’t build enough houses and apartments, and we haven’t for years.

    But the reality is that even if we ease the endless permitting delays and begin cutting red tape, we will still be faced with a distressing fact: The construction industry is not very efficient when it comes to building stuff. Read the full story.

    —David Rotman

    We can still have nice things

    A place for comfort, fun and distraction to brighten up your day.+ If you’re one of the unlucky people who has triskaidekaphobia, look away now.+ 15-year old Nicholas is preparing to head from his home in the UK to Japan to become a professional sumo wrestler.+ Earlier this week, London played host to 20,000 women in bald caps. But why?+ Why do dads watch TV standing up? I need to know.
    #download #gambling #with #humanitys #future
    The Download: gambling with humanity’s future, and the FDA under Trump
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Tech billionaires are making a risky bet with humanity’s future

Sam Altman, Jeff Bezos, Elon Musk, and others may have slightly different goals, but their grand visions for the next decade and beyond are remarkably similar.

They include aligning AI with the interests of humanity; creating an artificial superintelligence that will solve all the world’s most pressing problems; merging with that superintelligence to achieve immortality (or something close to it); establishing a permanent, self-sustaining colony on Mars; and, ultimately, spreading out across the cosmos.

Three features play a central role in powering these visions, says Adam Becker, a science writer and astrophysicist: an unshakable certainty that technology can solve any problem, a belief in the necessity of perpetual growth, and a quasi-religious obsession with transcending our physical and biological limits.

In his timely new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, Becker reveals how these fantastical visions conceal a darker agenda. Read the full story.

—Bryan Gardiner

This story is from the next print edition of MIT Technology Review, which explores power—who has it, and who wants it. It’s set to go live on Wednesday June 25, so subscribe & save 25% to read it and get a copy of the issue when it lands!

Here’s what food and drug regulation might look like under the Trump administration

Earlier this week, two new leaders of the US Food and Drug Administration published a list of priorities for the agency. Both Marty Makary and Vinay Prasad are controversial figures in the science community. They were generally highly respected academics until the covid pandemic, when their contrarian opinions on masking, vaccines, and lockdowns turned many of their colleagues against them.

Given all this, along with recent mass firings of FDA employees, lots of people were pretty anxious to see what this list might include—and what we might expect the future of food and drug regulation in the US to look like. So let’s dive into the pair’s plans for new investigations, speedy approvals, and the “unleashing” of AI.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 NASA is investigating leaks on the ISS
It’s postponed launching private astronauts to the station while it evaluates. (WP $)
+ Its core component has been springing small air leaks for months. (Reuters)
+ Meanwhile, this Chinese probe is en route to a near-Earth asteroid. (Wired $)

2 Undocumented migrants are using social media to warn of ICE raids
The DIY networks are anonymously reporting police presences across LA. (Wired $)
+ Platforms’ relationships with protest activism have changed drastically. (NY Mag $)

3 Google’s AI Overviews is hallucinating about the fatal Air India crash
It incorrectly stated that it involved an Airbus plane, not a Boeing 787. (Ars Technica)
+ Why Google’s AI Overviews gets things wrong. (MIT Technology Review)

4 Chinese engineers are sneaking suitcases of hard drives into the country
To covertly train advanced AI models. (WSJ $)
+ The US is cracking down on Huawei’s ability to produce chips. (Bloomberg $)
+ What the US-China AI race overlooks. (Rest of World)

5 The National Hurricane Center is joining forces with DeepMind
It’s the first time the center has used AI to predict nature’s worst storms. (NYT $)
+ Here’s what we know about hurricanes and climate change. (MIT Technology Review)

6 OpenAI is working on a product with toymaker Mattel
AI-powered Barbies?! (FT $)
+ Nothing is safe from the creep of AI, not even playtime. (LA Times $)
+ OpenAI has ambitions to reach billions of users. (Bloomberg $)

7 Chatbots posing as licensed therapists may be breaking the law
Digital rights organizations have filed a complaint to the FTC. (404 Media)
+ How do you teach an AI model to give therapy? (MIT Technology Review)

8 Major companies are abandoning their climate commitments
But some experts argue this may not be entirely bad. (Bloomberg $)
+ Google, Amazon and the problem with Big Tech’s climate claims. (MIT Technology Review)

9 Vibe coding is shaking up software engineering
Even though AI-generated code is inherently unreliable. (Wired $)
+ What is vibe coding, exactly? (MIT Technology Review)

10 TikTok really loves hotdogs
And who can blame it? (Insider $)

Quote of the day

“It kind of jams two years of work into two months.”

—Andrew Butcher, president of the Maine Connectivity Authority, tells Ars Technica why it’s so difficult to meet the Trump administration’s new plans to increase broadband access in certain states.

One more thing

The surprising barrier that keeps us from building the housing we need

It’s a tough time to try to buy a home in America. From the beginning of the pandemic to early 2024, US home prices rose by 47%. In large swaths of the country, buying a home is no longer a possibility even for those with middle-class incomes. For many, that marks the end of an American dream built around owning a house. Over the same time, rents have gone up 26%.

The reason for the current rise in the cost of housing is clear to most economists: a lack of supply. Simply put, we don’t build enough houses and apartments, and we haven’t for years.

But the reality is that even if we ease the endless permitting delays and begin cutting red tape, we will still be faced with a distressing fact: The construction industry is not very efficient when it comes to building stuff. Read the full story.

—David Rotman

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ If you’re one of the unlucky people who has triskaidekaphobia, look away now.
+ 15-year-old Nicholas is preparing to head from his home in the UK to Japan to become a professional sumo wrestler.
+ Earlier this week, London played host to 20,000 women in bald caps. But why? ($)
+ Why do dads watch TV standing up? I need to know.