• NVIDIA CEO Drops the Blueprint for Europe’s AI Boom

    At GTC Paris — held alongside VivaTech, Europe’s largest tech event — NVIDIA founder and CEO Jensen Huang delivered a clear message: Europe isn’t just adopting AI — it’s building it.
    “We now have a new industry, an AI industry, and it’s now part of the new infrastructure, called intelligence infrastructure, that will be used by every country, every society,” Huang said, addressing an audience gathered online and at the iconic Dôme de Paris.
    From exponential inference growth to quantum breakthroughs, and from infrastructure to industry, agentic AI to robotics, Huang outlined how the region is laying the groundwork for an AI-powered future.

    A New Industrial Revolution
    At the heart of this transformation, Huang explained, are systems like GB200 NVL72 — “one giant GPU” and NVIDIA’s most powerful AI platform yet — now in full production and powering everything from sovereign models to quantum computing.
    “This machine was designed to be a thinking machine, a thinking machine, in the sense that it reasons, it plans, it spends a lot of time talking to itself,” Huang said, walking the audience through the size and scale of these machines and their performance.
    At GTC Paris, Huang showed audience members the innards of some of NVIDIA’s latest hardware.
    There’s more coming, with Huang saying NVIDIA’s partners are now producing 1,000 GB200 systems a week, “and this is just the beginning.” He walked the audience through the range of available systems, from the tiny NVIDIA DGX Spark to rack-mounted RTX PRO Servers.
    Huang explained that NVIDIA is working to help countries use technologies like these to build both AI infrastructure — services built for third parties to use and innovate on — and AI factories, which companies build for their own use, to generate revenue.
    NVIDIA is partnering with European governments, telcos and cloud providers to deploy NVIDIA technologies across the region. NVIDIA is also expanding its network of technology centers across Europe — including new hubs in Finland, Germany, Spain, Italy and the U.K. — to accelerate skills development and quantum growth.
    Quantum Meets Classical
    Europe’s quantum ambitions just got a boost.
    The NVIDIA CUDA-Q platform is live on Denmark’s Gefion supercomputer, opening new possibilities for hybrid AI and quantum engineering. In addition, Huang announced that CUDA-Q is now available on NVIDIA Grace Blackwell systems.
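    The article doesn’t include code, but a minimal example gives a feel for the hybrid programming model CUDA-Q exposes. This is a generic Bell-state sketch based on NVIDIA’s public CUDA-Q Python API, not anything shown at GTC Paris; the GPU-accelerated "nvidia" simulator target assumes a CUDA-capable machine, and the default CPU target works otherwise.

```python
import cudaq

# Optional: run the simulation on an NVIDIA GPU if one is available.
# cudaq.set_target("nvidia")

@cudaq.kernel
def bell():
    # Allocate two qubits and entangle them into a Bell pair.
    qubits = cudaq.qvector(2)
    h(qubits[0])                   # put the first qubit in superposition
    x.ctrl(qubits[0], qubits[1])   # controlled-NOT entangles the pair
    mz(qubits)                     # measure both qubits

# Sample the kernel; counts should concentrate on "00" and "11".
counts = cudaq.sample(bell, shots_count=1000)
print(counts)
```

    A hybrid quantum-classical workflow on a system like Gefion would build on the same entry points, wrapping classical pre- and post-processing around kernels like this one.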
    Across the continent, NVIDIA is partnering with supercomputing centers and quantum hardware builders to advance hybrid quantum-AI research and accelerate quantum error correction.
    “Quantum computing is reaching an inflection point,” Huang said. “We are within reach of being able to apply quantum computing, quantum classical computing, in areas that can solve some interesting problems in the coming years.”
    Sovereign Models, Smarter Agents
    European developers want more control over their models. Enter NVIDIA Nemotron, designed to help build large language models tuned to local needs.
    “And so now you know that you have access to an enhanced open model that is still open, that is top of the leader chart,” Huang said.
    These models will be coming to Perplexity, a reasoning search engine, enabling secure, multilingual AI deployment across Europe.
    “You can now ask and get questions answered in the language, in the culture, in the sensibility of your country,” Huang said.
    Huang explained how NVIDIA is helping countries across Europe build AI infrastructure.
    Every company will build its own agents, Huang said. To help create those agents, Huang introduced a suite of agentic AI blueprints, including an Agentic AI Safety blueprint for enterprises and governments.
    The new NVIDIA NeMo Agent toolkit and NVIDIA AI Blueprint for building data flywheels further accelerate the development of safe, high-performing AI agents.
    To help deploy these agents, NVIDIA is partnering with European governments, telcos and cloud providers to deploy the DGX Cloud Lepton platform across the region, providing instant access to accelerated computing capacity.
    “One model architecture, one deployment, and you can run it anywhere,” Huang said, adding that Lepton is now integrated with Hugging Face, giving developers direct access to global compute.
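    As a hedged illustration only: the article doesn’t describe the integration’s API, but Hugging Face’s existing huggingface_hub client shows the developer-facing shape of “one deployment, run anywhere” inference. The model id below is illustrative, and how Lepton capacity is routed behind a Hugging Face endpoint is an assumption not detailed here.

```python
from huggingface_hub import InferenceClient

# Illustrative model id; any chat-capable hosted endpoint works the same way.
client = InferenceClient(model="meta-llama/Llama-3.1-8B-Instruct")

# One client call; the serving capacity behind it (region, provider) is an
# infrastructure concern, not a code change.
response = client.chat_completion(
    messages=[{"role": "user", "content": "Summarize GTC Paris in one sentence."}],
    max_tokens=100,
)
print(response.choices[0].message.content)
```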
    The Industrial Cloud Goes Live
    AI isn’t just virtual. It’s powering physical systems, too, sparking a new industrial revolution.
    “We’re working on industrial AI with one company after another,” Huang said, describing work to build digital twins based on the NVIDIA Omniverse platform with companies across the continent.
    Huang explained that everything he showed during his keynote was “computer simulation, not animation” and that it looks beautiful because “it turns out the world is beautiful, and it turns out math is beautiful.”
    To further this work, Huang announced NVIDIA is launching the world’s first industrial AI cloud — to be built in Germany — to help Europe’s manufacturers simulate, automate and optimize at scale.
    “Soon, everything that moves will be robotic,” Huang said. “And the car is the next one.”
    NVIDIA DRIVE, NVIDIA’s full-stack AV platform, is now in production to accelerate the large-scale deployment of safe, intelligent transportation.
    And to show what’s coming next, Huang was joined on stage by Grek, a pint-sized robot, as Huang talked about how NVIDIA partnered with DeepMind and Disney to build Newton, the world’s most advanced physics training engine for robotics.
    The Next Wave
    The next wave of AI has begun — and it’s exponential, Huang explained.
    “We have physical robots, and we have information robots. We call them agents,” Huang said. “The technology necessary to teach a robot to manipulate, to simulate — and of course, the manifestation of an incredible robot — is now right in front of us.”
    This new era of AI is being driven by a surge in inference workloads. “The number of people using inference has gone from 8 million to 800 million — 100x in just a couple of years,” Huang said.
    To meet this demand, Huang emphasized the need for a new kind of computer: “We need a special computer designed for thinking, designed for reasoning. And that’s what Blackwell is — a thinking machine.”
    Huang, with Grek at his side, explained how AI is driving advancements in robotics.
    These Blackwell-powered systems will live in a new class of data centers — AI factories — built to generate tokens, the raw material of modern intelligence.
    “These AI factories are going to generate tokens,” Huang said, turning to Grek with a smile. “And these tokens are going to become your food, little Grek.”
    With that, the keynote closed on a bold vision: a future powered by sovereign infrastructure, agentic AI, robotics — and exponential inference — all built in partnership with Europe.
    Watch the NVIDIA GTC Paris keynote from Huang at VivaTech and explore GTC Paris sessions.
  • Fortifying retail: how UK brands can defend against cyber breaches

    The recent wave of cyber attacks targeting UK retailers has been a moment of reckoning for the entire retail industry. As someone who helped support the response to one of the largest retail breaches in history, I find this news hits close to home.
    The National Cyber Security Centre’s (NCSC) call to strengthen IT support protocols reinforces a hard truth: cybersecurity is no longer just a technical or operational issue. It’s a business issue that directly affects revenue, customer trust, and brand reputation.
    Retailers today are navigating an increasingly complex threat landscape, while also managing a vast user base that needs to stay informed and secure. The recent attacks don’t represent a failure, but an opportunity - an inflection point to invest in stronger visibility, continuous monitoring and a culture of shared responsibility that meets the realities of modern retail.

    We know that the cyber groups responsible for the recent retail hacks used sophisticated social engineering techniques, such as impersonating employees to deceive IT help desks into resetting passwords and providing information, thereby gaining unauthorised access to internal systems.
    Employees are increasingly a target, and retailers employ some of the largest, most diverse workforces, creating countless touchpoints for breaches. In these organisations, a cybersecurity-first culture is vital to combating threats: one in which employees are aware of these types of attacks and understand how to report them if they are contacted.
    In order to establish a cybersecurity-first culture, employees must be empowered to recognise and respond to threats, not just avoid them. This can be done through simulation training and threat assessments - showcasing real life examples of threats and brainstorming possible solutions to control and prevent further and future damage.
    This allows security teams to focus on strategy instead of constant firefighting, while leadership support - through budget, tools, and tone - reinforces its importance at every level.

    In addition to support workers, vendors also pose a significant attack path for bad actors. According to data from Elastic Path, 42% of retailers admit that legacy technology could be leaving them exposed to cyber risks. And with the accelerating pace of innovation, modern cyber threats are not only more complex, but often enter through unexpected avenues, like third-party vendors. Research from Vanta shows 46% of organisations say that a vendor of theirs has experienced a data breach since they started working together.
    The M&S breach is a case in point, with it being reported that attackers exploited a vulnerability in a contractor’s systems, not the retailer’s own. This underscores that visibility must extend beyond your perimeter to encompass the entire digital supply chain, in real time.
    Threats don’t wait for your quarterly review or annual audit. If you're only checking your controls or vendor status once a year, you're already behind. This means real-time visibility is now foundational to cyber defence. We need to know when something changes the moment it happens. This can be done through continuous monitoring, both for the technical controls and the relationships that introduce risk into your environment.
    We also need to rethink the way we resource and prioritise that visibility. Manual processes don’t scale with the complexity of modern infrastructure. Automation and tooling can help surface the right signals from the noise - whether it’s misconfigurations, access drift, or suspicious vendor behaviour.
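    None of this comes from the article, but as an illustrative sketch of the idea, a continuous control-drift check can be as simple as periodically diffing live control state against an approved baseline. All file names, the polling interval, and the collector are hypothetical stand-ins for real cloud, identity, or vendor-risk APIs.

```python
import json
import time
from datetime import datetime, timezone

BASELINE_PATH = "baseline_controls.json"  # hypothetical approved control snapshot
CURRENT_PATH = "current_controls.json"    # hypothetical live export from a collector


def load_state(path: str) -> dict:
    with open(path) as f:
        return json.load(f)


def diff_controls(baseline: dict, current: dict) -> dict:
    """Return controls whose live state has drifted from the approved baseline."""
    return {
        name: {"expected": expected, "actual": current.get(name)}
        for name, expected in baseline.items()
        if current.get(name) != expected
    }


def monitor(interval_seconds: int = 300) -> None:
    baseline = load_state(BASELINE_PATH)
    while True:
        drift = diff_controls(baseline, load_state(CURRENT_PATH))
        if drift:
            # In a real deployment this would page on-call or open a ticket
            # rather than print; the point is the check runs continuously.
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"[{stamp}] control drift detected: {drift}")
        time.sleep(interval_seconds)


if __name__ == "__main__":
    monitor()
```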

    The best-case scenario is that security measures are embedded into all digital architecture, utilising a few security ‘must haves’ such as secure coding, continuous monitoring, and regular testing and improvement. Retailers who want to get proactive about breaches following the events of the last few weeks can follow this action plan to get started:
    First, awareness - have your security leadership send a message out to managers of help desks and support teams to make sure they are aware of the recent attacks on retailers, and are in a position to inform teams of what to look out for.
    Then, investigate - pinpoint the attack path used on other retailers to make sure you have a full understanding of the risk to your organisation.
    After that, assess - conduct a threat assessment to identify what could go wrong, or how this attack path could be used in your organisation.
    The final step is to identify - figure out the highest risk gaps in your organisation, and the remediation steps to address each one.

    Strong cybersecurity doesn’t come from quick fixes - it takes time, leadership buy-in, and a shift in mindset across the organisation. My advice to security teams is simple: speak in outcomes. Frame cyber risk as business risk, because that’s what it is. The retailers that have fallen victim to recent attacks are facing huge financial losses, which makes this not just an IT issue - it’s a boardroom issue.
    Customers are paying attention. They want to trust the brands they buy from, and that trust is built on transparency and preparation. The recent retail attacks aren’t a reason to panic - they’re a reason to reset, evaluate current state risks, and fully understand the potential impacts of what is happening elsewhere. This is the moment to invest in your infrastructure, empower your teams, and embed security into your operations. The organisations that do this now won’t just be safer - they’ll be more competitive, more resilient, and better positioned for whatever comes next.
    Jadee Hanson is the Chief Information Security Officer at Vanta

    Read more about cyber security in retail
    Harrods becomes latest UK retailer to fall victim to cyber attack
    Retail cyber crime spree a ‘wake-up call’, says NCSC CEO
    Retail cyber attacks hit food distributor Peter Green Chilled
  • Meta officially ‘acqui-hires’ Scale AI — will it draw regulator scrutiny?

    Meta is looking to up its weakening AI game with a key talent grab.

    Following days of speculation, the social media giant has confirmed that Scale AI’s founder and CEO, Alexandr Wang, is joining Meta to work on its AI efforts.

    Meta will invest $14.3 billion in Scale AI as part of the deal, and will have a 49% stake in the AI startup, which specializes in data labeling and model evaluation services. Other key Scale employees will also move over to Meta, while CSO Jason Droege will step in as Scale’s interim CEO.

    This move comes as the Mark Zuckerberg-led company goes all-in on building a new research lab focused on “superintelligence,” the next step beyond artificial general intelligence (AGI).

    The arrangement also reflects a growing trend in big tech, where industry giants are buying companies without really buying them — what’s increasingly being referred to as “acqui-hiring.” It involves recruiting key personnel from a company, licensing its technology, and selling its products, but leaving it as a private entity.

    “This is fundamentally a massive ‘acqui-hire’ play disguised as a strategic investment,” said Wyatt Mayham, lead AI consultant at Northwest AI Consulting. “While Meta gets Scale’s data infrastructure, the real prize is Wang joining Meta to lead their superintelligence lab. At the $14.3 billion price tag, this might be the most expensive individual talent acquisition in tech history.”

    Closing gaps with competitors

    Meta has struggled to keep up with OpenAI, Anthropic, and other key competitors in the AI race, recently even delaying the launch of its new flagship model, Behemoth, purportedly due to internal concerns about its performance. It has also seen the departure of several of its top researchers.

     “It’s not really a secret at this point that Meta’s Llama 4 models have had significant performance issues,” Mayham said. “Zuck is essentially betting that Wang’s track record building AI infrastructure can solve Meta’s alignment and model quality problems faster than internal development.” And, he added, Scale’s enterprise-grade human feedback loops are exactly what Meta’s Llama models need to compete with ChatGPT and Claude on reliability and task-following.

    Data quality, a key focus for Wang, is a big factor in solving those performance problems. He wrote in a note to Scale employees on Thursday, later posted on X, that when he founded Scale AI in 2016 amidst some of the early AI breakthroughs, “it was clear even then that data was the lifeblood of AI systems, and that was the inspiration behind starting Scale.”

    But despite Meta’s huge investment, Scale AI is underscoring its commitment to sovereignty: “Scale remains an independent leader in AI, committed to providing industry-leading AI solutions and safeguarding customer data,” the company wrote in a blog post. “Scale will continue to partner with leading AI labs, multinational enterprises, and governments to deliver expert data and technology solutions through every phase of AI’s evolution.”

    Allowing big tech to side-step notification

    But while it’s only just been inked, the high-profile deal is already raising some eyebrows. According to experts, arrangements like these allow tech companies to acquire top talent and key technologies in a side-stepping manner, thus avoiding regulatory notification requirements.

    The US Federal Trade Commission (FTC) requires mergers and acquisitions totaling more than $126 million be reported in advance. Licensing deals or the mass hiring-away of a company’s employees don’t have this requirement. This allows companies to move more quickly, as they don’t have to undergo the lengthy federal review process.

    Microsoft’s deal with Inflection AI is probably one of the highest-profile examples of the “acqui-hiring” trend. In March 2024, the tech giant paid the startup $650 million in licensing fees and hired much of its team, including co-founders Mustafa Suleyman (now CEO of Microsoft AI) and Karén Simonyan (chief scientist of Microsoft AI).

    Similarly, last year Amazon hired more than 50% of Adept AI’s key personnel, including its CEO, to focus on AGI. Google also inked a licensing agreement with Character AI and hired a majority of its founders and researchers.

    However, regulators have caught on, with the FTC launching inquiries into both the Microsoft-Inflection and Amazon-Adept deals, and the US Justice Department (DOJ) analyzing Google-Character AI.

    Reflecting ‘desperation’ in the AI industry

    Meta’s decision to go forward with this arrangement anyway, despite that dicey backdrop, seems to indicate how anxious the company is to keep up in the AI race.

    “The most interesting piece of this all is the timing,” said Mayham. “It reflects broader industry desperation. Tech giants are increasingly buying parts of promising AI startups to secure key talent without acquiring full companies, following similar patterns with Microsoft-Inflection and Google-Character AI.”

    However, the regulatory risks are “real but nuanced,” he noted. Meta’s acquisition could face scrutiny from antitrust regulators, particularly as the company is involved in an ongoing FTC lawsuit over its Instagram and WhatsApp acquisitions. While the 49% ownership position appears designed to avoid triggering automatic thresholds, US regulatory bodies like the FTC and DOJ can review minority stake acquisitions under the Clayton Antitrust Act if they seem to threaten competition.

    Perhaps more importantly, Meta is not considered a leader in AGI development and is trailing OpenAI, Anthropic, and Google, meaning regulators may not consider the deal all that concerning (yet).

    All told, the arrangement certainly signals Meta’s recognition that the AI race has shifted from a compute and model size competition to a data quality and alignment battle, Mayham noted.

    “I think the [gist] of this is that Zuck’s biggest bet is that talent and data infrastructure matter more than raw compute power in the AI race,” he said. “The regulatory risk is manageable given Meta’s trailing position, but the acqui-hire premium shows how expensive top AI talent has become.”
    #meta #officially #acquihires #scale #will
    Meta officially ‘acqui-hires’ Scale AI — will it draw regulator scrutiny?
    Meta is looking to up its weakening AI game with a key talent grab. Following days of speculation, the social media giant has confirmed that Scale AI’s founder and CEO, Alexandr Wang, is joining Meta to work on its AI efforts. Meta will invest billion in Scale AI as part of the deal, and will have a 49% stake in the AI startup, which specializes in data labeling and model evaluation services. Other key Scale employees will also move over to Meta, while CSO Jason Droege will step in as Scale’s interim CEO. This move comes as the Mark Zuckerberg-led company goes all-in on building a new research lab focused on “superintelligence,” the next step beyond artificial general intelligence. The arrangement also reflects a growing trend in big tech, where industry giants are buying companies without really buying them — what’s increasingly being referred to as “acqui-hiring.” It involves recruiting key personnel from a company, licensing its technology, and selling its products, but leaving it as a private entity. “This is fundamentally a massive ‘acqui-hire’ play disguised as a strategic investment,” said Wyatt Mayham, lead AI consultant at Northwest AI Consulting. “While Meta gets Scale’s data infrastructure, the real prize is Wang joining Meta to lead their superintelligence lab. At the billion price tag, this might be the most expensive individual talent acquisition in tech history.” Closing gaps with competitors Meta has struggled to keep up with OpenAI, Anthropic, and other key competitors in the AI race, recently even delaying the launch of its new flagship model, Behemoth, purportedly due to internal concerns about its performance. It has also seen the departure of several of its top researchers.  “It’s not really a secret at this point that Meta’s Llama 4 models have had significant performance issues,” Mayham said. “Zuck is essentially betting that Wang’s track record building AI infrastructure can solve Meta’s alignment and model quality problems faster than internal development.” And, he added, Scale’s enterprise-grade human feedback loops are exactly what Meta’s Llama models need to compete with ChatGPT and Claude on reliability and task-following. Data quality, a key focus for Wang, is a big factor in solving those performance problems. He wrote in a note to Scale employees on Thursday, later posted on X, that when he founded Scale AI in 2016 amidst some of the early AI breakthroughs, “it was clear even then that data was the lifeblood of AI systems, and that was the inspiration behind starting Scale.” But despite Meta’s huge investment, Scale AI is underscoring its commitment to sovereignty: “Scale remains an independent leader in AI, committed to providing industry-leading AI solutions and safeguarding customer data,” the company wrote in a blog post. “Scale will continue to partner with leading AI labs, multinational enterprises, and governments to deliver expert data and technology solutions through every phase of AI’s evolution.” Allowing big tech to side-step notification But while it’s only just been inked, the high-profile deal is already raising some eyebrows. According to experts, arrangements like these allow tech companies to acquire top talent and key technologies in a side-stepping manner, thus avoiding regulatory notification requirements. The US Federal Trade Commissionrequires mergers and acquisitions totaling more than million be reported in advance. Licensing deals or the mass hiring-away of a company’s employees don’t have this requirement. 
This allows companies to move more quickly, as they don’t have to undergo the lengthy federal review process. Microsoft’s deal with Inflection AI is probably one of the highest-profile examples of the “acqui-hiring” trend. In March 2024, the tech giant paid the startup million in licensing fees and hired much of its team, including co-founders Mustafa Suleymanand Karén Simonyan. Similarly, last year Amazon hired more than 50% of Adept AI’s key personnel, including its CEO, to focus on AGI. Google also inked a licensing agreement with Character AI and hired a majority of its founders and researchers. However, regulators have caught on, with the FTC launching inquiries into both the Microsoft-Inflection and Amazon-Adept deals, and the US Justice Departmentanalyzing Google-Character AI. Reflecting ‘desperation’ in the AI industry Meta’s decision to go forward with this arrangement anyway, despite that dicey backdrop, seems to indicate how anxious the company is to keep up in the AI race. “The most interesting piece of this all is the timing,” said Mayham. “It reflects broader industry desperation. Tech giants are increasingly buying parts of promising AI startups to secure key talent without acquiring full companies, following similar patterns with Microsoft-Inflection and Google-Character AI.” However, the regulatory risks are “real but nuanced,” he noted. Meta’s acquisition could face scrutiny from antitrust regulators, particularly as the company is involved in an ongoing FTC lawsuit over its Instagram and WhatsApp acquisitions. While the 49% ownership position appears designed to avoid triggering automatic thresholds, US regulatory bodies like the FTC and DOJ can review minority stake acquisitions under the Clayton Antitrust Act if they seem to threaten competition. Perhaps more importantly, Meta is not considered a leader in AGI development and is trailing OpenAI, Anthropic, and Google, meaning regulators may not consider the deal all that concerning. All told, the arrangement certainly signals Meta’s recognition that the AI race has shifted from a compute and model size competition to a data quality and alignment battle, Mayham noted. “I think theof this is that Zuck’s biggest bet is that talent and data infrastructure matter more than raw compute power in the AI race,” he said. “The regulatory risk is manageable given Meta’s trailing position, but the acqui-hire premium shows how expensive top AI talent has become.” #meta #officially #acquihires #scale #will
    WWW.COMPUTERWORLD.COM
    Meta officially ‘acqui-hires’ Scale AI — will it draw regulator scrutiny?
    Meta is looking to up its weakening AI game with a key talent grab. Following days of speculation, the social media giant has confirmed that Scale AI’s founder and CEO, Alexandr Wang, is joining Meta to work on its AI efforts. Meta will invest $14.3 billion in Scale AI as part of the deal, and will have a 49% stake in the AI startup, which specializes in data labeling and model evaluation services. Other key Scale employees will also move over to Meta, while CSO Jason Droege will step in as Scale’s interim CEO. This move comes as the Mark Zuckerberg-led company goes all-in on building a new research lab focused on “superintelligence,” the next step beyond artificial general intelligence (AGI). The arrangement also reflects a growing trend in big tech, where industry giants are buying companies without really buying them — what’s increasingly being referred to as “acqui-hiring.” It involves recruiting key personnel from a company, licensing its technology, and selling its products, but leaving it as a private entity. “This is fundamentally a massive ‘acqui-hire’ play disguised as a strategic investment,” said Wyatt Mayham, lead AI consultant at Northwest AI Consulting. “While Meta gets Scale’s data infrastructure, the real prize is Wang joining Meta to lead their superintelligence lab. At the $14.3 billion price tag, this might be the most expensive individual talent acquisition in tech history.” Closing gaps with competitors Meta has struggled to keep up with OpenAI, Anthropic, and other key competitors in the AI race, recently even delaying the launch of its new flagship model, Behemoth, purportedly due to internal concerns about its performance. It has also seen the departure of several of its top researchers.  “It’s not really a secret at this point that Meta’s Llama 4 models have had significant performance issues,” Mayham said. “Zuck is essentially betting that Wang’s track record building AI infrastructure can solve Meta’s alignment and model quality problems faster than internal development.” And, he added, Scale’s enterprise-grade human feedback loops are exactly what Meta’s Llama models need to compete with ChatGPT and Claude on reliability and task-following. Data quality, a key focus for Wang, is a big factor in solving those performance problems. He wrote in a note to Scale employees on Thursday, later posted on X (formerly Twitter), that when he founded Scale AI in 2016 amidst some of the early AI breakthroughs, “it was clear even then that data was the lifeblood of AI systems, and that was the inspiration behind starting Scale.” But despite Meta’s huge investment, Scale AI is underscoring its commitment to sovereignty: “Scale remains an independent leader in AI, committed to providing industry-leading AI solutions and safeguarding customer data,” the company wrote in a blog post. “Scale will continue to partner with leading AI labs, multinational enterprises, and governments to deliver expert data and technology solutions through every phase of AI’s evolution.” Allowing big tech to side-step notification But while it’s only just been inked, the high-profile deal is already raising some eyebrows. According to experts, arrangements like these allow tech companies to acquire top talent and key technologies in a side-stepping manner, thus avoiding regulatory notification requirements. The US Federal Trade Commission (FTC) requires mergers and acquisitions totaling more than $126 million be reported in advance. 
Licensing deals or the mass hiring-away of a company’s employees don’t have this requirement. This allows companies to move more quickly, as they don’t have to undergo the lengthy federal review process. Microsoft’s deal with Inflection AI is probably one of the highest-profile examples of the “acqui-hiring” trend. In March 2024, the tech giant paid the startup $650 million in licensing fees and hired much of its team, including co-founders Mustafa Suleyman (now CEO of Microsoft AI) and Karén Simonyan (chief scientist of Microsoft AI). Similarly, last year Amazon hired more than 50% of Adept AI’s key personnel, including its CEO, to focus on AGI. Google also inked a licensing agreement with Character AI and hired a majority of its founders and researchers. However, regulators have caught on, with the FTC launching inquiries into both the Microsoft-Inflection and Amazon-Adept deals, and the US Justice Department (DOJ) analyzing Google-Character AI. Reflecting ‘desperation’ in the AI industry Meta’s decision to go forward with this arrangement anyway, despite that dicey backdrop, seems to indicate how anxious the company is to keep up in the AI race. “The most interesting piece of this all is the timing,” said Mayham. “It reflects broader industry desperation. Tech giants are increasingly buying parts of promising AI startups to secure key talent without acquiring full companies, following similar patterns with Microsoft-Inflection and Google-Character AI.” However, the regulatory risks are “real but nuanced,” he noted. Meta’s acquisition could face scrutiny from antitrust regulators, particularly as the company is involved in an ongoing FTC lawsuit over its Instagram and WhatsApp acquisitions. While the 49% ownership position appears designed to avoid triggering automatic thresholds, US regulatory bodies like the FTC and DOJ can review minority stake acquisitions under the Clayton Antitrust Act if they seem to threaten competition. Perhaps more importantly, Meta is not considered a leader in AGI development and is trailing OpenAI, Anthropic, and Google, meaning regulators may not consider the deal all that concerning (yet). All told, the arrangement certainly signals Meta’s recognition that the AI race has shifted from a compute and model size competition to a data quality and alignment battle, Mayham noted. “I think the [gist] of this is that Zuck’s biggest bet is that talent and data infrastructure matter more than raw compute power in the AI race,” he said. “The regulatory risk is manageable given Meta’s trailing position, but the acqui-hire premium shows how expensive top AI talent has become.”
  • European software sector at critical ‘inflection point,’ warns McKinsey

    The report, Europe’s Moonshot Moment, found that the continent has over 280 software companies generating more than €100 million in annual recurring revenue (ARR). These scaleups include the likes of Spotify, Revolut, Adyen, and Vinted.
    However, European software businesses that reach the €100 million ARR threshold take 15 years on average to get there. That’s five years longer than their US peers, the report found.
    Europe also lags in birthing software giants. While 5–10% of US firms reaching €100 million in ARR subsequently scale to €1 billion, fewer than 3% of their European peers reach that milestone.
    The report highlighted some of the reasons for this stalled growth: fragmented markets, conservative corporate norms, and a slower flow of late-stage capital relative to early-stage investment.
    Turning point?

    Despite the hurdles, the report’s authors are confident that all the ingredients for Europe’s success in software are now in place.
    “Europe already holds the essentials to create the world’s next generation of software champions: deep talent pools, vibrant founder networks, and a rapidly maturing capital base,” said Ruben Schaubroeck, senior partner at McKinsey.
    While Europe lost out to Silicon Valley firms like Google and Microsoft in the early internet era, emerging technologies like AI may offer a new opening for the region’s tech startups. Geopolitical shifts could also drive governments to invest in local tech ecosystems and rethink digital sovereignty, said the report.
    “There’s no denying that European tech has faced structural barriers, but we’re at a genuine inflection point,” Phill Robinson, CEO and co-founder at Boardwave, told TNW. “New technology arenas, geopolitics, and an evolving operating environment are creating a unique opportunity for Europe to boost innovation.”
    Now Europe must turn that potential into profits, the report argues. To that end, it suggests five key interventions to boost Europe’s software ecosystem:

    Expand late-stage funding
    Encourage experienced founders to start new companies
    Make it easier for sales and marketing teams to work across borders and help startups grow faster
    Encourage more large firms in Europe to buy software from European startups by offering government support or financial incentives
    Strengthen public-private partnerships to de-risk new technologies

    Scaling up European tech
    The McKinsey/Boardwave report comes hot on the heels of the EU’s landmark Startup and Scaleup Strategy, launched last week. The plan set out several reforms designed to remove barriers to growth for the bloc’s early-stage companies.
    “If implemented boldly, and most importantly quickly, it can help Europe move from fragmented success stories to systemic, continent-wide scale; otherwise, we risk being left behind,” said Robinson, commenting on the new strategy.
    The EU’s proposal includes provisions for a long-awaited “28th regime,” which would allow companies to operate under a single set of rules across the 27 member states. It is intended to reduce headaches around taxes, employment rules, and insolvency.
    Robinson said he believes the EU’s new strategy will strengthen Europe’s software ecosystem by making it easier to operate across borders.
    “We need to act as one innovation ecosystem, not 27 different ones,” he said. “That’s what makes this Europe’s moonshot moment. If we connect and act now, we can lead. And not just in Europe, but globally.”

    Story by Siôn Geschwindt

    Siôn is a freelance science and technology reporter, specialising in climate and energy. From nuclear fusion breakthroughs to electric vehicles, he's happiest sourcing a scoop, investigating the impact of emerging technologies, and even putting them to the test. He has five years of journalism experience and holds a dual degree in media and environmental science from the University of Cape Town, South Africa. When he's not writing, you can probably find Siôn out hiking, surfing, playing the drums or catering to his moderate caffeine addiction. You can contact him at: sion.geschwindt [at] protonmail [dot] com

  • Trump Responds to Elon Musk's Attack

    President Donald Trump has responded to billionaire and former number-one ally Elon Musk, after he heavily criticized the White House's so-called "big, beautiful bill."
    Over the weekend, Musk told CBS News that he's "disappointed" by the price tag of the tax and spending bill, arguing that it "increases the budget deficit, not just decreases it, and undermines the work that the DOGE team is doing."
    "I think a bill can be big or it can be beautiful," he added, "but I don't know if it can be both. My personal opinion."
    It was a once-rare but increasingly common moment of public disagreement between the two ultra-public figures, who had stood side by side during the contentious election last year when Trump ultimately retook the White House.
    Asked about Musk's reaction to the bill, which would indeed raise the debt ceiling by $4 trillion, Trump offered a word salad response.
    "We have to get a lot of votes, we can't be cutting — we need to get a lot of support," the president told reporters on Wednesday, as quoted by USA Today, arguing that the bill would've lost momentum with deeper proposed cuts. "I'm not happy about certain aspects of it, but I'm thrilled by other aspects of it."
    It's a particularly notable inflection point for Musk, who announced that he would be stepping away from his role as a "special government employee" after months of implementing disastrous and chaotic cost-cutting measures with the help of his so-called Department of Government Efficiency.
    "The DOGE mission will only strengthen over time as it becomes a way of life throughout the government," he tweeted Wednesday evening.
    But the White House seemed eager to move on after the increased friction with Musk. His "off-boarding will begin tonight," a White House official confirmed to Reuters hours earlier.
    However, given the power Musk has accrued in the White House, it's unlikely the billionaire will simply vanish from the scene. As his tweet suggests, his influence will likely be felt for a long time to come.
    Where all of this leaves Musk's relationship with the president is hard to read. Trump has been left with the cleanup job and is looking to codify some of DOGE's catastrophic budget cuts. He's expected to send a whopping $9.4 billion rescissions package to Congress next week, proposing deep cuts to USAID — which has already been gutted by DOGE — and the Corporation for Public Broadcasting, which formed NPR and funds PBS.
    Musk's departure from the White House will come as a relief to many, including Republican lawmakers in Washington, DC, and investors in his ailing carmaker Tesla.
    The mercurial CEO's embrace of far-right ideals and his tenure in the government cutting federal funding have proven incredibly damaging to Tesla's brand, sending sales off a cliff worldwide.
    Trump, for his part, could certainly do with a whole lot less of Musk, whose popularity has tanked, dragging down the administration's favorability with him.
    After all, the president is certainly fully capable of sowing mayhem all by himself — without a tempestuous billionaire whispering in his ear and rebuking him in public.
    More on Elon Musk: Elon Musk Just Ghosted a Huge Company Meeting
  • Live Updates From Google I/O 2025

    [Image: © Gizmodo]
    I wish I was making this stuff up, but chaos seems to follow me at all tech events. After waiting an hour to try out Google’s hyped-up Android XR smart glasses for five minutes, I was actually given a three-minute demo, where I actually had 90 seconds to use Gemini in an extremely controlled environment. And actually, if you watch the video in my hands-on write-up below, you’ll see that I spent even less time with it because Gemini fumbled a few times in the beginning. Oof. I really hope there’s another chance to try them again because it was just too rushed. I think it might be the most rushed product demo I’ve ever had in my life, and I’ve been covering new gadgets for the past 15 years. —Raymond Wong
    Google, a company valued at $2 trillion, seemingly brought one pair of Android XR smart glasses for press to demo… and one pair of Samsung’s Project Moohan mixed reality headset running the same augmented reality platform. I’m told the wait is 1 hour to try either device for 5 minutes. Of course, I’m going to try out the smart glasses. But if I want to demo Moohan, I need to get back in line and wait all over again. This is madness! —Raymond Wong
    May 20: Keynote Fin
    [Image: © Raymond Wong / Gizmodo]
    Talk about a loooooong keynote. Total duration: 1 hour and 55 minutes, and then Sundar Pichai walked off stage. What do you make of all the AI announcements? Let’s hang in the comments! I’m headed over to a demo area to try out a pair of Android XR smart glasses. I can’t lie, even though the video stream from the live demo lagged for a good portion, I’m hyped! It really feels like Google is finally delivering on Google Glass over a decade later. Shoulda had Google co-founder Sergey Brin jump out of a helicopter and land on stage again, though. —Raymond Wong
    Pieces of Project Astra, Google’s computer vision-based UI, are winding up in various different products, it seems, and not all of them are geared toward smart glasses specifically. One of the most exciting updates to Astra is “computer control,” which allows one to do a lot more on their devices with computer vision alone. For instance, you could just point your phone at an object (say, a bike) and then ask Astra to search for the bike, find some brakes for it, and then even pull up a YouTube tutorial on how to fix it—all without typing anything into your phone. —James Pero
    Shopping bots aren’t just for scalpers anymore. Google is putting the power of automated consumerism in your hands with its new AI shopping tool. There are some pretty wild ideas here, too, including a virtual shopping avatar that’s supposed to represent your own body—the idea is you can make it try on clothes to see how they fit. How all that works in practice is TBD, but if you’re ready for a full AI shopping experience, you’ve finally got it. For the whole story, check out the story from Gizmodo’s Senior Editor, Consumer Tech, Raymond Wong. —James Pero
    I got what I wanted. Google showed off what its Android XR tech can bring to smart glasses. In a live demo, Google showcased how a pair of unspecified smart glasses did a few of the things that I’ve been waiting to do, including projecting live navigation and remembering objects in your environment—basically the stuff that it pitched with Project Astra last year, but in a glasses form factor. There’s still a lot that needs to happen, both hardware and software-wise, before you can walk around wearing glasses that actually do all those things, but it was exciting to see that Google is making progress in that direction. It’s worth noting that not all of the demos went off smoothly—there was lots of stutter in the live translation demo—but I guess props to them for giving it a go. When we’ll actually get to walk around wearing functional smart glasses with some kind of optical passthrough or virtual display is anyone’s guess, but the race is certainly heating up. —James Pero
    Google’s SynthID has been around for nearly three years, but it’s been largely kept out of the public eye. The system embeds an invisible watermark in AI-generated images, video, or audio that can be detected with Google DeepMind’s proprietary tool. At I/O, Google said it was working with both Nvidia and GetReal to introduce the same watermarking technique with those companies’ AI image generators. Users may be able to detect these watermarks themselves, even if only part of the media was modified with AI. Early testers are getting access to it “today,” but hopefully more people can access it at a later date from labs.google/synthid. — Kyle Barr
    This keynote has been going on for 1.5 hours now. Do I run to the restroom now or wait? But how much longer until it ends??? Can we petition Sundar Pichai to make these keynotes shorter or at least have an intermission? Update: I ran for it right near the end before the Android XR news hit. I almost made it… —Raymond Wong
    [Image: © Raymond Wong / Gizmodo]
    Google’s new video generator, Veo, is getting a big upgrade that includes sound generation, and it’s not just dialogue. Veo 3 can also generate sound effects and music. In a demo, Google showed off an animated forest scene that includes all three—dialogue, sound effects, and music. The length of clips, I assume, will be short at first, but the results look pretty sophisticated if the demo is to be believed. —James Pero
    If you pay for a Google One subscription, you’ll start to see Gemini in your Google Chrome browser later this week. This will appear as the sparkle icon at the top of your browser app. You can use this to bring up a prompt box to ask a question about the current page you’re browsing, such as if you want to consolidate a number of user reviews for a local campsite. — Kyle Barr
    [Image: © Google / GIF by Gizmodo]
    Google’s high-tech video conferencing tech, now called Beam, looks impressive. You can make eye contact! It feels like the person on the screen is right in front of you! It’s glasses-free 3D! Come back down to Earth, buddy—it’s not coming out as a consumer product. Commercial first with partners like HP. Time to apply for a new job? —Raymond Wong
    Google doesn’t want Search to be tied to your browser or apps anymore. Search Live is akin to the video and audio comprehension capabilities of Gemini Live, but with the added benefit of getting quick answers based on sites from around the web. Google showed how Search Live could comprehend queries about an at-home science experiment and bring in answers from sites like Quora or YouTube. — Kyle Barr
    Google is getting deep into augmented reality with Android XR—its operating system built specifically for AR glasses and VR headsets. Google showed us how users may be able to see a holographic live Google Maps view directly on their glasses or set up calendar events, all without needing to touch a single screen. This uses Gemini AI to comprehend your voice prompts and follow through on your instructions. Google doesn’t have its own device to share at I/O, but it’s planning to work with companies like XReal and Samsung to craft new devices across both AR and VR. — Kyle Barr
    I know how much you all love subscriptions! Google does too, apparently, and is now offering a monthly AI bundle that groups some of its most advanced AI services. Subscribing to Google AI Ultra will get you:
    Gemini and its full capabilities
    Flow, a new, more advanced AI filmmaking tool based on Veo
    Whisk, which allows text-to-image creation
    NotebookLM, an AI note-taking app
    Gemini in Gmail and Docs
    Gemini in Chrome
    Project Mariner, an agentic research AI
    30TB of storage
    I’m not sure who needs all of this, but maybe there are more AI superusers than I thought. —James Pero
    Google CEO Sundar Pichai was keen to claim that users are big, big fans of AI Overviews in Google Search results. If there wasn’t already enough AI in your search bar, Google will now stick an entire “AI Mode” tab on your search bar next to the Google Lens button. This encompasses the Gemini 2.5 model. This opens up an entirely new UI for searching via a prompt with a chatbot. After you input your rambling search query, it will bring up an assortment of short-form textual answers, links, and even a Google Maps widget, depending on what you were looking for. AI Mode should be available starting today. Google said AI Mode pulls together information from the web alongside its other data, like weather or academic research through Google Scholar. It should also eventually encompass your “personal context,” which will be available later this summer. Eventually, Google will add more AI Mode capabilities directly to AI Overviews. — Kyle Barr
    May 20: News Embargo Has Lifted!
    [Image: © Xreal]
    Get your butt over to Gizmodo.com’s home page because the Google I/O news embargo just lifted. We’ve got a bunch of stories, including this one about Google partnering up with Xreal for a new pair of “optical see-through” smart glasses called Project Aura. The smart glasses run Android XR and are powered by a Qualcomm chip. You can see three cameras. Wireless, these are not—you’ll need to tether to a phone or other device. Update: Little scoop: I’ve confirmed that Project Aura has a 70-degree field of view, which is way wider than the One Pro’s FOV, which is 57 degrees. —Raymond Wong
    [Image: © Raymond Wong / Gizmodo]
    Google’s DeepMind CEO showed off the updated version of Project Astra running on a phone and drove home how its “personal, proactive, and powerful” AI features are the groundwork for a “universal assistant” that truly understands and works on your behalf. If you think Gemini is a fad, it’s time to get familiar with it because it’s not going anywhere. —Raymond Wong
    May 20: Gemini 2.5 Pro Is Here
    [Image: © Gizmodo]
    Google says Gemini 2.5 Pro is its “most advanced model yet,” comes with “enhanced reasoning” and better coding ability, and can even create interactive simulations. You can try it now via Google AI Studio. —James Pero
    There are two major types of transformer AI used today: the LLM, AKA the large language model, and the diffusion model, which is mostly used for image generation. The Gemini Diffusion model blurs the lines between the two. Google said its new research model can iterate on a solution quickly and correct itself while generating an answer. For math or coding prompts, Gemini Diffusion can potentially output an entire response much faster than a typical chatbot. Unlike a traditional LLM, which may take a few seconds to answer a question, Gemini Diffusion can create a response to a complex math equation in the blink of an eye, and still share the steps it took to reach its conclusion. — Kyle Barr
    [Image: © Gizmodo]
    New Gemini 2.5 Flash and Gemini Pro models are incoming and, naturally, Google says both are faster and more sophisticated across the board. One of the improvements for Gemini 2.5 Flash is even more inflection when speaking. Unfortunately for my ears, Google demoed the new Flash speaking in a whisper that sent chills down my spine. —James Pero
    Is anybody keeping track of how many times Google execs have said “Gemini” and “AI” so far? Oops, I think I’m already drunk, and we’re only 20 minutes in. —Raymond Wong
    [Image: © Raymond Wong / Gizmodo]
    Google’s Project Astra is supposed to be getting much better at avoiding hallucinations, AKA when the AI makes stuff up. Project Astra’s vision and audio comprehension capabilities are supposed to be far better at knowing when you’re trying to trick it. In a video, Google showed how its Gemini Live AI wouldn’t buy your bullshit if you tell it that a garbage truck is a convertible, a lamp pole is a skyscraper, or your shadow is some stalker. This should hopefully mean the AI doesn’t confidently lie to you, as well. Google CEO Sundar Pichai said “Gemini is really good at telling you when you’re wrong.” These enhanced features should be rolling out today for the Gemini app on iOS and Android. — Kyle Barr
    May 20: Release the Agents
    Like pretty much every other AI player, Google is pursuing agentic AI in a big way. I’d prepare for a lot more talk about how Gemini can take tasks off your hands as the keynote progresses. —James Pero
    [Image: © Gizmodo]
    Google has finally moved Project Starline—its futuristic video-calling machine—into a commercial project called Google Beam. According to Pichai, Google Beam can take a 2D image and transform it into a 3D one, and will also incorporate live translation. —James Pero
    [Image: © Gizmodo]
    Google’s CEO, Sundar Pichai, says Google is shipping at a relentless pace, and to be honest, I tend to agree. There are tons of Gemini models out there already, even though it’s only been out for two years. Probably my favorite milestone, though, is that it has now completed Pokémon Blue, earning all 8 badges, according to Pichai. —James Pero
    May 20: Let’s Do This
    Buckle up, kiddos, it’s I/O time. Methinks there will be a lot to get to, so you may want to grab a snack now. —James Pero
    Counting down until the keynote… only a few more minutes to go. The DJ just said AI is changing music and how it’s made. But don’t forget that we’re all here… in person. Will we all be wearing Android XR smart glasses next year? Mixed reality headsets? —Raymond Wong
    [Image: © Raymond Wong / Gizmodo]
    Fun fact: I haven’t attended Google I/O in person since before Covid-19. The Wi-Fi is definitely stronger and more stable now. It’s so great to be back and covering for Gizmodo. Dream job, unlocked! —Raymond Wong
    [Image: © Raymond Wong / Gizmodo]
    Mini breakfast burritos… bagels… but these bagels can’t compare to real Made In New York City bagels with that authentic NY water 😏 —Raymond Wong
    [Images: © Raymond Wong / Gizmodo]
    I’ve arrived at the Shoreline Amphitheatre in Mountain View, Calif., where the Google I/O keynote is taking place in 40 minutes. Seats are filling up. But first, must go check out the breakfast situation because my tummy is growling… —Raymond Wong
    May 20: Should We Do a Giveaway?
    [Image: © Raymond Wong / Gizmodo]
    Google I/O attendees get a special tote bag, a metal water bottle, a cap, and a cute sheet of stickers. I always end up donating this stuff to Goodwill during the holidays. A guy living in NYC with two cats only has so much room for tote bags and water bottles… Would be cool to do a giveaway. Leave a comment to let us know if you’d be into that and I can pester top brass to make it happen 🤪 —Raymond Wong
    May 20: Got My Press Badge!
    In 13 hours, Google will blitz everyone with Gemini AI, Gemini AI, and tons more Gemini AI. Who’s ready for… Gemini AI? —Raymond Wong
    May 19: Google Glass: The Redux
    [Image: © Google / Screenshot by Gizmodo]
    Google is very obviously inching toward the release of some kind of smart glasses product for the first time since Google Glass, and if I were a betting man, I’d say this one will have a much warmer reception than its forebear. I’m not saying Google can snatch the crown from Meta and its Ray-Ban smart glasses right out of the gate, but if it plays its cards right, it could capitalize on the integration with its other hardware in a big way. Meta may finally have a real competitor on its hands. ICYMI: Here’s Google’s President of the Android Ecosystem, Sameer Samat, teasing some kind of smart glasses device in a recorded demo last week. —James Pero
    Hi folks, I’m James Pero, Gizmodo’s new Senior Writer. There’s a lot we have to get to with Google I/O, so I’ll keep this introduction short. I like long walks on the beach, the wind in my nonexistent hair, and I’m really, really looking forward to bringing you even more of the spicy, insightful, and entertaining coverage on consumer tech that Gizmodo is known for. I’m starting my tenure here out hot with Google I/O, so make sure you check back here throughout the week to get those sweet, sweet blogs and commentary from me and Gizmodo’s Senior Consumer Tech Editor Raymond Wong. —James Pero
    [Image: © Raymond Wong / Gizmodo]
    Hey everyone! Raymond Wong, senior editor in charge of Gizmodo’s consumer tech team, here! Landed in San Francisco, and I’ll be making my way over to Mountain View, California, later today to pick up my press badge and scope out the scene for tomorrow’s Google I/O keynote, which kicks off at 1 p.m. ET / 10 a.m. PT. Google I/O is a developer conference, but that doesn’t mean it’s news only for engineers. While there will be a lot of nerdy stuff that will have developers hollering, what Google announces—expect updates on Gemini AI, Android, and Android XR, to name a few headliners—will shape consumer products for the rest of this year and also the years to come. I/O is a glimpse at Google’s technology roadmap as AI weaves itself into the way we compute at our desks and on the go. This is going to be a fun live blog! —Raymond Wong
    #live #updates #google
    Live Updates From Google I/O 2025 🔴
    © Gizmodo I wish I was making this stuff up, but chaos seems to follow me at all tech events. After waiting an hour to try out Google’s hyped-up Android XR smart glasses for five minutes, I was actually given a three-minute demo, where I actually had 90 seconds to use Gemini in an extremely controlled environment. And actually, if you watch the video in my hands-on write-up below, you’ll see that I spent even less time with it because Gemini fumbled a few times in the beginning. Oof. I really hope there’s another chance to try them again because it was just too rushed. I think it might be the most rushed product demo I’ve ever had in my life, and I’ve been covering new gadgets for the past 15 years. —Raymond Wong Google, a company valued at trillion, seemingly brought one pair of Android XR smart glasses for press to demo… and one pair of Samsung’s Project Moohan mixed reality headset running the same augmented reality platform. I’m told the wait is 1 hour to try either device for 5 minutes. Of course, I’m going to try out the smart glasses. But if I want to demo Moohan, I need to get back in line and wait all over again. This is madness! —Raymond Wong May 20Keynote Fin © Raymond Wong / Gizmodo Talk about a loooooong keynote. Total duration: 1 hour and 55 minutes, and then Sundar Pichai walked off stage. What do you make of all the AI announcements? Let’s hang in the comments! I’m headed over to a demo area to try out a pair of Android XR smart glasses. I can’t lie, even though the video stream from the live demo lagged for a good portion, I’m hyped! It really feels like Google is finally delivering on Google Glass over a decade later. Shoulda had Google co-founder Sergey Brin jump out of a helicopter and land on stage again, though. —Raymond Wong Pieces of Project Astra, Google’s computer vision-based UI, are winding up in various different products, it seems, and not all of them are geared toward smart glasses specifically. One of the most exciting updates to Astra is “computer control,” which allows one to do a lot more on their devices with computer vision alone. For instance, you could just point your phone at an objectand then ask Astra to search for the bike, find some brakes for it, and then even pull up a YouTube tutorial on how to fix it—all without typing anything into your phone. —James Pero Shopping bots aren’t just for scalpers anymore. Google is putting the power of automated consumerism in your hands with its new AI shopping tool. There are some pretty wild ideas here, too, including a virtual shopping avatar that’s supposed to represent your own body—the idea is you can make it try on clothes to see how they fit. How all that works in practice is TBD, but if you’re ready for a full AI shopping experience, you’ve finally got it. For the whole story, check out our story from Gizmodo’s Senior Editor, Consumer Tech, Raymond Wong. —James Pero I got what I wanted. Google showed off what its Android XR tech can bring to smart glasses. In a live demo, Google showcased how a pair of unspecified smart glasses did a few of the things that I’ve been waiting to do, including projecting live navigation and remembering objects in your environment—basically the stuff that it pitched with Project Astra last year, but in a glasses form factor. There’s still a lot that needs to happen, both hardware and software-wise, before you can walk around wearing glasses that actually do all those things, but it was exciting to see that Google is making progress in that direction. 
It’s worth noting that not all of the demos went off smoothly—there was lots of stutter in the live translation demo—but I guess props to them for giving it a go. When we’ll actually get to walk around wearing functional smart glasses with some kind of optical passthrough or virtual display is anyone’s guess, but the race is certainly heating up. —James Pero Google’s SynthID has been around for nearly three years, but it’s been largely kept out of the public eye. The system disturbs AI-generated images, video, or audio with an invisible, undetectable watermark that can be observed with Google DeepMind’s proprietary tool. At I/O, Google said it was working with both Nvidia and GetReal to introduce the same watermarking technique with those companies’ AI image generators. Users may be able to detect these watermarks themselves, even if only part of the media was modified with AI. Early testers are getting access to it “today,” but hopefully more people can acess it at a later date from labs.google/synthid. — Kyle Barr This keynote has been going on for 1.5 hours now. Do I run to the restroom now or wait? But how much longer until it ends??? Can we petiton to Sundar Pichai to make these keynotes shorter or at least have an intermission? Update: I ran for it right near the end before Android XR news hit. I almost made it… —Raymond Wong © Raymond Wong / Gizmodo Google’s new video generator Veo, is getting a big upgrade that includes sound generation, and it’s not just dialogue. Veo 3 can also generate sound effects and music. In a demo, Google showed off an animated forest scene that includes all three—dialogue, sound effects, and video. The length of clips, I assume, will be short at first, but the results look pretty sophisticated if the demo is to be believed. —James Pero If you pay for a Google One subscription, you’ll start to see Gemini in your Google Chrome browserlater this week. This will appear as the sparkle icon at the top of your browser app. You can use this to bring up a prompt box to ask a question about the current page you’re browsing, such as if you want to consolidate a number of user reviews for a local campsite. — Kyle Barr © Google / GIF by Gizmodo Google’s high-tech video conferencing tech, now called Beam, looks impressive. You can make eye contact! It feels like the person in the screen is right in front of you! It’s glasses-free 3D! Come back down to Earth, buddy—it’s not coming out as a consumer product. Commercial first with partners like HP. Time to apply for a new job? —Raymond Wong here: Google doesn’t want Search to be tied to your browser or apps anymore. Search Live is akin to the video and audio comprehension capabilities of Gemini Live, but with the added benefit of getting quick answers based on sites from around the web. Google showed how Search Live could comprehend queries about at-home science experiment and bring in answers from sites like Quora or YouTube. — Kyle Barr Google is getting deep into augmented reality with Android XR—its operating system built specifically for AR glasses and VR headsets. Google showed us how users may be able to see a holographic live Google Maps view directly on their glasses or set up calendar events, all without needing to touch a single screen. This uses Gemini AI to comprehend your voice prompts and follow through on your instructions. Google doesn’t have its own device to share at I/O, but its planning to work with companies like XReal and Samsung to craft new devices across both AR and VR. 
— Kyle Barr Read our full report here: I know how much you all love subscriptions! Google does too, apparently, and is now offering a per month AI bundle that groups some of its most advanced AI services. Subscribing to Google AI Ultra will get you: Gemini and its full capabilities Flow, a new, more advanced AI filmmaking tool based on Veo Whisk, which allows text-to-image creation NotebookLM, an AI note-taking app Gemini in Gmail and Docs Gemini in Chrome Project Mariner, an agentic research AI 30TB of storage I’m not sure who needs all of this, but maybe there are more AI superusers than I thought. —James Pero Google CEO Sundar Pichai was keen to claim that users are big, big fans of AI overviews in Google Search results. If there wasn’t already enough AI on your search bar, Google will now stick an entire “AI Mode” tab on your search bar next to the Google Lens button. This encompasses the Gemini 2.5 model. This opens up an entirely new UI for searching via a prompt with a chatbot. After you input your rambling search query, it will bring up an assortment of short-form textual answers, links, and even a Google Maps widget depending on what you were looking for. AI Mode should be available starting today. Google said AI Mode pulls together information from the web alongside its other data like weather or academic research through Google Scholar. It should also eventually encompass your “personal context,” which will be available later this summer. Eventually, Google will add more AI Mode capabilities directly to AI Overviews. — Kyle Barr May 20News Embargo Has Lifted! © Xreal Get your butt over to Gizmodo.com’s home page because the Google I/O news embargo just lifted. We’ve got a bunch of stories, including this one about Google partnering up with Xreal for a new pair of “optical see-through”smart glasses called Project Aura. The smart glasses run Android XR and are powered by a Qualcomm chip. You can see three cameras. Wireless, these are not—you’ll need to tether to a phone or other device. Update: Little scoop: I’ve confirmed that Project Aura has a 70-degree field of view, which is way wider than the One Pro’s FOV, which is 57 degrees. —Raymond Wong © Raymond Wong / Gizmodo Google’s DeepMind CEO showed off the updated version of Project Astra running on a phone and drove home how its “personal, proactive, and powerful” AI features are the groundwork for a “universal assistant” that truly understands and works on your behalf. If you think Gemini is a fad, it’s time to get familiar with it because it’s not going anywhere. —Raymond Wong May 20Gemini 2.5 Pro Is Here © Gizmodo Google says Gemini 2.5 Pro is its “most advanced model yet,” and comes with “enhanced reasoning,” better coding ability, and can even create interactive simulations. You can try it now via Google AI Studio. —James Pero There are two major types of transformer AI used today. One is the LLM, AKA large language models, and diffusion models—which are mostly used for image generation. The Gemini Diffusion model blurs the lines of these types of models. Google said its new research model can iterate on a solution quickly and correct itself while generating an answer. For math or coding prompts, Gemini Diffusion can potentially output an entire response much faster than a typical Chatbot. Unlike a traditional LLM model, which may take a few seconds to answer a question, Gemini Diffusion can create a response to a complex math equation in the blink of an eye, and still share the steps it took to reach its conclusion. 
— Kyle Barr © Gizmodo New Gemini 2.5 Flash and Gemini Pro models are incoming and, naturally, Google says both are faster and more sophisticated across the board. One of the improvements for Gemini 2.5 Flash is even more inflection when speaking. Unfortunately for my ears, Google demoed the new Flash speaking in a whisper that sent chills down my spine. —James Pero Is anybody keeping track of how many times Google execs have said “Gemini” and “AI” so far? Oops, I think I’m already drunk, and we’re only 20 minutes in. —Raymond Wong © Raymond Wong / Gizmodo Google’s Project Astra is supposed to be getting much better at avoiding hallucinations, AKA when the AI makes stuff up. Project Astra’s vision and audio comprehension capabilities are supposed to be far better at knowing when you’re trying to trick it. In a video, Google showed how its Gemini Live AI wouldn’t buy your bullshit if you tell it that a garbage truck is a convertible, a lamp pole is a skyscraper, or your shadow is some stalker. This should hopefully mean the AI doesn’t confidently lie to you, as well. Google CEO Sundar Pichai said “Gemini is really good at telling you when you’re wrong.” These enhanced features should be rolling out today for Gemini app on iOS and Android. — Kyle Barr May 20Release the Agents Like pretty much every other AI player, Google is pursuing agentic AI in a big way. I’d prepare for a lot more talk about how Gemini can take tasks off your hands as the keynote progresses. —James Pero © Gizmodo Google has finally moved Project Starline—its futuristic video-calling machine—into a commercial project called Google Beam. According to Pichai, Google Beam can take a 2D image and transform it into a 3D one, and will also incorporate live translate. —James Pero © Gizmodo Google’s CEO, Sundar Pichai, says Google is shipping at a relentless pace, and to be honest, I tend to agree. There are tons of Gemini models out there already, even though it’s only been out for two years. Probably my favorite milestone, though, is that it has now completed Pokémon Blue, earning all 8 badges according to Pichai. —James Pero May 20Let’s Do This Buckle up, kiddos, it’s I/O time. Methinks there will be a lot to get to, so you may want to grab a snack now. —James Pero Counting down until the keynote… only a few more minutes to go. The DJ just said AI is changing music and how it’s made. But don’t forget that we’re all here… in person. Will we all be wearing Android XR smart glasses next year? Mixed reality headsets? —Raymond Wong © Raymond Wong / Gizmodo Fun fact: I haven’t attended Google I/O in person since before Covid-19. The Wi-Fi is definitely stronger and more stable now. It’s so great to be back and covering for Gizmodo. Dream job, unlocked! —Raymond Wong © Raymond Wong / Gizmodo Mini breakfast burritos… bagels… but these bagels can’t compare to real Made In New York City bagels with that authentic NY water 😏 —Raymond Wong © Raymond Wong / Gizmodo © Raymond Wong / Gizmodo © Raymond Wong / Gizmodo © Raymond Wong / Gizmodo I’ve arrived at the Shoreline Amphitheatre in Mountain View, Calif., where the Google I/O keynote is taking place in 40 minutes. Seats are filling up. But first, must go check out the breakfast situation because my tummy is growling… —Raymond Wong May 20Should We Do a Giveaway? © Raymond Wong / Gizmodo Google I/O attendees get a special tote bag, a metal water bottle, a cap, and a cute sheet of stickers. I always end up donating this stuff to Goodwill during the holidays. 
A guy living in NYC with two cats only has so much room for tote bags and water bottles… Would be cool to do giveaway. Leave a comment to let us know if you’d be into that and I can pester top brass to make it happen 🤪 —Raymond Wong May 20Got My Press Badge! In 13 hours, Google will blitz everyone with Gemini AI, Gemini AI, and tons more Gemini AI. Who’s ready for… Gemini AI? —Raymond Wong May 19Google Glass: The Redux © Google / Screenshot by Gizmodo Google is very obviously inching toward the release of some kind of smart glasses product for the first time sinceGoogle Glass, and if I were a betting man, I’d say this one will have a much warmer reception than its forebearer. I’m not saying Google can snatch the crown from Meta and its Ray-Ban smart glasses right out of the gate, but if it plays its cards right, it could capitalize on the integration with its other hardwarein a big way. Meta may finally have a real competitor on its hands. ICYMI: Here’s Google’s President of the Android Ecosystem, Sameer Samat, teasing some kind of smart glasses device in a recorded demo last week. —James Pero Hi folks, I’m James Pero, Gizmodo’s new Senior Writer. There’s a lot we have to get to with Google I/O, so I’ll keep this introduction short. I like long walks on the beach, the wind in my nonexistent hair, and I’m really, really, looking forward to bringing you even more of the spicy, insightful, and entertaining coverage on consumer tech that Gizmodo is known for. I’m starting my tenure here out hot with Google I/O, so make sure you check back here throughout the week to get those sweet, sweet blogs and commentary from me and Gizmodo’s Senior Consumer Tech Editor Raymond Wong. —James Pero © Raymond Wong / Gizmodo Hey everyone! Raymond Wong, senior editor in charge of Gizmodo’s consumer tech team, here! Landed in San Francisco, and I’ll be making my way over to Mountain View, California, later today to pick up my press badge and scope out the scene for tomorrow’s Google I/O keynote, which kicks off at 1 p.m. ET / 10 a.m. PT. Google I/O is a developer conference, but that doesn’t mean it’s news only for engineers. While there will be a lot of nerdy stuff that will have developers hollering, what Google announces—expect updates on Gemini AI, Android, and Android XR, to name a few headliners—will shape consumer productsfor the rest of this year and also the years to come. I/O is a glimpse at Google’s technology roadmap as AI weaves itself into the way we compute at our desks and on the go. This is going to be a fun live blog! —Raymond Wong #live #updates #google
  • Google I/O 2025: Android Takes A Back Seat To AI And XR

    Google CEO Sundar Pichai talking about Google Beam, formerly known as Project Starline, at Google I/O 2025 (Photo: Anshel Sag)
    Google used its annual I/O event this week to put the focus squarely on AI — with a strong dash of XR. While there’s no doubt that Google remains very committed to Android and the Android ecosystem, it was more than apparent that the company’s work on AI is only accelerating. Onstage, Google executives showed how its Gemini AI models have seen a more than 50x increase in monthly token usage over the past year, with the major inflection point clearly being the release of Gemini 2.5 in March 2025.

    I believe that Google’s efforts in AI have been supercharged by Gemini 2.5 and the agentic era of AI. The company also showed its continued commitment to getting Android XR off the ground with the second developer preview of Android XR, which it also announced at Google I/O. (Note: Google is an advisory client of my firm, Moor Insights & Strategy.)
    Chart: Google’s monthly tokens processed (Anshel Sag)

    Incorporating Gemini And AI Everywhere
    For Google, the best way to justify the long-term and continuous investment in Gemini is to make it accessible in as many ways as possible. That includes expanding into markets beyond the smartphone and browser. That’s why Gemini is already replacing Google Assistant in most areas. This is also a necessary move because Google Assistant’s functionality has regressed to the point of frustration as the company has shifted development resources to Gemini. This means that we’re getting Gemini via Google TV, Android Auto and WearOS. Let’s not forget that Android XR is the first operating system from Google that has been built from the ground up during the Gemini era. That translates to most XR experiences from Google being grounded in AI from the outset to make the most of agents and multimodal AI for improving the user experience.

    To accelerate the pace of adoption of on-device AI, Google has also announced improvements to LiteRT, its runtime for using AI models locally that has a heavy focus on maximizing on-device NPUs. Google also announced the AI Edge Portal to enable developers to test and benchmark their on-device models. These models will be crucial for enabling low-latency and secure experiences for users when connectivity might be challenged or when data simply cannot leave the device. While I believe that on-device AI performance is going to be important to developers going forward, it is also important to recognize that hybrid AI — mixing on-device and cloud AI processing — is likely here to stay for a very long time.
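    To ground what that runtime does, here is a minimal sketch of on-device inference using the TensorFlow Lite interpreter API that LiteRT grew out of (LiteRT is Google’s rebranding of TensorFlow Lite). The model filename is hypothetical, and a production app would typically attach a GPU or NPU delegate for acceleration:

```python
import numpy as np
import tensorflow as tf

# Hypothetical model file bundled with the app; any .tflite flatbuffer works.
interpreter = tf.lite.Interpreter(model_path="on_device_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy input matching the model's declared shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)

# Inference runs entirely on the device: no network round trip, no data egress.
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)
```

    Hardware acceleration is presumably the layer the NPU-focused LiteRT improvements target: the same interpreter accepts delegates, so the calling code stays identical whether the model lands on the CPU, GPU, or an NPU.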
    Android XR, Smart Glasses And The Xreal Partnership
    Because Google introduced most of its Android updates in a separate “Android Show” a week before Google I/O, the Android updates during I/O mostly applied to Android XR. The new Material 3 Expressive design system will find its way across Google’s OSes and looks set to deliver snappier, more responsive experiences at equal or better performance. I wrote extensively about Google’s Android XR launch in December 2024, explaining how it would likely serve as Google’s tip of the spear for enabling new and unique AI experiences. At Google I/O, the company showed the sum of these efforts in terms of both creating partnerships and enabling a spectrum of XR devices from partners.
    Google’s Shahram Izadi, vice president and general manager of Android XR, talking about Project Moohan onstage at Google I/O 2025 (Photo: Anshel Sag)

    In this vein, Google reiterated its commitment to Samsung and Project Moohan, which Google now says will ship this year. The company also talked about other partnerships in the ecosystem that will enable new form factors for the AI-enabled wearable XR operating system. Specifically, it will be partnering with Warby Parker and Gentle Monster to develop smart glasses. In a press release, Google said it has allotted $150 million for its partnership with Warby Parker, with $75 million already committed to product development and commercialization and the remaining $75 million dependent on reaching certain milestones.
    I believe that this partnership is akin to the one that Meta established with EssilorLuxottica, leaving the design, fit and retail presence to the eyeglasses experts. Warby Parker is such a good fit because the company is already very forward-thinking on technology, and I believe that this partnership can enable Google to make some beautiful smart glasses to compete with Meta Ray Bans. While I absolutely adore my Meta Ray Bans, I do think they would be considerably more useful if they were running Gemini 2.5, even the flash version of the model. Gentle Monster is also a great fit for Google because it helps capture the Asian market better, and because its designs are so large that they give Google plenty of room to work with.
    Many people have written about their impressions of Project Moohan and the smart glasses from Google I/O, but the reality is that these were not new — or final — products. So, I hope that these XR devices are as exciting to people as they were to me back in December.
    Google announces Project Aura on stage during the Google I/O developer keynote (Photo: Anshel Sag)
    For me, the more important XR news from the event was the announcement of the Project Aura headset in partnership with Xreal. Project Aura, while still light on details, does seem to indicate that there’s a middle ground for Google between the more immersive Moohan headset and lightweight smart glasses. It’s evident that Google wants to capture this sweet spot with Xreal’s help. Also, if you know anything about Xreal’s history, it makes sense that it would be the company Google works with to bring 3-D AR to market. Project Aura feels like Google’s way to compete with Meta’s Orion in terms of field of view, 3-D AR capabilities and standalone compute. While many people think of Orion as a pair of standalone glasses, in fact the glasses depend on an external compute puck; with Qualcomm’s help, Google will also use a puck via a wire, though I would love to see that disappear in subsequent versions.
    The Xreal One and One Pro products already feel like moves in the direction Google is leaning, but with Project Aura it seems that Google wants more diversity within Android XR — and it wants to build a product with the company that has already shipped more AR headsets than anyone else. The wider 70-degree field of view should do wonders for the user experience, and while the price of Project Aura is still unclear, I would expect it to be much more expensive than most of Xreal’s current offerings. Google and Xreal say they will disclose more details about Project Aura at the AWE 2025 show in June, which I will be attending — so look for more details from me when that happens.
    Project Starline Becomes Google Beam
    Google also updated its XR conferencing platform, formerly called Project Starline, which it has been building with HP. Google has now turned the project into a product with the introduction of Google Beam. While not that much has changed since I last tried out Project Starline at HP’s headquarters last September, the technology is still quite impressive — and still quite expensive. One of the new capabilities for Google Beam, also being made available as part of Google Meet, is near-real-time translated conversations that capture a person’s tone, expressions and accents while translating their speech. I got to experience this at Google I/O, and it was extremely convincing, not to mention a great way to enhance the already quite impressive Beam experience. It really did sound like the translated voice was the person’s own voice speaking English; this was significant on its own, but achieving it with spatial video at fairly low latency was even better. I hope that Google will one day be able to do the translations in real time, synced with the user’s speech.
    Google says that it and HP are still coming to market with a Google Beam product later this year and will be showing it off at the InfoComm conference in June. Google has already listed some lead customers for Google Beam, including Deloitte, Salesforce, Citadel, NEC, Hackensack Meridian Health, Duolingo and Recruit. This is a longer list than I expected, but the technology is also more impressive than I had initially expected, so I am happy to see it finally come to market. I do believe that with time we’ll probably see Google Beam expand beyond the 65-inch screen, but for now that’s the best way to attain full immersion. I also expect that sooner or later we could see Beam working with Android XR devices as well.
    Analyst Takeaways From Google I/O
    I believe that Google is one of the few companies that genuinely understands the intersection of AI and XR — and that has the assets and capabilities to leverage that understanding. Other companies may have the knowledge but lack the assets, capabilities or execution. I also believe that Google finally understands the “why” behind XR and how much AI helps answer that question. Google’s previous efforts in XR were for the sake of pursuing XR and didn’t really align well with the rest of the company’s efforts. Especially given the growth of AI overall and the capabilities of Gemini in particular, AR glasses are now one of the best ways to experience AI. Nobody wants to hold their phone up to something for a multimodal AI to see it, and no one wants to type long AI prompts into their phone. They want to interact with AI in the context of more natural visual and auditory experiences. Although smartphones can deliver a fairly good experience for this, they pale in comparison to having the microphones and cameras closer to your eyes and mouth. The more you use AI this way, the less you find yourself needing to pull out your phone. I certainly don’t think smartphones are going to disappear, but I do think they are going to decline in terms of where most of an individual’s AI computing and connectivity happen.
    All of this is why I’m much more confident in Google’s approach to XR this time around, even though the company has burned so many bridges with its previous endeavors in the space (specifically Daydream and Glass). More than that, I believe that Google’s previous absence in the XR market has impeded the market’s growth. Now, however, the company is clearly investing in partnerships and ecosystem enablement. It will be important for the company to continue to execute on this and enable its partners to be successful. A big part of that is building a strong XR ecosystem that can compete with the likes of Apple and Meta. It won’t happen overnight, but the success of that ecosystem will be what makes or breaks Google’s approach to XR beyond its embrace of Gemini.
    Moor Insights & Strategy provides or has provided paid services to technology companies, like all tech industry research and analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking and video and speaking sponsorships. Of the companies mentioned in this article, Moor Insights & Strategy currently hasa paid business relationship with Google, HP, Meta, Qualcomm, Salesforce and Samsung.Editorial StandardsReprints & Permissions
  • AMD’s Radeon RX 9060 XT Could Do Budget GPUs Better Than Nvidia

    By Kyle Barr | Published May 20, 2025

    Don't be fooled by the image. AMD doesn't make its own cards, so whatever comes out won't look quite as nice. © AMD

    In the battle of the low-end, 60-class graphics cards, AMD wants to see if it can pull off the same sucker punch of price and performance it gave Nvidia during the launch of its mid-range GPUs. The graphics card maker offered the first, sparse details on its Radeon RX 9060 XT graphics processors late Tuesday at Computex. The card may offer enough power for your PC to hit solid gaming performance at 1440p resolution, similar to the $450 Nvidia GeForce RTX 5060 Ti, on cheaper gaming rigs. The real inflection point of this latest card will be whether you can actually buy it for its base price. The Radeon RX 9060 XT is the step down in GPU performance from the RX 9070 that AMD launched back in March. It’s based on the same RDNA 4 microarchitecture as the mid-range cards, but with 32 of the company’s latest compute units compared to the 56 on the higher-end card. The GPU comes with two options: one with 8 GB and another with 16 GB of GDDR6 VRAM. The version with more memory will be better for your rig long-term, especially if you plan to hook your PC up to a 1440p monitor and run the latest, more graphically intensive games. AMD did not offer us the full range of specs, which makes it hard to pin down just where this GPU will land in terms of raw performance compared to Nvidia’s latest cards. While the number of RDNA 4 compute units—the core clusters on AMD cards that process the thousands of calculations necessary for graphically intensive tasks—offers a vague impression of performance compared to the RX 9070, AMD didn’t provide any charts to compare FPS between games. The GPU runs on a 3.13 GHz boost clock and has between 150W and 182W of board power compared to the 2.54 GHz clock and 304W board power on the company’s Radeon RX 9070 XT.
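    Since AMD shared compute-unit counts and clocks but no benchmark charts, one crude way to frame expectations is to scale CUs by boost clock. Treat the sketch below as a ballpark only: it ignores memory bandwidth and RDNA 4’s dual-issue pipelines, and the 64-CU figure for the RX 9070 XT comes from AMD’s published spec sheet rather than this article:

```python
# Naive throughput proxy: compute units x boost clock (GHz).
# Ballpark only; real-world FPS also depends on memory, caches, and drivers.
cards = {
    "RX 9060 XT": {"cus": 32, "boost_ghz": 3.13},  # figures cited above
    "RX 9070 XT": {"cus": 64, "boost_ghz": 2.54},  # 64 CUs per AMD's spec sheet
}

baseline = cards["RX 9070 XT"]["cus"] * cards["RX 9070 XT"]["boost_ghz"]
for name, card in cards.items():
    score = card["cus"] * card["boost_ghz"]
    print(f"{name}: {score / baseline:.0%} of the RX 9070 XT on this proxy")
```

    On that naive math, the 9060 XT lands at roughly 60% of the 9070 XT, which is about what you’d expect from a 60-class part, but it’s a rough sketch until reviews land.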

    Without a price tag, it’s impossible to judge how much of a step down the latest card is compared to the RX 9070. AMD didn’t offer any word on a non-XT variant, either. The card will require a PCIe 5.0 x16 interface, the same as its other cards. AMD doesn’t craft its own GPUs and instead relies on AIC (add-in card) makers to produce its cards. We’ll update this article if AMD announces details on price or availability during its Computex keynote. The crown jewel of AMD’s current lineup of graphics cards is the RX 9070 XT. AMD made headlines when it set the suggested sale price of the GPU at $600, only $50 more than the 9070, but it packs enough performance to get playable framerates out of multiple intensive games at 4K with a fair amount of ray tracing settings turned up. Unfortunately, because of a combination of tariffs and stock woes, the 9070 XT ended up priced at over $800 and as high as $1,000 at some online retailers. We’ve seen prices fluctuate regularly over the past several months, but a near 20% price inflation to what should be a mid-range card is simply too much to stomach. However, the lower-end GPUs are faring better. The RTX 5060 Ti MSRP is set at $450, and the lowest price we’ve seen so far is $480. The $300 RTX 5060 is sitting closer to $320 from some AIC makers like Gigabyte. A fair number of Nvidia’s lowest-end GPUs are currently listed as “Out of Stock” or “Coming Soon” on sites like Newegg and Best Buy. Those buying a lower-end GPU are more price sensitive than people who can drop $2,000 on an RTX 5090 without blinking. AMD has even more impetus to set a price people can afford, and make sure it can keep costs level when the card finally hits store shelves.

    Daily Newsletter

    You May Also Like

    By

    Kyle Barr

    Published May 19, 2025

    By

    Kyle Barr

    Published May 19, 2025

    By

    Kyle Barr

    Published May 19, 2025

    By

    Kyle Barr

    Published May 16, 2025

    By

    Kyle Barr

    Published April 17, 2025

    By

    Kyle Barr

    Published April 15, 2025
    #amds #radeon #could #budget #gpus
    AMD’s Radeon RX 9060 XT Could Do Budget GPUs Better Than Nvidia
    GIZMODO.COM | By Kyle Barr, published May 20, 2025
    In the battle of the low-end, 60-class graphics cards, AMD wants to see if it can pull off the same sucker punch of price and performance it landed on Nvidia during the launch of its mid-range GPUs. The company offered the first, sparse details on its Radeon RX 9060 XT graphics processors late Tuesday at Computex. The card may offer enough power for cheaper gaming rigs to hit solid performance at 1440p resolution, similar to the $450 Nvidia GeForce RTX 5060 Ti. The real test for this latest card will be whether you can actually buy it at its base price.
    The Radeon RX 9060 XT is the step down in GPU performance from the RX 9070 that AMD launched back in March. It's based on the same RDNA 4 microarchitecture as the mid-range cards, but with 32 of the company's latest compute units compared to the 56 on the higher-end card. The GPU comes in two configurations: one with 8 GB and another with 16 GB of GDDR6 VRAM. The version with more memory will serve your rig better long-term, especially if you plan to hook your PC up to a 1440p monitor and run the latest, more graphically intensive games.
    AMD did not offer the full range of specs, which makes it hard to pin down just where this GPU will land in raw performance against Nvidia's latest cards. The number of RDNA 4 compute units (the core clusters on AMD cards that handle the thousands of calculations graphically intensive tasks require) gives a rough impression of performance relative to the RX 9070, but AMD didn't provide any charts comparing in-game FPS. The GPU runs at a 3.13 GHz boost clock and draws between 150 W and 182 W of board power, compared to the 2.54 GHz boost clock and 304 W board power of the company's Radeon RX 9070 XT. Without a price tag, it's impossible to judge how much of a step down the latest card is from the RX 9070, and AMD didn't offer any word on a non-XT variant, either. The card requires a PCIe 5.0 x16 interface, the same as AMD's other current cards. AMD doesn't manufacture its own boards, instead relying on AIC (add-in card) makers to produce them. We'll update this article if AMD announces details on price or availability during its Computex keynote.
    The crown jewel of AMD's current lineup is the RX 9070 XT. AMD made headlines when it set the GPU's suggested price at $600, only $50 more than the 9070, even though it packs enough performance to get playable framerates in multiple intensive games at 4K with a fair amount of ray tracing settings turned up. Unfortunately, thanks to a combination of tariffs and stock woes, the 9070 XT ended up priced at over $800, and as high as $1,000 at some online retailers. Prices have fluctuated regularly over the past several months, but a markup of 30% or more on what should be a mid-range card is simply too much to stomach.
    The lower-end GPUs are faring better. The RTX 5060 Ti's MSRP is set at $450, and the lowest street price we've seen so far is $480. The $300 RTX 5060 is sitting closer to $320 from some AIC makers like Gigabyte, and a fair number of Nvidia's lowest-end GPUs are currently listed as "Out of Stock" or "Coming Soon" on sites like Newegg and Best Buy. Buyers of lower-end GPUs are more price sensitive than people who can drop $2,000 on an RTX 5090 without blinking. That gives AMD even more impetus to set a price people can afford, and to make sure prices hold steady when the card finally hits store shelves.
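    To make the markup arithmetic above concrete, here is a minimal sketch in Python using the prices quoted in the article (the helper name markup_pct is ours, purely illustrative):

        # Street-price markup over MSRP for the cards quoted above.
        # MSRP and street prices are the article's figures; the computation
        # is just (street - msrp) / msrp expressed as a percentage.
        cards = {
            "RX 9070 XT": (600, 800),   # has hit $800+ at retail
            "RTX 5060 Ti": (450, 480),
            "RTX 5060": (300, 320),
        }

        def markup_pct(msrp: int, street: int) -> float:
            """Percentage premium of the street price over MSRP."""
            return (street - msrp) / msrp * 100

        for name, (msrp, street) in cards.items():
            print(f"{name}: ${msrp} MSRP -> ${street} street (+{markup_pct(msrp, street):.0f}%)")
        # RX 9070 XT: $600 MSRP -> $800 street (+33%)
        # RTX 5060 Ti: $450 MSRP -> $480 street (+7%)
        # RTX 5060: $300 MSRP -> $320 street (+7%)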
  • Scammers Are Using AI to Impersonate Government Officials

    LIFEHACKER.COM
    If you get a text or voice message from someone claiming to be a U.S. government official, they probably aren't who they say they are. The FBI is warning the public about an ongoing campaign in which scammers use AI-generated voice messages to impersonate senior government staff in an attempt to gain access to personal accounts and, by extension, sensitive information or money. Many of those targeted have been other current and former government officials (both federal and state) and their contacts, but that doesn't mean this scam, or something like it, won't land in your inbox or on your phone sooner or later. Here's how these AI-powered attacks work, and how to avoid falling victim.
    How the AI impersonation scam works
    The current scam can take the form of smishing, which targets individuals via SMS or MMS, or vishing, which uses voice messages. Either way, bad actors send AI-generated voice messages and/or texts that appear to come from senior U.S. government officials. The goal is to build trust before directing targets to a separate messaging platform via a malicious link, which ultimately ends with the victim entering login credentials or downloading malware. Scammers may also use the information they gather to target additional contacts, perpetuating the campaign. These scams are believable thanks to voice cloning and generative AI tools that let anyone easily impersonate public figures. Bad actors can also spoof phone numbers so that messages in smishing schemes appear to come from family, friends, or trusted contacts.
    How to spot fake vishing messages
    While AI-generated speech can be convincing, there are ways to identify these messages as fake. Listen for pronunciation and pacing that sound off, and for the presence (or lack) of emotion and variation in the speaker's voice: AI tends to sound slightly flatter and have less inflection than a real human, and you may detect odd pauses. Of course, you should be wary of any communication (voice or otherwise) from anyone claiming to represent an organization, including a government agency, especially if they send unsolicited links, request money or personal information, or push a sense of urgency. If you receive a message that sounds convincing, verify the caller's identity by searching for official contact information and calling back, or, if it's someone you know, hang up and reach out to them directly. Always independently confirm any request to send money or provide information, and never click links or download attachments sent via email or text. The FBI also suggests agreeing on a secret word or phrase with your close contacts so you can verify their identities against AI impersonation.