Arm is rebranding its system-on-a-chip product designs to showcase power savings for AI workloads, targeting a surprising sector
UK-based chip designer Arm offers the architecture for systems-on-a-chip that are used by some of the world’s largest tech brands, from Nvidia to Amazon to Google parent company Alphabet and beyond, all without ever manufacturing any hardware of its own — though that’s reportedly due to change this year.
And you’d think that with a record-setting quarter of $1.24 billion in total revenue, it might want to just keep things steady and keep raking in the cash.
But Arm sees how fast AI has taken off in the enterprise, and with some of its customers delivering record revenue of their own by offering AI graphics processing units that incorporate Arm’s tech, Arm wants a piece of the action.
Today, the company announced a new product naming strategy that underscores its shift from a supplier of component IP to a platform-first company.
“It’s about showing customers that we have much more to offer than just hardware and chip designs — specifically, we have a whole ecosystem that can help them scale AI and do so at lower cost with greater efficiency,” said Arm chief marketing officer Ami Badani in an exclusive interview with VentureBeat over Zoom yesterday.
Indeed, as Arm CEO Rene Haas told the tech news outlet The Next Platform back in February, Arm’s history of creating lower-power chips than the competition has set it up extremely well to serve as the basis for power-hungry AI training and inference jobs.
According to his comments in that article, today’s data centers consume approximately 460 terawatt-hours of electricity per year, but that figure is expected to triple by the end of this decade, and could jump from 4 percent of all the world’s energy usage to 25 percent — unless more power-saving Arm chip designs, along with their accompanying optimized software and firmware, are used in the infrastructure for these centers.
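A quick back-of-the-envelope check of those figures — a sketch using only the numbers cited above, where the totals are derived rather than reported:

```python
# Back-of-the-envelope arithmetic for the data center energy figures
# cited above. The inputs are the article's numbers, not measured data.

baseline_twh = 460       # approximate annual data center consumption today
baseline_share = 0.04    # cited share of total energy usage

# Total energy usage implied by the cited baseline share.
implied_total_twh = baseline_twh / baseline_share   # 11,500 TWh

# "Expected to triple by the end of this decade."
projected_twh = baseline_twh * 3                    # 1,380 TWh

# Share of today's implied total that the tripled figure would represent.
# Note this lands at 12%, not 25% -- the higher figure presumably rests
# on faster data center growth or a different denominator.
projected_share = projected_twh / implied_total_twh

print(f"Implied total energy usage: {implied_total_twh:,.0f} TWh")
print(f"Projected data center usage: {projected_twh:,.0f} TWh")
print(f"Projected share of today's total: {projected_share:.0%}")
```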
From IP to platform: a significant shift
As AI workloads scale in complexity and power requirements, Arm is reorganizing its offerings around complete compute platforms.
These platforms allow for faster integration, more efficient scaling, and lower complexity for partners building AI-capable chips.
To reflect this shift, Arm is retiring its prior naming conventions and introducing new product families that are organized by market:
Neoverse for infrastructure
Niva for PCs
Lumex for mobile
Zena for automotive
Orbis for IoT and edge AI
The Mali brand will continue to represent GPU offerings, integrated as components within these new platforms.
Alongside the renaming, Arm is overhauling its product numbering system. IP identifiers will now correspond to platform generations and performance tiers labeled Ultra, Premium, Pro, Nano, and Pico. This structure is aimed at making the roadmap more transparent to customers and developers.
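Arm has not published an exact identifier format, but the scheme described above (one family per market, plus a generation and a performance tier) might be sketched as follows. The family and tier names come from the announcement; the identifier layout and the example are hypothetical:

```python
from dataclasses import dataclass

# Platform families by target market, per Arm's announcement.
FAMILIES = {
    "infrastructure": "Neoverse",
    "pc": "Niva",
    "mobile": "Lumex",
    "automotive": "Zena",
    "iot_edge_ai": "Orbis",
}

# Performance tiers named in the announcement, highest to lowest.
TIERS = ["Ultra", "Premium", "Pro", "Nano", "Pico"]

@dataclass
class PlatformId:
    """Hypothetical structured identifier: family + generation + tier.

    Arm has not published the exact identifier format; this layout only
    illustrates the scheme the article describes.
    """
    family: str
    generation: int
    tier: str

    def __post_init__(self):
        if self.family not in FAMILIES.values():
            raise ValueError(f"unknown platform family: {self.family}")
        if self.tier not in TIERS:
            raise ValueError(f"unknown performance tier: {self.tier}")

    def __str__(self):
        return f"{self.family} {self.tier} (gen {self.generation})"

# Example: a hypothetical second-generation mobile platform, top tier.
print(PlatformId(family="Lumex", generation=2, tier="Ultra"))
```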
Emboldened by strong results
The rebranding follows Arm’s strong Q4 fiscal year 2025, in which the company crossed the $1 billion mark in quarterly revenue for the first time.
Total revenue hit $1.24 billion, up 34% year-over-year, driven by both record licensing revenue and royalty revenue.
Notably, this royalty growth was driven by increasing deployment of the Armv9 architecture and adoption of Arm Compute Subsystems (CSS) across smartphones, cloud infrastructure, and edge AI.
The mobile market was a standout: while global smartphone shipments grew less than 2%, Arm’s smartphone royalty revenue rose roughly 30%.
The company also entered its first automotive CSS agreement with a leading global EV manufacturer, furthering its penetration into the high-growth automotive market.
While Arm hasn’t disclosed the EV manufacturer’s name yet, Badani told VentureBeat that the company sees automotive as a major growth area, alongside AI model providers and cloud hyperscalers such as Google and Amazon.
“We’re looking at automotive as a major growth area and we believe that AI and other advances like self-driving are going to be standard, which our designs are perfect for,” the CMO told VentureBeat.
Meanwhile, cloud providers like AWS, Google Cloud, and Microsoft Azure continued expanding their use of Arm-based silicon to run AI workloads, affirming Arm’s growing influence in data center compute.
Growing a new platform ecosystem with software and vertically integrated products
Arm is complementing its hardware platforms with expanded software tools and ecosystem support.
Its extension for GitHub Copilot, now free for all developers, lets users optimize their code for Arm’s architecture.
More than 22 million developers now build on Arm, and its Kleidi AI software layer has surpassed 8 billion cumulative installs across devices.
Arm’s leadership sees the rebrand as a natural step in its long-term strategy. By providing vertically integrated platforms with performance and naming clarity, the company aims to meet increasing demand for energy-efficient AI compute from device to data center.
As Haas wrote in Arm’s blog post, Arm’s compute platforms are foundational to a future where AI is everywhere—and Arm is poised to deliver that foundation at scale.
What it means for AI and data decision makers
This strategic repositioning is likely to reshape how technical decision makers across AI, data, and security roles approach their day-to-day work and future planning.
For those managing large language model lifecycles, the clearer platform structure offers a more streamlined path for selecting compute architectures optimized for AI workloads.
As model deployment timelines tighten and the bar for efficiency rises, having predefined compute systems like Neoverse or Lumex could reduce the overhead required to evaluate raw IP blocks and allow faster execution in iterative development cycles.
For engineers orchestrating AI pipelines across environments, the modularity and performance tiering within Arm’s new architecture could help simplify pipeline standardization.
It introduces a practical way to align compute capabilities with varying workload requirements—whether that’s running inference at the edge or managing resource-intensive training jobs in the cloud.
These engineers, often juggling system uptime and cost-performance tradeoffs, may find more clarity in mapping their orchestration logic to predefined Arm platform tiers.
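Arm does not ship a selection API for any of this, but a minimal sketch of what such tier-mapping orchestration logic could look like follows. The workload categories, power thresholds, and pairings are purely illustrative assumptions, not Arm guidance:

```python
# Hypothetical orchestration helper mapping workload profiles to the
# platform families and tiers named in Arm's announcement. Thresholds
# and pairings are illustrative, not published Arm recommendations.

def select_platform(workload: str, power_budget_watts: float) -> tuple[str, str]:
    """Return a (family, tier) pair for a workload profile."""
    if workload == "cloud_training":
        # Resource-intensive training jobs map to infrastructure silicon.
        return ("Neoverse", "Ultra")
    if workload == "cloud_inference":
        return ("Neoverse", "Premium" if power_budget_watts > 100 else "Pro")
    if workload == "edge_inference":
        # Edge deployments trade peak performance for power efficiency.
        return ("Orbis", "Nano" if power_budget_watts > 5 else "Pico")
    raise ValueError(f"unknown workload profile: {workload}")

for job, watts in [("cloud_training", 400), ("edge_inference", 3)]:
    family, tier = select_platform(job, watts)
    print(f"{job} ({watts} W) -> {family} {tier}")
```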
Data infrastructure leaders tasked with maintaining high-throughput pipelines and ensuring data integrity may also benefit.
The naming update and system-level integration signal a deeper commitment from Arm to support scalable designs that work well with AI-enabled pipelines.
The compute subsystems may also accelerate time-to-market for custom silicon that supports next-gen data platforms—important for teams that operate under budget constraints and limited engineering bandwidth.
Security leaders, meanwhile, will likely see implications in how embedded security features and system-level compatibility evolve within these platforms.
With Arm aiming to offer consistent architecture across edge and cloud, security teams can more easily plan for and enforce end-to-end protections, especially when integrating AI workloads that demand both performance and strict access controls.
The broader effect of this branding shift is a signal to enterprise architects and engineers: Arm is no longer just a component provider—it’s offering full-stack foundations for how AI systems are built and scaled.