Micron, Samsung, and SK Hynix preview new HBM4 memory for AI acceleration
www.techspot.com
Recap: The AI accelerator race is driving rapid innovation in high-bandwidth memory technologies. At this year's GTC event, memory giants Samsung, SK Hynix, and Micron previewed their next-generation HBM4 and HBM4e solutions. While data center GPUs are still transitioning to HBM3e, the memory roadmaps revealed at Nvidia GTC make it clear that HBM4 will be the next big step. ComputerBase attended the event and noted that the new standard enables serious density and bandwidth improvements over HBM3.

SK Hynix showcased its first 48GB HBM4 stack, composed of 16 layers of 3GB chips running at 8Gbps. Samsung and Micron had similar 16-high HBM4 demos, with Samsung claiming that speeds will ultimately reach 9.2Gbps within this generation. Expect 12-high 36GB stacks to become the mainstream configuration for HBM4 products launching in 2026. Micron says its HBM4 solution will boost performance by over 50 percent compared to HBM3e.

However, memory makers are already looking beyond HBM4 to HBM4e and even higher capacity points. Samsung's roadmap calls for 32Gb-per-layer DRAM, enabling 48GB and even 64GB per stack with data rates between 9.2 and 10Gbps. SK Hynix hinted at stacks of 20 or more layers, allowing capacities of up to 64GB using its 3GB chips on HBM4e.

These high densities are critical for Nvidia's forthcoming Rubin GPUs aimed at AI training. The company revealed that Rubin Ultra will use 16 stacks of HBM4e for a colossal 1TB of memory per GPU when it arrives in 2027. Nvidia claims that with four chiplets per package and 4.6PB/s of bandwidth, Rubin Ultra will enable a combined 365TB of memory in the NVL576 system.

Impressive as these numbers are, they will carry an ultra-premium price tag. VideoCardz notes that consumer graphics cards seem unlikely to adopt HBM variants anytime soon.

The HBM4 and HBM4e generations represent a critical bridge for continued AI performance scaling.
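The stack and per-GPU capacities quoted above follow from simple multiplication. A minimal sketch of that arithmetic, assuming capacity is just DRAM layers times per-layer density (ignoring any ECC or packaging overhead):

```python
# Checking the HBM capacity figures cited in the article.
# Assumption: raw stack capacity = layer count * GB per DRAM layer.

def stack_capacity_gb(layers: int, gb_per_layer: int) -> int:
    """Raw HBM stack capacity in GB."""
    return layers * gb_per_layer

# HBM4: SK Hynix's 16-high stack of 3GB (24Gb) dies
print(stack_capacity_gb(16, 3))   # 48 GB

# Mainstream 12-high HBM4 stacks expected in 2026
print(stack_capacity_gb(12, 3))   # 36 GB

# HBM4e: Samsung's 32Gb (4GB) layers in a 16-high stack
print(stack_capacity_gb(16, 4))   # 64 GB

# Rubin Ultra: 16 HBM4e stacks per GPU
print(16 * stack_capacity_gb(16, 4))  # 1024 GB = 1 TB
```

The same multiplication shows why taller stacks matter: SK Hynix's hinted 20-plus-layer stacks reach the 64GB mark even with 3GB dies.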
If memory makers can deliver on their aggressive density and bandwidth roadmaps over the next few years, it will massively boost data-hungry AI workloads. Nvidia and others are counting on it.

Image credit: ComputerBase