Digital storage and memory hold the data used to power AI training and store the models used in inference. At next week's NVIDIA GPU Technology Conference (GTC), several storage and memory companies will be showing their latest products and services to support growing AI workloads. This article looks at two of these developments from pre-event announcements by Kioxia and Pliops. We will have more to say on digital storage and memory developments once GTC begins.

Data centers often use multiple types of digital storage technology to trade off performance requirements against storage costs. For instance, for artificial intelligence (AI) training, fast solid-state drives (SSDs) are often used to feed data into memory, usually dynamic random-access memory (DRAM), which supports the fast data access required for efficient and effective use of GPUs.

In the last few months, most of the major SSD companies have announced high-capacity QLC SSDs intended to provide warm as well as hot data storage, perhaps displacing some HDD secondary storage, particularly where storage density in a rack is a major factor in the choice of secondary storage. Kioxia has now joined the ranks of SSD companies offering high-capacity QLC SSDs.

Kioxia announced its 122.88TB LC9 series NVMe SSD, targeted at AI applications; see the image below. The SSD comes in a 2.5-inch form factor and is built with the company's 8th-generation 3D QLC 2-terabit (Tb) die, which uses CMOS directly Bonded to Array (CBA) technology to increase density on the memory die. The drive has a PCIe 5.0 interface and dual-port capability for greater fault tolerance or for connectivity to multiple computer systems.
It can provide up to 128 gigatransfers per second.

Kioxia LC9 SSD (Image: Kioxia)

The company says that high-capacity drives are critical for parts of the AI workload, particularly for large language models (LLMs): training on and storing large data sets, and rapid retrieval of information for inference and model fine-tuning. This new SSD can also be used with the recently announced KIOXIA AiSAQ technology, which can improve scalable Retrieval Augmented Generation (RAG) performance by storing vector database elements on SSDs instead of in more costly DRAM.

Pliops, a supplier of solid-state storage and accelerator products, announced a strategic collaboration with the vLLM Production Stack developed at LMCache Lab at the University of Chicago. This stack is intended to greatly improve large language model (LLM) inference performance. Pliops provides shared storage and efficient vLLM cache offloading, while LMCache Lab provides a scalable framework for running multiple instances. The combined solution can recover from failed instances by leveraging the Pliops KV storage backend. The collaboration introduces a petabyte tier of memory below HBM for GPU compute applications. Using disaggregated smart storage, computed KV caches are retained and retrieved efficiently, significantly speeding up vLLM inference.

Data centers need storage and memory to provide the data used in AI training and inference. At GTC, Kioxia will be showing its high-capacity PCIe 5.0 SSD, and Pliops will be showing the shared storage it uses to improve LLM inference performance.
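To make the AiSAQ idea described above concrete, the toy sketch below keeps embedding vectors in a file on storage and streams them off disk during a similarity search, so only one candidate vector sits in RAM at a time. This is purely illustrative of vectors-on-storage RAG; none of these function names come from the AiSAQ product, and a real system would use an on-disk index rather than a linear scan.

```python
import math
import os
import struct
import tempfile

DIM = 4  # toy embedding dimension (illustrative)

def write_vectors(path, vectors):
    """Serialize float vectors to a binary file, one fixed-size record each."""
    with open(path, "wb") as f:
        for vec in vectors:
            f.write(struct.pack(f"{DIM}f", *vec))

def search_from_disk(path, query, top_k=1):
    """Stream records off storage and rank them by cosine similarity,
    instead of holding the whole vector database in DRAM."""
    qnorm = math.sqrt(sum(q * q for q in query)) or 1.0
    record_size = struct.calcsize(f"{DIM}f")
    scores = []
    with open(path, "rb") as f:
        idx = 0
        while chunk := f.read(record_size):
            vec = struct.unpack(f"{DIM}f", chunk)
            dot = sum(q * v for q, v in zip(query, vec))
            vnorm = math.sqrt(sum(v * v for v in vec)) or 1.0
            scores.append((dot / (qnorm * vnorm), idx))
            idx += 1
    return [i for _, i in sorted(scores, reverse=True)[:top_k]]

fd, path = tempfile.mkstemp()
os.close(fd)
write_vectors(path, [(1, 0, 0, 0), (0, 1, 0, 0), (0.9, 0.1, 0, 0)])
print(search_from_disk(path, (1, 0, 0, 0), top_k=2))  # → [0, 2]
```

The trade-off this illustrates is the one in the article: storage-resident vectors cost far less per terabyte than DRAM, at the price of read latency, which is why dense, fast SSDs matter for scalable RAG.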
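The KV-cache offloading that Pliops and LMCache Lab describe can be sketched at a high level: computed attention key/value caches are written to a storage tier keyed by the token prefix that produced them, so a later request sharing that prefix can reuse the cache instead of recomputing it. The class and method names below are hypothetical stand-ins, not the Pliops or vLLM API.

```python
import hashlib

class KVCacheStore:
    """Stand-in for a disaggregated KV storage tier below HBM.
    Illustrative only; a real backend would persist off-GPU."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def _key(tokens):
        # Content-address the cache by the exact token prefix.
        return hashlib.sha256(repr(tuple(tokens)).encode()).hexdigest()

    def put(self, tokens, kv_cache):
        """Offload the KV cache computed for this token prefix."""
        self._store[self._key(tokens)] = kv_cache

    def get_longest_prefix(self, tokens):
        """Return (prefix_length, cached_kv) for the longest stored
        prefix of `tokens`, or (0, None) on a miss."""
        for n in range(len(tokens), 0, -1):
            kv = self._store.get(self._key(tokens[:n]))
            if kv is not None:
                return n, kv
        return 0, None

store = KVCacheStore()
store.put([1, 2, 3], "kv-for-1-2-3")  # offloaded after a first request
hit, kv = store.get_longest_prefix([1, 2, 3, 4, 5])
print(hit, kv)  # → 3 kv-for-1-2-3
```

Only the tokens after position `hit` need fresh GPU computation, which is the mechanism behind the claimed inference speedup, and because the store lives outside any single vLLM instance, a restarted instance can recover cached state from it.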