Nvidia wants to speed up data transfer by connecting data center GPUs to SSDs
(credit: Getty Images)

Microsoft brought DirectStorage to Windows PCs this week. The API promises faster load times and more detailed graphics by letting game developers build apps that load graphical data from the SSD directly to the GPU. Now, Nvidia and IBM have created a similar SSD/GPU technology, but they are aiming it at the massive data sets found in data centers.

Instead of targeting console or PC gaming like DirectStorage, Big accelerator Memory (BaM) is meant to give data centers quick access to vast amounts of data in GPU-intensive applications, like machine-learning training, analytics, and high-performance computing, according to a research paper spotted by The Register this week. Entitled "BaM: A Case for Enabling Fine-grain High Throughput GPU-Orchestrated Access to Storage" (PDF), the paper by researchers at Nvidia, IBM, and a few US universities proposes a more efficient way to run next-generation applications in data centers with massive computing power and memory bandwidth.

BaM also differs from DirectStorage in that its creators plan to make the system architecture open source.