At GTC22 today, Nvidia unveiled its new H100 GPU, the first of its new “Hopper” architecture, along with a slew of accompanying configurations, systems, technology and software. To show off these advances, it also unveiled a new, massive supercomputer set to debut somewhere in the United States in a few months: Eos, named for the Greek goddess of the dawn.

Eos is based on the fourth-generation DGX system - the DGX H100 - that was also launched today, and which is powered by eight NVLink-connected H100 GPUs (more on all of that here). An external NVLink switch can then connect these DGX H100s into Pods, which offer up to an exaflop of AI performance and can themselves be linked in 32-node increments to form systems like Eos. In total, Eos (pictured in a rendering in the header) will contain 18 of these 32-DGX H100 Pods, for a total of 576 DGX H100 systems, 4,608 H100 GPUs, 500 Quantum-2 InfiniBand switches and 360 NVLink switches.

“Eos will offer an incredible 18 exaflops of AI performance, and we expect it to be the world’s fastest AI supercomputer when it’s deployed,” said Paresh Kharya, senior director of product management and marketing at Nvidia, in a prebriefing for press and analysts.

As each H100 delivers 30 teraflops of peak FP64 (IEEE) compute power, the traditional HPC peak works out to 138.2 FP64 petaflops, while Nvidia’s FP64 tensor core processing format doubles that HPC peak performance to 275 petaflops.

Image courtesy of Nvidia.

18 exaflops of AI performance may make Eos the most performant AI supercomputer, but it remains to be seen whether it will best other “AI supercomputers” on the Linpack metric. (To learn more about a few of the other systems vying to be the world’s fastest AI supercomputer, read coverage like this or this.)

Eos will be leveraged by Nvidia’s internal AI development and software engineering teams for its products, including autonomous vehicles and conversational AI software. Eos will also power Nvidia-led research projects in areas like climate science and digital biology. But Nvidia also, of course, intends Eos to pave the road for clients to build similarly large systems.

“When we’ve got workloads that can really benefit from the H100, and recommenders and language models, now, obviously, that workload will be first on Eos,” Charlie Boyle, vice president and general manager of DGX Systems at Nvidia, told HPCwire. Boyle said that while “[…] the best tools for our research and development teams to use internally,” the more important part for Nvidia’s customers is that “we have the exact copy of what they’re running.”
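The system-size and peak-performance figures quoted above can be sanity-checked with some quick arithmetic. The short Python sketch below assumes the publicly quoted 30-teraflop FP64 peak per H100 and takes Nvidia’s claim that FP64 tensor cores double that peak at face value:

```python
# Sanity check of the Eos figures quoted in the article.
# Per-GPU numbers are Nvidia's published H100 peaks; the tensor-core
# doubling is as stated in the article, not independently verified.

pods = 18               # 32-DGX H100 Pods
dgx_per_pod = 32
gpus_per_dgx = 8        # each DGX H100 holds eight H100 GPUs

dgx_systems = pods * dgx_per_pod          # 576 DGX H100 systems
gpus = dgx_systems * gpus_per_dgx         # 4,608 H100 GPUs

fp64_tflops_per_gpu = 30                  # peak FP64 (IEEE) per H100
fp64_pflops = gpus * fp64_tflops_per_gpu / 1000
fp64_tensor_pflops = fp64_pflops * 2      # article: tensor cores double it

print(dgx_systems)                   # 576
print(gpus)                          # 4608
print(round(fp64_pflops, 1))         # 138.2
print(round(fp64_tensor_pflops, 1))  # 276.5 (Nvidia rounds to ~275)
```

The doubled figure works out to roughly 276.5 petaflops; Nvidia’s quoted 275 petaflops appears to be a conservative rounding of the same calculation.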