Exciting advances in technology have propelled AI computing into the mainstream. The demand for advanced visualization with photorealistic real-time rendering, and for efficient exascale-class high-performance computing fed by massive data collection, has driven development of the key elements needed to build the most advanced AI computational engines. While these engines, connected by high-speed buses such as NVLink, now deliver truly scalable AI computation within a single system, the challenge of breaking out of the box with large-scale AI is upon us. In this talk we will discuss insights gained from creating NVIDIA's SATURNV AI supercomputer: how to use this new class of dense AI computational engines efficiently, and the keys to optimizing data centers for multi-node GPU computing targeted at today's neural-network and HPC workloads.
Gain insight into how NVIDIA built the world's most efficient supercomputer for deep learning. Learn (1) how DGX SATURNV's efficiency is key to building machines capable of reaching exascale speeds, (2) the blueprint for building an AI supercomputer and why organizations invest in such an architecture, and (3) the kinds of problems that can be solved with the massive computing power of 124 NVIDIA Pascal-powered DGX-1 server nodes.