Like its namesake, NVIDIA DGX SATURNV is yielding insights that have far-reaching impact, helping us build the best AI architecture for every enterprise. In this talk, we describe the architecture of SATURNV, and how we use it every day at NVIDIA to run our deep learning workloads for both production and research use cases. We explore how the NVIDIA GPU Cloud software is used to manage and schedule work on SATURNV, and how it gives us the agility to rapidly respond to business-critical projects. We also present some of the results of our research in operating this unique GPU-accelerated data center.
We describe the software stack for DGX-1, including the system software, optimized deep learning frameworks, and cloud services. We show how DGX-1 can be operated through its cloud services to provide high-performance compute resources for an individual, team, or department, complete with scheduling, monitoring, notifications, and dashboard UIs for both users and administrators. Finally, we explain the advantages of application delivery through NVIDIA Docker containers and show how the customer's investment grows in value over time as we add new containers and improve performance through system software updates.
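As a rough illustration of the container-based delivery model described above, the commands below sketch pulling an optimized framework container from the NGC registry and running it with GPU access. This is a minimal sketch, not the talk's exact workflow: the image tag and the `train.py` script are illustrative placeholders, and the modern `docker run --gpus` syntax (provided by the NVIDIA Container Toolkit) is assumed; earlier DGX-1 deployments used the `nvidia-docker` wrapper instead.

```shell
# Pull an optimized deep learning framework container from the NGC registry.
# The tag is illustrative; available tags are listed on ngc.nvidia.com.
docker pull nvcr.io/nvidia/tensorflow:24.03-tf2-py3

# Run it with access to all GPUs (requires the NVIDIA Container Toolkit),
# mounting the current directory so the container sees local code and data.
# train.py is a hypothetical user script, not part of the container.
docker run --rm --gpus all \
    -v "$PWD":/workspace -w /workspace \
    nvcr.io/nvidia/tensorflow:24.03-tf2-py3 \
    python train.py
```

Because the frameworks ship inside versioned containers, upgrading to a newer optimized build is just a matter of pulling a newer tag, which is what lets performance improve over time without touching the host system.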
Today, billions of sensors gathering zettabytes of data are offering organizations a treasure trove of information that can help them better serve their customers' needs. With advances in 5G infrastructure, companies now have the ability to bring AI models to the edge, where the data is generated and real-time decisions need to be made. Kubernetes eliminates many of the manual processes involved in deploying, managing, and scaling applications, and is becoming a standard for deployment from the data center to the edge. NVIDIA NGC product and engineering experts will walk through the latest enhancements to the NGC GPU-accelerated software hub, and demonstrate how NVIDIA is facilitating the deployment and management of AI applications at the edge.
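To make the Kubernetes deployment path concrete, the manifest below is a minimal, hypothetical sketch of deploying a GPU-accelerated inference service from an NGC container to an edge cluster. The names and image tag are illustrative, and it assumes the NVIDIA device plugin is installed on the node so the `nvidia.com/gpu` resource can be scheduled.

```yaml
# Hypothetical Deployment of an NGC inference container to an edge node.
# Image tag and resource names are illustrative, not from the talk.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: triton-edge
spec:
  replicas: 1
  selector:
    matchLabels:
      app: triton-edge
  template:
    metadata:
      labels:
        app: triton-edge
    spec:
      containers:
      - name: triton
        image: nvcr.io/nvidia/tritonserver:24.03-py3
        resources:
          limits:
            nvidia.com/gpu: 1  # requires the NVIDIA device plugin on the node
```

Requesting the GPU through the standard Kubernetes resource model is what lets the same manifest and tooling manage AI workloads uniformly from the data center to the edge.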