Migrating and building solutions in the cloud can be challenging, expensive, and slower than what you're used to on-premises. Oracle Cloud Infrastructure (OCI) has been working with NVIDIA to give you the on-premises performance you need with the cloud benefits and flexibility you expect. In this session we'll discuss how you can take big data and analytics workloads, database workloads, or traditional enterprise HPC workloads that require multiple components along with a portfolio of accelerated hardware and not only migrate them to the cloud, but run them successfully. We'll discuss solution architectures, showcase demos and benchmarks, and take you through the cloud migration journey. We'll detail the latest instances that OCI provides, along with cloud-scale services.
Learn how to run GPU workloads securely in isolated unprivileged containers across a multi-node LXD cluster. We'll explain what unprivileged containers are and why they're safe, and then use demos to show how the power of LXD can be used to create a whole cluster of unprivileged containers that are isolated from each other. We will also show how LXD makes it trivial to pass through physical GPUs to containers and how it exposes a wide range of NVIDIA-specific options by leveraging NVIDIA's libnvidia-container library. This way each container can easily get a dedicated GPU or GPUs to run workloads. This session will help participants understand how running GPU-intensive workloads can be effortless with the help of a dedicated container manager that is aware of NVIDIA-specific features.
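The LXD workflow described above can be sketched in a few commands (a hedged sketch: the container name and image are illustrative, and `nvidia.runtime` requires libnvidia-container on the host):

```shell
# Launch an unprivileged container (LXD containers are unprivileged by default)
lxc launch ubuntu:18.04 gpu-worker

# Pass through a physical GPU; the "id" property pins a specific
# device (here GPU index 0) so each container gets a dedicated GPU
lxc config device add gpu-worker gpu0 gpu id=0

# Enable the NVIDIA runtime so libnvidia-container injects the
# matching userspace driver libraries into the container
lxc config set gpu-worker nvidia.runtime true
lxc restart gpu-worker

# Verify the GPU is visible inside the container
lxc exec gpu-worker -- nvidia-smi
```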
Do you have a GPU cluster or air-gapped environment that you are responsible for but don't have an HPC background? NVIDIA DGX POD is a new way of thinking about AI infrastructure, combining DGX servers with networking and storage to accelerate AI workflow deployment and time to insight. We'll discuss lessons learned about building, deploying, and managing AI infrastructure at scale, from design to deployment to management and monitoring. We will show how the DGX POD management software (DeepOps), along with our storage partners' reference architectures, can be used for the deployment and management of multi-node GPU clusters for deep learning and HPC environments in an on-premises, optionally air-gapped data center. The modular nature of the software also allows experienced administrators to pick and choose items that may be useful, making the process compatible with their existing software or infrastructure.
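A typical DeepOps deployment starts from its public repository (a rough sketch; script and playbook paths vary between DeepOps releases, and the inventory contents are site-specific):

```shell
# Fetch DeepOps and bootstrap the provisioning environment
git clone https://github.com/NVIDIA/deepops.git
cd deepops
./scripts/setup.sh

# Describe the cluster in the Ansible inventory
# (edit config/inventory to list your management and GPU nodes)
vi config/inventory

# Deploy a Kubernetes-based GPU cluster with Ansible
# (playbook name is illustrative of the DeepOps workflow)
ansible-playbook -l k8s-cluster playbooks/k8s-cluster.yml
```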
Our talk will describe the business and technical challenges posed by subsurface exploration for oil and gas and how the high-performance RiVA computing platform addresses these challenges. Oil and gas companies have struggled to find experts to manage the complex systems needed for subsurface exploration. In addition, the large data sets these engineers require often take more than eight hours to load. We'll discuss RiVA, which was built to address problems with slow data transfers, and describe how it offers performance 30 times faster than other solutions and reduces deployment time from years to months. We'll cover the technologies that make our solution possible, including NVIDIA GPUs, Mechdyne TGX, and the Leostream Connection Broker. In addition, a RiVA customer will share challenges and show how deploying RiVA helped lower costs during deployment and production.
NVIDIA GPU Cloud is a single source for researchers and developers seeking access to GPU-optimized deep learning framework containers for TensorFlow, PyTorch, and MXNet. We'll cover the latest NVIDIA features integrated into these popular frameworks, the benefits of using them through NGC monthly container updates, and tips and tricks to maximize performance on NVIDIA GPUs for your deep learning workloads. We'll dive into the anatomy of a deep learning container, breaking down the software that makes up the container, and present the optimizations we have implemented to get the most out of NVIDIA GPUs. For both new and experienced users of our deep learning framework containers, this session will provide valuable insight into the benefits of NVIDIA-accelerated frameworks available as containers that are easy to pull and run.
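Pulling and running an NGC framework container typically looks like the following (a hedged sketch: the `19.05-py3` tag is only an example of the monthly tag scheme, and the GPU flag depends on your Docker version):

```shell
# Authenticate against the NGC registry with your NGC API key
docker login nvcr.io

# Pull a monthly-updated, GPU-optimized framework container
# (pick the current monthly tag for your framework)
docker pull nvcr.io/nvidia/tensorflow:19.05-py3

# Run it with GPU access; on Docker >= 19.03 use --gpus,
# on older installs use nvidia-docker or --runtime=nvidia
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:19.05-py3
```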
Whether it's for AI, data science and analytics, or HPC, GPU-accelerated software can make possible the previously impossible. But it's well known that these cutting-edge software tools are often complex to use, hard to manage, and difficult to deploy. We'll explain how NGC solves these problems and gives users a head start on their projects by simplifying the use of GPU-optimized software. NVIDIA product management and engineering experts will walk through the latest enhancements to NGC and give examples of how software from NGC can improve GPU-accelerated workflows.
The hardest part of cloud computing engineering is operations because of the complexity of managing thousands of machines, but machine learning can add intelligence to public cloud operation and maintenance. We use RAPIDS to accelerate machine learning and the NVIDIA TensorRT Inference Server for GPU load balancing and improved GPU utilization. We'll explain how to use traditional machine learning algorithms such as ARIMA, XGBoost, and random forest for load prediction, load classification, user profiling, exception prediction, and other scenarios. Learn how to use GPUs for data preprocessing and algorithm acceleration for large-scale data analysis and machine learning of massive public cloud data. In addition, we'll cover how we implemented a large-scale training and prediction service platform based on Dask and NVIDIA's inference server. The platform can support large-scale GPU parallel computing and prediction requests.
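Standing up the TensorRT Inference Server for prediction requests can be sketched as follows (hedged: the container tag, model repository path, and exact flag spelling vary by server release):

```shell
# Pull the TensorRT Inference Server container from NGC
docker pull nvcr.io/nvidia/tensorrtserver:19.05-py3

# Start the server with a local model repository mounted in;
# the ports expose HTTP (8000), gRPC (8001), and metrics (8002)
docker run --gpus all --rm -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model_repository:/models \
  nvcr.io/nvidia/tensorrtserver:19.05-py3 \
  trtserver --model-store=/models
```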
VDI users across multiple industries can now harness the power of the world's most advanced virtual workstation to enable increasingly demanding workflows. This session brings together graphics virtualization thought leaders and experts from across the globe who have deep knowledge of NVIDIA virtual GPU architecture and years of experience implementing VDI across multiple hypervisors. Panelists will discuss how they transformed organizations, including how they leveraged multi-GPU support to boost GPU horsepower for photorealistic rendering and data-intensive simulation, and how they used NGC containers to stand up GPU-accelerated deep learning and HPC VDI environments with ease.
With the growth in demand for Intelligent Video Analytics (IVA), NVIDIA virtual GPUs provide a secure solution that optimizes GPU utilization for inference-based deep learning applications such as loss prevention, facial recognition, pose estimation, and many other use cases.
Learn how HPE and NVIDIA are simplifying infrastructure and delivering extreme graphics and performance on the HPE SimpliVity HCI platform. We'll talk about EUC offerings and use cases for HPE SimpliVity with NVIDIA GPUs and highlight performance metrics achieved through industry-standard benchmarks.
We'll talk about how we achieved lower costs, better density, and guaranteed performance while migrating from legacy large-scale GPU passthrough VDI architecture to the latest NVIDIA vGPU solution. We'll also discuss our work delivering remote workstations to subcontractors through our VDI solution for 6,000 concurrent users since 2014.
This customer panel brings together AI implementers who have deployed deep learning at scale. The discussion will focus on specific technical challenges they faced, solution design considerations, and best practices learned from implementing their respective solutions.
We will present NVIDIA's solution for interactive, real-time streaming of VR content (such as games and professional applications) from the cloud to a low-powered client driving a VR/AR headset. We will outline a few of the challenges, describe our design, and share some performance and quality metrics.
Learn how CannonDesign has incorporated NVIDIA's RTX technology into their visualization workflows. During this presentation, we will discuss how CannonDesign is leveraging the power of the new Quadro RTX GPUs to optimize rendering times using V-Ray Next and Unreal Engine. We will share our evolutionary path to better rendering solutions, initial challenges with RTX, and our current workflow through case studies. This session will be of interest to attendees with a basic understanding of visualization workflows.
We'll discuss Project MagLev, NVIDIA's internal end-to-end AI platform for developing its self-driving car software, DRIVE. We'll explore the platform that supports continuous data ingest from multiple cars producing terabytes of data per hour. We'll also cover how the platform enables autonomous AI designers to iterate training of new neural network designs across thousands of GPU systems and validate the behavior of these designs over multi-petabyte data sets. We will talk about our overall architecture for everything from data center deployment to AI pipeline automation, as well as large-scale AI dataset management, AI training, and testing.
Global enterprises need to compress analysis time frames to update the business in real time, a process called active analytics. We will discuss and demo how to bring together the key elements of an active analytics architecture, including historical, streaming, and graph analytics, location intelligence, and machine learning for predictive analytics.