As deep learning becomes more prevalent, one challenging aspect is the time it takes to set up and maintain systems. Finding the best combination of framework version, drivers, runtimes, operating system, and patches, and then testing all of these pieces to ensure they work well together, takes valuable time away from your deep learning goals. In this session, we'll talk about a better way to get your projects up and running with GPU-accelerated deep learning containers from the NVIDIA GPU Cloud (NGC) container registry. We'll discuss the variety of software containers available from NGC and how to use them in your on-prem, cloud, or hybrid cloud deployments.
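A minimal sketch of the deployment workflow the session describes, assuming Docker with the NVIDIA Container Toolkit installed; the image tag shown is illustrative, not a specific recommendation:

```shell
# Pull a GPU-accelerated framework container from the NGC registry
# (tag is an example; browse ngc.nvidia.com for current versions)
docker pull nvcr.io/nvidia/pytorch:23.10-py3

# Run it interactively with GPU access
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:23.10-py3
```

The same image runs unchanged on an on-prem workstation, a DGX system, or a cloud GPU instance, which is the portability point the session makes.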
NAMD and VMD provide state-of-the-art molecular simulation, analysis, and visualization tools that leverage a panoply of GPU acceleration technologies to achieve performance levels that enable scientists to routinely apply research methods that were formerly too computationally demanding to be practical. To make state-of-the-art MD simulation and computational microscopy workflows available to a broader range of molecular scientists, including non-traditional users of HPC systems, our center has begun producing pre-configured container images and Amazon EC2 AMIs that streamline deployment, particularly for specialized occasional-use workflows, e.g., refinement of atomic structures obtained through cryo-electron microscopy. This talk will describe the latest technological advances in NAMD and VMD using CUDA, OpenACC, and OptiX, including early results on ORNL Summit, state-of-the-art RTX hardware ray tracing on Turing GPUs, and easy deployment using containers and cloud computing infrastructure.
NVIDIA offers several containerized applications in HPC, visualization, and deep learning. We have also enabled a broad array of container-related technologies for GPUs, with upstreamed improvements to community projects and with tools that are seeing broad interest and adoption. Furthermore, NVIDIA is acting as a catalyst for the broader community in enumerating key technical challenges for developers, admins, and end users, and is helping to identify gaps and drive them to closure. This talk describes NVIDIA's new developments and upcoming efforts. It outlines progress in the most important technical areas, including multi-node containers, security, and scheduling frameworks. It highlights the breadth and depth of interactions across the HPC community that are making the latest, high-quality HPC applications available on platforms that include GPUs.
Containers simplify application deployment in the data center by wrapping applications into an isolated virtual environment. By including all application dependencies, such as binaries and libraries, application containers run seamlessly in any data center environment. The HPC application containers available on NVIDIA GPU Cloud (NGC) dramatically improve ease of application deployment while delivering optimized performance. However, if the desired application is not available in the NGC registry, building HPC containers from scratch trades one set of challenges for another. Parts of the software environment typically provided by the HPC data center must be redeployed inside the container. For those used to just loading the relevant environment modules, installing a compiler, MPI library, CUDA, and other core HPC components from scratch may be daunting. HPC Container Maker (HPCCM) is an open-source project that addresses the challenges of creating HPC application containers. Scott McMillan will present how HPCCM makes it easier to create HPC application containers by separating the choice of what should go into a container image from the details of how to realize it, and will cover best practices to minimize container development effort, minimize image size, and take advantage of image layering.
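To illustrate the separation HPCCM provides, here is a short recipe sketch: the recipe names *what* goes into the image using HPCCM building blocks, and the `hpccm` tool generates the corresponding Dockerfile or Singularity definition file. The base image and component versions below are illustrative assumptions, not recommendations from the talk:

```python
# recipe.py -- an HPCCM recipe fragment; building blocks like
# baseimage, gnu, mlnx_ofed, and openmpi are provided by HPCCM.
# Base image and versions are example values.
Stage0 += baseimage(image='nvidia/cuda:11.8.0-devel-ubuntu22.04')
Stage0 += gnu()          # GNU compiler toolchain
Stage0 += mlnx_ofed()    # Mellanox OFED for InfiniBand support
Stage0 += openmpi(version='4.1.4')
```

Generating container specifications from the same recipe might then look like `hpccm --recipe recipe.py --format docker` or `--format singularity`, so one recipe serves both container runtimes.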