GTC ON-DEMAND
Abstract:
We are working with NVIDIA to lower the barrier to scientific understanding by improving the communication tools that scientists have access to. NVIDIA has been working with Kitware not only to bring NVIDIA RTX support to ParaView, but also to give ParaView users access to Omniverse. Come see how advancements in ParaView will unlock the next generation of visualization communication and collaboration techniques for your science.
 
Topics:
HPC and Supercomputing
Type:
Talk
Event:
Supercomputing
Year:
2019
Session ID:
SC1915
 
Abstract:
Visualization is a key component of many computational science disciplines. In the past, only typical HPC domains like computational fluid dynamics or cosmology had dataset sizes large enough to require sophisticated multi-node visualization tools. But with increasing detector sizes, large neural networks, and higher-resolution models, application domains that typically used single-node visualization suddenly need performance that goes beyond a single node. In addition, rendering techniques using ray tracing or VR can dramatically increase the visual cues of visualizations, offering novel ways to investigate the data. In this presentation, we will look at a range of large-data problems and a palette of GPU-accelerated visualization tools, and see how they can help in a range of use cases.
 
Topics:
In-Situ & Scientific Visualization, Deep Learning & AI Frameworks, Medical Imaging & Radiology
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9681
 
Abstract:
Real-time ray tracing has finally become reality. Following individual rays through a virtual scene leads to highly accurate renderings, but the associated computational cost has so far prevented its interactive use. While mostly known in the computer graphics space, many computational science and HPC applications perform similar operations. The ray tracing capabilities available on the latest generation of GPUs, including the hardware support for ray tracing via the RT Cores on the Turing GPUs, are therefore relevant not only to applications that generate colorful pixels, but also to applications performing ray-tracing-like operations, including particle tracking in complex geometries or even spatial database searches. In this talk, I will briefly summarize the features enabling real-time ray tracing and will look at the impact this technology has on scientific visualization and HPC applications in general.
 
Topics:
Science and Research
Type:
Talk
Event:
Supercomputing
Year:
2018
Session ID:
SC1831
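 
The "ray-tracing-like operations" mentioned above are easy to make concrete. The following CUDA sketch is illustrative only (it is not code from the talk and does not use OptiX or the RT Cores): each thread follows one particle along a straight path and reports the distance to the nearest sphere it would hit, which is exactly the kind of intersection query the RT Core hardware accelerates.

// Illustrative sketch, not session code: a "ray-tracing-like" query in plain
// CUDA. Each thread finds the nearest sphere hit along its ray's direction.
#include <cfloat>
#include <cstdio>
#include <cuda_runtime.h>

struct Ray    { float ox, oy, oz, dx, dy, dz; };   // origin + unit direction
struct Sphere { float cx, cy, cz, r; };

__global__ void nearestHit(const Ray* rays, int nRays,
                           const Sphere* spheres, int nSpheres,
                           float* tHit)            // distance to first hit, FLT_MAX if none
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nRays) return;

    Ray ray = rays[i];
    float tMin = FLT_MAX;
    for (int s = 0; s < nSpheres; ++s) {
        // Solve |o + t*d - c|^2 = r^2 for t, with d of unit length.
        float lx = spheres[s].cx - ray.ox;
        float ly = spheres[s].cy - ray.oy;
        float lz = spheres[s].cz - ray.oz;
        float b  = lx * ray.dx + ly * ray.dy + lz * ray.dz;  // projection onto d
        float c  = lx * lx + ly * ly + lz * lz - spheres[s].r * spheres[s].r;
        float disc = b * b - c;
        if (disc < 0.0f) continue;                 // this sphere is missed
        float t = b - sqrtf(disc);                 // nearer of the two roots
        if (t > 0.0f && t < tMin) tMin = t;
    }
    tHit[i] = tMin;
}

int main()
{
    // Three test rays against one sphere of radius 1 centered at (5,0,0).
    const int nRays = 3, nSpheres = 1;
    Ray    hRays[nRays]       = { {0,0,0, 1,0,0}, {0,0,0, 0,1,0}, {0,0,0, -1,0,0} };
    Sphere hSpheres[nSpheres] = { {5, 0, 0, 1} };
    float  hT[nRays];

    Ray*    dRays;    cudaMalloc(&dRays,    sizeof(hRays));
    Sphere* dSpheres; cudaMalloc(&dSpheres, sizeof(hSpheres));
    float*  dT;       cudaMalloc(&dT,       sizeof(hT));
    cudaMemcpy(dRays,    hRays,    sizeof(hRays),    cudaMemcpyHostToDevice);
    cudaMemcpy(dSpheres, hSpheres, sizeof(hSpheres), cudaMemcpyHostToDevice);

    nearestHit<<<1, 32>>>(dRays, nRays, dSpheres, nSpheres, dT);
    cudaMemcpy(hT, dT, sizeof(hT), cudaMemcpyDeviceToHost);

    for (int i = 0; i < nRays; ++i)
        printf("ray %d: t = %g\n", i, hT[i]);      // ray 0 should report t = 4
    return 0;
}

Tracking particles through a triangulated geometry or answering spatial search queries swaps the sphere test for triangle or bounding-box tests, but the access pattern is the same, which is why the RT Cores matter well beyond image generation.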
 
Abstract:
This talk summarizes the ongoing HPC visualization activities and describes the technologies behind the developer zone shown in the booth.
 
Topics:
Deep Learning & AI Frameworks
Type:
Talk
Event:
SIGGRAPH
Year:
2017
Session ID:
SC1735
 
Abstract:
This talk summarizes the ongoing HPC visualization activities and describes the technologies behind the developer zone shown in the booth.
 
Topics:
HPC and Supercomputing, In-Situ & Scientific Visualization
Type:
Talk
Event:
Supercomputing
Year:
2016
Session ID:
SC6105
 
Abstract:
Learn how to leverage the graphics power in your GPU-accelerated supercomputer to turn your simulation data into insight. Starting from simulation data distributed across the nodes of a remote supercomputer, we'll cover various techniques and tools to convert this data into insightful visualizations at your workstation, leading to an end-to-end GPU-accelerated visualization pipeline.
 
Topics:
In-Situ & Scientific Visualization, HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2016
Session ID:
S6645
 
Abstract:
Learn how to visualize your data on GPU-accelerated supercomputers. In this presentation, we will give an overview of data analysis and visualization on GPU-accelerated supercomputers and clusters. In the first part, we will describe the steps necessary to use the GPUs in a remote supercomputer for visualization. We will then provide a brief overview of ParaView, one of the most widely used visualization applications, touching on topics like parallel compositing and in-situ visualization of GPU-resident data.
 
Topics:
Visualization - In-Situ & Scientific, GPU Virtualization, HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2015
Session ID:
S5660
 
Abstract:
In this session, you will learn how to program GPU clusters using the Message Passing Interface (MPI) and OpenACC or CUDA. Part I of this session will explain how to get started by giving a quick introduction to MPI and how it can be combined with OpenACC or CUDA. Part II will explain more advanced topics like GPU-aware MPI and how to overlap communication with computation to hide communication times. Finally, Part III will cover how to use the NVIDIA performance analysis tools in an MPI environment and give an overview of third-party tools specifically designed for GPU clusters.
 
Topics:
HPC and Supercomputing, Performance Optimization
Type:
Tutorial
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4236
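 
As a concrete taste of the "GPU-aware MPI" topic in Part II, here is a minimal sketch (my own, not the tutorial's material) that hands CUDA device pointers directly to MPI in a ring exchange. It assumes a CUDA-aware MPI installation (for example a recent Open MPI or MVAPICH2-GDR build); without that support the buffers would first have to be staged through host memory.

// Illustrative sketch: device buffers passed straight to MPI (CUDA-aware MPI
// assumed). Overlapping communication with computation, also covered in the
// tutorial, would additionally use non-blocking MPI calls and CUDA streams.
#include <mpi.h>
#include <cuda_runtime.h>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    // On multi-GPU nodes one would also pick a device per rank here,
    // e.g. cudaSetDevice(localRank).

    const int n = 1 << 20;
    double *d_send, *d_recv;
    cudaMalloc(&d_send, n * sizeof(double));
    cudaMalloc(&d_recv, n * sizeof(double));
    cudaMemset(d_send, 0, n * sizeof(double));

    // Ring exchange: the MPI library reads and writes the device buffers
    // directly (ideally via GPUDirect); no cudaMemcpy to the host is needed.
    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;
    MPI_Sendrecv(d_send, n, MPI_DOUBLE, right, 0,
                 d_recv, n, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    if (rank == 0) printf("ring exchange of %d doubles per rank done\n", n);
    cudaFree(d_send);
    cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}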
 
Abstract:
Learn how to take advantage of GPUs to visualize the results of your GPU-accelerated simulation! This session will cover a broad range of visualization and analysis techniques allowing you to investigate your data on the fly. Starting with some basic CUDA/OpenGL interoperability, we will introduce more sophisticated data models allowing you to take advantage of widely used tools like ParaView and VisIt to visualize your GPU-resident data. Topics like parallel compositing, remote visualization, and application steering will be addressed in order to allow you to take full advantage of the GPUs installed in your supercomputing system.
 
Topics:
Large Scale Data Visualization & In-Situ Graphics, Combined Simulation & Real-Time Visualization, Scientific Visualization, HPC and Supercomputing
Type:
Tutorial
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4244
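 
The "basic CUDA/OpenGL interoperability" the session starts from follows a register/map/fill/unmap pattern. The fragment below is an illustrative sketch rather than session code: it assumes an OpenGL context and a vertex buffer object created elsewhere (for example with GLFW and GLEW) and fills that buffer from a CUDA kernel, so the vertex data never leaves the GPU.

// Illustrative sketch: write an OpenGL vertex buffer from CUDA.
// Assumes a current OpenGL context and an existing VBO.
#include <cuda_gl_interop.h>
#include <cuda_runtime.h>

__global__ void writePoints(float4* pts, int n, float t)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pts[i] = make_float4(i * 0.01f, sinf(i * 0.01f + t), 0.0f, 1.0f);
}

void fillVboFromCuda(GLuint vbo, int nPoints, float time)
{
    // For brevity the buffer is registered here; a real application registers
    // it once at startup and only maps/unmaps it every frame.
    cudaGraphicsResource* res = nullptr;
    cudaGraphicsGLRegisterBuffer(&res, vbo, cudaGraphicsMapFlagsWriteDiscard);

    cudaGraphicsMapResources(1, &res, 0);
    float4* dPtr  = nullptr;
    size_t  bytes = 0;
    cudaGraphicsResourceGetMappedPointer((void**)&dPtr, &bytes, res);

    writePoints<<<(nPoints + 255) / 256, 256>>>(dPtr, nPoints, time);

    cudaGraphicsUnmapResources(1, &res, 0);   // OpenGL may now draw the buffer
    cudaGraphicsUnregisterResource(res);
}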
 
Abstract:
In this talk, you will learn how to use the game and visualization wizard's tool chest to accelerate your scientific computing applications. NVIDIA's game physics engine PhysX and the ray tracing framework OptiX offer a wealth of functionality often needed in scientific computing applications. However, due to the different target audiences, these frameworks are generally not very well known to the scientific computing communities. High-frequency electromagnetic simulations, particle simulations in complex geometries, or discrete element simulations are all examples of applications that could immediately benefit from these frameworks. Based on examples, we will talk about the basic concepts of these frameworks, introduce their strengths and their approximations, and show how to take advantage of them from within a scientific application.
 
Topics:
Computational Physics, Numerical Algorithms & Libraries, HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4260
 
Abstract:
Developing successful scientific software is increasingly becoming a collaborative endeavor, joining the talents of a multitude of disciplines. NVIDIA and ETH Zurich are forming a Co-Design Lab for Hybrid Multicore Computing as a joint effort to develop and optimize scientific applications for hybrid computing architectures. In this talk, I will introduce the lab and present some early successes of this new collaboration.
 
Topics:
HPC and Supercomputing
Type:
Talk
Event:
Supercomputing
Year:
2013
Session ID:
SC3130
 
Abstract:
OpenACC has quickly become the standard for accelerating large code bases with GPUs. Using directives, the programmer provides hints about data locality, data dependency, and control flow that allow the compiler to automatically generate efficient GPU code. While the OpenACC model is well suited for a broad range of commonly encountered software patterns, it is sometimes necessary to fine-tune an application with advanced OpenACC directives or to interface with external CUDA code to take advantage of the latest hardware features. The goal of this tutorial is to present different strategies to tune OpenACC code and to introduce mechanisms to interface OpenACC with other GPU code. Based on examples, we will first present different strategies to assess and optimize the performance of an OpenACC code, and will then focus on interfacing OpenACC code with CUDA and graphics libraries.
 
Topics:
Programming Languages, HPC and Supercomputing
Type:
Tutorial
Event:
GTC Silicon Valley
Year:
2013
Session ID:
S3019
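 
The interfacing of OpenACC with CUDA code that this tutorial covers typically goes through the host_data use_device directive, which exposes the device addresses managed by OpenACC to a CUDA library or kernel launcher. The example below is a sketch along the lines of NVIDIA's documented interoperability pattern, not the tutorial's own material; it assumes an OpenACC compiler such as nvc with -acc, linked against cuBLAS.

// Illustrative sketch: OpenACC manages the device copies of x and y,
// host_data use_device hands their device addresses to cuBLAS.
#include <cublas_v2.h>
#include <stdio.h>

int main(void)
{
    const int n = 1 << 20;
    const double alpha = 2.0;
    static double x[1 << 20], y[1 << 20];
    for (int i = 0; i < n; ++i) { x[i] = 1.0; y[i] = 2.0; }

    cublasHandle_t handle;
    cublasCreate(&handle);

    #pragma acc data copyin(x[0:n]) copy(y[0:n])
    {
        // Plain OpenACC compute region: scale x on the device.
        #pragma acc parallel loop
        for (int i = 0; i < n; ++i) x[i] *= 0.5;

        // Pass the *device* pointers to cuBLAS for y = alpha*x + y.
        #pragma acc host_data use_device(x, y)
        {
            cublasDaxpy(handle, n, &alpha, x, 1, y, 1);
        }
    }   // y is copied back to the host here

    cublasDestroy(handle);
    printf("y[0] = %f (expected 3.0)\n", y[0]);
    return 0;
}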
 
Abstract:
We will present a stencil library used at the heart of the COSMO numerical weather prediction model. During the talk, we'll show how we implemented an abstraction that allows easy development of new stencils and solvers on top of a framework allowing execution on both CPU and GPU. The library makes efficient use of GPU resources, and we will show how to structure memory accesses and computation optimally. Developers involved in porting or writing fully featured C++ libraries for CUDA will also be interested in attending.
 
Topics:
Climate, Weather & Ocean Modeling
Type:
Talk
Event:
GTC Silicon Valley
Year:
2012
Session ID:
S2256
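 
The point about structuring memory accesses is easiest to see on a small example. The kernel below is a generic 5-point stencil sketch, not code from the COSMO library: the thread index along i maps to the contiguous storage dimension, so neighbouring threads read neighbouring addresses and the global-memory accesses coalesce, which is the access pattern such a stencil framework has to generate for every stencil it compiles.

// Illustrative sketch: 2-D 5-point Laplacian with coalesced accesses.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void laplace2d(const double* __restrict__ in,
                          double* __restrict__ out,
                          int nx, int ny)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // contiguous (unit-stride) index
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i < 1 || i >= nx - 1 || j < 1 || j >= ny - 1) return;

    int idx = j * nx + i;
    out[idx] = in[idx - 1]  + in[idx + 1]            // west, east  (unit stride)
             + in[idx - nx] + in[idx + nx]           // south, north (stride nx)
             - 4.0 * in[idx];
}

int main()
{
    const int nx = 512, ny = 512;
    const size_t bytes = size_t(nx) * ny * sizeof(double);

    double *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemset(d_in, 0, bytes);

    dim3 block(32, 8);                               // 32 threads along the contiguous axis
    dim3 grid((nx + block.x - 1) / block.x, (ny + block.y - 1) / block.y);
    laplace2d<<<grid, block>>>(d_in, d_out, nx, ny);
    cudaDeviceSynchronize();

    printf("applied the 5-point stencil on a %d x %d grid\n", nx, ny);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}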
 
Abstract:
The libraries distributed in the CUDA SDK and offered by third parties provide a wealth of functions commonly encountered in a GPU acceleration project. Using these libraries can often significantly shorten the development time of a GPU project while leading to high-performance, high-quality software. In this tutorial, we will provide an overview of the libraries in the CUDA SDK, including cuBLAS, cuRAND, NPP, and Thrust, and introduce common use cases. The audience will not only learn about the strengths of the individual libraries, but also about the decision-making process to select the best-suited library for their project.
 
Topics:
Programming Languages
Type:
Tutorial
Event:
GTC Silicon Valley
Year:
2012
Session ID:
S2629
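 
As an illustration of the tutorial's premise that a library call often replaces a hand-written kernel, the sketch below (my own, not session material) sorts and reduces a million values with Thrust. cuBLAS, cuRAND, and NPP are used in the same spirit: put the data on the device, then call into the library instead of writing and tuning the kernel yourself.

// Illustrative sketch: GPU sort and reduction via Thrust.
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <cstdio>
#include <cstdlib>

int main()
{
    const int n = 1 << 20;

    // Fill a host vector with pseudo-random values and move it to the device.
    thrust::host_vector<float> h(n);
    for (int i = 0; i < n; ++i) h[i] = static_cast<float>(rand()) / RAND_MAX;
    thrust::device_vector<float> d = h;

    thrust::sort(d.begin(), d.end());                        // GPU sort
    float sum = thrust::reduce(d.begin(), d.end(), 0.0f);    // GPU reduction

    printf("sorted %d values, sum = %f (expect roughly %d)\n", n, sum, n / 2);
    return 0;
}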
 
 