GTC ON-DEMAND

Abstract:
This talk presents a system for the visualization of professional graphics, such as ray tracing, on a low-latency device such as a head-mounted display or tablet. I will describe the issues encountered and the algorithms used. The example I will demonstrate showcases the NVIDIA® VCA cluster for cloud-based rendering, NVENC for low-latency video encoding, and Google's Project Tango with the Tegra K1 processor for pose tracking and video decoding. The demo system can also serve graphics to multiple low-latency devices, such as a virtual reality HMD, at a rate much faster than the frames are rendered.
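The session's own code is not reproduced here, but the key decoupling it describes can be sketched in a few lines of C++: a render thread publishes the newest frame at whatever rate the renderer manages, while a separate serving loop pushes that frame to the device at display rate. All names below (Frame, renderLoop, serveLoop) are illustrative placeholders, not the demo's actual implementation.

// A minimal sketch, assuming a "latest frame wins" hand-off between a slow renderer
// and a fast serving loop; the real system renders in the cloud and encodes with NVENC.
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>
#include <vector>

struct Frame {
    std::vector<unsigned char> pixels;   // encoded or raw image data
    double poseTimestamp = 0.0;          // pose the frame was rendered with
};

std::mutex        gLock;
Frame             gLatest;               // most recently completed frame
std::atomic<bool> gRunning{true};

void renderLoop() {                      // slow path: e.g. cloud ray tracing + video encode
    while (gRunning) {
        Frame f;                         // the real system would render and encode here
        std::this_thread::sleep_for(std::chrono::milliseconds(50));  // simulate a slow render
        std::lock_guard<std::mutex> g(gLock);
        gLatest = std::move(f);          // publish; stale frames are simply dropped
    }
}

void serveLoop() {                       // fast path: runs at the HMD/tablet refresh rate
    while (gRunning) {
        Frame f;
        { std::lock_guard<std::mutex> g(gLock); f = gLatest; }
        // reprojectWithLatestPose(f); sendToDevice(f);   // placeholders for device I/O
        std::this_thread::sleep_for(std::chrono::milliseconds(11));  // ~90 Hz serving rate
    }
}

int main() {
    std::thread render(renderLoop), serve(serveLoop);
    std::this_thread::sleep_for(std::chrono::seconds(2));
    gRunning = false;
    render.join();
    serve.join();
    return 0;
}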
 
Topics:
Augmented Reality and Virtual Reality, Media and Entertainment, Real-Time Graphics
Type:
Talk
Event:
GTC Silicon Valley
Year:
2015
Session ID:
S5733
Streaming:
 
Abstract:

The Tango Tablet is the latest device in Google's advanced technology program to bring spatial awareness to mobile devices. The Tango Tablet uses NVIDIA's Tegra K1 processor to provide GPU acceleration for advanced odometry and 3D depth-sensor processing, enabling a mobile device to know where it is and what is around it. This capability lays the foundation for a new generation of immersive experiences, including augmented reality. This session will explore the capabilities of the Tango Tablet and how developers can integrate its advanced sensor processing into their applications.


 
Topics:
Visual Computing Theater
Type:
Talk
Event:
SIGGRAPH
Year:
2014
Session ID:
SIG4125
Streaming:
Download:
 
Abstract:

Modern workstation applications demand a tightly coupled compute-graphics pipeline in which the simulation and the graphics run interactively and in parallel. Multiple GPUs provide an affordable way for such applications to improve performance and increase usable data size by partitioning the processing and the subsequent visualization across GPUs. This session explains how to program your application for a multi-GPU environment, including: how to structure an application to optimize compute-graphics performance and manage synchronization; how to manage efficient data transfers across the PCIe bus; debugging and profiling; and programming considerations when scaling beyond two GPUs, such as multiple compute GPUs feeding one or more graphics GPUs. Throughout the session, OpenGL and CUDA code examples designed for a single GPU will be modified to work efficiently in a multi-GPU environment.

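One pattern the abstract alludes to, a compute GPU feeding results into an OpenGL buffer owned by the graphics GPU, can be sketched with standard CUDA runtime and interop calls. The sketch below assumes device 1 does the simulation and device 0 owns the OpenGL context; the function name, the omitted kernel, and the missing error checks are illustrative, not code from the session.

// Hedged sketch: peer copy from the compute GPU, then CUDA-OpenGL interop on the
// graphics GPU to fill an existing vertex buffer object (VBO).
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>
#include <GL/gl.h>

void computeThenDraw(GLuint vbo, size_t nBytes)
{
    // 1. Run the simulation on the compute GPU (device 1).
    cudaSetDevice(1);
    void* dSim = nullptr;
    cudaMalloc(&dSim, nBytes);
    // simulationKernel<<<grid, block>>>(static_cast<float*>(dSim), ...);   // omitted

    // 2. Stage the result on the graphics GPU (device 0).
    cudaSetDevice(0);
    void* dStage = nullptr;
    cudaMalloc(&dStage, nBytes);
    cudaMemcpyPeer(dStage, 0, dSim, 1, nBytes);   // direct GPU-to-GPU transfer

    // 3. Hand the data to OpenGL through the interop API.
    cudaGraphicsResource* res = nullptr;
    cudaGraphicsGLRegisterBuffer(&res, vbo, cudaGraphicsRegisterFlagsWriteDiscard);
    cudaGraphicsMapResources(1, &res);
    void* dVbo = nullptr;
    size_t mappedSize = 0;
    cudaGraphicsResourceGetMappedPointer(&dVbo, &mappedSize, res);
    cudaMemcpy(dVbo, dStage, nBytes, cudaMemcpyDeviceToDevice);
    cudaGraphicsUnmapResources(1, &res);
    cudaGraphicsUnregisterResource(res);

    cudaFree(dStage);
    cudaSetDevice(1);
    cudaFree(dSim);
    // glDrawArrays(...) can now source the VBO on the graphics GPU.
}

In production code the buffer would be registered once at startup rather than every frame; it is re-registered here only to keep the sketch self-contained.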

 
Topics:
Combined Simulation & Real-Time Visualization, Graphics Performance Optimization
Type:
Talk
Event:
SIGGRAPH
Year:
2013
Session ID:
SIG1301
Streaming:
Download:
 
Abstract:

Workstation applications today demand a tightly coupled compute-graphics pipeline in which the simulation and the graphics run interactively and in parallel. Multiple GPUs provide an affordable way for such applications to improve performance and increase usable data size by partitioning the processing and the subsequent visualization across GPUs. This tutorial explains how to program your application for a multi-GPU environment. Part 2 covers programming methodologies, including: how to structure an application to optimize compute and graphics performance and manage synchronization; how to manage data transfers across the PCIe bus; debugging and profiling; and programming considerations when scaling beyond two GPUs, such as multiple compute GPUs feeding one or more graphics GPUs. Throughout the tutorial, simple OpenGL and CUDA examples designed for a single GPU will be modified to work efficiently in a multi-GPU environment.

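The "multiple compute GPUs feeding one graphics GPU" configuration mentioned above can be illustrated with a short sketch: each compute device simulates its own slice of the domain in its own stream and copies the result to the graphics GPU. The layout (device 0 for graphics, devices 1..N-1 for compute), the names, and the omitted kernel are assumptions for illustration only.

// Hedged sketch: fan-in from several compute GPUs to a gather buffer on device 0.
#include <cuda_runtime.h>
#include <vector>

void simulateSlices(void* dGather /* buffer on device 0 */, size_t sliceBytes, int nDevices)
{
    std::vector<void*>        dSlice(nDevices, nullptr);
    std::vector<cudaStream_t> stream(nDevices);

    // Launch one slice of the simulation per compute GPU (devices 1..nDevices-1).
    for (int dev = 1; dev < nDevices; ++dev) {
        cudaSetDevice(dev);
        cudaStreamCreate(&stream[dev]);
        cudaMalloc(&dSlice[dev], sliceBytes);
        // sliceKernel<<<grid, block, 0, stream[dev]>>>(dSlice[dev], ...);   // omitted
        cudaMemcpyPeerAsync(static_cast<char*>(dGather) + (dev - 1) * sliceBytes, 0,
                            dSlice[dev], dev, sliceBytes, stream[dev]);
    }

    // Wait for every compute GPU before the graphics GPU consumes the gathered data.
    for (int dev = 1; dev < nDevices; ++dev) {
        cudaSetDevice(dev);
        cudaStreamSynchronize(stream[dev]);
        cudaFree(dSlice[dev]);
        cudaStreamDestroy(stream[dev]);
    }
}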

 
Topics:
Combined Simulation & Real-Time Visualization, Graphics Performance Optimization, Media and Entertainment
Type:
Tutorial
Event:
GTC Silicon Valley
Year:
2013
Session ID:
S3072
Streaming:
Download:
 
Abstract:

NVIDIA GPUs have been used to accelerate visual effects in movies for over a decade. We have witnessed them mature from graphics-acceleration hardware into generalized supercomputing co-processors. At the same time, the complexity of rendering and the fidelity of simulations in movie VFX have increased exponentially. In this webinar, Wil Braithwaite, Senior Applied Engineer at NVIDIA, examines the current state of the art of GPU-accelerated HPC at leading VFX studios and provides a glimpse into how next-generation GPUs may change the way movies are made.


 
Topics:
Visual Effects & Simulation
Type:
Webinar
Event:
GTC Webinars
Year:
2013
Session ID:
GTCE047
Download:
 
Abstract:

The goal of this session is to explain the major methods a workstation developer would use to mix OpenGL and CUDA to build (or improve) a high-performance workstation application. This talk and demo walks through the major concepts that exploit Quadro (and Tesla) hardware: multi-GPU programming, OpenGL/CUDA interop, CUDA streaming, and dual copy engines. At the end of this session, the developer will have a solid understanding of:
  • key concepts behind multi-GPU programming (including Maximus configurations);
  • the features of OpenGL and CUDA that support multi-GPU programming, and in which development environments (e.g., Linux vs. Windows, OpenGL vs. DirectX);
  • differentiating features of high-end Quadro/Tesla boards, such as dual copy engines, and when to take advantage of them;
  • how to get started with GPU-accelerated software development.

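As a small illustration of the dual-copy-engine idea named above, the sketch below ping-pongs two device buffers across two CUDA streams with pinned host memory, so that host-to-device uploads and device-to-host downloads can overlap with the (omitted) kernel. The chunked workload and all names are hypothetical, not code from the session.

// Hedged sketch: overlapping PCIe transfers in both directions using two streams.
#include <cuda_runtime.h>

void pipelineChunks(float* hIn, float* hOut, size_t chunkFloats, int nChunks)
{
    const size_t bytes = chunkFloats * sizeof(float);
    float* dIn[2];
    float* dOut[2];
    cudaStream_t s[2];
    for (int i = 0; i < 2; ++i) {
        cudaMalloc(&dIn[i], bytes);
        cudaMalloc(&dOut[i], bytes);
        cudaStreamCreate(&s[i]);
    }
    // Pin the host buffers so cudaMemcpyAsync can run truly asynchronously.
    cudaHostRegister(hIn,  bytes * nChunks, cudaHostRegisterDefault);
    cudaHostRegister(hOut, bytes * nChunks, cudaHostRegisterDefault);

    for (int c = 0; c < nChunks; ++c) {
        const int b = c & 1;                     // ping-pong between the two buffer sets
        cudaMemcpyAsync(dIn[b], hIn + c * chunkFloats, bytes,
                        cudaMemcpyHostToDevice, s[b]);
        // processKernel<<<grid, block, 0, s[b]>>>(dIn[b], dOut[b], chunkFloats);   // omitted
        cudaMemcpyAsync(hOut + c * chunkFloats, dOut[b], bytes,
                        cudaMemcpyDeviceToHost, s[b]);
    }
    cudaDeviceSynchronize();

    cudaHostUnregister(hIn);
    cudaHostUnregister(hOut);
    for (int i = 0; i < 2; ++i) {
        cudaFree(dIn[i]);
        cudaFree(dOut[i]);
        cudaStreamDestroy(s[i]);
    }
}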

 
Topics:
Graphics and AI
Type:
Talk
Event:
SIGGRAPH
Year:
2012
Session ID:
SIG1234
Download:
 
Abstract:

We present a plug-in for Maya that enables an artist to simulate huge particle counts in real time by leveraging the NVIDIA GPU. Being able to interact with the simulation opens up new possibilities for modifying the workflow. We will demonstrate the plug-in and provide insight into the algorithms used.

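The plug-in itself is not shown here, but the core idea that makes huge particle counts interactive, one GPU thread per particle updated independently each frame, can be sketched as a toy CUDA kernel. The simple Euler integration and all parameter names below are illustrative assumptions, not the plug-in's actual algorithm.

// Hedged sketch: per-particle integration on the GPU, launched once per frame.
#include <cuda_runtime.h>

__global__ void integrateParticles(float3* pos, float3* vel, int n,
                                   float3 gravity, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;                           // one thread per particle
    vel[i].x += gravity.x * dt;
    vel[i].y += gravity.y * dt;
    vel[i].z += gravity.z * dt;
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

void stepSimulation(float3* dPos, float3* dVel, int n, float dt)
{
    const int block = 256;
    const int grid  = (n + block - 1) / block;
    const float3 gravity = {0.0f, -9.8f, 0.0f};
    integrateParticles<<<grid, block>>>(dPos, dVel, n, gravity, dt);
}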

 
Topics:
Graphics and AI
Type:
Talk
Event:
GTC Silicon Valley
Year:
2012
Session ID:
S2364
Streaming:
Download:
 
Speakers:
Wil Braithwaite
- NVIDIA
 
Topics:
Tools & Libraries
Type:
Talk
Event:
SIGGRAPH
Year:
2011
Session ID:
SIG1106
Download:
 
 