GTC ON-DEMAND

 
Abstract:

Workstation applications today demand a tightly coupled compute-graphics pipeline in which the simulation and the graphics run interactively and in parallel. Multiple GPUs offer an affordable way for such applications to improve performance and increase usable data size by partitioning the processing and the subsequent visualization across GPUs. This tutorial explains how to program your application for a multi-GPU environment. Part 1 covers GPU resource allocation and system configuration, including: what to expect when you add GPUs to your system; how to select, query, and allocate the necessary GPU resources; and a brief introduction to profiling and analysis tools. Throughout the tutorial, simple OpenGL and CUDA examples designed for a single GPU are modified to work efficiently in a multi-GPU environment.
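
The GPU selection and query steps described above map onto the standard CUDA runtime device-management calls. As a minimal sketch (not the tutorial's own sample code; the device choice and buffer size are illustrative), enumerating, querying, and binding a host thread to a GPU looks roughly like this:

// Minimal sketch: enumerate CUDA-capable GPUs, query their properties,
// and bind the calling thread to one of them before allocating resources.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    std::printf("CUDA devices found: %d\n", deviceCount);

    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        std::printf("GPU %d: %s, %zu MB, %d SMs\n", dev, prop.name,
                    prop.totalGlobalMem >> 20, prop.multiProcessorCount);
    }

    // Each host thread binds to the GPU it will work with; subsequent
    // allocations and kernel launches target that device.
    int chosen = 0;                       // illustrative: pick device 0
    cudaSetDevice(chosen);
    float* dBuf = nullptr;
    cudaMalloc((void**)&dBuf, 64 << 20);  // 64 MB scratch buffer on that GPU
    cudaFree(dBuf);
    return 0;
}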

Topics:
Combined Simulation & Real-Time Visualization, Graphics Performance Optimization, Media and Entertainment
Type:
Tutorial
Event:
GTC Silicon Valley
Year:
2013
Session ID:
S3070
 
Abstract:

The goal of this session is to explain the major methods a workstation developer can use to combine OpenGL and CUDA to build (or improve) a high-performance workstation application. This talk/demo walks through the major concepts that exploit Quadro (and Tesla) hardware: multi-GPU programming, OpenGL/CUDA interop, CUDA streams, and dual copy engines. At the end of this session, the developer will have a solid understanding of:
  • key concepts behind multi-GPU programming (including Maximus configurations);
  • features of OpenGL and CUDA that support multi-GPU programming, and the development environments in which they are available (e.g., Linux vs. Windows, OpenGL vs. DirectX);
  • differentiating features of high-end Quadro/Tesla boards, such as dual copy engines, and when to take advantage of them;
  • how to get started with GPU-accelerated software development.
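
As a rough sketch of the OpenGL/CUDA interop pattern listed above (register a GL buffer, map it into the CUDA address space, write it from a kernel, unmap it so OpenGL can render from it), assuming the application has already created an OpenGL context and a vertex buffer object; the kernel and function names here are purely illustrative:

// Minimal OpenGL/CUDA interop sketch. Assumes a current OpenGL context and
// an existing vertex buffer object (vbo) created by the application.
#include <GL/gl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

__global__ void animate(float4* verts, int n, float t) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) verts[i].y = __sinf(verts[i].x + t);   // toy displacement
}

void updateVboWithCuda(GLuint vbo, int numVerts, float t, cudaStream_t stream) {
    // Register the GL buffer with CUDA (shown inline here for brevity).
    cudaGraphicsResource* res = nullptr;
    cudaGraphicsGLRegisterBuffer(&res, vbo, cudaGraphicsRegisterFlagsNone);

    // Map the GL buffer into the CUDA address space, write it from a kernel,
    // then unmap it so OpenGL can render from it again.
    cudaGraphicsMapResources(1, &res, stream);
    float4* dptr = nullptr;
    size_t bytes = 0;
    cudaGraphicsResourceGetMappedPointer((void**)&dptr, &bytes, res);

    animate<<<(numVerts + 255) / 256, 256, 0, stream>>>(dptr, numVerts, t);

    cudaGraphicsUnmapResources(1, &res, stream);
    cudaGraphicsUnregisterResource(res);
}

In a real application the buffer would typically be registered once at startup and only mapped/unmapped per frame, since registration is comparatively expensive.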

Topics:
Graphics and AI
Type:
Talk
Event:
SIGGRAPH
Year:
2012
Session ID:
SIG1233
 
Abstract:

In this session we will cover the different aspects of interaction between graphics and compute. The first part of the session focuses on compute API interoperability with OpenGL (using the CUDA and OpenCL APIs), while the second part delves into interoperability at the system level. In particular, we will go through the challenges and benefits of dedicating one GPU to compute and another to graphics, how different system configurations affect data transfer between the two GPUs, and how this translates into application design decisions that enable efficient cross-GPU interoperability between compute and graphics contexts. This talk is repeated on Thursday at 3:30 PM (S0267B).
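
A minimal sketch of the kind of cross-GPU transfer discussed here, staging results from a GPU dedicated to compute over to the GPU that owns the graphics context; the device indices and buffer size are assumptions for illustration only:

// Minimal sketch: stage results from a "compute" GPU to a "render" GPU.
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 32 << 20;              // 32 MB of simulation output
    const int computeDev = 1, renderDev = 0;    // assumed device assignment

    float *dCompute = nullptr, *dRender = nullptr;

    cudaSetDevice(computeDev);
    cudaMalloc((void**)&dCompute, bytes);       // written by compute kernels

    cudaSetDevice(renderDev);
    cudaMalloc((void**)&dRender, bytes);        // consumed by the graphics side

    // If the hardware/topology allows it, enable direct peer-to-peer access;
    // otherwise cudaMemcpyPeer stages the copy through system memory.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, renderDev, computeDev);
    if (canAccess) cudaDeviceEnablePeerAccess(computeDev, 0);

    cudaMemcpyPeer(dRender, renderDev, dCompute, computeDev, bytes);

    cudaFree(dRender);
    cudaSetDevice(computeDev);
    cudaFree(dCompute);
    return 0;
}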

Topics:
Professional Visualisation
Type:
Talk
Event:
GTC Silicon Valley
Year:
2012
Session ID:
S2267A
 
Abstract:

In this session we will cover the different aspects of interaction between graphics and compute. The first part of the session focuses on compute API interoperability with OpenGL (using the CUDA and OpenCL APIs), while the second part delves into interoperability at the system level. In particular, we will go through the challenges and benefits of dedicating one GPU to compute and another to graphics, how different system configurations affect data transfer between the two GPUs, and how this translates into application design decisions that enable efficient cross-GPU interoperability between compute and graphics contexts. This talk is repeated on Tuesday at 5:00 PM (S0267A).

Topics:
Professional Visualisation
Type:
Talk
Event:
GTC Silicon Valley
Year:
2012
Session ID:
S2267A
 
Abstract:

Learn how you can use a multiple display configuration to render video content captured from multiple sources, utilizing the power of GPUs to achieve unprecedented performance.
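
As a loose illustration of driving several displays from one application, the sketch below opens one fullscreen window per attached monitor and redraws each in a simple loop. GLFW is used here only for brevity and is not implied by the session; uploading the captured video frames as textures would slot into the per-window draw step.

// Minimal sketch: one fullscreen window per attached monitor, each redrawn
// in a simple loop. Frame upload/drawing is left as a placeholder.
#include <GLFW/glfw3.h>
#include <vector>

int main() {
    if (!glfwInit()) return 1;

    int monitorCount = 0;
    GLFWmonitor** monitors = glfwGetMonitors(&monitorCount);

    std::vector<GLFWwindow*> windows;
    for (int i = 0; i < monitorCount; ++i) {
        const GLFWvidmode* mode = glfwGetVideoMode(monitors[i]);
        windows.push_back(glfwCreateWindow(mode->width, mode->height,
                                           "output", monitors[i], nullptr));
    }

    bool running = !windows.empty();
    while (running) {
        for (GLFWwindow* w : windows) {
            glfwMakeContextCurrent(w);
            glClear(GL_COLOR_BUFFER_BIT);
            // ...upload and draw the captured frame assigned to this display...
            glfwSwapBuffers(w);
            if (glfwWindowShouldClose(w)) running = false;
        }
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}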

Topics:
Professional Visualisation
Type:
Talk
Event:
GTC Silicon Valley
Year:
2012
Session ID:
S2326
 
Abstract:

This tutorial will demonstrate how video I/O devices can take advantage of the GPUDirect for Video API to optimize data transfer performance for digital video, film and broadcast, and computer vision applications. GPUDirect for Video permits DMA transfer of data buffers between video I/O devices and the GPU through a shared system memory buffer, for immediate processing by OpenGL, DirectX, CUDA, or OpenCL. This direct transfer can improve synchronization and reduce latency between video capture, GPU processing, and video output.
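
The GPUDirect for Video calls themselves belong to NVIDIA's DVP SDK and are not reproduced here. As a generic stand-in for the underlying idea, a pinned, shared system-memory buffer with an asynchronous host-to-GPU copy on its own stream, a CUDA sketch might look like this (frame size and format are assumptions):

// Concept illustration only (not the GPUDirect for Video API itself):
// a pinned system-memory buffer that a capture device could write into,
// copied asynchronously to the GPU on a dedicated stream so the transfer
// can overlap with processing of the previous frame.
#include <cuda_runtime.h>

int main() {
    const size_t frameBytes = 1920 * 1080 * 4;   // one BGRA HD frame (example)

    // Pinned host memory is what allows DMA-style, asynchronous transfers.
    unsigned char* hostFrame = nullptr;
    cudaHostAlloc((void**)&hostFrame, frameBytes, cudaHostAllocDefault);

    unsigned char* devFrame = nullptr;
    cudaMalloc((void**)&devFrame, frameBytes);

    cudaStream_t copyStream;
    cudaStreamCreate(&copyStream);

    // In a real pipeline the video I/O device fills hostFrame each frame;
    // here the copy is simply issued and synchronized once.
    cudaMemcpyAsync(devFrame, hostFrame, frameBytes,
                    cudaMemcpyHostToDevice, copyStream);
    cudaStreamSynchronize(copyStream);
    // ...launch processing kernels on devFrame, then copy or output the result...

    cudaStreamDestroy(copyStream);
    cudaFree(devFrame);
    cudaFreeHost(hostFrame);
    return 0;
}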

Topics:
Audio, Image and Video Processing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2012
Session ID:
S2049
 
Abstract:

Have questions, concerns, or thoughts about the direction of GPU-based video and image processing? Join NVIDIA engineers and product managers for a lively discussion of topics such as application design, multi-GPU architecture, data movement, threading, APIs, and color management as they apply to video and image processing applications.

Topics:
Audio, Image and Video Processing
Type:
Tutorial
Event:
GTC Silicon Valley
Year:
2012
Session ID:
S2601
 
 