GTC ON-DEMAND
Abstract:
Learn how GPU Coder automatically produces high-performance CUDA code that harnesses the power of TensorRT from a high-level algorithm description in MATLAB. Write your deep learning application with the expressive power of MATLAB, which enables you to perform inference with trained deep learning networks together with data augmentation and post-processing of the results to create a complete deployment-ready application. GPU Coder then generates optimized inference code for the whole application. The deep learning inference model is compiled down to TensorRT, while the rest of the application logic is parallelized through the creation of CUDA kernels and integration with CUDA-optimized libraries such as cuBLAS and cuFFT. The generated code can be cross-compiled to any NVIDIA GPU device that supports TensorRT. This allows engineers and scientists to unlock the expressive ease of use of the MATLAB programming language while unleashing deep learning performance by leveraging TensorRT.
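
A minimal sketch of the TensorRT code-generation step described above (the entry-point name tensorrt_predict and the 224x224x3 single-precision input are assumptions for illustration, not material from the session):

% Configure GPU Coder to emit a CUDA MEX and compile the network layers
% down to TensorRT, then generate code for a MATLAB entry-point function.
cfg = coder.gpuConfig('mex');
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');
codegen -config cfg tensorrt_predict -args {ones(224,224,3,'single')}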
 
Topics:
Artificial Intelligence and Deep Learning, Developer Tools
Type:
Talk
Event:
GTC Washington D.C.
Year:
2018
Session ID:
DC8130
 
Abstract:
Learn how GPU Coder produces high-performance CUDA code automatically from a high-level algorithm description in MATLAB. Write your deep learning application with the expressive power of MATLAB, which allows you to describe not just the use of your trained deep learning model in inference mode but also the data augmentation and post-processing of the results needed to create a complete deployment-ready application. GPU Coder can then generate optimized inference code for the whole application. The deep learning inference model is compiled down to TensorRT, while the rest of the application logic is parallelized through the creation of CUDA kernels and integration with other CUDA-optimized libraries such as cuBLAS and cuFFT. The generated code can be cross-compiled to any NVIDIA GPU device that supports TensorRT. This allows engineers and scientists to unlock the expressive ease of use of the MATLAB programming language while unleashing deep learning performance by leveraging TensorRT.
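
The "whole application" idea can be sketched as a single entry-point function in which pre-processing, network inference, and post-processing live together; everything below (network file, input size, score-to-index post-processing) is an illustrative assumption rather than the session's actual demo:

% classify_frame.m: complete pipeline compiled by GPU Coder as one unit.
function idx = classify_frame(img) %#codegen
    persistent net;
    if isempty(net)
        net = coder.loadDeepLearningNetwork('myNet.mat');   % placeholder trained network
    end
    in = imresize(img, [224 224]);          % pre-processing of the input image
    scores = predict(net, single(in));      % inference, compiled down to TensorRT
    [~, idx] = max(scores);                 % post-processing of the results
end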
 
Topics:
Deep Learning & AI Frameworks, Tools & Libraries
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8480
 
Abstract:
Learn how to adopt a MATLAB-centric workflow to design, develop, and deploy computer vision and deep learning applications onto GPUs, whether on your desktop, a cluster, or embedded Tegra platforms. The workflow starts with algorithm design in MATLAB. The deep learning network is defined in MATLAB and is trained using MATLAB's GPU and parallel computing support. Then the trained network is augmented with traditional computer vision techniques, and the application can be verified in MATLAB. Finally, a compiler auto-generates portable and optimized CUDA code from the MATLAB algorithm, which can be cross-compiled to Tegra. A performance benchmark for AlexNet inference shows that the auto-generated CUDA code is ~2.5x faster than MXNet, ~5x faster than Caffe2, and ~7x faster than TensorFlow.
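
As a rough illustration of the training half of this workflow (the layer sizes, the trainImages/trainLabels variables, and the file name below are placeholders, not the session's actual network):

% Define a small network and train it using MATLAB's GPU support.
layers = [ ...
    imageInputLayer([28 28 1])
    convolution2dLayer(5, 20)
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

opts = trainingOptions('sgdm', 'ExecutionEnvironment', 'gpu');  % or 'multi-gpu' / 'parallel'
net = trainNetwork(trainImages, trainLabels, layers, opts);
save('trainedNet.mat', 'net');   % the saved network is later loaded by the code-generation entry point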
 
Topics:
Autonomous Vehicles, Programming Languages, Computer Vision
Type:
Talk
Event:
GTC Europe
Year:
2017
Session ID:
23321
 
Abstract:
Learn how to adopt a MATLAB-centric workflow to design, develop, and deploy computer vision and deep learning applications onto GPUs, whether on your desktop, a cluster, or embedded Tegra platforms, including Jetson TK1/TX1 and DRIVE PX boards. The workflow starts with algorithm design in MATLAB, which enjoys universal appeal among engineers and scientists because of its expressive power and ease of use. The algorithm may employ deep learning networks augmented with traditional computer vision techniques and can be tested and verified within MATLAB. Next, those networks are trained using MATLAB's GPU and parallel computing support, either on the desktop, a local compute cluster, or in the cloud. Finally, a compiler auto-generates portable and optimized CUDA code from the MATLAB algorithm, which is then cross-compiled and deployed to the Tegra board. We'll use examples of common computer vision algorithms and deep learning networks to describe this workflow, and we'll present their performance benchmarks, including training with multiple GPUs on an Amazon P2 cloud instance.
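
One hedged sketch of the code-generation step for such a board is to have GPU Coder emit portable CUDA source as a library and then cross-compile it with the board's toolchain; the entry-point name myDetector and the 480x640x3 input size are assumptions:

% Generate CUDA source as a static library, without building on the host,
% so the output can be cross-compiled for the Tegra target.
cfg = coder.gpuConfig('lib');
cfg.TargetLang = 'C++';
cfg.GenCodeOnly = true;
codegen -config cfg myDetector -args {ones(480,640,3,'uint8')} -report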
 
Topics:
Tools & Libraries, Artificial Intelligence and Deep Learning, Intelligent Machines, IoT & Robotics
Type:
Talk
Event:
GTC Silicon Valley
Year:
2017
Session ID:
S7244
 
 