GTC ON-DEMAND

 
GPU Coder: Integrating MATLAB with TensorRT
Abstract:
Learn how GPU Coder automatically produces high-performance CUDA code that harnesses the power of TensorRT from a high-level algorithm description in MATLAB. Write your deep learning application with the expressive power of MATLAB, performing inference with trained deep learning networks together with data augmentation and post-processing of the results to create a complete deployment-ready application. GPU Coder then generates optimized inference code for the whole application. The deep learning inference model is compiled down to TensorRT, while the rest of the application logic is parallelized through the creation of CUDA kernels and integration with CUDA-optimized libraries such as cuBLAS and cuFFT. The generated code can be cross-compiled to any NVIDIA GPU device that supports TensorRT. This allows engineers and scientists to combine the expressive ease of use of the MATLAB programming language with the deep learning performance of TensorRT.
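The workflow described above can be sketched in MATLAB roughly as follows. The entry-point function name, network file, and input size here are hypothetical; `coder.gpuConfig`, `coder.DeepLearningConfig('tensorrt')`, `coder.loadDeepLearningNetwork`, and `codegen` are the standard GPU Coder entry points for this flow.

```matlab
% myPredict.m -- hypothetical entry-point function for code generation.
function out = myPredict(in)
    % Load the trained network once; GPU Coder compiles this part
    % of the application down to a TensorRT engine.
    persistent net;
    if isempty(net)
        net = coder.loadDeepLearningNetwork('myTrainedNet.mat'); % hypothetical file
    end
    % Any pre- or post-processing written here in plain MATLAB is
    % parallelized by GPU Coder into CUDA kernels and calls into
    % CUDA-optimized libraries such as cuBLAS and cuFFT.
    out = predict(net, in);
end

% Code-generation script (run once at the MATLAB prompt):
cfg = coder.gpuConfig('dll');                                  % generate a CUDA shared library
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt'); % target TensorRT for the network
codegen -config cfg myPredict -args {ones(224,224,3,'single')} % hypothetical input size
```

The generated library can then be cross-compiled for any NVIDIA GPU target that supports TensorRT.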
 
Topics: Artificial Intelligence and Deep Learning, Developer Tools
Type: Talk
Event: GTC Washington D.C.
Year: 2018
Session ID: DC8130