GTC ON-DEMAND

 
Abstract:

NVIDIA GPU Cloud is a single source for researchers and developers seeking access to GPU-optimized deep learning framework containers for TensorFlow, PyTorch, and MXNet. We'll cover the latest NVIDIA features integrated into these popular frameworks, the benefits of receiving them through NGC's monthly container updates, and tips and tricks for maximizing performance on NVIDIA GPUs for your deep learning workloads. We'll dive into the anatomy of a deep learning container, breaking down the software that makes up the container, and present the optimizations we have implemented to get the most out of NVIDIA GPUs. For both new and experienced users of our deep learning framework containers, this session will provide valuable insight into the benefits of NVIDIA-accelerated frameworks, available as easy pull-and-run containers.

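As a concrete illustration, pulling and running one of these NGC framework containers typically looks like the following sketch. The image tag shown is an assumed example, not a specific recommendation; check the NGC catalog for current versions.

```shell
# Pull a GPU-optimized TensorFlow container from the NGC registry
# (the 19.03-py3 tag is an assumed example; see ngc.nvidia.com for current tags)
docker pull nvcr.io/nvidia/tensorflow:19.03-py3

# Run it interactively with GPU access
# (Docker 19.03+ syntax; older setups use nvidia-docker or --runtime=nvidia instead of --gpus)
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:19.03-py3
```

The same pull-and-run pattern applies to the PyTorch and MXNet containers, substituting the corresponding image name.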

Topics:
Data Center & Cloud Infrastructure, AI & Deep Learning Research
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9500
Abstract:
As the use of AI in applications has increased, so has the need for production-quality AI inference. The NVIDIA TensorRT Hyperscale Inference Platform is designed precisely for this purpose, combining hardware and software to meet the highest scalability and demand requirements. In this session, learn about the new TensorRT Inference Server, which maximizes utilization by serving multiple models on the same system, supports all popular AI frameworks, and integrates seamlessly into DevOps deployments using Docker, Kubernetes, and Kubeflow.
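To make the Docker-based deployment concrete, launching the inference server container typically follows the sketch below. The image tag, port numbers, and model repository path are assumptions for illustration; consult the server's documentation for your release.

```shell
# Launch the TensorRT Inference Server from its NGC container
# (the 18.09-py3 tag and /path/to/model_repository are assumed examples;
#  older Docker setups use nvidia-docker instead of --gpus)
docker run --gpus all --rm \
  -p 8000:8000 -p 8001:8001 \
  -v /path/to/model_repository:/models \
  nvcr.io/nvidia/tensorrtserver:18.09-py3 \
  trtserver --model-store=/models
```

The mounted model repository is where the server discovers models to serve; because the server is just a container, the same invocation translates directly into a Kubernetes pod spec for cluster deployments.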
 
Topics:
Developer Tools
Type:
Talk
Event:
GTC Washington D.C.
Year:
2018
Session ID:
DC8227
 