GTC ON-DEMAND
Abstract:
Learn more about using the most popular computer vision and natural language processing models with state-of-the-art accuracy in MXNet, accelerated for NVIDIA Tensor Cores to reduce training time. The session will explore the MXNet Gluon CV and NLP toolkits, with a demo showing how to achieve out-of-the-box acceleration on Tensor Cores. We'll also review and demo a new tool for MXNet, automatic mixed precision (AMP), which shows that with only a few lines of code, any MXNet Gluon model can be accelerated on NVIDIA Tensor Cores. In addition, we'll discuss the MXNet ResNet-50 MLPerf submission on NVIDIA DGX systems and share how MXNet was enhanced with additions such as Horovod integration and small-batch optimizations to set a new benchmark record. Beyond training, we'll also cover improvements to the existing experimental MXNet-TRT integration, extending it beyond FP32 and ResNets.
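
As an illustration of the few-lines claim, the following is a minimal sketch of enabling AMP in a Gluon training loop, assuming MXNet 1.5 or later where AMP lives in mxnet.contrib.amp; the network, synthetic data, and hyperparameters are placeholders, not the session's actual demo code.

```python
# Minimal sketch: automatic mixed precision (AMP) with MXNet Gluon.
from mxnet import autograd, gluon, nd
from mxnet.contrib import amp

# Patch MXNet operators for mixed precision; must run before building the network.
amp.init()

net = gluon.model_zoo.vision.resnet18_v2(classes=10)
net.initialize()
trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'learning_rate': 0.01},
                        update_on_kvstore=False)  # required for AMP's loss scaling
amp.init_trainer(trainer)  # enable dynamic loss scaling on the trainer

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
data = nd.random.uniform(shape=(8, 3, 224, 224))  # placeholder batch
label = nd.zeros((8,))

with autograd.record():
    loss = loss_fn(net(data), label)
    # Scale the loss so small gradients stay representable in FP16.
    with amp.scale_loss(loss, trainer) as scaled_loss:
        autograd.backward(scaled_loss)
trainer.step(batch_size=8)
```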
 
Topics:
Deep Learning & AI Frameworks, Tools & Libraries
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S91003
 
Abstract:
We'll discuss monitoring and visualizing a deep neural network in MXNet and explain how to improve training performance. We'll also talk about coding best practices, data pre-processing, making effective use of CPUs, hybridization, efficient batch size, low-precision training, and other tips and tricks that can improve training performance by orders of magnitude.
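
One of the tips listed above, hybridization, can be shown in a short sketch: Gluon compiles the imperative model into a static graph on the first call. The two-layer network below is an illustrative placeholder, not the session's exact code.

```python
# Minimal sketch: Gluon hybridization for faster training and inference.
from mxnet import nd
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Dense(256, activation='relu'),
        nn.Dense(10))
net.initialize()

# Compile the imperative graph into a static one; static_alloc reuses
# memory buffers across iterations for additional speedup.
net.hybridize(static_alloc=True, static_shape=True)

out = net(nd.random.uniform(shape=(32, 784)))  # first call triggers compilation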
 
Topics:
Advanced AI Learning Techniques
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9370
 
Abstract:
Tuning hyperparameters is a time-consuming and costly task. More an art than a science, it often takes long hours to arrive at a good combination of parameters such as batch size, learning rate, optimizer, number of layers, number of nodes in a layer, and potentially tens of others. We'll discuss how automating the process of finding the best combination of parameters, based on a data-centric and repeatable method, can save time and result in better models. We will explain the theory of Bayesian hyperparameter optimization and provide hands-on labs to help attendees learn how to take advantage of Amazon SageMaker's Automatic Model Tuning.
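
A hedged sketch of how Automatic Model Tuning is driven from the SageMaker Python SDK is shown below (SDK v2 naming assumed). The entry-point script, IAM role, S3 paths, metric regex, and search ranges are all placeholders to be adapted to a real training job.

```python
# Sketch: Bayesian hyperparameter search with SageMaker Automatic Model Tuning.
from sagemaker.mxnet import MXNet
from sagemaker.tuner import (HyperparameterTuner, ContinuousParameter,
                             IntegerParameter)

estimator = MXNet(entry_point='train.py',        # hypothetical training script
                  role='SageMakerRole',          # placeholder IAM role
                  instance_count=1,
                  instance_type='ml.p3.2xlarge',
                  framework_version='1.4.1',
                  py_version='py3')

# The Bayesian strategy searches this space instead of a manual grid.
hyperparameter_ranges = {
    'learning_rate': ContinuousParameter(1e-4, 1e-1, scaling_type='Logarithmic'),
    'batch_size': IntegerParameter(32, 512),
}

tuner = HyperparameterTuner(
    estimator,
    objective_metric_name='validation-accuracy',
    metric_definitions=[{'Name': 'validation-accuracy',
                         'Regex': 'validation accuracy=([0-9\\.]+)'}],
    hyperparameter_ranges=hyperparameter_ranges,
    max_jobs=20,            # total training jobs in the search
    max_parallel_jobs=2)    # jobs run concurrently per round

tuner.fit({'train': 's3://bucket/train', 'test': 's3://bucket/test'})
```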
 
Topics:
Deep Learning & AI Frameworks, Advanced AI Learning Techniques
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9372
 
Abstract:
Neural machine translation (NMT) is often performed with sequence-to-sequence modeling, where the input is a variable-length tensor representation of a sentence in the source language and the output is another variable-length tensor representation in the target language. Sockeye is a sequence-to-sequence framework for neural machine translation built on Apache MXNet (Incubating). It implements the well-known encoder-decoder architecture with attention. The talk covers LSTM networks, NMT fundamentals, an overview of how to use Sockeye to implement translation tasks, and areas of active research for those interested in further study of the subject.
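
To make the architecture concrete, here is a conceptual Gluon sketch of an LSTM encoder-decoder with dot-product attention. It illustrates the idea Sockeye implements but is not Sockeye's actual code (Sockeye itself is typically driven from the command line via python -m sockeye.train); all sizes are toy values.

```python
# Conceptual sketch: LSTM encoder-decoder with dot-product attention.
from mxnet import nd
from mxnet.gluon import nn, rnn

class EncoderDecoder(nn.Block):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden=256, **kwargs):
        super().__init__(**kwargs)
        self.src_embed = nn.Embedding(vocab_size, embed_dim)
        self.tgt_embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = rnn.LSTM(hidden, layout='NTC')
        self.decoder = rnn.LSTM(hidden, layout='NTC')
        self.proj = nn.Dense(vocab_size, flatten=False)

    def forward(self, src, tgt):
        enc_out = self.encoder(self.src_embed(src))  # (N, T_src, H)
        dec_out = self.decoder(self.tgt_embed(tgt))  # (N, T_tgt, H)
        # Dot-product attention: each decoder step attends over encoder states.
        scores = nd.batch_dot(dec_out, enc_out, transpose_b=True)
        weights = nd.softmax(scores, axis=-1)        # (N, T_tgt, T_src)
        context = nd.batch_dot(weights, enc_out)     # (N, T_tgt, H)
        # Predict next-token logits from decoder state plus attention context.
        return self.proj(nd.concat(dec_out, context, dim=2))

net = EncoderDecoder()
net.initialize()
logits = net(nd.ones((2, 7)), nd.ones((2, 5)))  # toy source/target token ids
```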
 
Topics:
Other
Type:
Talk
Event:
GTC Europe
Year:
2017
Session ID:
23496
 
Abstract:
In this lab, we will cover deep learning fundamentals and focus on the powerful and scalable Apache MXNet open-source deep learning framework. By the end of this hands-on lab, you'll be able to train your own deep neural network and fine-tune existing state-of-the-art models for image and object recognition. We'll also dive deep into setting up your deep learning infrastructure on AWS.
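
The fine-tuning workflow the lab describes follows a standard Gluon model-zoo pattern, sketched below; the ten-class target task is an illustrative assumption.

```python
# Sketch: fine-tuning a pretrained model from the Gluon model zoo.
from mxnet import init
from mxnet.gluon.model_zoo import vision

# Load ImageNet-pretrained weights to use as a feature extractor.
pretrained = vision.resnet50_v2(pretrained=True)

# Build a new network for the target task and reuse the pretrained features.
finetune_net = vision.resnet50_v2(classes=10)
finetune_net.features = pretrained.features
finetune_net.output.initialize(init.Xavier())  # only the new head is re-initialized
```

From here, training proceeds as usual with a Gluon Trainer over the new task's data, typically with a lower learning rate than training from scratch.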
 
Topics:
Deep Learning & AI Frameworks
Type:
Instructor-Led Lab
Event:
GTC Europe
Year:
2017
Session ID:
53491
 
 