GTC ON-DEMAND

 
Abstract:
In this session, we will present work on building a fast, state-of-the-art transcription system for the financial domain using modular building blocks. We will also present Neural Modules (NeMo), a framework-agnostic toolkit for speech recognition and natural language processing that facilitates the creation of applications through reusable, flexible components for training state-of-the-art models. (A code sketch of this modular style follows the session details below.)
 
Topics:
AI & Deep Learning Research, AI Application, Deployment & Inference
Type:
Talk
Event:
GTC Washington D.C.
Year:
2019
Session ID:
DC91159
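
The modular style this session describes can be illustrated with a short transcription sketch. The snippet below assumes the NeMo 1.x Python package (pip install nemo_toolkit[asr]); the pretrained model name and the audio file path are illustrative placeholders, and exact API names vary across NeMo versions.

    # Minimal sketch: transcribe audio with a pretrained NeMo ASR model.
    # Assumes NeMo 1.x; model name and file path are placeholders.
    import nemo.collections.asr as nemo_asr

    # Load a pretrained CTC-based speech recognition model from NGC.
    asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(
        model_name="QuartzNet15x5Base-En"
    )

    # Transcribe a batch of audio files (16 kHz mono WAV is typical).
    transcripts = asr_model.transcribe(["earnings_call_clip.wav"])
    print(transcripts[0])

In a fuller pipeline, the same modular components (preprocessor, encoder, decoder) can be recombined or fine-tuned on domain data such as financial-call audio.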
 
Abstract:
We'll discuss OpenSeq2Seq, a TensorFlow-based toolkit for training deep learning models optimized for NVIDIA GPUs. The main features of our toolkit are ease of use, modularity, and support for fast distributed and mixed-precision training. OpenSeq2Seq provides a large set of state-of-the-art models and building blocks for neural machine translation (GNMT, Transformer, ConvS2S, etc.), automatic speech recognition (DeepSpeech2, Wave2Letter, etc.), speech synthesis (Tacotron2, etc.), and language modeling. All models have been optimized for mixed-precision training with GPU Tensor Cores, and they achieve a 1.5-3x training speedup compared to float32. (A sketch of the mixed-precision technique follows the session details below.)
 
Topics:
Speech & Language Processing, AI & Deep Learning Research
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9187
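
The reported 1.5-3x speedup comes from doing most matrix math in float16 on Tensor Cores while keeping master weights and the loss scale in float32. OpenSeq2Seq turns this on through its own configuration files; the sketch below shows the same technique in stock TensorFlow 2 Keras terms rather than OpenSeq2Seq's API, with an assumed toy model.

    # Sketch of mixed-precision training with TensorFlow 2 Keras.
    # Illustrates the technique OpenSeq2Seq automates; not its API.
    import tensorflow as tf

    # Compute in float16 on Tensor Cores; variables stay float32.
    tf.keras.mixed_precision.set_global_policy("mixed_float16")

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(784,)),
        tf.keras.layers.Dense(1024, activation="relu"),
        # Keep the final softmax in float32 for numerical stability.
        tf.keras.layers.Dense(10, activation="softmax", dtype="float32"),
    ])

    # Under this policy, compile() wraps the optimizer in a
    # LossScaleOptimizer, which applies dynamic loss scaling to keep
    # small float16 gradients from underflowing.
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

Tensor Cores deliver the largest gains when layer dimensions are multiples of 8, which is why mixed-precision model configurations often pad vocabulary and hidden sizes accordingly.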
 
Abstract:
This session will describe an approach to building personalized recommendations using (very) deep autoencoders. We will explore the effects of different activation functions, network depth, and novel algorithmic approaches. The model is trained end-to-end without any layer-wise pre-training, and our PyTorch-based code is publicly available. (A minimal PyTorch sketch of the approach follows the session details below.)
 
Topics:
AI & Deep Learning Research, Consumer Engagement & Personalization
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8212
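
The approach can be sketched compactly: a user's sparse vector of item ratings is encoded and decoded through several dense layers, and the reconstruction loss is computed only over the ratings the user actually gave. The PyTorch snippet below is a minimal sketch of that general recipe, not the authors' released code; the layer sizes, SELU activation, and heavy dropout on the code layer are assumptions in the spirit of the talk.

    # Minimal sketch: deep autoencoder for collaborative filtering.
    # Not the authors' released code; sizes and choices are illustrative.
    import torch
    import torch.nn as nn

    class DeepAutoencoder(nn.Module):
        def __init__(self, n_items, hidden=(512, 512, 1024), dropout=0.8):
            super().__init__()
            dims = (n_items,) + hidden
            enc = []
            for i in range(len(dims) - 1):
                enc += [nn.Linear(dims[i], dims[i + 1]), nn.SELU()]
            self.encoder = nn.Sequential(*enc)
            self.drop = nn.Dropout(dropout)  # heavy dropout on the code layer
            dec = []
            for j, i in enumerate(reversed(range(len(dims) - 1))):
                dec.append(nn.Linear(dims[i + 1], dims[i]))
                if j < len(dims) - 2:  # linear output layer for ratings
                    dec.append(nn.SELU())
            self.decoder = nn.Sequential(*dec)

        def forward(self, x):
            return self.decoder(self.drop(self.encoder(x)))

    def masked_mse(pred, target):
        # Only observed ratings (non-zero entries) drive the gradient.
        mask = (target != 0).float()
        return ((pred - target) ** 2 * mask).sum() / mask.sum().clamp(min=1)

    model = DeepAutoencoder(n_items=17770)  # e.g., Netflix Prize item count
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    batch = torch.zeros(32, 17770)          # placeholder sparse rating batch
    loss = masked_mse(model(batch), batch)
    loss.backward()
    opt.step()

Training end-to-end this way, without layer-wise pre-training, relies on activations such as SELU keeping gradients well-behaved through the deep stack.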
 
 