GTC ON-DEMAND

Abstract:
In this talk, I'll discuss several semi-supervised learning applications from our recent work in applied deep learning research at NVIDIA. I'll first discuss video translation, which renders new scenes using models learned from real-world videos. We take real-world videos, analyze them using existing computer vision techniques such as pose estimation or semantic segmentation, and then train generative models to invert these poses or segmentations back to videos. In deployment, we then render videos from novel sketches using these models. I'll then discuss work on large-scale language modeling, where a model trained to predict text, piece by piece, on a large dataset is then fine-tuned with small amounts of labeled data to solve problems like emotion classification. Finally, I'll discuss WaveGlow, our flow-based generative model for the vocoder stage of speech synthesis, which combines a simple log-likelihood-based training procedure with very fast and efficient inference. Because semi-supervised learning allows us to tackle problems where large amounts of labels would be prohibitively expensive to create, it broadens the scope of problems to which we can apply machine learning.
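The log-likelihood objective mentioned for WaveGlow comes from the change-of-variables formula used by flow-based generative models. Below is a minimal, illustrative sketch (plain NumPy, not the WaveGlow code) of that objective for a single affine coupling layer; the layer sizes and the tiny stand-in "network" are made up for illustration.

# Illustrative sketch: the change-of-variables log-likelihood maximized by
# flow-based generative models such as WaveGlow. An affine coupling layer
# splits the input, predicts a scale and shift for one half from the other,
# and its Jacobian log-determinant is just the sum of the log-scales, so
# training reduces to plain maximum likelihood.
import numpy as np

rng = np.random.default_rng(0)

def coupling_forward(x, weights):
    # Map data x to latent z with one affine coupling layer.
    xa, xb = np.split(x, 2)                          # split the dimensions in half
    h = np.tanh(weights["w"] @ xa + weights["b"])    # tiny stand-in 'network' on xa
    log_s, t = np.split(h, 2)                        # predicted log-scale and shift
    zb = xb * np.exp(log_s) + t                      # transform the other half
    z = np.concatenate([xa, zb])
    log_det = np.sum(log_s)                          # log|det Jacobian| of the layer
    return z, log_det

def log_likelihood(x, weights):
    # log p(x) = log N(z; 0, I) + log|det dz/dx|, the quantity training maximizes.
    z, log_det = coupling_forward(x, weights)
    log_pz = -0.5 * np.sum(z ** 2) - 0.5 * len(z) * np.log(2 * np.pi)
    return log_pz + log_det

dim = 8
weights = {"w": 0.1 * rng.standard_normal((dim, dim // 2)), "b": np.zeros(dim)}
x = rng.standard_normal(dim)
print("log p(x) under the flow:", log_likelihood(x, weights))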
 
Topics:
AI & Deep Learning Research
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9686
 
Abstract:
At NVIDIA, we're busy applying deep learning to diverse problems, and this talk will give an overview of a few of these applications. We'll discuss our resume matching system, which helps match candidates to job openings at NVIDIA, as well as an open-source sentiment analysis project trained on unsupervised text that is improving our marketing capabilities. We'll discuss a blind image quality metric that we're using to lower the cost of raytracing photorealistic graphics, and a generative model that we've built to create realistic graphics from simplistic sketches.
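The recipe behind the sentiment analysis project mentioned above is, roughly, to learn text features without labels and then fit a small supervised classifier on top with a handful of labels. The sketch below illustrates only that recipe: toy_features is a hypothetical stand-in (the actual project uses the hidden state of a language model trained on a large unlabeled corpus), and the tiny labeled set is invented for illustration.

# Illustrative sketch of the unlabeled-features + small-supervised-head recipe.
import numpy as np
from sklearn.linear_model import LogisticRegression

def toy_features(text):
    # Stand-in feature extractor: normalized character counts. A real system
    # would use features learned from large amounts of unlabeled text.
    vec = np.zeros(128)
    for ch in text.lower():
        vec[min(ord(ch), 127)] += 1.0
    return vec / max(len(text), 1)

labeled = [("i love this gpu", 1), ("what a great talk", 1),
           ("this driver is terrible", 0), ("i hate waiting", 0)]
X = np.stack([toy_features(t) for t, _ in labeled])
y = np.array([label for _, label in labeled])

clf = LogisticRegression().fit(X, y)     # small supervised head on top of the features
print(clf.predict(toy_features("love the new release").reshape(1, -1)))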
 
Topics:
Graphics and AI
Type:
Talk
Event:
SIGGRAPH
Year:
2018
Session ID:
SIG1815E
 
Abstract:
At NVIDIA, we're busy applying deep learning to diverse problems, and this talk will give an overview of a few of these applications. We'll discuss our resume matching system, which helps match candidates to job openings at NVIDIA, as well as an open-source sentiment analysis project trained on unsupervised text that is improving our marketing capabilities. We'll discuss a blind image quality metric that we're using to lower the cost of raytracing photorealistic graphics, and a generative model that we've built to create realistic graphics from simplistic sketches.
 
Topics:
AI & Deep Learning Research
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8672
 
Abstract:
What can deep learning do for applications today? How should I think about using deep learning for my problem? If I want to apply deep learning in a new way, how do I get started? In this talk, Bryan will share some characteristics of successful deep learning applications, and some things to think about when starting a new deep learning application.
 
Topics:
Artificial Intelligence and Deep Learning
Type:
Talk
Event:
GTC Silicon Valley
Year:
2017
Session ID:
S7860
 
Abstract:
Training and deploying deep neural networks for speech recognition is very computationally intensive. I will discuss how we have made our training process scale efficiently across many GPUs, as well as how we use GPUs to serve our deep neural networks to users at scale through Batch Dispatch.
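Batch Dispatch refers to grouping concurrent user requests so that each forward pass on the GPU serves many streams at once instead of one. The sketch below shows only that general batching idea; the queue, thresholds, and run_model stand-in are illustrative, not the production system described in the talk.

# Illustrative sketch of request batching for GPU inference.
import queue
import threading
import time

requests = queue.Queue()
MAX_BATCH, MAX_WAIT_S = 8, 0.01          # illustrative thresholds

def run_model(batch):
    # Stand-in for a single batched neural-network forward pass on the GPU.
    return ["transcript-for:" + item for item in batch]

def dispatcher():
    while True:
        batch = [requests.get()]          # block until work arrives
        deadline = time.time() + MAX_WAIT_S
        while len(batch) < MAX_BATCH and time.time() < deadline:
            try:                          # gather more requests until full or timed out
                batch.append(requests.get(timeout=max(deadline - time.time(), 0.0)))
            except queue.Empty:
                break
        for result in run_model(batch):   # one pass serves the whole batch
            print(result)

threading.Thread(target=dispatcher, daemon=True).start()
for i in range(20):                       # simulate concurrent incoming utterances
    requests.put("utterance-%d" % i)
time.sleep(0.1)                           # let the dispatcher drain the queue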
 
Topics:
Artificial Intelligence and Deep Learning
Type:
Talk
Event:
GTC Silicon Valley
Year:
2016
Session ID:
S6672
 
Abstract:
Speech is the user interface of the future, but today's implementations often fail when we need them the most, such as in noisy environments or when the microphone isn't close at hand. At Baidu, an increasing fraction of our users employ speech interfaces to find what they are looking for. In this talk, I will show how next-generation deep learning models can provide state-of-the-art speech recognition performance. We train these models on clusters of GPUs using CUDA, MPI, and InfiniBand.
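A common pattern for scaling training across a GPU cluster with MPI is data parallelism: each rank computes gradients on its own shard of data, the gradients are summed with Allreduce, and every rank applies the same update. The sketch below shows that pattern on a toy linear model with mpi4py; it is an illustration of the technique, not the Deep Speech training code.

# Illustrative sketch: data-parallel training with MPI Allreduce.
# Run with, e.g.:  mpirun -np 4 python this_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(rank)         # each rank sees different data
w = np.zeros(4)                           # model weights, replicated on every rank
true_w = np.array([1.0, -2.0, 0.5, 3.0])

for step in range(100):
    x = rng.standard_normal((32, 4))      # this rank's mini-batch shard
    y = x @ true_w
    grad = 2 * x.T @ (x @ w - y) / len(x) # local gradient of the squared error

    global_grad = np.empty_like(grad)
    comm.Allreduce(grad, global_grad, op=MPI.SUM)  # sum gradients across all ranks
    w -= 0.05 * (global_grad / size)      # identical update applied everywhere

if rank == 0:
    print("learned weights:", w)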
 
Topics:
Artificial Intelligence and Deep Learning
Type:
Talk
Event:
GTC Silicon Valley
Year:
2015
Session ID:
S5631
 
Abstract:
We'll present a new algorithm for in-place array transposition. The algorithm is useful for in-place transposes of large matrices, as well as in-place conversions between Arrays of Structures and Structures of Arrays. The simple structure of this algorithm enables full memory bandwidth accesses to Arrays of Structures. We'll discuss the algorithm, as well as several implementations on GPUs and CPUs.
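For readers unfamiliar with the problem, the sketch below shows what an in-place transpose looks like in the simple square case, where swapping elements across the diagonal suffices. It is only an illustration of the term: the algorithm in the talk handles the much harder rectangular case, which is what enables in-place conversion between Arrays of Structures and Structures of Arrays.

# Illustrative sketch: in-place transpose of a square matrix by swapping
# elements across the diagonal. The rectangular case covered in the talk
# requires a genuinely different algorithm and is not shown here.
import numpy as np

def transpose_square_inplace(a):
    n = a.shape[0]
    assert a.shape == (n, n), "this sketch covers only the square case"
    for i in range(n):
        for j in range(i + 1, n):
            a[i, j], a[j, i] = a[j, i], a[i, j]   # swap across the diagonal
    return a

a = np.arange(16).reshape(4, 4)
expected = a.copy().T                             # reference result
transpose_square_inplace(a)
print(np.array_equal(a, expected))                # True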
 
Topics:
Numerical Algorithms & Libraries
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4664
 
Abstract:
Copperhead is a data-parallel language for GPU programming, embedded in Python, which aims to provide both a productive programming environment and excellent computational efficiency. Copperhead programs are written in a small, restricted subset of the Python language, using standard constructs like map and reduce, along with traditional data-parallel primitives like scan and sort. Copperhead programs interoperate with existing Python numerical and visualization libraries such as NumPy, SciPy, and Matplotlib. In this talk, we will discuss the Copperhead language, the open-source Copperhead runtime, and selected example programs.
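To give a flavor of the restricted Python subset, here is a sketch in the style of the canonical axpy example from the Copperhead publications and the open-source project. The copperhead import and the @cu decorator are taken from those sources and assumed here rather than verified against a particular release; the body is an ordinary map over a lambda.

# Sketch in the style of the published Copperhead examples (API assumed,
# not verified against a specific release). @cu marks a function written
# in the restricted Python subset for compilation to the GPU.
from copperhead import cu
import numpy as np

@cu
def axpy(a, x, y):
    # Data-parallel body built on the primitive `map`.
    return map(lambda xi, yi: a * xi + yi, x, y)

x = np.arange(100, dtype=np.float64)
y = np.ones(100, dtype=np.float64)
print(axpy(2.0, x, y)[:5])   # executes via the Copperhead runtime when installed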
 
Topics:
Programming Languages
Type:
Talk
Event:
GTC Silicon Valley
Year:
2012
Session ID:
S2525
 
Speakers:
Bryan Catanzaro
- University of California, Berkeley
Abstract:
Learn how to write Python programs that execute highly efficiently on GPUs using Copperhead, a data-parallel Python runtime. Using standard Python constructs like map and reduce, we will see how to construct data-parallel computations and embed them in Python programs that interoperate with numerical and visualization libraries such as NumPy, SciPy, and Matplotlib. We will examine how to express computations using Copperhead, explore the performance of Copperhead programs running on GPUs, and discuss Copperhead's runtime model, which enables data-parallel execution from within Python.
 
Topics:
Tools & Libraries
Type:
Talk
Event:
GTC Silicon Valley
Year:
2010
Session ID:
2050
 
 