GTC ON-DEMAND

 
Abstract:
In addition to the new production deployment-oriented capabilities included in the 1.0 release of PyTorch, the framework also added improved distributed training, allowing researchers and developers to easily parallelize computation across processes and clusters of machines. The PyTorch development team at Facebook has continued to improve performance, and will walk through new benchmarks and show how developers can readily take advantage of distributed training in PyTorch on NVIDIA GPUs to train their models faster.
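As a rough illustration of the workflow the session covers, here is a minimal sketch of multi-process data-parallel training with torch.distributed and DistributedDataParallel. The tiny model, the random data, and the launch assumptions (one process per GPU, started with torchrun or torch.distributed.launch so that LOCAL_RANK and the rendezvous variables are set) are illustrative, not taken from the talk.

```python
# Minimal sketch (illustrative, not the session's code): data-parallel training
# across GPUs/processes with torch.distributed and DistributedDataParallel.
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # NCCL is the usual backend for NVIDIA GPUs; the launcher (e.g. torchrun)
    # supplies RANK, WORLD_SIZE, MASTER_ADDR/PORT, and LOCAL_RANK.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and data; a real job would shard a dataset with
    # torch.utils.data.distributed.DistributedSampler.
    model = DDP(nn.Linear(128, 10).cuda(local_rank), device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(100):
        x = torch.randn(32, 128, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()  # gradients are all-reduced across ranks
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Each process trains on its own shard of the batch, and the gradient all-reduce during backward keeps the model replicas in sync.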
 
Topics:
Deep Learning & AI Frameworks
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9830
 
Abstract:

We'll discuss how to get started with PyTorch with the project's creator, Soumith Chintala. PyTorch is a fast and flexible deep learning framework that has been called a 'breath of fresh air' by researchers and developers alike for its ease of use, flexibility, and similarity to Python programming. It consists of an ndarray library that natively supports GPU execution, an automatic differentiation engine that is flexible and fast, and an optimization package for gradient-based optimization methods.
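As a rough sketch of those three pieces working together (GPU-backed tensors, automatic differentiation, and the optim package), the toy regression below is illustrative only; the shapes and hyperparameters are made up for the example.

```python
# Illustrative sketch: GPU-backed tensors, automatic differentiation, and
# gradient-based optimization with torch.optim.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# ndarray-style tensors with native GPU execution.
x = torch.randn(64, 3, device=device)
target = torch.randn(64, 1, device=device)
w = torch.randn(3, 1, device=device, requires_grad=True)
b = torch.zeros(1, device=device, requires_grad=True)

opt = torch.optim.SGD([w, b], lr=0.01)
for _ in range(200):
    pred = x @ w + b                      # forward pass
    loss = ((pred - target) ** 2).mean()  # mean squared error
    opt.zero_grad()
    loss.backward()                       # autograd computes dloss/dw, dloss/db
    opt.step()                            # gradient-based update
```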

 
Topics:
Deep Learning & AI Frameworks, Artificial Intelligence and Deep Learning
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8817
 
Abstract:

In this session, you'll be introduced to a new framework for scientific computing, aimed mainly at deep learning workloads. The framework consists of an ndarray library that natively supports GPU execution, an automatic differentiation engine that is flexible and fast, and an optimization package for gradient-based optimization methods. We'll discuss practical workflows and our features on top of Python multiprocessing for efficient parallel data loaders, and finally we'll take a brief look at our upcoming just-in-time tensor compiler, which fuses computations and executes them more efficiently.
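The parallel data loading mentioned above is exposed through torch.utils.data; a minimal sketch follows, with a placeholder dataset standing in for real file decoding (the class name and sizes are made up for illustration).

```python
# Illustrative sketch: a parallel data loader built on Python multiprocessing
# via torch.utils.data.DataLoader worker processes.
import torch
from torch.utils.data import Dataset, DataLoader

class RandomImages(Dataset):
    """Placeholder dataset; a real one would load and decode files on disk."""
    def __len__(self):
        return 10_000
    def __getitem__(self, idx):
        return torch.randn(3, 224, 224), idx % 10

if __name__ == "__main__":
    loader = DataLoader(
        RandomImages(),
        batch_size=64,
        shuffle=True,
        num_workers=4,    # worker processes prepare batches in parallel
        pin_memory=True,  # page-locked host memory speeds up host-to-GPU copies
    )
    for images, labels in loader:
        images = images.cuda(non_blocking=True)
        labels = labels.cuda(non_blocking=True)
        # ... forward/backward pass would go here ...
        break
```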

 
Topics:
Computer Vision, Tools & Libraries
Type:
Talk
Event:
GTC Europe
Year:
2017
Session ID:
23373
 
Abstract:

AI research has seen many shifts in the last few years. We've seen research and deployments go from using static datasets such as ImageNet to being more dynamic and online in self-driving cars, robots, and game-playing. Paradigm shifts in AI research need new tools to enable this research. We'll introduce and talk about PyTorch -- a new deep learning framework that enables cutting-edge AI research by having a complete dynamic view of the world.
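To make the "dynamic view" concrete, here is a small illustrative module (my example, not the talk's) whose structure is decided by ordinary Python control flow at run time; in a define-by-run framework the graph is rebuilt on every forward call.

```python
# Illustrative sketch: data-dependent control flow in the forward pass.
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(16, 16)
        self.head = nn.Linear(16, 2)

    def forward(self, x):
        # Plain Python decides the depth per input; the autograd graph is
        # built as this code executes, so it can differ on every call.
        depth = int(x.abs().mean().item() * 4) + 1
        for _ in range(depth):
            x = torch.relu(self.layer(x))
        return self.head(x)

net = DynamicNet()
out = net(torch.randn(8, 16))
out.sum().backward()  # gradients flow through whatever graph was just built
```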

 
Topics:
Artificial Intelligence and Deep Learning
Type:
Talk
Event:
GTC Silicon Valley
Year:
2017
Session ID:
S7784
 
Abstract:

Facebook AI Research (FAIR) in partnership with NVIDIA has designed a scale-out infrastructure built on NVIDIA DGX-1. This initiative began with an extensive evaluation of design approaches for multi-system scale, as well as considerations for networking and storage supporting one of the world's largest DGX-1 clusters. Attend this session to gain valuable insights into how one of the world's leading AI innovators is building a scale-out infrastructure for deep learning, learn architectural best practices, and participate in Q&A with featured panelists from FAIR and NVIDIA.

 
Topics:
Artificial Intelligence and Deep Learning
Type:
Talk
Event:
GTC Silicon Valley
Year:
2017
Session ID:
S7815
 
Abstract:

Deep learning is an emerging subfield of machine learning, often involving compute-intensive but embarrassingly parallel problems. We'll give a brief background on deep learning, discuss the typical computational workloads from a systems perspective, and finally give an overview of building deep learning systems that scale over multiple GPUs, machines, and clusters. We'll also discuss the current frameworks and tools used in the deep learning space, such as Torch, Theano, TensorFlow, Caffe, and MXNet.
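As a single-machine starting point for that multi-GPU scaling, the sketch below uses PyTorch's nn.DataParallel to split each batch across the visible GPUs (the model and shapes are placeholders, and the framework choice is mine for consistency with the other sketches in this listing; multi-machine scaling would instead use a distributed setup like the one sketched earlier).

```python
# Illustrative sketch: single-node multi-GPU scaling with nn.DataParallel.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10))
if torch.cuda.device_count() > 1:
    # Splits each input batch across the visible GPUs and gathers the outputs.
    model = nn.DataParallel(model)
model = model.cuda()

x = torch.randn(512, 256).cuda()
logits = model(x)    # computed across all available GPUs
print(logits.shape)  # torch.Size([512, 10])
```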

 
Topics:
Artificial Intelligence and Deep Learning, Intelligent Machines, IoT & Robotics
Type:
Talk
Event:
GTC Washington D.C.
Year:
2016
Session ID:
DCS16168
 
Abstract:
We'll discuss Torch from a high-level perspective, covering its usage across the industry among deep learning giants such as Google DeepMind, Facebook AI Research, and Twitter Cortex. We'll present the current state of Torch as a research and production framework for deep learning models, and finally we'll present our long-term vision.
 
Topics:
Artificial Intelligence and Deep Learning
Type:
Talk
Event:
GTC Silicon Valley
Year:
2016
Session ID:
S6798
 
Abstract:
This talk provides a brief overview of deep learning research and the challenges involved in scaling it up across multi-GPU and multi-machine clusters while providing software that is flexible enough for research settings. We discuss the clear trends emerging in deep learning from an HPC perspective and cover several examples from our work at Facebook AI Research.
 
Topics:
Artificial Intelligence and Deep Learning, Press-Suggested Sessions: AI & Deep Learning, Computer Vision
Type:
Talk
Event:
GTC Silicon Valley
Year:
2016
Session ID:
S6227
 
 