GTC ON-DEMAND

 
Abstract:
In this talk you will learn how to create efficient input pipelines tailored to your training data. As the number of projects, the number of GPUs, and dataset sizes grow, there is no one-size-fits-all input pipeline that can keep GPUs fed with data. We will examine the relationship between training throughput and image representation, provide guidance on the tradeoffs between pre-processing datasets and in-line data processing, and review results from a distributed training environment with multiple NVIDIA DGX-1s and a Pure Storage FlashBlade to highlight performance impact at scale. Learn how to minimize time to accuracy and, ultimately, time to shipping models.
 
Topics: Performance Optimization, Accelerated Data Science
Type: Sponsored Talk
Event: GTC Silicon Valley
Year: 2019
Session ID: S91025
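To make the pre-processing versus in-line tradeoff concrete, here is a minimal input-pipeline sketch in Python using tf.data; the file paths and record schema are hypothetical stand-ins, and the talk itself is not tied to any one framework. Reading shards in parallel and prefetching batches are the standard levers for keeping GPUs fed:

    import tensorflow as tf

    # Hypothetical shard paths; substitute your own pre-processed TFRecords.
    files = tf.data.Dataset.list_files("/datasets/train-*.tfrecord")

    def parse_example(serialized):
        # Assumes each record holds a JPEG-encoded image and an integer label.
        features = tf.io.parse_single_example(serialized, {
            "image": tf.io.FixedLenFeature([], tf.string),
            "label": tf.io.FixedLenFeature([], tf.int64),
        })
        image = tf.io.decode_jpeg(features["image"], channels=3)
        image = tf.image.resize(image, [224, 224])
        return image, features["label"]

    dataset = (
        files.interleave(tf.data.TFRecordDataset,
                         num_parallel_calls=tf.data.AUTOTUNE)  # overlap reads across shards
             .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)  # parallel decode/resize
             .batch(256)
             .prefetch(tf.data.AUTOTUNE)  # prepare the next batch while the GPU trains
    )

Pre-processing a dataset offline (e.g., resizing images once) shifts work out of the map step at the cost of extra storage; in-line processing keeps storage smaller but spends CPU at training time.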
 
Abstract:
In the new era of intelligence, enterprises will tap the power of AI for faster innovation and a competitive edge. Yet AI requires a completely different approach to scale-out infrastructure, and its complexities are holding enterprises back from moving forward in an AI-first world. Organisations already using AI today, the trailblazers of their industries, spend countless months building and testing infrastructure and developing best practices, while the rest of the industry struggles to get started. Learn how storage and networking hardware affect the ability to keep GPUs fueled with data. In this talk, we present a formula for deep learning infrastructure inspired by customers who blazed the trail of AI: a way to ensure storage, networking, and compute work together seamlessly. We will examine ways to evaluate AI hardware components and show how simplifying hardware can unlock new possibilities yet to be explored within your data.
 
Topics: Artificial Intelligence and Deep Learning
Type: Talk
Event: AI Conference Australia
Year: 2018
Session ID: AUS8004
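One rough way to evaluate whether a storage component can keep GPUs fueled is simple arithmetic on training throughput. The figures below are illustrative assumptions for a single node, not numbers from the talk:

    # Back-of-envelope check: required storage read bandwidth for one node.
    gpus = 8                       # e.g., a single DGX-1
    images_per_sec_per_gpu = 2000  # assumed per-GPU training throughput
    bytes_per_image = 110 * 1024   # assumed average encoded image size (~110 KB)

    required_bw = gpus * images_per_sec_per_gpu * bytes_per_image
    print(f"Sustained read bandwidth needed: {required_bw / 1e9:.2f} GB/s")

If the storage tier cannot sustain that rate, the GPUs idle regardless of compute capacity; repeating the exercise for the network fabric gives the matching link-speed requirement.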
 
Abstract:
Learn from real-world case studies where large corpora of unstructured data were indexed and organized by deep-learning pipelines. Organizations are capturing and saving exponentially more unstructured data. As a tactic to organize this data, many teams turn to manual data classification, but that human-in-the-loop process can be cost-prohibitive and introduce metadata inaccuracies. By applying deep learning and cluster-based labeling, we can index petabyte-scale datasets and rapidly organize unstructured data for downstream model building and analysis. This session will teach you how to quickly switch to training on all the contents of your data lake, rather than just a subset. We will use case studies with real-world datasets to walk through best practices for a deep learning indexing pipeline.
 
Topics: Accelerated Data Science, Data Center & Cloud Infrastructure
Type: Talk
Event: GTC Silicon Valley
Year: 2018
Session ID: S8962
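A minimal sketch of cluster-based labeling in Python, with random vectors standing in for embeddings produced by a pretrained network over an unlabeled corpus (the talk's actual pipeline and tooling may differ):

    import numpy as np
    from sklearn.cluster import KMeans

    # Illustrative stand-in: in practice these would be feature vectors
    # extracted by a pretrained network from the unlabeled data lake.
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(10_000, 512)).astype(np.float32)

    # Group similar items so a human can label whole clusters, not single files.
    kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(embeddings)

    # Index: a cluster id per item, plus the item nearest each centroid
    # as the first candidate for human review.
    labels = kmeans.labels_
    distances = kmeans.transform(embeddings)  # item-to-centroid distances
    exemplars = distances.argmin(axis=0)      # one representative per cluster
    print(f"{kmeans.n_clusters} clusters; first exemplar indices: {exemplars[:5]}")

Labeling one exemplar per cluster and propagating that label to the cluster's members is what lets a small amount of human effort index a very large corpus.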
 