GTC ON-DEMAND

 
Abstract:
We'll discuss the RAPIDS ecosystem, which is accelerating the data science workflow by keeping data and computations on GPUs. We're able to go from ingestion to insights more quickly, with larger workloads. Within RAPIDS, cuML provides a sklearn-like application programming interface (API) and cuGraph a NetworkX-like API of GPU-accelerated algorithms. While over 100x speedup is possible on a single GPU, the scale is bounded by the device's available memory. By scaling to multiple GPUs spread across multiple nodes, cuML and cuGraph can increase speedup even further while providing avenues to scale up and out. We'll focus on how we enabled the training and inference of machine learning and graph models on multiple nodes within cuML and cuGraph, and provide an architectural overview of our communications API, which enables direct GPU-to-GPU memory transfers. We'll conclude with examples and benchmarks.
 
Topics:
Accelerated Data Science, HPC and AI
Type:
Talk
Event:
GTC Washington D.C.
Year:
2019
Session ID:
DC91231
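
A minimal sketch of the multi-node, multi-GPU workflow this session describes, using cuML's Dask-based estimators. It assumes a RAPIDS environment with dask-cuda, dask_cudf, and cuml installed; the file pattern and column names are hypothetical, and exact module paths can vary between RAPIDS releases.

# Minimal sketch: distributed cuML K-Means over GPUs via Dask (hypothetical data/columns).
from dask.distributed import Client
from dask_cuda import LocalCUDACluster
import dask_cudf
from cuml.dask.cluster import KMeans  # multi-GPU variant of cuML K-Means

# One Dask worker per local GPU; for multiple nodes, point Client at a
# dask-scheduler that spans several machines instead of a LocalCUDACluster.
cluster = LocalCUDACluster()
client = Client(cluster)

# Partition the input across the GPU workers as a dask-cuDF DataFrame.
ddf = dask_cudf.read_csv("data-*.csv")                 # hypothetical file pattern
X = ddf[["feature_a", "feature_b"]].astype("float32")  # hypothetical columns

# fit/predict mirror scikit-learn; inter-GPU transfers go through the comms layer
# discussed in the abstract.
model = KMeans(n_clusters=8)
model.fit(X)
labels = model.predict(X)
print(labels.compute())

client.close()
cluster.close()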
 
Abstract:

We'll discuss cuML, a GPU-Accelerated library of machine learning algorithms within the RAPIDS data science ecosystem. The cuML library allows data scientists, researchers, and software engineers to run traditional ML tasks on GPUs without going into the details of CUDA programming. We'll show you how to get tremendous speed-up for traditional machine learning workloads by using APIs like Scikit-Learn with Python. We'll also provide code examples, benchmarks, and the latest news.

 
Topics:
Accelerated Data Science, Tools & Libraries
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9817
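
As a companion to the abstract above, here is a minimal single-GPU sketch of cuML's scikit-learn-style API. It assumes cudf and cuml are installed; the file path and column names are hypothetical, not part of the talk.

# Minimal sketch: single-GPU cuML with a scikit-learn-style fit/predict interface.
import cudf
from cuml.linear_model import LinearRegression  # GPU analogue of sklearn's LinearRegression

# Read the data directly into GPU memory as a cuDF DataFrame.
df = cudf.read_csv("data.csv")                        # hypothetical file
X = df[["feature_a", "feature_b"]].astype("float32")  # hypothetical feature columns
y = df["target"].astype("float32")                    # hypothetical label column

# The estimator interface mirrors scikit-learn, but fit/predict run on the GPU.
model = LinearRegression()
model.fit(X, y)
predictions = model.predict(X)
print(predictions[:10])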
 
 