GTC ON-DEMAND

 
Abstract:

The rise of GPU-accelerated data science and AI has come about through a combination of open source innovation and better tooling to support reproducible workflows. However, as the diverse array of deep learning libraries continues to mature, attention is moving to other parts of the AI pipeline, including simulation, ETL, and deployment. In this talk, I'll review open source projects that address these other areas, such as Numba, for implementing custom simulations and data transformations on the GPU, and PyGDF, for GPU-accelerated dataframes. I'll discuss how the Anaconda Distribution and its conda packaging system help data scientists create reproducible environments and deploy models. Finally, I'll talk about how Anaconda Enterprise allows data science teams to collaborate efficiently with each other on GPU-accelerated projects and supports AI workflows from data exploration all the way to deployment.

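As a rough illustration of the kind of custom GPU data transformation the abstract attributes to Numba (a sketch of ours, not material from the session; the transform and array sizes are arbitrary):

    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale_and_shift(x, out, scale, shift):
        # One thread per element; guard against threads past the end of the array.
        i = cuda.grid(1)
        if i < x.size:
            out[i] = x[i] * scale + shift

    x = np.random.rand(1_000_000).astype(np.float32)
    out = np.empty_like(x)
    threads_per_block = 256
    blocks = (x.size + threads_per_block - 1) // threads_per_block
    # Numba JIT-compiles the kernel for the GPU and handles host/device copies.
    scale_and_shift[blocks, threads_per_block](x, out, 2.0, 1.0)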
 
Topics: Accelerated Data Science, Artificial Intelligence and Deep Learning
Type: Talk
Event: GTC Washington D.C.
Year: 2018
Session ID: DC8174
 
Abstract:

Many data scientists use Anaconda and Python to increase their productivity, but don't realize they can leverage these technologies for scalable analysis. We'll survey the landscape of Python tools that empower data scientists to take their work to the next level, harnessing the growing computing capability of GPUs and clusters. We'll show the power of Python to drive distributed computation with Spark and Dask, execute large-scale machine learning with TensorFlow, and visualize large datasets right in the web browser.

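A minimal sketch of driving parallel computation from Python with Dask, one of the tools surveyed here (the array and reduction are placeholders we chose, not the session's examples):

    import dask.array as da

    # A large random array split into chunks that Dask schedules in parallel,
    # on all local cores or across a cluster via dask.distributed.
    x = da.random.random((20_000, 20_000), chunks=(2_000, 2_000))
    centered_std = (x - x.mean(axis=0)).std(axis=0)  # builds a lazy task graph
    print(centered_std.compute())                    # triggers parallel execution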
 
Topics: Artificial Intelligence and Deep Learning, Tools & Libraries
Type: Talk
Event: GTC Silicon Valley
Year: 2017
Session ID: S7785
 
Abstract:
We'll demonstrate how Python and the Numba JIT compiler can be used for GPU programming that easily scales from your workstation to an Apache Spark cluster. Using an example application, we'll show how to write CUDA kernels in Python, compile and call them using the open source Numba JIT compiler, and execute them both locally and remotely with Spark. We'll also describe techniques for managing Python dependencies in a Spark cluster with the tools in the Anaconda Platform. Finally, we'll conclude with some tips and tricks for getting the best performance when doing GPU computing with Spark and Python.
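
An illustrative sketch of the pattern described here, under our own assumptions rather than the session's example application: a CUDA kernel written in Python with Numba and applied per partition of a Spark RDD on each executor's GPU.

    import numpy as np
    from numba import cuda
    from pyspark import SparkContext

    @cuda.jit
    def square(x, out):
        i = cuda.grid(1)
        if i < x.size:
            out[i] = x[i] * x[i]

    def gpu_square_partition(values):
        # Gather the partition into a NumPy array and run it through the kernel.
        arr = np.fromiter(values, dtype=np.float64)
        if arr.size == 0:
            return []
        out = np.empty_like(arr)
        tpb = 256
        square[(arr.size + tpb - 1) // tpb, tpb](arr, out)
        return out.tolist()

    sc = SparkContext(appName="numba-cuda-sketch")
    rdd = sc.parallelize(range(1_000_000), numSlices=8).map(float)
    print(rdd.mapPartitions(gpu_square_partition).sum())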
 
Topics: Programming Languages, Tools & Libraries, Big Data Analytics
Type: Talk
Event: GTC Silicon Valley
Year: 2016
Session ID: S6413
 
Abstract:
We demonstrate how to implement the densest k-subgraph algorithm of Papailiopoulos et al. using the Numba CUDA compiler for Python. With the rise of social networks, more data scientists want to study the connections within and between the communities that dynamically organize on the Internet. Python is a very productive language for data scientists, but, on its own, may not provide the performance needed to analyze big data sets. To bridge this gap, the Numba compiler allows CUDA kernels to be written directly in the Python language and compiled for GPU execution. Using the densest k-subgraph algorithm as an example, we will show how the agility of Python can be combined with the high performance of GPU computing for graph analytics.
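
Not the densest k-subgraph algorithm itself, but a hedged sketch of the kind of per-vertex building block such graph analytics repeat on the GPU: a Numba CUDA kernel counting each vertex's edges into a candidate subset, assuming (for illustration only) a dense adjacency matrix.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def edges_into_subset(adj, in_subset, degree_out):
        v = cuda.grid(1)  # one thread per vertex
        if v < adj.shape[0]:
            count = 0
            for u in range(adj.shape[1]):
                if in_subset[u] != 0 and adj[v, u] != 0:
                    count += 1
            degree_out[v] = count

    n = 4096
    adj = (np.random.rand(n, n) < 0.01).astype(np.uint8)     # random sparse-ish graph
    np.fill_diagonal(adj, 0)
    in_subset = (np.random.rand(n) < 0.25).astype(np.uint8)  # candidate vertex subset
    degree_out = np.zeros(n, dtype=np.int32)
    tpb = 128
    edges_into_subset[(n + tpb - 1) // tpb, tpb](adj, in_subset, degree_out)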
 
Topics: Defense, Big Data Analytics, Programming Languages, Developer - Algorithms
Type: Talk
Event: GTC Silicon Valley
Year: 2015
Session ID: S5419
 
Abstract:

This talk will describe the design and development of Chroma, a Python package for fast Monte Carlo simulation of individual optical photons propagating through particle physics experiments. Chroma implements standard ray-tracing techniques with Python and PyCUDA to provide a versatile, fast, and physically accurate optical model that is more than 100x faster at photon propagation than the standard particle physics simulation package, GEANT4. Chroma was initially developed by a small academic team of only two people; I'll discuss lessons learned in the development process and the impact of Python and PyCUDA on scientist-developers.

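A minimal sketch of the PyCUDA workflow Chroma is built on: CUDA C source compiled at runtime from Python and launched on NumPy arrays (the kernel below is a trivial placeholder, not Chroma's photon-propagation code):

    import numpy as np
    import pycuda.autoinit              # creates a CUDA context
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    # CUDA C compiled on the fly; Chroma generates far more elaborate kernels.
    mod = SourceModule("""
    __global__ void scale(float *v, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) v[i] *= factor;
    }
    """)
    scale = mod.get_function("scale")

    v = np.random.rand(1024).astype(np.float32)
    scale(drv.InOut(v), np.float32(2.0), np.int32(v.size),
          block=(256, 1, 1), grid=(4, 1))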
 
Topics: Computational Physics, Programming Languages, Rendering & Ray Tracing
Type: Talk
Event: GTC Silicon Valley
Year: 2013
Session ID: S3304
 
 