GTC ON-DEMAND

 
Abstract:
We demonstrate how to implement the densest k-subgraph algorithm by Papailiopoulos et al., using the Numba CUDA compiler for Python. With the rise of social networks, more data scientists want to study the connections within and between the communities that dynamically organize on the Internet. Python is a very productive language for data scientists, but, on its own, may not provide the performance needed to analyze big data sets. To bridge this gap, the Numba compiler allows CUDA kernels to be written directly in the Python language and compiled for GPU execution. Using the densest k-subgraph algorithm as an example, we will show how the agility of Python can be combined with the high performance of GPU computing for graph analytics.
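As a hedged illustration (not the speakers' implementation), the sketch below shows the style of kernel the abstract describes: a CUDA kernel written directly in Python with Numba, here computing per-vertex degrees from an edge list, a basic building block for density-oriented graph analytics such as densest k-subgraph. The kernel name and toy graph are invented for the example.

import numpy as np
from numba import cuda

@cuda.jit
def vertex_degrees(src, dst, degrees):
    # One thread per edge; atomic adds accumulate each endpoint's degree.
    i = cuda.grid(1)
    if i < src.size:
        cuda.atomic.add(degrees, src[i], 1)
        cuda.atomic.add(degrees, dst[i], 1)

# Toy undirected graph: 4 vertices, 5 edges (illustrative data only).
src = np.array([0, 0, 1, 1, 2], dtype=np.int32)
dst = np.array([1, 2, 2, 3, 3], dtype=np.int32)
degrees = np.zeros(4, dtype=np.int32)

threads = 128
blocks = (src.size + threads - 1) // threads
vertex_degrees[blocks, threads](src, dst, degrees)  # Numba handles host/device copies
print(degrees)  # expected: [2 3 3 2]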
 
Topics:
Defense, Big Data Analytics, Programming Languages, Developer - Algorithms
Type:
Talk
Event:
GTC Silicon Valley
Year:
2015
Session ID:
S5419
 
Abstract:
Learn about high-level GPU programming in NumbaPro to reduce development time and produce high-performance data-parallel code with the ease of Python. This tutorial is for beginning to intermediate CUDA programmers who already know Python. The audience will learn about (1) high-level Python decorators that turn simple Python functions into data-parallel GPU kernels without any knowledge of the CUDA architecture; (2) CUDA library bindings that can be used as drop-in replacements to speed up existing applications; and (3) how to reuse existing CUDA C/C++ code in Python with JIT linking.
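A hedged sketch of point (1) above: NumbaPro's GPU features were later folded into open-source Numba, so this example uses numba.vectorize with target='cuda'; the NumbaPro-era import and target names differed slightly, but the decorator-driven workflow is the same. The function name and data are illustrative.

import numpy as np
from numba import vectorize

@vectorize(['float32(float32, float32)'], target='cuda')
def scaled_add(x, y):
    # Scalar expression; the decorator generates and launches a GPU kernel
    # that applies it elementwise over whole arrays.
    return 2.0 * x + y

x = np.arange(1_000_000, dtype=np.float32)
y = np.ones_like(x)
out = scaled_add(x, y)  # host/device transfers and the kernel launch are handled automatically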
 
Topics:
Programming Languages
Type:
Tutorial
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4413
 
Abstract:
Our objective is to design a high-level data-parallel language extension to Python on GPUs. This language extension cooperates with the CPython implementation and uses Python syntax for describing data-parallel computations. The combination of rich library support and language simplicity makes Python ideal for subject matter experts to rapidly develop powerful applications. Python enables fast turnaround time and the flexibility to adapt custom analytic pipelines to immediate demands. However, CPython has been criticized as slow, and its global interpreter lock (GIL) makes it difficult to take advantage of parallel hardware. To solve this problem, Continuum Analytics has developed LLVM-based JIT compilers for CPython. Numba is the open-source JIT compiler; NumbaPro is the proprietary compiler that adds CUDA GPU support. We aim to extend and improve the current GPU support in NumbaPro to further increase the scalability and portability of Python-based GPU programming.
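A hedged sketch of the GIL point above, using the open-source Numba compiler mentioned in the abstract: with nogil=True the compiled function releases the global interpreter lock, so ordinary Python threads can run it in parallel on CPU cores (the GPU targets build on the same JIT machinery). The function name and array sizes are invented for illustration.

import numpy as np
from numba import njit
from concurrent.futures import ThreadPoolExecutor

@njit(nogil=True)
def row_sums(block, out):
    # Plain Python loops, compiled to machine code by LLVM on first call.
    for i in range(block.shape[0]):
        s = 0.0
        for j in range(block.shape[1]):
            s += block[i, j]
        out[i] = s

a = np.random.rand(4000, 4000)
out = np.empty(a.shape[0])
bounds = np.linspace(0, a.shape[0], 5, dtype=np.int64)

with ThreadPoolExecutor(max_workers=4) as pool:
    # Because the jitted function releases the GIL, the four workers
    # execute concurrently instead of serializing on the interpreter lock.
    futures = [pool.submit(row_sums, a[b:e], out[b:e])
               for b, e in zip(bounds[:-1], bounds[1:])]
    for f in futures:
        f.result()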
 
Topics:
Big Data Analytics, Programming Languages, Large Scale Data Analytics, Defense
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4608
 