GTC ON-DEMAND

Abstract:

Spark is a powerful, scalable, real-time data analytics engine that is fast becoming the de facto hub for data science and big data. In parallel, GPU clusters are fast becoming the default way to quickly develop and train deep learning models. As data science teams and data-savvy companies mature, they'll need to invest in both platforms if they intend to leverage both big data and artificial intelligence for competitive advantage. We'll discuss and demonstrate TensorFlowOnSpark, CaffeOnSpark, DeepLearning4J, IBM's SystemML, and Intel's BigDL, as well as distributed versions of deep learning frameworks such as TensorFlow, Caffe, and Torch.
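For orientation, a minimal sketch of the pattern these frameworks share: Spark distributes data partitions to executors, and each executor runs a local training step. The worker function, column names, and training logic below are placeholders for illustration, not the TensorFlowOnSpark, CaffeOnSpark, or BigDL APIs.

    # Minimal sketch: Spark hands each partition to an executor, which runs a
    # local per-partition step. The worker below is a placeholder, not a real
    # TensorFlowOnSpark/BigDL call.
    from pyspark.sql import SparkSession

    def train_partition(rows):
        # Placeholder worker: a real setup would build a TensorFlow/Caffe/Torch
        # model on the executor's GPU and feed it this partition's rows.
        batch = [(r.features, r.label) for r in rows]
        yield ("examples_seen", len(batch))

    spark = SparkSession.builder.appName("dl-on-spark-sketch").getOrCreate()
    df = spark.createDataFrame(
        [([0.1, 0.2], 0), ([0.3, 0.4], 1)], ["features", "label"])
    print(df.rdd.mapPartitions(train_partition).collect())
    spark.stop()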
 
Topics:
Artificial Intelligence and Deep Learning, AI Startup, Performance Optimization
Type:
Talk
Event:
GTC Silicon Valley
Year:
2017
Session ID:
S7737
 
Abstract:

Today, database performance records are being shattered by innovative new ways of tackling big data problems. We're calling it "fast data," and we're leveraging the power of GPUs to query 40-billion-row datasets in just milliseconds. Thanks to a collaboration between MapD, Bitfusion, IBM Cloud, and NVIDIA, no data problem is too big or complex to process. Using Bitfusion's Boost software, MapD was able to leverage over 64 NVIDIA Tesla GPUs across 16 IBM Cloud servers to filter and aggregate multi-billion-row datasets in just milliseconds. Seeing is believing. Come find out why GPUs are quickly becoming the engine for the next generation of enterprise computing applications.
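As a hedged illustration of the kind of filter-and-aggregate workload described above, the sketch below issues a SQL query from Python through the pymapd client; the connection details, table, and column names are assumptions for illustration, not the session's benchmark.

    # Minimal sketch of a filter-and-aggregate query against a GPU database,
    # issued through the pymapd client. Connection details, table, and columns
    # are illustrative assumptions.
    from pymapd import connect

    con = connect(user="mapd", password="HyperInteractive",
                  host="localhost", dbname="mapd")
    cur = con.cursor()
    cur.execute("""
        SELECT carrier, COUNT(*) AS flights, AVG(arr_delay) AS avg_delay
        FROM flights
        WHERE dep_year = 2016
        GROUP BY carrier
        ORDER BY flights DESC
    """)
    for carrier, flights, avg_delay in cur:
        print(carrier, flights, avg_delay)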
 
Topics:
HPC and Supercomputing, Big Data Analytics, Data Center & Cloud Infrastructure
Type:
Talk
Event:
GTC Europe
Year:
2016
Session ID:
SEU6232
 
Abstract:
We'll share different ways of packaging GPU applications as containers versus traditional options and shed light on performance versus portability. Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries -- anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in. We'll provide some background on Linux containers, their applicability to heterogeneous platforms (GPUs in particular), and challenges in adoption, and conclude with a demo of the whole process of containerizing, deploying, and managing GPU applications for the cloud.
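For a rough sense of what running a GPU application in a container looks like, here is a minimal sketch using the docker SDK for Python (docker-py); the image, command, and NVIDIA runtime flag are illustrative assumptions, not the session's demo.

    # Minimal sketch of launching a containerized GPU application from Python
    # with the docker SDK (docker-py). Requires the NVIDIA container runtime
    # on the host so the container can see the GPU.
    import docker

    client = docker.from_env()
    logs = client.containers.run(
        "nvidia/cuda:11.0-base", "nvidia-smi",
        runtime="nvidia", remove=True)
    print(logs.decode())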
 
Topics:
Data Center & Cloud Infrastructure, GPU Virtualization, HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2016
Session ID:
S6491
 
 