GTC ON-DEMAND

 
Abstract:
We'll demonstrate some of the design choices required to provide a distributed, in-memory, GPU-accelerated, parallel mathematics library: distributed mathematics (dMath). The library covers some of the most common functionality required for effective scaling of deep learning pipelines across a variety of recognition and understanding tasks. The core of the problem is efficient implementations of common basic linear algebra subprograms (BLAS) and specific abstractions for learning at scale.
 
Topics: Artificial Intelligence and Deep Learning
Type: Talk
Event: GTC Silicon Valley
Year: 2016
Session ID: S6669
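The abstract mentions distributing BLAS operations for scaling. As a minimal sketch of the underlying idea (not dMath's actual API, and all names here are hypothetical), the following shows a row-block-partitioned matrix multiply: each worker owns a contiguous block of rows of A, B is replicated, and the partial products concatenate into the full result C = A·B.

```python
# Illustrative sketch of row-block data decomposition for distributed GEMM.
# In a real distributed BLAS, each block would live on a separate GPU/node;
# here the "workers" are simulated sequentially in pure Python.

def matmul(A, B):
    """Naive dense GEMM on nested lists: C = A @ B."""
    k, m = len(B), len(B[0])
    return [[sum(row[p] * B[p][j] for p in range(k)) for j in range(m)]
            for row in A]

def split_rows(A, workers):
    """Partition A into `workers` contiguous row blocks."""
    n = len(A)
    size = (n + workers - 1) // workers  # ceiling division
    return [A[i:i + size] for i in range(0, n, size)]

def distributed_matmul(A, B, workers=2):
    """Each worker multiplies its row block of A by the replicated B;
    concatenating the partial results reassembles C without communication."""
    C = []
    for block in split_rows(A, workers):
        C.extend(matmul(block, B))  # in a GPU library, one kernel launch per worker
    return C

A = [[1, 2], [3, 4], [5, 6], [7, 8]]
B = [[1, 0], [0, 1]]
assert distributed_matmul(A, B) == matmul(A, B)
```

Row-block partitioning needs no inter-worker communication for the multiply itself, which is one reason it is a common starting decomposition; more sophisticated schemes (2D block-cyclic layouts, as in ScaLAPACK) trade communication for better load balance on large matrices.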