GTC ON-DEMAND

Abstract:
The development of cognitive computing applications is at a critical juncture, with tough challenges but ample opportunities for great breakthroughs. Many of the cognitive solutions involved are complex, and the methods required to develop them remain poorly understood. Any major breakthrough in improving this understanding will require large-scale experimentation and extensive data-driven development. In short, we are witnessing the formation of a new modality of programming and even a new modality of application execution. The IBM-Illinois Center for Cognitive Computing Systems Research (C3SR) is developing scalable cognitive solutions that embody both advanced cognitive computing workloads and heterogeneous computing systems optimized for those workloads. The two streams of research not only complement but also empower each other, and thus should be carried out in a tightly integrated fashion.
 
Topics:
HPC and Supercomputing, Accelerated Data Science
Type:
Talk
Event:
GTC Washington D.C.
Year:
2017
Session ID:
DC7140
 
Abstract:

This session introduces the third edition of "Programming Massively Parallel Processors: A Hands-on Approach." The new edition is the result of a collaboration between GPU computing experts and covers the CUDA computing platform, parallel patterns, case studies, and other programming models. Brand-new chapters cover deep learning, graph search, sparse matrix computation, histogram, and merge sort.
The tightly coupled GPU Teaching Kit contains everything needed to teach university courses and labs with GPUs.
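As a rough illustration of one of the parallel patterns named above, the sketch below shows a basic CUDA histogram kernel using atomic additions. It is a minimal, generic teaching-style example, assuming byte-valued input and 256 bins; it is not code taken from the book.

#include <cstdio>
#include <cuda_runtime.h>

// Minimal histogram kernel: each thread walks the input with a grid stride
// and uses atomicAdd to accumulate bin counts. Illustrative sketch only.
__global__ void histogram(const unsigned char *data, int n,
                          unsigned int *bins, int num_bins)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = blockDim.x * gridDim.x;
    for (; i < n; i += stride) {
        int b = data[i] % num_bins;   // map each input byte to a bin
        atomicAdd(&bins[b], 1u);      // contended but correct accumulation
    }
}

int main()
{
    const int n = 1 << 20, num_bins = 256;
    unsigned char *d_data; unsigned int *d_bins;
    cudaMalloc(&d_data, n);
    cudaMalloc(&d_bins, num_bins * sizeof(unsigned int));
    cudaMemset(d_data, 7, n);                              // dummy input: every byte is 7
    cudaMemset(d_bins, 0, num_bins * sizeof(unsigned int));
    histogram<<<256, 256>>>(d_data, n, d_bins, num_bins);
    cudaDeviceSynchronize();
    unsigned int h_bin7 = 0;
    cudaMemcpy(&h_bin7, d_bins + 7, sizeof(h_bin7), cudaMemcpyDeviceToHost);
    printf("bin 7 count = %u (expected %d)\n", h_bin7, n);
    cudaFree(d_data); cudaFree(d_bins);
    return 0;
}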

 
Topics:
HPC and Supercomputing
Type:
Talk
Event:
Supercomputing
Year:
2016
Session ID:
SC6114
 
Abstract:

As the performance and functionality requirements of interdisciplinary computing applications rise, industry demand grows for new graduates familiar with GPU-accelerated computing. This webinar introduces a comprehensive set of academic labs and university teaching materials for courses covering introductory and advanced parallel programming concepts. The materials start with the basics of programming GPUs with CUDA and progress to advanced topics such as optimization, architectural enhancements, and integration with a variety of programming languages.
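For context, the canonical first exercise in such a course is element-wise vector addition. The sketch below is a generic, minimal example of that kind of lab, assuming unified (managed) memory; it is not material taken from the Teaching Kit itself.

#include <cstdio>
#include <cuda_runtime.h>

// First-lab style kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 16;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);   // unified memory keeps the example short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f (expected 3.0)\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}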

 
Topics:
Intelligent Machines, IoT & Robotics
Type:
Webinar
Event:
GTC Webinars
Year:
2016
Session ID:
GTCE122
 
Abstract:

I will present two synergistic systems that enable productive development of scalable, efficient data-parallel code. Triolet is a functional programming system with Python syntax in which library implementers direct the compiler to perform parallelization and deep optimization. Tangram is an algorithm framework that supports effective parallelization of linear recurrence computation.
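As a rough illustration of the kind of computation Tangram targets (this is not Tangram's API, just the underlying idea), a first-order linear recurrence x[i] = a[i]*x[i-1] + b[i] can be parallelized by treating each step as an affine map and scanning the maps with an associative composition, for example via Thrust:

#include <cstdio>
#include <cuda_runtime.h>
#include <thrust/device_vector.h>
#include <thrust/scan.h>

// Compose two affine maps applied in sequence: first p (x -> p.x*x + p.y),
// then q (x -> q.x*x + q.y). Composition is associative, which is what makes
// a parallel scan valid for first-order linear recurrences.
struct Compose {
    __host__ __device__
    float2 operator()(const float2 &p, const float2 &q) const {
        return make_float2(q.x * p.x, q.x * p.y + q.y);
    }
};

int main()
{
    const int n = 8;
    // Recurrence x[i] = a[i]*x[i-1] + b[i] for i = 1..n; maps[i-1] = (a[i], b[i]).
    thrust::device_vector<float2> maps(n);
    for (int i = 0; i < n; ++i)
        maps[i] = make_float2(0.5f, 1.0f);   // toy coefficients a = 0.5, b = 1.0

    // Inclusive scan composes the maps; element i-1 now maps x[0] directly to x[i].
    thrust::inclusive_scan(maps.begin(), maps.end(), maps.begin(), Compose());

    float x0 = 2.0f;
    float2 m = maps[n - 1];
    printf("x after %d steps = %f\n", n, m.x * x0 + m.y);
    return 0;
}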

 
Topics:
HPC and Supercomputing
Type:
Talk
Event:
Supercomputing
Year:
2013
Session ID:
SC3110
 
Abstract:

Attend this session to learn new techniques for building a scalable and numerically stable tridiagonal solver for GPUs. Numerical stability has been missing from existing GPU-based tridiagonal solvers. This work presents a scalable, numerically stable, high-performance tridiagonal solver that delivers stable solutions of quality comparable to Intel MKL and MATLAB, at speeds comparable to the GPU tridiagonal solvers in existing packages such as cuSPARSE. We present and analyze two key optimization strategies: a high-throughput data layout transformation for memory efficiency, and a dynamic tiling approach that reduces the memory access footprint caused by branch divergence. Several applications benefit substantially from this solver; as a case study, Empirical Mode Decomposition, a critical method in time-frequency analysis, demonstrates its usability.
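For reference, the standard sequential baseline for tridiagonal systems is the Thomas algorithm, sketched below on the host. This illustrates the problem being solved, not the GPU solver presented in the talk; like other non-pivoting methods, it is numerically safe mainly for diagonally dominant systems, which is exactly the stability gap the talk addresses.

#include <cstdio>
#include <vector>

// Thomas algorithm: solves a tridiagonal system with sub-diagonal a,
// diagonal b, super-diagonal c, and right-hand side d in O(n).
// Sequential reference only; the solution overwrites d.
void thomas(const std::vector<double> &a, std::vector<double> b,
            const std::vector<double> &c, std::vector<double> &d)
{
    const int n = (int)b.size();
    for (int i = 1; i < n; ++i) {            // forward elimination
        double w = a[i] / b[i - 1];
        b[i] -= w * c[i - 1];
        d[i] -= w * d[i - 1];
    }
    d[n - 1] /= b[n - 1];                    // back substitution
    for (int i = n - 2; i >= 0; --i)
        d[i] = (d[i] - c[i] * d[i + 1]) / b[i];
}

int main()
{
    // Small diagonally dominant example; exact solution is all ones.
    std::vector<double> a = {0, -1, -1, -1};   // a[0] unused
    std::vector<double> b = {4, 4, 4, 4};
    std::vector<double> c = {-1, -1, -1, 0};   // c[n-1] unused
    std::vector<double> d = {3, 2, 2, 3};
    thomas(a, b, c, d);
    for (double x : d) printf("%f ", x);
    printf("\n");
    return 0;
}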

 
Topics:
Developer - Algorithms
Type:
Talk
Event:
GTC Silicon Valley
Year:
2013
Session ID:
S3191
 
Speakers:
Charles Hansen (University of Utah), Wen-mei Hwu (University of Illinois), Yangdong Deng (Tsinghua University)
Abstract:
Come hear about the groundbreaking research taking place at the CUDA Centers of Excellence, an elite group of world-renowned research universities that are pushing the frontier of massively parallel computing using CUDA. Researchers from these top institutions will survey cutting-edge research that is advancing the state of the art in GPU computing and dozens of application fields across science and engineering. In this session we will hear from Dr. Wen-mei Hwu of the University of Illinois at Urbana-Champaign, Professor Yangdong Deng of Tsinghua University, and Dr. Charles D. Hansen of the University of Utah.
 
Topics:
General Interest
Type:
Talk
Event:
GTC Silicon Valley
Year:
2010
Session ID:
S102264
 
Abstract:
GPU computing is transforming the extreme high end of supercomputing. NVIDIA Tesla GPUs already power several of the world's sixty fastest supercomputers, and this trend is accelerating. This three-hour "super session" features some of the world's premier supercomputing experts, who will discuss their experience building and deploying GPU-based supercomputing clusters and present case studies of designing and porting codes for "big iron" GPU supercomputers.
 
Topics:
HPC and AI
Type:
Talk
Event:
GTC Silicon Valley
Year:
2009
Session ID:
S09049
 
 