GTC ON-DEMAND

 
Abstract:

Learn about the opportunities and pitfalls of running billion-atom science at scale on ORNL's Summit, the world's fastest GPU-accelerated supercomputer. We'll talk about the latest performance improvements and scaling results for NAMD, a highly parallel molecular dynamics code and one of the first codes to run on Summit. NAMD performs petascale biomolecular simulations, which have included a 64-million-atom model of the HIV virus capsid, and previously ran on the GPU-accelerated Cray XK7 Blue Waters and ORNL Titan machines. Summit features IBM POWER9 CPUs, NVIDIA Volta GPUs, and the NVLink CPU-GPU interconnect.
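
The GPU work behind this scaling story centers on offloading nonbonded force evaluation. As a rough, hypothetical sketch of that kind of kernel (not NAMD's actual implementation, which uses patch and tile decomposition and far heavier tuning), here is a minimal brute-force cutoff Lennard-Jones force kernel in CUDA; the names, constants, and toy input are illustrative assumptions only.

```cuda
// Illustrative sketch only: a brute-force O(N^2) cutoff Lennard-Jones kernel.
// Real production MD kernels use neighbor lists or tile decomposition and many
// further optimizations; all names and parameters here are hypothetical.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void lj_forces(const float4* pos, float3* force, int n,
                          float cutoff2, float epsilon, float sigma2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 pi = pos[i];
    float3 f = make_float3(0.f, 0.f, 0.f);

    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float4 pj = pos[j];
        float dx = pi.x - pj.x, dy = pi.y - pj.y, dz = pi.z - pj.z;
        float r2 = dx*dx + dy*dy + dz*dz;
        if (r2 > cutoff2) continue;               // apply the spherical cutoff
        float sr2 = sigma2 / r2;                  // (sigma/r)^2
        float sr6 = sr2 * sr2 * sr2;              // (sigma/r)^6
        // -dU/dr of 4*eps*[(s/r)^12 - (s/r)^6], as a scale on the displacement
        float fscale = 24.f * epsilon * sr6 * (2.f * sr6 - 1.f) / r2;
        f.x += fscale * dx;  f.y += fscale * dy;  f.z += fscale * dz;
    }
    force[i] = f;
}

int main()
{
    const int n = 1024;
    std::vector<float4> h_pos(n);
    for (int i = 0; i < n; ++i)                   // toy coordinates along a line
        h_pos[i] = make_float4(0.5f * i, 0.f, 0.f, 0.f);

    float4* d_pos;  float3* d_force;
    cudaMalloc(&d_pos,   n * sizeof(float4));
    cudaMalloc(&d_force, n * sizeof(float3));
    cudaMemcpy(d_pos, h_pos.data(), n * sizeof(float4), cudaMemcpyHostToDevice);

    lj_forces<<<(n + 127) / 128, 128>>>(d_pos, d_force, n,
                                        12.f * 12.f, 0.2f, 3.5f * 3.5f);
    cudaDeviceSynchronize();

    std::vector<float3> h_force(n);
    cudaMemcpy(h_force.data(), d_force, n * sizeof(float3), cudaMemcpyDeviceToHost);
    printf("f[0].x = %g\n", h_force[0].x);

    cudaFree(d_pos);  cudaFree(d_force);
    return 0;
}
```

A production code replaces the O(N^2) inner loop with spatially decomposed neighbor tiles and overlaps these kernels with host-side communication, which is where the scaling challenges discussed in the talk arise.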
 
Topics:
HPC and Supercomputing, Computational Biology & Chemistry
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9302
 
Abstract:
Learn the opportunities and pitfalls of running billion-atom science at scale on a next-generation pre-exascale GPU-accelerated supercomputer. The highly parallel molecular dynamics code NAMD has long been used on the GPU-accelerated Cray XK7 Blue Waters and ORNL Titan machines to perform petascale biomolecular simulations, including a 64-million-atom model of the HIV virus capsid. In 2007 NAMD was one of the first codes to run on a GPU cluster, and it is now one of the first on the new ORNL Summit supercomputer, which features IBM POWER9 CPUs, NVIDIA Volta GPUs, and the NVLink CPU-GPU interconnect. This talk will cover the latest NAMD performance improvements and scaling results on Summit and other leading supercomputers.
 
Topics:
Computational Biology & Chemistry, HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8747
 
Abstract:

The highly parallel molecular dynamics code NAMD is used on the GPU-accelerated Cray XK7 Blue Waters and ORNL Titan machines to perform petascale biomolecular simulations, including a 64-million-atom model of the HIV virus capsid. In 2007, NAMD was one of the first codes to run on a GPU cluster, and it's now being prepared for the ORNL Summit supercomputer, which will feature IBM POWER9 CPUs, NVIDIA GPUs, and the NVLink CPU-GPU interconnect. Learn the opportunities and pitfalls of taking GPU computing to the petascale, along with recent NAMD performance advances and early results from the Summit Power8+/P100 "Minsky" development cluster.
 
Topics:
HPC and Supercomputing, Computational Biology & Chemistry
Type:
Talk
Event:
GTC Silicon Valley
Year:
2017
Session ID:
S7539
 
Abstract:

Come learn the opportunities and pitfalls of taking GPU computing to the petascale. The highly parallel molecular dynamics code NAMD is used on the GPU-accelerated Cray XK7 Blue Waters and ORNL Titan machines to perform petascale biomolecular simulations, including a 64-million-atom model of the HIV capsid. In 2007, NAMD was one of the first codes to run on a GPU cluster, and it is now being prepared for the 2017 ORNL Summit supercomputer, which will feature IBM POWER9 CPUs, NVIDIA Volta GPUs, and the NVLink CPU-GPU interconnect. We'll discuss the importance of CUDA and Kepler/Maxwell features in combining multicore host processors and GPUs in a legacy message-driven application, and the promise of remote graphics for improving productivity and accessibility in petascale biology.
 
Topics:
HPC and Supercomputing, Computational Biology & Chemistry, Press-Suggested Sessions: HPC & Science
Type:
Talk
Event:
GTC Silicon Valley
Year:
2016
Session ID:
S6361
 
Abstract:
The highly parallel molecular dynamics code NAMD was one of the first codes to run on a GPU cluster when G80 and CUDA were introduced in 2007, and it is now used to perform petascale biomolecular simulations, including a 64-million-atom model of the HIV virus capsid, on the GPU-accelerated Cray XK7 Blue Waters and ORNL Titan machines. Come learn the opportunities and pitfalls of taking GPU computing to the petascale, the importance of CUDA 6.5 and Kepler/Maxwell features in combining multicore host processors and GPUs in a legacy message-driven application, and the promise of remote graphics for improving productivity and accessibility in petascale biology.
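
Combining multicore host processors with GPUs is a recurring theme across these talks. As a generic illustration of that pattern (not NAMD's Charm++-based scheme), here is a minimal CUDA sketch that queues asynchronous transfers and a kernel on a stream while the host keeps computing; all names, sizes, and the placeholder host work are hypothetical.

```cuda
// Illustrative sketch only: overlapping host work with asynchronous GPU work
// via a CUDA stream, as a message-driven runtime might. Names are hypothetical.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// Stand-in for work the multicore host keeps doing while the GPU is busy.
static double do_host_work(int n)
{
    double s = 0.0;
    for (int i = 0; i < n; ++i) s += 1.0 / (i + 1);
    return s;
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> h_x(n, 1.0f), h_y(n, 2.0f);

    float *d_x, *d_y;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMalloc(&d_y, n * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Queue transfers and the kernel asynchronously on one stream
    // (for full copy/compute overlap the host buffers would be pinned).
    cudaMemcpyAsync(d_x, h_x.data(), n * sizeof(float), cudaMemcpyHostToDevice, stream);
    cudaMemcpyAsync(d_y, h_y.data(), n * sizeof(float), cudaMemcpyHostToDevice, stream);
    saxpy<<<(n + 255) / 256, 256, 0, stream>>>(n, 3.0f, d_x, d_y);
    cudaMemcpyAsync(h_y.data(), d_y, n * sizeof(float), cudaMemcpyDeviceToHost, stream);

    // ...while the host continues with its own CPU-side computation.
    double host_result = do_host_work(1 << 22);

    cudaStreamSynchronize(stream);   // block only when the GPU results are needed
    printf("host %g, gpu y[0] = %g\n", host_result, h_y[0]);

    cudaStreamDestroy(stream);
    cudaFree(d_x);
    cudaFree(d_y);
    return 0;
}
```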
 
Topics:
Life & Material Science, GPU Virtualization, HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2015
Session ID:
S5149
 
Abstract:
The highly parallel molecular dynamics code NAMD was chosen in 2006 as a target application for the NSF petascale supercomputer now known as Blue Waters. NAMD was also one of the first codes to run on a GPU cluster when G80 and CUDA were introduced in 2007. When Blue Waters entered production in 2013, the first breakthrough it enabled was the complete atomic structure of the HIV capsid through calculations using NAMD, featured on the cover of Nature. How do the GPU-accelerated Cray XK7 Blue Waters and ORNL Titan machines compare to CPU-based platforms for a 64-million-atom virus simulation? Come learn the opportunities and pitfalls of taking GPU computing to the petascale and the importance of CUDA 5.5 and Kepler features in combining multicore host processors and GPUs in a legacy message-driven application.
 
Topics:
HPC and Supercomputing, Molecular Dynamics
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4394
 
Abstract:

The highly parallel molecular dynamics code NAMD was chosen in 2006 as a target application for the NSF petascale supercomputer now known as Blue Waters. NAMD was also one of the first codes to run on a GPU cluster when G80 and CUDA were introduced in 2007. How do the GPU-accelerated Cray XK6 Blue Waters and ORNL Titan machines compare to CPU-based platforms for a hundred-million-atom Blue Waters acceptance test? Come learn the opportunities and pitfalls of taking GPU computing to the petascale and the importance of CUDA 5 and Kepler features in combining multicore host processors and GPUs in a legacy message-driven application.
 
Topics:
Quantum Chemistry, HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2013
Session ID:
S3272
 
Abstract:

The highly parallel molecular dynamics code NAMD was chosen in 2006 as a target application for the NSF petascale supercomputer now known as Blue Waters. NAMD was also one of the first codes to run on a GPU cluster when G80 and CUDA were introduced in 2007. How do the Cray XK6 and modern GPU clusters compare to 300,000 CPU cores for a hundred-million-atom Blue Waters acceptance test? Come learn the opportunities and pitfalls of taking GPU computing to the petascale and the importance of CUDA 4.0 features in combining multicore host processors and GPUs in a legacy message-driven application.
 
Topics:
Molecular Dynamics
Type:
Talk
Event:
GTC Silicon Valley
Year:
2012
Session ID:
S2127
 
Speakers:
James Phillips
- University of Illinois
Abstract:
A supercomputer is only as fast as its weakest link. The highly parallel molecular dynamics code NAMD was one of the first codes to run on a GPU cluster when G80 and CUDA were introduced in 2007. Now, after three short years, the Fermi architecture opens the possibility of new algorithms, simpler code, and easier optimization. Come learn the opportunities and pitfalls of taking GPU computing to the petascale.
 
Topics:
Molecular Dynamics, HPC and AI, Life & Material Science, Physics Simulation
Type:
Talk
Event:
GTC Silicon Valley
Year:
2010
Session ID:
2054
 
Abstract:
GPU computing is transforming the extreme high-end realms of supercomputing. NVIDIA Tesla GPUs already power several of the world's sixty fastest supercomputers, and this trend is accelerating. This three-hour "super session" will feature some of the world's premier supercomputing experts, who will discuss their experience building and deploying GPU-based supercomputing clusters, and present case studies of designing and porting codes for "big iron" GPU supercomputers.
 
Topics:
HPC and AI
Type:
Talk
Event:
GTC Silicon Valley
Year:
2009
Session ID:
S09049
 
 