GTC ON-DEMAND

 
Abstract:
At the brink of Exascale, it's clear that massive parallelism at the node level is the path forward. Scientists and engineers need highly productive programming environments to speed their time to discovery on today's HPC systems. In addition to the requirements this puts on compilers and software development tools, researchers must shore up their skills in parallel and accelerated computing in order to be ready for the Exascale era. Join Jack Wells, Director of Science at the ORNL Leadership Computing Facility and Vice President of the OpenACC Organization, as he discusses plans to help the HPC developer community take advantage of today's fastest supercomputers and prepare for Exascale through hands-on training and education in state-of-the-art programming techniques in 2020 and beyond. Jack will give an overview of how the OpenACC organization's mission is expanding to meet these needs, building on its philosophy of a user-driven OpenACC specification to create a bridge to heterogeneous programming using parallel features in standard C++ and Fortran.
 
Topics:
HPC and Supercomputing
Type:
Talk
Event:
Supercomputing
Year:
2019
Session ID:
SC1923
 
Abstract:

This presentation will communicate selected, early results from application readiness activities at the Oak Ridge Leadership Computing Facility (OLCF), in preparation for Summit, the Department of Energy Office of Science's new supercomputer operated by Oak Ridge National Laboratory. With over 9,000 POWER9 CPUs and 27,000 V100 GPUs, high-bandwidth data movement, and large node-local memory, Summit's architecture is proving to be effective in advancing performance across diverse applications in traditional modeling and simulation, high-performance data analytics, and artificial intelligence. These advancements in application performance are being achieved with small increases in Summit's electricity consumption as compared with previous supercomputers operated at OLCF.
 
Topics:
HPC and Supercomputing
Type:
Talk
Event:
Supercomputing
Year:
2018
Session ID:
SC1806
 
Abstract:
The Center for Accelerated Application Readiness within the Oak Ridge Leadership Computing Facility is a program to prepare scientific applications for next-generation supercomputer architectures. Currently the program consists of thirteen domain science application development projects focusing on preparing codes for efficient use on Summit. Over the last three years, these teams have developed and executed a development plan based on detailed information about Summit's architecture and system software stack. This presentation will highlight the progress made by the teams that have used Titan, the 27 PF Cray XK7 with NVIDIA K20X GPUs, SummitDev, an early IBM Power8+ access system with NVIDIA P100 GPUs, and, very recently, Summit, OLCF's new IBM Power9 system with NVIDIA V100 GPUs. The program covers a wide range of domain sciences, with applications including ACME, DIRAC, FLASH, GTC, HACC, LSDALTON, NAMD, NUCCOR, NWCHEM, QMCPACK, RAPTOR, SPECFEM, and XGC.
 
Topics:
HPC and AI, HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8908
 
Abstract:
HPC centers have traditionally been configured for simulation workloads, but deep learning is increasingly being applied alongside simulation on scientific datasets. These frameworks do not always fit well with job schedulers, large parallel file systems, and MPI backends. We'll discuss examples of how deep learning workflows are being deployed on next-generation systems at the Oak Ridge Leadership Computing Facility. We'll share benchmarks comparing natively compiled frameworks versus containers on Power systems, like Summit, as well as best practices for deploying deep learning frameworks and models on HPC resources for scientific workflows.
 
Topics:
HPC and AI, HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8551
 
Abstract:
TBA
 
Topics:
Accelerated Data Science
Type:
Talk
Event:
SIGGRAPH
Year:
2017
Session ID:
SC1712
 
Abstract:

High Performance Computing (HPC) has been a cornerstone of scientific discovery, from Dr. Ken Wilson's Nobel Prize in Physics in 1982 to the breaking of the capsid code for the HIV virus this past year. Exascale computing, which follows the progression from terascale and petascale, marks the next milestone in the evolution of HPC and will allow the scientific community to continue its dynamic pace of discovery and innovation.
 
Topics:
HPC and Supercomputing
Type:
Panel
Event:
SIGGRAPH Asia
Year:
2016
Session ID:
SC6113
 
Abstract:

High Performance Computing (HPC) has been a cornerstone of scientific discovery, from Dr. Ken Wilson's Nobel Prize in Physics in 1982 to the breaking of the capsid code for the HIV virus this past year. Exascale computing, which follows the progression from terascale and petascale, marks the next milestone in the evolution of HPC and will allow the scientific community to continue its dynamic pace of discovery and innovation.
 
Topics:
HPC and AI
Type:
Panel
Event:
GTC Washington D.C.
Year:
2016
Session ID:
DCS16173
 
Abstract:
Pending
 
Topics:
OpenPOWER
Type:
Talk
Event:
GTC Silicon Valley
Year:
2015
Session ID:
S5923
 
Abstract:

Modeling and simulation with petascale computing has supercharged the process of innovation, dramatically accelerating time to discovery. This presentation will focus on early science from the Titan supercomputer at the Oak Ridge Leadership Computing Facility, with results discussed from scientific codes such as LAMMPS and WL-LSMS. I will also summarize the lessons we have learned in preparing applications to move from conventional CPU architectures to a hybrid, accelerated architecture, and the implications for the research community as we prepare for exascale computational science.
 
Topics:
HPC and Supercomputing
Type:
Talk
Event:
Supercomputing
Year:
2013
Session ID:
SC3109
 
Abstract:

This presentation will focus on early outcomes from Titan, the world's fastest supercomputer. We will showcase results from the Center for Accelerated Application Readiness, or CAAR, where Titan's manufacturer Cray, NVIDIA, and scientific computing experts at OLCF have collaborated to make several applications ready to use Titan's GPU accelerators. This talk will also explore some best practices the CAAR team learned in the process of porting CPU-only applications to Titan's GPU-accelerated architecture. Preliminary Early Science results from users running on Titan will be discussed, including, for example, applications in combustion for advanced engines, properties of magnetic materials for clean energy applications, and reactor modeling for today's fleet of light-water reactors. Lastly, details about Titan system setup, OLCF resources, and how to apply for time on Titan's 18,688 GPU-accelerated nodes will be shared.
 
Topics:
HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2013
Session ID:
S3470
 
Abstract:

This year, the leadership-class computing facility at Oak Ridge National Laboratory is upgrading its largest supercomputer for open science, "Jaguar", to employ high-performance, power-efficient GPUs. Once the transition is complete, the machine will be known as "Titan". In this extended GTC session, we will feature a range of presenters showcasing research codes that will run computational science on the GPU at scale. Through these selected presentations, we will investigate the progress and anticipated results of GPU acceleration of these significant codes. In this session, we will also explain how research scientists interested in tapping into the immense capabilities of Titan can do so, through programs such as the INCITE program sponsored by the US Department of Energy. The presenters include:
Jacqueline H. Chen (Combustion Research Facility, Sandia National Laboratories), "Direct Numerical Simulation of Turbulence-Chemistry Interactions: Fundamental Insights Towards Predictive Models"
Ray Grout (National Renewable Energy Laboratory), "S3D Direct Numerical Simulation - Preparations for the 10-100PF Era"
William Tang (Director, Fusion Simulation Program, Princeton Plasma Physics Laboratory (PPPL)), "Fusion Energy Sciences & Computing at the Extreme Scale"
John A. Turner (Group Leader of Computational Engineering & Energy Sciences, Oak Ridge National Laboratory), "Transforming Modeling and Simulation for Nuclear Energy Applications"
Loukas Petridis (Staff Scientist, Oak Ridge National Laboratory), "Computer Simulation of Lignocellulosic Biomass"
Jeroen Tromp (Director, Princeton Institute for Computational Science, Princeton), "Toward Global Seismic Imaging based on Spectral-Element and Adjoint Methods"
 
Topics:
HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2012
Session ID:
S2606
 
Abstract:

This session offers a wrap-up of "GPU-accelerated Science on Titan: Tapping into the World's Preeminent GPU Supercomputer to Achieve Better Science" session with Jack Wells.

 
Topics:
HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2012
Session ID:
S2657
 
 