GTC ON-DEMAND

Abstract:
We'll discuss the Max Planck/University of Chicago Radiative MHD code (MURaM), the primary model for simulating the sun's upper convection zone, its surface, and the corona. Accelerating MURaM allows physicists to interpret high-resolution solar observations. We'll describe the programmatic challenges and optimization techniques we employed while using the OpenACC programming model to accelerate MURaM on GPUs and multicore architectures. We will also examine what we learned and how it could be broadly applied to atmospheric applications that employ radiation-transport methods.
 
Topics:
Climate, Weather & Ocean Modeling, Programming Languages
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9288
 
Abstract:
Scientific model performance has begun to stagnate over the last decade due to plateauing core speeds, increasing model complexity, and mushrooming data volumes. Learn how our team at the National Center for Atmospheric Research is pursuing an end-to-end hybrid approach to surmounting these barriers. We'll discuss how combining ML-based emulation with GPU acceleration of numerical models can pave the way toward new scientific modeling capabilities. We'll also detail our approach, which uses machine learning and GPU acceleration to produce what we hope will be a new generation of ultra-fast meteorological and climate models that provide enhanced fidelity with nature and increased value to society.
 
Topics:
Climate, Weather & Ocean Modeling, Accelerated Data Science, HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9731
 
Abstract:
We'll give a high-level overview of the results of these efforts and how we built a cross-organizational partnership to achieve them. Ours is a directive-based approach using OpenMP and OpenACC to achieve portability. We have focused on achieving good performance on the three main architectural branches available to us: traditional multi-core processors (e.g., Intel Xeons), many-core processors such as the Intel Xeon Phi, and, of course, NVIDIA GPUs. Our focus has been on creating tools for accelerating the optimization process, techniques for effective cross-platform optimization, and methodologies for characterizing and understanding performance. The results are encouraging, suggesting a path forward based on standard directives for responding to the pressures of future architectures.
 
Topics:
Climate, Weather & Ocean Modeling, Performance Optimization
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8811
 
Abstract:
The strategy of the National Center for Atmospheric Research (NCAR) for supporting its earth system modelers is the development of community codes: applications that are not only freely downloadable but also contain components developed by a distributed group of contributing authors. Users of community models expect them not only to work, but also to run well on a variety of platforms, particularly the ones chosen by their home institutions. The divergence of computer architectures that came with the introduction of heterogeneous systems with accelerators (such as GPUs) has made achieving performance portability for community models quite challenging. The time required to optimize code also scales poorly with both the size and the complexity of the codes. Thus the objectives of NCAR's exploration of accelerator architectures for high performance computing in recent years have been to 1) speed up the rate of code optimization and porting and 2) understand how to achieve performance portability for codes in the most economical and affordable way.
 
Topics:
Programming Languages
Type:
Talk
Event:
SIGGRAPH
Year:
2017
Session ID:
SC1725
 
 