With over 5000 GPU-accelerated nodes, Piz Daint has been Europe's leading supercomputing system since 2013 and is currently one of the most performant and energy-efficient supercomputers on the planet. It has been designed to optimize the throughput of multiple applications, covering all aspects of the workflow, including data analysis and visualisation. We will discuss ongoing efforts to further integrate these extreme-scale compute and data services with infrastructure services of the cloud. As a Tier-0 system of PRACE, Piz Daint is accessible to scientists in Europe and worldwide, and it provides a baseline for the future development of exascale computing. We will present a strategy for developing exascale computing technologies in domains such as weather and climate and materials science.
Since spring 2016, the new MeteoSwiss model COSMO-NEXT has been fully operational. It delivers kilometer-scale ensemble weather forecasts for the Alpine region and runs on a K80-based supercomputer at the Swiss National Supercomputing Centre in Lugano, Switzerland. Overall, the simulation performance of COSMO-NEXT was enhanced 40-fold compared to the previous system. Thus, in order to remain within the same energy and system footprint as the computer introduced in 2012, a factor-of-10 improvement over the normal performance growth due to Moore's Law had to be achieved with novel software and architectural design. This talk will discuss the system design and software development that made this enhancement possible. Furthermore, it will give an outlook on future requirements of numerical weather prediction.
We will discuss the hardware-software co-design project behind the most cost- and energy-efficient system for numerical weather prediction -- an appliance based on the Cray CS-Storm system architecture that is loaded with NVIDIA K80 GPUs and has been operated on behalf of MeteoSwiss by CSCS since October 2015.
One of today's biggest challenges for scientific computing is the rapidly developing architectural diversity and heterogeneity of computing systems. Application developers no longer face just concurrency as the major obstacle when scaling simulation codes, but have to adapt software to diverging architecture-specific programming models and heterogeneous memory subsystems, requiring significant efforts in refactoring software and developing new algorithms. In this talk, we will show how CSCS has turned these challenges into opportunities, leading to software development collaborations with HPC centers in Europe, the USA, and Japan, and to the deployment of "Piz Daint", a GPU-accelerated supercomputer that is among the top 10 systems worldwide.
Piz Daint: a productive, energy-efficient supercomputer with hybrid CPU-GPU nodes
We will discuss the makings of Piz Daint, a Cray XC30 supercomputer with hybrid CPU-GPU nodes. The presentation will focus on quantitative improvements in time and energy to solution due to the use of GPU technology in full climate, materials science, and chemistry simulations.
Reliable weather prediction for the Alpine region and cloud-resolving climate modeling require simulations that run at 1-2 km resolution. Additionally, since the largest possible ensembles are needed, high-fidelity models have to run on the most economical resource within a given time to solution. In this presentation we will give an update on the refactoring of COSMO, a production code widely used in academia as well as at seven European weather services, and discuss performance experience on hybrid CPU-GPU systems.
Numerical weather prediction is among the oldest fields of computational science, predating the advent of electronic computing. Thanks to the performance of modern computers, the fidelity of weather simulations has reached a point where they are indispensable in weather forecasting, and thus they have become one of the economically most impactful domains of computational science. Typically, the dynamical cores of weather models are grid-based and memory-bandwidth-bound, and thus perform poorly on modern x86-type processors. In this presentation, we will discuss a refactoring project for the COSMO code, which implements a regional climate model used by several weather services and academic institutions worldwide. The dynamical core has been rewritten and is easily portable to multiple architectures, including GPUs. The physics part of the code is being ported to GPUs with OpenACC directives. Preliminary performance results for production-scale problems will be presented. Other contributors to this research include Oliver Fuhrer, Swiss Federal Office of Meteorology and Climatology MeteoSwiss; Tobias Gysi and David Müller, Supercomputing Systems AG; Xavier Lapillonne, Center for Climate Systems Modeling, ETH Zurich; and William Sawyer, Ugo Varetto, and Mauro Bianco, Swiss National Supercomputing Centre.