Deep learning is emerging as a major application for high-performance computing. While training of deep neural networks (DNNs) places some unique demands on computing hardware, it shares with mainstream HPC applications the need for high-performance arithmetic, high memory bandwidth, and high-bandwidth, low-latency networks. Deep learning can also be used to enhance traditional HPC applications, both by interpreting their results and by "learning" constituent equations. This talk will examine the common requirements of DL and HPC and applications of DL to HPC.
Opening Keynote Speech
It's not just America's business community that is embracing Artificial Intelligence, but the Federal Government as well. In early October, the White House issued a report entitled "Preparing for the Future of Artificial Intelligence," after conducting a series of public workshops with prominent universities around the country. Join us for a conversation with leading thinkers on how the government and the private sector are preparing for that future, including discussion of the economic impact of AI, the role of the government in funding critical AI research, and how AI can help address current public policy challenges.
HPC and data analytics share challenges of power, programmability, and scalability to realize their potential. The end of Dennard scaling has made all computing power-limited, so that performance is determined by energy efficiency. With improvements in process technology offering little increase in efficiency, innovations in architecture and circuits are required to maintain the expected performance scaling. The large-scale parallelism and deep storage hierarchy of future machines pose programming challenges. This talk will discuss these challenges in more detail and introduce some of the technologies being developed to address them.