GTC ON-DEMAND

 
Abstract:
There has been a surge of success in using deep learning, which has set a new state of the art across a variety of domains. While these models learn their parameters through data-driven methods, model selection through hyper-parameter choices remains a tedious and highly intuition-driven task. We've developed two approaches to address this problem. Multi-node evolutionary neural networks for deep learning (MENNDL) is an evolutionary approach to performing this search: it is capable of evolving not only the numeric hyper-parameters but also the arrangement of layers within the network. The second approach is implemented using Apache Spark at scale on Titan. The technique we present improves on hyper-parameter sweeps because it does not require assumptions about the independence of parameters, and it is more computationally feasible than grid search.
 
Topics:
HPC and Supercomputing, Artificial Intelligence and Deep Learning
Type:
Talk
Event:
GTC Washington D.C.
Year:
2017
Session ID:
DC7200
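The evolutionary hyper-parameter search described in the abstract above can be illustrated with a minimal sketch. This is not MENNDL's actual implementation; the search space, the population parameters, and the `toy_fitness` stand-in (which in practice would train a network on a GPU node and return validation accuracy) are all hypothetical.

```python
import random

# Hypothetical search space: each individual is a dict of hyper-parameters.
# The names and ranges here are illustrative, not MENNDL's actual space.
SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [32, 64, 128],
    "num_filters": [16, 32, 64],
}

def random_individual(rng):
    return {k: rng.choice(v) for k, v in SPACE.items()}

def mutate(ind, rng):
    """Re-sample one randomly chosen hyper-parameter."""
    child = dict(ind)
    key = rng.choice(sorted(SPACE))
    child[key] = rng.choice(SPACE[key])
    return child

def evolve(fitness, generations=10, pop_size=8, seed=0):
    """Simple elitist evolutionary loop: keep the best half of the
    population, then refill it with mutated copies of the survivors."""
    rng = random.Random(seed)
    pop = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [
            mutate(rng.choice(survivors), rng)
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=fitness)

# Stand-in fitness: here we simply prefer more filters and larger batches,
# so the example runs without any training.
def toy_fitness(ind):
    return ind["num_filters"] + ind["batch_size"]

best = evolve(toy_fitness)
print(best)
```

Because this evaluates each individual independently, the fitness calls are what MENNDL distributes across nodes; the unordered, assumption-free sampling is also what distinguishes it from a grid search or an independent per-parameter sweep.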
 
Abstract:
There has been a surge of success in using deep learning in imaging and speech applications, owing to its relatively automatic feature generation and, particularly for convolutional neural networks, its high-accuracy classification abilities. While these models learn their parameters through data-driven methods, model selection (as architecture construction) through hyper-parameter choices remains a tedious and highly intuition-driven task. To address this, multi-node evolutionary neural networks for deep learning (MENNDL) is proposed as a method for automating network selection on computational clusters through hyper-parameter optimization performed via genetic algorithms. MENNDL is capable of evolving not only the numeric hyper-parameters (for example, the number of hidden nodes or convolutional kernel size) but also the arrangement of layers within the network.
 
Topics:
Artificial Intelligence and Deep Learning, HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2017
Session ID:
S7435
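Evolving the arrangement of layers, as the abstract above describes, means the genome must encode a variable-length network architecture rather than a fixed vector of numbers. The sketch below shows one plausible encoding and a structural mutation operator; the layer vocabulary and operator set are illustrative assumptions, not MENNDL's actual representation.

```python
import random

# Illustrative layer vocabulary; a real system's operator set is richer.
LAYER_CHOICES = [
    ("conv", {"kernel": k, "filters": f})
    for k in (3, 5) for f in (16, 32)
] + [("pool", {"size": 2}), ("dense", {"units": 128})]

def mutate_architecture(genome, rng):
    """Structural mutation: insert, delete, or replace one layer.
    The genome is an ordered list of (layer_type, params) tuples."""
    genome = list(genome)  # copy so the parent is left untouched
    op = rng.choice(["insert", "delete", "replace"])
    if op == "insert" or not genome:
        genome.insert(rng.randrange(len(genome) + 1),
                      rng.choice(LAYER_CHOICES))
    elif op == "delete" and len(genome) > 1:
        genome.pop(rng.randrange(len(genome)))
    else:
        genome[rng.randrange(len(genome))] = rng.choice(LAYER_CHOICES)
    return genome

rng = random.Random(1)
parent = [("conv", {"kernel": 3, "filters": 16}), ("pool", {"size": 2})]
child = mutate_architecture(parent, rng)
print(child)
```

With this encoding, numeric hyper-parameters (kernel size, filter count) and the layer sequence itself evolve under the same genetic operators, which is the key point the abstract makes.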
 
Abstract:

This session will showcase the results of the inaugural GPU Hackathon held at the Oak Ridge Leadership Computing Facility. The event hosted six teams, each paired with mentors for a week, during which applications were ported to GPUs using OpenACC directives. The talk will describe each team's progress from beginning to end, along with details of their implementations. Best practices, lessons learned, and anecdotes from the mentors who participated in this training event will be shared.
 
Topics:
OpenACC, Programming Languages, HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2015
Session ID:
S5515
 
 