GTC On-Demand

Abstract:
Training and tuning models with lengthy training cycles like those in deep learning can be extremely expensive and may sometimes involve techniques that degrade performance. We'll explore recent research on optimization strategies to efficiently tune these types of deep learning models. We will provide benchmarks and comparisons to other popular methods for optimizing the models, and we'll recommend valuable areas for further applied research.
 
Topics:
AI and DL Research
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9313
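The abstract above stays at a high level, but one common family of strategies for tuning models with lengthy training cycles is to spend most of the compute budget on promising configurations and stop the rest early. The Python sketch below is a hypothetical illustration of that idea (successive halving); it is not taken from the talk, and train_for_epochs is an invented stand-in for a real partial-training routine.

import random

def train_for_epochs(config, epochs):
    # Hypothetical stand-in: pretend to train `config` for `epochs` epochs and
    # return a validation score. A fake, deterministic-but-noisy score keeps
    # the sketch runnable without any real training.
    random.seed(hash((config["lr"], config["batch_size"], epochs)) % (2 ** 32))
    return -abs(config["lr"] - 0.01) * 100 + epochs * 0.01 + random.random() * 0.1

# Sample a pool of candidate hyperparameter configurations.
random.seed(0)
configs = [{"lr": 10 ** random.uniform(-4, -1), "batch_size": random.choice([32, 64, 128])}
           for _ in range(16)]

# Successive halving: train every candidate briefly, keep the best half,
# then double the training budget for the survivors.
budget = 1
while len(configs) > 1:
    ranked = sorted(configs, key=lambda c: train_for_epochs(c, budget), reverse=True)
    configs = ranked[: len(ranked) // 2]
    budget *= 2

print("Best surviving configuration:", configs[0])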
 
Abstract:
Bayesian Optimization is an efficient way to optimize machine learning model parameters, especially when evaluating different parameters is time-consuming or expensive. Deep learning pipelines built with frameworks like MXNet are notoriously expensive to train, even on GPUs, and often have many tunable parameters, including hyperparameters, the architecture, and feature transformations, that can have a large impact on the efficacy of the model. In traditional optimization, a single metric like accuracy is optimized over a potentially large set of configurations with the goal of producing a single, best configuration. We'll explore real-world extensions where multiple competing objectives need to be optimized, a portfolio of multiple solutions may be required, constraints on the underlying system make certain configurations not viable, and more. We'll present work from recent ICML and NIPS workshop papers and detailed examples, with code, for each extension.
 
Topics:
Deep Learning and AI Frameworks, NVIDIA Inception Program
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8136
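As a rough illustration of the extensions described in the abstract above, the sketch below handles two of them in the simplest possible way using scikit-optimize's gp_minimize: two competing metrics are collapsed into a weighted scalar, and configurations that violate a constraint (here, a made-up latency budget) are assigned a penalty value. The evaluate_pipeline function and its numbers are invented for the example; the methods presented in the talk may differ.

# A minimal sketch, assuming scikit-optimize is installed (pip install scikit-optimize).
from skopt import gp_minimize
from skopt.space import Real, Integer
from skopt.utils import use_named_args

space = [
    Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate"),
    Integer(16, 256, name="batch_size"),
]

def evaluate_pipeline(learning_rate, batch_size):
    # Hypothetical stand-in for an expensive training run: returns an accuracy
    # to maximize and an inference latency (ms) that must stay under budget.
    accuracy = 0.9 - abs(learning_rate - 0.01) - 0.0001 * batch_size
    latency_ms = 5.0 + 2000.0 * learning_rate
    return accuracy, latency_ms

@use_named_args(space)
def objective(learning_rate, batch_size):
    accuracy, latency_ms = evaluate_pipeline(learning_rate, batch_size)
    if latency_ms > 40.0:
        return 1.0  # constraint violated: return a large (bad) value
    # Scalarize the competing metrics; gp_minimize minimizes, so negate.
    return -(0.8 * accuracy - 0.2 * (latency_ms / 40.0))

result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("Best configuration found:", result.x, "objective value:", result.fun)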
 
Abstract:

We'll introduce Bayesian optimization as an efficient way to optimize machine learning model parameters, especially when evaluating different parameters is time-consuming or expensive. Deep learning pipelines are notoriously expensive to train and often have many tunable parameters, including hyperparameters, the architecture, and feature transformations, that can have a large impact on the efficacy of the model. We'll provide several example applications using multiple open-source deep learning frameworks and open datasets. We'll compare the results of Bayesian optimization to standard techniques like grid search, random search, and expert tuning. Additionally, we'll present a robust benchmark suite for comparing these methods in general.

 
Topics:
Deep Learning and AI, AI Startup
Type:
Talk
Event:
GTC Silicon Valley
Year:
2017
Session ID:
S7749
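To make the comparison described in the abstract above concrete, the sketch below pits Bayesian optimization against random search on a cheap, invented stand-in objective using scikit-optimize (gp_minimize vs. dummy_minimize). A toy surface like this is far cheaper than a real deep learning training run, which is exactly why careful benchmarking of the kind the talk describes is needed for real pipelines.

# A minimal sketch, assuming scikit-optimize is installed; validation_error is a
# toy stand-in for an expensive train-and-validate cycle.
from skopt import gp_minimize, dummy_minimize
from skopt.space import Real

space = [
    Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate"),
    Real(0.0, 0.99, name="momentum"),
]

def validation_error(params):
    learning_rate, momentum = params
    # Hypothetical error surface with its minimum near lr=0.01, momentum=0.9.
    return abs(learning_rate - 0.01) * 50.0 + (momentum - 0.9) ** 2

budget = 30  # give both methods the same number of (nominally expensive) evaluations
bayes = gp_minimize(validation_error, space, n_calls=budget, random_state=0)
rand = dummy_minimize(validation_error, space, n_calls=budget, random_state=0)

print("Bayesian optimization best error:", bayes.fun)
print("Random search best error:        ", rand.fun)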