GTC On-Demand

Abstract:
The academic design of deep neural networks has historically focused on maximizing accuracy at any cost. Many practical applications, however, must satisfy real-world constraints such as model size, computational complexity (FLOPs), and inference latency, as well as the performance characteristics of the target hardware. We'll discuss MorphNet, our approach to automating the design of neural nets under constraint-specific and hardware-specific tradeoffs, which remains lightweight and scales to large datasets. We show how MorphNet can design neural nets that reduce model size, FLOP count, or inference latency while maintaining accuracy across domains such as ImageNet, OCR, and AudioSet. Finally, we show how MorphNet produces different architectures when optimizing for the P100 versus the V100 platform.
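At its core, MorphNet alternates two phases: a shrink phase that applies a resource-weighted sparsifying regularizer to the batch-norm scale factors (gammas) gating each channel, and an expand phase that uniformly widens the surviving layers until the resource budget is reached again. Below is a minimal sketch of the shrink-phase regularizer; the published MorphNet implementation targets TensorFlow, so this PyTorch code, the function name, and the flops_per_channel cost table are illustrative assumptions, not the library's API.

import torch.nn as nn

def flop_weighted_sparsity(model, flops_per_channel):
    """L1 penalty on each BatchNorm scale (gamma), weighted by the
    per-channel FLOP cost of the layer it gates, so that channels in
    expensive layers are driven toward zero first."""
    penalty = 0.0
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d):
            cost = flops_per_channel.get(name, 1.0)  # hypothetical per-layer cost table
            penalty = penalty + cost * module.weight.abs().sum()
    return penalty

# During training: loss = task_loss + lam * flop_weighted_sparsity(model, costs),
# where lam trades accuracy against the resource budget. After convergence,
# channels whose |gamma| falls below a threshold are pruned, and the remaining
# layer widths are scaled up uniformly (the expand phase). Replacing the FLOP
# costs with measured per-channel latency on a P100 or V100 yields the
# hardware-specific variants discussed in the talk.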
 
Topics: AI and DL Research
Type: Talk
Event: GTC Silicon Valley
Year: 2019
Session ID: S9645