GTC On-Demand

Abstract:
Learn how to achieve real-world speedups for neural networks using structural sparsity. Structural sparsity reduces the number of weights and computations in a way that's suitable for hardware acceleration. Over-parameterized neural networks waste memory and energy. Techniques like pruning or factorization can alleviate this during inference, but they often increase training time, and achieving real-world speedups remains difficult. We'll explain how biology-inspired techniques can reduce the number of weights from quadratic to linear in the number of neurons. Compared to fully connected neural networks, these structurally sparse neural networks achieve large speedups during both training and inference, while maintaining or even improving model accuracy. We'll discuss hardware considerations and results for feed-forward and recurrent networks.
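The abstract does not include code, so the following is only a minimal NumPy sketch of the general idea it describes: if each output neuron is wired to a fixed, small number of inputs (a constant fan_in chosen here as an assumption, not the speakers' actual method), the layer stores n * fan_in weights, which is linear in the number of neurons n, instead of the n * n weights of a fully connected layer.

import numpy as np

def make_structured_sparse_layer(n_in, n_out, fan_in, rng):
    """Each output neuron connects to a fixed number of inputs (fan_in),
    so the layer holds n_out * fan_in weights instead of n_out * n_in."""
    # Fixed, randomly chosen connectivity pattern, decided up front and kept during training.
    idx = np.stack([rng.choice(n_in, size=fan_in, replace=False)
                    for _ in range(n_out)])              # shape (n_out, fan_in)
    w = rng.standard_normal((n_out, fan_in)) / np.sqrt(fan_in)
    return idx, w

def forward(x, idx, w):
    """Sparse matrix-vector product: gather the connected inputs, then a per-neuron dot product."""
    return np.einsum("of,of->o", x[idx], w)              # shape (n_out,)

rng = np.random.default_rng(0)
n = 4096
fan_in = 32                                              # constant connections per neuron (assumed value)
idx, w = make_structured_sparse_layer(n, n, fan_in, rng)
x = rng.standard_normal(n)
y = forward(x, idx, w)

print("dense weights: ", n * n)                          # quadratic in n
print("sparse weights:", w.size)                         # linear in n (n * fan_in)

In practice the gather-plus-dot-product structure is what makes this form of sparsity amenable to hardware acceleration: the connectivity pattern is known ahead of time, so memory access and compute can be scheduled statically, unlike unstructured pruning masks.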
 
Topics:
AI and DL Research, Algorithms and Numerical Techniques
Type:
Tutorial
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9389