GTC On-Demand

Abstract:
Learn how to achieve a 100% read/write cache hit rate for most intermediate tensors in a CNN and save over 80% of typical DRAM traffic, even with a limited cache size and large tensors. The high-throughput NVIDIA Tensor Cores and DLA demand high memory bandwidth. Chaining consecutive CNN layers can save DRAM traffic by reusing intermediate tensors in cache, but this strategy is effective only with small tensors and a large cache. In this work, we slice tensors into small tiles (with halos) and chain these tiles, so that the requirement for perfect caching can always be met. Our implementation of this approach proves very effective at saving DRAM traffic, allowing us to address the memory-bandwidth bottleneck of CNNs with a relatively small but high-bandwidth cache.
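The idea in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: two chained 3x3 "valid" convolutions are computed one output tile at a time, where each input tile carries a halo wide enough (2 pixels per chained layer) that the intermediate tile never needs to leave cache. The tile size, tensor sizes, and kernels below are arbitrary assumptions chosen to keep the example small.

```python
import numpy as np

def conv3x3_valid(x, k):
    """Single-channel 3x3 'valid' convolution (output shrinks by 2)."""
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((20, 20))   # input tensor
k1 = rng.standard_normal((3, 3))    # layer-1 kernel
k2 = rng.standard_normal((3, 3))    # layer-2 kernel

# Reference: whole-tensor chaining. The 18x18 intermediate tensor would
# normally be written to and read back from DRAM; the final output is 16x16.
ref = conv3x3_valid(conv3x3_valid(x, k1), k2)

# Tiled version: compute the 16x16 output in 8x8 tiles. Each 8x8 output
# tile of layer 2 needs a 10x10 intermediate tile, which in turn needs a
# 12x12 input tile -- i.e., a halo of 2 input pixels per side in total.
T, HALO = 8, 4  # tile size; total halo = 2 pixels per chained 3x3 layer
out = np.zeros((16, 16))
for r in range(0, 16, T):
    for c in range(0, 16, T):
        in_tile = x[r:r + T + HALO, c:c + T + HALO]  # small enough for cache
        mid_tile = conv3x3_valid(in_tile, k1)        # intermediate stays on-chip
        out[r:r + T, c:c + T] = conv3x3_valid(mid_tile, k2)

assert np.allclose(out, ref)  # tiled chaining matches whole-tensor result
```

The halo pixels are recomputed by neighboring tiles, so the technique trades a small amount of redundant compute for eliminating DRAM round trips of the full intermediate tensor; the working set per tile is fixed by `T + HALO`, which is what makes the approach applicable to arbitrarily large tensors with a fixed cache size.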
 
Topics:
AI Application Deployment and Inference, Performance Optimization
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8299