GTC On-Demand

Abstract:
Data centers today benefit from highly optimized hardware architectures and performance metrics that enable efficient provisioning and tuning of compute resources. But these architectures and metrics, honed over decades, are sternly challenged by the rapid increase of AI applications and neural net workloads, where the impact of memory metrics like bandwidth, capacity, and latency on overall performance is not yet well understood. Get the perspectives of AI HW/SW co-design experts from Google, Microsoft, Facebook and Baidu, and technologists from NVIDIA and Samsung, as they evaluate the AI hardware challenges facing data centers and brainstorm current and necessary advances in architectures with particular emphasis on memory's impact on both training and inference.
 
Topics:
Data Center and Cloud Infrastructure, Performance Optimization, Speech and Language Processing, HPC and AI, HPC and Supercomputing
Type:
Panel
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S91018
 
Abstract:
Modern AI methods represent a great opportunity to rethink every aspect of our companies, from engineering products, to pricing them, to production, and even to basic tasks like human resources and finance. In this talk, we will cover some enterprise use cases, then the software tools and hardware infrastructure required to build successful AI-based applications. We will also discuss some of the ways IBM is enhancing open-source software like TensorFlow and PyTorch, and how automatic AI (AutoAI) will enable faster creation and deployment of AI models.
 
Topics:
AI Application Deployment and Inference, AI and DL Business Track (high level)
Type:
Sponsored Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S91033
 
Speakers:
Sumit Gupta
Abstract:

IBM PowerAI provides the easiest on-ramp for enterprise deep learning. PowerAI helped users break deep learning training records on the AlexNet and VGGNet benchmarks thanks to the world's only CPU-to-GPU NVIDIA NVLink interface. See how new feature development and performance optimizations will advance deep learning over the next twelve months, including NVIDIA NVLink 2.0, leaps in distributed training, and tools that make it easier to create the next deep learning breakthrough. Learn how you can harness a faster, more performant experience for the future of deep learning.

 
Topics:
Accelerated Analytics, Deep Learning and AI
Type:
Talk
Event:
GTC Silicon Valley
Year:
2017
Session ID:
S7862