GTC ON-DEMAND
Abstract:
We'll share our best practices for extending AI compute power to your teams without the need to build and manage a data center. Our innovative approaches will enable you to turn your NVIDIA DGX Station into a powerful departmental solution serving entire teams of developers from the convenience of an office environment. Teams building powerful AI applications might not need to own servers or depend on data center access. We'll show how to use containers; orchestration tools such as Kubernetes and Kubeflow; and scheduling tools like Slurm. Step-by-step demos will illustrate how to easily set up an AI workgroup.
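As a sketch of the scheduling workflow this session covers, the snippet below renders a minimal Slurm batch script that requests GPUs for a job. The partition name `dgx` and the helper itself are illustrative assumptions, not part of the session materials; the `--gres=gpu:N` syntax is standard Slurm.

```python
def make_sbatch(job_name, n_gpus, command, partition="dgx"):
    """Render a minimal Slurm batch script requesting GPUs.

    The partition name "dgx" is an assumption; adjust to match
    your cluster's Slurm configuration.
    """
    lines = [
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --partition={partition}",
        f"#SBATCH --gres=gpu:{n_gpus}",  # request N GPUs via Slurm's generic resources
        command,
    ]
    return "\n".join(lines) + "\n"

# Example: a 2-GPU training job, submitted with `sbatch script.sh`
script = make_sbatch("train-model", 2, "python train.py")
print(script)
```

On a DGX Station shared by a team, a script like this (written to a file and handed to `sbatch`) is how individual developers would queue GPU work rather than competing for the machine interactively.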
 
Topics:
Accelerated Data Science, AI Application, Deployment & Inference
Type:
Talk
Event:
GTC Washington D.C.
Year:
2019
Session ID:
DC91209
 
Abstract:
We'll show how to plan the deployment of AI infrastructure at scale with DGX software using DeepOps, from design and deployment to management and monitoring. DeepOps, an NVIDIA open-source project, is used for the deployment and management of DGX POD clusters. It's also used in the deployment of Kubernetes and Slurm in an on-premises, optionally air-gapped data center. The modularity of the Ansible scripts in DeepOps gives experienced DevOps administrators the flexibility to customize the deployment experience based on their specific IT infrastructure requirements, whether that means implementing a high-performance benchmarking cluster or providing a data science team with Jupyter notebooks that tap into GPUs. We'll also describe how to leverage AI infrastructure at scale to support interactive training, machine learning pipelines, and inference use cases.
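DeepOps drives cluster deployment from an Ansible inventory that groups hosts by role. As a rough sketch of that idea, the helper below renders an INI-style inventory separating management nodes from GPU workers. The group names `mgmt` and `gpu` are illustrative assumptions, not DeepOps' actual group names; consult the DeepOps repository for its real inventory layout.

```python
def make_inventory(mgmt_nodes, gpu_nodes):
    """Render a minimal Ansible INI inventory with two host groups.

    Group names "mgmt" and "gpu" are illustrative only; real
    DeepOps inventories use their own role-specific groups.
    """
    lines = ["[mgmt]"]          # management / control-plane hosts
    lines += mgmt_nodes
    lines += ["", "[gpu]"]      # GPU worker hosts (e.g., DGX nodes)
    lines += gpu_nodes
    return "\n".join(lines) + "\n"

inv = make_inventory(["mgmt01"], ["dgx01", "dgx02"])
print(inv)
```

An Ansible playbook would then target these groups (e.g., installing the scheduler only on `mgmt`, GPU drivers only on `gpu`), which is the kind of modularity the abstract attributes to DeepOps' scripts.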
 
Topics:
AI & Deep Learning Research, AI Application, Deployment & Inference
Type:
Talk
Event:
GTC Washington D.C.
Year:
2019
Session ID:
DC91207
 
Abstract:
Learn from NVIDIA customers who will share their best practices for extending AI compute power to their teams without the need to build and manage a data center. These organizations will describe innovative approaches that let them turn an NVIDIA DGX Station into a powerful solution serving entire teams of developers from the convenience of an office environment. Learn how teams building powerful AI applications may not need to own servers or depend on data center access and find out how to take advantage of containers, orchestration, monitoring, and scheduling tools. The organizations will also show demos of how to set up an AI work group with ease and cover best practices for AI developer productivity.
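The container workflow this session describes typically means launching GPU-enabled containers on the shared machine. As a minimal sketch, the helper below composes a `docker run` command that exposes GPUs to a container, assuming the NVIDIA Container Toolkit is installed on the host; the image name and mount path in the example are placeholders, not specific recommendations.

```python
import shlex

def docker_gpu_cmd(image, workdir, gpus="all"):
    """Compose a `docker run` command that exposes host GPUs.

    Requires the NVIDIA Container Toolkit on the host so that
    Docker understands the --gpus flag.
    """
    args = [
        "docker", "run", "--rm",
        f"--gpus={gpus}",                 # "all", or e.g. "2" for two GPUs
        "-v", f"{workdir}:/workspace",    # mount the project into the container
        image,
    ]
    return " ".join(shlex.quote(a) for a in args)

# Placeholder image/path for illustration
print(docker_gpu_cmd("my-registry/pytorch:latest", "/data/project"))
```

Giving each developer a one-line launcher like this, rather than shell access to the bare machine, is one common way a single DGX Station serves a whole workgroup.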
 
Topics:
Deep Learning & AI Frameworks, Data Center & Cloud Infrastructure
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9483
 