GTC ON-DEMAND

 
Abstract:
Red Hat and NVIDIA collaborated to bring together two of the technology industry's most popular products: Red Hat Enterprise Linux 7 and the NVIDIA DGX system. This talk will cover how the combination of RHEL's rock-solid stability with the incredible DGX hardware can deliver tremendous value to enterprise data scientists. We will also show how to leverage NVIDIA GPU Cloud container images with Kubernetes and RHEL to reap maximum benefits from this incredible hardware.
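The workflow the abstract describes can be sketched as pulling an NVIDIA GPU Cloud (NGC) container image and running it with GPU access on a RHEL host; the image tag below is illustrative (actual tags are listed in the NGC catalog), and the `--gpus` flag assumes Docker 19.03+ with the NVIDIA container runtime installed.

```shell
# Pull a GPU-enabled framework image from the NGC registry.
# The tag "19.03-py3" is an example; check the NGC catalog for current tags.
docker pull nvcr.io/nvidia/tensorflow:19.03-py3

# Run the container with all host GPUs exposed and verify they are visible.
docker run --gpus all --rm -it nvcr.io/nvidia/tensorflow:19.03-py3 \
    nvidia-smi
```

On a Kubernetes cluster, the same images are typically consumed through pod resource requests rather than direct `docker run` invocations.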
Topics:
Data Center & Cloud Infrastructure, Finance - Quantitative Risk & Derivative Calculations
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9292
 
Abstract:

Learn how to effectively schedule and manage your system workload using Slurm, the free, open-source, and highly scalable cluster management and job scheduling system for Linux clusters. Slurm is in use today on roughly half of the largest systems in the world, servicing a broad spectrum of applications. Slurm developers have been working closely with NVIDIA to provide capabilities specifically focused on the needs of GPU management. This includes a multitude of new options to specify GPU requirements for a job in various ways (GPU count per job, node, socket, and/or task), additional resource requirements for allocated GPUs (CPUs and/or memory per GPU), how spawned tasks should be bound to allocated GPUs, and control over GPU frequency and voltage. An introduction to Slurm's design and capabilities will be presented with a focus on managing workloads for GPUs.
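The GPU-specific options listed above can be illustrated with a hypothetical batch script; the option names match those introduced in Slurm 19.05, while the job name, counts, and application name are placeholders.

```shell
#!/bin/bash
# Example sbatch script exercising Slurm's GPU-aware options (19.05+).
#SBATCH --job-name=gpu-demo
#SBATCH --ntasks=4
#SBATCH --gpus-per-task=1        # GPU count specified per spawned task
#SBATCH --cpus-per-gpu=6         # CPUs allocated alongside each GPU
#SBATCH --mem-per-gpu=32G        # memory allocated per GPU
#SBATCH --gpu-bind=closest       # bind each task to its nearest GPU
#SBATCH --gpu-freq=high          # request a high GPU frequency

srun my_gpu_application          # placeholder application name
```

GPU counts can equally be expressed per job (`--gpus`), per node (`--gpus-per-node`), or per socket (`--gpus-per-socket`), depending on how the workload maps onto the hardware.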

 
Topics:
Data Center & Cloud Infrastructure, Accelerated Data Science
Type:
Talk
Event:
GTC Washington D.C.
Year:
2018
Session ID:
DC8247
 
Abstract:

A shared physical graphics processing unit (GPU) exposed to virtual guests as a virtual GPU drastically changes the dynamics of what is possible, from both a technical and a monetary standpoint, in high-tech virtual workstations. You are able to run many GPU-based workloads in multiple VMs on one host utilizing NVIDIA Tesla cards. Attendees will learn about vGPU technology, Virtual Function IO (VFIO), and associated roadmaps.
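On Linux hosts, vGPU instances are exposed through the VFIO mediated-device (mdev) sysfs interface; the sketch below is hypothetical, and the PCI address and mdev type name are examples that vary by host and Tesla model.

```shell
# Parent physical GPU (PCI address is an example; find yours with lspci).
PARENT=/sys/bus/pci/devices/0000:3b:00.0

# List the vGPU profiles (mdev types) the physical GPU advertises.
ls "$PARENT/mdev_supported_types"

# Create a vGPU instance by writing a UUID to the chosen type's create node.
# "nvidia-63" is an illustrative type name, not a guaranteed profile.
UUID=$(uuidgen)
echo "$UUID" > "$PARENT/mdev_supported_types/nvidia-63/create"
```

The resulting mdev device can then be assigned to a guest VM, for example via libvirt's `hostdev` element.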

 
Topics:
GPU Virtualization
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8806
 
Abstract:
Red Hat OpenShift Container Platform, with Kubernetes at its core, can play an important role in building flexible hybrid cloud infrastructure. By abstracting infrastructure away from developers, workloads become portable across any cloud. With NVIDIA Volta GPUs now available in every public cloud [1], as well as from every computer maker, an abstraction layer like OpenShift becomes even more valuable. Through demonstrations, this session will introduce you to declarative models for consuming GPUs via OpenShift, as well as the two-level scheduling decisions that provide fast placement and stability.
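The declarative model mentioned above typically takes the form of a pod spec requesting the `nvidia.com/gpu` extended resource, which the NVIDIA device plugin advertises on GPU nodes; the pod name and image below are illustrative.

```shell
# Hypothetical sketch: declaratively requesting one GPU on OpenShift.
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:10.1-base   # example image tag
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1   # scheduler places the pod on a GPU-capable node
EOF
```

Because the GPU requirement is part of the workload's declared spec rather than node-specific configuration, the same manifest is portable across any cluster that advertises the resource.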
 
Topics:
Data Center & Cloud Infrastructure
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8769
 
Abstract:

We'll discuss the benefits of virtual GPU (vGPU) and Linux for technical workstation use cases, and how and when the collaboration between NVIDIA and Red Hat will bring this to market. A shared physical GPU, exposed to virtual guests as a vGPU, drastically changes the dynamics of what is possible, from both a technical and a monetary standpoint, in high-tech virtual workstations. We'll provide a brief overview of the 3D acceleration and compute provided by NVIDIA and Red Hat technologies and what it means for high-tech virtual workstations. Learn about density, vGPUs, VFIO, endpoints, roadmaps, and where to find more information.

 
Topics:
GPU Virtualization
Type:
Talk
Event:
GTC Silicon Valley
Year:
2017
Session ID:
S7812
 
 