GTC ON-DEMAND

AI & Deep Learning Research
Presentation
Media
Abstract:
NVIDIA and Intelligent Voice have been accelerating the Kaldi speech recognition framework using CUDA, an ongoing effort whose progress has been received with interest at previous GTC presentations. Those presentations demonstrated impressive single-GPU performance, with LibriSpeech processed at 3,500x real time, but multi-GPU scalability was limited. Recent developments include additional performance gains and the acceleration of feature extraction, which greatly improved multi-GPU performance and led to near-linear scalability on DGX-2. Beyond these performance gains, NVIDIA, Intelligent Voice, and HPE have partnered to bring speech recognition to the mass market. For real-world and multi-language applications, Intelligent Voice has more than 20 models that are now fully accelerated in CUDA. To better serve the speech market, the three companies are using their partnership to offer a pre-configured speech processing solution, details of which will be shared in this talk. While Kaldi is just a framework, Intelligent Voice provides an enterprise-strength, fully scalable speech platform to meet the most demanding applications in government, financial services, and other commercial settings.
 
Topics:
AI & Deep Learning Research, AI Application, Deployment & Inference
Type:
Talk
Event:
GTC Washington D.C.
Year:
2019
Session ID:
DC91177
Abstract:
We'll show how to plan the deployment of AI infrastructure at scale with DGX software using DeepOps, from design and deployment to management and monitoring. DeepOps, an NVIDIA open source project, is used for the deployment and management of DGX POD clusters. It's also used to deploy Kubernetes and Slurm in an on-premises, optionally air-gapped data center. The modularity of the Ansible scripts in DeepOps gives experienced DevOps administrators the flexibility to customize the deployment to their specific IT infrastructure requirements, whether that means implementing a high-performance benchmarking cluster or providing a data science team with Jupyter notebooks that tap into GPUs. We'll also describe how to leverage AI infrastructure at scale to support interactive training, machine learning pipelines, and inference use cases.
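For readers new to DeepOps, a deployment is driven by an Ansible inventory that maps hosts into cluster roles. The fragment below is an illustrative sketch only: the host names are invented, and the exact group names vary by DeepOps release, so check the project's own config examples before use.

```ini
; Illustrative DeepOps-style Ansible inventory (hypothetical hosts).
[all]
mgmt01  ansible_host=10.0.0.10
dgx01   ansible_host=10.0.0.21
dgx02   ansible_host=10.0.0.22

[kube-master]
mgmt01

[etcd]
mgmt01

[kube-node]
dgx01
dgx02

[k8s-cluster:children]
kube-master
kube-node
```

With an inventory like this, the DeepOps playbooks target the management node for the Kubernetes control plane and the DGX nodes as GPU workers.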
 
Topics:
AI & Deep Learning Research, AI Application, Deployment & Inference
Type:
Talk
Event:
GTC Washington D.C.
Year:
2019
Session ID:
DC91207
Abstract:
MATLAB's deep learning, visualization, and C++/CUDA code generation technologies make it a uniquely complete solution for your entire AI workflow. In MATLAB, you can easily manage data, perform complex image and signal processing, prototype and train deep networks, and deploy to desktop, embedded, or cloud environments. Using GPU Coder technology, MATLAB generates CUDA kernels that optimize loops and memory access, and C++ code that leverages cuDNN and TensorRT, providing the fastest deep network inference of any framework. With MATLAB's NVIDIA Docker container, available through the NVIDIA GPU Cloud, you can now easily access all this AI power, deploy it in your cloud or DGX environment, and get up and running in seconds. In this presentation, we will demonstrate a complete end-to-end workflow that starts from 'docker run', prototypes and trains a network on a multi-GPU machine in the cloud, and ends with a highly optimized inference engine to deploy to data centers, clouds, and embedded devices.
 
Topics:
AI & Deep Learning Research, Data Center & Cloud Infrastructure
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9469
Accelerated Data Science
Abstract:
We'll bring together AI implementers who have deployed deep learning at scale using NVIDIA DGX systems. There will be a focus on specific technical challenges, solution design considerations, and the best practices that implementers learned from their respective solutions. Attendees will learn how to set up their AI projects for success by matching the right hardware and software platform options to their use cases and operational needs. They'll also master how to design architecture that avoids unnecessary bottlenecks inhibiting scalable training performance, and how to build end-to-end AI development workflows that enable productive experimentation, training at scale, and model refinement.
 
Topics:
Accelerated Data Science, AI Application, Deployment & Inference
Type:
Panel
Event:
GTC Washington D.C.
Year:
2019
Session ID:
DC91204
Abstract:
We'll explain how to create efficient input pipelines tailored to specific training data. As the number of projects and GPUs rises and data size increases, there's no common pipeline that can keep GPUs saturated with data. We'll examine the relationship between training throughput and image representation, and provide guidance on the trade-offs between pre-processing datasets and in-line data processing. Results from a distributed training environment with multiple NVIDIA DGX-1s and a Pure Storage FlashBlade highlight the performance impact at scale. We'll show how to maximize time to accuracy and, ultimately, time to shipping models.
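To make the pre-processing-versus-in-line trade-off concrete, here is a minimal plain-Python sketch. It is illustrative only: `decode` is an invented stand-in for JPEG decoding and augmentation, and a production pipeline would use a framework such as tf.data or DALI.

```python
# Illustrative sketch of the pre-processing vs. in-line trade-off
# (invented helpers; real pipelines use tf.data/DALI and true JPEG decode).

def decode(raw):
    # Stand-in for JPEG decode + augmentation: the per-step CPU work that
    # can leave GPUs idle when done in-line on every epoch.
    return [b ^ 0xFF for b in raw]

def inline_pipeline(raw_records):
    # Smaller storage footprint, but pays the decode cost on every pass.
    for record in raw_records:
        yield decode(record)

def preprocessed_pipeline(decoded_records):
    # Pays the decode cost once up front and stores the (larger) decoded
    # records; each training step is then a plain read.
    yield from decoded_records

raw = [bytes([i % 256] * 8) for i in range(4)]
cache = [decode(r) for r in raw]  # one-off pre-processing pass
assert list(inline_pipeline(raw)) == list(preprocessed_pipeline(cache))
```

Both variants feed identical batches; the difference is where the CPU cost lands — per epoch or once ahead of time against extra storage, which is exactly the trade-off the talk quantifies.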
 
Topics:
Accelerated Data Science
Type:
Sponsored Talk
Event:
GTC Washington D.C.
Year:
2019
Session ID:
DC91512
Abstract:
We'll explain how AI is helping analyze large data by spotting correlations and identifying anomalies. It's also reducing time to solution by orders of magnitude in simulation and modeling by replacing expensive computation with fast inferencing. We'll describe two platforms at the Pittsburgh Supercomputing Center that combine AI and high-performance computing for research and education at no cost. Available now, Bridges-AI is an AI-focused extension of the Bridges supercomputer and features HPE Apollo 6500 servers and an NVIDIA DGX-2, with a total of 88 Volta GPUs. Bridges-2 will build on Bridges and Bridges-AI to serve the AI and AI-enabled simulation of tomorrow. To illustrate the systems' impact, we'll present use cases in fields including genomics, medical imaging, weather forecasting, and agricultural sustainability. We'll explain what's possible, how to access it, and what opportunities exist for collaboration.
 
Topics:
Accelerated Data Science, HPC and AI
Type:
Talk
Event:
GTC Washington D.C.
Year:
2019
Session ID:
DC91471
Abstract:
We'll share our best practices for extending AI compute power to your teams without the need to build and manage a data center. Our innovative approaches will enable you to turn your NVIDIA DGX Station into a powerful departmental solution serving entire teams of developers from the convenience of an office environment. Teams building powerful AI applications might not need to own servers or depend on data center access. We'll show how to use containers; orchestration tools such as Kubernetes and Kubeflow; and scheduling tools like Slurm. Step-by-step demos will illustrate how to easily set up an AI workgroup.
 
Topics:
Accelerated Data Science, AI Application, Deployment & Inference
Type:
Talk
Event:
GTC Washington D.C.
Year:
2019
Session ID:
DC91209
Abstract:
We'll discuss NVIDIA's DGX NVLink architecture, which enables rapid training of Bidirectional Encoder Representations from Transformers (BERT) on unstructured document corpora. At Vyasa, we have used this capability to connect a range of unstructured document sources -- such as PubMed, patents, and clinical trials -- to a front-end application called Vyasa Synapse. This application provides a semi-structured data connection to these large-scale unstructured back-end sources. We'll highlight the use of DGX to build this capability and discuss use cases associated with Vyasa Synapse.
 
Topics:
Accelerated Data Science, Healthcare and Life Sciences
Type:
Talk
Event:
GTC Washington D.C.
Year:
2019
Session ID:
DC91222
Artificial Intelligence and Deep Learning
Abstract:
Artificial intelligence (AI) and deep learning (DL) can help enterprises detect fraud, strengthen customer relationships, optimize supply chains, and deliver innovative products and services, securing their position in an increasingly competitive market. The proven NetApp ONTAP AI architecture, powered by NVIDIA DGX supercomputers and NetApp cloud-connected all-flash storage, simplifies, accelerates, and integrates the data platform to help you fully realize the benefits of AI and deep learning. A Data Fabric spanning edge to core to cloud reliably streamlines data flow and accelerates training and inference.
 
Topics:
Artificial Intelligence and Deep Learning
Type:
Talk
Event:
GTC China
Year:
2019
Session ID:
CN9569
Autonomous Vehicles
Abstract:
This session will discuss the process of training deep neural networks using NVIDIA DGX servers at BMW Group. We will describe our research work in four application areas: fine-grained vehicle representations for autonomous driving, panoptic segmentation, self-supervised learning of the drivable area for autonomous vehicles, and neural network optimization. All of these projects require high-performance compute and demand a scalable, agile, and adaptive learning infrastructure, which leverages Kubernetes on NVIDIA DGX servers.
 
Topics:
Autonomous Vehicles, AI & Deep Learning Research
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9892
 
Abstract:
SAIC Motor has built the iGear AI end-to-end workflow on NVIDIA DGX-1 to train models for autonomous driving and other intelligent scenarios. The iGear AI platform covers video and image scene classification and a variety of annotation tools, provides a web-based debugging and development interface, schedules a range of neural network development frameworks and pre-trained models at scale, and connects to the downstream model validation and simulation testing stages. Based on the iGear AI platform, SAIC is developing scenario applications under its intelligent manufacturing and intelligent driving strategies.
 
Topics:
Autonomous Vehicles
Type:
Talk
Event:
GTC China
Year:
2019
Session ID:
CN9775
Climate, Weather & Ocean Modeling
Abstract:
We'll talk about how we're applying deep learning to weather forecasting at Weathernews, one of the world's largest forecasting companies. We're now able to provide Japanese TV news shows with AI-generated weather information, and we plan to expand elsewhere in Asia. We'll explain how we used TensorFlow on an NVIDIA DGX-2 machine and an innovative learning model to add measurement results and increase the accuracy of our forecaster. We'll also talk about how we're creating new learning models with TensorRT on the DGX-2, and we'll touch on other potential uses for our weather technology in settings such as autonomous cars and solar power plants.
 
Topics:
Climate, Weather & Ocean Modeling, Advanced AI Learning Techniques
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9164
Computational Biology & Chemistry
Abstract:
Generative variational autoencoders (VAEs) for molecular discovery and new materials design have recently gained considerable attention in academia as well as industry (Gomez-Bombarelli, 2017). In this talk, we will present results from a combined Dow Chemical and NVIDIA development effort to implement a VAE for chemical discovery. We'll discuss challenges associated with applying deep learning to chemistry and highlight recently developed methods. Highlights will include a discussion of methods to analyze and sample from an organized latent representation in a conditioned variational autoencoder, tips for training a complex architecture, distributed multi-node training using Horovod, and results showing the generation of molecular structures with associated property prediction.
 
Topics:
Computational Biology & Chemistry, Advanced AI Learning Techniques
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9417
Consumer Engagement & Personalization
Abstract:
We'll explain how Brytlyt became the first vendor to use a GPU-accelerated SQL database to run the TPC-H benchmark. TPC-H, a decision-support benchmark for SQL databases, consists of a suite of business-oriented ad hoc queries using data with broad industry-wide relevance. We'll explain how it models decision-support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions. We'll also discuss our Brytlyt GPU database and analytics platform, which is based on the open source PostgreSQL database.
 
Topics:
Consumer Engagement & Personalization, Accelerated Data Science
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9373
Data Center & Cloud Infrastructure
Abstract:
Red Hat and NVIDIA collaborated to bring together two of the technology industry's most popular products: Red Hat Enterprise Linux 7 and the NVIDIA DGX system. This talk will cover how the combination of RHEL's rock-solid stability with the incredible DGX hardware can deliver tremendous value to enterprise data scientists. We will also show how to leverage NVIDIA GPU Cloud container images with Kubernetes and RHEL to reap maximum benefits from this hardware.
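As one concrete illustration of the Kubernetes-plus-NGC workflow the talk covers, a pod can request a GPU through the NVIDIA device plugin's `nvidia.com/gpu` resource. The manifest below is a minimal sketch: the pod name and image tag are examples, not the talk's actual configuration, and you should pick a current image from NGC.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ngc-tensorflow            # example name
spec:
  restartPolicy: Never
  containers:
  - name: tensorflow
    image: nvcr.io/nvidia/tensorflow:19.02-py3   # example NGC image tag
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1         # requires the NVIDIA device plugin
```

The scheduler places the pod on a node with a free GPU, and the device plugin exposes that GPU to the container.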
 
Topics:
Data Center & Cloud Infrastructure, Finance - Quantitative Risk & Derivative Calculations
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9292
 
Abstract:
Do you have a GPU cluster or air-gapped environment that you are responsible for but don't have an HPC background? NVIDIA DGX POD is a new way of thinking about AI infrastructure, combining DGX servers with networking and storage to accelerate AI workflow deployment and time to insight. We'll discuss lessons learned about building, deploying, and managing AI infrastructure at scale, from design to deployment to management and monitoring. We will show how the DGX POD management software (DeepOps), along with our storage partner reference architectures, can be used for the deployment and management of multi-node GPU clusters for deep learning and HPC environments in an on-premises, optionally air-gapped data center. The modular nature of the software also allows experienced administrators to pick and choose items that may be useful, making the process compatible with their existing software or infrastructure.
 
Topics:
Data Center & Cloud Infrastructure, AI Application, Deployment & Inference
Type:
Tutorial
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9334
 
Abstract:
Learn our solutions for increasing GPU resource utilization on an on-premises DGX-2 node and public clouds. In this talk, we present our operational experiences with a set of multi-tenant deep learning workloads selected through an open competition. To host them, we use and extend the Backend.AI framework as the resource and computation manager. While tailored for both educational and research-oriented workloads, it offers a topology-aware multi-GPU resource scheduler combined with fractional GPU scaling implemented via API-level CUDA virtualization, achieving higher GPU utilization than vanilla setups.
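Backend.AI's actual scheduler is topology-aware and enforces fractional shares through API-level CUDA virtualization; the toy sketch below only illustrates the bookkeeping idea behind fractional allocation, and all names in it are invented.

```python
# Toy fractional-GPU allocator (invented names; the real Backend.AI
# scheduler is topology-aware and virtualizes CUDA calls to enforce shares).

class FractionalGpuPool:
    def __init__(self, num_gpus):
        # Fraction of each GPU still unallocated (1.0 == fully free).
        self.free = {i: 1.0 for i in range(num_gpus)}

    def allocate(self, fraction):
        # Best-fit: prefer the GPU with the least free capacity that still
        # fits, packing small jobs together to raise overall utilization.
        for gpu, free in sorted(self.free.items(), key=lambda kv: kv[1]):
            if free + 1e-9 >= fraction:
                self.free[gpu] = round(free - fraction, 6)
                return gpu
        raise RuntimeError("no GPU has enough free capacity")

pool = FractionalGpuPool(num_gpus=2)
a = pool.allocate(0.5)  # half a GPU
b = pool.allocate(0.5)  # packed onto the same GPU as `a`
c = pool.allocate(1.0)  # the other GPU, exclusively
assert a == b and c != a
```

Packing fractional jobs onto already-partial GPUs is what keeps whole devices free for full-GPU jobs, which is the utilization gain the abstract describes.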
 
Topics:
Data Center & Cloud Infrastructure, GPU Virtualization, Deep Learning & AI Frameworks
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9406
 
Abstract:
Learn how to deploy deep learning applications in multi-tenant environments based on KVM. These virtual machines (VMs) can be created with simple commands and are tuned for optimal deep learning performance, leveraging the underlying NVSwitches, NVLinks, and NVIDIA GPUs. We'll show examples of creating, launching, and managing multiple GPU VMs.
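In a KVM setup of this kind, GPUs typically reach the guest via VFIO PCI passthrough declared in the libvirt domain XML. The snippet below is a generic libvirt example rather than the talk's actual configuration, and the PCI address is a placeholder for one of the host's GPUs.

```xml
<!-- Generic libvirt hostdev entry for GPU passthrough (placeholder address). -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x15' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

One such entry is added per GPU assigned to the VM, which is how a multi-tenant host carves its GPUs among guests.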
 
Topics:
Data Center & Cloud Infrastructure, GPU Virtualization
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9893
Deep Learning & AI Frameworks
Abstract:
Learn more about using the most popular computer vision and natural language processing models, with state-of-the-art accuracy, in MXNet, accelerated for NVIDIA Tensor Cores to reduce training time. The session will explore the MXNet Gluon CV and NLP toolkits, with a demo showing how to achieve out-of-the-box acceleration on Tensor Cores. We'll also review and demo a new tool for MXNet, automatic mixed precision, which shows that with only a few lines of code, any MXNet Gluon model can be accelerated on NVIDIA Tensor Cores. In addition, we'll discuss the MXNet ResNet-50 MLPerf submission on NVIDIA DGX systems and share how MXNet was enhanced with additions such as Horovod and small-batch training to set a new benchmark record. Beyond training, we'll also cover improvements to the existing experimental MXNet-TensorRT integration, going beyond FP32 and ResNets.
 
Topics:
Deep Learning & AI Frameworks, Tools & Libraries
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S91003
 
Abstract:
Learn from NVIDIA customers who will share their best practices for extending AI compute power to their teams without the need to build and manage a data center. These organizations will describe innovative approaches that let them turn an NVIDIA DGX Station into a powerful solution serving entire teams of developers from the convenience of an office environment. Learn how teams building powerful AI applications may not need to own servers or depend on data center access, and find out how to take advantage of containers, orchestration, monitoring, and scheduling tools. The organizations will also demo how to set up an AI workgroup with ease and cover best practices for AI developer productivity.
 
Topics:
Deep Learning & AI Frameworks, Data Center & Cloud Infrastructure
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9483
HPC and AI
Abstract:
We'll introduce the fundamental concepts behind NVIDIA GPUDirect and explain how GPUDirect technologies are leveraged to scale out performance. GPUDirect technologies can provide even faster results for compute-intensive workloads, including those running on a new breed of dense, GPU-accelerated servers such as the Summit and Sierra supercomputers and the NVIDIA DGX line of servers.
 
Topics:
HPC and AI, Tools & Libraries
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9653
 
Abstract:
We'll discuss the challenges uncovered in AI and deep learning workloads, examine the most efficient approaches to handling data, and review use cases in autonomous vehicles, retail, health care, finance, and other markets. Our talk will cover the complete requirements of the data life cycle, including initial acquisition, processing, inference, long-term storage, and driving data back into the field to sustain ever-growing processes of improvement. As the data landscape evolves with emerging requirements, the relationship between compute and data is undergoing a fundamental transition. We will provide examples of data life cycles in production driving diverse architectures, from turnkey reference systems with DGX and DDN A3I to tailor-made solutions.
 
Topics:
HPC and AI, HPC and Supercomputing
Type:
Sponsored Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9983
HPC and Supercomputing
Abstract:
Research is evolving to be even more data-centric. AI is driving this change and is increasingly enabling breakthroughs. For analyzing large data, AI is helping to spot correlations and identify anomalies. For simulation and modeling, AI is reducing time to solution by orders of magnitude by replacing expensive computation with fast inferencing. This talk describes two unique platforms at the Pittsburgh Supercomputing Center that combine AI and HPC, at no cost for research and education. Bridges-AI, available today as an AI-focused extension to the Bridges supercomputer, features an NVIDIA DGX-2 and HPE Apollo 6500 servers, with 88 Volta GPUs in total. Bridges-2 will build on Bridges and Bridges-AI to serve the AI and AI-enabled simulation of tomorrow. To illustrate the systems' impact, we will detail use cases in genomics, medical imaging, weather forecasting, agricultural sustainability, and other fields. Learn what's possible, how to get access, and what opportunities exist for collaboration.
 
Topics:
HPC and Supercomputing
Type:
Talk
Event:
Supercomputing
Year:
2019
Session ID:
SC1914
Performance Optimization
Abstract:
In this talk, you will learn how to create efficient input pipelines that are tailored to your training data. As the number of projects, the number of GPUs, and data sizes increase, there is no one-size-fits-all input pipeline that can keep GPUs fed with data. We will examine the relationship between training throughput and image representation. We'll provide guidance on the trade-offs between pre-processing datasets and in-line data processing, and we'll review results from a distributed training environment with multiple NVIDIA DGX-1s and a Pure Storage FlashBlade to highlight the performance impact at scale. Learn how to maximize time to accuracy and, ultimately, time to shipping models.
 
Topics:
Performance Optimization, Accelerated Data Science
Type:
Sponsored Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S91025
 
Abstract:
NVIDIA's DGX-2 system offers a unique architecture that connects 16 GPUs via the high-speed NVLink interface, with NVSwitch enabling unprecedented bandwidth between processors. This talk will take an in-depth look at the properties of this system, along with programming techniques to take maximum advantage of the system architecture.
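One reason the NVSwitch fabric matters is that bandwidth-optimal collectives such as ring all-reduce can run at full speed between any pair of the 16 GPUs. As a plain-Python toy, where lists stand in for GPU buffers and real code would use NCCL or CUDA, the reduce-scatter/all-gather schedule looks like this:

```python
# Toy ring all-reduce: n "devices" each hold a buffer; afterwards every
# buffer contains the element-wise sum. Plain Python stands in for GPU
# buffers and NVLink transfers.

def ring_allreduce(bufs):
    n = len(bufs)
    size = len(bufs[0])
    assert size % n == 0, "buffer length must divide into n chunks"
    csz = size // n
    # Phase 1: reduce-scatter. After n-1 steps, device i holds the fully
    # reduced chunk (i + 1) % n.
    for step in range(n - 1):
        for i in range(n):
            dst, c = (i + 1) % n, (i - step) % n
            for k in range(c * csz, (c + 1) * csz):
                bufs[dst][k] += bufs[i][k]
    # Phase 2: all-gather. Each device forwards the chunk it just completed.
    for step in range(n - 1):
        for i in range(n):
            dst, c = (i + 1) % n, (i + 1 - step) % n
            for k in range(c * csz, (c + 1) * csz):
                bufs[dst][k] = bufs[i][k]
    return bufs

bufs = [[1, 2], [10, 20]]
ring_allreduce(bufs)
assert bufs == [[11, 22], [11, 22]]  # both devices hold the full sum
```

Each step moves only one chunk per device, so total traffic per link stays constant as the GPU count grows; NVSwitch ensures every hop of the ring gets full NVLink bandwidth.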

 
Topics:
Performance Optimization, Programming Languages, Algorithms & Numerical Techniques
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9241
 
 