GTC ON-DEMAND

Abstract:
As research and clinical healthcare organizations formulate and implement AI strategies, a crucial component is planning for the proper AI compute infrastructure. This talk will address compute infrastructure planning in healthcare settings, including reference architectures and best practices that NVIDIA has developed based on our internal AI supercomputer, as well as examples of successful AI deployments by leading healthcare organizations.
 
Topics:
AI in Healthcare
Type:
Talk
Event:
GTC Washington D.C.
Year:
2018
Session ID:
DC8155
 
Abstract:
We'll introduce deep learning infrastructure for building and maintaining autonomous vehicles. This includes techniques for managing the lifecycle of deep learning models, from definition, training, and deployment to reloading and lifelong learning. A DNN auto-curates and pre-labels data in the loop and, given that data, the system finds the best runtime-optimized deep learning models. With these methodologies, one takes data from the application and feeds it to DL predictors. The infrastructure is divided into multiple tiers and is modular, with each module containerized so it can run on lower-level infrastructure such as a GPU-based cloud.
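As a rough illustration of the lifecycle idea (the talk does not publish its implementation, so the class and stage names below are hypothetical), a model version can be tracked through definition, training, deployment, and reloading for lifelong learning:

```python
# Minimal sketch of a model-lifecycle record; all names are illustrative,
# not NVIDIA's actual API.
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    DEFINED = auto()
    TRAINED = auto()
    DEPLOYED = auto()
    RELOADED = auto()   # picked up again for lifelong learning

@dataclass
class ModelVersion:
    name: str
    version: int
    stage: Stage = Stage.DEFINED
    history: list = field(default_factory=list)

    def advance(self, new_stage: Stage):
        # Record the previous stage, then move to the next one.
        self.history.append(self.stage)
        self.stage = new_stage

# Walk one (hypothetical) model version through the lifecycle.
m = ModelVersion("lane_detector", version=3)
for s in (Stage.TRAINED, Stage.DEPLOYED, Stage.RELOADED):
    m.advance(s)
print(m.name, m.version, [st.name for st in m.history + [m.stage]])
```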
 
Topics:
Autonomous Vehicles
Type:
Talk
Event:
GTC Israel
Year:
2018
Session ID:
SIL8114
 
Abstract:

We'll introduce deep learning infrastructure for building and maintaining autonomous vehicles, including techniques for managing the lifecycle of deep learning models, from definition, training, and deployment to reloading and lifelong learning. A DNN auto-curates and pre-labels data in the loop and, given that data, the system finds the best runtime-optimized deep learning models. Training scales with data size beyond multiple nodes. With these methodologies, one takes only data from the application and feeds it to DL predictors. The infrastructure is divided into multiple tiers and is modular, with each module containerized so it can run on lower-level infrastructure such as a GPU-based cloud.
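To make the multi-node scaling point concrete, here is a minimal data-parallel training sketch using PyTorch DistributedDataParallel; the framework choice, the tiny linear model, and the random data are assumptions for illustration, not details from the talk:

```python
# Minimal multi-node, data-parallel training sketch (placeholder model/data).
# Example launch:  torchrun --nnodes=2 --nproc_per_node=8 train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    dist.init_process_group(backend="nccl")          # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])       # set by torchrun
    torch.cuda.set_device(local_rank)

    # Placeholder dataset; DistributedSampler gives each rank its own shard.
    data = TensorDataset(torch.randn(10_000, 64),
                         torch.randint(0, 10, (10_000,)))
    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=256, sampler=sampler)

    model = DDP(torch.nn.Linear(64, 10).cuda(local_rank),
                device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)                     # reshuffle shards per epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            opt.zero_grad()
            loss_fn(model(x), y).backward()          # DDP all-reduces gradients
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```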
 
Topics:
AI Application, Deployment & Inference, Data Center & Cloud Infrastructure, Autonomous Vehicles, Intelligent Machines, IoT & Robotics
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8531
 
Abstract:

Smart cities are getting a lot of attention, and both academia and industry are focusing on and investing in next-generation technologies to make this a reality. We'll present a case study on how GPU-based IT infrastructure can enable different components and use cases of a smart city platform. Smart-city IT infrastructure will need massive computational power and visualization of extremely rich visual content within a given energy budget; GPU-accelerated data centers can provide a unified IT infrastructure and software platform to achieve that. This case study takes Singapore's Smart Nation initiative as a reference and will also present different initiatives and projects using the GPU platform.
 
Topics:
Intelligent Machines, IoT & Robotics, Intelligent Video Analytics, Autonomous Vehicles
Type:
Talk
Event:
GTC Silicon Valley
Year:
2016
Session ID:
S6148
 
Abstract:
Learn how differences in hardware architecture for training infrastructure affect the CNN training process, the design principles for building an efficient CNN training cluster, the key metrics you should be watching, and how the reference architecture has evolved from traditional IT server architecture to HPC architecture.
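One concrete example of a metric worth watching is end-to-end training throughput (samples per second). The snippet below is a generic sketch; the model, batch size, and synthetic data are placeholders, not a measurement harness from the talk:

```python
# Measure training throughput (images/sec) around the training step.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(32, 10),
).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

# Synthetic batch stands in for a real data pipeline.
batch = torch.randn(64, 3, 224, 224, device=device)
labels = torch.randint(0, 10, (64,), device=device)

steps, seen = 20, 0
start = time.perf_counter()
for _ in range(steps):
    opt.zero_grad()
    loss_fn(model(batch), labels).backward()
    opt.step()
    seen += batch.size(0)
if device == "cuda":
    torch.cuda.synchronize()          # ensure queued GPU work has finished
elapsed = time.perf_counter() - start
print(f"throughput: {seen / elapsed:.1f} images/sec")
```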
 
Topics:
Data Center & Cloud Infrastructure, Artificial Intelligence and Deep Learning, HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2016
Session ID:
S6368
 
Abstract:
Risk management is a classical problem in finance. Value at Risk (VaR) and Incremental Risk Charge (IRC) are important measures used to quantify market and credit risk. The large number of instruments or assets and their frequent revaluations make these a significant computational task. Because these computations are repeated many times in tasks like back-testing, deal synthesis, and batch jobs that run overnight or for days, a significant reduction in turnaround time can be achieved. Current state-of-the-art platforms like the K40 GPU not only enable fast computation but also reduce computational cost in terms of energy requirements. In this talk we present performance tuning of VaR estimation, option pricing, and IRC calculation on the latest NVIDIA platforms.
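For readers unfamiliar with the computation, a simple Monte Carlo VaR estimate looks like the sketch below. The portfolio, parameters, and function are hypothetical, and a GPU implementation like the one discussed would move the simulation step onto the device (e.g., CUDA kernels or a GPU array library) rather than using NumPy on the CPU:

```python
# Minimal Monte Carlo Value-at-Risk sketch (illustrative only).
import numpy as np

def monte_carlo_var(weights, mean, cov, horizon_days=1, n_paths=500_000,
                    confidence=0.99, seed=0):
    """Estimate portfolio VaR by simulating correlated daily returns."""
    rng = np.random.default_rng(seed)
    # Simulate correlated asset returns over the horizon.
    returns = rng.multivariate_normal(mean * horizon_days,
                                      cov * horizon_days,
                                      size=n_paths)
    # Portfolio P&L per path (relative to a current value of 1.0).
    pnl = returns @ weights
    # VaR is the loss at the chosen confidence level (lower tail of P&L).
    return -np.percentile(pnl, 100 * (1 - confidence))

# Hypothetical three-asset portfolio.
w = np.array([0.5, 0.3, 0.2])
mu = np.array([0.0002, 0.0001, 0.00015])      # daily expected returns
vol = np.array([0.01, 0.02, 0.015])           # daily volatilities
corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])
cov = np.outer(vol, vol) * corr

print(f"1-day 99% VaR: {monte_carlo_var(w, mu, cov):.4%} of portfolio value")
```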
 
Topics:
Finance, Performance Optimization
Type:
Talk
Event:
GTC Silicon Valley
Year:
2015
Session ID:
S5522