GTC ON-DEMAND
Abstract:

This talk is a lightning introduction to object detection and image segmentation for data scientists, engineers, and technical professionals. This task of computer-based image understanding underpins many major fields such as autonomous driving, smart cities, healthcare, national defense, and robotics. Ultimately, the goals of this talk are to provide a broad context and clear roadmap from traditional computer vision techniques to the most recent state-of-the-art methods based on deep learning and convolutional neural networks (CNNs). Additional considerations for network deployment at the edge or on the road in an autonomous vehicle using NVIDIA's latest TensorRT release will be discussed.

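Detectors of the kind surveyed here, whether classical sliding-window pipelines or CNN-based ones, typically score many overlapping candidate boxes and then prune them with non-maximum suppression. A minimal sketch of that pruning step (the box format, scores, and 0.5 overlap threshold are illustrative assumptions, not anything specific to the talk):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: visit boxes in descending score
    order, keeping each one unless it overlaps an already-kept box by
    more than `thresh` IoU."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```

Every detection family covered in such talks, from HOG-plus-SVM to modern CNN detectors, relies on some variant of this step to turn raw box scores into final detections.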
 
Topics:
Autonomous Vehicles, Intelligent Video Analytics, Computer Vision
Type:
Talk
Event:
GTC Washington D.C.
Year:
2017
Session ID:
DC7217
 
Abstract:

Building upon the foundational understanding of how deep learning is applied to image classification, this lab explores different approaches to the more challenging problem of detecting whether an object of interest is present within an image and recognizing its precise location within the image. Numerous approaches have been proposed for training deep neural networks for this task, each having pros and cons in relation to model training time, model accuracy, and speed of detection during deployment. On completion of this lab, you will understand each approach and their relative merits. You'll receive hands-on training applying cutting-edge object detection networks trained using NVIDIA DIGITS on a challenging real-world dataset.

 
Topics:
Science and Research
Type:
Panel
Event:
GTC Washington D.C.
Year:
2016
Session ID:
DCL16110
 
Abstract:

Deep learning software frameworks leverage GPU acceleration to train deep neural networks (DNNs). But what do you do with a DNN once you have trained it? The process of applying a trained DNN to new test data is often referred to as 'inference' or 'deployment'. In this lab, you will test three different approaches to deploying a trained DNN for inference. The first approach is to directly use inference functionality within a deep learning framework, in this case, DIGITS and Caffe. The second approach is to integrate inference within a custom application by using a deep learning framework API, again using Caffe but this time through its Python API. The final approach is to use NVIDIA TensorRT™, which will automatically create an optimized inference runtime from a trained Caffe model and network description file.

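All three deployment paths end at the same operation: a forward pass through frozen, already-trained weights. A framework-agnostic NumPy sketch of that forward pass (the two-layer architecture and random weights are invented for illustration; a real deployment would load weights from a trained Caffe model):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def infer(x, params):
    """Forward pass of a tiny two-layer network. The weights are frozen
    and no gradients are computed -- this is all 'inference' means."""
    w1, b1, w2, b2 = params
    h = np.maximum(0.0, x @ w1 + b1)   # ReLU hidden layer
    return softmax(h @ w2 + b2)        # per-class probabilities

rng = np.random.default_rng(0)
params = (rng.normal(size=(4, 8)), np.zeros(8),
          rng.normal(size=(8, 3)), np.zeros(3))
probs = infer(rng.normal(size=(2, 4)), params)   # 2 inputs, 3 classes
```

The three approaches in the lab differ only in who executes this pass and how aggressively it is optimized: the framework itself, a custom application calling the framework's API, or a dedicated runtime built from the trained model.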
 
Topics:
Science and Research
Type:
Panel
Event:
GTC Washington D.C.
Year:
2016
Session ID:
DCL16121
 
Abstract:

Deep learning is giving machines near human levels of visual recognition capabilities and disrupting many applications by replacing hand-coded software with predictive models learned directly from data. This lab introduces the machine learning workflow and provides hands-on experience with using deep neural networks (DNNs) to solve a real-world image classification problem. You will walk through the process of data preparation, model definition, model training and troubleshooting, validation testing, and strategies for improving model performance. You'll also see the benefits of GPU acceleration in the model training process. On completion of this lab, you will have the knowledge to use NVIDIA DIGITS to train a DNN on your own image classification dataset.

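The workflow the lab describes (data preparation, model definition, training, validation) can be sketched end to end with a deliberately tiny model. This stand-in uses plain NumPy logistic regression on synthetic data rather than DIGITS and a DNN, so every step stays visible; the data, learning rate, and iteration count are all arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Data preparation: synthetic 64-feature "images", two classes,
#    split into training and held-out validation sets.
X = rng.normal(size=(200, 64))
y = (X[:, :32].sum(axis=1) > 0).astype(int)   # a learnable linear rule
X_train, y_train = X[:160], y[:160]
X_val, y_val = X[160:], y[160:]

# 2. Model definition: logistic regression (a one-layer "network").
w, b = np.zeros(64), 0.0

# 3. Training: plain gradient descent on the cross-entropy loss.
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))        # predictions
    w -= 0.5 * (X_train.T @ (p - y_train)) / len(y_train)
    b -= 0.5 * (p - y_train).mean()

# 4. Validation testing: accuracy on data the model never saw.
val_acc = (((X_val @ w + b) > 0).astype(int) == y_val).mean()
```

In the lab itself, steps 2 and 3 are replaced by a DNN trained in DIGITS on a GPU; the shape of the workflow is the same.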
 
Topics:
Science and Research
Type:
Instructor-Led Lab
Event:
GTC Washington D.C.
Year:
2016
Session ID:
DCL16107
 
Abstract:

In this lab, you will test three different approaches to deploying a trained DNN for inference. The first approach is to directly use inference functionality within a deep learning framework, in this case, DIGITS and Caffe. The second approach is to integrate inference within a custom application by using a deep learning framework API, again using Caffe but this time through its Python API. The final approach is to use the NVIDIA GPU Inference Engine (GIE), which will automatically create an optimized inference runtime from a trained Caffe model and network description file. You will learn about the role of batch size in inference performance as well as various optimizations that can be made in the inference process. You'll also explore inference for a variety of different DNN architectures.

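The role of batch size mentioned above comes down to amortizing per-call overhead: processing N inputs in one call does exactly the same math as N separate calls but uses the hardware far better. A NumPy sketch with a single matrix multiply standing in for a DNN layer (the sizes are arbitrary):

```python
import numpy as np
import time

rng = np.random.default_rng(1)
w = rng.normal(size=(256, 128))       # stand-in for one trained DNN layer
images = rng.normal(size=(64, 256))   # 64 preprocessed input vectors

# Batch size 1: one call per image, so per-call overhead dominates.
t0 = time.perf_counter()
singly = np.stack([img @ w for img in images])
t_single = time.perf_counter() - t0

# Batch size 64: same arithmetic in one call, far better utilization.
t0 = time.perf_counter()
batched = images @ w
t_batch = time.perf_counter() - t0

# Batching changes throughput and latency, never the results.
assert np.allclose(singly, batched)
```

The same trade-off the lab explores applies here in miniature: larger batches raise throughput (images per second) at the cost of latency for the first result, which is why deployment runtimes let you tune batch size per application.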
 
Topics:
Science and Research
Type:
Instructor-Led Lab
Event:
GTC Washington D.C.
Year:
2016
Session ID:
DCL16118
 
Abstract:

Deep learning is giving machines near human levels of visual recognition capabilities and disrupting many applications by replacing hand-coded software with predictive models learned directly from data. This lab introduces the machine learning workflow and provides hands-on experience with using deep neural networks (DNNs) to solve a real-world image classification problem. You will walk through the process of data preparation, model definition, model training and troubleshooting, validation testing, and strategies for improving model performance. You'll also see the benefits of GPU acceleration in the model training process. On completion of this lab, you will have the knowledge to use NVIDIA DIGITS to train a DNN on your own image classification dataset.

 
Topics:
Science and Research
Type:
Instructor-Led Lab
Event:
GTC Washington D.C.
Year:
2016
Session ID:
DCL16104
 
Abstract:

Deep learning technology development continues to accelerate in many areas relevant to defense and national security missions. In this talk, we'll provide a brief introduction to the technology of deep learning, then explore what's at the forefront of research and development in defense. We'll also cover advances in deep learning theory, software, and GPU-acceleration hardware. The three key takeaways are: 1. The rate of deep learning technology development continues to accelerate and demonstrates applicability to a growing set of defense and national security missions. 2. The deep learning software ecosystem, including the NVIDIA SDK, makes deep learning easily accessible in many application areas today. 3. NVIDIA® Tesla® GPUs are the world's fastest deep learning accelerators.

 
Topics:
Federal
Type:
Talk
Event:
GTC Washington D.C.
Year:
2016
Session ID:
DCS16185
 
Abstract:
The computational intensity required for modern-day space missions is quickly outgrowing existing CPU capabilities. The Magnetospheric Multiscale (MMS) mission is the first NASA mission to fly four satellites in formation and thus has uniquely challenging design and operational requirements, namely, mitigation of collision scenarios involving space debris and/or the formation with itself. By design, no more than 1 in 1000 unsafe close approaches may go undetected, while operationally no more than 1 in 20 alarms raised may be false, so as to minimize science interruptions. The confidence intervals required to satisfy such requirements pose daunting computational demands which, operationally, cannot be met using traditional CPU solutions. Here it is demonstrated how GPU-accelerated solutions are being deployed, for the first time, at the NASA Goddard Space Flight Center (GSFC) to meet operational MMS mission requirements. Additional applications to Space Situational Awareness and mission design are discussed.
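Requirements phrased as miss and false-alarm rates are typically verified by Monte Carlo sampling of the relative state uncertainty, an embarrassingly parallel workload that maps naturally onto GPUs. A toy NumPy sketch of such an estimator (the geometry, covariance, and hard-body radius below are invented for illustration and are not MMS values):

```python
import numpy as np

def collision_probability(miss_vector, cov, hard_radius, n=200_000, seed=0):
    """Estimate the probability that two objects pass within `hard_radius`
    of each other, given a nominal relative miss vector (meters) and a
    Gaussian position covariance. Each sample is independent, which is
    exactly the property that GPU-accelerated deployments exploit."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(miss_vector, cov, size=n)
    return float((np.linalg.norm(samples, axis=1) < hard_radius).mean())

# Toy conjunction: 500 m nominal miss, 200 m 1-sigma uncertainty per axis,
# 50 m combined hard-body radius.
p = collision_probability([500.0, 0.0, 0.0], np.diag([200.0**2] * 3), 50.0)
```

Tight confidence intervals on rare events like these require very large sample counts, which is the source of the computational demand the abstract describes.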
 
Topics:
Defense, Numerical Algorithms & Libraries, Scientific Visualization, HPC and Supercomputing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4571
 
 