This talk is a lightning introduction to object detection and image segmentation for data scientists, engineers, and technical professionals. This task of computer-based image understanding underpins many major fields, such as autonomous driving, smart cities, healthcare, national defense, and robotics. Ultimately, the goals of this talk are to provide a broad context and a clear roadmap from traditional computer vision techniques to the most recent state-of-the-art methods based on deep learning and convolutional neural networks (CNNs). Additional considerations for network deployment at the edge, or on the road in an autonomous vehicle, using NVIDIA's latest TensorRT release will also be discussed.
Building upon the foundational understanding of how deep learning is applied to image classification, this lab explores different approaches to the more challenging problem of detecting whether an object of interest is present within an image and recognizing its precise location. Numerous approaches have been proposed for training deep neural networks for this task, each with pros and cons in terms of model training time, model accuracy, and detection speed during deployment. On completion of this lab, you will understand each approach and its relative merits. You'll receive hands-on training applying cutting-edge object detection networks trained using NVIDIA DIGITS on a challenging real-world dataset.
Deep learning software frameworks leverage GPU acceleration to train deep neural networks (DNNs). But what do you do with a DNN once you have trained it? The process of applying a trained DNN to new test data is often referred to as 'inference' or 'deployment'. In this lab, you will test three different approaches to deploying a trained DNN for inference. The first approach is to directly use inference functionality within a deep learning framework, in this case, DIGITS and Caffe. The second approach is to integrate inference within a custom application by using a deep learning framework API, again using Caffe, but this time through its Python API. The final approach is to use NVIDIA TensorRT™, which will automatically create an optimized inference runtime from a trained Caffe model and network description file.
Deep learning is giving machines near human levels of visual recognition capability and disrupting many applications by replacing hand-coded software with predictive models learned directly from data. This lab introduces the machine learning workflow and provides hands-on experience with using deep neural networks (DNNs) to solve a real-world image classification problem. You will walk through the process of data preparation, model definition, model training and troubleshooting, validation testing, and strategies for improving model performance. You'll also see the benefits of GPU acceleration in the model training process. On completion of this lab, you will have the knowledge to use NVIDIA DIGITS to train a DNN on your own image classification dataset.
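The workflow described above can be sketched in miniature. The following is an illustrative toy, not the DIGITS lab itself: it substitutes synthetic "images" and a simple logistic-regression classifier for a real dataset and a CNN, purely to make the data preparation → training → validation sequence concrete. All names and values here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Data preparation: 200 tiny 8x8 synthetic "images" in two classes
#    whose mean brightness differs, flattened into feature vectors.
n, d = 200, 64
labels = rng.integers(0, 2, size=n)
images = rng.normal(loc=labels[:, None].astype(float), scale=1.0, size=(n, d))

# 2. Train/validation split (a real workflow would also hold out a test set).
train_x, val_x = images[:150], images[150:]
train_y, val_y = labels[:150], labels[150:]

# 3. Model definition and training: logistic regression by gradient descent,
#    standing in for the CNN that DIGITS would train.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(train_x @ w + b)))   # predicted probabilities
    w -= 0.1 * (train_x.T @ (p - train_y)) / len(train_y)
    b -= 0.1 * np.mean(p - train_y)

# 4. Validation testing: accuracy on held-out images guides the
#    troubleshooting and improvement strategies the lab covers.
accuracy = np.mean(((val_x @ w + b) > 0) == val_y)
print(f"validation accuracy: {accuracy:.2f}")
```

The same loop structure, with the linear model swapped for a CNN and the synthetic arrays for a labeled image dataset, is what a framework like Caffe automates at scale.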
In this lab, you will test three different approaches to deploying a trained DNN for inference. The first approach is to directly use inference functionality within a deep learning framework, in this case DIGITS and Caffe. The second approach is to integrate inference within a custom application by using a deep learning framework API, again using Caffe, but this time through its Python API. The final approach is to use the NVIDIA GPU Inference Engine (GIE), which will automatically create an optimized inference runtime from a trained Caffe model and network description file. You will learn about the role of batch size in inference performance, as well as various optimizations that can be made in the inference process. You'll also explore inference for a variety of different DNN architectures.
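The role of batch size mentioned above can be illustrated with a toy cost model. This is a hedged sketch, not a measurement of any real framework: the constants below are arbitrary stand-ins for the fixed per-call overhead (framework dispatch, kernel launch) that batching amortizes across images.

```python
# Toy model: total inference cost = per-call overhead + per-image compute.
# PER_CALL_OVERHEAD and PER_IMAGE_COMPUTE are assumed illustrative values,
# not benchmarks of Caffe, DIGITS, or GIE/TensorRT.
PER_CALL_OVERHEAD = 1.0   # fixed cost paid once per inference call
PER_IMAGE_COMPUTE = 0.1   # cost per image inside a batch

def batched_cost(n_images, batch_size):
    """Modeled total cost of pushing n_images through the network."""
    n_calls = -(-n_images // batch_size)   # ceiling division
    return n_calls * PER_CALL_OVERHEAD + n_images * PER_IMAGE_COMPUTE

n_images = 256
for bs in (1, 8, 64):
    cost = batched_cost(n_images, bs)
    print(f"batch={bs:3d}  modeled cost={cost:6.1f}  "
          f"throughput={n_images / cost:5.1f} images/unit")
```

Larger batches amortize the fixed per-call cost and raise throughput, though in real deployments they also increase per-request latency and memory use, which is part of the trade-off this lab explores.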
Deep learning technology development continues to accelerate in many areas relevant to defense and national security missions. In this talk, we'll provide a brief introduction to the technology of deep learning, then explore what's at the forefront of research and development in defense. We'll also cover advances in deep learning theory, software, and GPU acceleration hardware. The three key takeaways are: 1. The rate of deep learning technology development continues to accelerate and demonstrate applicability to a growing set of defense and national security missions. 2. The deep learning software ecosystem, including the NVIDIA SDK, makes deep learning easily accessible in many application areas today. 3. NVIDIA® Tesla® GPUs are the world's fastest deep learning accelerators.