GTC On-Demand

Algorithms
TomTom: Enabling Localization and Deep Learning-Powered Mapping
Krzysztof Kudrynski (TomTom)

Deep learning-enabled mapping is vital to the development of self-driving cars. TomTom is working with the NVIDIA GPU platform on AI-based mapping, leveraging the DRIVE PX 2 platform to enable autonomous driving. Maps and localization software have become an indispensable part of autonomous driving setups, telling the sensors where to look and lowering the overall computational load. We'll demonstrate our localization technology, called RoadDNA, on real roads in Europe and the United States. This demo will include our real-time, GPU-based traffic sign detection and classification, with networks trained on our vast traffic sign databases (100 million+ traffic signs).

Keywords: Algorithms, Automotive, GTC Europe 2016 - ID SEU6151
Big Data Analytics
Visual Sensemaking with GPU-Driven Machine Learning
Stef van den Elzen (SynerScope BV)

We show how our interactive, integrated analytics solution allows a new class of users to perform machine-assisted visual sensemaking. Until now, machine learning techniques such as predictive analytics and deep learning have mostly been used as part of a complex tool-chain that serves as an endpoint in the decision-making process. We combine the strengths of human decision making and GPU-driven machine learning in a multi-coordinated visual analytics solution. This enables the discovery of actionable insights by bridging the gap between data scientists and business users.

Keywords: Big Data Analytics, Deep Learning and AI, Self-Driving Cars, Automotive, GTC 2016 - ID S6356
Computer Vision and Machine Vision
VisionWorks: A CUDA Accelerated Computer Vision Library
Elif Albuz (NVIDIA)

In this talk, we will introduce the NVIDIA VisionWorks toolkit, a software development package for computer vision (CV) and image processing. VisionWorks™ implements and extends the Khronos OpenVX standard, and it is optimized for CUDA-capable GPUs and SoCs, enabling computer vision applications on a scalable and flexible platform. VisionWorks implements a thread-safe API and framework for seamlessly adding user-defined primitives. The talk will give an overview of the VisionWorks toolkit, the OpenVX API and framework, the VisionWorks-Plus modules (including the Structure From Motion and Object Tracker modules), and computer vision pipeline samples showing integration of the library API into a computer vision pipeline on Tegra platforms.

Keywords: Computer Vision and Machine Vision, Embedded, Self-Driving Cars, Automotive, GTC 2016 - ID S6783
 
Real-time 3D Reconstruction for Autonomous Driving through Semi-Global Matching
Antonio Espinosa (Universitat Autonoma de Barcelona)

Robust and dense computation of depth information from stereo-camera systems is a computationally demanding requirement for real-time autonomous driving. Semi-Global Matching (SGM) [1] approximates the results of computationally heavy global algorithms at a lower computational complexity, making it a good candidate for a real-time implementation. SGM minimizes energy along several 1D paths across the image. The aim of this work is to provide a real-time system producing reliable results on energy-efficient hardware. Our design runs on an NVIDIA Titan X GPU at 104.62 fps and on an NVIDIA DRIVE PX at 6.7 fps, which is promising for real-time platforms.
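
For readers unfamiliar with SGM, the core idea -- sum a pixel-wise matching cost along several 1D paths, penalizing small and large disparity changes with P1 and P2 -- fits in a few lines. A minimal NumPy sketch of one left-to-right aggregation pass (toy cost volume; the penalties, shapes, and wrap-around at the disparity borders are illustrative simplifications, not the authors' implementation):

```python
import numpy as np

def aggregate_left_to_right(cost, P1=10.0, P2=120.0):
    """Aggregate a matching-cost volume cost[y, x, d] along one 1D path
    (left to right), in the spirit of semi-global matching.  A full SGM
    implementation sums aggregations over several path directions."""
    H, W, D = cost.shape
    L = np.zeros_like(cost)
    L[:, 0, :] = cost[:, 0, :]
    for x in range(1, W):
        prev = L[:, x - 1, :]                          # (H, D)
        same = prev                                    # same disparity
        minus = np.roll(prev, 1, axis=1) + P1          # disparity - 1 (borders wrap; real code clamps)
        plus = np.roll(prev, -1, axis=1) + P1          # disparity + 1
        jump = prev.min(axis=1, keepdims=True) + P2    # any larger disparity change
        best = np.minimum(np.minimum(same, minus), np.minimum(plus, jump))
        # subtract the running minimum to keep values bounded (standard SGM trick)
        L[:, x, :] = cost[:, x, :] + best - prev.min(axis=1, keepdims=True)
    return L

# toy cost volume and winner-take-all disparity after aggregation
cost = np.random.rand(4, 8, 16).astype(np.float32)
disparity = aggregate_left_to_right(cost).argmin(axis=2)
print(disparity)
```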

Keywords: Computer Vision and Machine Vision, Self-Driving Cars, Automotive, GTC 2016 - ID P6289
Deep Learning and AI
NVIDIA® GIE: High-Performance GPU Inference Engine
Michael Andersch (NVIDIA)

We'll discuss, analyze, and improve the performance of deep neural network inference using GPUs. Unlike neural net training, which is an offline process where large batches of images are fed to the GPU to maximize computational throughput, inference focuses on small-batch, low-latency forward propagation through the network. We'll discuss how the different performance requirements for inference impact the way we implement it on GPUs and what performance optimizations are possible, and we'll show how GPUs, all the way from the small Tegra X1 to the powerful TITAN X, excel at performance and energy efficiency when performing inference for deep neural networks.
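
The throughput-versus-latency trade-off described here can be illustrated with a toy experiment even without a GPU: per-batch time grows with batch size while samples per second improve. A minimal NumPy stand-in (a single dense layer replaces a real network; sizes and iteration counts are arbitrary):

```python
import time
import numpy as np

# stand-in for a trained network: a single large dense layer + ReLU
W = np.random.rand(4096, 4096).astype(np.float32)

def forward(batch):
    return np.maximum(batch @ W, 0.0)

for batch_size in (1, 8, 64):
    x = np.random.rand(batch_size, 4096).astype(np.float32)
    start = time.perf_counter()
    for _ in range(20):
        forward(x)
    per_batch = (time.perf_counter() - start) / 20
    print(f"batch {batch_size:3d}: {per_batch * 1e3:7.2f} ms/batch, "
          f"{batch_size / per_batch:9.1f} samples/s")
```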

Keywords: Deep Learning and AI, Self-Driving Cars, Performance Optimization, Automotive, GTC 2016 - ID S6136
 
Opening Keynote
Jensen Huang (NVIDIA)

Don't miss GTC's opening keynote address from NVIDIA CEO and co-founder Jensen Huang.

Keywords: Deep Learning and AI, Self-Driving Cars, Automotive, Robotics & Autonomous Machines, GTC 2016 - ID S6699
Graphics Virtualization
Delivering 3D Workstations with VMware Horizon and NVIDIA GRID™
Christophe Delattre (Dassault Systemes), Jim McKinney (Esri), David Benson (Bloomberg), Pat Lee (VMware), Luke Wignall (NVIDIA)

The panel will share their experiences and insights gained from transforming their businesses by moving 3D workstations to the data center with NVIDIA GRID and VMware Horizon. Hear actual customers speak about their real-world use of NVIDIA GRID vWorkstations and vGPU. The audience is invited to ask questions of the panel as well.

Keywords: Graphics Virtualization, Self-Driving Cars, Product & Building Design, Automotive, GTC 2016 - ID S6200
HPC and Supercomputing
How to Deal with Radiation: Evaluation and Mitigation of GPU Soft Errors
Paolo Rech (UFRGS)

We will disclose the basics of radiation-induced effects on GPUs and propose effective solutions to mitigate them. The session will start with an exhaustive description of the physical mechanisms by which ionizing particles generate failures. Then, taking advantage of data gathered over four years of GPU neutron-beam tests, we evaluate GPUs' error rates in realistic applications and identify GPUs' weakest resources. Observed errors are also compared with Titan field data and automotive-market reliability constraints. Additionally, mitigation strategies like ECC and software-based hardening solutions are analyzed and experimentally evaluated. Finally, we will advise on how to implement parallel algorithms and distribute threads in the most efficient and reliable way.
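
One common software-based hardening strategy of the kind evaluated in such studies is duplication with comparison, where the computation is executed twice and a mismatch flags a silent data corruption. A minimal sketch of the idea in Python (the kernel is a placeholder, not one of the benchmarks from the session):

```python
import numpy as np

def kernel(x):
    """Placeholder for the computation being protected (e.g., a GPU kernel)."""
    return np.sort(x) ** 2

def run_with_dwc(x):
    """Duplication with comparison: run twice, treat a mismatch as a detected
    silent data corruption, and re-execute to recover (simple majority vote)."""
    a, b = kernel(x), kernel(x)
    if np.array_equal(a, b):
        return a
    c = kernel(x)
    if np.array_equal(a, c) or np.array_equal(b, c):
        return c
    raise RuntimeError("unrecoverable error: all three executions disagree")

result = run_with_dwc(np.random.rand(1024))
print("checked result (first 4 values):", result[:4])
```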

Keywords: HPC and Supercomputing, Self-Driving Cars, Automotive, GTC 2016 - ID S6249
Intelligent Machines and IoT
Enabling Smart Cities with GPU-Accelerated Infrastructure
Pradeep Gupta (NVIDIA)

Smart cities are getting a lot of attention, and both academia and industry are focusing on and investing in next-generation technologies to make this a reality. We'll present a case study on how GPU-based IT infrastructure can enable different components and use cases of a smart city platform. Smart city IT infrastructure will need massive computational power and visualization of extremely rich visual content within a given energy budget. GPU-accelerated data centers can provide a unified IT infrastructure and software platform to achieve that. This case study takes Singapore's smart nation initiative as a reference and will also present different initiatives and projects using the GPU platform.

Keywords: Intelligent Machines and IoT, Intelligent Video Analytics, Self-Driving Cars, Automotive, GTC 2016 - ID S6148
Performance Optimization
Advanced System Power Management for Deep Learning and A.I. Machines (Presented by Linear Technology)
Dave Dwelley (Linear Technology)

Linear Technology's DC/DC regulator and power management solutions enable designers to increase performance in GPU- and CPU-based systems. Improved electrical, thermal, and mechanical properties for core, I/O, and memory rails, combined with expertise and tools for PCB layout, simulation, and design verification, permit deployment of more efficient, lighter-weight, cooler, and more compact systems. This presentation will also focus on methods of controlling, monitoring, and debugging power circuits by digitally communicating with the device, reading temperature and load current data while setting voltages and start-up conditions. Future product advancements related to powering automotive electronics will also be discussed.

Keywords: Performance Optimization, Deep Learning and AI, Self-Driving Cars, Automotive, GTC 2016 - ID S6761
Product & Building Design
Leveraging GPU Technology to Visualize Next-Generation Products and Ideas
Michael Wilken (Saatchi & Saatchi)

While CAD real-time visualization solutions and 3D content creation software have been available for decades, practical workflow barriers have inhibited efficient integration into an agency's creative and production process. Using the latest in GPU technology from NVIDIA, Saatchi & Saatchi LA is pioneering the breaking of these barriers. 3D artists work with creative directors and clients to rapidly visualize ideas and products. Real-time visualization is integrated seamlessly into the production workflow, making rapid visualization both inspiring and cost-saving. We'll provide a top-level overview of how Saatchi is leveraging NVIDIA GPU technologies, including the NVIDIA VCA, to create powerful virtual creative collaborations.

Keywords: Product & Building Design, Self-Driving Cars, Automotive, Rendering and Ray Tracing, GTC 2016 - ID S6251
Robotics & Autonomous Machines
Brain-in-a-Box: A Unified Perception and Navigation Framework for Mobile Robots, Drones and Cars
Massimiliano Versace (Neurala Inc.)

Mobile robots, drones, and self-driving cars need advanced and coordinated capabilities in perception and mobility to co-exist with humans in complex environments. To date, the most effective "machines" built for these tasks come from biology. Max Versace, CEO of Neurala and director of the Boston University Neuromorphics Lab, will explain how mobile robots, drones, and cars can use GPUs coupled with relatively inexpensive sensors, available today in the sensor pack of a common smartphone, to intelligently sense and navigate their environment. The talk will illustrate a working "mini-brain" that can drive a ground robot to learn, map, and understand the layout of its environment and the objects in it, while avoiding collisions.

Keywords: Robotics & Autonomous Machines, Deep Learning and AI, Self-Driving Cars, Automotive, GTC 2016 - ID S6192
 
Hercules: High-Performance Real-time Architectures for Low-Power Embedded Systems
Paolo Burgio (University of Modena and Reggio Emilia, Italy)

Many-core architectures are the key building block for the next generation of embedded systems, where power consumption will be the primary concern. Platforms such as NVIDIA Tegra X1 with a GPU and a multi-core host provide an unprecedented performance/watt trade-off, but they are not yet widely adopted in domains such as advanced driver assistance systems (ADAS), where safety-critical requirements and a tight interaction with the surrounding environment call for predictable performance. The Hercules project will develop an integrated framework to obtain predictable performance on top of cutting-edge heterogeneous COTS many-core platforms, with the final goal of obtaining an order-of-magnitude improvement in the cost and power consumption of next-generation real-time applications.

Keywords: Robotics & Autonomous Machines, Self-Driving Cars, Intelligent Machines and IoT, Automotive, GTC 2016 - ID P6167
 
Acceleration of a Pseudo-Bacterial Potential Field Algorithm for Path Planning
Ulises Orozco-Rosas (Instituto Politecnico Nacional)

Path planning of a mobile robot -- determining an optimal path from a universe of possible solutions -- is one of the most computationally intensive tasks and a challenge in dynamically changing environments. Using GPUs, it is possible to process data-intensive tasks efficiently. This work presents the acceleration of a Pseudo-Bacterial Potential Field (PBPF) algorithm for path planning. The MATLAB-CUDA implementation of the PBPF algorithm shows how to find an optimal collision-free path for a mobile robot and how to speed up the path planning computation through the use of GPUs. The simulation results demonstrate the efficiency of the PBPF implementation in solving the path planning problem in offline and online modes.
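
The potential field core of the PBPF approach is the classic formulation in which the goal attracts and obstacles repel, and the robot descends the resulting force field. A minimal NumPy sketch of the plain potential field step (gains, radii, and obstacle positions are made up; the pseudo-bacterial/evolutionary tuning and the CUDA parallelization are not shown):

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """One gradient-descent step on an artificial potential field."""
    force = k_att * (goal - pos)                     # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if d < d0:                                   # repulsion acts only nearby
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    step = force / (np.linalg.norm(force) + 1e-9)    # unit step along the force
    return pos + 0.1 * step

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([4.5, 5.5]), np.array([7.0, 8.5])]
path = [pos]
for _ in range(300):
    pos = potential_field_step(pos, goal, obstacles)
    path.append(pos)
    if np.linalg.norm(goal - pos) < 0.2:
        break
print("final position:", np.round(pos, 2), "after", len(path) - 1, "steps")
```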

Keywords: Robotics & Autonomous Machines, Self-Driving Cars, Intelligent Machines and IoT, Automotive, GTC 2016 - ID P6288
Self-Driving Cars
High-Performance Pedestrian Detection on NVIDIA Tegra
Max Lv (NVIDIA)

We'll present an innovative approach to efficiently mapping a popular pedestrian detection algorithm (HOG) onto an NVIDIA Tegra GPU. Attendees will learn new techniques to optimize a real computer vision application on Tegra X1, as well as several new architecture features of the Tegra X1 GPU.
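
As a point of reference, a CPU version of HOG-based pedestrian detection is available off the shelf in OpenCV; a minimal sketch (not the Tegra-optimized implementation from the talk, and the image path is a placeholder):

```python
import cv2

# HOG descriptor with OpenCV's bundled pedestrian SVM (CPU reference version)
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("street_scene.jpg")          # placeholder input image
boxes, weights = hog.detectMultiScale(image, winStride=(8, 8), scale=1.05)

for (x, y, w, h) in boxes:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", image)
print(len(boxes), "pedestrian candidates found")
```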

Keywords: Self-Driving Cars, Automotive, Computer Vision and Machine Vision, GTC 2016 - ID S6108
 
Delivering Personalized Cloud Services to the Car
Albert Jordan (CloudCar)

Most of today's IVI solutions try to replicate the smartphone interaction model in the car. Adopting an approach that is similar to smartphones will not result in differentiated solutions with a sustainable competitive advantage. More importantly, the immersive experiences that are typical of smartphone interaction are not suitable in a driving environment. CloudCar is proposing a new approach to delivering connected services to the car, which brings about a new interaction model suited for the car.

Keywords: Self-Driving Cars, Deep Learning and AI, Automotive, GTC 2016 - ID S6179
 
Developing Software Architectures for Autonomous Driving Vehicles
Sebastian Ohl (Elektrobit)

Modern vehicle functions like advanced driver assistance systems (ADAS) or even fully autonomous driving functions have a rapidly growing demand for high-performance computing power. To fulfill the fail-operational requirements of autonomous driving functions, the next generation of a vehicle infrastructure platform has to ensure the execution of safety-critical functions with high reliability. In addition, the "always connected" feature needed for autonomous driving should be protected by powerful security mechanisms. We'll show how the requirements of ADAS can be fulfilled in an efficient way, on both the system and software architecture levels, using the example of automated valet parking from Elektrobit.

Keywords: Self-Driving Cars, Automotive, GTC 2016 - ID S6252
 
Performance Optimizations for Automotive Software
Stefan Schoenefeld (NVIDIA), Pradeep Chandrahasshenoy (NVIDIA)

Learn how to use NVIDIA performance tools to optimize your scene graph and rendering pipeline for use in automotive software. We'll demonstrate the capabilities of these tools using some simple Qt-based examples and will look at some of the more common mistakes in writing efficient software and how to avoid them.

Keywords: Self-Driving Cars, Performance Optimization, Automotive, Real-Time Graphics, GTC 2016 - ID S6341
 
Putting Tegra into Drive: Safe, Secure, and Seamless Vehicle Integration
Ulrich Meis (OpenSynergy)

A solution for vehicle integration targeting the NVIDIA Tegra Jetson Pro and DRIVE CX platforms will be presented. Communication with the vehicle via the automotive CAN bus is managed by a system that runs separately from other functions in its own execution environment, backed by its own real-time operating system -- all based on the industry-standard Automotive Open System Architecture (AUTOSAR). Learn about the various benefits this design has over handling CAN directly in systems like Linux, Android, or QNX.

Keywords: Self-Driving Cars, Automotive, GTC 2016 - ID S6342
 
Building the Fully Digital Audi Virtual Cockpit
Horst Hadler (e.solutions)

Get an overview of the techniques used for Audi's Tegra 3-powered virtual cockpit, focusing on (1) reduction of start-up time, (2) instrument display at 60 fps, and (3) synchronization with the infotainment main unit. Additionally, get to know the overall software structure and see how graphical effects were implemented. The virtual cockpit is available in single-display and dual-display configurations. The single-display configuration is used for sport models, like the TT and R8, where the output of the infotainment main unit is integrated into the instrument cluster. In contrast, the dual-display configuration additionally features a "standard" main unit display.

Keywords: Self-Driving Cars, Embedded, Automotive, Real-Time Graphics, GTC 2016 - ID S6377
 
Training My Car to See: Using Virtual Worlds
Antonio M. Lopez (Computer Vision Center & Universitat Autonoma de Barcelona)

Learn how realistic virtual worlds can be used to train vision-based classifiers that operate in the real world, i.e., avoiding the cumbersome task of collecting ground truth by manual annotation. Many vision-based applications rely on classifiers trained with annotated data. We avoid manual annotation by using realistic computer graphics (e.g., video games). However, the accuracy of the classifiers drops because the virtual (training) and real (operation) worlds are different. We overcome this problem using domain adaptation (DA) techniques. In the context of vision-based driver assistance and autonomous driving, we present our DA experiences using classifiers based on both handcrafted features and CNNs. We show how GPUs are used in all the stages of our training and operation paradigm.
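
The underlying issue -- a classifier trained on virtual-world data degrades on real-world data and recovers after adapting with a small annotated real set -- can be reproduced in a toy experiment. A minimal scikit-learn sketch (the domain gap is simulated with a mean shift, and the adaptation is simple reweighted retraining, not the authors' DA techniques):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift):
    """Two-class Gaussian data; 'shift' moves both classes (the domain gap)."""
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 5))
    X1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 5))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_syn, y_syn = make_data(500, shift=0.0)      # "virtual world" training data
X_real, y_real = make_data(500, shift=1.5)    # "real world" operation data
X_adapt, y_adapt = make_data(25, shift=1.5)   # small annotated real set

clf = LogisticRegression().fit(X_syn, y_syn)
print("synthetic-only accuracy on real data:", clf.score(X_real, y_real))

# crude adaptation: retrain on synthetic plus the small real set, upweighting real
w = np.concatenate([np.ones(len(y_syn)), 20.0 * np.ones(len(y_adapt))])
clf = LogisticRegression().fit(np.vstack([X_syn, X_adapt]),
                               np.concatenate([y_syn, y_adapt]),
                               sample_weight=w)
print("accuracy after adaptation:", clf.score(X_real, y_real))
```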

Keywords: Self-Driving Cars, Deep Learning and AI, Automotive, Computer Vision and Machine Vision, GTC 2016 - ID S6467
 
A Single Forward Propagation of Neural Network for Image Detection
Minyoung Kim (Panasonic Silicon Valley Laboratory)

This talk will describe how a single forward propagation of a neural network can give us the locations of objects of interest in an image frame. There are no proposal-generation steps before running the neural network and no post-processing steps after. The speaker will describe a fully neural detection system, implemented by Panasonic's deep learning research teams, that achieves real-time speed and state-of-the-art performance. The talk also includes a live demonstration of the system on a laptop PC with an NVIDIA 970M GPU and a tablet with an NVIDIA Tegra K1 GPU.
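
Proposal-free detectors of this kind typically have the network emit a dense grid of box predictions in one forward pass, which then only needs thresholding (and usually non-maximum suppression). A minimal NumPy sketch of that decoding step (grid size, channel layout, and threshold are illustrative assumptions, not Panasonic's network):

```python
import numpy as np

def decode_grid(output, conf_thresh=0.5, cell=32):
    """Decode a (H, W, 5) network output where each grid cell predicts
    (objectness, dx, dy, w, h); offsets and sizes are in relative units."""
    H, W, _ = output.shape
    boxes = []
    for gy in range(H):
        for gx in range(W):
            obj, dx, dy, w, h = output[gy, gx]
            if obj < conf_thresh:
                continue
            cx, cy = (gx + dx) * cell, (gy + dy) * cell     # box center in pixels
            half_w, half_h = 0.5 * w * W * cell, 0.5 * h * H * cell
            boxes.append((cx - half_w, cy - half_h, cx + half_w, cy + half_h, obj))
    return boxes          # in practice, non-maximum suppression would follow

# stand-in for one forward pass on a 416x416 image (13x13 grid of predictions)
output = np.random.rand(13, 13, 5)
print(len(decode_grid(output, conf_thresh=0.9)), "detections above threshold")
```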

Keywords: Self-Driving Cars, Deep Learning and AI, Embedded, Automotive, GTC 2016 - ID S6470
 
Collision Avoidance on NVIDIA Tegra
Richard Membarth (DFKI), Christoph Lauer (AUDI AG)

D(r)ive deep into crash prediction in future automotive systems that allow the tracking of dozens of objects in real time by utilizing the processing power of embedded GPUs. We'll describe (1) the new possibilities for crash prediction systems in embedded systems that are only possible by taking advantage of recent developments in embedded GPUs, and (2) the implementation and optimization of such a system on the Tegra K1 utilizing AnyDSL, a framework for rapid prototyping of domain-specific libraries that targets NVVM and CUDA.

Keywords: Self-Driving Cars, Embedded, Performance Optimization, Automotive, GTC 2016 - ID S6490
 
Creating Unique Customer Relationships with Deep Learning in the Cloud and in the Car
Nick Black (CloudMade)

The car presents a particular challenge for creators of learning systems -- it is incredibly rich in data and context, its hardware and software environments are heterogeneous and fragmented, and drivers expect incredible precision from its interactions. CloudMade has pioneered an approach to machine learning in the automotive context that leverages the richness of car data, the emerging computational power of the car, and the existing computational power of the cloud to deliver an automotive-grade machine learning toolset. With CloudMade's solutions, automotive OEMs can deliver personalized experiences to customers that together create a self-learning car that anticipates the needs and desires of the user.

Keywords: Self-Driving Cars, Deep Learning and AI, Embedded, Automotive, GTC 2016 - ID S6565
 
A Universal Trajectory Generator for Automated Vehicles
Christoph Klas (fka mbH, Aachen and Institute for Automotive Engineering, RWTH Aachen University)

A universal, real-time-capable, NMPC (nonlinear model predictive control) based implementation of a trajectory generator for highly automated vehicles is presented. Its main target is to serve as the central instance for all high-level ADAS or automated vehicle functions, thereby abstracting vehicle-dependent kinematics and dynamics. The trajectory planner is capable of the combined optimization of lateral and longitudinal dynamics in urban, rural, and highway scenarios. One of the major challenges, besides a stable system layout, is the fast solution of the embedded optimal control problem. For this, a bespoke GPU-optimized implementation was developed; apart from the planner itself, details about this implementation will be presented.
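
At its core, such an NMPC trajectory generator re-solves a finite-horizon optimal control problem at every planning step. A heavily simplified sketch (a 1D double-integrator vehicle model, SciPy's generic solver instead of the bespoke GPU solver, and made-up weights and limits):

```python
import numpy as np
from scipy.optimize import minimize

DT, N = 0.1, 20                        # time step [s], horizon length

def rollout(accels, x0, v0):
    """Integrate a 1D double-integrator vehicle model over the horizon."""
    xs, vs = [x0], [v0]
    for a in accels:
        vs.append(vs[-1] + a * DT)
        xs.append(xs[-1] + vs[-1] * DT)
    return np.array(xs), np.array(vs)

def cost(accels, x0, v0, v_ref):
    _, vs = rollout(accels, x0, v0)
    return np.sum((vs - v_ref) ** 2) + 0.05 * np.sum(accels ** 2)

# solve one receding-horizon step: track 10 m/s from 8 m/s within comfort limits
res = minimize(cost, np.zeros(N), args=(0.0, 8.0, 10.0),
               bounds=[(-3.0, 2.0)] * N)
print("first planned acceleration command:", round(res.x[0], 3))
```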

Keywords: Self-Driving Cars, Automotive, GTC 2016 - ID S6572
 
Solid State LiDAR for Ubiquitous 3D Sensing
Louay Eldada (Quanergy Systems, Inc.)

This tutorial covers, for the first time, the technology, operation, and application of Quanergy's solid-state LiDAR, which is making 3D sensing ubiquitous with its low price point, no moving parts, small form factor, light weight, low power consumption, long range, high resolution, high accuracy, long lifetime, and ability to operate in various environmental conditions. GPUs are used to perform in real time (1) LiDAR/video data fusion for modeling and recognizing the environment around a vehicle, (2) object detection, classification, identification, and tracking, (3) scenario analysis and path planning based on deep learning, and (4) actuation of vehicle controls.

Keywords: Self-Driving Cars, Deep Learning and AI, Automotive, Robotics & Autonomous Machines, GTC 2016 - ID S6726
 
NVIDIA's Deep Learning Car Computer - DRIVE PX
Shri Sundaram (NVIDIA)

At CES 2016, NVIDIA launched DRIVE PX 2 as the world's first AI supercomputer designed for autonomous vehicles. DRIVE PX is a lot more than that: it is an incredible development platform for developers to write autonomous car applications, and it is a reference design for Tier 1s and OEMs to reuse for safety-critical ECUs meant for Level 3/4/5 autonomy (as defined by SAE International). This talk will present the under-the-hood details of what makes it an AI supercomputer, a development platform, and a reference platform for autonomous cars.

Keywords: Self-Driving Cars, Deep Learning and AI, Automotive, GTC 2016 - ID S6733
 
Reimagining Cartography for Navigation
Eric Gundersen (Mapbox)

Attendees will walk away with an appreciation for how modern computing power and GPUs are enabling a whole new world of map design potential for the car. Vector-based maps can render data on the fly at 60 fps, taking in-car map design to a more video game-like state. The driving experience can be seamless across devices and tailored to exactly what a user needs for any specific use case.

Keywords: Self-Driving Cars, Embedded, Automotive, Real-Time Graphics, GTC 2016 - ID S6762
 
GPU-Based Deep Learning in Cloud and Embedded Systems
Frederick Soo (Nauto, Inc.)

We'll present how Nauto uses deep learning in its distributed, vehicle-based compute and sensor network, and our learnings to date. Topics will include the performance of deep learning algorithms for computer vision in embedded systems, strategies for distributing compute across networks of embedded systems and in the cloud, and collecting and labeling data to maximize the performance of the system. Nauto's system is a dual-camera, windshield-mounted dashcam with GPS, IMU, a wireless/cellular connection, and an SoC capable of running small CNNs in real time.

Keywords: Self-Driving Cars, Embedded, Deep Learning and AI, Automotive, GTC 2016 - ID S6806
 
Keynote Presentation - Toyota Research Institute
Gill Pratt (Toyota Research Institute)

Robots. Supercomputers. Cars. They're all coming together. Come hear Gill Pratt, one of the world's leading figures in artificial intelligence and CEO of the Toyota Research Institute, deliver what is sure to be an enlightening presentation.

Keywords: Self-Driving Cars, Deep Learning and AI, Automotive, Robotics & Autonomous Machines, GTC 2016 - ID S6831
 
Ford's Autonomous Vehicles Using GPUs
Mark Crawford (Ford Motor Company)

In this presentation, we discuss Ford's autonomous vehicle technology, including an overview of the tasks of sensing, sensor fusion, localization and mapping, object detection, and object classification. We examine the impact of GPU hardware in achieving significant improvements in the computational efficiency of our parallelized algorithms for vehicle localization, based on a combination of a synthetic aperture camera (derived from lidar data) and a Gaussian mixture 3D map approach. We provide an overview of some preliminary results of our deep learning research in the novel area of lidar-based methods for vehicle localization and object classification.

Keywords: Self-Driving Cars, Deep Learning and AI, Automotive, Robotics & Autonomous Machines, GTC 2016 - ID S6832
 
Maps for Autonomous Cars
Willem Strijbosch (TomTom), Blazej Kubiak (TomTom)

Hear the latest thinking on the maps that autonomous cars will use for highly accurate positioning. Autonomous cars need maps to function, and the most critical use of maps is centimeter-level positioning. TomTom solves this with highly accurate lane information and lateral depth maps, which we call RoadDNA. Autonomous driving and map creation have incredible synergy: mobile mapping cars go through the exact same process as autonomous cars -- sensor perception, sensor data processing, and comparison with a stored version of reality. We process the sensor data with GPUs for fast creation of deep neural networks (DNNs) that can recognize traffic signs and other road attributes, both in the car and in the cloud. These DNNs, RoadDNA, and the sensors in the car together enable autonomous cars.

Keywords: Self-Driving Cars, Deep Learning and AI, Big Data Analytics, Automotive, GTC 2016 - ID S6849
 
Audi Autonomous Braking with a 3D Monovision Camera
Rudolph Matthias (Audi AG)

To fulfill the EuroNCAP requirements, an autonomous braking system has to be developed. The emergency braking system is designed to brake for pedestrians as well as in car-to-car scenarios. We'll explain how the functional logic is developed and what has to be done to reach a zero-false-positive goal with excellent field performance. Audi was the first OEM to fulfill this goal with a single 3D mono-vision camera by developing the first ASIL B camera with our supplier Kostal; the architecture of the 3D camera is explained as well.

Keywords: Self-Driving Cars, Automotive, GTC 2016 - ID S6856
 
VW's Approach to Piloted Driving with Deep Learning
Martin Hempel (Volkswagen of America)

The Electronics Research Laboratory (ERL) is part of the global research and development network that supports the Volkswagen Group brands, including Audi, Bentley, Bugatti, Lamborghini, Porsche, and Volkswagen. Located in Silicon Valley, we draw upon its innovation spirit to build new concepts and technologies for our future vehicles. Deep learning is at the center of our work in the fast evolution of piloted driving. As part of our research into this technology, our mission is to research deep neural network architectures and bridge the gap between concept and series development application. In this session, we'll present our current development in a variety of deep learning projects as well as insights into how this technology could affect the future of piloted driving.

Keywords: Self-Driving Cars, Deep Learning and AI, Automotive, GTC 2016 - ID S6857
 
The Future Outlook for Connected and Automated Driving
Roger Lanctot (Strategy Analytics)

We'll review current connected and automated driving initiatives with the goal of identifying progress and impediments. We'll look at market and thought leaders, tests, implementations, partnerships, and the latest developments, some of which will be reflected from presentations and announcements taking place at GTC. We'll share some forecast specifics and perspectives on the timing of partial and full autonomy and the expansion of vehicle connectivity.

Keywords: Self-Driving Cars, Big Data Analytics, Intelligent Machines and IoT, Automotive, GTC 2016 - ID S6858
 
ROBORACE: The Global Driverless Championship of Intelligence and Technology
Denis Sverdlov (Roborace)

ROBORACE is a global race series for full-size driverless electric cars. The championship will provide a showcase platform for the autonomous driving solutions that are now being developed by many large industrial automotive and technology players as well as top tech universities. As a competition of intelligence and technology, ROBORACE is fusing AI with automotive engineering in extreme conditions. Bringing together motorsports and gaming in this battle of algorithms, the teams will compete on racing tracks in major cities across the world. During the talk, we will share the technical vision of our competition and explain the selection criteria for the racing teams. Join us to discuss and be the first to hear some exciting news about ROBORACE!

Keywords: Self-Driving Cars, Deep Learning and AI, Automotive, Robotics & Autonomous Machines, GTC 2016 - ID S6866
 
Drive Me: Volvo's Autonomous Car Program
Henrik Lind (Volvo Car Corporation)

We'll present the Drive Me project, involving 100 highly autonomous vehicles in the vicinity of Gothenburg, Sweden. Henrik will discuss different technologies related to sensors and sensor processing and the resulting requirement for high-performance processing in autonomous vehicles.

Keywords: Self-Driving Cars, Press-Suggested Sessions: Self-Driving Cars & Auto, Automotive, GTC 2016 - ID S6829
 
WePod: Autonomous Shuttles on Public Roads
Floris Gaisser (Delft University of Technology)

The WePod is the first self-driving vehicle on the public road without a steering wheel or pedals. To achieve driving in such a complex environment and guarantee safety, multiple sensors covering 360 degrees around the vehicle have been used. Sensor fusion, road-user detection, classification, and tracking have been implemented on NVIDIA's DRIVE PX platform. This session will give an overview of the system's architecture and implementation, and preliminary test results of driving on the public road will be presented.

Keywords: Self-Driving Cars, Deep Learning and AI, Press-Suggested Sessions: AI & Deep Learning, Automotive, GTC 2016 - ID S6830
 
GPU-Based Pedestrian Detection for Autonomous Driving
Victor Campmany (Computer Vision Center), Juan Carlos Moure (Universitat Autonoma de Barcelona)

Pedestrian detection for autonomous driving has gained a lot of prominence during the last few years. Besides being one of the hardest tasks within computer vision, it involves huge computational costs. Obtaining acceptable real-time performance, measured in frames per second (fps), for the most advanced algorithms is a difficult challenge. We propose a CUDA implementation of a well-known pedestrian detection system (i.e., Random Forest of Local Experts). It includes LBP and HOG as feature descriptors and SVM and Random Forest as classifiers. We introduce significant algorithmic adjustments and optimizations to adapt the problem to the NVIDIA GPU architecture. The aim is to deploy a real-time system providing reliable results.
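
The feature-plus-classifier pipeline described (HOG/LBP descriptors feeding an SVM or Random Forest) can be prototyped on the CPU in a few lines; a minimal sketch with scikit-image HOG features and a scikit-learn Random Forest on random stand-in windows (not the CUDA implementation from the poster):

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier

def hog_features(windows):
    """Extract HOG descriptors from 128x64 grayscale windows."""
    return np.array([hog(w, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for w in windows])

# stand-in data: random "pedestrian" and "background" windows
rng = np.random.default_rng(0)
pos = rng.random((20, 128, 64))
neg = rng.random((20, 128, 64))
X = hog_features(np.concatenate([pos, neg]))
y = np.array([1] * 20 + [0] * 20)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
window = rng.random((128, 64))
print("pedestrian probability:", clf.predict_proba(hog_features([window]))[0, 1])
```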

Keywords: Self-Driving Cars, Automotive, Computer Vision and Machine Vision, GTC 2016 - ID P6181
 
NVIDIA's Open Autopilot Platform
Justin Ebert (NVIDIA)

NVIDIA automotive solutions are being used by more than 110 automakers, Tier 1 suppliers, startup companies, and research institutions. Learn more about the hardware and software architecture of NVIDIA's open autopilot platform. From the NVIDIA DRIVE PX 2 platform to the DriveWorks libraries and SDK, you'll get insights into the ingredients required to create a Level 3 autopilot system for your vehicle. We'll also cover bringing the user experience into the cockpit, visualizing what the vehicle is sensing and seeing.

Keywords: Self-Driving Cars, Embedded, Deep Learning and AI, Automotive, GTC Europe 2016 - ID SEU6251
 
Next Generation Tegra
Glenn Schuster (NVIDIA)
 
Keywords: Self-Driving Cars, Automotive, GTC Europe 2016 - ID SEU6238
 
Challenges and Research Needs on Automated Driving
Devid Will (Fka Forschungsgesellschaft Kraftfahrwesen mbH Aachen)

In this talk, we will give an overview of challenges and research needs in automated driving, and of solutions that can help achieve the research goals, also with the help of GPU technology.

Keywords: Self-Driving Cars, Deep Learning and AI, Automotive, GTC Europe 2016 - ID SEU6124
 
Audi Cognitive Vehicles - How Deep Learning Drives The Future
Florian Netter (Audi Electronics Venture), Felix Friedmann (Audi Electronics Venture)

Deep learning is changing the paradigms of automotive software development. Vehicles will act as mobile sensors with huge computing power, and will become more intelligent when connected to the environment. Cumbersome feature extraction algorithm designs, which require years of experience, are often outperformed by deep learning models. At the end of this development we will reach truly cognitive cars, which can handle even the most challenging traffic situations and interact seamlessly with drivers and surroundings. But that comes with new challenges for the automotive industry. This session gives an overview of AI applications, learning mechanisms, hardware requirements, and architectures in connection with an IT backend.

Keywords: Self-Driving Cars, Deep Learning and AI, Automotive, GTC Europe 2016 - ID SEU6219
 
Drive Me - Self-driving Volvos on Public Roads
Erik Coelingh (Volvo Cars)
Volvo Drive Me will make 100 XC90 SUVs available for customers to drive autonomously on certain roads in and around Gothenburg, Sweden. Central to this pilot project is the DRIVE PX 2 AI supercomputer for the car. Hear the latest about Volvo Cars' development of self-driving vehicles and the launch of this exciting program.
 

 

Keywords: Self-Driving Cars, Automotive, GTC Europe 2016 - ID SEU6126
 
WEpods: A "Last Mile" People Transporter
Pieter Jonker (Delft University of Technology)
Learn all about the WEpods project. TU Delft, with partners, developed a six-person vehicle without a driver that can drive a route on a public road at the WUR Agricultural University (the campus loop), and on demand (by app) from the train station in the city of Ede to the campus in Wageningen. Last-mile transport is a hot issue: transport of tourists in cities, transport from a parking place to the departure lounge or the hospital, and bringing parcels around in old Dutch cities with canals, where big trucks from DHL and UPS block the road even for bicycles and pedestrians. How nice would it be if you could order an autonomous vehicle to bring your parcel when you are actually home, rather than to a neighbor who is seldom at home? Delivery on demand, like printing on demand.

 

Keywords: Self-Driving Cars, Deep Learning and AI, Automotive, GTC Europe 2016 - ID SEU6142
 
Roborace - Driverless, Electric and Connected
Justin Cooke (NVIDIA)

Roborace is the world's first driverless electric racing series, launching this year on custom-made city tracks. The race circuit will celebrate advances in artificial intelligence and will challenge the world's greatest software engineers and companies to compete at extraordinary speeds and in extreme conditions. Showcasing the safety of driverless vehicles, Roborace highlights the enormous potential to transform cities on every level: reducing traffic, accidents, pollution, and more. The Roborace car is powered by the NVIDIA DRIVE PX 2 and will be one of the most intelligent cars ever to drive on a public road.

Keywords: Self-Driving Cars, Automotive, GTC Europe 2016 - ID SEU6258
 
Autonomous Driving enabled by NVIDIA GPUs: Virtual-world SYNTHIA & Visual Perception
Antonio M. Lopez (Autonomous University of Barcelona)

Convolutional neural networks (CNNs) have propelled camera-based perception as a core technology for autonomous driving. Training and testing such CNNs requires lots of images with metadata, which is costly and error-prone when performed by humans. To minimize this problem, we created SYNTHIA (www.synthia-dataset.net), a graphically realistic virtual city for automatically collecting images and their metadata. NVIDIA GPUs have eliminated the run-time bottleneck from the rendering stage. We present results of using SYNTHIA for CNN training. Moreover, we will see how NVIDIA GPUs allow us to run, in real time, computationally intensive key tasks for camera-based perception (stereo and stixel computation, object detection). We will embed these tasks in DRIVE PX 2 for controlling our robocar.

Keywords: Self-Driving Cars, Deep Learning and AI, Virtual Reality and Augmented Reality, Automotive, GTC Europe 2016 - ID SEU6141
 
Virtual Proving for Fast Development and Testing of Automated Driving
Ilja Radusch (Fraunhofer FOKUS)

Development of automated driving is driven by two factors: data and deep learning. After the successful deployment of automated vehicles in controlled environments like highways, research and development now tackles the challenges of everyday driving in urban and suburban environments under all weather conditions. Here, simple path planning with obstacle avoidance needs to be upstaged by complex scene and situation understanding, tasks where deep learning is establishing itself as a viable, and most likely the only, solution. Learning, in general, requires not only an open and capable mind but also quality input and, even more so, a good teacher. Accordingly, we require not only lots of varying sensor data but also a deep understanding of it. Here, virtual proving proved crucial for us.

Keywords: Self-Driving Cars, Automotive, GTC Europe 2016 - ID SEU6236
 
Intelligent Perception and Situational Awareness for Automated Vehicles
Christian Laugier (Inria Grenoble)

We'll present the techniques developed by Inria's Chroma team to perceive the environment of dynamic scenes using Bayesian Occupancy Filters and V2X communication (vehicle-to-vehicle and vehicle-to-infrastructure). These technologies were developed in collaboration with partners like Toyota and Renault; the same technologies can be used for mobile robotics. We'll show how heterogeneous sensors can be used efficiently, merged, and filtered in real time into probabilistic grids, and how collision risks can be computed in an optimized way on embedded GPUs such as NVIDIA's Jetson TX1. We'll also show that the perception of the environment can be distributed between connected cars and perception units using V2X protocols.
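
The probabilistic grids mentioned here are typically maintained as per-cell log-odds values that are updated as each sensor reading arrives. A minimal NumPy sketch of that fusion step (a static grid with a made-up inverse sensor model; the full Bayesian Occupancy Filter also estimates cell velocities, which is not shown):

```python
import numpy as np

# log-odds occupancy grid; 0 corresponds to "unknown" (p = 0.5)
grid = np.zeros((100, 100))
L_OCC, L_FREE = np.log(0.7 / 0.3), np.log(0.3 / 0.7)   # inverse sensor model

def update(grid, occupied_cells, free_cells):
    """Fuse one scan into the grid: a Bayesian update is an addition in log-odds."""
    for (i, j) in occupied_cells:
        grid[i, j] += L_OCC
    for (i, j) in free_cells:
        grid[i, j] += L_FREE
    return grid

# two scans (say, lidar- and camera-derived) agreeing on one occupied cell
grid = update(grid, occupied_cells=[(50, 50)], free_cells=[(50, 48), (50, 49)])
grid = update(grid, occupied_cells=[(50, 50)], free_cells=[(50, 49)])

prob = 1.0 - 1.0 / (1.0 + np.exp(grid))                # back to probabilities
print("P(occupied) at the obstacle cell:", round(float(prob[50, 50]), 3))
```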

Keywords: Self-Driving Cars, Automotive, Robotics & Autonomous Machines, GTC Europe 2016 - ID SEU6200
 
AI-based Driver Monitoring for Self-Driving Cars
Martin Krantz (Smart Eye)

As cars become more automated, it is increasingly important to keep track of the driver, who is still in charge of controlling the car safely. Driver monitoring enables the car to determine whether there's a driver, and whether that driver is awake and paying attention to the surrounding traffic. Systems with these capabilities are planned for launch in premium cars in 2017. Learn how next-generation systems with multiple high-resolution cameras will enable much higher gaze accuracy and have the potential to become part of the in-car user interface in a multi-modal UX solution consisting of gaze, touch, gestures, and voice.

Keywords: Self-Driving Cars, Automotive, Computer Vision and Machine Vision, GTC Europe 2016 - ID SEU6235
 
A Pipeline for Real-Time Mapping from Geo-Visual Information
Anton Slesarev (Yandex), Fedor Chervinsky (Yandex)

The talk will describe a pipeline for collecting and processing geo-visual information for live mapping and detection of road events. It details how data is gathered using mobile devices placed in vehicles; each device has a stereo camera, GPS plus an accelerometer, and an NVIDIA Jetson TX1. We use a combination of visual SLAM and semantic segmentation algorithms based on deep learning to generate hypotheses for updating the map in real time. The resulting semi-automatically verified stream can then be utilized in different ways: directly to place or remove objects on the map, to alert users about road events, and in many other similar functions, so that the most up-to-date mapping experience can be designed.

Keywords: Self-Driving Cars, Automotive, Video and Image Processing, GTC Europe 2016 - ID SEU6146
 
Autonomous Driving at RENAULT: A Revolution for Mobility
Remi Bastien (RENAULT)
 
Keywords: Self-Driving Cars, Automotive, GTC Europe 2016 - ID SEU6249
Virtual Reality and Augmented Reality
The Audi VR Experience - A Look into the Future of Digital Retail
Marcus Kuehne (Audi AG), Darren Jobling (Zerolight), Thomas Zuchtriegel (Audi AG)

We'll give an insight into the philosophy behind the "Audi VR Experience" and share that experience with you. We'll share the challenges as well as the learnings from creating this VR experience, and we'll explain why Audi is an attractive industry partner for all VR technology and content companies, with a special focus on visual performance. Darren Jobling, CEO of project partner Zerolight, will join us to explain how Zerolight achieved the VR visual performance defined by Audi.

Keywords: Virtual Reality and Augmented Reality, Self-Driving Cars, Product & Building Design, Automotive, GTC 2016 - ID S6786
 
 