GTC On-Demand

AI for In-Vehicle Applications
Fusing Vision and 3D Sensors with AI to Build Cognition Systems
Ronny Cohen (VayaVision), Ido Goren (VayaVision)

Learn how to use GPUs to run 3D and camera deep learning fusion applications for autonomous driving. Cameras provide high resolution 2D information, while lidar has relatively low resolution but provides 3D data. Smart fusing of both RGB and 3D information, in combination with AI software, enables the building of ultra-high reliability classifiers. This facilitates the required cognition application for semi-autonomous and fully autonomous driving.  

 
Keywords:
AI for In-Vehicle Applications, Self-Driving Cars, Deep Learning and AI, GTC 2017 - ID S7235
 
Automated Truck Driving and Platooning with DRIVE PX 2
Devid Will (fka Forschungsgesellschaft Kraftfahrwesen mbH Aachen)

We'll present achievements in the field of automated truck driving, specifically the use case of lane keeping in platooning scenarios based on mirror cameras. Lane detection, generating control parameters, controller, and arbitration functions all run on the NVIDIA DRIVE PX 2 with three cameras attached to it. 

 
Keywords:
AI for In-Vehicle Applications, Self-Driving Cars, Deep Learning and AI, GTC 2017 - ID S7426
 
Building Emotionally Aware Cars
Abdelrahman Mahmoud (Affectiva)
Advanced and autonomous AI systems surround us daily, but as smart as they are, they lack the ability to sense and adapt to human emotions. At Affectiva, our mission is to humanize technology by bringing artificial emotional intelligence (Emotion AI) to the digital world. Using computer vision and deep learning, Affectiva measures facial expressions of emotion. We'll explore the applications of Emotion AI in automotive, showing how a driver's emotions can be measured in human-driven cars and (semi-)autonomous vehicles to improve road safety and deliver a more personalized transportation experience. In addition, we'll share findings from over 28 hours of collected in-car data, such as the most frequently observed emotions.
 
Keywords:
AI for In-Vehicle Applications, Deep Learning and AI, Video and Image Processing, GTC 2017 - ID S7670
 
Airbus Vahana - Development of a Self-Piloted Air Taxi
Arne Stoschek (Airbus A3)

Vahana started in early 2016 as one of the first projects at A³, the advanced projects outpost of Airbus Group in Silicon Valley. The aircraft we're building doesn't need a runway, is self-piloted, and can automatically detect and avoid obstacles and other aircraft. Designed to carry a single passenger or cargo, Vahana is meant to be the first certified passenger aircraft without a pilot. We'll discuss the key challenges in developing the autonomous systems of a self-piloted air taxi that can operate in urban environments.

 
Keywords:
AI for In-Vehicle Applications, Deep Learning and AI, GTC 2017 - ID S7805
 
Optimus Ride: Fully Autonomous System for Electric Vehicle Fleets
Sertac Karaman (Optimus Ride Inc.)

Self-driving vehicles will transform the transportation industry, yet they must overcome challenges that go far beyond technology. We'll discuss both the challenges and opportunities of autonomous mobility and highlight recent work on autonomous vehicle systems by Optimus Ride Inc., an MIT spinoff based in Boston. The company develops self-driving technologies and is designing a fully autonomous system for electric vehicle fleets.

 
Keywords:
AI for In-Vehicle Applications, AI Startup, Self-Driving Cars, GTC 2017 - ID S7807
 
Visual Perception for Autonomous Driving on NVIDIA DRIVE PX 2
Antonio Miguel Espinosa (Universitat Autonoma de Barcelona)

We'll show how to program energy-efficient, automotive-oriented NVIDIA GPUs to run computationally intensive camera-based perception algorithms in real time. The Stixel World is a medium-level, compact representation of road scenes that abstracts millions of disparity pixels into hundreds or thousands of stixels. We'll present a fully GPU-accelerated implementation of stixel estimation that produces reliable results in real time (26 frames per second) on the DRIVE PX 2 platform.

 
Keywords:
AI for In-Vehicle Applications, Self-Driving Cars, Computer Vision and Machine Vision, GTC 2017 - ID S7848
 
The Path to End to End AI Solution (Presented by Inspur)
LeiJun Hu (Inspur)

As more traditional industries begin to adopt AI, they face challenges in computing platforms, system management, model optimization, and more. In this session, we build a GPU-based, end-to-end AI solution informed by a comparative analysis of the computation and communication behavior of Caffe and TensorFlow.

 
Keywords:
AI for In-Vehicle Applications, GTC 2017 - ID S7861
Computer Vision and Machine Vision
Bicycle Green Waves Powered by Deep Learning
Edward Zimmermann (Nonmonotonic Networks / joint R&D with GESIG. Gesellschaft fur Signalanlagen)

We'll explore using deep learning to improve urban traffic signaling. Bicycles (both self-powered and pedelecs) are the future of urban transport alongside (self-driving) electric cars, buses, and rail services. Green waves make cycling more efficient, attractive, and safer. Instead of fixed "green wave" timings or priorities, we present a work-in-progress system that learns to increase the flow of bicycle traffic while minimizing the impact on other traffic actors -- and in many use cases also improves general traffic times. Using low-power, efficient SoCs -- the Tegra X1 -- the "smarts" are integrated into the traffic lights, which provide V2I interfaces -- including to cyclists' mobile phones -- announcing signal changes and warning of pedestrians or cyclists. Dispensing with inductive-loop, magnetometer, or radar-based sensors buried in the pavement makes the system inexpensive. We'll present initial results from pilot testing in a German city.

 
Keywords:
Computer Vision and Machine Vision, AI for In-Vehicle Applications, Deep Learning and AI, GTC 2017 - ID S7170
 
ADAS Computer Vision and Augmented Reality Solution
Sergii Bykov (Luxoft)

We'll address how next-generation informational ADAS experiences are created by combining machine learning, computer vision, and real-time signal processing with GPU computing. Computer vision and augmented reality (CVNAR) is a real-time software solution, which encompasses a set of advanced algorithms that create mixed augmented reality for the driver by utilizing vehicle sensors, map data, telematics, and navigation guidance. The broad range of features includes augmented navigation, visualization, driver infographics, driver health monitoring, lane keeping, advanced parking assistance, adaptive cruise control, and autonomous driving. Our approach augments drivers' visual reality with supplementary objects in real time, and works with various output devices such as head unit displays, digital clusters, and head-up displays.  

 
Keywords:
Computer Vision and Machine Vision, AI for In-Vehicle Applications, Self-Driving Cars, GTC 2017 - ID S7312
 
The Deep Learning Pipeline for Self-Driving: Sensing, Perception, Localization, and Mapping
Raquel Urtasun (University of Toronto)

We'll discuss deep learning for self-driving cars, including sensing, perception, localization, and mapping. Our group at the University of Toronto focuses on machine perception for self-driving cars, and our work is enabling vehicles to see and understand their environment. This session is appropriate for a non-technical audience and will include a summary of my group's latest results in the field.

 
Keywords:
Computer Vision and Machine Vision, Self-Driving Cars, Deep Learning and AI, GTC 2017 - ID S7512
 
TorontoCity Benchmark: Towards Building Large-Scale 3D Models of the World
Min Bai (University of Toronto)
We'll introduce the TorontoCity HD mapping benchmark, which covers the full greater Toronto area with 712.5 square km of land, 8,439 km of roads, and around 400,000 buildings. Our benchmark provides different perspectives of the world captured from airplanes, drones, and cars driving around the city. Manually labeling such a large-scale dataset is infeasible. Instead, we propose to utilize different sources of high-precision maps to create our ground truth. Towards this goal, we develop algorithms that allow us to align all data sources with the maps while requiring minimal human supervision. We have designed a wide variety of tasks, including building height estimation (reconstruction), road centerline and curb extraction, building instance segmentation, building contour extraction (reorganization), semantic labeling, and scene-type classification (recognition). Our pilot study shows that most of these tasks are still difficult for modern convolutional neural networks.
 
Keywords:
Computer Vision and Machine Vision, HD Mapping, Deep Learning and AI, GTC 2017 - ID S7516
 
Deep Unconstrained Gaze Estimation with Synthetic Data
Shalini DeMello (NVIDIA)
Gaze tracking in unconstrained conditions, including inside cars, is challenging, and traditional gaze trackers fail in such settings. We've developed a CNN-based algorithm for unconstrained, head-pose- and subject-independent gaze tracking, which requires only consumer-quality color images of the eyes to determine gaze direction and points along the boundary of the eye, pupil, and iris. We'll describe how we successfully trained the CNN with millions of synthetic photorealistic eye images, which we rendered on the NVIDIA GPU for a wide range of head poses, gaze directions, subjects, and illumination conditions. Among appearance-based gaze estimation techniques, our algorithm has best-in-class accuracy.
 
Keywords:
Computer Vision and Machine Vision, AI for In-Vehicle Applications, GTC 2017 - ID S7551
 
Driver Monitoring: A Deep Learning Approach for Gaze Estimation
Cornelius Wefelscheid (Leopold Kostal GmbH & Co. KG)

A driver monitoring camera will be a valuable component for level 3 and 4 autonomous driving. The camera can determine the area of the driver's attention, which requires estimating the driver's gaze. Beyond signaling "eyes on road," gaze estimation can significantly improve the HMI user experience. We'll present a deep learning approach that trains a neural network end to end: small patches of the eye serve as input to a convolutional neural network. The tradeoff between a deep and a shallow net is an important consideration for a commercial product. The massive use of GPUs helps find the best tradeoff between accuracy and the number of FLOPS required, as well as the best-suited DNN architecture.

 
Keywords:
Computer Vision and Machine Vision, AI for In-Vehicle Applications, Deep Learning and AI, GTC 2017 - ID S7624
Deep Learning and AI
Keynote
Jensen Huang (NVIDIA)

Don't miss this keynote from NVIDIA Founder & CEO, Jensen Huang, as he speaks on the future of computing.

 
Keywords:
Deep Learning and AI, Data Center and Cloud Computing, Virtual Reality and Augmented Reality, Self-Driving Cars, Intelligent Video Analytics, GTC 2017 - ID S7820
 
8-Bit Inference with TensorRT
Szymon Migacz (NVIDIA)
We'll describe a method for converting FP32 models to 8-bit integer (INT8) models for improved efficiency. Traditionally, convolutional neural networks are trained using 32-bit floating-point arithmetic (FP32) and, by default, inference on these models employs FP32 as well. Our conversion method doesn't require re-training or fine-tuning of the original FP32 network. A number of standard networks (AlexNet, VGG, GoogLeNet, ResNet) have been converted from FP32 to INT8 and have achieved comparable Top 1 and Top 5 inference accuracy. The methods are implemented in TensorRT and can be executed on GPUs that support new INT8 inference instructions.
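The core idea of such a conversion can be sketched with symmetric linear quantization; this is an illustrative toy, not the actual TensorRT calibration code, and `amax` stands in for the dynamic-range bound that calibration would determine:

```python
def quantize_int8(values, amax):
    """Symmetric linear quantization: map [-amax, amax] onto [-127, 127]."""
    scale = 127.0 / amax
    return [max(-127, min(127, round(v * scale))) for v in values]

def dequantize_int8(codes, amax):
    """Recover approximate FP32 values from INT8 codes."""
    scale = amax / 127.0
    return [q * scale for q in codes]

# A small weight tensor quantized with a calibrated range of [-2, 2]
weights = [0.5, -1.2, 0.03, 2.0]
codes = quantize_int8(weights, amax=2.0)
approx = dequantize_int8(codes, amax=2.0)
```

Choosing `amax` well is exactly what calibration is about: too large wastes INT8 resolution, too small clips large activations.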
 
Keywords:
Deep Learning and AI, Tools and Libraries, Self-Driving Cars, Federal, GTC 2017 - ID S7310
HD Mapping
Accelerating HD Map Creation with GPUs
Shigeyuki Iwata (ZENRIN Corporation)

We'll explain how GPUs can accelerate the development of HD maps for autonomous vehicles. Traditional mapping techniques take weeks to produce highly detailed maps because massive volumes of data, collected by survey vehicles with numerous sensors, are processed, compiled, and registered offline manually. We'll describe how Japan's leading mapping company uses the concept of a cloud-to-car, AI-powered HD mapping system to automate and accelerate the HD mapping process, including actual examples of GPU data processing that use real-world data collected from roads in Japan.

 
Keywords:
HD Mapping, Self-Driving Cars, GTC 2017 - ID S7656
 
The Self-Healing Map for Automated Driving
Sanjay Sood (HERE)

Self-driving vehicles require a high-definition, real-time "self-healing" map to help them operate safely and comfortably. HERE is closing the data loop from vehicle to cloud to vehicle by using real-time sensor data that provides our backend with the information to effectively "self-heal" our map. A vehicle needs real-time, accurate, and semantically rich data to pinpoint its lane-level position and to make proactive maneuvers in response to changes or incidents that affect driving conditions. In this session, we will discuss how HERE is addressing this critical need with its HD Live Map by providing precise positioning on the road, enabling accurate planning of vehicle control maneuvers beyond sensor visibility, and increasing consumer trust through a more comfortable experience.

 
Keywords:
HD Mapping, Federal, Self-Driving Cars, GTC 2017 - ID S7665
 
A Multi-Source, Multi-Sensor Approach to HD Map Creation
Willem Strijbosch (TomTom)

It's simple to take the output of one type of sensor in multiple cars and produce a map based on that data. However, a map created in this way will not have sufficient coverage, attribution, or quality for autonomous driving. Our multi-source, multi-sensor approach leads to HD maps that have greater coverage, are more richly attributed, and have higher quality than single-source, single-sensor maps. In this session, we will discuss how we have created the world's largest HD map, are able to continuously update it, and are making autonomous driving safer and more comfortable.  

 
Keywords:
HD Mapping, Self-Driving Cars, GTC 2017 - ID S7809
 
Democratize Autonomous Driving
Gu Weihao (Baidu)

Most of you have probably already heard about Project Apollo, which we announced a couple of weeks ago at the Shanghai Motor Show. Baidu was one of the first major tech companies to embrace artificial intelligence and machine learning, and its autonomous vehicle push began with road testing in Beijing in 2015. In this presentation, you will learn more about Project Apollo, and we will share the application scenarios of autonomous driving, key practices in applying GPUs, and the vision of Baidu Intelligent Vehicle.

 
Keywords:
HD Mapping, Self-Driving Cars, GTC 2017 - ID S7826
Intelligent Machines and IoT
AirVision: AI Based, Real-Time Computer Vision System for Drones
Mindaugas Eglinskas (Magma Solutions, UAB)

Modern computing hardware and NVIDIA Jetson TX1 performance create new possibilities for drones and enable autonomous AI systems in which image processing can be done on board during flight. We'll present how Magma Solutions developed the AirVision system to cover advanced vision-processing tasks for drones, e.g., image stabilization; moving-object detection, tracking, and classification using deep neural networks; and visual position estimation using preloaded maps. We'll describe how Magma Solutions used the software frameworks Caffe with cuDNN, OpenVX/NVIDIA VisionWorks, and NVIDIA CUDA to achieve real-time vision processing and object recognition. The AirVision system was developed in part with Lithuanian Ministry of Defence funding and is being used as a tactical UAV system prototype.

 
Keywords:
Intelligent Machines and IoT, AI for In-Vehicle Applications, Computer Vision and Machine Vision, GTC 2017 - ID S7313
Intelligent Video Analytics
How Artificial Intelligence and Edge Computing Are Transforming Driver Safety, Recognition, and Retention
Avneesh Agrawal (Netradyne)

Through the application of artificial intelligence and deep learning, "computing at the edge" is changing how safety systems detect, capture, analyze, and apply reasoning to events. Using real-time analysis of the data from cameras and inertial sensors mounted on a vehicle, we can not only detect unsafe driving events but also analyze the chain of events that leads to unsafe situations. We can recognize a driver's positive performance in addition to areas where best practices need to be reinforced. Power-efficient yet powerful deep learning processors enable us to process all of this data in real time at the edge of the network. This allows us to create an accurate and comprehensive record of driving performance that fleet managers can use to create incentives for safer driving. Insurance companies can also use this information to set proper premiums customized for individual drivers and potentially adjusted dynamically to reflect the driving environment.

 
Keywords:
Intelligent Video Analytics, AI for In-Vehicle Applications, Federal, Deep Learning and AI, GTC 2017 - ID S7661
Media and Entertainment
Dynamic Facial Analysis: From Bayesian Filtering to Recurrent Neural Networks
Jinwei Gu (NVIDIA)
We propose to use recurrent neural networks for analyzing facial properties from videos. Facial analysis from consecutive video frames, including head pose estimation and facial landmark localization, is key for many applications such as in-car driver monitoring, facial animation capture, and human-computer interaction. Compared with the traditional Bayesian filtering methods for facial tracking, we show RNNs are a more generic, end-to-end approach for joint estimation and tracking. With the proposed RNN method, we achieved state-of-the-art performance for head pose estimation and facial landmark localization on benchmark datasets.
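The contrast with fixed-gain Bayesian filtering can be illustrated with a scalar toy recurrent cell: where a filter's gain comes from a noise model, an RNN's update weights are learned from data. The weights below are made up purely for illustration:

```python
import math

def rnn_step(x, h, w_xh=0.9, w_hh=0.3, b=0.0):
    """One vanilla RNN step: h' = tanh(w_xh * x + w_hh * h + b).
    Scalar toy; real models apply learned weight matrices to
    per-frame CNN features."""
    return math.tanh(w_xh * x + w_hh * h + b)

def track_sequence(observations):
    """Fold noisy per-frame estimates through the cell, carrying
    temporal context in the hidden state h (joint estimation and
    tracking in one pass)."""
    h = 0.0
    states = []
    for x in observations:
        h = rnn_step(x, h)
        states.append(h)
    return states
```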
 
Keywords:
Media and Entertainment, AI for In-Vehicle Applications, Deep Learning and AI, Computer Vision and Machine Vision, GTC 2017 - ID S7176
 
Multilayer and Multimodal Fusion of Deep Neural Networks for Video Classification
Xiaodong Yang (NVIDIA)
We'll present a novel framework to combine multiple layers and modalities of deep neural networks for video classification, which is fundamental to intelligent video analytics, including automatic categorizing, searching, indexing, segmentation, and retrieval of videos. We'll first propose a multilayer strategy to simultaneously capture a variety of levels of abstraction and invariance in a network, where the convolutional and fully connected layers are effectively represented by the proposed feature aggregation methods. We'll further introduce a multimodal scheme that includes four highly complementary modalities to extract diverse static and dynamic cues at multiple temporal scales. In particular, for modeling the long-term temporal information, we propose a new structure, FC-RNN, to effectively transform the pre-trained fully connected layers into recurrent layers. A robust boosting model is then introduced to optimize the fusion of multiple layers and modalities in a unified way. In the extensive experiments, we achieve state-of-the-art results on benchmark datasets.
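The final fusion stage can be sketched as a weighted late fusion of per-class scores from each modality. This is an illustrative stand-in: the talk's boosting model learns the weights rather than fixing them as here:

```python
def fuse_scores(modality_scores, weights):
    """Weighted late fusion: average per-class scores across
    modalities, normalizing the modality weights to sum to one."""
    n_classes = len(next(iter(modality_scores.values())))
    total = sum(weights[m] for m in modality_scores)
    fused = [0.0] * n_classes
    for name, scores in modality_scores.items():
        w = weights[name] / total
        for i, s in enumerate(scores):
            fused[i] += w * s
    return fused
```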
 
Keywords:
Media and Entertainment, AI for In-Vehicle Applications, Intelligent Video Analytics, Deep Learning and AI, GTC 2017 - ID S7497
Self-Driving Cars
GPU Scheduling and Synchronization for ADAS
Venugopala Madumbu (NVIDIA)

Learn how the GPU schedules different workloads and how it addresses the challenges of developing ADAS systems. In these systems, some functions are expected to execute deterministically, and even to be prioritized and synchronized with other functions running on the GPU. We'll discuss the preemption feature in different GPU architectures, and introduce two approaches for achieving deterministic, prioritized execution of different GPU workloads.

 
Keywords:
Self-Driving Cars, GTC 2017 - ID S7105
 
Using DRIVE PX 2 as the Brain of Self-Driving Vehicles
Shri Sundaram (NVIDIA)

We'll cover how to install NVIDIA DRIVE PX 2 to power a self-driving car, including insights into data acquisition, data annotation, neural network training, and in-vehicle inference. We'll focus on the type of sensors required to perceive the driving environment, as well as how to log and annotate data, train a neural network with that data, and then use the neural network to inference on DRIVE PX 2 to create an occupancy grid and drive the car.
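The occupancy-grid output can be sketched as marking vehicle-centered grid cells that contain obstacle points from the perception stack. This is a toy binary grid with made-up cell size and extent; production systems accumulate probabilistic evidence over time:

```python
def build_occupancy_grid(points, cell_size=0.5, size=10):
    """Mark cells of a vehicle-centered 2D grid that contain at
    least one detected obstacle point (x, y in meters)."""
    grid = [[0] * size for _ in range(size)]
    half = size * cell_size / 2.0
    for x, y in points:
        col = int((x + half) / cell_size)
        row = int((y + half) / cell_size)
        if 0 <= row < size and 0 <= col < size:  # drop points outside the grid
            grid[row][col] = 1
    return grid
```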

 
Keywords:
Self-Driving Cars, Deep Learning and AI, GTC 2017 - ID S7118
 
Deep Learning Meets Motor Sports at ROBORACE
Bryn Balcombe (Roborace), John Waraniak (Specialty Equipment Market Association, SEMA)

Self-driving technology meets motorsport in the Roborace series. Learn how the tech is making its way onto the track, hear about exciting milestones already achieved, and discover what to expect in the near future. This session will cover relevant AI technologies in the Robocar and highlight how software is defining the future of the auto industry and motor racing.

 
Keywords:
Self-Driving Cars, Deep Learning and AI, GTC 2017 - ID S7157
 
Embedded Bayesian Perception and V2X Communications for Autonomous Driving
Christian Laugier (Inria Grenoble)

We'll present technologies developed by the Inria Chroma team that robustly perceive and interpret dynamic environments using Bayesian systems (such as BOF, HSBOF, and CMCDOT) that rely on embedded sensor input and V2X communications (vehicle to vehicle and vehicle to infrastructure). These technologies were initially developed in collaboration with industrial partners such as Toyota, Renault, and Probayes SA. We'll demonstrate how heterogeneous sensors can be used efficiently, merged, and filtered in real time into probabilistic grids, and discuss how to compute collision risks in an optimized way on embedded GPU platforms like the NVIDIA Jetson.
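The probabilistic-grid idea behind filters like the BOF can be illustrated with the standard log-odds cell update, a generic textbook form rather than Inria's actual CMCDOT implementation:

```python
import math

def log_odds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def update_cell(cell_log_odds, p_occupied_given_measurement):
    """Bayesian occupancy update: each measurement adds its
    log-odds evidence to the cell's accumulated belief."""
    return cell_log_odds + log_odds(p_occupied_given_measurement)

def to_probability(l):
    """Convert accumulated log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```

Repeated consistent measurements drive the cell's probability toward 0 or 1, which is why such filters are robust to single-frame sensor noise.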

 
Keywords:
Self-Driving Cars, Algorithms, Deep Learning and AI, GTC 2017 - ID S7190
 
Similarity Mapping with Enhanced Siamese Network for Multi-Object Tracking
Minyoung Kim (Panasonic Silicon Valley Laboratory)

We'll describe and demonstrate how to use an enhanced Siamese neural network for similarity mapping and multiple object tracking. By fusing both appearance and geometric information into a single enhanced Siamese neural network, which is trainable end-to-end on a single GPU machine, the object tracking system achieves competitive performance in both speed and accuracy on several benchmarks. 
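At inference time, embeddings from such a network are typically compared with a similarity measure and matched greedily across frames. A minimal sketch with hypothetical 2D embeddings (the actual system fuses appearance and geometry inside the network and uses richer association logic):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_tracks(track_embs, det_embs, threshold=0.5):
    """Greedy appearance matching: assign each detection to the most
    similar unused track whose similarity exceeds the threshold."""
    assignments = {}
    used = set()
    for d_id, d in det_embs.items():
        best, best_sim = None, threshold
        for t_id, t in track_embs.items():
            if t_id in used:
                continue
            sim = cosine_similarity(t, d)
            if sim > best_sim:
                best, best_sim = t_id, sim
        if best is not None:
            assignments[d_id] = best
            used.add(best)
    return assignments
```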

 
Keywords:
Self-Driving Cars, Deep Learning and AI, Computer Vision and Machine Vision, GTC 2017 - ID S7274
 
Reconstructing Traffic Intersections with Deep Semi-Supervised Learning
Menna El-Shaer (The Ohio State University)

Detecting objects at traffic intersections, whether they are pedestrians, bicyclists, or other vehicles, is essential to ensure efficient traffic flow and safety. We'll present methods to reconstruct a traffic scene from the vehicle's point of view, using multiple cameras placed on the vehicle, and share a mixture of deep semi-supervised learning models to infer objects from the scene. We'll also demonstrate how we optimized our models to run on the Tegra SoC used in NVIDIA's Jetson TX1 and DRIVE PX platforms. Participants are expected to be familiar with basic probability concepts and GPU programming with CUDA.

 
Keywords:
Self-Driving Cars, Deep Learning and AI, GTC 2017 - ID S7304
 
Deep Learning in Argo.ai's Autonomous Vehicles
Bryan Goodman (Ford Motor Company / Argo AI)

We'll provide an overview of the models Argo.ai and Ford are using to fuse sensor information and give examples of performance optimization. Argo.ai and Ford are leveraging deep learning for autonomous vehicle perception across a multitude of sensors. It is important that these models have optimized performance to process high-resolution images, lidar point clouds, and other sensor inputs in a timely fashion. We will discuss how Argo.ai and Ford are exploring a variety of methods to push run-time performance to new limits and maximize the use of available resources, including modifying the underlying models, data structures, and the inference engine itself.

Keywords:
Self-Driving Cars, Deep Learning and AI, Performance Optimization, GTC 2017 - ID S7348
 
Functional Safety: Developing ISO 26262 Compliant GPU Applications
Richard Bramley

Functional safety is an important consideration for many applications of GPU computing, especially autonomous driving, robotics, and healthcare. We'll cover what it means to be compliant with current functional safety standards, learn the basics of functional safety, and uncover how the prevailing standard, ISO 26262, can apply to GPUs and GPU programming. Often the development of an application's core features takes precedence, leaving functional safety considerations until the end of the development cycle. If functional safety is considered and planned from the start, results can improve while cost decreases. We'll explain the support that NVIDIA has implemented inside GPUs for functional safety and the various tools and methodologies that are available to support ISO 26262 compliance for both hardware and software.

Keywords:
Self-Driving Cars, Deep Learning and AI, Computer Vision and Machine Vision, GTC 2017 - ID S7372
 
DeepTraffic: Driving Fast through Dense Traffic with Deep Reinforcement Learning
Lex Fridman (Massachusetts Institute of Technology (MIT))
This talk will introduce DeepTraffic, a deep reinforcement learning competition at MIT that has received over 10,000 submissions and is preparing for its second iteration. It's accessible to both beginners and experts. Whether with JavaScript or TensorFlow, the task is to drive faster than anyone else in the world. We will introduce deep reinforcement learning through the case study of motion planning in dense micro-traffic simulation, and describe the emergent behavior achieved through crowdsourced hyperparameter tuning of policy networks. Go deep, go fast at http://selfdrivingcars.mit.edu.
 
Keywords:
Self-Driving Cars, Deep Learning and AI, GTC 2017 - ID S7381
 
Scene Understanding in Challenging Lighting Conditions for ADAS Systems
Srinivas S S Kruthiventi (Harman International Industries), Pratyush Sahay (Harman International Industries)

We'll highlight deep learning techniques to develop visual scene understanding under challenging illumination conditions. We'll discuss approaches to mitigate severe illumination challenges posed by poor lighting, perform object detection effectively in such scenarios, and use GPU acceleration to achieve reasonable throughput for ADAS systems. We'll also present compelling results our system achieved on a publicly available low-light benchmark dataset.  

Keywords:
Self-Driving Cars, Deep Learning and AI, Computer Vision and Machine Vision, GTC 2017 - ID S7412
 
Real and Virtual Proving of Automated Driving in Berlin's Mixed Traffic
Ilja Radusch (Fraunhofer FOKUS)

Validating automated driving in city traffic requires a new approach. Developers must combine traditional miles driven, collecting large amounts of data from real vehicles, with virtual miles. In this session, we will discuss the comprehensive tool suite developed to generate, collect, segment, and label sensor data for algorithm validation or as ground truth for machine learning.

Keywords:
Self-Driving Cars, Tools and Libraries, GTC 2017 - ID S7422
 
DriveWorks: A Look Inside NVIDIA's Autonomous Driving SDK
Gaurav Agarwal (NVIDIA), Dennis Lui (NVIDIA), Miguel Sainz (NVIDIA)

We'll introduce NVIDIA DriveWorks, a software development kit for autonomous driving that processes sensor data through perception, mapping, localization, and path planning steps. DriveWorks provides a rich set of functionality: a sensor abstraction layer, algorithm modules, DNNs, applications, and UI and tools for sensor setup and management. The SDK is modular, optimized for GPUs, and runs on top of the OS, CUDA/cuDNN, TensorRT, and VPI. It is the foundation for developers working on autonomous vehicle applications, and the session will highlight how to leverage it.

Keywords:
Self-Driving Cars, GTC 2017 - ID S7427
 
DNA for Automated Driving
Jeremy Dahan (Elektrobit)

We'll showcase an architecture that enables discrete driver assistance systems to all work in tandem. This framework is enabling automakers to develop complex systems more quickly and efficiently, reducing time to market for ADAS functionality. As part of our discussion we'll share a reference implementation that demonstrates a valet parking function, built using the architecture with maps accessed from the cloud.

Keywords:
Self-Driving Cars, AI for In-Vehicle Applications, GTC 2017 - ID S7493
 
SVNet: CNN-Based Object Detection for ADAS
Junhwan Kim (StradVision, Inc.)

We'll discuss how StradVision developed SVNet, a CNN-based object detection system for ADAS. Using GPUs, SVNet is effective at handling bad weather and lighting conditions, small object sizes, and occlusion. We'll describe automotive customers' requirements, address technical challenges, and discuss the significant algorithmic performance gains achieved by using GPUs.

Keywords:
Self-Driving Cars, Deep Learning and AI, Computer Vision and Machine Vision, GTC 2017 - ID S7547
 
The Digital Driving License: Testing Autonomous Vehicles with 3D Simulation
Jorrit Kuipers (robotTUNER)

We'll present the "digital driving license" project, which aims to establish standardization in the testing and assessment of autonomous vehicles. This is necessary to accelerate the use of self-driving cars in public spaces. Our 3D simulation methodology has been successfully used for training and assessment of human drivers since 2003. Data from more than 100,000 human drivers gives insight into driving skills and styles, and is useful as a reference for performance measurement of robot vehicles. We will share information about the library of use cases we are building to assess the software of autonomous vehicles without going on the road, and the positive correlation we have seen between simulated driving behaviors and real-world driving scenarios.

Keywords:
Self-Driving Cars, AI for In-Vehicle Applications, GTC 2017 - ID S7559
 
Designing Autonomous Vehicle Applications with Real-Time Multisensor Frameworks
Nicolas Dulac (Intempora)
As embedded software in intelligent vehicles becomes more complex, researchers and engineers need more efficient tools and integration frameworks that simultaneously align ease of use, dynamism, execution performance, and portability. We'll introduce Intempora's RTMaps (Real-Time Multisensor Applications) framework, a component-based design and execution middleware for software development, integration, and testing. This framework reduces software development cycle times and provides easy access to DRIVE PX 2 capabilities. RTMaps supports most automotive sensors on the market for real-time execution, and also provides recording and synchronized playback capabilities for offline development, testing, and validation. RTMaps is now available on DRIVE PX 2. It offers a drag-and-drop approach for GPU-based computer vision and AI systems, including an integration of the NVIDIA DriveWorks software modules as independent building blocks.
 
Keywords:
Self-Driving Cars, AI for In-Vehicle Applications, Computer Vision and Machine Vision, GTC 2017 - ID S7687
 
Building an L4 Autonomous Driving R&D Platform
Wolfgang Juchmann (AutonomouStuff)

We'll give a step-by-step description of how to use NVIDIA DRIVE PX 2 and the NVIDIA DriveWorks SDK to enable Level 4 autonomous research vehicles. We'll consider choice of sensors (camera, lidar, radar) and mounting locations for highway and urban autonomous driving. We'll also discuss optimal use of DriveWorks for sensor data gathering and processing using NVIDIA's AI solutions. The presentation will include video demonstrations of real-life examples showcasing the utilization of DRIVE PX 2 and DriveWorks as an end-to-end deep learning platform for automated driving.  

Keywords:
Self-Driving Cars, GTC 2017 - ID S7704
 
Powering Autonomy: Power Management Solutions for the Brains and Sensors in Autonomous Vehicles (Presented by Linear Technology)
Dave Dwelley (Linear Technology)

Autonomous systems are built around a network of sensors and NVIDIA GPUs. You'll learn about: (1) power solutions for core, I/O, and other rails for the GPU, memory, PLL, clock, etc.; (2) Silent Switcher technology for low-noise DC/DC regulation; (3) techniques for monitoring power consumption, optimization, and fault management; and (4) Power over Data Line (PoDL), a next-generation solution for powering cameras and sensors via a single twisted-pair Automotive Ethernet link.

Keywords:
Self-Driving Cars, Data Center and Cloud Computing, GTC 2017 - ID S7712
 
Edge-AI for Intelligent User Experience
Kal Mos (Mercedes-Benz Research and Development North America)

We'll showcase how Mercedes-Benz is enabling edge AI in the car by utilizing powerful embedded hardware for sensor processing and fusion in the cabin interior. The focus of AI work today has been dominated by the cloud environment. The availability of computation power, combined with technologies for scaling with massive datasets, makes the cloud a perfect ecosystem for the application of AI technologies. However, there are a myriad of AI applications today that can't fully live on the cloud, such as an AI application in a moving vehicle where connectivity to the cloud is not guaranteed. In such cases, AI in the edge computing space faces a number of challenges not always present in today's cloud environment. Chief among them is a sense of autonomy: when the edge AI encounters problems that require prompt decision making, the problems have to be resolved by its own intelligence. We'll talk about how Mercedes-Benz is enabling edge AI to address this issue.

Keywords:
Self-Driving Cars, GTC 2017 - ID S7802
 
Simulating Traffic for Collision Avoidance Testing Using Deep Reinforcement Learning
Celite Milbrandt (monoDrive)

Safety is the most important aspect of autonomous vehicle feature development, testing, and deployment. Predicting, generating, and obtaining real-world ground-truth accident scenarios for research and development is both dangerous and expensive. Simulation has become a popular method for test case generation, although current solutions do not always model vehicle movement realistically, and they model real-world dynamic traffic scenarios poorly. A generalized algorithm for simulated vehicle control is needed. We will demonstrate generalized parameterization, training, and resulting vehicle control patterns obtained from using various machine learning and AI methods. The resulting vehicle behavior is realistic and improves simulation efforts.

Keywords:
Self-Driving Cars, AI Startup, AI for In-Vehicle Applications, GTC 2017 - ID S7816
 
Crowdsourcing 3D Semantic Maps for Vehicle Cognition
Andy Chen (Civil Maps), Fabien Chraim (Civil Maps), Scott Harvey (Civil Maps)

Extracting context from the vehicle's environment remains one of the major challenges to autonomy. While this can be achieved in highly controlled scenarios today, scalable solutions are not yet deployed. In this talk we explore the crucial role of 3D semantic maps in providing cognition to autonomous vehicles. We will look at how Civil Maps uses swarm methods to rapidly crowdsource these maps, and how they are utilized by automotive systems in real time.    

Keywords:
Self-Driving Cars, HD Mapping, Computer Vision and Machine Vision, GTC 2017 - ID S7823
 
Tronis: The Virtual Environment Towards Prototyping and Testing Autonomous Driving
Michael Keckeisen (TWT Science and Innovation GmbH), Karl Kufieta (TWT Science and Innovation GmbH)

We'll introduce TRONIS, a high-resolution virtual environment for prototyping and safeguarding highly automated and autonomous driving functions. We'll showcase how the vehicle, the environment, and the driver behavior can be accurately modeled so as to reflect, develop, and safeguard complex driving situations. TRONIS allows deploying artificial intelligence, enabling the vehicle to learn how to safely operate in its virtual environment with realistic scenarios. This behavior can afterwards be transferred to real vehicles. TRONIS closes the gap between real-drive and virtual-drive testing. TRONIS exploits a state-of-the-art game engine, Unreal Engine, for the photo-realistic representation of the vehicle environment, the vehicle itself, and the vehicle driver. TRONIS is empowered by NVIDIA.

Keywords:
Self-Driving Cars, Deep Learning and AI, Computer Vision and Machine Vision, GTC 2017 - ID S7830
 
Software Development for Active Safety and Autonomous Driving Technology
Erik Coelingh (Zenuity)

It is obvious that we are rapidly moving into a future where intelligent vehicles assist you in avoiding collisions or take over part of the driving task, giving you the impression that the car can almost drive itself. Advanced computer vision, in combination with radar technology and capable computational platforms, has created a revolution in this field, and accident statistics show a significant positive impact on traffic safety: it is saving lives every single day. Demonstrations of prototypes are already ubiquitous and almost all car manufacturers are talking about it, but why can you not yet buy a fully self-driving vehicle at your local car dealer? We'll describe how Zenuity addresses the key challenges when developing software for self-driving vehicles. Zenuity originates from the safety leaders of the automotive industry and develops a complete software stack from sensors to actuators.

Keywords:
Self-Driving Cars, GTC 2017 - ID S7833
 
How to Become a Self-Driving Car Engineer
David Silver (Udacity)

Learn how Udacity trains engineers to work on autonomous vehicles! Topics include deep learning, computer vision, sensor fusion, localization, control, path planning, and system integration. You'll cover the technical challenges and trends of self-driving cars and the autonomous vehicle industry. Review examples of the projects that Udacity students build to learn and showcase their autonomous vehicle skills.

Keywords:
Self-Driving Cars, Deep Learning and AI, GTC 2017 - ID S7836
 
Using Machine Learning for Active Safety on DRIVE PX 2
Jost Bernasch (Virtual Vehicle Research Center)

This session will highlight work on an open vehicle platform for controlling autonomous vehicles, and explore using machine learning on traffic data for the functional design of an active safety system. In addition, a fully digital development and test chain will be presented that allows the seamless use of real-world data from public and non-public test tracks to create new automated driving functions and optimize existing functions.

Keywords:
Self-Driving Cars, AI for In-Vehicle Applications, GTC 2017 - ID S7838
 
 
NVIDIA - World Leader in Visual Computing Technologies
Copyright © 2017 NVIDIA Corporation Legal Info | Privacy Policy