GTC On-Demand

Abstract:
Learn how recent achievements in machine learning, sensor fusion, and GPU computing make it possible to create a next-generation advanced driver-assistance systems (ADAS) experience. We'll showcase a software solution that creates real-time augmented reality for drivers by applying a set of advanced algorithms to vehicle sensors, map data, telematics, and navigation guidance. Our approach augments drivers' visual reality with supplementary objects in real time and works with output devices such as head unit displays, digital clusters, and head-up displays. We'll also examine the challenges of running advanced neural network models in real time on embedded hardware and explain solutions to overcome them.
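
The last point, fitting neural-network inference into a per-frame budget on embedded hardware, is commonly addressed by overlapping pipeline stages so the accelerator never sits idle. Below is a minimal sketch of that idea in Python, assuming a three-stage capture/inference/render loop; capture_frame, run_model, and render_overlay are hypothetical stubs, not functions from the talk.

import queue
import threading
import time

def capture_frame():
    # Hypothetical stand-in for a camera grab.
    time.sleep(0.03)
    return "frame"

def run_model(frame):
    # Hypothetical stand-in for an embedded inference call,
    # e.g. a quantized network on an automotive SoC.
    time.sleep(0.02)
    return ["detected object"]

def render_overlay(frame, detections):
    # Hypothetical stand-in for compositing AR objects onto the display.
    print(frame, detections)

frames = queue.Queue(maxsize=2)   # small queues keep end-to-end latency bounded
results = queue.Queue(maxsize=2)

def capture_stage():
    while True:
        frames.put(capture_frame())

def inference_stage():
    while True:
        frame = frames.get()
        results.put((frame, run_model(frame)))

def render_stage():
    while True:
        frame, detections = results.get()
        render_overlay(frame, detections)

for stage in (capture_stage, inference_stage, render_stage):
    threading.Thread(target=stage, daemon=True).start()
time.sleep(1)  # let the pipeline run briefly, then exit

With the stages decoupled, a new frame can be captured while the previous one is still in inference, which is what recovers real-time throughput on constrained hardware.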
Topics:
AI Application Deployment and Inference, Virtual Reality and Augmented Reality, Autonomous Vehicles
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9169
 
Abstract:

Learn how combining machine learning and computer vision with GPU computing helps create a next-generation informational ADAS experience. This talk presents a real-time software solution that encompasses a set of advanced algorithms to create augmented reality for the driver, utilizing vehicle sensors, map data, telematics, and navigation guidance. The broad range of features includes augmented navigation; visualization for advanced parking assistance, adaptive cruise control, and lane keeping; driver infographics; driver health monitoring; and support in low-visibility conditions. Our approach augments drivers' visual reality with supplementary objects in real time and works with various output devices such as head unit displays, digital clusters, and head-up displays.
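
Augmented navigation of the kind described above ultimately reduces to projecting guidance points from the vehicle's 3D frame into display coordinates. The sketch below shows that core step with a standard pinhole camera model; the intrinsics, extrinsics, and waypoint are invented values for illustration, not parameters from the talk.

import numpy as np

# Hypothetical camera intrinsics: focal lengths and principal point, in pixels.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Hypothetical extrinsics: in this toy example the camera sits at the vehicle
# origin looking straight ahead, so rotation is identity and translation zero.
R = np.eye(3)
t = np.zeros(3)

def project(point_vehicle):
    # Map a 3D point in vehicle coordinates to (u, v) pixel coordinates,
    # using the camera convention x-right, y-down, z-forward.
    p_cam = R @ point_vehicle + t   # vehicle frame -> camera frame
    u, v, w = K @ p_cam             # perspective projection
    return u / w, v / w             # normalize by depth

# Next-maneuver point: 2 m right, 1.5 m below the camera, 20 m ahead.
print(project(np.array([2.0, 1.5, 20.0])))  # pixel where the arrow is drawn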

 
Topics:
AI for In-Vehicle Applications, Self-Driving Cars, Computer Vision and Machine Vision
Type:
Talk
Event:
GTC Europe
Year:
2017
Session ID:
23270
 
Abstract:

We'll address how next-generation informational ADAS experiences are created by combining machine learning, computer vision, and real-time signal processing with GPU computing. Computer Vision and Augmented Reality (CVNAR) is a real-time software solution encompassing a set of advanced algorithms that create mixed augmented reality for the driver by utilizing vehicle sensors, map data, telematics, and navigation guidance. The broad range of features includes augmented navigation, visualization, driver infographics, driver health monitoring, lane keeping, advanced parking assistance, adaptive cruise control, and autonomous driving. Our approach augments drivers' visual reality with supplementary objects in real time and works with various output devices such as head unit displays, digital clusters, and head-up displays.
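
Keeping such overlays locked to the road requires fusing the listed sensor inputs into a stable vehicle pose. As a minimal sketch of one standard technique (not necessarily CVNAR's actual algorithm), the complementary filter below blends a fast-but-drifting gyro heading with a slow-but-absolute GPS heading; all gains and readings are made up for illustration.

def fuse_heading(heading, gyro_rate, gps_heading, dt, alpha=0.98):
    # Integrate the gyro for responsiveness, then pull gently toward the
    # absolute GPS heading to cancel drift. Angle wrap-around is ignored
    # here for brevity.
    predicted = heading + gyro_rate * dt
    return alpha * predicted + (1.0 - alpha) * gps_heading

# Made-up readings: gyro rate in rad/s, GPS heading in rad, 10 Hz updates.
heading = 0.0
for gyro_rate, gps_heading in [(0.10, 0.00), (0.10, 0.02), (0.10, 0.03)]:
    heading = fuse_heading(heading, gyro_rate, gps_heading, dt=0.1)
print(heading)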

 
Topics:
Computer Vision and Machine Vision, AI for In-Vehicle Applications, Self-Driving Cars
Type:
Talk
Event:
GTC Silicon Valley
Year:
2017
Session ID:
S7312