We'll explain how GPUs can accelerate the development of HD maps for autonomous vehicles. Traditional mapping techniques take weeks to produce highly detailed maps because the massive volumes of data collected by multi-sensor survey vehicles must be manually processed, compiled, and registered offline. We'll describe how Japan's leading mapping company uses a cloud-to-car, AI-powered HD mapping system to automate and accelerate the HD mapping process, including actual examples of GPU data processing on real-world data collected from roads in Japan.
Self-driving vehicles require a high-definition, real-time "self-healing" map to help them operate safely and comfortably. HERE is closing the data loop from vehicle to cloud to vehicle by using real-time sensor data that provides our backend with the information to effectively "self-heal" our map. A vehicle needs real-time, accurate, and semantically rich data to pinpoint its lane-level position and to make proactive maneuvers in response to changes or incidents that affect driving conditions. In this session, we will discuss how HERE is addressing this critical need with its HD Live Map: providing precise positioning on the road, enabling accurate planning of vehicle control maneuvers beyond sensor visibility, and increasing consumer trust through a more comfortable experience.
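The vehicle-to-cloud-to-vehicle loop described above can be pictured as a minimal sketch, assuming (purely for illustration; the class name, tile keys, and confirmation threshold are not from HERE) that the backend only "heals" a published map tile once enough independent vehicles confirm the same discrepancy:

```python
from collections import defaultdict

# Assumed threshold for illustration: how many distinct vehicles must
# report the same change before the backend updates the published map.
CONFIRMATIONS_NEEDED = 3

class SelfHealingMap:
    """Hypothetical sketch of a cloud backend closing the data loop."""

    def __init__(self):
        self.tiles = {}                  # tile_id -> published attribute
        self.pending = defaultdict(set)  # (tile_id, value) -> reporting vehicles

    def publish(self, tile_id, value):
        self.tiles[tile_id] = value

    def report(self, vehicle_id, tile_id, observed):
        """A vehicle reports that its sensors disagree with the map.

        Returns True if this report triggered a map update.
        """
        if self.tiles.get(tile_id) == observed:
            return False                 # map already matches reality
        self.pending[(tile_id, observed)].add(vehicle_id)
        if len(self.pending[(tile_id, observed)]) >= CONFIRMATIONS_NEEDED:
            self.publish(tile_id, observed)     # "heal" the published tile
            del self.pending[(tile_id, observed)]
            return True
        return False

hd_map = SelfHealingMap()
hd_map.publish("tile7", "speed_limit=60")
hd_map.report("v1", "tile7", "speed_limit=40")   # first report: pending
hd_map.report("v2", "tile7", "speed_limit=40")   # second report: pending
hd_map.report("v3", "tile7", "speed_limit=40")   # third report: map heals
```

Requiring agreement across independent vehicles is one simple way such a backend could reject single-sensor noise while still reacting to genuine road changes within minutes rather than weeks.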
It's simple to take the output of one sensor type from multiple cars and produce a map from that data. However, a map created in this way will not have sufficient coverage, attribution, or quality for autonomous driving. Our multi-source, multi-sensor approach leads to HD maps that have greater coverage, are more richly attributed, and have higher quality than single-source, single-sensor maps. In this session, we will discuss how we have created the world's largest HD map, are able to continuously update it, and are making autonomous driving safer and more comfortable.
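One standard way to combine estimates from several sources is inverse-variance weighting, where more reliable sensors count for more and the fused estimate is tighter than any single input. The function and the sensor values below are illustrative assumptions, not the abstract's actual pipeline:

```python
def fuse(estimates):
    """Inverse-variance fusion of (value, variance) pairs.

    Each source contributes in proportion to 1/variance, so precise
    sources dominate and the fused variance shrinks below any input's.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# Hypothetical lane-boundary offsets (metres) from three sources:
lidar  = (3.52, 0.01)
camera = (3.60, 0.04)
survey = (3.50, 0.02)

fused_value, fused_var = fuse([lidar, camera, survey])
```

Here the fused offset lands between the individual readings, and the fused variance (1/175 ≈ 0.006) is smaller than the best single source's, which is one concrete sense in which a multi-source map can be "higher quality" than any single-sensor map.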
Extracting context from the vehicle's environment remains one of the major challenges to autonomy. While this can be achieved in highly controlled scenarios today, scalable solutions are not yet deployed. In this talk we explore the crucial role of 3D semantic maps in providing cognition to autonomous vehicles. We will look at how Civil Maps uses swarm methods to rapidly crowdsource these maps, and how they are utilized by automotive systems in real time.
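One way to picture crowdsourcing a 3D semantic map from a swarm of vehicles is majority voting per voxel: each vehicle contributes its own (possibly noisy) semantic label for a 3D cell, and the map keeps whichever label most contributors agree on. This is a generic sketch under that assumption, not Civil Maps' actual method, and the class and labels are invented for illustration:

```python
from collections import Counter, defaultdict

class SemanticVoxelMap:
    """Hypothetical crowdsourced semantic map: majority label per voxel."""

    def __init__(self):
        self.votes = defaultdict(Counter)  # voxel (x, y, z) -> label counts

    def contribute(self, voxel, label):
        """Record one vehicle's semantic label for a voxel."""
        self.votes[voxel][label] += 1

    def label(self, voxel):
        """Return the majority label, or None if the voxel is unmapped."""
        counts = self.votes.get(voxel)
        return counts.most_common(1)[0][0] if counts else None

swarm_map = SemanticVoxelMap()
for observed in ["lane_marking", "lane_marking", "shadow", "lane_marking"]:
    swarm_map.contribute((10, 4, 0), observed)
# Majority vote across the swarm suppresses the single mislabel.
consensus = swarm_map.label((10, 4, 0))
```

The appeal of this kind of aggregation is that no single vehicle needs perfect perception: individual mislabels (the "shadow" vote above) wash out as more swarm members traverse the same voxel.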