Learn how to make spaces aware of the state of the people and objects within them. Explore new techniques for building real-time systems that understand scenes using hemispherical point clouds and AI at the edge. The goal of this session is to learn new ways of developing the scene understanding needed for action and interaction in public spaces and smart homes. Capturing, recognizing, and understanding all external and internal degrees of freedom of persons and objects, together with their respective states, yields complete information about the observed space.
While hemispherical vision offers wide-area coverage from a single point of observation, it also introduces new challenges due to its distinct projection geometry. Taking 3-dimensional people detection and posture recognition as examples, we explain different approaches that use deep neural networks to extract information from hemispherical RGB-D data. The talk focuses on providing an overview of methods that attendees can apply to custom projects and run on Jetson in real time.
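To illustrate the distinct projection geometry mentioned above, the following is a minimal sketch, not taken from the talk itself, of how a hemispherical depth image might be back-projected into a 3-D point cloud. It assumes an equidistant fisheye model (r = f·θ) and a depth channel holding radial distances; the function name and parameters are illustrative.

```python
import numpy as np

def fisheye_to_pointcloud(depth, f, cx, cy, fov_deg=180.0):
    """Back-project a hemispherical (equidistant fisheye) depth image
    into a 3-D point cloud.

    Assumed model (illustrative): r = f * theta, where r is the pixel
    distance from the principal point (cx, cy) and theta is the angle
    from the optical axis; `depth` holds the radial distance to each
    surface point.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    theta = r / f                 # angle from the optical axis
    phi = np.arctan2(dy, dx)      # azimuth around the optical axis

    # Keep pixels inside the hemispherical field of view with valid depth
    valid = (theta <= np.deg2rad(fov_deg) / 2) & (depth > 0)

    # Unit ray per pixel, scaled by the measured radial distance
    sin_t = np.sin(theta)
    rays = np.stack([sin_t * np.cos(phi),
                     sin_t * np.sin(phi),
                     np.cos(theta)], axis=-1)
    return (rays * depth[..., None])[valid]
```

A point cloud produced this way can then be fed to the detection and posture-recognition networks discussed in the session; the key point is that the standard pinhole back-projection does not apply to hemispherical sensors.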