Like people, Autonomous Vehicles (Levels 2 through 5) need to "see" and "feel" the road to perform at their best: enjoyably, efficiently, and safely. To date, the "feel" aspect has been under-served. In this session, we will present novel methods for applying Artificial Intelligence to vehicle sensors to provide vehicles with advanced tactile sensing capabilities, augmenting the widely used visual sensors. We will discuss ways to leverage Big Data analysis in the cloud, where data originating from vehicles equipped with Tactile Sensing Fusion and AI capabilities can be analyzed. These methods can also be used to create Tactile Maps of roads through crowd mapping. Additionally, we will describe cases in which the same data can be analyzed in the cloud to build continuously updated vehicle mechanical and health profiles. These profiles enable advanced predictive maintenance and other use cases relevant to fleet managers and vehicle manufacturers.
OpenSeq2Seq is an open-source, TensorFlow-based toolkit that supports a wide range of off-the-shelf models for machine translation (GNMT, Transformer, ConvS2S), speech recognition (Wave2Letter, DeepSpeech2), speech synthesis (Tacotron 2), language modeling, and transfer learning for NLP tasks. OpenSeq2Seq is optimized for the latest GPUs and supports multi-GPU and mixed-precision training. Benchmarks on machine translation and speech recognition tasks show that models built with OpenSeq2Seq achieve state-of-the-art performance with 1.5-3x faster training.
Data is the lifeblood of an enterprise, and it is being generated everywhere. To overcome the challenges of data gravity, data analytics, including machine learning, is best done where the data resides. Come to this session to learn how to overcome the challenges of doing machine learning everywhere.