Data is the lifeblood of an enterprise, and it's being generated everywhere. Because of data gravity, data analytics, including machine learning, is best done where the data is located. Come to this session to understand how to meet the challenges of doing machine learning everywhere.
In this session, you will learn what factors to consider when making infrastructure choices and what the benefits of on-premises infrastructure are for machine learning and deep learning. When considering artificial intelligence and machine learning, model development and algorithm choices are key. But anyone who has attempted to train a model on a full dataset will tell you that infrastructure matters, and it matters a lot. You can save hours, days, and sometimes weeks by running your training algorithm on the right GPU-accelerated infrastructure. On the other hand, when hunting for the perfect model for your business problems, you need to be able to stand up infrastructure quickly, connect to the existing data lake, and iterate through many, many algorithm variations.
Machine learning and deep learning applications are revolutionizing how we as consumers interact with our compute devices by imbuing them with speech recognition, machine vision, and other perceptual capabilities. We are now seeing new advancements in AI that move from simple pattern recognition and perceptual processing to much deeper semantic processing. These new advancements in essence bridge the gap between machine learning techniques, including deep learning, and symbolic artificial intelligence. We'll cover the new capabilities and use cases Cisco is targeting with this new breakthrough technology. In addition, we'll discuss the core enabling technical building blocks and projects, such as InfoGANs and Ben Goertzel's OpenCog project.
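As a concrete illustration of one of those building blocks, here is a minimal sketch of the generator-side InfoGAN objective in PyTorch: alongside the usual adversarial term, an auxiliary head Q is trained to recover the latent code c from the generated sample, which maximizes a lower bound on their mutual information. All network shapes, names, and the loss weight are illustrative assumptions, not details from the session.

```python
import torch
import torch.nn as nn

# Hypothetical minimal networks; dimensions chosen purely for illustration.
latent_dim, code_dim, data_dim = 16, 4, 32

G = nn.Sequential(nn.Linear(latent_dim + code_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D_trunk = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU())
D_head = nn.Linear(64, 1)          # real/fake logit
Q_head = nn.Linear(64, code_dim)   # tries to reconstruct the latent code c

z = torch.randn(8, latent_dim)     # noise
c = torch.randn(8, code_dim)       # structured latent code
x_fake = G(torch.cat([z, c], dim=1))

h = D_trunk(x_fake)
# Adversarial term: the generator wants the discriminator to say "real".
adv_loss = nn.functional.binary_cross_entropy_with_logits(
    D_head(h), torch.ones(8, 1))
# Mutual-information term: Q should recover c from the generated sample.
mi_loss = nn.functional.mse_loss(Q_head(h), c)

g_loss = adv_loss + 1.0 * mi_loss  # lambda = 1.0, an assumed weight
g_loss.backward()
# A full InfoGAN alternates this with a discriminator update; only the
# generator step is shown here.
```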
We present our experience of running computationally intensive camera-based perception algorithms on NVIDIA GPUs. Geometric (depth) and semantic (classification) information is fused in the form of semantic stixels, which provide a rich and compact representation of the traffic scene. We present strategies to reduce the computational complexity of the algorithms. Using synthetic data generated by the SYNTHIA tool, including slanted roads from a simulation of the city of San Francisco, we evaluate latencies and frame rates on an NVIDIA DRIVE PX 2-based platform.
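To make the stixel idea concrete, here is a toy NumPy sketch of one way such a fusion could look (an assumption for illustration, not the session's actual algorithm): each image column is cut into vertical segments wherever the depth or the semantic label changes abruptly, and each segment keeps only its extent, a mean depth, and a majority class.

```python
import numpy as np

def semantic_stixels(depth, labels, depth_jump=1.0):
    """Toy per-column stixel extraction.
    depth:  (H, W) float depth map in meters
    labels: (H, W) int per-pixel semantic class
    Returns, per image column, a list of (top_row, bottom_row, mean_depth, class)."""
    H, W = depth.shape
    stixels = []
    for u in range(W):
        col, top = [], 0
        for v in range(1, H):
            # Start a new stixel on a depth discontinuity or a class change.
            if abs(depth[v, u] - depth[v - 1, u]) > depth_jump or labels[v, u] != labels[v - 1, u]:
                cls = int(np.bincount(labels[top:v, u]).argmax())
                col.append((top, v - 1, float(depth[top:v, u].mean()), cls))
                top = v
        cls = int(np.bincount(labels[top:H, u]).argmax())
        col.append((top, H - 1, float(depth[top:H, u].mean()), cls))
        stixels.append(col)
    return stixels

# Tiny synthetic frame: a far "road" half above a near "obstacle" half.
depth = np.vstack([np.full((4, 6), 20.0), np.full((4, 6), 5.0)])
labels = np.vstack([np.zeros((4, 6), int), np.ones((4, 6), int)])
print(semantic_stixels(depth, labels)[0])  # two stixels in column 0
```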
We present an approach that combines real-time path tracing with traditional deferred techniques. This method lets us keep most elements of a traditional rendering pipeline (such as direct lighting and post effects) while keeping BVH ray-traversal usage at a minimum. In combination with adaptive filtering, GPU data streaming, and mesh preprocessing, the technique achieves real-time frame rates, up to virtual reality use, on a single GPU. The robust implementation is used for architectural visualization but can also be applied to games and other areas with a wide range of direct and indirect lighting phenomena. Finally, we compare our results with our offline path tracer implementation.
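The following NumPy sketch illustrates only the compositing concept with synthetic buffers (an assumed stand-in; the session's renderer runs on the GPU): direct lighting comes from a cheap, noise-free deferred pass over G-buffer data, a low-sample path-traced indirect term is added on top, and a variance-guided blur stands in for the adaptive filter that suppresses the indirect term's noise.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64

# Stand-ins for G-buffer contents a deferred pipeline already has.
albedo = rng.uniform(0.2, 0.9, (H, W, 3))
normal = np.zeros((H, W, 3)); normal[..., 2] = 1.0        # facing the camera
light_dir = np.array([0.3, 0.5, 0.8]); light_dir /= np.linalg.norm(light_dir)

# Deferred direct lighting: analytic, noise-free.
direct = albedo * np.clip(normal @ light_dir, 0.0, 1.0)[..., None]

# Path-traced indirect term: few samples per pixel, hence noisy.
spp = 4
samples = rng.uniform(0.0, 0.3, (spp, H, W, 3))
indirect = samples.mean(axis=0)
variance = samples.var(axis=0).mean(axis=-1)

def box_blur(img, r):
    """Separable box blur as a crude denoiser stand-in."""
    out, k = img.copy(), np.ones(2 * r + 1) / (2 * r + 1)
    for axis in (0, 1):
        out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), axis, out)
    return out

# "Adaptive" filtering: blend toward the blurred image where variance is high.
w = np.clip(variance / (variance.max() + 1e-8), 0.0, 1.0)[..., None]
filtered_indirect = (1.0 - w) * indirect + w * box_blur(indirect, r=2)

final = direct + filtered_indirect
print(final.shape, float(final.mean()))
```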
Adoption of machine learning (ML) and deep learning has grown at an unprecedented rate in the last few years. With many applications requiring edge compute, as well as strong demand for hybrid and multi-cloud solutions without lock-in, customers want more flexibility in how models are trained and served. This situation warrants a hybrid cloud approach, enabling ML wherever the data lives with the flexibility to access the cloud when local compute resources are lacking. Google Cloud has collaborated with partners, including NVIDIA and Cisco, to enable a standard open-source AI platform, Kubeflow, built on Kubernetes to provide a consistent machine learning experience both on-premises and in the cloud. This platform supports deep integration with the NVIDIA stack, including TensorRT and RAPIDS.
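For a sense of what "consistent on-premises and in the cloud" means in practice, here is a minimal Kubeflow Pipelines sketch using the kfp v1 SDK; the container image, script name, and parameters are placeholder assumptions, not a reference architecture from the session. The same compiled pipeline can be submitted to any Kubeflow deployment, wherever its GPU nodes happen to live.

```python
import kfp
from kfp import dsl

@dsl.pipeline(name="train-on-gpu", description="Train wherever a GPU node exists")
def train_pipeline(epochs: int = 10):
    train = dsl.ContainerOp(
        name="train",
        image="nvcr.io/nvidia/tensorflow:23.03-tf2-py3",  # example NGC image
        command=["python", "train.py"],                    # hypothetical script
        arguments=["--epochs", epochs],
    )
    # Request a GPU so Kubernetes schedules this step onto a GPU node,
    # whether that node is on-premises or in the cloud.
    train.set_gpu_limit(1)

if __name__ == "__main__":
    # The compiled YAML is portable across Kubeflow deployments.
    kfp.compiler.Compiler().compile(train_pipeline, "train_pipeline.yaml")
```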
Cisco and NVIDIA partnered with ESRI to create starting-point sizing guidance for ArcGIS Pro, pairing the optimal Cisco UCS server with the best-fit NVIDIA GPU to deliver three key capabilities of the software: 3D rendering, spatial analytics, and deep learning inferencing, all on the same server.
Modernizing VDI to support common productivity and collaboration applications, video content, graphics-intensive Windows 10, and the rising trend toward multiple, higher-resolution monitors is driving demand for more graphics computing resources. In this session, learn how customers using Cisco-NVIDIA solutions have turned to GPU virtualization to achieve a native-PC experience in the age of multimedia.
Machine learning and deep learning applications are revolutionizing how we as consumers interact with our compute devices by imbuing them with speech recognition, machine vision, and other perceptual capabilities. We are now seeing new advancements in AI that move from simple pattern recognition and sensory data processing to much deeper semantic processing. These new advancements in essence bridge the gap between machine learning techniques, including commoditized deep learning on SIMD GPUs, and the next generation of specialized distributed-memory MIMD hardware for large-scale graph analysis for symbolic artificial intelligence. In this session, we will cover the new capabilities and use cases Cisco is targeting with this new breakthrough technology.