Pixvana's Creative Director Scott Squires and Product Manager/Evangelist Tywen Kelly will demonstrate their cloud-based GPU video-processing system, created to handle many large 360 videos. The talk will also showcase how VRWorks has made it possible for Pixvana to stitch multiple 8K+ videos, each from several cameras, in parallel, and how Pixvana has transformed the editing process by allowing in-headset creation of interactive experiences with branching narratives. The potential for AI and machine learning in VR will also be covered.
Parking infrastructure drives a multi-billion-dollar revenue stream. Please join Arrow Electronics to learn how GPU edge compute and deep learning help enhance the guest experience, optimize parking utilization, and drive economic benefits for municipalities using readily deployable Jetson compute solutions and software from Arrow Electronics' customers and partners.
We will present NVIDIA's solution for interactive, real-time streaming of VR content (such as games and professional applications) from the cloud to a low-powered client driving a VR/AR headset. We will outline a few of the challenges, describe our design, and share some performance and quality metrics.
Learn how CannonDesign has incorporated NVIDIA's RTX technology into their visualization workflows. During this presentation, we will discuss how CannonDesign is leveraging the power of the new Quadro RTX video cards to optimize rendering times using V-Ray Next and Unreal Engine. We will share our evolutionary path to better rendering solutions, initial challenges with RTX, and our current workflow through case studies. This session will be of interest to attendees with a basic understanding of visualization workflows.
We'll provide an overview of new techniques developed by ZeroLight using NVIDIA's Volta and Turing GPUs to enhance real-time 3D visualization in the automotive industry for compelling retail experiences. We'll cover the challenges involved in integrating real-time ray-traced reflections at 60 fps in 4K and how future developments using DXR and NVIDIA RTX will enable improvements to both graphics and performance. We'll also discuss the challenges of achieving state-of-the-art graphical quality in virtual reality. Specifically, we'll explain how the team created a compelling commercial VR encounter using StarVR One and its eye-tracking capabilities for foveated rendering.
For decades GPUs have blazed pixels to the screen, while 3D geometry kernels primarily ran on the CPU. What happens when you move up the stack, leveraging the GPU to create the geometry itself? We'll introduce our groundbreaking native GPU-accelerated geometry kernel, which is fully accessible via a Python API. We'll discuss the first application built on Dyndrite, the Additive Manufacturing Toolkit, which prints parts using the same splines used to design them. We'll also outline what's next for the software.
Large AEC projects involve complex structure-design and validation tools, but they also benefit from high-end visualization and VR for proper 1:1-scale immersion and an appreciation of volume. CADMakers has been one of the earliest adopters of Dassault Systèmes 3DEXPERIENCE, which combines the legacy of over 20 years of CATIA CAD excellence with advanced rendering-material support and native VR immersion, without the need for external tools. This presentation will provide a unique inside view into today's and tomorrow's possibilities for decision-making in building design, leveraging the power of integrated virtual reality and visualization experiences that happen directly in the CAD tools of the 3DEXPERIENCE platform. The talk will present some of the latest GPU-intensive 3DEXPERIENCE achievements at CADMakers, including how the platform is used for building-construction simulation and high-end material validation for realistic AEC design review, as well as actual VR usage showing the graphics performance gains obtained on large AEC projects and how VR SLI enables 90-FPS immersion for multi-user, multi-location VR reviews. The talk will feature actual AEC datasets from buildings CADMakers designed with 3DEXPERIENCE.
While most industries have experienced growth due to rapid adoption of digital tools, productivity in architecture and construction has dropped since 2005. The industry's toolkit is limited and disjointed, imposing a ceiling on the capabilities of even the most talented designers. The result is lost time, wasted money, and an industry-wide foreclosing of creative possibility that affects the world around us. We'll discuss a suite of tools developed and deployed by our team at SHoP Architects, and describe the pioneering workflows we're using to redefine the future of architecture.
We'll discuss how advancements in GPU technology, real-time ray tracing, and virtual reality technologies furthered the building design process of a global design firm. Our talk will trace the steps we took to move beyond earlier approaches to using digital technology in building design by adopting, planning, and implementing new technologies. We'll cover our firm's infrastructure and cloud technology, and conclude with a full reveal of how we deploy and embrace advanced technologies to enhance designs, enrich internal and client communication, and ultimately deliver buildings that further improve our communities.
Learn about the plans of market leaders in streaming VR and AR content from the cloud in this panel discussion. From enterprise use cases to streaming VR to the 5G edge, panelists will describe the state-of-the-art and challenges to making XR truly mobile.
VR is rapidly evolving. HMD resolution and field of view are increasing, VR content is becoming more detailed, and demand for more realistic and more immersive experiences continues to grow. As we march forward in the pursuit of ever-better VR, how will we render fast enough to drive those higher-resolution displays? How will we generate realistic content for enormous virtual worlds? How will we continue to enhance the quality and depth of immersion? In this panel, we'll cover topics such as human perception and neurophysiology, adaptive rendering strategies that focus compute power where it's needed, and deep learning-based synthesis for virtual models and environments. Learn how these components are being integrated to drive the future of VR.
We'll discuss how we're leveraging GPU acceleration for AI applications used to model performance and monitor the health of smart cities. As concerns grow about aging infrastructure and environmental sustainability, integrated methods and software tools are needed to facilitate infrastructure health monitoring, perform data-driven and model-centric analysis, and achieve cost-effective and carbon-efficient design, construction, and operation. Analyzing and modeling large-scale civil infrastructure is a computationally intensive and time-consuming task. We'll discuss a number of use cases for GPUs for AI applications such as sensor placement.
Learn how a successful implementation of a low-memory-footprint, multi-GPU iterative method makes it possible to efficiently resolve localization of spontaneous nonlinear flow in deforming porous media. Grasping this physical process is essential to ensure safe underground waste storage and understand natural fluid migration in reservoirs. We'll describe our parallel, matrix-free solver design, which provides a short time to solution and can solve a variety of coupled and nonlinear systems of partial differential equations in 3D. We will unveil the key algorithmic and optimization concepts that enable our stencil-based solvers to converge in a few iterations while pushing the hardware limits of the most recent NVIDIA high-bandwidth GPU accelerators. We will also explain how we achieved 98 percent parallel efficiency on 5,000 NVIDIA Tesla P100 GPUs on the hybrid Cray XC-50 Piz Daint supercomputer at the Swiss National Supercomputing Centre, CSCS.
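For readers unfamiliar with the "matrix-free stencil-based solver" idea the abstract mentions, here is a deliberately minimal toy sketch (our own illustration, not the presenters' code): a Jacobi iteration for a 2D Poisson problem in which the system matrix is never assembled. Each update reads only a 5-point stencil neighborhood, which is what keeps the memory footprint low and makes the method bandwidth-bound and GPU-friendly.

```python
# Toy matrix-free stencil solver (illustration only, NOT the talk's solver):
# damped-free Jacobi iterations for -Laplace(u) = f on the unit square with
# zero Dirichlet boundary. No matrix is stored; each sweep touches only the
# 5-point stencil, the pattern that maps well to GPU memory bandwidth.

def solve_poisson(n=32, rhs_value=1.0, tol=1e-8, max_iters=20000):
    h = 1.0 / (n + 1)
    # (n+2) x (n+2) grid including the fixed zero boundary layer
    u = [[0.0] * (n + 2) for _ in range(n + 2)]
    f = [[rhs_value] * (n + 2) for _ in range(n + 2)]
    for it in range(max_iters):
        max_update = 0.0
        new_u = [row[:] for row in u]
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                # Jacobi update derived from the 5-point Laplacian stencil
                val = 0.25 * (u[i - 1][j] + u[i + 1][j]
                              + u[i][j - 1] + u[i][j + 1]
                              + h * h * f[i][j])
                max_update = max(max_update, abs(val - u[i][j]))
                new_u[i][j] = val
        u = new_u
        if max_update < tol:
            return u, it + 1  # converged
    return u, max_iters
```

A production solver would of course use accelerated iterations (the talk reports convergence in a few iterations) and run each stencil sweep as a GPU kernel, but the data-access pattern is the same.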
We'll talk about data exploration and how it's enabling next-generation buildings. We'll describe how we're using a mesh-based server application to tackle the building industry's headaches, and how GPU acceleration with Python allows us to perform large calculations. Our data-scraping and data-wrangling processes include geometric transformation functions, high-level matrix multiplication, proximity searches in quadtree structures, and advanced topological functions. We'll explain how we use GPUs to train a neural network to accurately classify information within a building and create a clean, exploitable data model, a task that's impossible with current systems. We'll also cover how this application benefits architects, technical coordinators, project owners, and others by increasing control accuracy and optimizing time and costs.
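As background on the quadtree proximity searches the abstract mentions, the following is a generic point-quadtree sketch (class and method names are our own invention for illustration, not the speakers' code). Points are bucketed into quadrants that split when full, so a rectangular query can prune whole subtrees that don't overlap it.

```python
# Generic point quadtree for 2D proximity/range queries (illustrative only).
class Quadtree:
    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.bounds = (x0, y0, x1, y1)
        self.capacity = capacity   # max points per leaf before splitting
        self.points = []
        self.children = None       # four sub-quadrants once split

    def insert(self, x, y):
        x0, y0, x1, y1 = self.bounds
        if not (x0 <= x <= x1 and y0 <= y <= y1):
            return False           # point outside this node's bounds
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((x, y))
                return True
            self._split()
        return any(c.insert(x, y) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [Quadtree(x0, y0, mx, my, self.capacity),
                         Quadtree(mx, y0, x1, my, self.capacity),
                         Quadtree(x0, my, mx, y1, self.capacity),
                         Quadtree(mx, my, x1, y1, self.capacity)]
        for px, py in self.points:  # redistribute into the new quadrants
            any(c.insert(px, py) for c in self.children)
        self.points = []

    def query(self, qx0, qy0, qx1, qy1, found=None):
        """Collect all points inside the query rectangle."""
        if found is None:
            found = []
        x0, y0, x1, y1 = self.bounds
        if qx1 < x0 or qx0 > x1 or qy1 < y0 or qy0 > y1:
            return found           # no overlap: prune this subtree
        for p in self.points:
            if qx0 <= p[0] <= qx1 and qy0 <= p[1] <= qy1:
                found.append(p)
        if self.children:
            for c in self.children:
                c.query(qx0, qy0, qx1, qy1, found)
        return found
```

In a building-data pipeline like the one described, each point might be a building element's plan location, letting "what's near this duct?" queries skip most of the model.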
Today's Internet of Things (IoT) has evolved into an Internet of Experiences, in which autonomous products and connected devices integrate more and more software to digitally connect to the physical world around them, blending together into a living experience shaped by interactions between products, nature, and life. To make them work together seamlessly, industrial companies and their ecosystems are looking for the ability to virtually co-design and simulate systems of systems, embedded systems, and software architectures across industries, and to shorten design and engineering innovation cycles through automation and the systematic reuse of existing data lakes to train cognitive augmented companions. 3DEXPERIENCE CATIA is driving the shift from traditional computer-aided design toward cognitive augmented design, in which seamless collaboration between engineers and AI-powered design solutions gives our users know-how reaching far beyond their initial domain of expertise. Combined with systems-engineering solutions, this paradigm shift enables organizations to first focus on the challenges to solve and then quickly and easily evaluate requests for new system variants, reducing overall development cost thanks to an open, extensible development platform that fully integrates the cross-discipline modeling, simulation, verification, and business-process support needed to develop everything from simple to sophisticated cyber-physical systems. In this presentation, we will illustrate how cognitive augmented solutions can support engineers in designing GPU-enabled intelligent systems, while themselves relying on AIs powered by accelerated computing.
Learn how Microsoft is extending WebRTC to enable real-time, interactive 3D streaming from the cloud to any low-powered remote device. The purpose is to provide an open-source toolkit to enable industries to leverage remote cloud rendering in their service and product pipelines. This is required for many industries in which the scale and complexity of 3D models, scenes, physics, and rendering is beyond the capabilities of a mobile device platform. We are extending the industry standard WebRTC framework to 3D scenarios such as mixed reality. We'll explain the work we did to realize the goal of delivering high-quality 3D applications to any client — web, mobile, desktop, and embedded. This is only possible using the NVIDIA NVENCODE pipeline for server-side rendering on the cloud.
Manufacturers are increasingly adopting AI to improve productivity. We'll discuss our work to automate the inspection process, which represents 20 percent of the manufacturing pipeline. We're developing deep learning for automated visual inspection, aiming for human-level accuracy, using NVIDIA GPUs and TensorRT to deploy the neural network on Jetson AGX Xavier. We'll also introduce our other new deep learning products.
Learn how to leverage GPUs to improve chip-design quality and make the VLSI design process faster. We will show a GPU-accelerated global placement engine built on PyTorch that achieved a 40X speedup over a multi-threaded implementation and can place a 10M-cell design in four minutes. In addition to direct GPU acceleration of design automation software, we'll explain how we apply a deep learning approach to physical design problems, which indirectly leverages GPUs. We will illustrate a method that leverages convolutional neural networks and fully convolutional networks to predict design rule checking hotspots during physical design. This deep learning-based approach significantly outperforms other ML approaches, such as support vector machines, in prediction accuracy.
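To give a flavor of why placement maps onto a framework like PyTorch at all: analytic placement can be cast as minimizing a differentiable wirelength objective over cell coordinates. The toy below (our own loose simplification, not the talk's engine, and far from a real placer) runs plain gradient descent on a quadratic wirelength for a chain of movable cells between two fixed I/O pads; a PyTorch implementation would express the same objective as a tensor loss and let autograd plus the GPU handle millions of cells.

```python
# Toy analytic placement sketch (illustrative simplification only):
# movable cells on a line, connected in a chain between two fixed pads,
# minimizing quadratic wirelength by hand-written gradient descent.

def place_chain(num_cells=3, left_pad=0.0, right_pad=4.0,
                lr=0.1, steps=500):
    # all cells start bunched at the left pad; the optimum spaces them evenly
    x = [left_pad] * num_cells
    for _ in range(steps):
        chain = [left_pad] + x + [right_pad]  # cell i sits at chain[i+1]
        grad = [0.0] * num_cells
        for i in range(num_cells):
            # d/dx_i of (x_i - left_nbr)^2 + (right_nbr - x_i)^2
            grad[i] = 2 * (x[i] - chain[i]) + 2 * (x[i] - chain[i + 2])
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x
```

With pads at 0 and 4 and three cells, the iterate converges to the evenly spaced solution [1, 2, 3], the zero-gradient point of the quadratic objective.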
AI and deep learning are about to change the manufacturing industry by boosting capacity, increasing efficiency, and reducing costs and inventory. We'll share how Foxconn Interconnect Technology, a professional parts and components company, developed an industrial inspection application in-house, and show how to leverage NVIDIA SDKs to accelerate the overall development and deployment process.
We'll discuss deep learning practices used in semiconductor processes. Semiconductor processes produce a wide variety of data, including image data produced during inspection that can be used to verify whether the process was normal. We'll explain why the problem isn't one of simple image classification, and why it's essential to consider the characteristics of the semiconductor process for accurate results. We'll also describe the challenges we faced and how we solved them.
It's time to separate the signal from the noise when it comes to autonomous driving. As self-driving trucks near commercial reality, the stakes are high for safe operation on our highways. Join Dr. Xiaodi Hou, founder, president, and CTO of TuSimple, the largest self-driving truck company worldwide, for a discussion of what it takes to design, test, and deploy a fully autonomous truck. Dr. Hou will lay it on the line in terms of what's working and what's not in the design and testing of today's self-driving trucks.
We'll provide a deep dive into how AI is helping solve real-world problems in the retail industry. Learn how NVIDIA GPUs allow our auto-checkout system to achieve accuracy and efficiency in in-store customer tracking, shelf inventory management, and automated store monitoring. We will explain how to use our automated checkout solution to fuse data streams efficiently, avoid delays and latency, and achieve unprecedented accuracy without sacrificing speed. In addition, we'll discuss a case study of an NVIDIA Tesla P4 setup and cover the technology choices behind various automated checkout solutions.
This session focuses on applying deep learning to predictive maintenance in paper manufacturing. Addressing this problem with machine learning methods requires considerable training data that's usually unavailable in the paper industry, so we'll introduce an approach that combines GANs and reinforcement learning. We'll describe how it makes it possible to create a digital twin of equipment based on sensor data, which works like a virtual environment for synthetic data generation.
We'll talk about how we achieved lower costs, better density, and guaranteed performance while migrating from a legacy large-scale GPU-passthrough VDI architecture to the latest NVIDIA vGPU solution. We'll also discuss our work delivering remote workstations to subcontractors through our VDI solution, which has served 6,000 concurrent users since 2014.
We'll examine the potential for spatial computing and machine learning to reintroduce people to the physical potential of their bodies by focusing on Embody, MAP Lab's 2019 Sundance premiere. Inspired by movement traditions such as aikido, yoga, and dance, Embody is a social VR experience that uses visual metaphor and encouragement from teachers and friends to bring about coordinated body movement. We'll explain how this experience, which is piloted entirely by body movement and position, reclaims the body's potential inside the digital landscape. Users prompt each other with conversation, mirroring, and environmental channeling to step together through physical sequences designed to center, balance, extend, and strengthen. We hope players who experience Embody will be reminded of their deep physical potential and remember that the body is a flexible tool and able to change (http://www.sundance.org/projects/embody).
We'll describe PLDC, a capability embedded inside NuFlare's MBM-1000 multi-beam mask writer that performs semiconductor mask-process correction. PLDC does pixel-level dose correction and improves quality and reliability of shapes printed on the mask. We'll explain how PLDC processes 540 TB of mask data in 10 hours, a feat that would not be possible without advances in GPU computing.
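To put the abstract's figures in perspective, a back-of-envelope calculation (using only the numbers stated above) shows the sustained data rate PLDC must keep up with:

```python
# Sustained throughput implied by the stated workload:
# 540 TB of mask data processed in 10 hours.
total_bytes = 540e12
seconds = 10 * 3600
rate_gb_per_s = total_bytes / seconds / 1e9
print(round(rate_gb_per_s, 1))  # prints 15.0 (GB/s, sustained end to end)
```

Sustaining roughly 15 GB/s of pixel-level correction for ten hours straight is the scale at which GPU memory bandwidth becomes the enabling factor.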
BMW's logistics and Industry 4.0 research team, X Works, will present how GPU computing power is leveraged in an end-to-end pipeline for object labelling and detection, developed in-house and deployed to a wide range of applications throughout BMW Group. The talk will describe how GPU computing supports the creation of photorealistic meshes, and how our 3D pipeline helps BMW associates efficiently create large datasets to train 2D/3D detection models for industrial use cases in robotics, autonomous transport, interactive layout planning, virtual reality visualization, and smart three-dimensional maps.
In this session, we'll discuss our approach to applying deep learning algorithms to full car inspection and address the challenges involved in real-time data processing. We'll demonstrate how we apply convolutional neural networks for geometry verification during complete vehicle inspection on the BMW assembly line. The complete inspection of a fully assembled car involves verifying the geometry of the car, ensuring correct customization, and detecting possible defects. AI-aided computer vision eliminates the problems of traditional approaches arising from highly reflective surfaces and specular reflections. Using deep learning models and the processing power of a multi-GPU system, we can make the inspection happen in real time.