Pixvana's Creative Director Scott Squires and Product Manager/Evangelist Tywen Kelly will demonstrate their cloud-based GPU video-processing system, built to handle many large 360 videos. The talk will also showcase how VRWorks has made it possible for Pixvana to stitch multiple 8K+ videos, each from several cameras, in parallel, and how Pixvana has transformed the editing process by allowing in-headset creation of interactive experiences with branching narratives. The potential for AI and machine learning in VR will also be covered.
We will present NVIDIA's solution for interactive, real-time streaming of VR content (such as games and professional applications) from the cloud to a low-powered client driving a VR/AR headset. We will outline a few of the challenges, describe our design, and share some performance and quality metrics.
Learn how CannonDesign has incorporated NVIDIA's RTX technology into its visualization workflows. During this presentation, we will discuss how CannonDesign is leveraging the power of the new Quadro RTX graphics cards to optimize rendering times using V-Ray Next and Unreal Engine. We will share our evolutionary path to better rendering solutions, our initial challenges with RTX, and our current workflow through case studies. This session will be of interest to attendees with a basic understanding of visualization workflows.
We'll provide an overview of new techniques developed by ZeroLight using NVIDIA's Volta and Turing GPUs to enhance real-time 3D visualization in the automotive industry for compelling retail experiences. We'll cover the challenges involved in integrating real-time ray-traced reflections at 60 fps in 4K, and how future developments using DXR and NVIDIA RTX will enable improvements to both graphics and performance. We'll also discuss the challenges of achieving state-of-the-art graphical quality in virtual reality. Specifically, we'll explain how the team created a compelling commercial VR encounter using the StarVR One and its eye-tracking capabilities for foveated rendering.
Large AEC projects involve complex structural design and validation tools, but they also benefit from high-end visualization and VR for true one-to-one scale immersion and volume apprehension. CADMakers has been one of the earliest adopters of Dassault Systèmes 3DEXPERIENCE, which combines the legacy of over 20 years of CATIA CAD excellence with advanced rendering-material support and native VR immersion, without the need for external tools. This presentation will provide a unique inside view into today's and tomorrow's possibilities for decision making in building design, leveraging the power of integrated virtual reality and visualization experiences that happen directly in the CAD tools of the 3DEXPERIENCE platform. The talk will present some of the latest GPU-intensive 3DEXPERIENCE achievements at CADMakers, including how the platform is being used for building-construction simulation and high-end material validation for realistic AEC design review, as well as actual VR usage showing the graphics performance gains obtained with large AEC projects and how VR SLI enables 90-fps immersion for multi-user, multi-location VR reviews. The talk will be illustrated with actual AEC datasets used by CADMakers for buildings designed with 3DEXPERIENCE.
While most industries have experienced growth due to rapid adoption of digital tools, productivity in architecture and construction has dropped since 2005. The industry's toolkit is limited and disjointed, imposing a ceiling on the capabilities of even the most talented designers. The result is lost time, wasted money, and an industry-wide foreclosing of creative possibility that affects the world around us. We'll discuss a suite of tools developed and deployed by our team at SHoP Architects, and describe the pioneering workflows we're using to redefine the future of architecture.
We'll discuss how advancements in GPU technology, real-time ray tracing, and virtual reality technologies furthered the building design process of a global design firm. Our talk will trace the steps we took to move beyond earlier approaches to using digital technology in building design by adopting, planning, and implementing new technologies. We'll cover our firm's infrastructure and cloud technology, and conclude with a full reveal of how we deploy and embrace advanced technologies to enhance designs, enrich internal and client communication, and ultimately deliver buildings that further improve our communities.
Learn about the plans of market leaders in streaming VR and AR content from the cloud in this panel discussion. From enterprise use cases to streaming VR to the 5G edge, panelists will describe the state-of-the-art and challenges to making XR truly mobile.
VR is rapidly evolving. HMD resolution and field of view are increasing, VR content is becoming more detailed, and demand for more realistic and more immersive experiences continues to grow. As we march forward in the pursuit of ever-better VR, how will we render fast enough to drive those higher-resolution displays? How will we generate realistic content for enormous virtual worlds? How will we continue to enhance the quality and depth of immersion? In this panel, we'll cover topics such as human perception and neurophysiology, adaptive rendering strategies that focus compute power where it's needed, and deep learning-based synthesis for virtual models and environments. Learn how these components are being integrated to drive the future of VR.
We'll discuss how we're leveraging GPU acceleration for AI applications used to model performance and monitor the health of smart cities. As concerns grow about aging infrastructure and environmental sustainability, integrated methods and software tools are needed to facilitate infrastructure health monitoring, perform data-driven and model-centric analysis, and achieve cost-effective and carbon-efficient design, construction, and operation. Analyzing and modeling large-scale civil infrastructure is a computationally intensive and time-consuming task. We'll discuss a number of use cases for GPUs for AI applications such as sensor placement.
Learn how a successful implementation of a low-memory-footprint, multi-GPU iterative method makes it possible to efficiently resolve localization of spontaneous nonlinear flow in deforming porous media. Grasping this physical process is essential to ensure safe underground waste storage and understand natural fluid migration in reservoirs. We'll describe our parallel, matrix-free solver design, which provides a short time to solution and can solve a variety of coupled and nonlinear systems of partial differential equations in 3D. We will unveil the key algorithmic and optimization concepts that enable our stencil-based solvers to converge in a few iterations while pushing against the hardware limits of the most recent NVIDIA high-bandwidth GPU accelerators. We will also explain how we achieved 98 percent parallel efficiency on 5,000 NVIDIA Tesla P100 GPUs on the hybrid Cray XC50 Piz Daint supercomputer at the Swiss National Supercomputing Centre (CSCS).
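To make the "matrix-free, stencil-based" idea concrete: instead of assembling and storing a sparse matrix, each iteration applies the discrete operator directly through neighbor accesses on the grid, which is what keeps the memory footprint low. The following is a minimal illustrative sketch in Python/NumPy of this pattern for a simple linear 3D Poisson problem with Jacobi iterations; it is not the presenters' solver, which targets coupled nonlinear PDEs on GPUs.

```python
# Illustrative matrix-free stencil solver (toy example, not the talk's code):
# solves the discrete 3D Poisson equation  laplacian(u) = f  with zero
# Dirichlet boundaries using Jacobi sweeps. No matrix is ever stored;
# the 7-point stencil is applied via array slicing (neighbor accesses).
import numpy as np

def jacobi_step(u, f, h):
    """One matrix-free Jacobi sweep over the interior of a 3D grid."""
    u_new = u.copy()
    u_new[1:-1, 1:-1, 1:-1] = (
        u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1] +   # x-neighbors
        u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1] +   # y-neighbors
        u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:] -   # z-neighbors
        h * h * f[1:-1, 1:-1, 1:-1]
    ) / 6.0
    return u_new

def solve(f, h, tol=1e-9, max_iters=10_000):
    """Iterate until the largest per-sweep update falls below tol."""
    u = np.zeros_like(f)
    for it in range(max_iters):
        u_new = jacobi_step(u, f, h)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, it
        u = u_new
    return u, max_iters
```

On a GPU, the same sweep maps naturally to one thread per grid point, and memory traffic (not flops) dominates, which is why stencil solvers are typically judged against the hardware's bandwidth limit.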
Today's Internet of Things (IoT) has evolved into the Internet of Experiences, where autonomous products and connected devices integrate ever more software to connect digitally with the physical world around them, blending together to become part of a living experience shaped by interactions between products, nature, and life. To make them work together seamlessly, industrial companies and their ecosystems are looking for the ability to virtually co-design and simulate systems of systems, embedded systems, and software architectures across industries, and to shorten design and engineering innovation cycles through automation and the systematic reuse of existing data lakes for training Cognitive Augmented Companions. 3DEXPERIENCE CATIA is driving the shift from traditional computer-aided design toward Cognitive Augmented Design, in which seamless collaboration between engineers and AI-powered design solutions empowers users with know-how reaching far beyond their initial domain of expertise. Combined with systems engineering solutions, this paradigm shift enables organizations to focus first on the challenges to solve and then to quickly and easily evaluate requests for new system variants, reducing overall development cost thanks to an open and extensible development platform that fully integrates the cross-discipline modeling, simulation, verification, and business-process support needed for developing simple to sophisticated cyber-physical systems. In this presentation, we will illustrate how Cognitive Augmented solutions can support engineers in designing GPU-enabled intelligent systems, while themselves relying on AIs powered by accelerated computing.
Learn how Microsoft is extending WebRTC to enable real-time, interactive 3D streaming from the cloud to any low-powered remote device. The purpose is to provide an open-source toolkit to enable industries to leverage remote cloud rendering in their service and product pipelines. This is required for many industries in which the scale and complexity of 3D models, scenes, physics, and rendering is beyond the capabilities of a mobile device platform. We are extending the industry standard WebRTC framework to 3D scenarios such as mixed reality. We'll explain the work we did to realize the goal of delivering high-quality 3D applications to any client — web, mobile, desktop, and embedded. This is only possible using the NVIDIA NVENCODE pipeline for server-side rendering on the cloud.
It's time to separate the signal from the noise when it comes to autonomous driving. As self-driving trucks near commercial reality, the stakes are high for safe operation on our highways. Join Dr. Xiaodi Hou, Founder, President, and CTO of TuSimple, the largest self-driving truck company worldwide, for a discussion of what it takes to design, test, and deploy a fully autonomous truck. Dr. Hou will lay it on the line in terms of what's working and what's not in the design and testing of today's self-driving trucks.
We'll provide a deep dive into how AI is helping solve real-world problems in the retail industry. Learn how NVIDIA GPUs allow our auto-checkout system to achieve accuracy and efficiency in in-store customer tracking, shelf inventory management, and automated store monitoring. We will explain how to use our automated checkout solution to fuse data streams efficiently, avoid delays and latency, and achieve unprecedented accuracy without sacrificing speed. In addition, we'll discuss a case study of NVIDIA Tesla P4 setup and cover technology choices behind various automated checkout solutions.
We'll talk about how we achieved lower costs, better density, and guaranteed performance while migrating from legacy large-scale GPU passthrough VDI architecture to the latest NVIDIA vGPU solution. We'll also discuss our work delivering remote workstations to subcontractors through our VDI solution for 6,000 concurrent users since 2014.
We'll examine the potential for spatial computing and machine learning to reintroduce people to the physical potential of their bodies by focusing on Embody, MAP Lab's 2019 Sundance premiere. Inspired by movement traditions such as aikido, yoga, and dance, Embody is a social VR experience that uses visual metaphor and encouragement from teachers and friends to bring about coordinated body movement. We'll explain how this experience, which is piloted entirely by body movement and position, reclaims the body's potential inside the digital landscape. Users prompt each other with conversation, mirroring, and environmental channeling to step together through physical sequences designed to center, balance, extend, and strengthen. We hope players who experience Embody will be reminded of their deep physical potential and remember that the body is a flexible tool and able to change (http://www.sundance.org/projects/embody).