Pixvana's Creative Director Scott Squires and Product Manager/Evangelist Tywen Kelly will demonstrate their cloud-based GPU video-processing system, built to handle many large 360° videos. The talk will also showcase how VRWorks has made it possible for Pixvana to stitch multiple 8K+ videos, each combining footage from several cameras, in parallel, and how Pixvana has transformed the editing process by allowing in-headset creation of interactive experiences with branching narratives. The potential for AI and machine learning in VR will also be covered.
We will present NVIDIA's solution for interactive, real-time streaming of VR content (such as games and professional applications) from the cloud to a low-powered client driving a VR/AR headset. We will outline a few of the challenges, describe our design, and share some performance and quality metrics.
Learn how CannonDesign has incorporated NVIDIA's RTX technology into its visualization workflows. During this presentation, we will discuss how CannonDesign is leveraging the power of the new Quadro RTX GPUs to optimize rendering times using V-Ray Next and Unreal Engine. We will share our evolutionary path to better rendering solutions, our initial challenges with RTX, and our current workflow through case studies. This session will be of interest to attendees with a basic understanding of visualization workflows.
We'll provide an overview of new techniques developed by ZeroLight using NVIDIA's Volta and Turing GPUs to enhance real-time 3D visualization in the automotive industry for compelling retail experiences. We'll cover the challenges involved in integrating real-time ray-traced reflections at 60 fps in 4K and how future developments using DXR and NVIDIA RTX will enable improvements to both graphics and performance. We'll also discuss the challenges of achieving state-of-the-art graphical quality in virtual reality. Specifically, we'll explain how the team created a compelling commercial VR experience using StarVR One and its eye-tracking capabilities for foveated rendering.
For decades GPUs have blazed pixels to the screen, while 3D geometry kernels primarily ran on the CPU. What happens when you move up the stack, leveraging the GPU to create the geometry itself? We'll introduce our groundbreaking native GPU-accelerated geometry kernel, which is fully accessible via a Python API. We'll discuss the first application built on Dyndrite, the Additive Manufacturing Toolkit, which prints parts using the same splines used to design them. We'll also outline what's next for the software.
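To illustrate the general idea, here is a minimal sketch of evaluating a cubic Bézier design spline directly on the GPU, assuming CuPy. This is not Dyndrite's actual API; it only shows the kind of computation a GPU-native geometry kernel keeps on-device instead of round-tripping through the CPU:

```python
# Hypothetical illustration only -- not the Dyndrite Python API.
# Evaluates a cubic Bezier design spline at many parameter values
# entirely on the GPU, so downstream toolpath steps never leave
# device memory.
import cupy as cp

def bezier_points(ctrl, n_samples=1_000_000):
    """ctrl: (4, 3) array of control points for one cubic segment."""
    t = cp.linspace(0.0, 1.0, n_samples)[:, None]   # (n, 1)
    basis = cp.stack([(1 - t)**3,                   # Bernstein basis
                      3 * t * (1 - t)**2,
                      3 * t**2 * (1 - t),
                      t**3], axis=1)                # (n, 4, 1)
    return (basis * ctrl[None, :, :]).sum(axis=1)   # (n, 3) points

ctrl = cp.asarray([[0, 0, 0], [1, 2, 0], [3, 2, 0], [4, 0, 0]],
                  dtype=cp.float32)
pts = bezier_points(ctrl)   # stays on the GPU for later slicing steps
```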
Large AEC projects involve complex structural design and validation tools, but they also benefit from high-end visualization and VR for true-to-scale immersion and a better grasp of volumes. CADMakers was one of the earliest adopters of Dassault Systèmes 3DEXPERIENCE, which combines more than 20 years of CATIA CAD heritage with advanced rendering-material support and native VR immersion, without the need for external tools. This presentation will provide a unique inside view into today's and tomorrow's possibilities for decision making in building design, leveraging integrated virtual reality and visualization experiences that happen directly in the CAD tools of the 3DEXPERIENCE platform. The talk will present some of CADMakers' latest GPU-intensive 3DEXPERIENCE achievements, including how the platform is used for building-construction simulation and high-end material validation for realistic AEC design review, as well as real VR use cases showing the graphics performance gains obtained on large AEC projects and how VR SLI enables 90 FPS immersion for multi-user, multi-location VR reviews. The talk will feature actual AEC datasets used by CADMakers for buildings designed with 3DEXPERIENCE.
Our talk will describe the business and technical challenges posed by subsurface exploration for oil and gas and how the high-performance RiVA computing platform addresses these challenges. Oil and gas companies have struggled to find experts to manage the complex systems needed for subsurface exploration. In addition, the large data sets these engineers require often take more than eight hours to load. We'll discuss RiVA, which was built to address problems with slow data transfers, and describe how it offers performance 30 times faster than other solutions and reduces deployment time from years to months. We'll cover the technologies that make our solution possible, including NVIDIA GPUs, Mechdyne TGX, and the Leostream Connection Broker. In addition, a RiVA customer will share challenges and show how deploying RiVA helped lower costs during deployment and production.
While most industries have experienced growth due to rapid adoption of digital tools, productivity in architecture and construction has dropped since 2005. The industry's toolkit is limited and disjointed, imposing a ceiling on the capabilities of even the most talented designers. The result is lost time, wasted money, and an industry-wide foreclosing of creative possibility that affects the world around us. We'll discuss the suite of tools developed and deployed by our team at SHoP Architects and describe the pioneering workflows we're using to redefine the future of architecture.
We'll discuss how advancements in GPU technology, real-time ray tracing, and virtual reality technologies furthered the building design process of a global design firm. Our talk will trace the steps we took to move beyond earlier approaches to using digital technology in building design by adopting, planning, and implementing new technologies. We'll cover our firm's infrastructure and cloud technology, and conclude with a full reveal of how we deploy and embrace advanced technologies to enhance designs, enrich internal and client communication, and ultimately deliver buildings that further improve our communities.
VDI users across multiple industries can now harness the power of the world's most advanced virtual workstation to enable increasingly demanding workflows. This session brings together graphics-virtualization thought leaders and experts from across the globe who have deep knowledge of NVIDIA virtual GPU architecture and years of experience implementing VDI across multiple hypervisors. Panelists will discuss how they transformed organizations, including how they leveraged multi-GPU support to boost GPU horsepower for photorealistic rendering and data-intensive simulation, and how they used NGC containers to easily stand up GPU-accelerated deep learning and HPC VDI environments.
With the growing demand for Intelligent Video Analytics (IVA), NVIDIA virtual GPUs provide a secure solution that optimizes GPU utilization for inference-based deep learning applications such as loss prevention, facial recognition, pose estimation, and many other use cases.
Learn how HPE and NVIDIA are simplifying infrastructure and delivering extreme graphics and performance on the HPE SimpliVity HCI platform. We'll talk about EUC offerings and use cases for HPE SimpliVity with NVIDIA GPUs and highlight performance metrics achieved through industry-standard benchmarks.
We'll talk about how we achieved lower costs, better density, and guaranteed performance while migrating from a legacy large-scale GPU-passthrough VDI architecture to the latest NVIDIA vGPU solution. We'll also discuss our work delivering remote workstations to subcontractors through our VDI solution, which has supported 6,000 concurrent users since 2014.
Learn about the plans of market leaders in streaming VR and AR content from the cloud in this panel discussion. From enterprise use cases to streaming VR to the 5G edge, panelists will describe the state of the art and the challenges to making XR truly mobile.
VR is rapidly evolving. HMD resolution and field of view are increasing, VR content is becoming more detailed, and demand for more realistic and more immersive experiences continues to grow. As we march forward in the pursuit of ever-better VR, how will we render fast enough to drive those higher-resolution displays? How will we generate realistic content for enormous virtual worlds? How will we continue to enhance the quality and depth of immersion? In this panel, we'll cover topics such as human perception and neurophysiology, adaptive rendering strategies that focus compute power where it's needed, and deep learning-based synthesis of virtual models and environments. Learn how these components are being integrated to drive the future of VR.
We'll discuss how we're leveraging GPU acceleration for AI applications used to model performance and monitor the health of smart cities. As concerns grow about aging infrastructure and environmental sustainability, integrated methods and software tools are needed to facilitate infrastructure health monitoring, perform data-driven and model-centric analysis, and achieve cost-effective and carbon-efficient design, construction, and operation. Analyzing and modeling large-scale civil infrastructure is a computationally intensive and time-consuming task. We'll discuss a number of use cases for GPU-powered AI applications such as sensor placement.
We'll discuss Prism, a Technicolor initiative to produce a high-end OptiX-based path tracer for fast previews of elements, shots, or sequences. It incorporates open-source technologies such as OpenSubdiv, Open Shading Language, and Pixar's USD to achieve a high level of fidelity and realism. We will explain why we chose to develop a modern GPU rendering system and the advantages of using it in combination with RTX graphics cards.
Learn how Microsoft is extending WebRTC to enable real-time, interactive 3D streaming from the cloud to any low-powered remote device. The goal is to provide an open-source toolkit that lets industries leverage remote cloud rendering in their service and product pipelines. This is essential for industries in which the scale and complexity of 3D models, scenes, physics, and rendering exceed the capabilities of a mobile device. We are extending the industry-standard WebRTC framework to 3D scenarios such as mixed reality. We'll explain the work we did to realize the goal of delivering high-quality 3D applications to any client — web, mobile, desktop, and embedded. This is only possible using the NVIDIA NVENCODE pipeline for server-side rendering in the cloud.
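For context, the client side of a remote-rendering pipeline like this follows the standard WebRTC negotiation flow. A minimal sketch using the Python aiortc library as a stand-in client (the /offer signaling endpoint is a hypothetical placeholder, not part of Microsoft's toolkit):

```python
# Sketch of a WebRTC client receiving a server-rendered video track.
# aiortc mirrors the browser RTCPeerConnection API; the signaling URL
# below is a hypothetical stand-in for a real signaling channel.
import asyncio
import aiohttp
from aiortc import RTCPeerConnection, RTCSessionDescription

async def connect(signaling_url: str):
    pc = RTCPeerConnection()

    @pc.on("track")
    def on_track(track):
        # Frames arriving here were rendered and encoded server-side.
        print("receiving remote-rendered stream:", track.kind)

    pc.addTransceiver("video", direction="recvonly")
    await pc.setLocalDescription(await pc.createOffer())

    async with aiohttp.ClientSession() as http:
        async with http.post(signaling_url, json={
            "sdp": pc.localDescription.sdp,
            "type": pc.localDescription.type,
        }) as resp:
            answer = await resp.json()

    await pc.setRemoteDescription(
        RTCSessionDescription(sdp=answer["sdp"], type=answer["type"]))
    await asyncio.sleep(30)   # keep the session alive for the demo
    await pc.close()

asyncio.run(connect("https://example.invalid/offer"))
```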
We'll discuss deep learning practices used in semiconductor manufacturing. Semiconductor processes produce a wide variety of data, including inspection images that can be used to verify that a process step ran normally. We'll explain why this isn't a simple image-classification problem, and why the characteristics of the semiconductor process must be taken into account for accurate results. We'll also describe the challenges we faced and how we solved them.
We'll examine the potential for spatial computing and machine learning to reintroduce people to the physical potential of their bodies by focusing on Embody, MAP Lab's 2019 Sundance premiere. Inspired by movement traditions such as aikido, yoga, and dance, Embody is a social VR experience that uses visual metaphor and encouragement from teachers and friends to bring about coordinated body movement. We'll explain how this experience, which is piloted entirely by body movement and position, reclaims the body's potential inside the digital landscape. Users prompt each other with conversation, mirroring, and environmental channeling to step together through physical sequences designed to center, balance, extend, and strengthen. We hope players who experience Embody will be reminded of their deep physical potential and remember that the body is a flexible tool, able to change (http://www.sundance.org/projects/embody).
We'll talk about data exploration and how it's enabling next-generation buildings. We'll describe how we're using a mesh-based server application to tackle the building industry's headaches, and how GPU acceleration with Python allows us to perform large calculations. Our data-scraping and data-wrangling processes include geometric transformation functions, large-scale matrix multiplication, proximity-search algorithms on quadtree structures, and advanced topological functions. We'll explain how we use GPUs to train a neural network to accurately classify information within a building, creating a clean and exploitable data model, a task that's impossible with current systems. We'll also cover how this application benefits architects, technical coordinators, project owners, and others by increasing control accuracy and optimizing time and costs.
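As a minimal sketch of the GPU-with-Python pattern described (assuming CuPy; the abstract doesn't name the actual stack), here is one batched geometric transformation applied to every vertex of a large building model in a single GPU matrix multiplication:

```python
# Minimal sketch (assuming CuPy) of the GPU-accelerated pattern above:
# one 4x4 homogeneous transform applied to millions of model vertices
# via a single batched matrix multiplication.
import cupy as cp

def transform_vertices(vertices, matrix):
    """vertices: (n, 3) float array; matrix: (4, 4) homogeneous transform."""
    n = vertices.shape[0]
    homo = cp.concatenate(
        [vertices, cp.ones((n, 1), dtype=vertices.dtype)], axis=1)
    out = homo @ matrix.T          # one GPU GEMM for all vertices
    return out[:, :3] / out[:, 3:4]

verts = cp.random.rand(5_000_000, 3, dtype=cp.float32)  # a large BIM mesh
rot_z = cp.asarray([[0, -1, 0, 0],                      # 90-degree rotation
                    [1,  0, 0, 0],
                    [0,  0, 1, 0],
                    [0,  0, 0, 1]], dtype=cp.float32)
moved = transform_vertices(verts, rot_z)
```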
Today's Internet of Things (IoT) has evolved into an Internet of Experiences, where autonomous products and connected devices integrate ever more software to connect digitally with the physical world around them, blending into a living experience shaped by interactions among products, nature, and life. To make these systems work together seamlessly, industrial companies and their ecosystems are looking to virtually co-design and simulate systems of systems, embedded systems, and software architectures across industries, and to shorten design and engineering innovation cycles through automation and the systematic reuse of existing data lakes for training cognitive augmented companions. 3DEXPERIENCE CATIA is driving the shift from traditional computer-aided design toward cognitive augmented design, where seamless collaboration between engineers and AI-powered design solutions gives users know-how reaching far beyond their initial domain of expertise. Combined with systems engineering solutions, this paradigm shift lets organizations focus first on the challenges to solve and then quickly and easily evaluate requests for new system variants, reducing overall development cost thanks to an open, extensible development platform that fully integrates the cross-discipline modeling, simulation, verification, and business-process support needed to develop everything from simple to sophisticated cyber-physical systems. In this presentation, we will illustrate how cognitive augmented solutions can help engineers design GPU-enabled intelligent systems while themselves relying on AIs powered by accelerated computing.
Global enterprises need to compress analysis time frames to update the business in real time, a process called active analytics. We will discuss and demo how to bring together the key elements of an active analytics architecture, including historical, streaming, and graph analytics; location intelligence; and machine learning for predictive analytics.
We'll discuss our work on mission display computers, which play an important role in imaging applications such as digital moving maps, 360° situational awareness, surveillance, embedded training, and degraded visual environments. These systems are designed with advanced graphics capability to drive multiple independent displays with video from multiple sources. The displayed video can include input from multiple sensors, generated digital-map video, symbology, and metadata from a variety of sources, overlaid to provide instantaneous independent views. We'll explain our flexible mission display computer design, which uses four VPX2-1220 single-board computer modules and a video mixer built around an NVIDIA Pascal-based VPX3U-P5000-SDI-8IO module. The GPU-based mixer provides the increased programmability and versatility required of a mission display computer's video capabilities.
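The overlay step itself is conceptually straightforward alpha compositing. A minimal NumPy sketch of the math (illustrative only; the actual mixer performs this per frame on the GPU module described above):

```python
# Illustrative alpha compositing of a symbology overlay onto sensor
# video: out = alpha * overlay + (1 - alpha) * frame, per pixel.
import numpy as np

def composite(frame: np.ndarray, overlay_rgba: np.ndarray) -> np.ndarray:
    """frame: (h, w, 3) uint8 sensor video; overlay_rgba: (h, w, 4) uint8."""
    rgb = overlay_rgba[..., :3].astype(np.float32)
    alpha = overlay_rgba[..., 3:4].astype(np.float32) / 255.0
    out = alpha * rgb + (1.0 - alpha) * frame.astype(np.float32)
    return out.astype(np.uint8)

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)      # sensor frame
symbology = np.zeros((1080, 1920, 4), dtype=np.uint8)  # mostly transparent
symbology[500:520, 900:1020] = (0, 255, 0, 200)        # a green marker
mixed = composite(frame, symbology)
```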
We'll describe PLDC, a capability embedded inside NuFlare's MBM-1000 multi-beam mask writer that performs semiconductor mask-process correction. PLDC applies pixel-level dose correction, improving the quality and reliability of shapes printed on the mask. We'll explain how PLDC processes 540 TB of mask data in 10 hours, a feat that would not be possible without advances in GPU computing.
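For scale, a back-of-the-envelope figure (ours, not from the abstract): 540 TB in 10 hours is 540 TB / 36,000 s, or roughly 15 GB/s of sustained processing throughput.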