Legacy production methods can't keep up with the global nature of content creation. Studios need to operate wherever tax incentives are offered, and artist talent may be located anywhere in the world. A Virtual Studio lets you deploy resources where and when you need them in a matter of minutes rather than weeks, so you can ramp up and down as production ebbs and flows. Virtual Studios are fueled by GPUs, which give artists and engineers a powerful virtual workstation and the ability to accelerate renders and simulations, both locally and distributed across clusters. In the cloud, you can visualize and manipulate massive datasets that would be difficult or even impossible to handle on traditional hardware. This session examines the benefits, strategies, and challenges of building a Virtual Studio on Google Cloud Platform, powered by NVIDIA GPUs.
We'll examine the potential for spatial computing and machine learning to reintroduce people to the physical potential of their bodies by focusing on Embody, MAP Lab's 2019 Sundance premiere. Inspired by movement traditions such as aikido, yoga, and dance, Embody is a social VR experience that uses visual metaphor and encouragement from teachers and friends to bring about coordinated body movement. We'll explain how this experience, which is piloted entirely by body movement and position, reclaims the body's potential inside the digital landscape. Users prompt each other with conversation, mirroring, and environmental channeling to step together through physical sequences designed to center, balance, extend, and strengthen. We hope players who experience Embody will be reminded of their deep physical potential and remember that the body is a flexible tool, capable of change (http://www.sundance.org/projects/embody).
Generative methods allow a computer to automatically distill the essence of a dataset and then produce novel examples that are indistinguishable from the original data. That's the promise, but getting there has been difficult. This talk focuses on recent advances in generative adversarial networks (GANs), describing the ideas that have finally enabled the synthesis of credible high-resolution images. It also covers recent work by NVIDIA (StyleGAN) that makes image generation more controllable by borrowing ideas from the style transfer literature, and that also leads to an interesting, unsupervised separation of high-level attributes (e.g., pose or identity in the case of human faces) from inconsequential variation in the images (the exact placement of hair, etc.).
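To make the adversarial setup concrete, here is a minimal sketch of the training loop at the heart of a GAN, written in PyTorch on a toy 1-D dataset; the tiny networks, data, and hyperparameters are illustrative stand-ins, not StyleGAN's style-based convolutional architecture.

    # Minimal GAN training loop on a toy dataset (illustrative only; StyleGAN
    # uses a style-based convolutional generator and very different training).
    import torch
    import torch.nn as nn

    latent_dim = 16

    # Toy networks: the generator maps noise to 2-D samples,
    # the discriminator scores samples as real or fake (logits).
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    def real_batch(n=64):
        # Stand-in "dataset": points drawn from a shifted Gaussian.
        return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

    for step in range(1000):
        # Discriminator update: distinguish real samples from generated ones.
        real = real_batch()
        fake = G(torch.randn(real.size(0), latent_dim)).detach()
        loss_d = bce(D(real), torch.ones(real.size(0), 1)) + \
                 bce(D(fake), torch.zeros(fake.size(0), 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator update: fool the discriminator into scoring fakes as real.
        fake = G(torch.randn(64, latent_dim))
        loss_g = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()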
We'll discuss Prism, a Technicolor initiative to produce a high-end, OptiX-based path tracer for fast previews of elements, shots, or sequences. It incorporates open source technologies like OpenSubdiv, Open Shading Language, and Pixar's USD to produce a high level of fidelity and realism. We'll explain why we chose to develop a modern GPU rendering system and the advantages of using it in combination with RTX graphics cards.
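As a rough illustration of the scene-ingestion side of such a renderer, the sketch below uses Pixar's USD Python bindings to walk a stage and pull out mesh geometry; the file path is hypothetical, and a production system like Prism would additionally resolve subdivision schemes, OSL shading, and instancing.

    # Sketch of USD scene ingestion for a GPU preview renderer.
    from pxr import Usd, UsdGeom

    stage = Usd.Stage.Open("scene.usd")   # hypothetical asset path
    time = Usd.TimeCode.Default()

    for prim in stage.Traverse():
        if prim.IsA(UsdGeom.Mesh):
            mesh = UsdGeom.Mesh(prim)
            points = mesh.GetPointsAttr().Get(time)             # vertex positions
            if points is None:
                continue
            counts = mesh.GetFaceVertexCountsAttr().Get(time)   # verts per face
            indices = mesh.GetFaceVertexIndicesAttr().Get(time)
            xform = UsdGeom.Xformable(prim).ComputeLocalToWorldTransform(time)
            print(prim.GetPath(), len(points), "points,", len(counts), "faces")
            # ...upload the transformed points/indices to GPU acceleration
            # structures for path tracing.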
Learn how CannonDesign has incorporated NVIDIA's RTX technology into its visualization workflows. During this presentation, we'll discuss how CannonDesign is leveraging the power of the new Quadro RTX video cards to optimize rendering times using V-Ray Next and Unreal Engine. We'll share our evolutionary path to better rendering solutions, our initial challenges with RTX, and our current workflow through case studies. This session will be of interest to attendees with a basic understanding of visualization workflows.
We'll provide an overview of new techniques developed by ZeroLight using NVIDIA's Volta and Turing GPUs to enhance real-time 3D visualization in the automotive industry for compelling retail experiences. We'll cover the challenges involved in integrating real-time ray-traced reflections at 60 fps in 4K and how future developments using DXR and NVIDIA RTX will enable improvements to both graphics and performance. We'll also discuss the challenges of achieving state-of-the-art graphical quality in virtual reality. Specifically, we'll explain how the team created a compelling commercial VR experience using the StarVR One headset and its eye-tracking capabilities for foveated rendering.
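As a simplified illustration of how eye-tracking data can drive foveated rendering, the sketch below picks a coarser shading rate as a pixel's angular distance from the tracked gaze point grows; the thresholds and block sizes are hypothetical, not ZeroLight's or StarVR's.

    # Illustrative foveated-rendering policy: shade at full rate near the gaze
    # point and progressively coarser in the periphery. Thresholds are hypothetical.
    import math

    def shading_rate(pixel, gaze, pixels_per_degree):
        """Return a (width, height) coarse-shading block size for this pixel."""
        dist_px = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
        ecc_deg = dist_px / pixels_per_degree   # eccentricity from gaze, in degrees
        if ecc_deg < 5.0:
            return (1, 1)    # fovea: full-rate shading
        elif ecc_deg < 15.0:
            return (2, 2)    # near periphery: one shade per 2x2 block
        else:
            return (4, 4)    # far periphery: one shade per 4x4 block

    # Example: a pixel 600 px from the gaze point at ~20 px/degree is ~30 degrees out.
    print(shading_rate((1400, 800), (800, 800), 20.0))   # -> (4, 4)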
Large AEC projects involve complex structural design and validation tools, but they also benefit from high-end visualization and VR for true-to-scale (1:1) immersion and an understanding of volumes. CADMakers has been a very early adopter of Dassault Systèmes' 3DEXPERIENCE platform, which combines the legacy of over 20 years of CATIA CAD excellence with advanced rendering material support and native VR immersion, without the need for external tools. This presentation provides a unique inside view into today's and future possibilities for decision-making in building design, leveraging integrated virtual reality and visualization experiences that happen directly in the CAD tools of the 3DEXPERIENCE platform. The talk will present some of the latest GPU-intensive 3DEXPERIENCE achievements at CADMakers, including how the platform is used for building construction simulation and high-end material validation for realistic AEC design reviews, as well as actual VR usage showing the graphics performance gains obtained on large AEC projects and how VR SLI enables 90 FPS immersion for multi-user, multi-location VR reviews. The talk will use actual AEC datasets from buildings CADMakers designed with 3DEXPERIENCE.
We'll discuss how advancements in GPU technology, real-time ray tracing, and virtual reality furthered the building design process of a global design firm. Our talk will trace the steps we took to move beyond earlier approaches to digital technology in building design through the adoption, planning, and implementation of new tools. We'll cover our firm's infrastructure and cloud technology, and conclude with a full look at how we deploy and embrace advanced technologies to enhance designs, enrich internal and client communication, and ultimately deliver buildings that improve our communities.
We'll discuss our work on mission display computers, which play an important role in imaging applications such as digital moving maps, 360° situational awareness, surveillance, embedded training, and degraded visual environments. These systems are designed with advanced graphics capability to drive multiple independent displays with video from multiple sources. The displayed video can include inputs from multiple sensors, generated digital map video, symbology, and metadata from a variety of sources; this information is overlaid to provide instantaneous independent views. We'll explain our flexible mission display computer design, which uses four VPX2-1220 single-board computer modules and a video mixer built around an NVIDIA Pascal-based VPX3U-P5000-SDI-8IO module. The GPU-based mixer provides the increased programmability and versatility required of a mission display computer's video capabilities.
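As a simplified illustration of the compositing step such a mixer performs, the sketch below alpha-blends a symbology layer over a sensor video frame; it runs in NumPy on the CPU purely for clarity, whereas on the mixer this work happens per frame on the GPU.

    # Illustrative overlay composite: symbology (RGBA) blended onto sensor video (RGB).
    import numpy as np

    def composite(sensor_frame, overlay_rgba):
        """sensor_frame: HxWx3 uint8; overlay_rgba: HxWx4 uint8 with alpha."""
        alpha = overlay_rgba[..., 3:4].astype(np.float32) / 255.0
        overlay = overlay_rgba[..., :3].astype(np.float32)
        base = sensor_frame.astype(np.float32)
        out = overlay * alpha + base * (1.0 - alpha)   # standard alpha blend
        return out.astype(np.uint8)

    # Example with synthetic data: a 1080p sensor frame and a symbology layer.
    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
    symbology = np.zeros((1080, 1920, 4), dtype=np.uint8)
    symbology[500:520, :, :] = (0, 255, 0, 200)   # a semi-transparent green bar
    mixed = composite(frame, symbology)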
We'll describe PLDC, a capability embedded inside NuFlare's MBM-1000 multi-beam mask writer that performs semiconductor mask-process correction. PLDC applies pixel-level dose correction, improving the quality and reliability of the shapes printed on the mask. We'll explain how PLDC processes 540 TB of mask data in 10 hours, a feat that would not be possible without advances in GPU computing.
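For context on the stated figure, a quick back-of-the-envelope calculation shows what processing 540 TB in 10 hours implies as a sustained data rate.

    # 540 TB processed in 10 hours implies roughly 15 GB/s sustained throughput.
    data_bytes = 540e12
    seconds = 10 * 3600
    print(f"{data_bytes / seconds / 1e9:.0f} GB/s")   # -> 15 GB/s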