This presentation will show how Pixar uses GPU technology to empower artists in the animation and lighting departments. By providing our artists with high-quality, interactive visual feedback, we enable them to spend more time making creative decisions. Animators interactively pose characters in order to create a performance. When features like displacement, fur, and shadows become critical for communicating the story, it is vital to be able to represent these visual elements in motion at interactive frame rates. We will show Presto, Pixar's proprietary animation system, which uses GPU acceleration to deliver real-time feedback during the character animation process, using examples from Pixar's recent films. Lighting artists place and adjust virtual lights to create the mood and tone of the scene as well as guide the audience's attention. A physically-based illumination model allows these artists to create visually-rich imagery using simpler and more direct controls. We will demonstrate our interactive lighting preview tool, based on this model, built on NVIDIA's OptiX framework, and fully integrated into our new Katana-based production workflow.
Learn about the latest breakthroughs and offerings in NVIDIA's Advanced Rendering Solutions, which scale smoothly from local GPU rendering to remote supercomputer clusters. New capabilities and possibilities in Iray® and mental ray® will be explored and demonstrated, along with what's possible with the latest in NVIDIA OptiX for accelerating custom ray tracing development. Industry trends and production examples will also be explored as advances in both interactive and production rendering possibilities continue to revolutionize workflows.
This session will cover everything developers need to get started with ray tracing in OptiX, including OptiX C and C++ APIs, the execution model, acceleration structures, programmable entry points, and best practices. We will also cover exciting customer use cases and the new OptiX Prime API that provides to-the-metal ray tracing without shading or recursion.
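The execution model named above — a ray-generation entry point that launches rays, which are tested against geometry and then dispatched to a closest-hit or miss program — can be sketched in miniature. All names below are illustrative Python stand-ins for the concepts, not the OptiX API:

```python
import math

# Hypothetical stand-ins for OptiX's programmable entry points: in OptiX these
# are CUDA programs bound to a context; here they are plain Python callables.

def intersect_sphere(origin, direction, center, radius):
    """Analytic ray-sphere test; returns the nearest positive hit distance or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-6 else None

def closest_hit(t):
    return ("hit", t)

def miss():
    return ("miss", float("inf"))

def trace(origin, direction, spheres):
    """Mimics tracing a single ray: find the nearest intersection across all
    geometry, then invoke the closest-hit or miss 'program'."""
    best = None
    for center, radius in spheres:
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best):
            best = t
    return closest_hit(best) if best is not None else miss()

def ray_generation(width, spheres):
    """Mimics the ray-generation entry point: one ray per 'pixel' of a 1D image."""
    results = []
    for x in range(width):
        origin = (-1.0 + 2.0 * x / (width - 1), 0.0, -5.0)
        results.append(trace(origin, (0.0, 0.0, 1.0), spheres))
    return results
```

For example, `ray_generation(5, [((0.0, 0.0, 0.0), 1.0)])` launches five parallel rays toward a unit sphere; the center ray hits at distance 4. In OptiX proper, the same structure is expressed as separate CUDA programs compiled and bound through the host API.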
We introduce GVDB Sparse Volumes, a new offering within NVIDIA DesignWorks focused on high-quality raytracing of sparse volumetric data for motion pictures. Based on the VDB topology of Museth, with a novel GPU-based data structure and API, GVDB is designed for efficient compute and raytracing on a sparse hierarchy of grids. Raytracing on the GPU is accelerated with indexed memory pooling, 3D texture atlas storage, and a new hierarchical traversal algorithm. GVDB integrates with NVIDIA OptiX and is developed as an open source library as part of DesignWorks.
In this session we describe our GPU-accelerated computing service, which supports several internal business processes in a large-scale company setup. The service supports diverse computational needs such as on-demand rendering, mesh optimization, a Massive Multiplayer Online Game (MMO), product visualizations, and other demanding computational tasks. We present the architectural considerations for a service-oriented computational framework and the practical learnings and opportunities encountered while developing an enterprise system using NVIDIA technologies such as CUDA, OptiX, OpenGL, and OpenCL. Our aim is to share knowledge and present LEGO's vision for a GPU-accelerated computational platform as a business-driven technology.
We present our findings on using the NVIDIA OptiX framework to simulate the scattering of electrons as encountered in scanning electron microscope environments. In particular, we discuss how we implemented volume scattering and coplanar material transition boundaries with varying material properties within the framework. The results have been verified against established CPU-based simulation packages: while achieving comparable accuracy, significant speedups are realized.
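At its core, the volume-scattering step of such a simulation is Monte Carlo sampling of exponentially distributed free paths between interaction events. A minimal, hypothetical sketch of that idea (not the authors' code) is:

```python
import math
import random

def transmitted_fraction(thickness, mean_free_path, n_particles, seed=1):
    """Monte Carlo estimate of unscattered transmission through a slab.
    Each particle travels an exponentially distributed free path; it counts
    as transmitted if the sampled path exceeds the slab thickness.
    A toy model of the volume-scattering step, not the actual simulator."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        # Inverse-transform sampling of an exponential distribution;
        # 1 - random() avoids log(0) since random() lies in [0, 1).
        free_path = -mean_free_path * math.log(1.0 - rng.random())
        if free_path > thickness:
            transmitted += 1
    return transmitted / n_particles
```

For a slab one mean free path thick, the estimate converges to exp(-1) ≈ 0.368, the analytic Beer-Lambert result. In the OptiX implementation, each such sampled step would become a ray segment tested against the material boundaries.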
Learn how creative professionals harness the power of Adobe® After Effects® CS6 and NVIDIA GPUs to accelerate the motion graphics workflow with new 3D ray-traced rendering. Based on NVIDIA® OptiX™ technology, the new renderer simplifies the design of realistic geometric text and shapes in 3D space with up to 27x faster performance on NVIDIA® Quadro® GPUs.
Adobe After Effects CS6 unveils an amazing new 3D ray-traced rendering engine based on NVIDIA OptiX technology with GPU acceleration of up to 50x faster than a CPU alone. This enables simple and quick designs of realistic geometric text and shapes in 3D space. Motion graphics artists can now create more physically accurate scenes with beautiful results such as reflections, transparency, soft shadows, and depth-of-field blur directly in After Effects. GPU-accelerated ray tracing drastically improves the workflow by enabling motion graphics artists to develop these 3D effects entirely within After Effects.
Learn how to use NVIDIA OptiX to quickly develop high performance ray tracing applications for interactive rendering, offline rendering, or scientific visualization. This session will explore the latest available OptiX version.
The tremendous successes that GPUs have had in accelerating molecular simulations must continue to be matched by advances in their application to challenging simulation preparation, analysis, and visualization tasks. We will describe how the latest developments in the molecular visualization tool VMD exploit GPUs using exciting new features of CUDA, OpenACC, EGL, and OptiX to accelerate key science tasks on clouds, clusters, and petascale computers. We will summarize our early experiences and performance results on GPU-accelerated OpenPOWER platforms with an eye toward the challenges and opportunities posed by the upcoming DOE Summit and Sierra systems.
We present a novel technique for visualization of scientific data with compute operators and multi-scatter ray tracing entirely on the GPU. Our source data consists of a high-resolution simulation using point-based wavelets, a representation not supported by existing tools. To visualize this data and support dynamic, time-based rendering, our approach is inspired by OpenVDB from motion pictures, which uses a hierarchy of grids similar to AMR. We develop GVDB, a ground-up implementation with tree traversal, compute, and ray tracing via OptiX, all on the GPU. GVDB enables multi-scatter rendering at 200 million rays/sec and full-volume compute operations in a few milliseconds on datasets up to 4,200^3, entirely in GPU memory.
We'll explore NVIDIA GVDB Voxels, a new open source SDK framework for generic representation, computation, and rendering of voxel-based data. We'll introduce the features of the new SDK and cover applications and examples in motion pictures, scientific visualization, and 3D printing. NVIDIA GVDB Voxels, based on GVDB Sparse Volume technology and inspired by OpenVDB, manipulates large volumetric datasets entirely on the GPU using a hierarchy of grids. The second part of the talk will cover in-depth use of the SDK, with code samples, and coverage of the design aspects of NVIDIA GVDB Voxels. A sample code walk-through will demonstrate how to build sparse volumes, render high-quality images with NVIDIA OptiX integration, produce dynamic data, and perform compute-based operations.
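The core indexing idea behind such a hierarchy of grids can be illustrated with a tiny two-level sparse volume. Everything here (class name, brick size) is an illustrative assumption and far simpler than the actual SDK:

```python
BRICK = 8  # voxels per brick edge; GVDB uses a configurable hierarchy of grids

class SparseVolume:
    """Minimal two-level sparse grid in the spirit of VDB/GVDB: a hash map of
    occupied bricks, each a dense BRICK**3 array. Real GVDB adds more tree
    levels, pooled GPU memory, and a 3D texture atlas; this shows only the
    indexing idea, where empty space costs no storage."""

    def __init__(self, background=0.0):
        self.background = background
        self.bricks = {}  # (bx, by, bz) -> flat list of BRICK**3 values

    def _split(self, x, y, z):
        # Split a global voxel coordinate into a brick key and a local offset.
        key = (x // BRICK, y // BRICK, z // BRICK)
        offset = ((x % BRICK) * BRICK + (y % BRICK)) * BRICK + (z % BRICK)
        return key, offset

    def set(self, x, y, z, value):
        key, offset = self._split(x, y, z)
        brick = self.bricks.setdefault(key, [self.background] * BRICK**3)
        brick[offset] = value

    def get(self, x, y, z):
        key, offset = self._split(x, y, z)
        brick = self.bricks.get(key)
        return self.background if brick is None else brick[offset]
```

Writing a single voxel allocates exactly one brick; reads of untouched space return the background value without allocating anything, which is what makes sparse ray traversal skip empty regions cheaply.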
In this session we will discuss the challenges and benefits of interactively visualizing large scenes in modern big-budget VFX-driven movies. We will share some examples of the scale and complexity we experienced in our recent productions at MPC and the value of being able to visualize them without the need to go through long offline render processes. We will show initial results of our work using NVIDIA's OptiX framework and Fabric Engine to assemble and render large scenes in an interactive environment, taking advantage of the power of high-end GPUs.
Virtual testing is the key to the development of ADAS and HAD systems. Research projects at the national (PEGASUS) and European (Enable-S3) levels have been set up explicitly to define methods and quality criteria for the testing of HAD functions, and they identify the virtual domain as one of their top priorities. As vehicles depend increasingly on sensors like LIDAR, RADAR, and SONAR, an accurate representation of these sensors for test and validation purposes is mandatory. Sensor data will flow into deep learning neural networks on NVIDIA DriveWorks, or it will be used in software-in-the-loop (SiL) or hardware-in-the-loop (HiL) test setups using NVIDIA DRIVE PX 2.
By integrating NVIDIA's OptiX system for real-time GPU raytracing into a DirectX 9-based engine, CCP Games enables high-quality raytraced player portraits for the single-shard MMO EVE Online, reusing the game's assets and pipeline. We selectively add stochastic effects while closely maintaining the look of the DX9-based renderer that Art Direction aimed for. In this talk we approach OptiX from the point of view of a programmer familiar with DirectX, discuss integrating these two systems, and show how we reproduced some DirectX-based effects like transparency and subsurface scattering within OptiX.
Learn in this session how AUDI AG and its partners use OptiX as a unified platform for simulating perception sensors that rely on different physical measurement principles, e.g. video camera, LIDAR, and ultrasonic. The aim is to generate synthetic sensor data with realistic measurement errors for testing Advanced Driver Assistance Systems. Get details about the challenges they faced while implementing the tools needed to validate the sensor models, and join the discussion as they describe the upcoming challenges around real-time ray tracing and advanced material descriptions when multiple sensors are simulated simultaneously.
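To give a flavor of ray-tracing-based sensor simulation, here is a toy 2D LIDAR model: rays are cast against analytic targets and the returned range is optionally perturbed to mimic measurement error. This is a hypothetical sketch, not AUDI's implementation:

```python
import math
import random

def simulate_lidar_scan(targets, n_beams=8, max_range=100.0,
                        noise_sigma=0.0, seed=0):
    """Toy 2D LIDAR: cast n_beams rays from the origin over 360 degrees
    against circular targets (cx, cy, radius) and return one range per beam,
    optionally perturbed by Gaussian noise to mimic measurement error.
    Misses report max_range, as many real scanners do."""
    rng = random.Random(seed)
    ranges = []
    for i in range(n_beams):
        angle = 2.0 * math.pi * i / n_beams
        dx, dy = math.cos(angle), math.sin(angle)
        best = max_range
        for cx, cy, radius in targets:
            # Ray-circle intersection: ray origin at (0, 0), direction (dx, dy).
            b = -(dx * cx + dy * cy)
            disc = b * b - (cx * cx + cy * cy - radius * radius)
            if disc >= 0.0:
                t = -b - math.sqrt(disc)
                if 1e-6 < t < best:
                    best = t
        if best < max_range and noise_sigma > 0.0:
            best += rng.gauss(0.0, noise_sigma)
        ranges.append(best)
    return ranges
```

With a single target of radius 1 at (10, 0), the beam pointing along +x reports a range of 9 while the other beams report max range; a GPU version replaces the inner loop with hardware-accelerated scene traversal, and per-sensor material responses replace the simple hit test.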
OptiX has broken some major barriers recently by enabling out-of-GPU-core memory rendering and by adding a CPU rendering back-end when an OptiX-capable GPU is not present in the system. OptiX users and CUDA developers will be interested in how we accomplished these feats within the existing GPU architecture. This talk will provide a brief introduction to OptiX and then dive into what the new features provide. We will then go under the covers and show how we pulled it off.
Learn the latest approaches in leveraging GPUs for the fastest possible ray tracing results from experts developing and leveraging the NVIDIA OptiX ray tracing engine, the team behind NVIDIA Iray, and those making custom renderers. Multiple rendering techniques, GPU programming languages, out-of-core rendering, and optimal hardware configurations will be covered in this cutting-edge discussion.
Learn the latest approaches in leveraging GPUs for the fastest possible ray tracing results from experts developing and leveraging the NVIDIA OptiX ray tracing engine and those making custom renderers. Multiple ray tracing techniques, out-of-core rendering, multi-GPU support, optimal hardware configurations, and new opportunities with Kepler GPUs will be covered in this up-to-date discussion of the fastest-growing trend in advanced rendering.
OptiX is the foremost platform for GPU ray tracing. It exposes the extreme ray tracing performance of the GPU to typical developers, while hiding most of the complexity usually associated with ray tracing. This tutorial will cover everything developers need to get started with ray tracing in OptiX, including the OptiX C and C++ APIs, the execution model, acceleration structures, programmable entry points, and best practices.
We will cover advanced topics in OptiX. Examples include implementing advanced rendering and sampling algorithms, dealing with large datasets and OptiX's virtual memory system, new API features like callable programs, and CUDA interoperability. We will also cover performance analysis and optimization in OptiX, and plan to leave plenty of time for questions.
Learn how GPU computing is revolutionizing performance and possibilities in both interactive and production rendering. The latest capabilities of Iray 2013 will be explored and demonstrated, along with what's possible with the latest OptiX for accelerating custom ray tracing solutions. Trends in the industry, along with guidelines for configuring optimal rendering, will also be discussed.
OptiX is the industry's premier ray tracing engine in terms of performance, functionality, and adoption. We will present three recent advances in OptiX. First, the renovation of the core of OptiX, including an LLVM-based compiler pipeline, which brings several performance benefits and opens the door for long-desired new features. Second, OptiX VCA support, which allows OptiX-based applications to transparently use the NVIDIA Visual Computing Appliance for massively parallel, shared, remote rendering. Third, we will share exciting results from our top partners and their recent successes with OptiX.
Learn how to implement a physically based ray tracing renderer with NVIDIA OptiX that supports the Material Definition Language (MDL). The concepts and specific renderer design decisions needed to support the fundamental building blocks of the MDL specification are explained using a global illumination path tracer implemented with OptiX as an example. Special attention has been given to the material description code inside that renderer, which expresses complex material hierarchies via standard C++ mechanisms in a readable manner, with the goal of eventually generating this code automatically from MDL files via the MDL SDK.
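The idea of mapping layered material building blocks onto ordinary class composition can be sketched as follows. The class names are illustrative assumptions, not the MDL or renderer API, and the evaluation is reduced to a constant tint per component:

```python
# A sketch of expressing a material hierarchy through plain language
# constructs, analogous to mapping MDL building blocks onto C++ classes.

class DiffuseReflection:
    def __init__(self, tint):
        self.tint = tint
    def evaluate(self):
        # Real BSDFs depend on directions; a constant tint suffices here.
        return self.tint

class SpecularReflection:
    def __init__(self, tint):
        self.tint = tint
    def evaluate(self):
        return self.tint

class WeightedLayer:
    """Blend an upper BSDF layer over a base by a fixed weight, in the spirit
    of MDL's layering building blocks."""
    def __init__(self, weight, layer, base):
        self.weight, self.layer, self.base = weight, layer, base
    def evaluate(self):
        w = self.weight
        return tuple(w * l + (1.0 - w) * b
                     for l, b in zip(self.layer.evaluate(),
                                     self.base.evaluate()))

# A 30% glossy coat over a red diffuse base, built by plain composition:
material = WeightedLayer(0.3,
                         SpecularReflection((1.0, 1.0, 1.0)),
                         DiffuseReflection((0.8, 0.1, 0.1)))
```

Because each node is an ordinary object, arbitrarily deep material hierarchies compose naturally, which is exactly what makes automatic code generation from a declarative material description tractable.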
NVIDIA's Material Definition Language provides a powerful tool for describing complex physically based materials. Using the MDL SDK, generation of actual GPU shader code from an MDL file is usually done in an offline process. We'll introduce the NVRTC CUDA runtime compilation library, and then demonstrate how it can be employed to build shader programs for the OptiX ray-tracing engine within a running rendering application. Using NVRTC not only relieves end users from having to install an NVIDIA CUDA development environment, it also enables the creation of compact and efficient shader code that can be specialized at runtime. We'll demonstrate a prototypical implementation that has been integrated in ESI's IC.IDO decision-making platform.
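The runtime-specialization idea — generating shader source with constants baked in and compiling it inside the running application — can be illustrated with Python's built-in compiler standing in for NVRTC. This is a hypothetical sketch of the concept, not the NVRTC API:

```python
def make_specialized_shader(roughness):
    """Generate source code with a material constant baked in and compile it
    at runtime, analogous to how NVRTC compiles CUDA source strings to PTX
    inside a running application (Python's compile/exec stands in here)."""
    source = (
        "def shade(n_dot_l):\n"
        f"    return max(0.0, n_dot_l) ** {1.0 / roughness}\n"
    )
    namespace = {}
    exec(compile(source, "<generated>", "exec"), namespace)
    return namespace["shade"]
```

Baking the exponent into the source lets the compiler fold it into the generated code instead of reading a uniform at every invocation; NVRTC enables the same trick for CUDA device code without shipping a full offline toolchain to end users.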
Learn about the NVIDIA OptiX ray tracing engine, a sophisticated library for performing GPU ray tracing. We'll provide an overview of the OptiX ray tracing pipeline and the programmable components that allow for the implementation of many algorithms and applications. OptiX can be used in many domains, ranging from rendering to acoustic modeling to scientific visualization. Several case studies will be presented describing the benefits of integrating OptiX into third-party applications.
Learn how Iray Server, Quadro VCA, and NVIDIA DGX-1 deliver flexible workflows with interactive remote rendering. This talk will discuss the workflow advantages of interactive rendering on a remote system, and specifically how Iray and OptiX support interactive rendering powered by a network-attached GPU accelerator, whether that is Iray Server, Quadro VCA, or NVIDIA DGX-1.
Having moved our film lighting pipeline to a ray-traced, physically based illumination model, we will demonstrate in this presentation the viability and advantages of using OptiX along with NVIDIA GPUs to obtain interactive lighting feedback on real production shots and assets.