The NVIDIA VisualFX SDK provides game developers with a turnkey solution for cinematic effects such as interactive fire and smoke, fur, waves, global illumination and more. These complex, realistic effects are packaged in an easy-to-use SDK to facilitate integration and tuning in any game engine. In this session we will provide an overview of the different VisualFX SDK modules, the roadmap, and case studies of how they have been used successfully.
Learn how to add volumetric effects to your game engine - smoke, fire and explosions that are interactive, more realistic, and can actually render faster than traditional sprite-based techniques. Volumetrics remain one of the last big differences between real-time and offline visual effects. In this talk we will show how volumetric effects are now practical on current GPU hardware. We will describe several new simulation and rendering techniques, including new solvers, combustion models, optimized ray marching and shadows, which together can make volumetric effects a practical alternative to particle-based methods for game effects.
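The optimized ray marching mentioned above can be illustrated with a minimal CPU-side sketch: march samples through a density field, accumulate Beer-Lambert transmittance, and terminate early once the ray is nearly opaque. The density function here is a hypothetical stand-in for a real 3D texture fetch, and the names are illustrative, not the SDK's API:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical smooth blob of smoke centered at the origin,
// standing in for a 3D texture lookup into a simulated volume.
float densityAt(float x, float y, float z) {
    float r2 = x * x + y * y + z * z;
    return std::max(0.0f, 1.0f - r2);
}

// Marches numSteps samples along a ray and returns accumulated opacity.
// sigma is the extinction coefficient; transmittance follows Beer-Lambert.
float marchOpacity(float ox, float oy, float oz,
                   float dx, float dy, float dz,
                   float stepSize, int numSteps, float sigma) {
    float transmittance = 1.0f;
    for (int i = 0; i < numSteps; ++i) {
        float t = stepSize * (i + 0.5f);  // sample at the segment midpoint
        float d = densityAt(ox + dx * t, oy + dy * t, oz + dz * t);
        transmittance *= std::exp(-sigma * d * stepSize);
        if (transmittance < 0.01f) break; // early ray termination
    }
    return 1.0f - transmittance;
}
```

In a shipping renderer this loop runs per pixel in a shader, with adaptive step sizes and shadow rays adding the optimizations the talk describes.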
This talk presents several rendering techniques behind Batman: Arkham Origins (BAO), the third installment in the critically acclaimed Batman: Arkham series, focusing on DirectX 11 features developed in collaboration with NVIDIA specifically for the high-end PC enthusiast. We will show how tessellation significantly improves the visuals of Batman's iconic cape and brings our deformable snow technique from the consoles to the next level on PC. We will also present physically based particles with PhysX, particle fields with Turbulence, improved shadows, temporally stable dynamic ambient occlusion, bokeh depth of field and improved anti-aliasing. Additionally, other improvements to image quality, visual fidelity and compression will be showcased, such as improved detail normal mapping via Reoriented Normal Mapping, and how chroma subsampling at various stages of our lighting pipeline was essential in doubling the size of our open world while still fitting on a single DVD.
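Reoriented Normal Mapping is a published detail-normal blending technique: the detail normal is rotated so that its reference frame follows the base normal, instead of being naively added. A small sketch of the standard RNM blend follows; the vector type and function names are illustrative, not BAO's actual shader code:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Reoriented Normal Mapping blend: both inputs are unit tangent-space
// normals with +z pointing away from the surface. The detail normal d is
// rotated from the flat frame (0,0,1) onto the base normal b's frame.
Vec3 blendRNM(Vec3 b, Vec3 d) {
    Vec3 t{ b.x, b.y, b.z + 1.0f };   // base frame quaternion shortcut
    Vec3 u{ -d.x, -d.y, d.z };        // detail normal, xy negated
    float s = dot(t, u) / t.z;
    return Vec3{ t.x * s - u.x, t.y * s - u.y, t.z * s - u.z };
}
```

The blend is a rotation, so a flat base normal returns the detail normal unchanged, and a flat detail normal returns the base normal, which is exactly the behavior naive normal addition lacks.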
Android continues its meteoric rise as the world's dominant mobile operating system. Every day, developers large and small discover new ways to delight users, but getting noticed is increasingly difficult. The latest NVIDIA® Tegra® K1 processors provide developers with a host of new features to differentiate their titles and get them flying above the rest of the crowd. During this session, discover the new CPU, GPU and multimedia features the latest Tegra processors offer, and learn how to use them to enhance and extend your applications. As an example of the kind of differentiation the Tegra K1 makes possible, Allegorithmic and RUST Ltd will provide a hands-on demo of physically based shading (PBR), dynamic texturing and high-resolution GPU-based particle throwing using the latest Allegorithmic Substance texturing pipeline.
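As background for the physically based shading demo, here is a sketch of the GGX (Trowbridge-Reitz) normal distribution function that most modern PBR pipelines build on; this illustrates the general technique, not Allegorithmic's specific implementation:

```cpp
#include <cmath>

// GGX normal distribution function D(h): how microfacet normals are
// distributed around the macro surface normal. NdotH is the cosine of
// the angle between surface normal and half-vector; `roughness` is the
// perceptual roughness with the common alpha = roughness^2 remapping.
float ggxNDF(float NdotH, float roughness) {
    const float PI = 3.14159265358979f;
    float alpha = roughness * roughness;
    float a2    = alpha * alpha;
    float denom = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
    return a2 / (PI * denom * denom);
}
```

At NdotH = 1 the function peaks at 1 / (pi * alpha^2), so lower roughness gives a taller, tighter specular highlight.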
This session presents the technologies behind NVIDIA GRID™ and the future of game engines and application delivery running in the cloud. The audience will learn about the key components of NVIDIA GRID (optimal capture, efficient compression, fast streaming and low-latency rendering) that make cloud gaming and application delivery possible. Franck will demonstrate, with live demos, how these components fit together, how to use the GRID APIs, and how to optimize their usage to deliver the best possible experience.
Learn techniques for using the GPU efficiently and for detecting and eliminating driver overhead. See the direction OpenGL is heading to embrace multi-threaded, multi-core CPU application designs, and how the GPU itself can construct and update an application's rendering data structures with very little CPU intervention. We will also explore subdivision surfaces and how to get them automatically GPU-accelerated with a new extension, along with its hand-in-glove companion, PTEX support in OpenGL. Finally, while OpenGL is the most broadly available open API for 3D graphics, it is also the most fragmented. We will explore Regal, an open source library that illustrates how to de-fragment the OpenGL landscape and keep your graphics back-end code from becoming a patchwork of platform #ifdefs.
3D animation is the art form of the present and the future, with hundreds of millions of people drawn to its emotional power in movie theaters and games every year. Mixamo recently developed a facial capture and animation technology that enables anybody to create compelling animated content, with the performer's expressions immediately reflected on a character's face. The technology was originally developed for 3D professionals, but with the recent introduction of new-generation mobile GPU hardware supporting OpenCL APIs, such as the Tegra K1, it is now possible to port the technology to mobile devices. In this presentation we will introduce numerical approaches to facial motion capture and animation based on a mixture of global and local models of human facial expressions and shape. The presenter will also go into the details of implementing the real-time technology on a Tegra K1 device.
Topics covered in this session include: Minko: game development & real-time graphics applications for web & mobile platforms
Panelists: Scott Budman (Business & Technology Reporter, NBC) Jeff Herbst (Vice President of Business Development, NVIDIA) Jens Hortsmann (Executive Producer & Managing Partner, Crestlight Venture Productions) Pat Moorhead (President and Principal Analyst, Moor Insights & Strategy) Bill Reichert (Managing Director, Garage Technology Ventures)
Topics covered in this session include: Video game development & technology consulting Panelists: Scott Budman (Business & Technology Reporter, NBC) Jeff Herbst (Vice President of Business Development, NVIDIA) Jens Hortsmann (Executive Producer & Managing Partner, Crestlight Venture Productions) Pat Moorhead (President and Principal Analyst, Moor Insights & Strategy) Bill Reichert (Managing Director, Garage Technology Ventures)
Topics covered in this session include: GPU-accelerated computer vision for mobile AR applications