GTC ON-DEMAND

Topic(s) Filter: Gaming and AI
Abstract:

The NVIDIA VisualFX SDK provides game developers with a turnkey solution for enabling cinematic effects like interactive fire and smoke, fur, waves, global illumination and more in games. All of these complex, realistic effects are provided in an easy-to-use SDK that facilitates integration and tuning in any given game engine. In this session we will provide an overview of the different VisualFX SDK modules, the roadmap, and some case studies of how they have been used successfully.

 
Topics:
Gaming and AI, Rendering & Ray Tracing, Visual Effects & Simulation
Type:
Tutorial
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4618
 
Abstract:
Fur rendering is one of the most important, but computationally expensive, tasks in digitally creating animal creatures for films and games. We explain how features of recent GPUs can be used to create visually realistic rendering and simulation of fur and hair. Our fur technology consists of: 1) an authoring pipeline to prepare hair assets in artist-friendly tools; 2) a simulation engine to move hairs on skinned, animated characters; and 3) a rendering and tessellation engine that creates millions of hair primitives on the fly, entirely on the GPU. We also share real-world challenges we faced in integrating the fur module into highly anticipated upcoming games such as The Witcher 3 and Call of Duty: Ghosts.
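To make the "hair primitives on the fly" idea concrete, here is a rough C++ sketch of the expansion step such systems typically use: render strands interpolated from a few authored guide strands via barycentric weights. The function and type names are illustrative assumptions, not NVIDIA's actual fur API.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // A guide hair is a polyline of control vertices produced by the authoring and simulation steps.
    using Strand = std::vector<Vec3>;

    static Vec3 blend3(const Vec3& a, const Vec3& b, const Vec3& c, float wa, float wb, float wc) {
        return { wa * a.x + wb * b.x + wc * c.x,
                 wa * a.y + wb * b.y + wc * c.y,
                 wa * a.z + wb * b.z + wc * c.z };
    }

    // Hypothetical expansion: one render strand per hair-root sample on a scalp triangle, blending
    // the triangle's three guide strands with the sample's barycentric weights (wa, wb, wc).
    // On the GPU this blend would run per tessellated vertex; here it is plain CPU code for clarity.
    Strand interpolateRenderStrand(const Strand& g0, const Strand& g1, const Strand& g2,
                                   float wa, float wb, float wc) {
        std::size_t n = std::min({ g0.size(), g1.size(), g2.size() });
        Strand out(n);
        for (std::size_t i = 0; i < n; ++i)
            out[i] = blend3(g0[i], g1[i], g2[i], wa, wb, wc);
        return out;
    }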
 
Topics:
Gaming and AI, Combined Simulation & Real-Time Visualization, Real-Time Graphics, Visual Effects & Simulation
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4179
 
Abstract:

Learn how to add volumetric effects to your game engine - smoke, fire and explosions that are interactive, more realistic, and can actually render faster than traditional sprite-based techniques. Volumetrics remain one of the last big differences between real-time and offline visual effects. In this talk we will show how volumetric effects are now practical on current GPU hardware. We will describe several new simulation and rendering techniques, including new solvers, combustion models, optimized ray marching and shadows, which together can make volumetric effects a practical alternative to particle-based methods for game effects.

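The "optimized ray marching and shadows" mentioned above can be illustrated with a minimal, generic C++ sketch of the front-to-back marching loop such techniques build on. The density(p) sampler and all constants are assumptions made for illustration; this is not the presenters' solver.

    #include <cmath>
    #include <functional>

    struct Vec3 { float x, y, z; };
    static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

    // density(p): caller-supplied sampler of the simulated smoke/fire volume (assumed interface).
    using DensityFn = std::function<float(Vec3)>;

    // March a primary ray through the volume, accumulating in-scattered light weighted by
    // transmittance; a short secondary march toward the light provides self-shadowing.
    float marchVolume(Vec3 origin, Vec3 dir, Vec3 toLight, const DensityFn& density,
                      int steps = 128, float stepSize = 0.05f, float sigma = 4.0f) {
        float transmittance = 1.0f;
        float radiance = 0.0f;
        Vec3 p = origin;
        for (int i = 0; i < steps && transmittance > 0.01f; ++i) {
            float d = density(p);
            if (d > 0.0f) {
                // Shadow march: optical depth of the medium between p and the light.
                float shadowDepth = 0.0f;
                Vec3 q = p;
                for (int s = 0; s < 16; ++s) {
                    q = add(q, mul(toLight, stepSize * 2.0f));
                    shadowDepth += density(q) * stepSize * 2.0f;
                }
                float lightAttenuation = std::exp(-sigma * shadowDepth);
                float sampleOpacity = 1.0f - std::exp(-sigma * d * stepSize);
                radiance += transmittance * sampleOpacity * lightAttenuation;
                transmittance *= 1.0f - sampleOpacity;
            }
            p = add(p, mul(dir, stepSize));
        }
        return radiance; // scalar brightness; a real renderer would carry RGB plus emission for fire
    }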
 
Topics:
Gaming and AI, Rendering & Ray Tracing, Visual Effects & Simulation
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4607
 
Abstract:

This talk presents several rendering techniques behind Batman: Arkham Origins (BAO), the third installment in the critically acclaimed Batman: Arkham series. The talk focuses on several DirectX 11 features developed in collaboration with NVIDIA specifically for the high-end PC enthusiast. We will present tessellation, showing how it significantly improves the visuals of Batman's iconic cape and brings our deformable snow technique from the consoles to the next level on PC. We will also cover physically based particles with PhysX, particle fields with Turbulence, improved shadows, temporally stable dynamic ambient occlusion, bokeh depth of field, and improved anti-aliasing. Additionally, other improvements to image quality, visual fidelity, and compression will be showcased, such as improved detail normal mapping via Reoriented Normal Mapping, and how chroma subsampling at various stages of our lighting pipeline was essential in doubling the size of our open world while still fitting on a single DVD.

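Of the features listed, Reoriented Normal Mapping has a compact published formulation (Barre-Brisebois and Hill, "Blending in Detail"). A small C++ sketch of that blend is shown here, assuming normals are sampled in the usual [0,1] texture encoding; it illustrates the general technique, not necessarily the exact variant shipped in the game.

    #include <cmath>

    struct Vec3 { float x, y, z; };
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

    // Reoriented Normal Mapping: rotate the detail normal into the frame of the base normal.
    // Inputs are tangent-space normals exactly as sampled from textures (components in [0,1]).
    Vec3 blendRNM(Vec3 baseSample, Vec3 detailSample) {
        Vec3 t = { baseSample.x * 2.0f - 1.0f,
                   baseSample.y * 2.0f - 1.0f,
                   baseSample.z * 2.0f };             // z intentionally left in [0,2] per the paper
        Vec3 u = { detailSample.x * -2.0f + 1.0f,
                   detailSample.y * -2.0f + 1.0f,
                   detailSample.z *  2.0f - 1.0f };
        float d = dot(t, u);
        Vec3 r = { t.x * d - u.x * t.z,
                   t.y * d - u.y * t.z,
                   t.z * d - u.z * t.z };
        return normalize(r);
    }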
 
Topics:
Gaming and AI, Rendering & Ray Tracing, Real-Time Graphics, Visual Effects & Simulation
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4614
 
Abstract:
The audience will learn about the latest developer tools suite specifically designed to unleash the power of Tegra K1 for Android application developers. The broad scope of this tutorial spans from advanced graphics to compute and multi-core CPU tools, enabling developers to take full advantage of the heterogeneous computing horsepower available. More specifically, compute developers will learn about the tools available to program CUDA on Tegra K1. Graphics developers will be introduced to the new Tegra Graphics Debugger for Tegra K1. This new mobile graphics development tool supports all the advanced features that Tegra K1 has to offer, via OpenGL ES 2.0, 3.0 and OpenGL 4.3. Finally, game developers will see how to manage their Android build configuration and debugging sessions all within the latest Visual Studio 2013, and profile their application to identify hot spots and corresponding call stacks with our brand-new release of Tegra System Profiler.
 
Topics:
Mobile Summit, Debugging Tools & Techniques, Performance Optimization, Gaming and AI
Type:
Tutorial
Event:
GTC Silicon Valley
Year:
2014
Session ID:
SIG4116
 
Abstract:

Android continues its meteoric rise as the world's dominant mobile operating system. Every day developers large and small discover new ways to delight users, but getting noticed is increasingly difficult. The latest NVIDIA® Tegra® K1 processors provide developers with a host of new features to differentiate their titles and get them flying above the rest of the crowd. During this session, discover the new CPU, GPU, and multimedia features the latest Tegra processors offer and learn how to use them to enhance and extend your applications. As an example of the type of differentiation the Tegra K1 makes possible, Allegorithmic and RUST Ltd will provide a hands-on demo of physically based shading (PBR), dynamic texturing and high-resolution, GPU-based particle throwing using the latest Allegorithmic Substance texturing pipeline.

 
Topics:
Mobile Summit, Gaming and AI, Mobile Applications
Type:
Tutorial
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4877
 
Abstract:

This session presents the technologies behind NVIDIA GRID(TM) and the future of game engines and application delivery running in the cloud. The audience will learn about the key components of NVIDIA GRID, like optimal capture, efficient compression, fast streaming, and low-latency rendering, that make cloud gaming and application delivery possible. Franck will demonstrate how these components fit together, how to use the GRID APIs, and how to optimize their usage to deliver the ultimate experience, with live demos.

 
Topics:
GPU Virtualization, Graphics and AI, Gaming and AI
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4159
 
Abstract:
The goal of this session is to show how to create geometric shapes on GPUs, taking advantage of the GPU's tessellation feature, using a state-of-the-art spline technique called PSP splines (PSPS). PSPS are simpler than B-splines in their mathematical form, but much more powerful than NURBS for geometric design. Compared with Bezier, B-spline, and NURBS representations, designing a geometric shape using PSPS is more efficient, more flexible, and more intuitive. In this session we will describe what PSPS are and demonstrate how to implement PSPS directly in GLSL or HLSL in the tessellation stages to create new geometries.
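PSPS itself is not reproduced here (its basis functions are the subject of the talk), but the role the tessellation stages play — evaluating a parametric surface at the (u, v) coordinates emitted by the tessellator — can be shown with the familiar bicubic Bezier case the abstract compares against. A hedged C++ sketch, written CPU-side for clarity:

    #include <array>

    struct Vec3 { float x, y, z; };
    static Vec3 lerp(Vec3 a, Vec3 b, float t) {
        return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
    }

    // De Casteljau evaluation of a cubic Bezier curve from 4 control points.
    static Vec3 bezierCurve(const std::array<Vec3, 4>& p, float t) {
        Vec3 a = lerp(p[0], p[1], t), b = lerp(p[1], p[2], t), c = lerp(p[2], p[3], t);
        return lerp(lerp(a, b, t), lerp(b, c, t), t);
    }

    // Evaluate a bicubic Bezier patch (4x4 control net) at tessellator-generated (u, v).
    // In a real pipeline this body would live in the tessellation evaluation (domain) shader;
    // PSPS would swap in its own, reportedly simpler, basis in place of the Bezier one used here.
    Vec3 evalPatch(const std::array<std::array<Vec3, 4>, 4>& cp, float u, float v) {
        std::array<Vec3, 4> column;
        for (int i = 0; i < 4; ++i)
            column[i] = bezierCurve(cp[i], u);
        return bezierCurve(column, v);
    }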
 
Topics:
Computer Aided Engineering, Product & Building Design, Gaming and AI
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4240
 
Abstract:
Learn how to use the GPU to accelerate 3D registration with Kinect or similar devices in order to capture highly detailed facial performance in real time or at interactive speeds. We describe the energy-based approach that we borrowed from the Hao Li et al. paper published at SGP 2008. We also explain why we can benefit from GPU computation power and achieve higher quality and more detail at interactive speeds. Finally, we elaborate on how real-time performance can be achieved by improving our CUDA-based implementation.
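The energy minimized by this kind of registration is easy to state: a sum of squared residuals between transformed source points and their target correspondences, with each term independent and therefore GPU-friendly. A minimal C++ sketch under the simplifying assumptions of a rigid transform and precomputed correspondences (the cited paper uses a richer, non-rigid energy):

    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Mat3 { float m[3][3]; };

    static Vec3 transformPoint(const Mat3& R, const Vec3& t, const Vec3& p) {
        return { R.m[0][0] * p.x + R.m[0][1] * p.y + R.m[0][2] * p.z + t.x,
                 R.m[1][0] * p.x + R.m[1][1] * p.y + R.m[1][2] * p.z + t.y,
                 R.m[2][0] * p.x + R.m[2][1] * p.y + R.m[2][2] * p.z + t.z };
    }

    // E(R, t) = sum_i || R * src[i] + t - dst[i] ||^2 over precomputed correspondences.
    // On the GPU one thread evaluates each term and a parallel reduction sums them,
    // which is where most of the CUDA speedup in such pipelines comes from.
    float registrationEnergy(const Mat3& R, const Vec3& t,
                             const std::vector<Vec3>& src, const std::vector<Vec3>& dst) {
        float e = 0.0f;
        for (std::size_t i = 0; i < src.size() && i < dst.size(); ++i) {
            Vec3 p = transformPoint(R, t, src[i]);
            float dx = p.x - dst[i].x, dy = p.y - dst[i].y, dz = p.z - dst[i].z;
            e += dx * dx + dy * dy + dz * dz;
        }
        return e;
    }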
 
Topics:
Computer Vision, Virtual Reality & Augmented Reality, Gaming and AI, Real-Time Graphics
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4414
 
Abstract:
Geometric acoustics (GA), which involves directly simulating in real time the acoustic transfer between sound sources and listeners in a virtual space, is considered the holy grail of game audio. We present a GA method and optimizations which, along with the massive parallelism of modern GPUs, allow for immersive sound rendering at interactive frame rates. This talk focuses on optimizations made for Fermi and Kepler GPUs on the two main components of our engine: the ray-acoustic engine and the per-path head-related transfer function (HRTF) renderer. Audio examples will be given using the open-source id Tech 3 engine, comparing original assets from the Quake 3 game rendered via traditional positional audio to the same assets processed through our engine.
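At the core of a geometric-acoustics renderer, each traced source-to-listener path becomes a delayed, attenuated tap in an impulse response that is then convolved with the source signal; per-path HRTF filtering adds direction dependence on top of this. A generic C++ sketch of that accumulation step, assuming paths have already been traced; the types and fields are illustrative, not the presenters' engine:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct AcousticPath {
        float lengthMeters;   // total geometric length of the reflection path
        float energyScale;    // product of surface absorption factors along the path
    };

    // Accumulate each traced path into a mono impulse response at the given sample rate.
    // Delay comes from path length over the speed of sound; amplitude falls off as 1/r.
    // A full engine would instead route each path through a per-direction HRTF filter pair.
    std::vector<float> buildImpulseResponse(const std::vector<AcousticPath>& paths,
                                            float sampleRate, float speedOfSound = 343.0f) {
        float maxDelay = 0.0f;
        for (const AcousticPath& p : paths)
            maxDelay = std::max(maxDelay, p.lengthMeters / speedOfSound);
        std::vector<float> ir(static_cast<std::size_t>(maxDelay * sampleRate) + 1, 0.0f);

        for (const AcousticPath& p : paths) {
            std::size_t tap = static_cast<std::size_t>((p.lengthMeters / speedOfSound) * sampleRate);
            float amplitude = p.energyScale / std::max(p.lengthMeters, 1.0f);  // spherical spreading loss
            ir[tap] += amplitude;
        }
        return ir;
    }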
 
Topics:
Signal and Audio Processing, Virtual Reality & Augmented Reality, Defense, Gaming and AI
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4537
 
Abstract:
This session describes work on making the voxel-based global illumination (GI) approach practical for use in games running on current-generation graphics hardware such as Kepler. Based upon Cyril Crassin's research, a library has been developed that allows applications to render GI effects for large and fully dynamic scenes at 30 frames per second or more, producing soft diffuse indirect lighting and blurry specular reflections, and providing emissive material support. During the session, Alexey will talk about the cone tracing GI algorithm in general and get into the details of scene representation, efficient multi-resolution voxelization, and indirect light gathering.
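The cone-tracing step at the heart of this approach can be outlined in a few lines: march along the cone axis, sample a pre-filtered (mip-mapped) voxel representation at a level of detail matched to the cone's growing footprint, and composite front to back. A hedged C++ sketch, assuming a caller-supplied sampleVoxels(position, lod) that returns pre-filtered radiance and opacity; the actual library also handles voxelization and the multi-resolution scene storage:

    #include <algorithm>
    #include <cmath>
    #include <functional>

    struct Vec3 { float x, y, z; };
    struct VoxelSample { float radiance; float opacity; };   // pre-filtered value at a given LOD

    static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

    // Assumed interface to the mip-mapped voxel scene representation.
    using VoxelSampler = std::function<VoxelSample(Vec3, float)>;

    // Trace one cone: step size and mip level grow with distance so a single lookup per step
    // covers the cone footprint; composite front to back until nearly opaque.
    float traceCone(Vec3 origin, Vec3 dir, float coneHalfAngleTan, float maxDistance,
                    const VoxelSampler& sampleVoxels, float baseVoxelSize = 0.1f) {
        float dist = baseVoxelSize;           // start one voxel out to avoid self-sampling
        float radiance = 0.0f;
        float occlusion = 0.0f;
        while (dist < maxDistance && occlusion < 0.99f) {
            float diameter = std::max(2.0f * coneHalfAngleTan * dist, baseVoxelSize);
            float lod = std::log2(diameter / baseVoxelSize);
            VoxelSample s = sampleVoxels(add(origin, mul(dir, dist)), lod);
            radiance  += (1.0f - occlusion) * s.opacity * s.radiance;   // front-to-back blending
            occlusion += (1.0f - occlusion) * s.opacity;
            dist += diameter * 0.5f;
        }
        return radiance;
    }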
 
Topics:
Real-Time Graphics, Performance Optimization, Gaming and AI, Mobile Applications
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
SIG4114
 
Abstract:

Learn techniques for using the GPU efficiently and for detecting and eliminating driver overhead. See the direction OpenGL is heading to embrace multi-threaded, multi-core CPU app designs, and how the GPU can construct and update app rendering data structures with very little CPU intervention. We will also explore subdivision surfaces and how to get them automatically GPU-accelerated with a new extension; hand in glove with subdivision surfaces is PTEX support in OpenGL. Finally, while OpenGL is the most broadly available open API for 3D graphics, it is also the most fragmented. We will explore Regal, an open-source library that illustrates how to de-fragment the OpenGL landscape and keep your graphics back-end code from becoming a patchwork of platform #ifdefs.

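One widely used remedy for the per-draw driver overhead this talk targets (whether or not it is the talk's exact approach) is multi-draw indirect: the application packs an array of draw commands into a GPU buffer and submits the whole batch with a single GL call. A minimal C++ sketch of that path; it assumes a current GL 4.3 context with functions loaded through GLEW, and that the VAO, index buffer, and per-object storage buffers are already bound (all elided here):

    #include <GL/glew.h>
    #include <vector>

    // Command layout fixed by the OpenGL specification for glMultiDrawElementsIndirect.
    struct DrawElementsIndirectCommand {
        GLuint count;          // index count for this draw
        GLuint instanceCount;  // number of instances
        GLuint firstIndex;     // offset into the bound index buffer
        GLuint baseVertex;     // value added to each index
        GLuint baseInstance;   // starting instance ID
    };

    // Upload one command per object into the indirect buffer, then issue them all at once.
    // Per-object data (transforms, material IDs) would live in buffers indexed by gl_DrawID or
    // baseInstance in the shaders, so the CPU touches the driver only once for this whole batch.
    void submitBatch(GLuint indirectBuffer, const std::vector<DrawElementsIndirectCommand>& cmds) {
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
        glBufferData(GL_DRAW_INDIRECT_BUFFER,
                     static_cast<GLsizeiptr>(cmds.size() * sizeof(DrawElementsIndirectCommand)),
                     cmds.data(), GL_DYNAMIC_DRAW);
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                    nullptr,                                   // commands start at offset 0
                                    static_cast<GLsizei>(cmds.size()),
                                    sizeof(DrawElementsIndirectCommand));
    }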
 
Topics:
Real-Time Graphics, Performance Optimization, Gaming and AI, Rendering & Ray Tracing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4610
 
Abstract:
Sooner or later, the worlds of video games and cinema will converge: we will have video games with the quality of movies, and movies with the interactivity of video games. With a novel stereoscopic 3D visualization technique we developed and patented (www.truedynamic3d.com), we are able to create an immersive reality system in which the user perceives the virtual world completely merged with the real world. This will lead to a new generation of entertainment content, where movies will no longer be confined to the frame of the monitor but will surround the user.
 
Topics:
Media & Entertainment Summit, Virtual Reality & Augmented Reality, Computer Vision, Gaming and AI
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4640
 
Abstract:
This session explores issues around delivering real-time 3D content into mobile and web applications by considering the following questions: (1) Images have JPEG and music has MP3, so why not a format to deliver 3D content? (2) When designing a format for delivery, we can't ignore the underlying graphics API (GL), so wouldn't the most efficient engine formats eventually converge on the same kind of design? (3) Once content is baked and ready to be consumed by GL, how can we improve transfer rates with dedicated compression? (4) Wouldn't it be great to have a declarative way to represent GL content, so that developers can easily build a data-driven engine? (5) Why not centralize these common and so far redundant efforts to design a delivery and runtime format that is truly efficient for GL APIs? During this show-and-tell presentation, glTF (graphics library Transmission Format) will be introduced. Following an overview of the ecosystem, an introduction to glTF design and catchy demos from different implementations will be shown. Finally, compression results leveraging Open3DGC will be shared.
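glTF's answer to question (2) — a delivery format that converges on the structure of the GL API itself — is easiest to see in its core object model: raw buffers, views into them, typed accessors, and mesh primitives that reference accessors per vertex attribute. A rough C++ mirror of that structure for illustration; it is simplified and follows the later-standardized glTF schema rather than the 2014 draft:

    #include <cstddef>
    #include <map>
    #include <string>
    #include <vector>

    // Raw binary payload, typically an external .bin file or a data URI.
    struct Buffer     { std::string uri; std::size_t byteLength; };

    // A contiguous slice of a buffer, destined for one GL buffer object
    // (target 34962 = ARRAY_BUFFER, 34963 = ELEMENT_ARRAY_BUFFER).
    struct BufferView { int buffer; std::size_t byteOffset, byteLength; int target; };

    // A typed window over a bufferView: enough information to set up a vertex attribute directly.
    struct Accessor   { int bufferView; std::size_t byteOffset;
                        int componentType;      // e.g. 5126 = GL_FLOAT
                        std::size_t count;
                        std::string type; };    // "SCALAR", "VEC3", ...

    // One draw call: attribute semantic -> accessor index, plus an index accessor and material.
    struct Primitive  { std::map<std::string, int> attributes; int indices; int material; int mode; };
    struct Mesh       { std::vector<Primitive> primitives; };

    struct Node       { std::vector<int> children; int mesh; float matrix[16]; };
    struct Scene      { std::vector<int> nodes; };

    // The asset is arrays of these objects cross-referenced by index, which is what makes it
    // compact to transmit and nearly trivial to hand straight to a GL-based engine.
    struct GltfAsset {
        std::vector<Buffer>     buffers;
        std::vector<BufferView> bufferViews;
        std::vector<Accessor>   accessors;
        std::vector<Mesh>       meshes;
        std::vector<Node>       nodes;
        std::vector<Scene>      scenes;
    };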
 
Topics:
Mobile Summit, Web Acceleration, Gaming and AI, Real-Time Graphics
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4805
 
Abstract:
Web-based apps and games are growing both in number and complexity, but running outside the browser on a mobile device is still a challenging path full of bumps and hoops to overcome. From efficient memory management to access to native features, hybrid apps provide a great way to solve these problems and combine the advantages of both worlds: web and native. Far from the media fight over which is best, a combination of both technologies provides a much richer development experience. In this talk, attendees will learn how to address important issues such as system webview fragmentation, the poor bandwidth of native bridges, and the lack of support for certain important technologies like WebGL.
 
Topics:
Mobile Summit, Debugging Tools & Techniques, Web Acceleration, Gaming and AI
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4807
 
Abstract:

3D animation is the art form of the present and the future, with hundreds of millions of people drawn to its emotional power in movie theaters and games every year. Mixamo recently developed a facial capture and animation technology to enable anybody to create compelling animated content that is immediately reflected on a character's face. The technology was originally developed for 3D professionals, but with the recent introduction of new-generation mobile GPU hardware supporting OpenCL APIs, such as the Tegra K1, it is now possible to port the technology to mobile devices. In the course of this presentation we will introduce numerical approaches to facial motion capture and animation that are based on a mixture of global and local models of human facial expressions and shape. The presenter will also go into the details of implementing the real-time technology on a Tegra K1 device.

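The "mixture of global and local models of human facial expressions and shape" described above is typically realized as a linear blendshape basis: a neutral face plus weighted expression deltas, with the per-frame weights estimated from the tracked video. A generic C++ sketch of evaluating such a model; it illustrates the family of techniques, not Mixamo's actual solver:

    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // A blendshape model: a neutral mesh plus one per-vertex delta set per expression basis
    // (e.g. "jaw open", "left brow raise"). Capture then reduces to estimating weights per frame.
    struct BlendshapeModel {
        std::vector<Vec3> neutral;                 // V vertices
        std::vector<std::vector<Vec3>> deltas;     // B bases, each with V per-vertex offsets
    };

    // face(w) = neutral + sum_b w[b] * delta_b, evaluated per vertex. The loop is trivially
    // parallel, which is why it maps well onto a mobile GPU's compute API.
    std::vector<Vec3> evaluateFace(const BlendshapeModel& m, const std::vector<float>& weights) {
        std::vector<Vec3> face = m.neutral;
        for (std::size_t b = 0; b < m.deltas.size() && b < weights.size(); ++b) {
            float w = weights[b];
            for (std::size_t v = 0; v < face.size() && v < m.deltas[b].size(); ++v) {
                face[v].x += w * m.deltas[b][v].x;
                face[v].y += w * m.deltas[b][v].y;
                face[v].z += w * m.deltas[b][v].z;
            }
        }
        return face;
    }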
 
Topics:
Mobile Summit, Artificial Intelligence and Deep Learning, Computer Vision, Gaming and AI
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4808
 
Abstract:
After years of fear, uncertainty and doubt, the jury is now in: HTML5 is the platform of choice for building cross-platform, connected applications for desktop and mobile. The advanced programming, animation and multimedia capabilities of modern web browsers, combined with hardware-accelerated 3D rendering provided by WebGL, represent a combination with limitless possibilities. With these technologies, developers can create immersive 3D games, integrated 2D/3D presentations, product displays, social media sites and more, all coded in JavaScript and running in the browser. This awesome power is also available to mobile devices: WebGL is now built into Android, and there are quality adapter libraries for use in developing hybrid applications (native + WebKit) for iOS. With HTML5 and WebGL, developers can build high-performance mobile 3D applications and web sites rivaling native implementations, in a fraction of the time. Join 3D pioneer and WebGL guru Tony Parisi as he explores the technology, busts the myths and tells us where it's really at for creating the next generation of 3D web and mobile applications.
 
Topics:
Mobile Summit, Virtual Reality & Augmented Reality, Web Acceleration, Gaming and AI
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4837
 
Abstract:
Today's developers face unprecedented challenges in choosing which platforms to target when developing games and applications meant to be used by a wide consumer audience. Beyond the Windows desktop, there are now a huge variety of new choices: alternative desktop OS platforms such as Linux and Mac OS X; mobile devices such as phones and tablets; HTML-based web platforms, running on cloud-based servers; and a plethora of embedded CE systems, ranging from video game consoles to TV platforms. All of these platforms use some variety of OpenGL or OpenGL ES, rather than Direct3D. If you have games or other Direct3D-based content that you want to retarget to a new platform, this session will show you how to quickly and easily enable your graphics code to run on OpenGL platforms using TransGaming's shim technology.
 
Topics:
Mobile Summit, Programming Languages, Gaming and AI, Media and Entertainment
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4846
 
Abstract:
Project Tango is a focused effort to harvest research from the last decade of work in computer vision and robotics and concentrate that technology into a mobile platform. It uses computer vision and advanced sensor fusion to estimate the position and orientation of the device in real time, while simultaneously generating a 3D map of the environment. We will discuss the underlying technologies that make this possible, such as the hardware sensors and some of the software algorithms. We will also show demonstrations of how the technology could be used in both gaming and non-gaming applications. This is just the beginning, and we hope you will join us on this journey. We believe it will be one worth taking.
 
Topics:
Mobile Summit, Virtual Reality & Augmented Reality, Computer Vision, Gaming and AI
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4848
 
Abstract:
Android continues its meteoric rise as the world's dominant mobile operating system. Every day developers large and small discover new ways to delight users, but getting noticed is increasingly difficult. The latest NVIDIA Tegra K1 processors provide developers with a host of new features to differentiate their titles and get them flying above the rest of the crowd.
 
Topics:
Mobile Summit, Gaming and AI, Mobile Applications
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4878
 
Abstract:
Integrating innovative computer vision sensors into new mobile devices is a very complex endeavor, especially when popular devices like tablets and smartphones already offer a very high level of functionality and user expectations are sky-high. In this talk, we will discuss the challenges and opportunities of developing a new device with advanced sensor capabilities.
 
Topics:
Mobile Summit, Virtual Reality & Augmented Reality, Computer Vision, Gaming and AI
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
S4900
 
Abstract:

Topics covered in this session include: Minko: game development & real-time graphics applications for web & mobile platforms.

Panelists: Scott Budman (Business & Technology Reporter, NBC); Jeff Herbst (Vice President of Business Development, NVIDIA); Jens Hortsmann (Executive Producer & Managing Partner, Crestlight Venture Productions); Pat Moorhead (President and Principal Analyst, Moor Insights & Strategy); Bill Reichert (Managing Director, Garage Technology Ventures)
 
Topics:
Emerging Companies Summit, Gaming and AI, Real-Time Graphics
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
ECS400
 
Abstract:

Topics covered in this session include: Video game development & technology consulting.

Panelists: Scott Budman (Business & Technology Reporter, NBC); Jeff Herbst (Vice President of Business Development, NVIDIA); Jens Hortsmann (Executive Producer & Managing Partner, Crestlight Venture Productions); Pat Moorhead (President and Principal Analyst, Moor Insights & Strategy); Bill Reichert (Managing Director, Garage Technology Ventures)
 
Topics:
Emerging Companies Summit, Gaming and AI, Rendering & Ray Tracing
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
ECS408
 
Abstract:

Topics covered in this session include: GPU-accelerated computer vision for mobile AR applications.

Panelists: Scott Budman (Business & Technology Reporter, NBC); Jeff Herbst (Vice President of Business Development, NVIDIA); Jens Hortsmann (Executive Producer & Managing Partner, Crestlight Venture Productions); Pat Moorhead (President and Principal Analyst, Moor Insights & Strategy); Bill Reichert (Managing Director, Garage Technology Ventures)
 
Topics:
Emerging Companies Summit, Gaming and AI, Mobile Applications
Type:
Talk
Event:
GTC Silicon Valley
Year:
2014
Session ID:
ECS409
 
 