GTC ON-DEMAND

Abstract:
This presentation shows in-depth comparisons of several neural network models for 3D object classification. Object classification from 2D images has been studied thoroughly and widely adopted in recent years, following the advances of deep neural networks. Since then, 3D object classification methods have been actively studied, yet they are not completely mature. The point cloud is the most basic format for 3D objects. In this work, we present several neural network models that can learn from 3D point clouds, whether directly from the raw points, from projected 2D pixels, or from voxelized volumes. This work uses the Princeton ModelNet datasets and the ShapeNetCore.v2 dataset, and provides comparisons of those neural network models.
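As a rough illustration of the voxelized-volume input compared above, the sketch below bins a unit-normalized point cloud into a dense 32^3 occupancy grid of the kind a volumetric 3D CNN consumes. The grid resolution, the [0,1) coordinate range, and every name here are assumptions for illustration, not the presenters' code.

```
// voxelize.cu -- hypothetical sketch; resolution R and [0,1) input range are assumed.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

constexpr int R = 32;                            // voxels per axis (assumed)

__global__ void voxelize(const float3* pts, int n, unsigned char* grid) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float3 p = pts[i];                           // coordinates assumed in [0,1)
    int x = min((int)(p.x * R), R - 1);
    int y = min((int)(p.y * R), R - 1);
    int z = min((int)(p.z * R), R - 1);
    grid[(z * R + y) * R + x] = 1;               // concurrent writes of 1 are benign
}

int main() {
    const int n = 4096;
    float3* h_pts = (float3*)malloc(n * sizeof(float3));
    for (int i = 0; i < n; ++i) {                // dummy diagonal point cloud
        float t = i / (float)n;
        h_pts[i] = make_float3(t, t, t);
    }
    float3* d_pts; unsigned char* d_grid;
    cudaMalloc(&d_pts, n * sizeof(float3));
    cudaMalloc(&d_grid, R * R * R);
    cudaMemcpy(d_pts, h_pts, n * sizeof(float3), cudaMemcpyHostToDevice);
    cudaMemset(d_grid, 0, R * R * R);
    voxelize<<<(n + 255) / 256, 256>>>(d_pts, n, d_grid);

    unsigned char* h_grid = (unsigned char*)malloc(R * R * R);
    cudaMemcpy(h_grid, d_grid, R * R * R, cudaMemcpyDeviceToHost);
    int occupied = 0;
    for (int v = 0; v < R * R * R; ++v) occupied += h_grid[v];
    printf("occupied voxels: %d of %d\n", occupied, R * R * R);
    free(h_pts); free(h_grid);
    return 0;
}
```

A point-based model would instead consume the raw float3 array directly, and an image-based model the projected 2D pixels; the grid above covers only the volumetric branch of the comparison.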
 
Topics:
AI & Deep Learning Research, Graphics and AI, Rendering & Ray Tracing, Real-Time Graphics
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8453
 
Abstract:

This talk presents sparse voxelization of time-lapse point clouds. Point clouds have several advantages: they are easy to capture, simple in structure, and the most fundamental 3D primitive. Because of these advantages, the easiest way to collect time-lapse 3D information is to capture point clouds with laser scanning or photogrammetry. However, the point cloud representation lacks spatial connectivity, and captured datasets are notoriously large. Our sparse volumetric representation bridges these pros and cons: it keeps the simplicity and ease of capture while providing spatial connectivity and a GPU-friendly data structure. In this talk, we show our massive-scale time-lapse point cloud dataset, its compression into sparse voxels, and further parallel processing and visualization using GVDB in CUDA.
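The sketch below illustrates the core sparse-voxel idea in CUDA with Thrust: map each point to the key of its containing voxel, sort, and keep the unique keys, so only occupied cells are stored. The key layout and cell size are assumptions, and this is not GVDB's actual API; GVDB manages a VDB-like sparse hierarchy internally.

```
// sparse_voxels.cu -- hedged illustration of point-cloud compression into
// sparse voxel keys; not the GVDB API.
#include <cuda_runtime.h>
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/sort.h>
#include <thrust/unique.h>
#include <cstdio>

struct PointToKey {
    float inv_cell;                              // 1 / voxel edge length (assumed)
    __host__ __device__ unsigned long long operator()(float3 p) const {
        // 21 bits per axis packed into one 64-bit key per occupied voxel
        unsigned long long x = (unsigned long long)(p.x * inv_cell);
        unsigned long long y = (unsigned long long)(p.y * inv_cell);
        unsigned long long z = (unsigned long long)(p.z * inv_cell);
        return (z << 42) | (y << 21) | x;
    }
};

int main() {
    const int n = 1 << 20;                       // one million dummy points
    thrust::device_vector<float3> pts(n, make_float3(0.5f, 0.5f, 0.5f));
    thrust::device_vector<unsigned long long> keys(n);

    // 1. map every point to the key of the voxel containing it
    thrust::transform(pts.begin(), pts.end(), keys.begin(),
                      PointToKey{1.0f / 0.01f}); // 1 cm cells (assumed)
    // 2. sort, then 3. deduplicate: what survives is the sparse voxel set
    thrust::sort(keys.begin(), keys.end());
    auto end = thrust::unique(keys.begin(), keys.end());

    printf("%d points -> %ld occupied voxels\n", n, (long)(end - keys.begin()));
    return 0;
}
```

Sorting the keys also yields spatial locality, which is part of what makes the representation GPU-friendly for the parallel processing and visualization steps.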
 
Topics:
Rendering & Ray Tracing, Professional Visualisation, Real-Time Graphics
Type:
Talk
Event:
GTC Silicon Valley
Year:
2017
Session ID:
S7108
 
Abstract:
We present our novel methods for visualizing massive-scale time-lapse point cloud data and for navigating and handling point clouds in VR. Our method provides new approaches for standard and stereoscopic rendering of 120 GB of time-lapse point cloud data, and we are targeting 2 TB datasets next. Time-lapse point clouds pose many problems, including color mismatching, registration, out-of-core design, and memory management. We generate a progressive blue-noise point cloud and apply the sparse buffer extension of OpenGL 4.5; together, these reduce the complexity of the out-of-core design and the cost of memory manipulation. In addition, point clouds in VR are an emerging field, so few existing methods apply; we are investigating a new method that can visualize and navigate large point cloud data.
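The host-side sketch below outlines the ARB_sparse_buffer pattern behind the OpenGL 4.5 approach mentioned above: reserve one huge virtual buffer for the whole time-lapse sequence and commit physical pages only for the frames currently in view. The helper names and frame layout are assumptions; a live GL 4.5 context with the extension, loaded via GLEW, is assumed too.

```
// sparse_stream.cu -- hedged sketch of ARB_sparse_buffer streaming; helper
// names and frame layout are hypothetical. Assumes a current GL 4.5 context
// with ARB_sparse_buffer and extension loading via GLEW.
#include <GL/glew.h>

GLuint makeSparsePointBuffer(GLsizeiptr virtualSize) {
    GLuint buf;
    glCreateBuffers(1, &buf);
    // Reserve virtual address space only; no physical pages committed yet.
    glNamedBufferStorage(buf, virtualSize, nullptr,
                         GL_SPARSE_STORAGE_BIT_ARB | GL_DYNAMIC_STORAGE_BIT);
    return buf;
}

void streamFrame(GLuint buf, GLintptr offset, GLsizeiptr size, const void* points) {
    GLint page = 0;
    glGetIntegerv(GL_SPARSE_BUFFER_PAGE_SIZE_ARB, &page);
    // Commit just the page range backing this frame, then upload into it.
    GLintptr   start = (offset / page) * page;
    GLsizeiptr len   = ((offset + size + page - 1) / page) * page - start;
    glNamedBufferPageCommitmentARB(buf, start, len, GL_TRUE);
    glNamedBufferSubData(buf, offset, size, points);
}

void evictFrame(GLuint buf, GLintptr start, GLsizeiptr len) {
    // Decommit (page-aligned) pages of frames outside the current time window.
    glNamedBufferPageCommitmentARB(buf, start, len, GL_FALSE);
}
```

Because only committed pages consume memory, the full 120 GB sequence never has to be resident at once, which is what trims the out-of-core bookkeeping described above.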
 
Topics:
Virtual Reality & Augmented Reality, Product & Building Design, In-Situ & Scientific Visualization
Type:
Talk
Event:
GTC Silicon Valley
Year:
2016
Session ID:
S6512
 
Abstract:
Motion retiming is used to edit character animations to match a given time. Retiming the motion of a set of joints is a non-trivial task because spatio-temporal correlations exist among them. We present a novel approach to motion retiming that exploits the proximity of joints to preserve motion coherence. Our framework allows users to intuitively and interactively retime motion using CUDA.
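As a hedged illustration of why retiming maps well to the GPU, the kernel below gives every (joint, output frame) pair its own thread and resamples the source curve at a warped time. The uniform time warp is a placeholder assumption, not the poster's proximity-based coherence scheme, which would warp time per joint.

```
// retime.cu -- hedged sketch; uniform linear retiming stands in for the
// poster's proximity-aware method. Curve layout is an assumption.
#include <cuda_runtime.h>

// curves: [joints][inFrames] channel values; out: [joints][outFrames]
__global__ void retime(const float* curves, int joints, int inFrames,
                       float* out, int outFrames, float scale) {
    int f = blockIdx.x * blockDim.x + threadIdx.x;   // output frame index
    int j = blockIdx.y;                              // joint index
    if (f >= outFrames || j >= joints) return;

    float t  = f * scale;                            // warped source time
    int   i0 = min((int)t, inFrames - 1);
    int   i1 = min(i0 + 1, inFrames - 1);
    float a  = t - (float)i0;                        // linear interpolation weight
    out[j * outFrames + f] = (1.0f - a) * curves[j * inFrames + i0]
                           +         a  * curves[j * inFrames + i1];
}

int main() {
    const int joints = 24, inFrames = 120, outFrames = 240;  // assumed sizes
    float *d_in, *d_out;
    cudaMalloc(&d_in,  joints * inFrames  * sizeof(float));
    cudaMalloc(&d_out, joints * outFrames * sizeof(float));
    cudaMemset(d_in, 0, joints * inFrames * sizeof(float));
    float scale = (inFrames - 1) / (float)(outFrames - 1);   // 2x slow-down
    retime<<<dim3((outFrames + 255) / 256, joints), 256>>>(
        d_in, joints, inFrames, d_out, outFrames, scale);
    cudaDeviceSynchronize();
    cudaFree(d_in); cudaFree(d_out);
    return 0;
}
```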
 
Topics:
Media and Entertainment, Performance Optimization
Type:
Poster
Event:
GTC Silicon Valley
Year:
2015
Session ID:
P5110
 
Abstract:
We present a sketch-based 3D animation application that can easily search and create 3D animation from a motion capture database. Using CUDA, 6.5 hours of motion sequences can be searched in a few seconds based on users' sketches, and new motions can be created by connecting the retrieved motion sequences.
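One plausible reading of the CUDA speedup, sketched below as an assumption rather than the poster's actual method, is brute-force window scoring: each thread compares one window of the database against the sketched trajectory, so millions of windows are scored in a single launch and hours of motion can be scanned in seconds. The 2D trajectory feature and window length are assumptions.

```
// match.cu -- hedged sketch of parallel sketch-to-motion matching; the
// trajectory feature and sizes are assumptions, not the poster's method.
#include <cuda_runtime.h>

__global__ void scoreWindows(const float2* db, int dbFrames,
                             const float2* sketch, int L, float* score) {
    int w = blockIdx.x * blockDim.x + threadIdx.x;   // window start frame
    if (w + L > dbFrames) return;                    // tail windows left unscored
    float s = 0.0f;
    for (int k = 0; k < L; ++k) {                    // SSD against the sketch
        float dx = db[w + k].x - sketch[k].x;
        float dy = db[w + k].y - sketch[k].y;
        s += dx * dx + dy * dy;
    }
    score[w] = s;                                    // a host-side argmin picks the match
}

int main() {
    const int dbFrames = 1 << 20, L = 120;           // dummy sizes (assumed)
    float2 *d_db, *d_sketch; float* d_score;
    cudaMalloc(&d_db, dbFrames * sizeof(float2));
    cudaMalloc(&d_sketch, L * sizeof(float2));
    cudaMalloc(&d_score, dbFrames * sizeof(float));
    cudaMemset(d_db, 0, dbFrames * sizeof(float2));
    cudaMemset(d_sketch, 0, L * sizeof(float2));
    scoreWindows<<<(dbFrames + 255) / 256, 256>>>(d_db, dbFrames, d_sketch, L, d_score);
    cudaDeviceSynchronize();
    cudaFree(d_db); cudaFree(d_sketch); cudaFree(d_score);
    return 0;
}
```

Connecting the best-scoring windows end to end then yields the new motions described above.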
 
Topics:
Graphics Performance Optimization, Tools & Libraries
Type:
Poster
Event:
GTC Silicon Valley
Year:
2013
Session ID:
P3156
 
 