GTC On-Demand

AI and DL Research
Deep Generative Modeling for Speech Synthesis and Sensor Data Augmentation
We'll discuss how deep generative modeling can be used in two application domains: speech synthesis and sensor data modeling. Through these examples, we'll give an overview of what generative modeling is and how it can be applied to practical AI tasks. We'll also give a flavor of latent space methods, which let us learn more about our data so that we can transform it in meaningful ways, with uses in both reconstruction and generation.
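As a point of reference (a generic sketch, not specific to the models covered in this session), latent space methods typically posit a latent variable z with prior p(z), a decoder p_theta(x|z), and an approximate encoder q_phi(z|x). Reconstruction passes data through the encoder and then the decoder, while generation samples a latent code from the prior and decodes it:

$$
p_\theta(x) = \int p_\theta(x \mid z)\, p(z)\, dz,
\qquad
\text{reconstruction: } z \sim q_\phi(z \mid x),\ \hat{x} \sim p_\theta(\hat{x} \mid z),
\qquad
\text{generation: } z \sim p(z),\ \hat{x} \sim p_\theta(\hat{x} \mid z).
$$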
 
Keywords:
AI and DL Research, Advanced AI Learning Techniques (incl. GANs and NTMs), GTC Silicon Valley 2018 - ID S8617
Computational Physics
Plasma Turbulence Simulations: Porting Gyrokinetic Tokamak Solver to GPU Using CUDA
We describe the process of porting a large-scale particle-in-cell (PIC) solver, GTS, to the GPU using CUDA. We present weak-scaling results run at scale on Titan that show a 3-4x speedup for the entire solver. Starting from a performance analysis of the computational kernels, we systematically eliminate the most significant bottlenecks in the code - in this case, the PUSH step, which constitutes the 'gather' portion of the gather-scatter algorithm that characterizes this PIC code. Points we think may be instructive to developers include: (1) using the PGI CUDA Fortran infrastructure to interface between CUDA C and Fortran; (2) memory optimizations, including creation of a device memory pool and use of pinned memory; (3) a demonstration of how communication causes performance degradation at scale, with implications for shifter performance in general PIC solvers, and why we need algorithms that handle communication in particle shifters more effectively; (4) use of textures and LDG for irregular memory accesses.
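To make points (2) and (4) concrete, here is a minimal CUDA C++ sketch (not taken from GTS; all names are hypothetical) of a gather-style push kernel that reads irregularly indexed, read-only field data through __ldg(), with the host result buffer allocated as pinned memory via cudaMallocHost for faster transfers:

#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Hypothetical gather-style kernel: each particle reads a field value at an
// irregular, data-dependent grid index. Routing the read-only 'field' and
// 'cell' loads through __ldg() uses the read-only data cache, which can help
// with the scattered access pattern typical of the PIC gather (PUSH) step.
__global__ void push_gather(const float* __restrict__ field,
                            const int*   __restrict__ cell,   // particle -> grid cell index
                            float*       __restrict__ accel,  // gathered field per particle
                            int n_particles)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= n_particles) return;

    int c = __ldg(&cell[p]);      // irregular, read-only index load
    accel[p] = __ldg(&field[c]);  // irregular, read-only field load
}

int main()
{
    const int n_particles = 1 << 20;
    const int n_cells     = 1 << 16;

    // Pinned (page-locked) host buffer: enables faster, asynchronous copies.
    float* h_accel = nullptr;
    cudaMallocHost(&h_accel, n_particles * sizeof(float));

    std::vector<float> h_field(n_cells, 1.0f);
    std::vector<int>   h_cell(n_particles);
    for (int p = 0; p < n_particles; ++p)
        h_cell[p] = (p * 7919) % n_cells;   // pseudo-irregular indices

    float *d_field, *d_accel;
    int   *d_cell;
    cudaMalloc(&d_field, n_cells * sizeof(float));
    cudaMalloc(&d_cell,  n_particles * sizeof(int));
    cudaMalloc(&d_accel, n_particles * sizeof(float));
    cudaMemcpy(d_field, h_field.data(), n_cells * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_cell,  h_cell.data(),  n_particles * sizeof(int), cudaMemcpyHostToDevice);

    int block = 256;
    int grid  = (n_particles + block - 1) / block;
    push_gather<<<grid, block>>>(d_field, d_cell, d_accel, n_particles);
    cudaMemcpy(h_accel, d_accel, n_particles * sizeof(float), cudaMemcpyDeviceToHost);
    cudaDeviceSynchronize();

    printf("accel[0] = %f\n", h_accel[0]);

    cudaFree(d_field); cudaFree(d_cell); cudaFree(d_accel);
    cudaFreeHost(h_accel);
    return 0;
}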
 
Keywords:
Computational Physics, Computational Fluid Dynamics, HPC and Supercomputing, GTC Silicon Valley 2014 - ID S4495