SeqAn (www.seqan.de) is an open-source C++ template library (BSD license) that implements many efficient and generic data structures and algorithms for Next-Generation Sequencing (NGS) analysis. It contains gapped k-mer indices, enhanced suffix arrays (ESA), and an FM-index, as well as algorithms for fast and accurate alignment and read mapping. Based on those data types and fast I/O routines, users can easily develop tools that are extremely efficient and easy to maintain. Beyond multi-core CPUs, the research team at Freie Universität Berlin has begun adding generic support for dedicated accelerators such as NVIDIA GPUs.
In this webinar, Knut Reinert, Professor at Freie Universität Berlin, will introduce SeqAn and string indices, then explain his team’s generic parallelization concept and end with details on how they achieved a speedup of up to 47x using an FM-index on an NVIDIA Tesla K20.
With the combined power of large-scale distributed computing resources such as Folding@home and supercomputers such as Blue Waters or Titan, one can now routinely simulate atomistic protein dynamics on the millisecond timescale. Join Professor Vijay Pande, Stanford University, as he presents efforts to push the limits of this methodology even further, to the second timescale for protein folding, as well as to a variety of new applications in protein conformational change. The results of these simulations suggest novel targets for disease intervention (for Alzheimer’s disease and cancer), as well as new biophysical insights into protein dynamics.
Folding@home is a large-scale volunteer distributed computing project, launched on October 1, 2000. For over a decade, new types of hardware (such as GPUs, multi-core CPUs, and the PS3) and algorithms have been pioneered in order to make significant advances in our ability to simulate diseases at the molecular scale. Join Professor Vijay Pande from Stanford University for a brief introduction to the goals of Folding@home, followed by the successes so far. Prof. Pande will end with a discussion of what’s being done today, as well as the plans for greatly enhancing what Folding@home can do through new initiatives currently under way.
Recent technological advances have made it practical to deliver 3D professional graphics applications from the Cloud (private or public) with a high quality user experience and at an attractive cost. Organizations can keep their intellectual property safe in the data center since only fully-rendered screen images are sent over the network. Users in remote locations no longer have to wait for large file transfers and can access 3D models from a wide variety of devices, including iPads, Android tablets and thin clients.
Join Derek Thorslund, Director of Product Management at Citrix, to learn how Citrix XenDesktop, XenServer and Receiver technologies leverage NVIDIA GRID to make all of this a reality for many organizations today.
Join Mike Coleman, Sr. Product Manager User Experience at VMware to understand how virtualized 3D graphics can benefit your entire user base - from knowledge workers to high-end design engineers.
NVIDIA and VMware have built a platform that allows the toughest workloads to be virtualized, while improving reliability and security. From software-based GPUs to shared graphics to dedicated virtual workstations, there is an option for every use case and budget.
Specific topics include:
• What is VMware Horizon View and what benefits does it bring to the desktop?
• How are 3D graphics implemented in VMware Horizon View, including the latest joint announcements from VMware and NVIDIA?
• What use cases can be addressed with virtualized 3D graphics?
• Which customers are using the technology today?
Imagine giving your Adobe, Autodesk or SolidWorks designers and engineers the power of a workstation delivered over your network. NVIDIA’s GRID Visual Computing Appliance (VCA) is a turnkey solution that delivers remote graphics for up to 8 concurrent workstation users. In this webinar, Ankit Patel, Sr. Product Manager, will show how GRID VCA allows you to optimize your design and engineering teams, giving them the performance they need, while giving you the security and manageability you require.
If you’ve ever wanted to virtualize your CAD or professional video graphics application and have the exact same local experience on a secure central platform, then this webinar provides insight on how to get there. Join Technology Evangelist Thomas Poppelgaard and learn how Citrix XenDesktop®, XenApp® and XenServer®, in combination with NVIDIA GRID, make it possible to virtualize 2D/3D applications from any device, anywhere, while keeping your data and intellectual property safe and secure.
You can’t buy a phone, computer, tablet, PC or workstation today without a GPU. Why would you expect a server without graphics to successfully serve the same users? As enterprises look to move PCs to the data center, users are requiring the modern PC experience that they have come to expect from their desktop. Users are not willing to go back to Windows 95. And now they don’t have to. NVIDIA GRID for enterprise enables IT managers to deliver an experience equal to a local PC with all the promised benefits of a virtual desktop environment. In this webinar, presented by Will Wade, Director of GRID Products, NVIDIA, you'll learn how GRID is being enabled in the most common hypervisors and hear about the technology behind GPUs in virtual environments.
With all PCs, tablets, phones, and even modern cars running a graphical user interface, how can we expect a virtual desktop without a graphics accelerator to compete in the minds of users? Well, now we don't have to. Just like virtualization enables sharing of other system resources, NVIDIA's new GRID vGPU technology now enables virtualized graphics acceleration. This new technology enables the GRID GPU to scale across the spectrum of users in your company, giving them the experience they've come to expect from a modern desktop. Join Will Wade, Director of GRID Products, NVIDIA, as he discusses the details of the architecture and how to successfully deploy usable virtual desktops across your organization.
Join Steve Harpster, Solution Architect, NVIDIA, for this technical webinar and learn how to set up GRID vGPU with Citrix XenServer and Citrix XenDesktop 7.1 Tech Preview. You'll also discover how to optimize the virtual machines to get the best performance for your demanding 3D workloads, and have your questions answered by Citrix and NVIDIA experts.
The NVIDIA GRID™ Visual Computing Appliance (VCA) is the only platform certified and supported by Dassault Systèmes to virtualize and remotely deliver SolidWorks 2014 over the network. VCA is a powerful GPU-based appliance that can be centrally located and accessed via the company network. GPU acceleration gives users, working locally or remotely, the same SolidWorks experience they would get from a dedicated high-performance desk-side workstation. It’s a powerful tool for small and medium-size businesses looking to provide their workforce with workstation performance anywhere, anytime - without the IT complexity of commercial virtualization solutions. Join Ankit Patel, Sr. Product Manager, NVIDIA, to learn more about GRID VCA and its benefits for the SolidWorks community.
Join Jared Cowart, NVIDIA Solution Architect, for this technical webinar and learn how to set up an NVIDIA GRID™ vGPU (virtual GPU) with Citrix XenServer and Citrix XenDesktop 7.1. You'll also discover how to optimize your virtual machines to get the best performance for your demanding 3D workloads. Plus, get insight into what to consider when planning for scalability and density. Key takeaways from the webinar include:
• How to demo, pilot, and deploy GPU-accelerated virtual desktops and applications
• Tips and tricks for delivering an amazing HDX 3D Pro demo
• Planning and scaling guidance
• How to equip your demo, lab, or hosting platform with NVIDIA graphics
Bright Cluster Manager delivers a comprehensive and integrated CUDA-ready solution for those who seek to make optimal use of their GPU-based environments for HPC. Bright provisions, monitors and manages systems with NVIDIA GPUs within cluster-management hierarchies.
Join Ian Lumb, Bright Evangelist, and learn how Bright CUDA-ready clusters enable developers to:
• Focus on coding, not maintaining infrastructure (drivers, configs) and toolchains (compilers, libraries)
• Routinely keep pace with innovation, from the latest in GPU hardware to the CUDA toolkit itself
• Cross-develop with confidence and ease: maintain, and shift between, highly customized CUDA development environments
• Exercise their preference in programming GPUs: choose CUDA, OpenCL, or OpenACC and combine appropriately (with, for example, the Message Passing Interface, MPI)
• Exploit the convergence of HPC and Big Data analytics: make simultaneous use of HPC and Hadoop services in GPU applications
• Make use of private and public clouds: create a CUDA-ready cluster in a cloud or extend an on-site CUDA infrastructure into a cloud
In this webinar, participants will learn how Bright Cluster Manager provisions, monitors and manages CUDA-ready clusters for developer advantage. Case studies will be used to illustrate all six advantages for Bright developers. Specific attention will be given to:
• Cross-developing under CUDA 6.0 and CUDA 6.5 with Kepler-architecture GPUs (e.g., the NVIDIA Tesla K80 GPU accelerator)
• The challenges and opportunities for making use of private (using OpenStack) and public (using Amazon Web Services) clouds in GPU applications
Running the latest versions of GPU-accelerated applications maximizes performance and improves user productivity. The latest version, NAMD 2.11, provides up to 7x* speedup on GPUs over CPU-only systems and up to 2x the performance of NAMD 2.10.
Watch this on-demand webinar to hear experts from NVIDIA and NAMD answer your NAMD and GPU related questions ranging from installation to job optimization.
*Dual-CPU server, Intel E5-2698; NVIDIA Tesla K80 with ECC off, autoboost on; STMV dataset.
OpenCV is a free library for research and commercial purposes that includes hundreds of optimized computer vision and image processing algorithms. NVIDIA and Itseez have optimized many OpenCV functions using CUDA on desktop machines equipped with NVIDIA GPUs. These functions are 5 to 100 times faster in wall-clock time compared to their CPU counterparts. In this follow-up webinar to the OpenCV – Accelerated Computer Vision using GPUs presentation on June 11, Anatoly Baksheev, OpenCV GPU Module Team Leader at Itseez, will demonstrate how to obtain and build OpenCV, its GPU module, and the sample programs. You’ll then learn how to use the OpenCV GPU module to create your own high-performance computer vision applications. Finally, you’ll learn how to start using CUDA to create your own custom GPU computer vision functions and integrate them with OpenCV GPU functions to add novel capabilities.
OpenCV (Open Source Computer Vision Library: http://opencv.willowgarage.com/wiki/) is an open-source BSD-licensed library that includes several hundred computer vision algorithms. In this webinar, learn how this powerful library has been accelerated using CUDA on NVIDIA GPUs.
Luciad provides software components for geospatial situational awareness in the defense, security, aviation and maritime areas. Whether for mission planning, optimal radar placement or command and control systems, true situational awareness requires the handling, analysis, and visualization of large and diverse geospatial datasets. Luciad's flagship product LuciadLightspeed takes advantage of NVIDIA GPUs to obtain highly interactive performance for both analysis and visualization. By intelligent use of the GPU, it sets a new standard for performance in the defense world. Join Frank Suykens and Bart Adams as they introduce how to make optimal use of NVIDIA GPU acceleration for high-performance line-of-sight calculations. The webinar will start with a brief overview of the challenges and use cases for geospatial situational awareness, followed by a detailed use case with a focus on line-of-sight analysis in large terrains. Bart will then go into the technical details of how to use an NVIDIA Tesla to obtain very fast line-of-sight computation and how to integrate the line-of-sight results with other GPU-accelerated analysis and visualization.
GPU-optimized Deep Neural Networks (DNNs) excel at image classification, detection and segmentation tasks. They are the current state-of-the-art method in many visual pattern recognition problems by a significant margin. DNNs are already better than humans at recognizing handwritten digits and traffic signs. Complex handwritten Chinese characters are recognized with almost human performance. DNNs are successfully used for automotive problems like traffic sign and pedestrian detection; they are fast and extremely accurate. DNNs help the field of connectomics by making it possible to segment and reconstruct the neuronal connections in large sections of brain tissue for the first time. This will bring a new understanding of how biological brains work. Detecting mitotic cells in breast cancer histology images can be done quickly and efficiently with DNNs. Segmenting blood vessels from retinal images with DNNs helps diagnosticians to detect glaucoma.
In this talk, Dhruv Batra will describe CloudCV, an ambitious system that will provide access to state-of-the-art distributed computer vision algorithms as a cloud service. Our goal is to democratize computer vision; one should not have to be a computer vision, big data and distributed computing expert to have access to state-of-the-art distributed computer vision algorithms. As the first step, CloudCV is focused on object detection and localization in images. CloudCV provides APIs for detecting whether any of 200 different object categories, such as entities (person, dog, cat, horse, etc.), indoor objects (chair, table, sofa, etc.), or outdoor objects (car, bicycle, etc.), is present in the image.
NVIDIA GPUs have been used for visual computing in many different industries. SynerScope now addresses discovery and exploration in Big Data through the use of OpenGL-based rendering. SynerScope’s high-data-density displays support the human visual system to create insight from complex patterns present in large data sets. The speed of computers and flexibility of the human mind allows fast analysis of many different combinations of structured and unstructured data. The option of virtualization with the latest NVIDIA GRID series helps serve the demands of data security that require organizations to keep their data safe in the data center.
This webinar, presented by SynerScope’s CEO, Jan-Kees Buenen, offers data scientists and business owners dealing with Big Data the opportunity to learn how advanced visual analysis drives discovery and exploration from a broad variety and mixture of business data. It also provides a glimpse into the future, showing how next-generation GPUs may be used to change the landscape of data analytics forever.
map-D makes big data interactive for anyone! map-D is a super-fast GPU database that allows anyone to interact with and visualize streaming big data in real time. Its unique architecture runs 70-1,000x faster than other in-memory databases or big data analytics platforms. To boot, it works with any size or kind of dataset; works with data that is streaming live onto the system; uses cheap, off-the-shelf hardware; and is easily scalable. map-D is focused on learning from big data. At the moment, the map-D team is working on projects with MIT CSAIL, the Harvard Center for Geographic Analysis and the Harvard-Smithsonian Center for Astrophysics. Join Todd Mostak and Tom Graham, key members of the map-D team, as they demonstrate the speed and agility of map-D and describe the live processing, search and mapping of over 1 billion tweets.
This on-demand webinar will help you to start developing applications with advanced AI and computer vision today using NVIDIA’s deep learning tools, including TensorRT and DIGITS. By watching this webinar, you'll learn:
• How to use NVIDIA’s deep learning tools such as TensorRT and DIGITS
• About the various types of neural network-based primitives available as building blocks, deployable onboard intelligent robots and drones using NVIDIA’s Jetson Embedded Platform
• Real-time deep learning solutions for image recognition, object localization, and segmentation
• Training workflows for customizing network models with new training datasets, and emerging approaches to automation like deep reinforcement learning and simulation
Latent fingerprints represent a significant challenge to automated fingerprint matching, and many latent fingerprints are not matched because they lack sufficient information for conventional matching methods. Gannon Technologies (Gannon) has developed a method for meeting the challenge posed by latent prints using ridge flow as an alternative to conventional minutiae matching. A matching approach focusing on ridges benefits from eliminating the need to detect minutiae, which can be difficult even for human experts. Furthermore, a ridge-based approach can find useful identity information in sparse prints with insufficient minutiae for conventional matching. The challenge presented by ridge-based matching takes the form of the great number of calculations necessary to achieve a one-to-many match between a latent print and a large reference set that can contain millions of candidate specimens. This computational complexity is very costly in CPU resources and can become time and cost prohibitive. With the introduction of GPUs, greatly expanded processing capacity has become attainable, but as data scales in size exponentially, an equivalent problem begins to arise. A new approach to address this mission need has been created with these vast numbers in mind, and with the ability to scale almost infinitely.
Significant improvements in speed for imagery orthorectification, atmospheric correction, and image transformations like Independent Components Analysis (ICA) have been achieved using GPU-based implementations. Additional optimizations, when factored in with GPU processing capabilities, can provide 50x – 100x reduction in the time required to process large imagery. Exelis Visual Information Solutions (VIS) has implemented a CUDA-based GPU processing framework for accelerating ENVI and IDL processes that can best take advantage of parallelization. Join Amanda O'Connor, a Senior Solutions Engineer at Exelis, as she discusses:
• Implementing GPU processing for large imagery datasets
• Operational use of GPU processing for orthorectification
• Benchmarks against desktop algorithms
**Please note that no recording of this webinar is available. Please contact Peter Wurmsdobler directly at peter dot wurmsdobler at aveillant dot com for more information on the subject matter.** In this webinar, Peter Wurmsdobler, Lead Software Architect, Aveillant, will give a short introduction to Aveillant's Holographic Radar systems, the principles of holographic radar as opposed to scanning radar systems, as well as its computational requirements. Peter will go on to explore the technical challenges faced in the implementation of the mathematical algorithms needed, how they were solved, and why NVIDIA GPUs proved to be a good fit to meet the computational needs. Finally, Peter will present performance charts that reveal the amount of processing needed in real time for a real radar system.
In this webinar, we will bring CUDA into a compute intensive application using Allinea tools. First of all, we will discover Allinea Performance Reports - a great tool to analyze an existing application and determine whether it is appropriate for GPUs or not. If it is, profiling the application is critical to identify the most compute intensive code regions that need to be replaced with CUDA (or OpenACC) implementations. But as the code is being reworked, errors can be introduced. To resolve those profiling and debugging challenges, professional tools such as Allinea Forge are necessary to produce the correct, working, high performance GPU accelerated code with a minimum level of effort. During this technical session, an Allinea expert will illustrate how Allinea Performance Reports and Allinea Forge can help modernize applications very easily.
Programming GPU accelerators involves three basic aspects: splitting the source code between host and GPU, managing data allocation and movement between host memory and GPU memory, and optimizing GPU kernels. Much of this process can be automated using modern compiler technology and high-level programming techniques. This talk is a case study on using PGI Accelerator compiler directives to achieve a 5x speed-up in approximately 5 hours of programming time on a popular geophysics code.
Join Chris Mason, Product Manager, Acceleware, for an informative introduction to GPU Programming. The tutorial will begin with a brief overview of CUDA and data-parallelism before focusing on the GPU programming model. We will explore the fundamentals of GPU kernels, host and device responsibilities, CUDA syntax and thread hierarchy.
Learn how to optimize your algorithms for the Fermi and Kepler architectures. Join Kelly Goss, Acceleware Software Developer and Trainer, for this informative webinar that will focus on key optimization strategies for compute and memory bound problems. The session will include techniques for ensuring peak utilization of CUDA cores by choosing the optimal block size and using dynamic parallelism on the Kepler architecture. For compute bound algorithms, we will discuss how to improve branching efficiency, use intrinsic functions, and apply loop unrolling. For memory bound algorithms, optimal access patterns for global and shared memory will be presented, highlighting the differences between the Fermi and Kepler architectures.
Join Chris Mason, Product Manager, Acceleware, for an informative introduction to CUDA programming. The webinar will begin with a brief overview of CUDA and data-parallelism before focusing on the GPU programming model. Chris will explore the fundamentals of GPU kernels, host and device responsibilities, CUDA syntax, and thread hierarchy. A programming demonstration of a simple CUDA kernel will be provided.
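To make the concepts in this introduction concrete, here is a minimal CUDA C++ vector-add sketch (my illustration, not Acceleware's demo) showing a kernel, the host's responsibilities, the launch syntax, and the thread hierarchy:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: runs on the device; each thread handles one element. The global
// index is derived from the thread hierarchy (a grid of blocks, each block
// a group of threads).
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                       // guard: the grid may overshoot n
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host responsibilities: allocate and fill host buffers...
    float *h_a = new float[n], *h_b = new float[n], *h_c = new float[n];
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // ...allocate device memory and copy the inputs across...
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // ...launch with the <<<grid, block>>> syntax, then copy the result back.
    const int threads = 256;                        // threads per block
    const int blocks = (n + threads - 1) / threads; // enough blocks to cover n
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %.1f\n", h_c[0]);   // 1.0 + 2.0
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    delete[] h_a; delete[] h_b; delete[] h_c;
    return 0;
}
```

Compiled with nvcc and run on any CUDA-capable GPU, this covers the whole model in one page: the kernel is the only device code, and everything else is the host orchestrating memory and launches.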
Join Chris Mason, Product Manager at Acceleware, and explore the memory model of the GPU! The webinar will begin with an essential overview of the GPU architecture and thread cooperation before focusing on the different memory types available on the GPU. Chris will define shared, constant and global memory and discuss the best locations to store your application data for optimized performance. Features available in the Kepler architecture such as shared memory configurations and Read-Only Data Cache are introduced and optimization techniques discussed. A programming demonstration of shared and constant memory will be delivered.
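As a hedged sketch of the three memory spaces discussed here (my own example, not the webinar's demonstration), a 3-point stencil kernel touches all of them:

```cuda
#include <cuda_runtime.h>

// Constant memory: cached, read-only from kernels, ideal when every thread
// reads the same values (here, the stencil coefficients). Filled from the
// host with cudaMemcpyToSymbol.
__constant__ float coeff[3];

__global__ void stencil3(const float* in, float* out, int n) {
    // Shared memory: a fast per-block scratchpad visible to all threads in
    // the block; the +2 cells hold the halo. Launch with 256-thread blocks.
    __shared__ float tile[256 + 2];

    int gid = blockIdx.x * blockDim.x + threadIdx.x;  // global-memory index
    int lid = threadIdx.x + 1;                        // shared-memory index

    if (gid < n) tile[lid] = in[gid];  // stage global data into shared memory
    if (threadIdx.x == 0)              // left halo cell
        tile[0] = (gid > 0) ? in[gid - 1] : 0.0f;
    if (threadIdx.x == blockDim.x - 1) // right halo cell
        tile[lid + 1] = (gid + 1 < n) ? in[gid + 1] : 0.0f;
    __syncthreads();                   // all threads must see the staged tile

    // Each element is now read three times from fast shared memory instead
    // of three times from slow global memory.
    if (gid < n)
        out[gid] = coeff[0] * tile[lid - 1]
                 + coeff[1] * tile[lid]
                 + coeff[2] * tile[lid + 1];
}
```

The staging pattern above is the canonical reason to choose shared memory: it trades one global read per element for multiple cheap shared reads.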
Join Chris Mason, Product Manager at Acceleware, as he leads attendees in a deep dive into asynchronous operations and how to maximize throughput on both the CPU and GPU with streams. Chris will demonstrate how to build a CPU/GPU pipeline and how to design your algorithm to take advantage of asynchronous operations. The second part of the webinar will focus on dynamic parallelism.
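The pipeline idea can be sketched as follows (function and variable names are mine, not from the webinar): split the workload into chunks and queue each chunk's copy-compute-copy sequence on its own stream, so transfers overlap with kernel execution:

```cuda
#include <cuda_runtime.h>

__global__ void process(const float* in, float* out, int n);  // some kernel

// h_in/h_out must be pinned host memory (cudaMallocHost) for the
// asynchronous copies to truly overlap with compute.
void pipeline(const float* h_in, float* h_out,
              float* d_in, float* d_out, int n) {
    const int kStreams = 4;
    cudaStream_t streams[kStreams];
    for (int s = 0; s < kStreams; ++s) cudaStreamCreate(&streams[s]);

    const int chunk = n / kStreams;   // assumes n divisible by kStreams
    for (int s = 0; s < kStreams; ++s) {
        const int off = s * chunk;
        // All three operations are queued on stream s, so chunk s+1's
        // upload can run while chunk s's kernel is still executing.
        cudaMemcpyAsync(d_in + off, h_in + off, chunk * sizeof(float),
                        cudaMemcpyHostToDevice, streams[s]);
        process<<<(chunk + 255) / 256, 256, 0, streams[s]>>>(
            d_in + off, d_out + off, chunk);
        cudaMemcpyAsync(h_out + off, d_out + off, chunk * sizeof(float),
                        cudaMemcpyDeviceToHost, streams[s]);
    }
    cudaDeviceSynchronize();  // join all streams before using h_out
    for (int s = 0; s < kStreams; ++s) cudaStreamDestroy(streams[s]);
}
```

The host thread returns from each `cudaMemcpyAsync` and kernel launch immediately; only the final synchronize blocks, which is what frees the CPU to do useful work in parallel.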
Learn how to optimize your algorithms for NVIDIA GPUs. This informative webinar will provide an overview of the improved performance-analysis tools available in CUDA 6.0 and key optimization strategies for compute-, latency- and memory-bound problems. The webinar will include techniques for ensuring peak utilization of CUDA cores by choosing the optimal block size. For compute bound algorithms, Dan will discuss how to improve branching efficiency, use intrinsic functions, and apply loop unrolling. For memory bound algorithms, optimal access patterns for global and shared memory will be presented, including a comparison between the Fermi and Kepler architectures.
This webinar will serve as an introductory tutorial for anyone interested in accelerated computing using compiler directives. Participants will learn about OpenACC and a proven process for accelerating applications using compiler directives. No prior GPU or parallel programming experience is required to attend this webinar, but the ability to read and understand C, C++, or Fortran code is needed.
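To give a flavor of the directive-based approach covered here, a single pragma can offload an ordinary loop (a minimal sketch, assuming an OpenACC-capable compiler such as PGI; with other compilers the pragma is simply ignored and the loop runs serially):

```cpp
// SAXPY with OpenACC: the loop body is ordinary C/C++; the directive asks
// the compiler to generate the GPU kernel and manage data movement.
// copyin(x[0:n]) uploads x; copy(y[0:n]) uploads y and copies it back.
void saxpy(int n, float a, const float* x, float* y) {
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```

This incremental quality is the appeal of directives: the same source compiles and runs correctly with or without an accelerator.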
Software companies use frameworks such as .NET to target multiple platforms from desktops to mobile phones with a single code base in order to reduce costs by leveraging existing libraries and to cope with changing trends. While developers can easily write scalable parallel code for multi-core CPUs on .NET, they face a bigger challenge using GPUs to tackle compute intensive tasks. Alea GPU closes this gap by bringing GPU computing directly into the .NET ecosystem. In this hands-on webinar we show how you can write cross platform GPU accelerated .NET applications in any .NET language much easier than ever before. To follow the examples during the webinar, prepare your computer with Alea GPU and a free community license. Setup details can be found at http://bit.ly/1HL35a3. For more information, read Daniel's blog on Parallel Forall http://bit.ly/1Gu62Q3.
Join us for an overview of the key features in the new CUDA 7.5 release. Ujval Kapasi, director of product management, and members of the CUDA engineering team will share how you can store up to 2x larger datasets in GPU memory and reduce memory bandwidth requirements significantly using the new 16-bit floating point (FP16) data format. They’ll also cover the new cublasSgemmEx() routine that supports 2x larger matrices, cuSPARSE GEMVI routines for dense matrix x sparse vector operations that are ideal for natural language processing, and new instruction-level profiling capabilities that help you quickly pinpoint performance bottlenecks in your GPU code. Finally, you’ll see some examples of the new (experimental) GPU lambdas feature in action. Get a head start on the webinar by downloading the CUDA 7.5 Release Candidate at https://developer.nvidia.com/cuda-toolkit
With TotalView 8.9.2 and the NVIDIA CUDA add-on you can debug both the CPU and the GPU code in applications that use CUDA. You can set breakpoints, step, and dive in code running on the CUDA device using all the familiar TotalView GUI methods. TotalView supports unified virtual addressing as well as multi-device debugging, handles CUDA function in-lining and provides type qualification in the expression system. You can display how your logical threads are being mapped to hardware and navigate kernel threads using either hardware or logical coordinates. The webinar will also preview the upcoming TotalView 8.10 with support for CUDA 4.1. This webinar will provide examples of how to install and run the TotalView Debugger with CUDA across several programming examples; some common pitfalls will also be explained.
The next major advancement of GPUDirect™ technology is here. GPUDirect RDMA provides direct GPU-GPU communications across the network, resulting in a significant reduction in communication latency between remote GPUs and completely bypassing the CPU. This webinar will cover the latest schedule for GPUDirect RDMA, scaling and optimization techniques for maximizing application performance using MVAPICH2, and the latest advancements of CUDA.
Performance optimization is an important part of CUDA application development. The latest NVIDIA Visual Profiler included in CUDA 5.5 now features a guided performance-analysis mode that you can use to identify optimization opportunities in your application. This new feature will guide you step-by-step through the optimization process so that you can unlock the full potential of your CUDA application. Please join David Goodwin, NVIDIA Software Manager, as he demonstrates how this new feature can help you develop more optimized code.
Major defense and intelligence institutions are discovering just how effective GPU computing can be in enabling unique solutions for applications related to video analysis, recognition, and tracking. During this informative webinar, Kyle Spafford, Senior Software Developer, AccelerEyes, will explain how to accelerate common defense and intelligence algorithms using ArrayFire, a productive, easy-to-use GPU software library for C, C++, and Fortran.
The latest release of Nsight Visual Studio Edition enables OpenGL developers to debug OpenGL API calls with complete render state inspection, debug graphics shaders natively on the GPU, and perform Pixel History queries to find fragments that contribute to a given pixel. Through a comprehensive set of demonstrations, attendees will be able to familiarize themselves with the key concepts of the tool and learn how to exercise the various debugging features to address common OpenGL programming challenges.
The latest release of Nsight Visual Studio Edition puts powerful, to-the-metal profiling capabilities into the hands of OpenGL developers for the first time. Through a comprehensive set of demonstrations, attendees will learn how to best utilize Nsight to optimize their applications. These exercises will include source code instrumentation, system trace analysis, and advanced frame profiling using hardware performance counters to see the exact bottleneck for each draw call in the scene. These features empower developers to gain much more visibility into the performance behavior (both GPU and CPU) of the system and application to address common modern OpenGL performance pitfalls.
This webinar will cover the CUDA Toolkit as a build tool for compiling applications on high-performance computing clusters, focusing on the needs of cluster administrators and application packagers. Adam DeConinck, HPC Systems Engineer at NVIDIA, will walk through the compilation process for a CUDA C program built by the NVCC compiler and look at compiler options for targeting different NVIDIA GPUs. His talk will also cover some common issues encountered when compiling pre-existing CUDA applications, and briefly describe other tools for building CUDA applications including CUDA Fortran, OpenACC, and CUDA libraries.
The CUDA 5.5 RC is now available to CUDA Registered Developers on https://developer.nvidia.com/registered-developer-programs. Ujval Kapasi, NVIDIA CUDA Product Manager, will provide an overview of the new features of CUDA 5.5. CUDA Registered Developers already have access to the toolkit downloads, but we’ll provide brief instructions on how to download and install the binaries and how to submit bugs and issues.
This webinar, hosted by Dr. James Beyer, Compiler Engineer, Cray Inc., will briefly introduce two accelerator programming directive sets with a common heritage: OpenACC 2.0 and OpenMP 4.0 device constructs. After introducing the two directive sets, a side-by-side comparison of available features along with code examples will be presented to help developers understand the subtle and not so subtle differences in these directive sets.
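To illustrate the kind of side-by-side comparison described (my sketch, not Dr. Beyer's examples), here is the same loop offloaded with each directive set; without an offloading compiler the pragmas are ignored and both loops run serially:

```cpp
// The spellings map roughly as: acc parallel ~ omp target teams,
// gang ~ distribute, vector ~ parallel for, copyin/copyout ~ map(to/from).

// OpenACC 2.0
void scale_acc(int n, const float* a, float* b) {
    #pragma acc parallel loop copyin(a[0:n]) copyout(b[0:n])
    for (int i = 0; i < n; ++i) b[i] = 2.0f * a[i];
}

// OpenMP 4.0 device constructs
void scale_omp(int n, const float* a, float* b) {
    #pragma omp target teams distribute parallel for \
            map(to: a[0:n]) map(from: b[0:n])
    for (int i = 0; i < n; ++i) b[i] = 2.0f * a[i];
}
```

The loop bodies are identical; the differences the webinar explores live entirely in how each directive set names parallelism levels and data-movement clauses.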
Dynamic Parallelism is a great new feature introduced by NVIDIA in CUDA 5. As powerful features like this are introduced, the complexity of debugging and profiling often increases. This webinar will provide technical insight into how Allinea’s powerful tools can save the day if bugs come up when developing with Dynamic Parallelism. The webinar, presented by Mark O'Connor, Allinea’s VP of Product Development, provides a short journey through one engineer's efforts to make sample code work, showing how to debug and understand CUDA kernel execution with Allinea's unified at-scale debugging and profiling tools.
This webinar, hosted by Dr. James Beyer, Compiler Engineer at Cray Inc., will concentrate on using OpenACC 2.0 with the latest Cray Compilation Environment release. Sample codes will be used to show both how the compiler communicates the transformations it makes and the performance changes brought by the new features.
Join Chris Mason, Product Manager at Acceleware, to explore the memory model of the GPU, the memory enhancements available in the new Kepler architecture, and how these affect your performance optimization. The webinar will begin with an essential overview of GPU architecture and thread cooperation before focusing on the different memory types available on the GPU. We will define shared, constant, and global memory and discuss the best locations to store your application data for optimized performance. The shuffle instruction, new shared memory configurations, and the Read-Only Data Cache of the Kepler architecture are introduced and optimization techniques discussed.
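The three memory spaces discussed can be seen together in a single kernel; the kernel, names, and tile size below are an illustrative sketch, not code from the webinar:

```cuda
// Illustrative kernel touching global, shared, and constant memory.
__constant__ float coeff[16];   // constant memory: cached, read-only from kernels;
                                // the host would set it via cudaMemcpyToSymbol.

__global__ void smooth(const float *in, float *out, int n) {
    __shared__ float tile[256]; // shared memory: fast per-block scratchpad
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    tile[threadIdx.x] = in[i];  // read from global, stage into shared
    __syncthreads();            // make the staged data visible block-wide
    out[i] = tile[threadIdx.x] * coeff[0];  // shared + constant -> global
}
```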
Get the low-down on debugging and profiling your GPU program from Dan Cyca, Chief Technology Officer, Acceleware. This webinar dives deep into profiling techniques and the tools available to help you optimize your code. We will demonstrate NVIDIA’s Visual Profiler, nvcc flags, and cuobjdump, and highlight the various methods available for understanding the performance of your CUDA program. The second part of the webinar will focus on debugging techniques and the available tools to help you identify issues in your kernels. The latest debugging tools provided in CUDA 5.5, including Nsight and cuda-memcheck, will be presented.
Please join Ujval Kapasi, NVIDIA’s CUDA Product Manager, and members of the CUDA engineering team for an overview of the features of the new CUDA Toolkit 6. Learn how Unified Memory can dramatically reduce the complexity of managing your data across host and accelerator memory spaces. New enhanced drop-in GPU-accelerated libraries will help users get multi-GPU accelerator performance quickly and easily. CUDA 6 is the most powerful and easiest-to-use version of CUDA to date; it enables you to get the best possible performance from your existing Tesla and NVIDIA GPUs and prepare for next-generation GPUs. CUDA 6 Production is now available for download: www.nvidia.com/getcuda
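The simplification Unified Memory brings can be sketched in a few lines; the kernel and sizes are illustrative:

```cuda
// Unified Memory sketch (CUDA 6+): one pointer usable from both host and
// device, with no explicit cudaMemcpy between memory spaces.
__global__ void scale(float *data, int n, float s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= s;
}

int main() {
    const int n = 1024;
    float *data;
    cudaMallocManaged(&data, n * sizeof(float)); // managed allocation
    for (int i = 0; i < n; ++i) data[i] = 1.0f;  // host writes directly
    scale<<<(n + 255) / 256, 256>>>(data, n, 3.0f);
    cudaDeviceSynchronize();  // required before the host touches data again
    // data[i] is now readable on the host without any copy
    cudaFree(data);
    return 0;
}
```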
Please join Jonathan Cohen, supported by other members of the NVIDIA engineering team responsible for the new high-performance libraries that are part of the CUDA 6 Toolkit. In this webinar, the team will present the latest performance improvements and give attendees a chance to ask questions and even make suggestions for future enhancements - a must-attend webinar for any serious GPU computing developer. CUDA 6.0 Production is now available for download: www.nvidia.com/getcuda
In this webinar, we will show how to use NVIDIA GPUs to accelerate computationally intensive MATLAB applications in areas such as image processing, signal processing, and RADAR. We will demonstrate how you can speed up your MATLAB code by using built-in GPU-enabled functionality or by replacing key computations with CUDA kernels. We will also illustrate how MATLAB can be used for CUDA kernel evaluation, visualization, and validation.
The CUDA® 6.5 Production Release is now available to the public! Please join Ujval Kapasi, NVIDIA's CUDA Product Manager, as he talks through this latest version of the CUDA Toolkit. Highlights include:
• Support for 64-bit ARM-based systems
• Microsoft Visual Studio 2013 (VC12) support
• BSR sparse matrix format in cuSPARSE routines
• cuFFT callbacks for higher-performance custom processing on input or output data
• Improved debugging for CUDA Fortran applications
• Application Replay mode in both the Visual Profiler and the command-line profiler nvprof
• Updated CUDA Occupancy Calculator API that provides optimal kernel launch configurations
• New “nvprune” utility to remove portions of object files for specified GPU architectures
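The occupancy API mentioned above can be sketched as follows; the kernel and launch helper are illustrative names, not from the release notes:

```cuda
// Occupancy API sketch (CUDA 6.5+): ask the runtime to suggest a launch
// configuration for a given kernel.
__global__ void doubleAll(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

void launch(float *d, int n) {
    int minGridSize = 0, blockSize = 0;
    // Returns a block size that maximizes occupancy for this kernel
    // (last two args: dynamic shared memory per block, block size limit).
    cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, doubleAll, 0, 0);
    int gridSize = (n + blockSize - 1) / blockSize;
    doubleAll<<<gridSize, blockSize>>>(d, n);
}
```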
This talk describes cuDNN, NVIDIA’s CUDA library of deep learning primitives. Deep learning workloads tend to be time consuming, and a substantial fraction of their execution time is spent in a handful of computationally intensive functions. Optimizing these kernels for each new generation of hardware is a painstaking task that is, nonetheless, necessary, as it translates into significant improvements in overall execution time. For decades, similar concerns have been addressed in the HPC community via the BLAS library, which provides abstract implementations of the core linear algebra functions. The object of cuDNN is to provide a set of APIs similar in intent to BLAS, but directed toward deep learning applications. The library is easy to integrate into existing frameworks, providing optimized performance with low memory usage. It is also actively maintained across hardware generations, allowing neural network developers to focus their attention on the higher-level framework instead of low-level performance tuning. For instance, integrating cuDNN into Caffe improved overall performance by 36% on a standard model while also reducing memory consumption.
Learn about the performance improvements in the libraries and numerical functions included in the CUDA Toolkit 7.0.
The new NVIDIA® CUDA® Toolkit 8 brings major improvements to the memory model and profiling tools, along with new libraries. These enable you to improve performance, simplify memory usage, and profile and debug your application more efficiently.
Learn how updates to the CUDA toolkit improve the performance of GPU-accelerated applications. Through benchmark results, we will review the impact of new libraries, updates to memory management and mixed precision programming. The session will cover performance of CUDA toolkit components including libraries and the compiler.
The new CUDA Toolkit 8 includes support for Pascal GPUs, up to 2 TB of Unified Memory, and new automated critical-path analysis for effortless performance optimization. This is the most powerful and easiest-to-use version of the CUDA Toolkit to date.
As performance and functionality requirements of interdisciplinary computing applications rise, industry demand for new graduates familiar with accelerated computing with GPUs grows. This webinar introduces a comprehensive set of academic labs and university teaching material for use in courses leveraging introductory and advanced parallel programming concepts. The teaching materials start with the basics and focus on programming GPUs with CUDA, and go on to advanced topics such as optimization, advanced architectural enhancements, and integration of a variety of programming languages.
Learn about the new JetPack Camera API and start developing camera applications using the CSI and ISP imaging components available on Jetson TX1.
From advanced algorithms to large-scale datacenter implementations, GPUs are literally rewriting the rules for energy exploration and data processing. GPU-based solutions for HPC and visualization deliver exceptional-quality results in a timely and cost-effective manner that minimizes Total Cost of Ownership. By drawing on their collective experience in energy exploration and data processing, industry veterans from NVIDIA, R-Associates Inc., and Bright Computing will share:
- HPC trends in energy exploration and data processing - GPU-accelerated seismic processing clusters deliver 4-6X more throughput, and make high-resolution subsurface images affordable using advanced algorithms that improve drilling decisions
- Trends in energy-exploration visualization - virtualized GPUs improve economies of scale by “pushing pixels” to the “thin devices” of globally distributed exploration teams, keeping voluminous datasets secure within datacenters
- Pioneering innovations in non-blocking systems architecture that permit ultra-dense GPU configurations - 20 GPUs in a 7U chassis!
- Best practices for putting GPUs to productive use - automated installation of CUDA drivers, tools, and toolkits, GPU-specific metrics for monitoring and rules-based actions, plus health checks based on nvidia-healthmon
100x faster Monte Carlo simulations! 70x speedup in stochastic volatility models! Price complex derivatives in real time! In this GTC Express webinar, Gerald A. Hanweck, Jr., PhD, CEO and Founder of Hanweck Associates, LLC, will discuss his experiences using NVIDIA CUDA and GPUs to accelerate quantitative financial computation, including real-world performance gains in derivatives pricing and risk management applications, implementation guidelines and lessons learned, and the cost/benefit tradeoffs of porting legacy code -- or writing new code -- to CUDA.
The risks associated with Over-The-Counter (OTC) derivatives were a key contributing factor in the 2007/2008 global financial crisis. Since then, financial institutions have trained their risk management sights on Counterparty Credit Risk (CCR), i.e., the risk associated with a counterparty default before the end of an OTC contract. A CCR measure that has attracted particular attention is Credit Valuation Adjustment (CVA). CVA is computationally complex, and the Monte Carlo simulations used require massive computing power. Banks are striving to quickly respond to market and regulatory changes while maintaining the flexibility of their own in-house software. In this webinar, Hicham Lahlou, CEO & Co-founder, Xcelerit, will discuss real-world applications of GPUs in risk management. He will show, using the Xcelerit SDK, how the complexity of GPU programming can be overcome, allowing existing models and applications to be easily accelerated and extended, cutting software development and maintenance costs.
Tier 1 banks have been working on implementing CVA calculations in their front offices over the last few years, spending vast amounts of budget on their capability to price trades including CVA and making it more and more real-time. Solving this issue proved to be both a financial and systemic challenge. Since the emergence of GPU cards, some Tier 1 banks have moved away from the more traditional and costly approach of solving this with expensive server farms. In this webinar, Thomas Moser, Product Manager, Misys, will focus on what the industry has learned so far and how Tier 2 and 3 banks can benefit from these experiences without hiring additional quants or software developers and without significant investment in hardware, using an off-the-shelf product built on GPU and in-memory aggregation technology. The webinar will also feature:
- New technologies used by Tier 1 banks that demonstrate higher performance and have been integrated with existing trading systems
- What it takes to deploy these technologies in a bank while keeping the right level of proprietary models and other in-house requirements
- A Misys client presenting the cost savings and performance benchmarks achieved by moving to new technologies
Artificial intelligence, developmental psychology, neuroscience, and dynamical systems theory have directly inspired a novel approach called developmental robotics, a highly interdisciplinary subfield of robotics also known as epigenetic or ontogenetic robotics. In this webinar, Martin Peniak and Anthony Morse from the University of Plymouth (UK) will talk about Aquila, an open-source toolkit for robotics applications developed as part of the European projects ITALK and RobotDoC. The software provides many different tools and biologically inspired models useful for cognitive and developmental robotics research. Aquila addresses the need for high-performance robot control, typically confounded by the processing power limitations inherent in standard CPU architectures, by adopting the latest CUDA-based parallel processing paradigm.
Join Adam Coates of Baidu as he shows us how a cluster of GPUs has enabled his research group to train Artificial Neural Networks with more than 10 billion connections. "Deep learning" algorithms, driven by bigger datasets and the ability to train larger networks, have led to advancements in diverse applications including computer vision, speech recognition, and natural language processing. After a brief introduction to deep learning, we will show how neural network training fits into our GPU computing environment and how this enables us to duplicate deep learning results that previously required thousands of CPU cores.
Convolutional Networks (ConvNets) have become the dominant method for a wide array of computer perception tasks including object detection, object recognition, face recognition, image segmentation, visual navigation, handwriting recognition, as well as acoustic modeling for speech recognition and audio processing. ConvNets have been widely deployed for such tasks over the last two years by companies like Facebook, Google, Microsoft, NEC, IBM, Baidu, and Yahoo, sometimes with levels of accuracy that rival human performance. ConvNets are composed of multiple layers of filter banks (convolutions) interspersed with point-wise non-linearities and spatial pooling and subsampling operations. ConvNets are a particular embodiment of the concept of "deep learning" in which all the layers in a multi-layer architecture are subject to training. This is unlike more traditional pattern recognition architectures that are composed of a (non-trainable) hand-crafted feature extractor followed by a trainable classifier. Deep learning allows us to train a system end to end, from raw inputs to ultimate outputs, without the need for a separate feature extractor or pre-processor. This presentation will demonstrate several practical applications of ConvNets. ConvNets bring the promise of real-time embedded systems capable of impressive image recognition tasks, with applications to smart cameras, mobile devices, automobiles, and robots.
Significant advances have recently been made in the fields of machine learning and image recognition, aided greatly by the use of NVIDIA GPUs. Join Matthew Zeiler, CEO of Clarifai, as he explains how Clarifai harnesses leading-performance deep neural networks trained on millions of images to predict thousands of categories of objects. Clarifai's expertise in deep neural networks helped it get its start by achieving the world's best published image-labeling results (ImageNet 2013). Clarifai uses NVIDIA GPUs to train large neural networks within practical time constraints and has created an API to enable the next generation of intelligent applications in a variety of fields. This talk will describe what these neural networks learn from natural images and how they can be applied to auto-tagging new images, searching large untagged photo collections, and detecting near-duplicates. A live demo of Clarifai's state-of-the-art system will showcase these capabilities and allow audience interaction.
This tutorial, led by Evan Shelhamer, Ph.D. student at UC Berkeley and lead developer of the Caffe deep learning framework, is designed to equip researchers and developers with the tools and know-how needed to incorporate deep learning into their work. Both the ideas and implementation of state-of-the-art deep learning models will be presented. While deep learning and deep features have recently achieved strong results in many tasks, a common framework and shared models are needed to advance further research and applications and reduce the barrier to entry. To this end we present the Caffe – Convolutional Architecture for Fast Feature Embedding – framework that offers an open-source library, public reference models, and worked examples for deep learning. Join the tour from the 1989 LeNet for digit recognition to today's top ILSVRC2014 vision models. This tutorial focuses on vision, but includes coverage of general techniques and tools.
The new cuDNN v2 drop-in library accelerates deep learning applications using Caffe, Theano or Torch. Join NVIDIA’s Larry Brown for an update and learn how you can accelerate your deep neural net training.
Join Allison Gray, Solution Architect at NVIDIA, for this webinar to learn more about how you can leverage DIGITS. The NVIDIA Deep Learning GPU Training System (DIGITS) puts the power of deep learning in the hands of data scientists and researchers. Quickly design the best deep neural network (DNN) for your data using real-time network behavior visualization. Best of all, DIGITS is a complete system so you don’t have to write any code. Get started with DIGITS in under an hour.
Join Allison Gray of NVIDIA, for a walk-through of the new features in the second release of NVIDIA’s Deep Learning GPU Training System (DIGITS). Engage with NVIDIA deep learning experts as they demonstrate how you can train your image classification networks up to 2x faster using the new automatic multi-GPU scaling feature, assign training jobs to the fastest available GPUs, and quickly deploy trained models in your own applications using inference sample code.
This webinar takes the example of the WNDCHRM image classification application and the image processing advantage that an NVIDIA GPU brings to the table. We achieved a 20X performance improvement using CUDA. Image processing technology has significantly improved the ability to interpret and analyze medical imaging content such as X-ray scans, CT scans, ECG output, endoscopy, and ultrasound. Medical professionals can now make life-saving decisions with real-time diagnosis aided by image processing technology. While the example is specific to WNDCHRM, the basis of the performance improvement remains CUDA parallel programming, which applies to other medical image processing requirements as well. The webinar is ideal for engineering professionals from the medical devices and healthcare industries.
GPU-enabled simulation of fully atomistic macromolecular systems is rapidly gaining momentum, enabled by the massive parallelism of GPUs and the parallelizability of various components of the underlying algorithms and methodologies. The massive parallelism, on the order of several hundred to a few thousand cores, presents opportunities as well as implementation challenges. In this webinar, Michela Taufer, Assistant Professor, Department of Computer and Information Sciences, University of Delaware, discusses key aspects of simulation methodologies for macromolecular systems specifically adapted to GPUs. She will also visit some of the underlying challenges and the solutions devised to tackle them.
This webinar showcases the latest GPU-acceleration technologies available to AMBER users and discusses features, recent updates and future plans. Join us to learn how to obtain the latest accelerated versions of AMBER, which features are supported, the simplicity of its installation and use, and how it performs with Kepler GPUs.
Learn about the first multi-node, multi-GPU-enabled release, 4.6, of GROMACS from Dr. Erik Lindahl, the project leader for this popular molecular dynamics package. GROMACS 4.6 allows you to run your models up to 3X faster compared to the latest state-of-the-art parallel AVX-accelerated CPU code in GROMACS. Dr. Lindahl will talk about the new features of the latest GROMACS 4.6 release as well as future plans. You will learn how to download the latest accelerated version of GROMACS and which features are GPU-supported. Dr. Lindahl will cover GROMACS performance on the very latest NVIDIA Kepler hardware and explain how to run GPU-accelerated MD simulations. You will also be invited to try GROMACS on a K20 with a free test drive and experience all the new features and enhanced performance for yourself: http://www.nvidia.com/gputestdrive
Shape is a fundamental three-dimensional molecular property and a powerful descriptor for molecular comparison and similarity assessment; similarity in shape has proven to be a very effective method for predicting similarity in biology. As such, shape-based virtual screening (searching a database of molecules for those compounds that are similar in shape to a molecule with desirable biological activity) has become an integral part of computational drug discovery, due to both its speed and efficacy. OpenEye’s recent port of their shape similarity application, ROCS, to the GPU has resulted in a virtual screening tool of unprecedented power – FastROCS. FastROCS’ speed allows it to perform large-scale calculations of a kind inaccessible in the past (shape comparisons of millions of molecules to one another) and has accelerated more routine shape searching to the point that it has become competitive with more traditional, but less effective, two-dimensional methods. Join Paul Hawkins, Applications Science Group Leader at OpenEye, as he presents some recent performance data on FastROCS on NVIDIA hardware and discusses some of the new applications that this speed has enabled. You will also be invited to take the Tesla K20 for a free test drive and experience all the new features and enhanced performance for yourself: www.nvidia.com/gputestdrive
Join Dr. Juan R. Perilla and learn how, in a tour-de-force effort, experimental and computational scientists at the University of Illinois at Urbana–Champaign and the University of Pittsburgh have now resolved the HIV capsid's chemical structure. As reported recently on the cover of Nature, the researchers combined NMR structure analysis, electron microscopy, and data-guided molecular dynamics simulations - utilizing VMD to prepare and analyze simulations performed using NAMD on NVIDIA GPUs in one of the most powerful computers worldwide, Blue Waters - to obtain and characterize the HIV-1 capsid. The discovery can now guide the design of novel drugs for enhanced antiviral therapy. Also learn how NAMD performs with the latest Kepler GPUs, as well as details about the GPU Test Drive (www.nvidia.com/GPUTestDrive) and how to try NAMD on Kepler GPUs for free.
Join Acellera founder Gianni De Fabritiis and CTO Matt Harvey to learn about the latest developments in high-throughput molecular dynamics, both in terms of applications and methodological advances. Examples will be given in the context of ACEMD, a highly efficient, best-in-class GPU-centric code for running MD simulations, and its protocols. In particular, attendees will learn how the high arithmetic performance and intrinsic parallelism of the latest NVIDIA Kepler GPUs can offer a technological edge for molecular dynamics simulations. Microsecond-to-millisecond molecular dynamics on accelerator hardware, which will have important methodological and scientific implications, will be highlighted. This webinar presents a great opportunity for industrial scientists to get an overview of the current achievements in molecular simulations for medicinal chemistry.
VMD is a tool for preparing, analyzing, and visualizing molecular dynamics simulations, with particular emphasis on large biomolecular systems, including drug targets such as the bacterial ribosome, and large viruses such as HIV. The computational challenges posed by large simulations present a significant hurdle for simulation and analysis tools. GPUs provide unprecedented computational capabilities at a very low cost, making it possible for applications like VMD to accelerate tasks that would otherwise be beyond our grasp. The ubiquitous nature of powerful GPUs on hardware ranging from tablets to supercomputers has allowed us to make a significant investment in developing GPU algorithms for a broad range of uses covering everything from ion placement during simulation preparation to photorealistic ray tracing of movies on hundreds of supercomputer nodes. Join us for this webinar as John Stone, Senior Research Programmer, University of Illinois, provides an overview of the GPU-accelerated features of VMD and how they can be used to speed up a wide range of simulation preparation, analysis, and visualization tasks today, along with a roadmap of things to come in the future.
This webinar will provide an overview of the AMBER Molecular Dynamics software package, with a focus on what is new with regard to GPU acceleration in the recently released version 14. This includes details of peer-to-peer support and optimizations, which have resulted in version 14 being the fastest MD software package on commodity hardware. Benchmarks will be provided, along with recommended hardware choices. In addition, an overview of the new GPU-centric features in AMBER 14 will be covered, including support for multi-dimensional replica exchange MD, hydrogen mass repartitioning, accelerated MD, scaled MD, and support-as-a-service on Amazon Web Services. This is a joint webinar by Ross C. Walker, University of California San Diego, and Adrian Roitberg, University of Florida.
This is a first snapshot of the heterogeneous CPU+GPU Molecular Dynamics (MD) in CHARMM and its performance and accuracy. The GPU is used only for the direct part of the forces; the CPU computes all other contributions (reciprocal, bonded, SHAKE, etc.). The GPU code was implemented natively in CHARMM using CUDA C. The MD engine is built around the DOMDEC domain decomposition code and therefore naturally enables MD simulations on multiple CPU+GPU nodes. We will present discoveries that used features implemented in DOMDEC_GPU, showing the current usefulness of the code and GPUs for biomolecular simulation, advanced sampling techniques, and for enabling DOE/NREL efforts toward affordable consumer biofuels.
Join us for the free Introduction to OpenACC course this month, October 2016. The course comprises three instructor-led classes that include interactive lectures with dedicated Q&A sections and hands-on exercises. You’ll learn everything you need to start accelerating your code with OpenACC on GPUs and CPUs. The course will cover how to analyze and parallelize your code, as well as how to perform optimizations like managing data movement and utilizing multiple GPUs.
Join NVIDIA and IBM for a tour of IBM’s new Power Systems S822LC featuring POWER8 processors coupled with NVIDIA® Tesla® P100 GPUs using NVIDIA NVLink™ Technology. We’ll detail the features and specifications of the fattest, flattest architecture for data movement available in any server, including early HPC application performance results. We’ll also show you how you can get started using OpenACC to realize leaps in performance and ease-of-programming for your application out-of-the-box, including real coding examples and best practices developed through early client adoption programs.
Recent advances in reformulating electronic structure algorithms for stream processors such as graphics processing units have made DFT calculations on systems comprising up to O(10^3) atoms feasible. Simulations on such systems that previously required half a week on traditional processors can now be completed in only half an hour. Join Professor Heather Kulik, Massachusetts Institute of Technology, as she discusses how she leverages these GPU-accelerated quantum chemistry methods in the code TeraChem to investigate large-scale quantum mechanical features in applications ranging from protein structure to mechanochemical depolymerization. In each case, large-scale and rapid evaluation of electronic structure properties is critical for unearthing previously poorly understood properties and mechanistic features of these systems. Professor Kulik will also discuss outstanding challenges in the use of Gaussian localized-basis-set codes on GPUs pertaining to limitations in basis set size, and how she circumvents such challenges to computational efficiency with systematic, physics-based error corrections for basis set incompleteness.
In this webinar, Adam Jull, CEO, IMSCAD, will reference some customers who have deployed successfully and explain some of the challenges they faced and how they overcame them. Speed up your proof of concept by learning how implementing a trusted and tested process for CAD virtualization can get you to production rollout much faster. The webinar will focus on three customers - an architect, a construction firm, and a manufacturing customer - and how the combination of NVIDIA GRID, Citrix, and IMSCAD services delivered results that have brought far-reaching benefits to their businesses.
Join Steve Harpster, Solution Architect, NVIDIA, for this technical webinar and learn how to set up NVIDIA GRID with VMware Horizon View vDGA. You’ll also discover how to optimize the virtual machines to get the best performance for your demanding 3D workloads and what to consider when planning for scalability and density. Key takeaways from the webinar include:
- Why GPUs (Graphics Processing Units) help in virtual desktops and applications
- How to demo, pilot, and deploy GPU-accelerated virtual desktops and applications
- Tips and tricks for tuning GRID-enabled virtual machines
- Planning and guidance for the right virtual machine setup based on common applications
Join Vineet Batra of Adobe as he covers a real-world application of NVIDIA's path rendering technology (NVPR) for accelerating 2D vector graphics based on the Adobe PDF model. He will demonstrate the use of this technology for real-time, interactive rendering in Adobe Illustrator CC. The substantial performance improvement is primarily attributed to NVPR's ability to render complex cubic Bezier curves independently of device resolution. Further, he will also discuss the use of NVIDIA's blend extension to support compositing of transparent artwork in conformance with the Porter-Duff model, using 8X multisampling and per-sample fragment shaders. Using these technologies, Adobe achieves performance of 30 FPS when rendering and scaling a complex artwork consisting of a hundred thousand cubic Bezier curves with ten thousand blend operations per frame on a GTX 780 Ti graphics card.
Dave Coldron of Lightwork Design and Peter de Lappe of NVIDIA will introduce you to Iray+ in 3ds Max, showing how lightning-fast interactive rendering can transform workflows. We will introduce you to Iray+ materials powered by MDL, which enable editing based on real-world manufacturing concepts, and show how you can adapt and edit materials directly within the 3ds Max Slate and Compact material editors. Plus, see how the NVIDIA VCA delivers final-frame-quality interactivity to Iray+ in 3ds Max.
Peter de Lappe of NVIDIA and Christoph Berndt of [0x1] Software Consulting will demonstrate how physically based rendering with NVIDIA Iray can accelerate your Maya workflow. [0x1] IrayforMaya delivers GPU accelerated final frame quality interactive design review and look development - on a workstation or supercharged on a cluster of NVIDIA VCAs. We will teach you how to use light path expressions for compositing and introduce you to material creation with MDL.
Learn about intuVision’s faster-than-real-time video surveillance analytics on GPUs and how the OpenCV library was used to build this technology. Dr. Sadiye Guler, the founder of intuVision, Inc., will introduce intuVision's video object detection and tracking algorithms. In intuVision video tracking systems, parallel operations such as background model generation, updates, and new-frame-to-background comparisons, as well as image filtering operations, are performed much faster on GPUs, resulting in large savings in processing time. By performing only the inherently non-parallel operations on the CPU, and utilizing multi-thread processing on the GPU, the overall computational performance of video analytics experiences a significant boost beyond real-time.
Face-in-the-crowd recognition capability is at the heart of ambitious city-wide rollouts such as the Safe City Test Bed in Singapore. City-wide CCTV processing means the analysis of many thousands of video feeds requiring enormous computing power. A key component of the implementation and rollout strategy is NVIDIA GPU technology. GPUs can speed up both face recognition and face detection. In one implementation, the Robot Operating System (ROS) from Willow Garage was used to distribute the processing tasks across potentially thousands of networked CPUs and GPUs to form a completely scalable architecture. Join Brian Lovell and Stephen Brain from Imagus Technology as they introduce the recently released iFace Library and show how GPU technology can be used to address bottlenecks at each stage of processing. The presentation will also cover the speakers’ use of ROS, OpenCV, OpenMP, and Armadillo libraries to develop fast reliable distributed video processing code.
NVIDIA GPUs have been used to accelerate visual effects in movies for over a decade. We have witnessed them mature from graphics acceleration hardware to generalized supercomputing co-processors. At the same time, we have seen the complexity of rendering and the fidelity of simulations in movie FX increase exponentially. In this webinar, Wil Braithwaite, Senior Applied Engineer, NVIDIA, examines the current state of the art of GPU-accelerated HPC at leading VFX studios, and provides a glimpse into the future of how next-generation GPUs may be used to change the way movies are made.
Join us for this webinar to learn how Geoweb3d uses the GPU for real-time geospatial 3D visualization, modeling, and analytics. Bob Holicky, the president of Geoweb3d, will demonstrate how native, high-resolution datasets including GIS, CAD, 3D models, LIDAR, and FMV are fused together in real time with game-quality graphics and pixel-accurate analysis. Users across the commercial, defense, and intelligence communities use Geoweb3d when cost, speed, performance, and accuracy matter. The 3D engine uses a GPU-resident mesh that adapts to any resolution data on the fly, eliminating the need to preprocess data prior to real-time use. The demonstration will include Geoweb3d Mobile, which now uses HTML5 for use on any device in the cloud, including phones and tablets.