Abstract:
Modern workstation applications demand a tightly coupled compute-graphics pipeline in which the simulation and the graphics run interactively and in parallel. Using multiple GPUs gives such applications an affordable way to improve performance and increase their usable data size by partitioning the processing and the subsequent visualization across devices. This session explains the methodologies for programming an application in a multi-GPU environment, including: how to structure an application to optimize compute-graphics performance and manage synchronization; how to manage efficient data transfers across the PCIe bus; debugging and profiling; and programming considerations when scaling beyond two GPUs, such as multiple compute GPUs feeding one or more graphics GPUs. Throughout the session, OpenGL and CUDA code examples designed for a single GPU will be modified to work efficiently in a multi-GPU environment.
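
The sketch below is not taken from the session material; it is a minimal, hedged illustration of the kind of change the abstract describes, assuming a system with at least two CUDA-capable GPUs where device 0 plays a hypothetical "compute" role and device 1 a hypothetical "graphics" role. It shows device enumeration, enabling peer access, and an asynchronous device-to-device transfer across PCIe using standard CUDA runtime calls.

// Minimal sketch (assumes at least two GPUs; device roles are illustrative only):
// enumerate devices, enable peer access where available, and stage an
// asynchronous copy from a "compute" GPU to a "graphics" GPU on its own stream.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) {
        printf("This sketch assumes at least two GPUs.\n");
        return 0;
    }

    const int computeDev  = 0;   // hypothetical role: simulation
    const int graphicsDev = 1;   // hypothetical role: rendering

    // Enable peer access so the copy can avoid staging through host memory.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, graphicsDev, computeDev);
    cudaSetDevice(graphicsDev);
    if (canAccess) cudaDeviceEnablePeerAccess(computeDev, 0);

    const size_t bytes = 1 << 20;
    float *srcOnCompute = nullptr, *dstOnGraphics = nullptr;

    cudaSetDevice(computeDev);
    cudaMalloc(&srcOnCompute, bytes);

    cudaSetDevice(graphicsDev);
    cudaMalloc(&dstOnGraphics, bytes);

    // Asynchronous device-to-device transfer on a dedicated stream,
    // leaving the default streams free for compute and rendering work.
    cudaStream_t copyStream;
    cudaStreamCreate(&copyStream);
    cudaMemcpyPeerAsync(dstOnGraphics, graphicsDev,
                        srcOnCompute, computeDev,
                        bytes, copyStream);
    cudaStreamSynchronize(copyStream);

    cudaStreamDestroy(copyStream);
    cudaFree(dstOnGraphics);
    cudaSetDevice(computeDev);
    cudaFree(srcOnCompute);
    return 0;
}

In a real application the synchronization point would typically be an event or fence coordinated with the rendering loop rather than a blocking stream synchronize, which is one of the trade-offs the session discusses.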