Workstation applications today demand a tightly coupled compute-graphics pipeline in which simulation and rendering run interactively and in parallel. Multiple GPUs offer an affordable way for such applications to improve performance and increase their usable data size by partitioning the processing, and the subsequent visualization, among several GPUs. This tutorial explains how to program your application for a multi-GPU environment. Part 1 covers GPU resource allocation and system configuration, including: what to expect when you add GPUs to your system; how to select, query, and allocate the necessary GPU resources; and a rudimentary introduction to profiling and analysis tools. Throughout the tutorial, simple OpenGL and CUDA examples designed for a single GPU are modified to work efficiently in a multi-GPU environment.
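As a concrete starting point for selecting and querying GPU resources, device enumeration with the CUDA runtime API can be sketched as follows. This is a minimal illustration, not the tutorial's prescribed policy; in particular, the selection criterion shown here (picking the device with the most global memory) is an assumption chosen for demonstration.

```cuda
// Sketch: enumerate CUDA devices, query their properties, and bind the
// host thread to one of them. The "most global memory" selection rule
// below is an illustrative assumption, not a recommended policy.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess || deviceCount == 0) {
        std::fprintf(stderr, "No CUDA-capable devices found.\n");
        return 1;
    }

    // Query each device's properties before deciding how to partition work.
    int best = 0;
    size_t bestMem = 0;
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        std::printf("Device %d: %s, %zu MiB global memory, compute %d.%d\n",
                    dev, prop.name, prop.totalGlobalMem >> 20,
                    prop.major, prop.minor);
        if (prop.totalGlobalMem > bestMem) {
            bestMem = prop.totalGlobalMem;
            best = dev;
        }
    }

    // Bind this host thread to the chosen device; subsequent CUDA calls
    // (allocations, kernel launches) target it until cudaSetDevice is
    // called again. In a multi-GPU design, each worker thread would
    // typically call cudaSetDevice with its own device index.
    cudaSetDevice(best);
    std::printf("Selected device %d\n", best);
    return 0;
}
```

In a multi-GPU application, a loop like this typically runs once at startup; each CPU worker thread then calls cudaSetDevice with its assigned index so that work is partitioned across all available devices.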