Abstract:
We'll take a deep dive into best practices and real-world examples of leveraging the power and flexibility of local GPU workstations, such as the DGX Station, to rapidly develop and prototype deep learning applications. We'll demonstrate the setup of our small lab, which is capable of supporting a team of several developers and researchers, and describe our journey as we moved from lab to data center. Specifically, we'll walk through our experience building the TensorRT Inference Demo, aka Flowers, used by Jensen to demonstrate the value of GPU computing at GTCs worldwide. The flexibility for fast prototyping provided by our lab was an invaluable asset as we experimented with different software and hardware components. As the models and applications stabilized and we moved from lab to data center, we were able to run fully load-balanced video inference over 64 V100s, demonstrating Software-in-the-Loop (SIL) ReSim capabilities for Autonomous Vehicles at GTC EU. Real, live examples will be given. As an added bonus, you'll get first-hand insights into the latest advancements coming to AI workstations this year.