Running deep learning inference on embedded platforms typically means deploying a pretrained model: hyperparameter search and training are usually performed on a workstation or large-scale system to obtain the best model first. In this talk, we'll walk through framework-based examples showing how to train models on a workstation and deploy them on embedded platforms such as the NVIDIA® Jetson™ TX1 or NVIDIA DRIVE™ PX. We'll also present dedicated tools for monitoring performance and debugging issues on embedded platforms, making demo setup easy. The talk will include a live demo session.