We'll discuss how to get started with PyTorch from the creator of the project, Soumith Chintala. PyTorch is a fast and flexible deep learning framework that has been called a "breath of fresh air" by researchers and developers alike for its ease of use, flexibility, and similarity to Python programming. It consists of an ndarray library that natively supports GPU execution, an automatic differentiation engine that is flexible and fast, and an optimization package for gradient-based optimization methods.
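The three components can be sketched in a few lines. This is a minimal, illustrative example (the tiny network, data, and hyperparameters here are assumptions for demonstration, not from the talk); it falls back to the CPU when no GPU is present.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# 1. ndarray library: tensors, placed on the GPU when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(64, 10, device=device)  # illustrative inputs
y = torch.randn(64, 1, device=device)   # illustrative targets

# 2. automatic differentiation: gradients are recorded as operations run.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()  # populates .grad on every parameter

# 3. optimization package: one gradient-based update step.
opt = optim.SGD(model.parameters(), lr=0.01)
opt.step()
opt.zero_grad()
```

Because autograd traces operations as they execute, the same pattern works unchanged for models with data-dependent control flow.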
In this session, we'll introduce a new framework for scientific computing, aimed mainly at deep learning workloads. The framework consists of an ndarray library that natively supports GPU execution, an automatic differentiation engine that is flexible and fast, and an optimization package for gradient-based optimization methods. We'll discuss practical workflows and the features we build on top of Python multiprocessing for efficient parallel data loaders, and finally we'll take a brief look at our upcoming just-in-time tensor compiler, which fuses computations to execute them more efficiently.
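The parallel data loading mentioned above looks roughly like the sketch below: a `DataLoader` with `num_workers > 0` forks worker processes that fetch and batch samples in the background. The dataset here is a stand-in invented for illustration.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class RandomVectors(Dataset):
    """Illustrative dataset: 100 fixed random vectors with scalar targets."""
    def __init__(self, n=100, dim=10):
        g = torch.Generator().manual_seed(0)
        self.x = torch.randn(n, dim, generator=g)
        self.y = torch.randn(n, 1, generator=g)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, i):
        return self.x[i], self.y[i]

# num_workers=2 spawns two worker processes that prefetch batches in parallel,
# overlapping data preparation with computation on the main process.
loader = DataLoader(RandomVectors(), batch_size=25, shuffle=True, num_workers=2)
batch_shapes = [tuple(xb.shape) for xb, yb in loader]
```

Because loading runs in separate processes, Python's global interpreter lock does not serialize it against the training loop.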
AI research has seen many shifts in the last few years. We've seen research and deployments go from using static datasets such as ImageNet to being more dynamic and online in self-driving cars, robots, and game-playing. Paradigm shifts in AI research need new tools to enable this research. We'll introduce and talk about PyTorch -- a new deep learning framework that enables cutting-edge AI research by having a complete dynamic view of the world.
Facebook AI Research (FAIR) in partnership with NVIDIA has designed a scale-out infrastructure built on NVIDIA DGX-1. This initiative began with an extensive evaluation of design approaches for multi-system scale, as well as considerations for networking and storage supporting one of the world's largest DGX-1 clusters. Attend this session to gain valuable insights into how one of the world's leading AI innovators is building a scale-out infrastructure for deep learning, learn architectural best practices, and participate in Q&A with featured panelists from FAIR and NVIDIA.
Deep learning is an emerging subfield of machine learning, often involving compute-intensive but embarrassingly parallel problems. We'll give a very brief background on deep learning, discuss typical computational workloads from a systems perspective, and finally give an overview of building deep learning systems that scale across multiple GPUs, machines, and clusters. We'll also discuss the current frameworks and tools used in the deep learning space, such as Torch, Theano, TensorFlow, Caffe, and MXNet.
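One common pattern for single-machine multi-GPU scaling, sketched here in PyTorch since the sessions above use it, is data parallelism: the model is replicated across devices and each replica processes a slice of the batch. This is an illustrative sketch (the model and batch are assumptions); it degrades gracefully to a single device when fewer than two GPUs are visible.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # illustrative model
if torch.cuda.device_count() > 1:
    # Replicate the model on each GPU, scatter the input batch across
    # replicas, run them in parallel, and gather the outputs on one device.
    model = nn.DataParallel(model).cuda()
out = model(torch.randn(8, 10))
```

Scaling beyond one machine typically moves to distributed data parallelism, where each process holds a replica and gradients are averaged across processes after each backward pass.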