Learn the benefits that virtualization provides for an architecture and engineering design firm, along with the journey through advancements in virtualization technology it took to finally meet the graphics-intensive needs of our design software. We'll share our experiences of how virtualization allows a large company, with over 15 offices and 1,000 people worldwide, to collaborate and work as a single firm. We'll show some cost comparisons for virtualization, along with its management benefits and requirements. We'll also look at the methods we used to set and test metrics specific to our requirements, and follow the results of those metrics through the changes in graphics virtualization technology.
We'll discuss Bunsen, a large-scale visualization framework that prepares and optimizes engineering, architectural, and other CAD and CAM data. Bunsen is a cloud-hosted solution that reads and writes various industry standard file formats (for example, Revit, SOLIDWORKS, Rhino, Maya, Max, Siemens, and Microstation) and provides powerful tools for processing and conversion. It runs on public cloud solutions, such as AWS or Google, or within your own data center or on-prem cloud. All hardware and software are provisioned in the cloud and are usable from any laptop, tablet, or phone with a web browser. Within Bunsen, the user can create sets of reusable rules to process data for visualization and output. You can think of these rules as company standards relating to lighting, materials, colors, and how to reduce object complexity. Possible visualization output platforms include rendering and animation, virtual reality, augmented reality, and real-time game engines, such as Unreal and Unity. Bunsen doesn't mean you change your workflow -- it is a framework to automate, document, and accelerate your existing workflows.
We'll present, in a case study driven presentation, specific examples of how GPU-enabled deep neural networks are powering new methods for analyzing the content of photos and videos from industrial contexts. First, we'll present a collaboration between Smartvid.io and Engineering News-Record, the leading publication in the architecture, engineering, and construction vertical. This ongoing initiative leverages computer vision techniques and semantic approaches to help identify and indicate safe and unsafe situations in jobsite photos. Second, we'll present a collaboration with Arup, a London-based engineering firm, on the use of specific classifiers to localize and measure cracks and related defects in infrastructure.
Learn how Gensler is using the latest technology in virtual reality across all aspects of the design process for the AEC industry. We'll cover how VR has added value to the process when using different kinds of VR solutions. Plus we'll talk about some of the challenges Gensler has faced with VR in terms of hardware, software, and workflows. Along with all of this, NVIDIA's latest VR visualization tools are helping with the overall process and realism of our designs.
Learn about the unique challenges being solved using deep learning on GPUs in a large-scale mass customization of medical devices. Deep neural networks have been successfully applied to some of the most difficult problems in computer vision, natural language processing, and robotics. But we still haven't seen the full potential of this technology used in manufacturing. Glidewell Labs daily produces thousands of patient specific items, such as dental restorations, implants, and appliances. Our goal is to make high-quality restorative dentistry affordable to more patients. This goal can only be achieved with flexible, highly autonomous CAD/CAM systems, which rely on AI for real-time decision making.
Honda's evolutionary new project, internally called the "Next-gen Engineering Workstation (EWS) Project," is designed to optimize usage of our CAD-VDI environment for R&D offices and factories. The project's challenge is to move from the existing physical EWS and pass-through VDI environments to an NVIDIA GRID vGPU environment, all while improving user density (CCU/server), usage monitoring, resource optimization for designers, and flexible resource reallocation. Honda successfully deployed more than 4,000 concurrent CAD-VDI users in its initial phase, with aggressive plans to further increase utilization. This session will review the project's challenges and Honda's future vision.
Improvements in 3D printing allow for unique processes, finer details, better quality control, and a wider range of materials as printing hardware improves. With these improvements comes the need for greater computational power and control over 3D-printed objects. We introduce NVIDIA GVDB Voxels as an open source SDK for voxel-based 3D printing workflows. Traditional workflows are based on processing polygonal models and STL files for 3D printing. However, such models don't allow for continuous interior changes in color or density, for descriptions of heterogeneous materials, or for user-specified support lattices. Using the new NVIDIA GVDB Voxels SDK, we demonstrate practical examples of design workflows for complex 3D printed parts with high-quality ray-traced visualizations, direct data manipulation, and 3D printed output.
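GVDB itself is a CUDA SDK, but the core idea of a voxel-based workflow (an occupancy grid that can be sliced, queried, and varied continuously, unlike a polygonal STL surface) can be sketched framework-free. Below is a plain NumPy illustration; the sphere shape, grid size, and slice index are our own stand-ins, not GVDB API calls:

```python
import numpy as np

# Voxelize a sphere into a boolean occupancy grid, then extract one
# printer layer (Z-slice), the basic unit a 3D printer consumes.
n = 32
ax = np.arange(n) - n / 2 + 0.5              # voxel-center coordinates
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
voxels = (X**2 + Y**2 + Z**2) <= (n / 3) ** 2  # True inside the sphere

layer = voxels[:, :, n // 2]    # one mid-height slice for the printer
density = voxels.mean()          # fill fraction of the bounding box
```

Because the representation is volumetric, interior density or material could be varied per voxel, which is exactly what a surface mesh cannot express.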
We'll present how deep learning is applied on a manufacturer's production line. Fujikura and OPTOENERGY are introducing a visual inspection system incorporating deep learning into the production process for semiconductor lasers. The same inspection accuracy as skilled workers was achieved by optimizing the image size and the hyperparameters of a CNN model. The optimized image size is less than one-quarter of the image size required for visual inspection by skilled workers, which leads to a large cost reduction for the production line. It was also confirmed that the regions highlighted in the heatmaps of NG (defective) images were those that failed the visual inspection criteria. Visual inspection incorporating deep learning is now being applied to other products, such as optical fibers and electrical cables.
Hundreds of talks and competing events crammed into a few days can be daunting. Get an overview of GTC's programs and events and how to make best use of them from Greg Estes, NVIDIA's VP of developer programs. Addressing both first-timers and returning alums, Greg will cover how to get the most from your time here, including can't-miss talks and never-before-seen tech demos. He'll also cover NVIDIA's resources for developers, startups, and larger organizations, as well as training courses and networking opportunities.
A fireside chat with U.S. Rep. Jerry McNerney (D-Calif.), co-chair of the congressional AI caucus, and Ned Finkle, VP of Govt. Affairs, NVIDIA. Artificial Intelligence has become a front-and-center issue for policymakers. Legislative proposals to encourage AI development and head off possible harms are gaining traction, and the Administration is working to build a national strategy. This fireside chat will give enterprises and researchers a first-hand look at how key Members of Congress are approaching AI, as well as what policies they're advocating for and expect.
This customer panel brings together AI implementers who have deployed deep learning at scale. The discussion will focus on specific technical challenges they faced, solution design considerations, and best practices learned from implementing their respective solutions.
Artificial Intelligence has the potential to profoundly affect our world and lives. In this era of constant change, how do organizations keep up? We'll discuss the forces that drive technology forward and the technology trends, including AI, that can help organizations remain relevant in a world of constant transformation.
GTC Europe will feature groundbreaking work from startups using artificial intelligence to transform the world in the fields of autonomous machines, cyber security, healthcare and more. Join us to watch the hottest startups in Europe take to the stage and pitch their work for a chance to win $100,000 and a DGX Station.
Innovation can take many forms and be led by varying stakeholders across an organization. One successful model uses AI for Social Good to drive a proof of concept that advances a critical strategic goal. The Data Science Bowl (DSB), launched by Booz Allen Hamilton in 2014, is an ideal example: it galvanizes thousands of data scientists to participate in competitions with far-reaching impact across key industries such as healthcare. This session will explore the DSB model, as well as other ways organizations are utilizing AI for Social Good to create business and industry transformation.
From healthcare to financial services to retail, businesses are seeing unprecedented levels of efficiencies and productivity, which will only continue to rise and transform how companies operate. This session will look at how Accenture as an enterprise is optimizing itself in the age of AI, as well as how it guides its customers to success. A look at best practices, insights, and measurement to help the audience inform their AI roadmap and journey.
Advancements in deep learning are enabling enterprise companies to make meaningful impacts to bottom-line profits. Enterprises capture thousands of hours of customer phone call recordings per day. This voice data is extremely valuable because it contains insights that the business can use to improve customer experience and operations. We'll follow Deepgram CEO Dr. Scott Stephenson's path from working in a particle physics lab two miles underground to founding a deep learning company for voice understanding. We'll describe applications of cutting-edge AI techniques to make enterprise voice datasets mineable for valuable business insights. Companies today use these insights to drive the bottom line.
Has your team developed an AI proof of concept with promising metrics? The next step is to broaden its scope to impact larger areas of the enterprise. With its unique challenges and complexities, scaling POCs across multiple business units is a significant part of any company's AI roadmap. This session will look at best practices, insights, and successes, rooted in Element AI's experience with enterprise customers.
For enterprises daunted by the prospect of AI and investing in a new technology platform, the reality is that AI can leverage already-in-place big data and cloud strategies. This session will explore AI and deep learning use cases that are designed for ROI, and look at how success is being measured and optimized.
Get the latest information on how the proliferation of mobile, cloud, and IoT devices has brought us into a new era: The Extreme Data Economy. There's a greater variety of data than ever before, and exponentially more of it, streaming in real time. Across industries, companies are turning data into an asset, above and beyond any product or service they offer. But unprecedented agility is required to keep business in motion and succeed in this post-big data era. To enable this level of agility, companies are turning to instant insight engines that are powered by thousands of advanced GPU cores, bringing unparalleled speed, streaming data analysis, visual foresight, and machine learning to break through the old bottlenecks. Learn about new data-powered use cases you'll need to address, as well as advances in computing technology, particularly accelerated parallel computing, that will translate data into instant insight to power business in motion.
We'll review three practical use cases of applying AI and deep learning in the marketing and retail industries. For each use case, we'll cover business situations, discuss potential approaches, and describe final solutions from both the AI and infrastructural points of view. Attendees will learn about applications of AI and deep learning in marketing and advertising; AI readiness criteria; selecting the right AI and deep learning methods, infrastructure, and GPUs for specific use cases; and avoiding potential risks.
The GTC Fast Forward Poster program is an accelerated poster presentation program that serves as a catalyst for an array of innovations coming from universities, research labs, and industry. The GTC Poster Review Committee selected the 20 best posters submitted to GTC 2017. The program gives each author a chance to present his or her GPU project in front of the top technology developers working in a vast array of industries.
I will introduce a game developed at the Johns Hopkins University Applied Physics Laboratory called Reconnaissance Blind Chess (RBC), a chess variant in which players do not see their opponent's moves but can gain information about the ground-truth board position through the use of an (imperfect) sensor. RBC incorporates key aspects of active sensing and planning: players have to decide where to sense, use the information gained through sensing to update their board estimates, and use that world model to decide where to move. Thus, just as chess and Go have been challenge problems for decision making with complete information, RBC is intended to be a common challenge problem for decision making under uncertainty. After motivating the game concept and its relationship to other chess variants, I will describe the current rules of RBC as well as other potential rulesets, give a short introduction to the game implementation and bot API, and discuss some of our initial research on the complexity of RBC as well as bot algorithms.
We'll discuss an implementation of GPU convolution that favors coalesced accesses without requiring prior data transformations. Convolutions are the core operation of deep learning applications based on convolutional neural networks. Current GPU architectures are typically used for training deep CNNs, but some state-of-the-art implementations are inefficient for some commonly used network configurations. We'll discuss experiments that used our new implementation, which yielded notable performance improvements including up to 2.29X speedups in a wide range of common CNN configurations.
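The talk's kernel is not reproduced here, but the memory-layout concern it targets can be illustrated with the common im2col lowering, which rearranges input windows into contiguous rows so the convolution becomes a single matrix product and accesses stay sequential. NumPy stands in for the GPU below, and the function names are ours:

```python
import numpy as np

def im2col(x, k):
    # Copy each k-by-k window of x into a contiguous row, so the
    # subsequent matrix product reads memory sequentially.
    H, W = x.shape
    out_h, out_w = H - k + 1, W - k + 1
    cols = np.empty((out_h * out_w, k * k))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = x[i:i + k, j:j + k].ravel()
    return cols

def conv2d(x, w):
    # Valid (no-padding) 2D convolution via im2col + matrix product.
    k = w.shape[0]
    out_h, out_w = x.shape[0] - k + 1, x.shape[1] - k + 1
    return (im2col(x, k) @ w.ravel()).reshape(out_h, out_w)

x = np.arange(16, dtype=float).reshape(4, 4)
y = conv2d(x, np.ones((3, 3)))   # each output = sum of a 3x3 window
```

The speaker's approach avoids this explicit data transformation entirely; the sketch only shows the layout problem that coalescing-aware kernels solve in place.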
We'll introduce new concepts and algorithms that apply deep learning to radio frequency (RF) data to advance the state of the art in signal processing and digital communications. With the ubiquity of wireless devices, the crowded RF spectrum poses challenges for cognitive radio and spectral monitoring applications. Furthermore, the RF modality presents unique processing challenges due to the complex-valued data representation, large data rates, and unique temporal structure. We'll present innovative deep learning architectures to address these challenges, which are informed by the latest academic research and our extensive experience building RF processing solutions. We'll also outline various strategies for pre-processing RF data to create feature-rich representations that can significantly improve performance of deep learning approaches in this domain. We'll discuss various use-cases for RF processing engines powered by deep learning that have direct applications to telecommunications, spectral monitoring, and the Internet of Things.
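As a minimal sketch of one such pre-processing step, complex-valued I/Q samples are often split into two real channels before being fed to a CNN. The carrier frequency, sample rate, and array shapes below are arbitrary stand-ins for illustration, not the speakers' pipeline:

```python
import numpy as np

# Simulate a short complex-valued RF burst (a single tone) and
# convert it to the 2-channel real representation (I and Q planes)
# commonly used as CNN input for RF deep learning.
fs = 1000.0                                  # sample rate, Hz
t = np.arange(1024) / fs
iq = np.exp(2j * np.pi * 50.0 * t)           # 50 Hz complex carrier

features = np.stack([iq.real, iq.imag])      # shape (2, 1024)
```

Keeping the in-phase and quadrature components as separate channels preserves phase information that a magnitude-only representation would discard.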
To acquire rich repertoires of skills, robots must be able to learn from their own autonomously collected data. We'll describe a video-prediction model that predicts what a robot will see next, and show how this model can be used to solve complex manipulation tasks in real-world settings. Our model was trained on 44,000 video sequences in which the manipulator autonomously pushes various objects. Using the model, the robot is capable of moving objects that were not seen during training to desired locations, handling multiple objects and pushing objects around obstructions. Unlike other methods in robotic learning, video prediction does not require any human labels. Our experiments show that the method achieves a significant advance in the range and complexity of skills that can be performed entirely with self-supervised robotic learning. This session is for attendees who possess a basic understanding of convolutional and recurrent neural networks.
Reinforcement learning aims to determine a mapping from observations to actions that maximizes a reward criterion. The agent starts off exploring the environment for rewards with random search, which is unlikely to succeed in all but the simplest of settings. Furthermore, measuring and designing reward functions for real-world tasks is non-trivial. Inspired by research in developmental psychology, in this talk I will discuss how reinforcement learning agents might use curiosity and knowledge accumulated from experience for efficient exploration. I will present results illustrating an agent learning to play the game of Mario and learning to navigate without rewards, a study quantifying the kinds of prior knowledge used by humans for efficient exploration, and some robotic manipulation experiments, including the use of an anthropomorphic hand for grasping objects.
We are developing a system that converts telephone conversations and meeting responses into text in real time, passes the text to a computational model created on DGX-1, labels it with unsupervised learning, and clusters the results to compare topics and analyze the meaning of the conversation and the profiles of the interlocutors. With this technology, customers can receive appropriate responses at the beginning of a conversation with a help desk, and patients can receive a remote diagnosis from a doctor based solely on their dialogue and examination results. By using TensorFlow as a platform and running K-Means, Word2vec, Doc2Vec, and similar methods in a clustered DGX-1 environment, the results of arithmetic processing are produced at conversational speed. Even as the amount of text increases, the learning effect increases linearly, demonstrating that validity can be raised without taking the grammar of languages other than English (e.g., Japanese) into account.
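As a toy illustration of the clustering step, here is a minimal K-Means over stand-in document embeddings. In production these would be Word2vec/Doc2Vec vectors computed on DGX-1; the data and the implementation below are our own sketch:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Plain Lloyd's algorithm: assign points to the nearest centroid,
    # then move each centroid to the mean of its assigned points.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two well-separated clusters of toy "document embeddings".
X = np.vstack([np.zeros((5, 3)), np.ones((5, 3)) * 10])
labels = kmeans(X, 2)   # first five docs share one label, last five the other
```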
We'll explain the concept and the importance of audio recognition, which aims to understand literally all the information contained in the audio, not limiting its scope to speech recognition. It includes the introduction of various types of non-verbal information contained in the audio such as acoustic scenes/events, speech, and music. This session is helpful to the people who are not familiar with audio processing but are interested in the context-aware system. Also, it might be inspiring for someone who develops AI applications such as AI home assistant, a humanoid robot, and self-driving cars. It also covers the potential use-cases and creative applications, including a video demonstration of the audio context-aware system applied to media-art performance for real-time music generation.
The paradigm for robot programming is changing with the adoption of the deep learning approach in the field of robotics. Instead of hard coding a complex sequence of actions, tasks are acquired by the robot through an active learning procedure. This introduces new challenges that have to be solved to achieve effective training. We'll show several issues that can be encountered while learning a close-loop DNN controller aimed at a fundamental task like grasping, and their practical solutions. First, we'll illustrate the advantages of training using a simulator, as well as the effects of choosing different learning algorithms in the reinforcement learning and imitation learning domains. We'll then show how separating the control and vision modules in the DNN can simplify and speed up the learning procedure in the simulator, although the learned controller hardly generalizes to the real world environment. Finally, we'll demonstrate how to use domain transfer to train a DNN controller in a simulator that can be effectively employed to control a robot in the real world.
We'll discuss training techniques and deep learning architectures for high-precision landmark localization. In the first part of the session, we'll talk about ReCombinator Networks, which aims at maintaining pixel-level image information, for high-accuracy landmark localization. This model combines coarse-to-fine features to first observe global (coarse) image information and then recombines local (fine) information. By using this model, we report SOTA on three facial landmark datasets. This model can be used for other tasks that require pixel-level accuracy (for example, image segmentation, image-to-image translation). In the second part, we'll talk about improving landmark localization in a semi-supervised setting, where less labeled data is provided. Specifically, we consider a scenario where few labeled landmarks are given during training, but lots of weaker labels (for example, face emotions, hand gesture) that are easier to obtain are provided. We'll describe training techniques and model architectures that can leverage weaker labels to improve landmark localization.
Using only randomized simulated images, we'll present a system to infer and simply execute a human-readable robotic program after watching a real-world task demonstration. The system is comprised of a series of deep neural network modules, each learned entirely in simulation. During training, images are generated in a gaming engine and made transferable to the real world by domain randomization. After training, the system is straightforwardly deployed on a real robot with no retuning of the neural networks and having never previously seen a real image. We demonstrate the system on a Baxter robot performing block tower construction tasks.
Robust object tracking requires knowledge and understanding of the object being tracked: its appearance, motion, and change over time. A tracker must be able to modify its underlying model and adapt to new observations. We present Re3, a real-time deep object tracker capable of incorporating temporal information into its model. Rather than focusing on a limited set of objects or training a model at test-time to track a specific instance, we pretrain our generic tracker on a large variety of objects and efficiently update on the fly; Re3 simultaneously tracks and updates the appearance model with a single forward pass. This lightweight model is capable of tracking objects at 150 FPS, while attaining competitive results on challenging benchmarks. We also show that our method handles temporary occlusion better than other comparable trackers using experiments that directly measure performance on sequences with occlusion.
We'll present a multi-node distributed deep learning framework called ChainerMN. Even though GPUs are continuously gaining more computation throughput, it is still very time-consuming to train state-of-the-art deep neural network models. For better scalability and productivity, it is paramount to accelerate the training process by using multiple GPUs. To enable high-performance and flexible distributed training, ChainerMN was developed and built on top of Chainer. We'll first introduce the basic approaches to distributed deep learning and then explain the design choice, basic usage, and implementation details of Chainer and ChainerMN. To demonstrate the scalability and efficiency of ChainerMN, we'll discuss the remarkable results from training ResNet-50 classification model on ImageNet database using 1024 Tesla P100 GPUs and our in-house cluster, MN-1.
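Data-parallel training of the kind ChainerMN performs ultimately reduces to averaging per-GPU gradients with an all-reduce before each parameter update. A single-process sketch of that arithmetic in plain NumPy (the worker gradients below are made up for illustration):

```python
import numpy as np

def allreduce_mean(grads):
    # Average gradients across workers; in a real cluster this is an
    # MPI/NCCL all-reduce so every GPU ends up with the same result.
    return np.mean(grads, axis=0)

# Each "worker" computes a gradient on its own minibatch shard.
w = np.zeros(4)                                   # shared model weights
worker_grads = [np.full(4, g) for g in (1.0, 2.0, 3.0, 4.0)]

g = allreduce_mean(worker_grads)   # averaged gradient, identical on all workers
w -= 0.1 * g                       # every worker applies the same SGD step
```

Because every worker applies the identical averaged gradient, the replicas stay in sync without any central parameter server.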
We'll explore how deep learning approaches can be used for perceiving and interpreting the driver's state and behavior during manual, semi-autonomous, and fully-autonomous driving. We'll cover how convolutional, recurrent, and generative neural networks can be used for applications of glance classification, face recognition, cognitive load estimation, emotion recognition, drowsiness detection, body pose estimation, natural language processing, and activity recognition in a mixture of audio and video data.
In this talk, we will survey how deep learning methods can be applied to personalization and recommendations. We will cover why standard deep learning approaches don't perform better than typical collaborative filtering techniques. Then we will go over recently published research at the intersection of deep learning and recommender systems, looking at how they integrate new types of data, explore new models, or change the recommendation problem statement. We will also highlight some of the ways that neural networks are used at Netflix and how we can use GPUs to train recommender systems. Finally, we will highlight promising new directions in this space.
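As a reference point for the collaborative filtering techniques mentioned above, here is a tiny matrix-factorization recommender trained by SGD: user and item embeddings whose dot product approximates observed ratings. The ratings, dimensions, and hyperparameters are illustrative stand-ins, not Netflix's system:

```python
import numpy as np

# Tiny matrix-factorization recommender: learn user/item embeddings
# so that U[u] . V[i] approximates the observed rating r.
rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 5, 3
U = rng.normal(scale=0.1, size=(n_users, dim))   # user embeddings
V = rng.normal(scale=0.1, size=(n_items, dim))   # item embeddings
ratings = [(0, 1, 5.0), (0, 2, 1.0), (1, 1, 4.0), (2, 3, 2.0)]

for _ in range(500):                 # SGD over observed (user, item, rating)
    for u, i, r in ratings:
        err = r - U[u] @ V[i]
        U[u] += 0.05 * err * V[i]
        V[i] += 0.05 * err * U[u]

pred = U[0] @ V[1]                   # approaches the observed rating of 5.0
```

Deep models replace the dot product with a learned nonlinear function, which is where the survey's comparison begins.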
The growth in housing density in cities like London and New York has resulted in higher demand for efficient, smaller apartments. These designs challenge the use of space and function while trying to ensure that occupants have the perception of a larger space than provided. The process of designing these spaces has always rested on the responsibility and perception of a handful of designers using 2D and 3D static platforms as part of the overall building design and evaluation, typically constrained by a prescriptive program and functional requirements. A combination of human- and AI-based agents creating and testing these spaces through design and virtual immersive environments (NVIDIA Holodeck) will attempt to ensure the final results are efficient and best fit for human occupancy prior to construction.
Go beyond working with a single sensor and enter the realm of Intelligent Multi-Sensor Analytics (IMSA). We'll introduce concepts and methods for using deep learning with multi-sensor, or heterogenous, data. There are many resources and examples available for learning how to leverage deep learning with public imagery datasets. However, few resources exist to demonstrate how to combine and use these techniques to process multi-sensor data. As an example, we'll introduce some basic methods for using deep learning to process radio frequency (RF) signals and make it a part of your intelligent video analytics solutions. We'll also introduce methods for adapting existing deep learning frameworks for multiple sensor signal types (for example, RF, acoustic, and radar). We'll share multiple use cases and examples for leveraging IMSA in smart city, telecommunications, and security applications.
As the race to full autonomy accelerates, the in-cab transportation experience is also being redefined. Future vehicles will sense the passengers' identities and activities, as well as their cognitive and emotional states, to adapt and optimize their experience. AI capable of interpreting what we call "people analytics" captured through their facial and vocal expressions, and aspects of the context that surrounds them will power these advances. We'll give an overview of our Emotion AI solution, and describe how we employ techniques like deep learning-based spatio-temporal modeling. By combining these techniques with a large-scale dataset, we can develop AI capable of redefining the in-cab experience.
Learn how VUE.ai's model generator uses conditional GANs to produce product-specific images suitable for replacing photographs in catalogs. We'll present networks that generate images of fashion models wearing specific garments, using an image of the garment as a conditioning variable. Network architecture variants, training, and manipulation of latent variables to control attributes such as model pose, build, or skin color will be addressed.
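The conditioning mechanism itself can be sketched independently of any framework: the garment representation is concatenated to the noise vector before the generator's first layer. Everything below (the dimensions, and a single linear layer standing in for the full generator) is an assumption for illustration, not VUE.ai's architecture:

```python
import numpy as np

# Conditional generator G(z, c): the condition c (standing in for a
# garment embedding) is concatenated to the noise z, so the output
# depends on both randomness and the chosen garment.
rng = np.random.default_rng(0)
z_dim, c_dim, out_dim = 8, 4, 16
W = rng.normal(scale=0.1, size=(z_dim + c_dim, out_dim))

def generator(z, c):
    h = np.concatenate([z, c])    # conditioning by concatenation
    return np.tanh(h @ W)         # fake "image" with values in [-1, 1]

z = rng.normal(size=z_dim)        # latent noise: varies pose, build, etc.
c = np.ones(c_dim)                # fixed condition: one specific garment
img = generator(z, c)
```

Holding c fixed while resampling z is what lets the same garment be rendered on varied model poses and builds.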
We'll discuss the development of a novel model for video prediction and analysis -- the parallel multi-dimensional long short-term memory (PMD-LSTM). PMD-LSTM is a general model for learning from higher dimensional data such as images, videos, and biomedical scans. It is an extension of the popular LSTM recurrent neural networks to higher dimensional data with a rearrangement of the recurrent connections to dramatically increase parallelism. This gives the network the ability to compactly model the effect of long-range context in each layer, unlike convolutional networks, which need several layers to cover a larger input context. We'll discuss the blind spot problem in recent work on video prediction, and show how PMD-LSTM based models are fully context-aware for each predicted pixel. These models outperform comparatively complex state-of-the-art approaches significantly in a variety of challenging video prediction scenarios such as car driving, human motion, and diverse human actions.
We'll showcase how you can apply a wealth of unlabeled image data to significantly improve accuracy and speed of single-shot object-detection (SSD) techniques. Our approach, SSD++, advances the state-of-the-art of single shot multibox-based object detectors (such as SSD, YOLO) by employing a novel combination of convolution-deconvolution networks to learn robust feature maps, thus making use of unlabeled dataset, and the fresh approach to have confluence of convolution and deconvolution features to combine generic as well as semantically rich feature maps. As a result, SSD++ drastically reduces the requirement of labeled datasets, works on low-end GPUs, identifies small as well as large objects with high fidelity, and speeds up inference process by decreasing the requirement of default boxes. SSD++ achieves state-of-the-art results on PASCAL VOC and MS COCO datasets. Through ablation study, we'll explain the effectiveness of different components of our architecture that help us achieve improved accuracy on the above datasets. We'll further show a case study of SSD++ to identify shoppable objects in fashion, home decor, and food industry from images in the wild.
Deep residual networks (ResNets) made a recent breakthrough in deep learning. The core idea of ResNets is to have shortcut connections between layers that allow the network to be much deeper while still being easy to optimize, avoiding vanishing gradients. These shortcut connections have interesting properties that make ResNets behave differently from other typical network architectures. In this talk, we will use these properties to design a network based on a ResNet but with parameter sharing and adaptive computation time, which we call IamNN. The resulting network is much smaller than the original network and can adapt its computational cost to the complexity of the input image. During this talk, we will provide an overview of ways to design compact networks, give an overview of ResNet properties, and discuss how they can be used to design a compact dense network with only 5M parameters for ImageNet classification.
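A minimal sketch of the two ingredients, a residual block with shared weights applied repeatedly, plus an ACT-style halting unit that stops computation early, might look like the following. All weights, dimensions, and thresholds are toy values of ours, not the IamNN configuration:

```python
import numpy as np

# One residual block whose weights are shared across all steps, plus a
# sigmoid halting unit: easy inputs accumulate halting probability fast
# and exit early; hard inputs use more steps (adaptive computation time).
rng = np.random.default_rng(0)
d = 6
W = rng.normal(scale=0.3, size=(d, d))   # shared block weights
w_halt = rng.normal(size=d)              # halting-unit weights

def forward(x, max_steps=10, threshold=0.99):
    total_halt = 0.0
    steps = 0
    for _ in range(max_steps):
        x = x + np.tanh(W @ x)                        # shared residual step
        total_halt += 1 / (1 + np.exp(-w_halt @ x))   # halting probability
        steps += 1
        if total_halt >= threshold:                   # stop when confident
            break
    return x, steps

y, steps = forward(rng.normal(size=d))
```

Because the same W is reused at every step, the parameter count stays that of a single block regardless of depth, which is the source of the compactness claim.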
Want to get started using TensorFlow together with GPUs? Then come to this session, where we will cover the TensorFlow APIs you should use to define and train your models, and the best practices for distributing the training workloads to multiple GPUs. We will also look at the underlying reasons why GPUs are so well suited to machine learning workloads.
Humans are remarkably proficient at controlling their limbs and tools from a wide range of viewpoints, in diverse environments, and in the presence of distractors. In robotics, this ability is referred to as visual servoing. Standard visual servoing approaches have limited generalization, as they typically rely on manually designed features and calibrated cameras. We exhibit generalizable visual servoing in the context of robotic manipulation and navigation tasks learned through visual feedback and deep reinforcement learning (RL) without needing any calibrated setup. By highly randomizing our simulator, we train policies that generalize to novel environments and also to challenging real-world scenarios. Our domain randomization technique addresses the high sample complexity of deep RL, avoids the dangers of trial and error, and also gives us the liberty to learn recurrent vision-based policies for highly diverse tasks where capturing sufficient real robot data is impractical. One example of such a scenario is learning view-invariant robotic policies, which leads to learning physical embodiment and self-calibration purely through visual feedback.