Learn about the unique challenges being solved using deep learning on GPUs in large-scale mass customization of medical devices. Deep neural networks have been successfully applied to some of the most difficult problems in computer vision, natural language processing, and robotics, but we still haven't seen the full potential of this technology in manufacturing. Glidewell Labs produces thousands of patient-specific items daily, such as dental restorations, implants, and appliances. Our goal is to make high-quality restorative dentistry affordable to more patients. This goal can only be achieved with flexible, highly autonomous CAD/CAM systems, which rely on AI for real-time decision making.
Honda's evolutionary new project, internally called the "Next-gen Engineering Workstation (EWS) Project," is designed to optimize usage of our CAD-VDI environment for R&D offices and factories. The project's challenge is to move from the existing physical EWS and pass-through VDI environments to an NVIDIA GRID vGPU environment, all while improving user density (CCU/server), usage monitoring, resource optimization for designers, and flexible resource reallocation. Honda successfully deployed more than 4,000 concurrent CAD-VDI users in its initial phase, with aggressive plans to further increase utilization. This session will review the project's challenges and Honda's future vision.
We'll present how deep learning is applied on a manufacturer's production line. Fujikura and OPTOENERGY are introducing a visual inspection system incorporating deep learning into the production process of semiconductor lasers. The same inspection accuracy as skilled workers was achieved by optimizing the image size and the hyperparameters of a CNN model. The optimized image size is less than one quarter of the image size required for visual inspection by skilled workers, which leads to a large cost reduction on the production line. We also confirmed that the regions highlighted in the heatmaps of NG (defective) images corresponded to features that fail the visual inspection criteria. Visual inspection incorporating deep learning is now being applied to other products, such as optical fibers and electrical cables.
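As a rough illustration of the kind of model involved, here is a minimal sketch of a small CNN that classifies inspection images as OK/NG at a reduced input resolution; the 64x64 input size and the layer sizes are illustrative assumptions, not Fujikura's actual configuration.

```python
# Minimal sketch (illustrative, not the production model): a small CNN
# that classifies laser-chip inspection images as OK/NG at a reduced
# input resolution, reflecting the image-size optimization described above.
import tensorflow as tf

def build_inspection_cnn(input_size=64):  # input_size is an assumption
    return tf.keras.Sequential([
        tf.keras.Input(shape=(input_size, input_size, 1)),  # grayscale image
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(NG)
    ])

model = build_inspection_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Heatmaps like those mentioned above are typically produced with class-activation techniques (e.g., Grad-CAM) applied to the last convolutional layer.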
Learn how Gensler is using the latest virtual reality technology across all aspects of the design process for the AEC industry. We'll cover how VR has added value to the process when using different kinds of VR solutions, and we'll talk about some of the challenges Gensler has faced with VR in terms of hardware, software, and workflows. We'll also show how NVIDIA's latest VR visualization tools are helping with the overall process and the realism of our designs.
Exploring the Best Server for AI
Speakers: Samuel D. Matzek, Sr. Software Engineer; Maria Ward, IBM Accelerated Server Offering Manager
Explore the server at the heart of the Summit and Sierra supercomputers, and the best server for AI. We will discuss the technical details that set this server apart and why they matter for your machine learning and deep learning workloads.

IBM Cloud for AI at Scale
Speaker: Alex Hudak, IBM Cloud Offering Manager
AI is fast changing the modern enterprise with new applications that are resource demanding, but provide new capabilities to drive insight from customer data. IBM Cloud is partnering with NVIDIA to provide a world-class, customized cloud environment to meet the needs of these new applications. Learn about the wide range of NVIDIA GPU solutions inside the IBM Cloud virtual and bare metal server portfolio, and how customers are using them across deep learning, analytics, HPC workloads, and more.

IBM Spectrum LSF Family Overview & GPU Support
Speaker: Larry Adams, Global Architect - Cross Sector, Developer, Consultant, IBM Systems

How to Fuel the Data Pipeline
Speaker: Kent Koeninger, IBM

IBM Storage Reference Architecture for AI with Autonomous Driving
Speaker: Kent Koeninger, IBM
Take a journey through the TensorFlow container provided by the NVIDIA GPU Cloud. We'll start with how to launch and navigate inside the container, and stop along the way to explore the included demo scripts, extend the container with extra software, and examine best practices for how to take advantage of all the benefits bundled inside the NGC TensorFlow container. This session will help NGC beginners get the most out of the TensorFlow container and become productive as quickly as possible.
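For orientation, here is a minimal sanity check of the kind the session walks through, as a hedged sketch: it assumes a TensorFlow 2.x image, and the container tag in the comment is a placeholder.

```python
# First steps inside the NGC TensorFlow container (hedged sketch; assumes
# a TensorFlow 2.x image). A typical launch looks something like:
#   docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:<tag>
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
gpus = tf.config.list_physical_devices("GPU")
print("Visible GPUs:", gpus)

# Quick sanity check: run a matrix multiply on the first GPU.
if gpus:
    with tf.device("/GPU:0"):
        a = tf.random.normal((1024, 1024))
        print("GPU matmul OK, norm:", tf.norm(a @ a).numpy())
```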
We'll describe our work at Intelligent Voice on explainable AI. We are working to separate AI technology into smaller components so it can be more easily explained, to build explainability into AI architecture design, and to make it possible for AI to progress within the confines of current regulation. The new GDPR regulations in Europe, which affect any company with European consumers, give people the right to challenge computer-aided decisions and to have these decisions explained. We'll discuss how existing technology can make it difficult to provide an explanation and how that inhibits AI adoption in customer-facing fields such as insurance, health, and financial services.
We'll discuss Project MagLev, NVIDIA's internal end-to-end AI platform for developing its self-driving car software, DRIVE. We'll explore how the platform supports continuous data ingest from multiple cars, each producing terabytes of data per hour, and how it enables AI designers to iterate on training of new neural network designs across thousands of GPU systems and validate the behavior of these designs over multi-petabyte datasets. We'll talk about our overall architecture for everything from data center deployment to AI pipeline automation, as well as large-scale AI dataset management, training, and testing.
This customer panel brings together AI implementers who have deployed deep learning at scale using NVIDIA DGX systems. We'll focus on the specific technical challenges we faced, solution design considerations, and best practices learned from implementing our respective solutions. Attendees will gain insights such as: 1) how to set up your deep learning project for success by matching the right hardware and software platform options to your use case and operational needs; 2) how to design your architecture to avoid bottlenecks that inhibit scalable training performance; and 3) how to build an end-to-end deep learning workflow that enables productive experimentation, training at scale, and model refinement.
We'll present a wide-area and city surveillance solution for running real-time video analytics on thousands of 1080p video streams. The system hardware is an embedded computer cluster based on NVIDIA TX1/TX2 and NXP i.MX6 modules. Custom system software manages job distribution, result collection, and system-wide diagnostics, including instantaneous voltage, power, and temperature readings. The system is fully integrated with custom video management software, IP cameras, and network video recorders. Instead of drawing algorithm results on the processed video frames, re-encoding them, and streaming them back to the operator's computer for display, only the resulting metadata is sent to the operator's computer. The video management software streams the video sources independently and synchronizes decoded video frames with the corresponding metadata locally before presenting the processed frames to the operator.
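As a hedged sketch of the client-side idea described above (the class and tolerance are illustrative, not the product's API), the operator station can pair independently decoded frames with analytics metadata by capture timestamp:

```python
# Illustrative sketch: match analytics metadata to decoded video frames by
# nearest capture timestamp, instead of receiving re-encoded video.
import bisect

class FrameSynchronizer:
    def __init__(self, tolerance_ms=20):  # tolerance is an assumption
        self.tolerance_ms = tolerance_ms
        self._ts = []    # sorted metadata timestamps (ms)
        self._meta = []  # metadata payloads aligned with self._ts

    def add_metadata(self, timestamp_ms, detections):
        """Insert metadata (e.g., detection boxes) keeping timestamps sorted."""
        i = bisect.bisect_left(self._ts, timestamp_ms)
        self._ts.insert(i, timestamp_ms)
        self._meta.insert(i, detections)

    def match(self, frame_ts_ms):
        """Return the metadata closest to a frame's timestamp, or None."""
        i = bisect.bisect_left(self._ts, frame_ts_ms)
        best, best_dt = None, None
        for j in (i - 1, i):  # nearest neighbors on either side
            if 0 <= j < len(self._ts):
                dt = abs(self._ts[j] - frame_ts_ms)
                if best_dt is None or dt < best_dt:
                    best, best_dt = self._meta[j], dt
        if best_dt is not None and best_dt <= self.tolerance_ms:
            return best
        return None
```

This design keeps the metadata channel tiny compared with re-encoded video, which is the bandwidth saving the abstract describes.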
Businesses of all sizes are increasingly recognizing the potential value of AI, but few are sure how to prepare for the transformational change it is sure to bring to their organizations. Danny Lange rolled out company-wide AI platforms at Uber and Amazon; now, through Unity Technologies, he's making AI available to the rest of us. He'll also share his thoughts on the most exciting advances AI will bring over the next year. His insights will help you understand the true potential of AI, regardless of your role or industry.
What is deep learning? In what fields is it useful, and how does it relate to artificial intelligence? We'll discuss deep learning and why this powerful new technology is getting so much attention, learn how deep neural networks are trained to perform tasks with superhuman accuracy, and examine the challenges organizations face in adopting this new approach. We'll also cover some of the best practices, software, hardware, and training resources that many organizations are using to overcome these challenges and deliver breakthrough results.
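To make "training" concrete, here is a minimal self-contained sketch (an illustrative toy, not a production recipe): a tiny two-layer network repeatedly nudges its weights in the direction that reduces a loss on labeled examples.

```python
# Toy illustration of training: full-batch gradient descent on a tiny
# two-layer network that learns whether two inputs have the same sign.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)      # labels: same-sign quadrant
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)  # hidden layer (8 units)
w2 = rng.normal(size=8); b2 = 0.0               # output neuron
lr = 0.1
for step in range(2000):
    h = np.tanh(X @ W1 + b1)                    # forward pass
    p = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))    # predicted probability
    g = (p - y) / len(y)                        # cross-entropy gradient
    gw2 = h.T @ g; gb2 = g.sum()                # backprop: output layer
    gh = np.outer(g, w2) * (1 - h**2)           # backprop: hidden layer
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)
    w2 -= lr * gw2; b2 -= lr * gb2
print("train accuracy:", ((p > 0.5) == (y > 0.5)).mean())
```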
We'll introduce deep learning infrastructure for building and maintaining autonomous vehicles, including techniques for managing the lifecycle of deep learning models, from definition, training, and deployment to reloading and lifelong learning. The infrastructure auto-curates and pre-labels data in the loop and, given data, finds the best runtime-optimized deep learning models. Training scales with data size across multiple nodes. With these methodologies, one need only supply application data, and the infrastructure returns trained DL predictors. The infrastructure is divided into multiple tiers and is modular, with each module containerized for deployment on lower-level infrastructure such as GPU-based clouds.
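As a hedged sketch of the "pre-label in the loop" idea (illustrative only, not the presenters' system), confident model predictions can become pseudo-labels while low-confidence samples are routed to human annotators:

```python
# Illustrative triage step for in-the-loop auto-curation: a trained model
# labels incoming samples; confident predictions become pseudo-labels and
# uncertain ones go to human review. The threshold is an assumption.
import numpy as np

def triage(probs: np.ndarray, threshold: float = 0.95):
    """probs: (N, num_classes) softmax outputs for unlabeled samples.
    Returns (auto-labeled indices, their labels, indices for human review)."""
    confidence = probs.max(axis=1)
    auto = np.where(confidence >= threshold)[0]
    review = np.where(confidence < threshold)[0]
    return auto, probs[auto].argmax(axis=1), review
```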
Join our presentation on the first application of deep learning to cybersecurity. Deep learning is inspired by the brain's ability to learn: once a brain learns to identify an object, its identification becomes second nature. Similarly, as a deep learning-based artificial brain learns to detect any type of cyber threat, its prediction capabilities become instinctive. As a result, the most evasive and unknown cyberattacks are immediately detected and prevented. We'll cover the evolution of artificial intelligence, from early rule-based systems through conventional machine learning models to today's state-of-the-art deep learning models.
We'll introduce a novel approach to digital pathology analytics that brings together a powerful image server and deep learning-based image analysis on a cloud platform. Recent advances in AI, and deep learning in particular, show great promise in several fields of medicine, including pathology. Human expert judgment augmented by deep learning algorithms has the potential to speed up the diagnostic process and to make diagnostic assessments more reproducible. One of the major advantages of these novel AI-based algorithms is the ability to train classifiers for morphologies that exhibit a high level of complexity. We'll present examples of context-intelligent image analysis applications, including a fully automated epithelial cell proliferation assay and tumor grading. We'll also present other examples of complex image analysis algorithms, all of which run on demand on whole-slide images in the cloud computing environment. Our WebMicroscope® Cloud is offered as software as a service (SaaS), which is extremely easy to set up from a user perspective: the need for local software and hardware installation is removed, and the solution can immediately scale to projects of any size.
The long-term goal of any financial institution is to serve users with the best possible experience within the boundaries of its resources. That is only possible when financial institutions adopt intelligent systems, and the success of such systems depends heavily on their intelligence. Deep learning has given financial institutions a huge opportunity to start building and planning for large-scale intelligent systems that are multi-functional and adaptive. In this talk, we'll discuss how we used deep learning, Vega as the platform, and GPUs to build high-scale automation use cases, from fraud detection to complex process automation, in both banking and insurance.
Hundreds of talks and competing events crammed into a few days can be daunting. Get an overview of GTC's programs and events and how to make best use of them from Greg Estes, NVIDIA's VP of developer programs. Addressing both first-timers and returning alums, Greg will cover how to get the most from your time here, including can't-miss talks and never-before-seen tech demos. He'll also cover NVIDIA's resources for developers, startups, and larger organizations, as well as training courses and networking opportunities.
A fireside chat with U.S. Rep. Jerry McNerney (D-Calif.), co-chair of the congressional AI caucus, and Ned Finkle, VP of Government Affairs, NVIDIA. Artificial intelligence has become a front-and-center issue for policymakers. Legislative proposals to encourage AI development and head off possible harms are gaining traction, and the Administration is working to build a national strategy. This fireside chat will give enterprises and researchers a first-hand look at how key members of Congress are approaching AI, as well as what policies they're advocating for and what they expect.
This customer panel brings together AI implementers who have deployed deep learning at scale. The discussion will focus on specific technical challenges they faced, solution design considerations, and best practices learned from implementing their respective solutions.
Artificial Intelligence has the potential to profoundly affect our world and lives. In this era of constant change, how do organizations keep up? We'll discuss the forces that drive technology forward and the technology trends, including AI, that can help organizations remain relevant in a world of constant transformation.
Innovation can take many forms and be led by different stakeholders across an organization. One successful model is using AI for Social Good to drive a proof of concept that advances a critical strategic goal. The Data Science Bowl (DSB) is an ideal example: launched by Booz Allen Hamilton in 2014, it galvanizes thousands of data scientists to participate in competitions with far-reaching impact across key industries such as healthcare. This session will explore the DSB model, as well as look at other ways organizations are using AI for Social Good to create business and industry transformation.
From healthcare to financial services to retail, businesses are seeing unprecedented levels of efficiency and productivity, which will only continue to rise and transform how companies operate. This session will look at how Accenture as an enterprise is optimizing itself in the age of AI, as well as how it guides its customers to success. We'll cover best practices, insights, and measurement approaches to help the audience inform their AI roadmap and journey.
For enterprises daunted by the prospect of AI and investing in a new technology platform, the reality is that AI can leverage already-in-place big data and cloud strategies. This session will explore AI and deep learning use cases that are designed for ROI, and look at how success is being measured and optimized.
I will introduce a game developed at the Johns Hopkins University Applied Physics Laboratory called Reconnaissance Blind Chess (RBC), a chess variant in which players do not see their opponent's moves but can gain information about the ground-truth board position through the use of an (imperfect) sensor. RBC incorporates key aspects of active sensing and planning: players have to decide where to sense, use the information gained through sensing to update their board estimates, and use that world model to decide where to move. Thus, just as chess and Go have been challenge problems for decision making with complete information, RBC is intended to be a common challenge problem for decision making under uncertainty. After motivating the game concept and its relationship to other chess variants, I will describe the current rules of RBC as well as other potential rulesets, give a short introduction to the game implementation and bot API, and discuss some of our initial research on the complexity of RBC as well as bot algorithms.
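To make the sensing-and-updating loop concrete, here is a hedged sketch of a belief-state update (illustrative, not the actual bot API; it uses the python-chess library for board bookkeeping):

```python
# Hedged sketch of the belief update an RBC bot must perform (illustrative,
# not the JHU/APL bot API). The bot tracks every board still consistent
# with its observations and prunes the set when a sense result arrives.
import chess

class BeliefState:
    def __init__(self):
        self.boards = {chess.Board().fen()}  # positions still possible

    def expand_opponent_moves(self):
        """The opponent moved unseen: every legal reply stays possible."""
        expanded = set()
        for fen in self.boards:
            board = chess.Board(fen)
            for move in board.legal_moves:
                board.push(move)
                expanded.add(board.fen())
                board.pop()
        self.boards = expanded

    def prune_with_sense(self, sense_result):
        """sense_result: iterable of (square, piece-or-None) pairs from the
        sensor; keep only boards that agree with every sensed square."""
        def consistent(fen):
            board = chess.Board(fen)
            return all(board.piece_at(sq) == piece
                       for sq, piece in sense_result)
        self.boards = {fen for fen in self.boards if consistent(fen)}
```

In practice the belief set grows quickly, so real bots combine pruning with sampling or other approximations; choosing the sense square that best shrinks this set is exactly the active-sensing decision the abstract describes.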
We'll discuss an implementation of GPU convolution that favors coalesced accesses without requiring prior data transformations. Convolutions are the core operation of deep learning applications based on convolutional neural networks. Current GPU architectures are typically used for training deep CNNs, but some state-of-the-art implementations are inefficient for some commonly used network configurations. We'll discuss experiments that used our new implementation, which yielded notable performance improvements including up to 2.29X speedups in a wide range of common CNN configurations.
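To illustrate the access pattern at issue (a hedged sketch, not the authors' implementation), here is a direct single-channel convolution written with Numba's CUDA support, where adjacent threads compute adjacent output columns so each warp's global loads fall on consecutive addresses, with no im2col-style data transformation beforehand:

```python
# Illustrative direct 2-D convolution with coalesced global loads:
# threads that differ only in threadIdx.x read consecutive row-major
# addresses of the input, so each warp's loads coalesce.
import numpy as np
from numba import cuda

@cuda.jit
def conv2d_valid(inp, kern, out):
    x, y = cuda.grid(2)  # x varies fastest across a warp -> column index
    kh, kw = kern.shape
    if y < out.shape[0] and x < out.shape[1]:
        acc = 0.0
        for i in range(kh):
            for j in range(kw):
                acc += inp[y + i, x + j] * kern[i, j]
        out[y, x] = acc

inp = np.random.rand(512, 512).astype(np.float32)
kern = np.random.rand(3, 3).astype(np.float32)
out = np.zeros((510, 510), dtype=np.float32)   # "valid" output size
threads = (32, 8)  # 32 threads along x: a warp covers 32 adjacent columns
blocks = ((out.shape[1] + 31) // 32, (out.shape[0] + 7) // 8)
conv2d_valid[blocks, threads](inp, kern, out)
```

The 2.29x speedups quoted above are the talk's measurements for its own CUDA implementation; this sketch only demonstrates the coalescing idea.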
We'll introduce new concepts and algorithms that apply deep learning to radio frequency (RF) data to advance the state of the art in signal processing and digital communications. With the ubiquity of wireless devices, the crowded RF spectrum poses challenges for cognitive radio and spectral monitoring applications. Furthermore, the RF modality presents unique processing challenges due to the complex-valued data representation, large data rates, and unique temporal structure. We'll present innovative deep learning architectures to address these challenges, which are informed by the latest academic research and our extensive experience building RF processing solutions. We'll also outline various strategies for pre-processing RF data to create feature-rich representations that can significantly improve performance of deep learning approaches in this domain. We'll discuss various use-cases for RF processing engines powered by deep learning that have direct applications to telecommunications, spectral monitoring, and the Internet of Things.
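As one concrete example of such pre-processing (a hedged sketch of a common strategy, not necessarily the presenters' pipeline), complex baseband samples can be split into real-valued channels a CNN can consume:

```python
# Illustrative pre-processing for complex IQ samples: expand the
# complex-valued signal into real-valued feature channels for a 1-D CNN.
import numpy as np

def iq_to_features(iq: np.ndarray) -> np.ndarray:
    """iq: 1-D complex64 array of baseband samples.
    Returns a (4, N) float32 tensor: [I, Q, amplitude, unwrapped phase]."""
    feats = np.stack([
        iq.real,
        iq.imag,
        np.abs(iq),
        np.unwrap(np.angle(iq)),
    ]).astype(np.float32)
    # Per-channel normalization keeps dynamic range comparable across bursts.
    feats -= feats.mean(axis=1, keepdims=True)
    feats /= feats.std(axis=1, keepdims=True) + 1e-8
    return feats

# Usage: feed iq_to_features(samples)[None, ...] to a 1-D CNN as (batch, C, N).
```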