Generative methods allow a computer to automatically distill the essence of a dataset and then produce novel examples that are indistinguishable from the original data. That's the promise, but getting there has been difficult. This talk focuses on recent advances in generative adversarial networks (GANs), describing the ideas that have finally enabled the synthesis of credible high-resolution images. It also covers recent NVIDIA work (StyleGAN) that makes image generation more controllable by borrowing ideas from the style transfer literature, and that also leads to an interesting, unsupervised separation of high-level attributes (e.g., pose or identity in the case of human faces) from inconsequential variation in the images (the exact placement of hair, etc.).
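As a toy illustration of the adversarial idea this talk builds on, the following sketch pits a one-parameter-pair generator against a logistic discriminator on 1-D data. Everything here is invented for illustration (the N(4, 1) target, the parameter names, the hand-derived gradients); it is a minimal caricature of GAN training, not StyleGAN or any NVIDIA implementation.

```python
# Toy 1-D GAN: generator g(z) = a*z + b tries to imitate data from N(4, 1);
# discriminator d(x) = sigmoid(w*x + c) tries to tell real samples from fakes.
# Gradients are derived by hand for this tiny model; all values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    # Clip to avoid overflow in exp for extreme logits.
    return 1.0 / (1.0 + np.exp(-np.clip(t, -60.0, 60.0)))

a, b = 1.0, 0.0      # generator parameters: g(z) = a*z + b
w, c = 0.1, 0.0      # discriminator parameters: d(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)   # samples from the "true" data distribution
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b                     # generator samples

    # Discriminator: gradient ascent on log d(real) + log(1 - d(fake)).
    dr = sigmoid(w * real + c)
    df = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator: gradient ascent on the non-saturating objective log d(fake).
    df = sigmoid(w * fake + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

print(round(b, 2))  # generator offset b drifts from 0 toward the data mean of 4
```

The two players improve in alternation: the discriminator sharpens its decision boundary, and the generator follows the discriminator's gradient until its samples become statistically hard to distinguish from the data, which is the core mechanism the high-resolution image results scale up.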
As a new computing paradigm, Virtual Reality (VR) is changing workflows and redefining how we interact with computers. Deep Learning (DL) is revolutionizing business processes, defining how autonomous machines interact with us and with the world, and demanding that application developers learn new ways of working in every field that touches compute. In this panel we explore the intersection of these two revolutions with VR industry innovators who are leveraging deep learning on NVIDIA GPU compute systems to bring depth to the VR experience. The discussion focuses on the use of Artificial Intelligence (AI) both in building rich VR environments and in enhancing the user's interaction with them. Panelists will share their vision of how AI will shape the near future of VR and give the audience a view of potential challenges to that future. In this session, we will explore:
- Pain points in creating VR experiences that are driving adoption of AI in the VR space
- Challenges encountered in using DL to bring rich content to life in a VR environment
- Challenges of implementing DL-enhanced VR interaction within the latency-critical VR space
- How DL/AI will continue to fundamentally change the VR space
The U.S. is the world leader in developing AI technologies, but other countries are catching up. What must the U.S. do to sustain and strengthen its global leadership in AI research and development? What are the challenges and what more can and should be done by industry and the government to advance AI?
Artificial Intelligence has the potential to advance some of the thorniest problems in virtual and augmented reality. Come hear a panel of experts from across the VR industry talk about how deep learning can revolutionize topics ranging from gaze tracking, to user pose sensing and avatar control, to rendering for focus-capable displays, and discuss applications, limitations, and implications of AI in VR.
In this "state of the union" survey, we will review the technology, the components, and the challenges of virtual reality. We'll describe how GPUs fit into these challenges, and lay out NVIDIA Research's vision for the future of VR.
NVIDIA Research reviews the technology, the components, and the challenges of virtual reality. We describe how GPUs are addressing these challenges, and our vision for the future of VR.
Modern graphics processing units, or GPUs, herald the democratization of parallel computing. Today's GPUs not only render video game frames, they also accelerate astrophysics, video transcoding, image processing, protein folding, seismic exploration, computational finance, radio astronomy, heart surgery, self-driving cars - the list goes on and on. It is imperative that we teach students parallel computing: they will inherit a world in which there exists no other kind. Meanwhile, the world of education is being shaken up by massively open online courses, or MOOCs, which offer a democratization of education. Universities and companies suddenly offer high-quality courses over the internet - for free! - to anybody in the world. John Owens (UC Davis) and David Luebke (NVIDIA) have been teaching a MOOC focused on GPU computing. The Udacity course has over 40,000 registered students from over 130 countries. This session will present their experience and thoughts on GPUs, MOOCs, and parallel computing education.
We invite you to a special presentation detailing our Academic Programs and all the ways NVIDIA supports teaching and research in higher education. You will learn which programs are available, what benefits they offer, what our expectations are, who the key players are, what the best practices are, and how you can participate as an academic or researcher. The highlight of the session will be the CUDA Achievement Awards, showcasing work at the CUDA Centers of Excellence (CCOEs), institutions at the forefront of GPU computing teaching and research. If you are an academic researcher, you won't want to miss this session!
We invite you to a special presentation from our 2011-2012 Graduate Fellowship recipients to learn "what's next" in the world of research and academia. The NVIDIA Graduate Fellowship recipients were selected from 200 applications across 27 countries. Sponsored projects involve a variety of technical challenges, including computer architecture, computer vision, programmability and optimization for heterogeneous systems, automotive computing, and much more. We believe these minds will lead the future of our industry, and we are proud to support the 2011-2012 NVIDIA Graduate Fellows. For more information on the 2011-2012 NVIDIA Graduate Fellows, please visit www.NVIDIA.com/fellowship.
The future of computer graphics presents many challenges. The worlds we render will be vastly more complex in geometry and artistic texture. Real-time rendering will use global illumination to achieve a far richer appearance, robustly. And content creation, which has grown to be the dominant cost of producing both games and film, must get simpler and less expensive. The NVIDIA Graphics Research group addresses these challenges with a focus on Computational Graphics: using general-purpose computation to enhance and extend the traditional pipelines and capabilities of real-time rendering. In this talk David Luebke, who leads graphics research, will give an overview of recent and ongoing work in computational graphics at NVIDIA Research.
To highlight and reward the excellent research taking place at our CCOEs, we hosted an event during GTC 2012 to showcase four of their top achievements. Each of our 18 CCOEs was asked to submit an abstract describing what they considered to be their top achievement in GPU Computing over the past 18 months. An NVIDIA panel selected four exemplars from these submissions to represent their work on GPU Computing research. Each of our CCOEs has made amazing contributions, but the four CCOEs selected to showcase their work were:
Each of the four CCOE finalists was awarded an HP ProLiant SL250 Gen8 GPU system configured with dual NVIDIA Tesla K10 GPU accelerators in recognition of this accomplishment. After the four presentations, the CCOE representatives were asked to vote for their favorite presentation and achievement. Tokyo Tech was voted the audience favorite, and thus earned the extra bragging rights of being honored by its peers as the inaugural recipient of the CUDA Achievement Award 2012.