GTC ON-DEMAND

 
Abstract:

See how RAPIDS and the open source ecosystem are advancing data science. In this session, we will explore RAPIDS, the new open source data science platform from NVIDIA. Come learn how to get started leveraging these open-source libraries for faster performance and easier development on GPUs. See the latest engineering work and new release features, including benchmarks, roadmaps, and demos. Finally, hear how customers are leveraging RAPIDS in production, benefiting from early adoption, and outperforming CPU equivalents.
 
Topics:
Accelerated Data Science
Type:
Talk
Event:
GTC Silicon Valley
Year:
2019
Session ID:
S9577
 
Abstract:

The next big step in data science combines the ease of use of common Python APIs with the power and scalability of GPU compute. The RAPIDS project is the first step in giving data scientists the ability to use familiar APIs and abstractions for data science while taking advantage of the GPU-accelerated hardware commonly found in HPC centers. This session discusses RAPIDS, how to get started, and our roadmap for accelerating more of the data science ecosystem.
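The "familiar APIs" idea above can be sketched concretely: RAPIDS' cuDF mirrors the pandas DataFrame interface, so CPU code and its GPU counterpart differ mainly in the import. A minimal sketch using pandas — the data and column names are illustrative, and the cuDF swap is the intended usage pattern rather than a guarantee for every API:

```python
import pandas as pd
# With RAPIDS installed, `import cudf as pd` is intended to be a
# near drop-in swap: the calls below keep the same names and
# semantics but execute on the GPU.

# Illustrative transaction data (hypothetical values)
df = pd.DataFrame({
    "user": ["a", "b", "a", "c", "b"],
    "amount": [10.0, 20.0, 30.0, 5.0, 15.0],
})

# A typical feature-engineering step: per-user aggregate
per_user = df.groupby("user")["amount"].sum()
print(per_user.to_dict())
```

The appeal described in the abstract is exactly this: existing pandas workflows carry over with minimal rewriting.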
 
Topics:
Accelerated Data Science
Type:
Talk
Event:
Supercomputing
Year:
2018
Session ID:
SC1824
 
Abstract:

Learn how RAPIDS and the open source ecosystem are advancing data science. In this session, we will explore RAPIDS, the new open source data science platform from NVIDIA. Take a deep dive into the RAPIDS platform and learn how to get started leveraging the open-source libraries for easier development and enhanced performance in data science on GPUs. See the latest engineering work, including benchmarks and demos. Finally, see how customers are benefiting from early primitives and outperforming CPU equivalents.
 
Topics:
Accelerated Data Science
Type:
Talk
Event:
GTC Washington D.C.
Year:
2018
Session ID:
DC8256
 
Abstract:

In this session, we will explore the latest work, showcase benchmarks, and provide demos of the GPU Open Analytics Initiative (GoAi), a collection of open-source libraries, frameworks, and APIs established to standardize GPU analytics, allowing easier development and enhanced performance for GPU-accelerated analytics technologies. Numerous Fortune 500 customers experience latency and performance issues in their data pipelines. Big data frameworks and solutions have tried to address this problem, but the cost of scaling to the volume and velocity of current needs has proven prohibitively expensive. GoAi is addressing these challenges with a vision to create an end-to-end GPU-accelerated data pipeline that will smooth the onboarding ramp for enterprises to explore and integrate AI into their core data-driven decision-making processes. The session will also provide examples of how customers are benefiting from early primitives and outperforming CPU equivalents.
 
Topics:
Artificial Intelligence and Deep Learning
Type:
Instructor-Led Lab
Event:
GTC Europe
Year:
2018
Session ID:
E8495
 
Abstract:
This talk will discuss the evolution of the GPU Open Analytics Initiative (GoAi) from its inception to today. GoAi, at its core, is a collection of libraries, frameworks, and APIs that lower the barrier of GPU adoption for data scientists. The goal of GoAi is to enable end-to-end data science workflows across many multi-GPU servers, to analyze and understand data more efficiently than ever before. To date, GoAi includes methods for performing SQL, machine learning, data processing and feature engineering, graph analytics, and graph visualization, all on the GPU. This talk will discuss the who, what, when, where, and why of GoAi, and its integration into the traditional big data world through leading open source projects like Apache Arrow and Apache Parquet. Finally, this talk will highlight major achievements of GoAi, our plans for the future, and how developers can become a part of this rapidly evolving ecosystem.
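The Apache Arrow integration mentioned above rests on a shared columnar memory layout: libraries agree on one column-oriented representation so data can move between them without copying or re-serializing. A toy pure-Python sketch of the row-versus-column distinction (this illustrates the layout idea only, not the actual Arrow buffer format):

```python
# Row-oriented storage: each record holds all of its fields together.
rows = [
    {"id": 1, "score": 0.5},
    {"id": 2, "score": 0.9},
    {"id": 3, "score": 0.1},
]

# Column-oriented storage (Arrow-style): one contiguous sequence per
# field. Analytics that scan a single field touch only that buffer,
# which is what makes columnar layouts friendly to GPUs and SIMD.
columns = {
    "id": [r["id"] for r in rows],
    "score": [r["score"] for r in rows],
}

# Scanning one column no longer drags the other fields through memory.
total_score = sum(columns["score"])
```

Because every GoAi library reads and writes the same columnar form, a SQL engine, an ML library, and a graph tool can pass results along a pipeline without conversion costs.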
 
Topics:
Accelerated Data Science, 5G & Edge, Deep Learning & AI Frameworks
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8502
 
Abstract:
As cybersecurity data volumes grow, even the best-designed SIEMs struggle to perform complex analytics over a large range of data at interactive speeds. We'll discuss how NVIDIA GPU-accelerated its own Splunk instance with technologies from the GPU Open Analytics Initiative (GoAi) to drastically improve cyberhunting. Using tools such as Anaconda, BlazingDB, Graphistry, and MapD, NVIDIA interactively explored billions of events faster than ever to detect threats and perform root cause analysis. We'll walk through how cyberdefenders can use open source tools and libraries to accelerate their own Splunk instance, with code samples and how-tos. Finally, we'll discuss how to stay involved in the GPU-accelerated Splunk community.
 
Topics:
Accelerated Data Science, 5G & Edge, Cyber Security
Type:
Talk
Event:
GTC Silicon Valley
Year:
2018
Session ID:
S8499
 
Abstract:
Analyzing vast amounts of enterprise cyber security data to find threats can be cumbersome. Cyber threat detection is also a continuous task, and because of financial pressure, companies have to find optimized solutions for this volume of data. We'll discuss the evolution of big data architectures used for cyber defense and how GPUs are allowing enterprises to efficiently improve threat detection. We'll cover: (1) briefly, the evolution from traditional platforms to lambda architectures and ultimately GPU-accelerated solutions; (2) current GPU-accelerated database, analysis, and visualization technologies (such as MapD, BlazingDB, H2O.ai, Anaconda, and Graphistry) and the problems they solve; (3) the need to move beyond traditional rule-based indicators of compromise and use a combination of machine learning, graph analytics, and deep learning to improve threat detection; and finally (4) our future plans to continue to advance GPU-accelerated cyber security R&D as well as the GPU Open Analytics Initiative.
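The "beyond rule-based indicators" point in (3) can be illustrated with a toy contrast: a static indicator-of-compromise match versus a simple statistical anomaly score. All event data, addresses, and thresholds below are hypothetical, and the z-score style check is a crude stand-in for the ML and graph methods the talk describes:

```python
from statistics import mean, stdev

# Hypothetical event stream: (source_ip, bytes_out)
events = [
    ("10.0.0.1", 1200), ("10.0.0.2", 900), ("10.0.0.3", 1100),
    ("10.0.0.4", 1000), ("10.0.0.5", 50000),  # exfiltration-like outlier
]

# Rule-based detection: match against a static blocklist (IOC).
# It can only catch what has already been catalogued.
blocklist = {"203.0.113.7"}
rule_hits = [ip for ip, _ in events if ip in blocklist]

# Statistical detection: flag sources whose outbound volume is far
# from the mean, catching behavior no rule anticipated.
volumes = [b for _, b in events]
mu, sigma = mean(volumes), stdev(volumes)
anomalies = [ip for ip, b in events if abs(b - mu) > 1.5 * sigma]
```

Here the blocklist finds nothing, while the behavioral check flags the outlier host — the gap that motivates combining rules with learned models at GPU scale.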
 
Topics:
Cyber Security, Artificial Intelligence and Deep Learning
Type:
Talk
Event:
GTC Israel
Year:
2017
Session ID:
SIL7129
 
Abstract:
NVIDIA DGX Systems powered by Volta deliver breakthrough performance for today's most popular deep learning frameworks. Attend this session to hear from DGX product experts and gain insights that will help researchers, developers, and data science practitioners accelerate training and iterate faster than ever. Learn (1) best practices for deploying an end-to-end deep learning practice, (2) how the newest DGX systems including DGX Station address the bottlenecks impacting your data science, and (3) how DGX software including optimized deep learning frameworks give your environment a performance advantage over GPU hardware alone.
 
Topics:
Artificial Intelligence and Deep Learning
Type:
Talk
Event:
GTC Israel
Year:
2017
Session ID:
SIL7146
 
Abstract:

Enterprises "assume breach": someone, somewhere, already compromised them. Analysts sift through a GB/min (or more!) of attack logs from hundreds of thousands of systems. For every identified incident, they then map out the entire breach by backtracking through months of alerts. This talk shares how Graphistry and Accenture tackled the visual analytics problem: how do we explore big graphs? We'll drill into two of our GPU technologies for visualizing graphs: [1] StreamGL, our distributed real-time renderer for delivering buttery interactions, smart designs, and responsive analytics to standard web devices; [2] Node-OpenCL and our CLJS client: open source JavaScript libraries for server-side GPU scripting.

 
Topics:
Big Data Analytics, Aerospace and Defense, Professional Visualisation
Type:
Talk
Event:
GTC Silicon Valley
Year:
2016
Session ID:
S6114
 
 