Abstract:
We'll describe methods to improve the performance of FaceNet in parallel computing environments by quantitatively identifying the major performance bottlenecks and investigating how those measurements affect accuracy and efficiency in TensorFlow. We'll discuss our performance monitoring tool for bottleneck analysis, and describe how data preprocessing, learning-rate scaling, and communication algorithms are incorporated into our neural network training.
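The learning-rate scaling mentioned above is commonly implemented with the linear scaling rule plus a warmup phase when training is distributed across many workers. The sketch below is an assumption about the general technique, not the authors' specific implementation; the function name and parameters are hypothetical.

```python
def scaled_learning_rate(base_lr, num_workers, step, warmup_steps):
    """Linear scaling rule with warmup (a common heuristic for
    large-batch distributed training, not necessarily the talk's method).

    The target rate is base_lr * num_workers, since the effective batch
    size grows linearly with the number of workers; a linear warmup over
    warmup_steps avoids instability early in training.
    """
    target_lr = base_lr * num_workers
    if step < warmup_steps:
        # Interpolate linearly from base_lr up to target_lr.
        return base_lr + (target_lr - base_lr) * step / warmup_steps
    return target_lr
```

For example, with a base rate of 0.1 on 8 workers and a 100-step warmup, the rate starts at 0.1, reaches 0.45 at step 50, and holds at 0.8 from step 100 onward.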
Topics:
Deep Learning & AI Frameworks
Event:
GTC Silicon Valley