Abstract:
Markov decision processes (MDPs) have been used in real-world path planning, where environment information is incomplete or dynamic. The problem with the MDP formalism is that its state space grows exponentially with the number of domain variables, and the cost of its inference methods grows with the number of available actions. To overcome this issue, we formulate an MDP solver in terms of matrix multiplications, based on the value iteration algorithm; thus we can take advantage of GPUs to interactively produce obstacle-free paths in the form of an optimal policy. We'll present a performance analysis of our technique on Jetson TK1, CPU, and GPU platforms. Our algorithm achieves a 90x speed-up on GPUs and a 30x speed-up on the Jetson TK1 compared with its multi-threaded CPU version.
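To illustrate the idea of expressing value iteration as matrix multiplications, here is a minimal NumPy sketch: each Bellman backup becomes a batched matrix-vector product over the per-action transition matrices, which is the kind of dense linear algebra that maps well onto a GPU. The array names, shapes, and convergence test are assumptions for illustration only, not the authors' GPU implementation.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6, max_iter=1000):
    """Value iteration written as matrix multiplications.

    P : (A, S, S) array, P[a, s, s'] = transition probability (assumed layout).
    R : (A, S) array, R[a, s] = expected immediate reward (assumed layout).
    Returns the value function V and a greedy policy over states.
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    Q = np.zeros((A, S))
    for _ in range(max_iter):
        # Bellman backup for all actions at once:
        # Q[a, s] = R[a, s] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)          # greedy maximization over actions
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = Q.argmax(axis=0)          # optimal policy from the last backup
    return V, policy
```

On a GPU, the `P @ V` product in each iteration can be dispatched as a batched dense matrix-vector multiply (e.g. through a BLAS-style library), which is presumably where the reported speed-ups over the multi-threaded CPU version come from.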
Topics:
Algorithms & Numerical Techniques, Artificial Intelligence and Deep Learning
Event:
GTC Silicon Valley