Value Function Iteration Without the Curse of Dimensionality


This paper presents a novel approach to solving dynamic programming problems by value function iteration, based on the tensor train decomposition, that is not subject to the curse of dimensionality. The tensor train decomposition approximates a high-dimensional function by expressing it as a chain of interconnected cores, yielding an approximation that is separable by variable. This structure is well suited to approximating and integrating high-dimensional functions such as value functions. I apply the method to a range of models and compare its performance against policy iteration and established sparse-grid techniques based on Smolyak and hyperbolic cross polynomials. For models with as few as four state variables, the tensor train method is faster than, and comparably accurate to, the leading sparse-grid alternatives. This paper introduces the first application of tensor trains to dynamic optimization problems in economics, offering a powerful approach to solving high-dimensional macroeconomic models.
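To illustrate the core idea behind the tensor train decomposition, the sketch below evaluates a function of four variables stored in TT format. This is a minimal, hypothetical example (the grid size `n`, TT rank `r`, and random cores are illustrative assumptions, not the paper's actual implementation): each core is a small three-dimensional array, and evaluating the function at a grid point reduces to a chain of small matrix products, so storage and work grow linearly in the number of dimensions rather than exponentially.

```python
import numpy as np

# Hypothetical illustration: a tensor-train (TT) representation of a
# 4-dimensional function V(i1, i2, i3, i4) on a discrete grid.
# Core G_k has shape (r_{k-1}, n_k, r_k), with boundary ranks
# r_0 = r_4 = 1. The grid size n and rank r below are illustrative.

rng = np.random.default_rng(0)
n, r = 10, 3  # grid points per dimension, TT rank (assumed values)
cores = [
    rng.standard_normal((1, n, r)),   # G_1: (1, n, r)
    rng.standard_normal((r, n, r)),   # G_2: (r, n, r)
    rng.standard_normal((r, n, r)),   # G_3: (r, n, r)
    rng.standard_normal((r, n, 1)),   # G_4: (r, n, 1)
]

def tt_eval(cores, idx):
    """Evaluate the TT-represented function at the multi-index idx
    by contracting one core slice at a time (a chain of small
    matrix-vector products)."""
    v = np.ones((1, 1))
    for core, i in zip(cores, idx):
        v = v @ core[:, i, :]
    return v[0, 0]

val = tt_eval(cores, (2, 5, 7, 1))
```

Note the storage comparison: the full tensor here would hold n**4 = 10,000 entries, while the TT cores hold only 2*(1*n*r) + 2*(r*n*r) = 240; for the high-dimensional value functions discussed in the paper, this gap is what sidesteps the curse of dimensionality.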
