A parallel solution for the Hessian matrix in neural network optimisation: the case of the mixture of expert model

Crawford School of Public Policy

Event details

ACDE Seminar

Date & time

Tuesday 27 July 2021
2.00pm–3.30pm

Venue

Online via Zoom

Speaker

Pham Van Ha, Australian National University

Second-order optimisation methods such as the Newton method are known for their fast convergence. However, the high computational cost of calculating the Hessian matrix and its inverse has hindered the use of the Newton method in neural network optimisation. The recent literature has shown that, with a simple reordering of variables and equations, the Jacobian matrix of computable general equilibrium (CGE) models can be transformed into special forms that allow an efficient parallel solution of large-scale CGE models. This paper shows that the Hessian matrix of certain ‘mixture of expert’ models can also be reordered into doubly-bordered block-diagonal form and solved efficiently in both distributed and shared memory environments. The seminar will present illustrative numerical applications, including an application to the S&P/ASX 200 stock index.
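To make the parallelism concrete: once the Hessian is reordered into doubly-bordered block-diagonal form, the Newton system can be solved by block (Schur-complement) elimination, in which each diagonal block is solved independently and only a small border system for the shared variables remains. The sketch below is illustrative only and is not the paper's implementation; the block names (A_i, B_i, C), the per-expert partitioning, and the function newton_step_dbbd are assumptions introduced for exposition.

    # Illustrative sketch (assumed structure, not the seminar's code):
    # Newton step H p = -g for a doubly-bordered block-diagonal Hessian
    #   H = [[A_1            B_1 ],
    #        [     A_2       B_2 ],
    #        [          ...  ... ],
    #        [B_1^T B_2^T ... C  ]]
    import numpy as np

    def newton_step_dbbd(A_blocks, B_blocks, C, g_blocks, g_border):
        """Solve H p = -g by block elimination.

        A_blocks : list of (n_i, n_i) diagonal blocks (one per expert)
        B_blocks : list of (n_i, m) border blocks coupling experts to shared vars
        C        : (m, m) border block for the shared parameters
        g_blocks : list of (n_i,) gradient segments, one per expert
        g_border : (m,) gradient segment for the shared parameters
        """
        # Per-block solves A_i X_i = B_i and A_i y_i = g_i.
        # These are mutually independent -- the parallelisable part.
        X = [np.linalg.solve(A, B) for A, B in zip(A_blocks, B_blocks)]
        y = [np.linalg.solve(A, g) for A, g in zip(A_blocks, g_blocks)]

        # Schur complement of the border block and its reduced right-hand side.
        S = C - sum(B.T @ Xi for B, Xi in zip(B_blocks, X))
        r = g_border - sum(B.T @ yi for B, yi in zip(B_blocks, y))

        # Solve the small border system, then back-substitute per block.
        p_border = np.linalg.solve(S, -r)
        p_blocks = [-(yi + Xi @ p_border) for yi, Xi in zip(y, X)]
        return p_blocks, p_border

In a distributed-memory setting, each expert's blocks (A_i, B_i, g_i) would reside on a separate worker, and only the small m-by-m contributions B_i^T X_i and B_i^T y_i need to be communicated to assemble and solve the border system, which is what allows the per-block work to proceed in parallel.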
