This paper presents a gradient-based framework for hyperparameter optimisation in Hamiltonian Monte Carlo (HMC), targeting convergence speed directly by maximising the expected log-target density under the chain's final state.

The resulting objective for tuning HMC hyperparameters is motivated by variational principles and circumvents the limitations of existing mixing-speed proxies and loose ELBO bounds.
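
As a minimal formal sketch of the objective described above (the notation here is illustrative and not taken from the paper): write $\pi$ for the target density, $\theta$ for the HMC hyperparameters (for example the step size and number of leapfrog steps), and $x_T$ for the state of the chain after $T$ HMC transitions with kernel $K_\theta$, started from an initial draw $x_0 \sim q_0$. The tuning problem can then be stated roughly as

$$
\theta^\star \;=\; \arg\max_{\theta}\; \mathbb{E}_{x_0 \sim q_0,\; x_T \sim K_\theta^{T}(x_0,\,\cdot\,)}\big[\log \pi(x_T)\big],
$$

so that, for a fixed budget of $T$ transitions, hyperparameters are rewarded for driving the chain's final state into high-density regions of $\pi$. Under this reading, the expectation can be estimated by simulating chains, and its gradient with respect to $\theta$ obtained by differentiating through the leapfrog dynamics.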
