Computationally efficient Bayesian approximation of fractional Gaussian noise using AR1 processes
Abstract
The goal of this thesis is to explore a way of performing efficient Bayesian inference for fractional Gaussian noise (FGN) series using the R-INLA framework. Inference on the Hurst exponent and the innovation variance of an FGN can easily be implemented in INLA by formulating the model as a latent Gaussian model. However, since the variables of an FGN process are conditionally dependent, the INLA program runs so slowly that this is not a viable method for performing Bayesian inference. To address this, another approach is considered, namely to approximate the FGN as a weighted sum of AR1 processes whose parameters are determined by numerical optimization. The AR1 processes and their weighted sum form another latent field for the latent Gaussian model, one with more variables but a sparser conditional dependence structure. This approximation turns out to be much faster, but also less accurate than the first model: the estimates were found to be biased, with the bias depending on the true value of the Hurst exponent. The accuracy could be improved if the numerical optimization that determines the AR1 parameters were carried out with higher precision.
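As a sketch of the approximation described above (the notation $m$, $w_k$, and $\phi_k$ is introduced here for illustration and is not taken from the thesis), the FGN series $f_t$ is written as a weighted sum of $m$ independent AR1 components:
\[
  f_t \;\approx\; \sum_{k=1}^{m} \sqrt{w_k}\, x_t^{(k)},
  \qquad
  x_t^{(k)} = \phi_k\, x_{t-1}^{(k)} + \varepsilon_t^{(k)},
\]
where each $x_t^{(k)}$ is a stationary AR1 component with unit marginal variance, and the weights $w_k$ (with $\sum_k w_k = 1$) and lag-one correlations $\phi_k$ are chosen by numerical optimization so that the autocovariance of the sum approximates that of an FGN with Hurst exponent $H$.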