On approximations of the forward-backward algorithm for a switching linear Gaussian model.
Abstract
In this report, the forward-backward algorithm is derived and explained for a one-layer and a two-layer hidden Markov model. We first describe the structure of the hidden Markov models before presenting the forward-backward algorithm. The one-layer model is used to give a simple, ordered overview of how the algorithm is derived. We then show how to implement the algorithm for the two-layer model, and explain its structure and problematic aspects. Subsequently, we give a general description of the approximate forward-backward algorithm. In each iteration of the forward procedure, a sum of normally distributed elements is saved to memory for later use. The general idea of the approximate version of the algorithm is to remove the least important elements from these sums. We define four approximation procedures, based on two different ideas. Two of the procedures compare every element of a sum with the element of largest value, removing those elements that fall below a predefined acceptance level. The other two procedures sort the elements of each sum and remove all but a certain number of the largest ones. One of these procedures simply deletes the removed elements; the other approximates them by a single normally distributed element, which is added back to the sum. We test the four procedures on the same data set to check whether any of them stands out with regard to precision, CPU time, and memory usage. We conclude that the last-mentioned procedure, in which the removed elements are approximated by a single normally distributed element, gives significantly better results than the others.
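The three reduction ideas sketched in the abstract can be illustrated on a Gaussian mixture, which is how the sums of normally distributed elements arise in the forward procedure. The following is a minimal, hypothetical sketch (not the report's implementation); it assumes each element is a weighted univariate Gaussian stored as a `(weight, mean, variance)` tuple, and merges discarded elements by standard moment matching.

```python
# Illustrative sketch of reducing a sum (mixture) of weighted Gaussians,
# as in the forward step of a switching linear Gaussian model.
# Each element is a (weight, mean, variance) tuple. Function names are
# hypothetical, not taken from the report.

def prune_by_threshold(components, eps):
    """Keep elements whose weight is at least eps times the largest weight."""
    w_max = max(w for w, _, _ in components)
    return [c for c in components if c[0] >= eps * w_max]

def keep_top_k(components, k):
    """Keep only the k elements with the largest weights; delete the rest."""
    return sorted(components, key=lambda c: c[0], reverse=True)[:k]

def keep_top_k_and_merge(components, k):
    """Keep the k largest elements and moment-match the rest into one Gaussian."""
    ranked = sorted(components, key=lambda c: c[0], reverse=True)
    kept, rest = ranked[:k], ranked[k:]
    if not rest:
        return kept
    # Moment matching: preserve total weight, mean, and variance of the tail.
    w = sum(wi for wi, _, _ in rest)
    mu = sum(wi * mi for wi, mi, _ in rest) / w
    var = sum(wi * (vi + (mi - mu) ** 2) for wi, mi, vi in rest) / w
    return kept + [(w, mu, var)]
```

The merge step preserves the tail's total weight and its first two moments, which is why the last procedure can retain more information than plain deletion at the same memory budget.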