Quantification of an Approximate Forward-Backward Algorithm Applied to a Convolutional Model
Abstract
In this master's thesis, an approximate forward-backward algorithm for binary Markov random fields is applied to and evaluated on a convolutional Bayesian model. The Bayesian model is transformed into its unique corresponding energy function of binary variables, where interaction parameters define the function. We quantify the quality of the approximation with an independent-proposal Metropolis-Hastings algorithm, applying the approximation to a variety of synthetic test cases. The acceptance rates increase as the maximum number of neighbors increases, as expected. The highest acceptance rate, 94.95% with 10 neighbors, was obtained for a case with increased noise in the likelihood. The lowest acceptance rates were obtained for low-noise cases; for the binary Markov chain prior, an acceptance rate of 8.03% was recorded. For this last case, the approximation was also simulated without the Metropolis-Hastings algorithm and compared with the a posteriori distribution, and the two have approximately the same marginal probabilities. The same was observed for the four-state Markov chain prior. We thus conclude that the approximate forward-backward algorithm is viable even when the Metropolis-Hastings algorithm yields low acceptance rates.
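The sketch below is only a minimal illustration, not the thesis's actual model or code, of how an independent-proposal Metropolis-Hastings sampler quantifies the quality of an approximating distribution through its acceptance rate: the proposal stands in for the approximate forward-backward distribution, and the toy Ising-like energy stands in for the posterior of the convolutional model. All densities, parameter values, and names are hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(0)
n = 8  # number of binary variables (toy size, not from the thesis)

def log_target(x):
    # Toy unnormalised log-posterior: an Ising-like energy with a field term.
    field = 0.4 * np.sum(2 * x - 1)
    pair = 0.3 * np.sum((2 * x[:-1] - 1) * (2 * x[1:] - 1))
    return field + pair

def sample_proposal():
    # Independent proposal: each variable drawn independently; placeholder
    # for a sample from the approximate forward-backward distribution.
    p = 0.6
    x = (rng.random(n) < p).astype(int)
    logq = np.sum(np.where(x == 1, np.log(p), np.log(1 - p)))
    return x, logq

n_iter = 20000
x, logq_x = sample_proposal()
accepted = 0
for _ in range(n_iter):
    x_new, logq_new = sample_proposal()
    # Independent-proposal MH acceptance probability:
    # alpha = min(1, p(x') q(x) / (p(x) q(x')))
    log_alpha = (log_target(x_new) - log_target(x)) + (logq_x - logq_new)
    if np.log(rng.random()) < log_alpha:
        x, logq_x = x_new, logq_new
        accepted += 1

print(f"acceptance rate: {accepted / n_iter:.2%}")

A proposal that closely matches the target yields acceptance probabilities near one, so the empirical acceptance rate printed above serves as a summary of how well the approximating distribution tracks the target, which is the role it plays in the evaluation described in the abstract.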