## A multiple-try Metropolis-Hastings algorithm with a tailored number of proposals

##### Master's thesis

##### Permanent link

http://hdl.handle.net/11250/2622329

##### Publication date

2019

##### Abstract

This thesis presents a multiple-try Metropolis--Hastings algorithm that extends the target distribution in order to sample a directed acyclic graph with a set of nodes and directed edges. Each node is associated with a value that can potentially be accepted as a value from the original target distribution. The algorithm is a new version of an algorithm presented in Luo and Tjelmeland (2018), and many of the derivations are based on that algorithm.

A multiple-try Metropolis--Hastings (M--H) algorithm is a special case of the M--H algorithm which, instead of making a single proposal, proposes multiple values in every iteration. The motivation for generating multiple proposals is to reduce the correlation between the values sampled in the M--H simulation. Luo and Tjelmeland (2018) present a multiple-try M--H algorithm which uses an undirected acyclic graph to generate multiple proposals, by first sampling one of the nodes in the graph as the root of a new directed version of the graph. The root node has an associated value, which is assumed to be distributed according to the target distribution. The multiple proposals are generated by a proposal distribution $q(x_n|x_{pred(n)})$, proposing values along the directed edges after first conditioning on the value associated with the root node. When every node in the directed graph has been assigned a value generated through $q(x_n|x_{pred(n)})$, a new node is sampled as the root, and its associated value is accepted as a sample from the target distribution. New proposals are then generated in the next iteration of the algorithm, in an order determined by the structure of the graph with the newly sampled node as root.
We present a new version of the algorithm from Luo and Tjelmeland (2018) which, instead of using a fixed undirected graph to generate multiple proposals, grows a directed acyclic graph (DAG) in every iteration of the algorithm. The idea is to tailor the number of proposals by making the DAG grow in directions of high density values. Every node in the DAG is associated with a value, and these values are generated through a proposal distribution $q(x_n|x_{pred(n)})$, as in Luo and Tjelmeland (2018). The difference is that when we grow the DAG stochastically, the edges and nodes of the DAG are generated as the proposals are made, making it possible for the DAG to stop growing in directions of bad proposals and to expand in directions of high density values. The derivation of the algorithm is based on the derivations in Luo and Tjelmeland (2018) and is similar in every respect except that we include the growth of the DAG in the joint distribution used as the target distribution. We demonstrate that the new algorithm does not converge to the specific target distribution of interest, but instead has a variance that depends on the parameters used in the M--H setup. We attempt to locate the error by simulating the algorithm for different parameters, and conclude that the error must lie in how the probability of generating a specific DAG is calculated. We also investigate whether the new algorithm succeeds in proposing high density values compared to the algorithm from Luo and Tjelmeland (2018), and compare different strategies for growing the DAG based on their ability to generate high density values and to jump between levels when a new node is sampled as root.
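The growth step — expanding the DAG where proposals land in high-density regions and stopping where they do not — can be sketched as follows. The branching rule below (a logistic function of the log-density gap between a node's value and the root's) is an illustrative assumption of ours, not the growth strategy actually analysed in the thesis, and the tree built here is only the simplest special case of a DAG.

```python
import math
import random

def grow_dag(log_pi, root_value, q_sample, max_nodes=16, rng=None):
    """Grow a proposal tree (a special case of a DAG) from a root value.

    A node spawns children only with a probability that increases with
    its target density relative to the root, so the structure keeps
    expanding along high-density proposals and stops along poor ones.
    The logistic branching rule is an assumed toy rule, not the
    thesis's actual growth strategy.
    """
    rng = rng or random.Random()
    nodes = [root_value]   # value associated with each node
    parent = [None]        # index of each node's predecessor
    frontier = [0]         # nodes that may still spawn children
    while frontier and len(nodes) < max_nodes:
        i = frontier.pop(0)
        # branching probability: ~1 for values much denser than the
        # root, ~0 for much less dense values (bad proposals)
        p_branch = 1.0 / (1.0 + math.exp(-(log_pi(nodes[i]) - log_pi(nodes[0]))))
        for _ in range(2):  # attach at most two children per node
            if len(nodes) >= max_nodes:
                break
            if rng.random() < p_branch:
                # child value drawn from q(x_n | x_pred(n))
                child = q_sample(nodes[i], rng)
                parent.append(i)
                nodes.append(child)
                frontier.append(len(nodes) - 1)
    return nodes, parent

# usage: standard-normal target with a random-walk proposal
nodes, parent = grow_dag(
    lambda x: -0.5 * x * x,
    root_value=0.0,
    q_sample=lambda x, rng: x + rng.gauss(0.0, 1.0),
    rng=random.Random(7),
)
print(len(nodes), parent[:5])
```

Note that the number of proposals per iteration is now random: a chain started in a low-density region will generate few proposals, while one in a high-density region will branch repeatedly, which is the tailoring the abstract refers to.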