dc.contributor.advisor | Martins, Thiago Guerrera | |
dc.contributor.author | Wind, Johan Sokrates | |
dc.date.accessioned | 2019-10-26T14:00:31Z | |
dc.date.available | 2019-10-26T14:00:31Z | |
dc.date.issued | 2019 | |
dc.identifier.uri | http://hdl.handle.net/11250/2624603 | |
dc.description.abstract | [Translated from Norwegian] In this article we study neural networks as approximations to a related kernel method. We analyze the predictions made by a simple neural network with one hidden layer. Two sources of variance in the predictions are identified and eliminated through the introduction of Linearized Difference Networks. It is shown that for an infinite ensemble of infinitely wide neural networks, the predictions will be exactly the same as for a kernel method. We verify empirically that finite Linearized Difference Networks indeed produce predictions closer to this ideal kernel method than a standard neural network. | |
dc.description.abstract | In this paper we study neural networks as approximations to a related kernel method. We analyze the predictions made by a vanilla neural network with one hidden layer. Two sources of variance in the predictions are identified and eliminated by introducing Linearized Difference Networks. It is shown that in the case of an infinite ensemble of infinitely wide neural networks (IEIN limit), the predictions will be exactly the same as those produced by a kernel method. We verify empirically that finite Linearized Difference Networks indeed produce predictions closer to this ideal kernel method than a vanilla neural network. | |
dc.language | eng | |
dc.publisher | NTNU | |
dc.title | Linearized Difference Networks as Approximate Kernel Methods | |
dc.type | Master thesis | |