Guiding the training of Generative Adversarial Networks
Image generation with Artificial Neural Networks (ANNs) has become popular in recent years, using learned knowledge about objects and structures to generate dreamlike images, apply smart filters, or make images appear scary. Although vivid and impressive, the generated images do not appear natural to a human viewer: one can easily tell that an image is generated or digitally altered. This is not too surprising, as the neural networks are usually trained with the goal of learning the common structures needed for image recognition and object classification. Training ANNs to generate better images is a hard problem to solve with conventional supervised learning, as it requires a human to judge the quality of the generated images.

Generative Adversarial Networks (GANs) are a new approach that replaces the human judge with a neural network. This network, called the Discriminator, evaluates images produced by a second neural network, called the Generator. The two networks partake in an adversarial battle to improve their own performance. The result of this battle is a model capable of generating new, natural-looking images based on unsupervised learning of an image dataset.

GANs have evolved considerably since their inception, but open problems remain. One such problem is how to objectively evaluate their performance and the quality of their generated output. Image generation has no fixed solution, and evaluating it is as difficult as evaluating other creative works. One method allows two GANs to be compared by battling them against each other. This thesis proposes a custom state-of-the-art implementation of this method that allows two GAN models to be compared both overall and throughout the learning process.

GANs have a reputation for being hard to train. One concrete problem is maintaining the balance between the Generator and the Discriminator. As for humans, it is easier to rate the quality of images than it is to actually create them.
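The adversarial battle described above can be made concrete with the standard GAN losses: the Discriminator is rewarded for scoring real images high and generated images low, while the Generator is rewarded for fooling the Discriminator. A minimal sketch using the common non-saturating formulation — the probability values below are made-up illustrative numbers, not results from the thesis:

```python
import math

def d_loss(d_real, d_fake):
    """Discriminator loss: reward high scores on real images, low on fakes."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss(d_fake):
    """Non-saturating Generator loss: reward fooling the Discriminator."""
    return -math.log(d_fake)

# Illustrative values: the Discriminator is fairly confident the real image
# is real (score 0.8) and that the generated image is fake (score 0.3).
print(round(d_loss(0.8, 0.3), 4))  # 0.5798
print(round(g_loss(0.3), 4))       # 1.204
```

During training, the two networks take alternating gradient steps on their respective losses, which is what makes the process a "battle": each update by one network changes the loss landscape of the other.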
A good evaluator is necessary, but it must not overpower the generative model. This thesis proposes concrete techniques that favour the Generator without limiting the Discriminator. Two main approaches are explored to achieve this. The first makes guided alterations to the usually random input of the Generator. The second adds an additional Discriminator to the model. Techniques based on both of these methods were shown to effectively guide the training process and create strong models that outperformed regular GANs when compared using the previously mentioned evaluation metric.
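As an illustration of the second approach, the Generator's loss can combine feedback from two Discriminators. The combination rule below (a simple average) is my assumption for the sketch, not necessarily the scheme used in the thesis:

```python
import math

def g_loss(d_fake):
    """Non-saturating Generator loss for one Discriminator's score."""
    return -math.log(d_fake)

def dual_g_loss(d1_fake, d2_fake):
    """Generator loss averaged over two Discriminators (assumed combination).

    With two critics, the Generator receives a smoother training signal:
    even when one Discriminator overpowers it (score near 0), the other
    can still provide a usable gradient.
    """
    return 0.5 * (g_loss(d1_fake) + g_loss(d2_fake))

# One strong and one weaker Discriminator judging the same fake image:
print(round(dual_g_loss(0.05, 0.4), 4))  # 1.956
```

The intent, matching the thesis goal of favouring the Generator without limiting the Discriminator, is that neither critic alone can collapse the Generator's gradient.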