A Continuous Approach to Controllable Text Generation using Generative Adversarial Networks
The challenge of training generative adversarial networks (GANs) to produce discrete tokens has attracted a considerable amount of work in the past year. Still, compared with the visual domain, successful applications of deep generative models to text generation remain limited. One of the reasons is the difficulty of passing gradients through discrete sampling steps while keeping the network differentiable. The known effective models that generate text with GANs extend the original framework proposed by Goodfellow et al. using reinforcement learning. We propose a novel approach that requires no modification to the training process introduced by Goodfellow et al. and, in addition, is able to produce meaningful text without any pre-training. Our approach is not limited to the textual domain and could be applied to a variety of problems involving the generation of sequences of discrete tokens. As a proof of concept, we extend our text generator into an image captioning model that performs controllable text generation: generation can be steered by an image, or by any other vector representing relevant information. Evaluating results produced by GANs is a common problem; the absence of a ground-truth sample corresponding to each generated output makes the application of common evaluation methods difficult. As a solution to this challenge, we have developed an automatic evaluation method for text generation systems. This method combines the machine translation evaluation metric, bilingual evaluation understudy (BLEU), with a set of interchangeable information retrieval techniques. This permits us to evaluate the semantic quality of our models, as well as to compare them to a baseline.
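The core of a continuous approach can be illustrated with a small sketch (the details below are illustrative assumptions, not the thesis's exact architecture): instead of sampling a discrete token with a non-differentiable argmax, the generator emits a softmax distribution over the vocabulary and passes the expected token embedding downstream, so gradients can flow end to end.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; lower temperature sharpens the distribution.
    z = logits / temperature
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical vocabulary of 5 tokens, each with a 3-dimensional embedding.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5, 3))
logits = np.array([2.0, 0.5, -1.0, 0.1, 1.2])

# Discrete choice (non-differentiable): argmax selects a single embedding.
hard_token = embeddings[np.argmax(logits)]

# Continuous relaxation (differentiable): expected embedding under softmax.
probs = softmax(logits, temperature=0.5)
soft_token = probs @ embeddings

# As temperature approaches 0, the soft token approaches the hard one.
cold = softmax(logits, temperature=0.01) @ embeddings
```

In this relaxed view, the discriminator sees a weighted mixture of embeddings rather than a one-hot token, which is what allows the standard GAN training procedure to be used without reinforcement learning.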
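The evaluation idea, pairing BLEU with a retrieval step, can be sketched as follows. This is a simplified unigram variant with a toy overlap-based retriever; the function names, corpus, and retrieval heuristic are our own illustrative assumptions, not the thesis's actual pipeline.

```python
from collections import Counter

def unigram_bleu(candidate, references):
    # Clipped unigram precision: each candidate token is credited at most
    # as many times as it appears in any single reference.
    cand = Counter(candidate.split())
    max_ref = Counter()
    for ref in references:
        for tok, n in Counter(ref.split()).items():
            max_ref[tok] = max(max_ref[tok], n)
    clipped = sum(min(n, max_ref[tok]) for tok, n in cand.items())
    return clipped / max(1, sum(cand.values()))

def retrieve_references(generated, corpus, k=2):
    # Toy retrieval step: rank corpus sentences by token overlap with the
    # generated sample and use the top-k as pseudo-references.
    def overlap(sentence):
        return len(set(sentence.split()) & set(generated.split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

corpus = [
    "a dog runs in the park",
    "a cat sleeps on the mat",
    "the stock market fell today",
]
generated = "a dog sleeps in the park"
refs = retrieve_references(generated, corpus)
score = unigram_bleu(generated, refs)
```

Because GAN samples have no paired ground truth, retrieval supplies surrogate references; swapping in a different retriever (e.g. TF-IDF or embedding similarity) changes the reference set while the BLEU scoring stays fixed, which is what makes the retrieval component interchangeable.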