Exploring Cells and Context Approaches for RNN Based Conversational Agents
Natural Language Processing is a challenging field within Artificial Intelligence, and building bots and conversational agents has been pursued by many researchers over the last decades. Such agents should produce reasonable responses to user inputs. In this thesis, we give an introduction to creating conversational agents based on the current state of the art, using Recurrent Neural Networks (RNNs). We examine different RNN architectures and compare the quality of the outputs from nine distinct agents. The baseline is an Encoder-Decoder model using Long Short-Term Memory (LSTM) cells, fed with question-response pairs. We compare it with models built from other RNN cells and explore different approaches that take the context of the entire conversation into account. To evaluate the models, we trained them on two different datasets: five models on the Ubuntu Dialogue Corpus (UDC) and four on the OpenSubtitles dataset. The UDC is a closed-domain dataset, suitable when the goal is a useful conversational agent for a specific area. The OpenSubtitles dataset, on the other hand, is an open-domain dataset and is used to capture how well the models handle chit-chatting (casual conversations and small talk). To feed the models with proper training data, we propose a procedure that preprocesses the data in four steps. One advantage of this preprocessing procedure is the removal of unknown tokens, so the training data consists only of words that exist in the conversational agents' vocabulary. The results indicate that the use of Grid LSTM cells improves the quality of the responses for the chit-chatting task, and that the use of a context-based model generates responses that reflect the topic of the conversation.
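The unknown-token-removal idea mentioned above can be illustrated with a minimal sketch: drop every question-response pair that contains a word outside the agent's vocabulary, so no placeholder (e.g. `<UNK>`) tokens ever reach training. All names here are hypothetical; this is not the thesis's actual four-step procedure, only the vocabulary-filtering notion it describes.

```python
def filter_pairs(pairs, vocabulary):
    """Keep only (question, response) pairs whose every word is in the vocabulary.

    Hypothetical helper illustrating unknown-token removal: instead of mapping
    out-of-vocabulary words to an <UNK> token, offending pairs are discarded.
    """
    vocab = set(vocabulary)
    kept = []
    for question, response in pairs:
        words = question.split() + response.split()
        if all(word in vocab for word in words):
            kept.append((question, response))
    return kept


# Toy example in the spirit of the UDC (technical-support questions).
pairs = [
    ("how do i install vim", "use apt get install vim"),
    ("what is the meaning of life", "forty two"),
]
vocabulary = ["how", "do", "i", "install", "vim", "use", "apt", "get"]

print(filter_pairs(pairs, vocabulary))
# Only the first pair survives; the second contains out-of-vocabulary words.
```

Filtering whole pairs rather than substituting `<UNK>` trades training-set size for cleanliness: the model never learns to emit a placeholder token, at the cost of discarding some data.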