Domain-general active learning strategies using inter-sample similarity and reinforcement learning
One of the major drawbacks of deep learning is the amount of labeled training data required to reach acceptable performance. This labeled data can be difficult and expensive to obtain. The goal of active learning is to reduce the amount of labeled data a model needs to achieve acceptable performance. Traditional active learning has limited effectiveness across domains: no single query strategy outperforms all others in every domain. In this thesis, we investigate ways to close the gap between current state-of-the-art active learning research and a domain-general active learning strategy. We propose an architecture that exploits the similarity between samples to improve the performance of traditional active learning algorithms. We test the architecture on datasets with different characteristics and show that using inter-sample similarity improves performance in all cases: our architecture reaches or exceeds the performance of traditional entropy sampling using only 62.5% and 83.9% of the data on the MR and UMICH datasets, respectively. To the best of our knowledge, this is the first analysis of the effects of inter-sample similarity in active learning.

Furthermore, we explore approaches to active learning in a visual-semantic embedding setting. We first provide a qualitative discussion of how model uncertainty can be represented in such an application. Using this representation, we propose a reinforcement learning approach to stream-based active learning. Lastly, we evaluate the effectiveness of the proposed approach and provide an in-depth discussion of its performance.
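The baseline referred to above is entropy sampling, which queries labels for the pool samples whose predicted class distribution has the highest entropy. A minimal sketch of that baseline, together with a hypothetical similarity-weighted variant illustrating the inter-sample-similarity idea (the function names and the mean-similarity weighting are assumptions for illustration, not the thesis's actual architecture):

```python
import numpy as np

def entropy_sampling(probs, k):
    """Select the k pool samples with the highest predictive entropy.

    probs: (n_samples, n_classes) predicted class probabilities.
    Returns indices sorted from most to least uncertain.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-k:][::-1]

def similarity_weighted_sampling(probs, sim, k):
    """Hypothetical variant: scale each sample's entropy by its mean
    similarity to the rest of the pool, so uncertain samples that are
    also representative of many others are preferred.

    sim: (n_samples, n_samples) pairwise similarity matrix.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    representativeness = sim.mean(axis=1)
    return np.argsort(entropy * representativeness)[-k:][::-1]
```

In a pool-based loop, the selected indices are sent to an oracle for labeling, added to the training set, and the model is retrained before the next query round.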