dc.description.abstract | In many of the problem domains typically tackled by deep learning, data
is plentiful and cheap but labeling of the data is tedious and expensive.
Letting a model actively select the data instances it is uncertain about to train
on and ignore others can reduce the percentage of instances that must
be labeled to achieve satisfactory results. To this end, this project presents a
novel semi-supervised active learning algorithm called Active Deep Dropout
networks (ADD-networks). It is based on evaluating a deep neural network s
uncertainty on unlabeled instances, through measuring disagreement within
a committee of networks derived from the original network. The committee
members are Monte-Carlo-sampled from the full network using the concept
of dropout. Experiments on classifying handwritten digits show that ADD-networks
are comparable to a state-of-the-art method and vastly outperform
random selection of instances. | |
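The abstract's core idea, dropout-sampled committee members voting on unlabeled instances and the most-disagreed-upon instances being selected for labeling, can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the function names are hypothetical, the committee's stochastic forward passes are assumed to be precomputed as an array of per-member class probabilities, and disagreement is measured here as vote entropy, one common query-by-committee criterion.

```python
import numpy as np

def committee_disagreement(probs):
    """Vote-entropy disagreement for a dropout-sampled committee.

    probs: array of shape (T, N, C) -- class probabilities from T
    committee members (dropout-masked forward passes) over N unlabeled
    instances with C classes.  Returns one disagreement score per instance.
    """
    T, N, C = probs.shape
    votes = probs.argmax(axis=2)                      # (T, N) hard votes per member
    # Fraction of committee members voting for each class, per instance.
    freq = np.stack([(votes == c).mean(axis=0) for c in range(C)], axis=1)  # (N, C)
    # Vote entropy: 0 when the committee is unanimous, maximal when split evenly.
    return -np.where(freq > 0, freq * np.log(freq), 0.0).sum(axis=1)

def select_to_label(probs, k):
    """Indices of the k instances the committee disagrees on most."""
    return np.argsort(-committee_disagreement(probs))[:k]
```

Under this sketch, instances on which all dropout-sampled subnetworks agree score zero and are ignored, while instances that split the committee are queried for labels first, which is the mechanism the abstract credits for reducing the labeling budget relative to random selection.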