ANN-based classification of humans and animals using UWB radar
Abstract and Conclusion
Micro-Doppler signatures take advantage of Doppler information in radar data to create time vs. frequency information. Such signatures are interpretable by the human eye, with visible features such as the frequencies of swinging arms and legs as well as the radial velocities of moving objects. This work explores the possibility of distinguishing human and animal signatures by using micro-Doppler data as input to a neural network classifier, more specifically a CNN.

A simulator was used to generate radar data of humans and dogs. This data is noise-free and formed the basis for training and testing the neural network. Furthermore, a second test set was developed from real radar data, created from recordings of a human and a dog in various situations using Novelda's X4 radar. This was done in a controlled test environment to minimize noise and disturbances in the data.

Micro-Doppler signatures are retrieved from the radar data by a feature extraction process consisting of a series of digital signal processing steps. These are appropriately formatted to serve as input data to a CNN. Two types of images with different resolutions are studied: large images of size 256x256 contain high-resolution frequency content, whereas the downsampled images of size 32x32 contain contours of the same content at poor resolution. Machine learning methods such as L2-regularization and dropout are explored in an effort to increase the testing performance of the neural network.
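The exact processing chain is not specified in this summary. As a minimal sketch, assuming the core of the feature extraction is a short-time Fourier transform over the slow-time signal followed by block-averaging down to 32x32, it could look roughly as follows; the function names, the pulse repetition frequency parameter, and the window sizes are illustrative assumptions, not the thesis' actual implementation.

```python
import numpy as np
from scipy.signal import stft

def micro_doppler_image(slow_time_signal, prf, nperseg=256, noverlap=192):
    """Spectrogram (micro-Doppler signature) of a slow-time radar signal.

    slow_time_signal: complex samples from one (or a summed set of) range bin(s).
    prf: pulse repetition frequency in Hz (illustrative parameter).
    Returns a time vs. frequency image in dB, zero Doppler centered.
    """
    f, t, Z = stft(slow_time_signal, fs=prf, nperseg=nperseg,
                   noverlap=noverlap, return_onesided=False)
    power_db = 20 * np.log10(np.abs(np.fft.fftshift(Z, axes=0)) + 1e-12)
    return power_db  # would then be cropped/padded to 256x256 for the CNN

def downsample_256_to_32(img256):
    """Block-average a 256x256 image down to 32x32 (8x8 pixel blocks)."""
    return img256.reshape(32, 8, 32, 8).mean(axis=(1, 3))
```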
The neural network was found to train more easily when using images of micro-Doppler signatures spanning several seconds. These images contain frequency information, such as multiple cycles of moving limbs, that the network can pick up to distinguish a human from a dog. The alternative of using images with only small variations in frequency content proved hard to classify, as no general, extractable features are visible in such images.
Testing the images on the different CNN configurations revealed that the human images were typically classified correctly more often than the dog images. The greater access to diverse human-scenario radar data than dog data makes it difficult for the network to learn general dog features. The best network predictions were made on the 32x32 images: 100% classification accuracy was achieved on the simulated test images when using the dropout technique alone as well as when using dropout together with L2-regularization. These cases showed strong predictions on both human images and dog images, which suggests that the CNN learned some general features separating the two classes during training.
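The architecture and hyperparameters behind these results are not given in this summary. A minimal Keras-style sketch of a small CNN on the 32x32 inputs, combining the dropout and L2-regularization techniques mentioned above, might look as follows; all layer sizes, the dropout rate, and the L2 weight are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_cnn(input_shape=(32, 32, 1), l2_weight=1e-3, dropout_rate=0.5):
    """Small CNN for two-class (human vs. dog) micro-Doppler images.

    l2_weight and dropout_rate are illustrative; the thesis values may differ.
    """
    model = tf.keras.Sequential([
        layers.Conv2D(16, 3, activation="relu", padding="same",
                      kernel_regularizer=regularizers.l2(l2_weight),
                      input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same",
                      kernel_regularizer=regularizers.l2(l2_weight)),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(dropout_rate),
        layers.Dense(64, activation="relu",
                     kernel_regularizer=regularizers.l2(l2_weight)),
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```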
The real data was poorly classified by the network configurations tested. This is as expected, since the training of the CNN was based on simulated images only. The noise present in the real images is non-existent in the simulated images, which makes relating real and simulated images difficult.
In conclusion, simulated micro-Doppler signatures spanning several seconds can be classified with high certainty, and a high resolution is not necessary to capture the important features in the data. Due to the greater access to diverse radar data on humans, general features in human images are more easily learned by a neural network than is the case for dog images. Real data cannot be classified with the methods used in this thesis. For this to be realistic, closer resemblance between real and simulated data is necessary, or real images must be included when training the neural network, which requires greater access to real data.