Microfluidic droplet detection via region-based and single-pass convolutional neural networks with comparison to conventional image analysis methodologies.
Peer reviewed, Journal article
Published version
Date: 2022
Original version: 10.1016/j.mlwa.2021.100222

Abstract
As the complexity of microfluidic experiments and the associated image data volumes scale, traditional feature extraction approaches begin to struggle with both detection and analysis pipeline throughput. Deep neural networks trained to detect certain objects are rapidly emerging as data-gathering tools that can match or outperform the analysis capabilities of the conventional methods used in microfluidic emulsion science. We demonstrate that two types of neural networks, You Only Look Once (YOLOv3, YOLOv5) and Faster R-CNN, can be trained on a dataset comprising droplets generated across several microfluidic experiments and systems. The breadth of droplets used for training and validation produces model weights that transfer readily to emulsion systems at large, while completely circumventing any need for manual feature extraction. In flow cell experiments comprising more than 10,000 mono- or polydisperse droplets, the models show statistical agreement that matches or exceeds classical implementations of the Hough transform or widely utilized ImageJ plugins. In more complex chip architectures that simulate porous media, the produced image data typically requires heavy pre-processing to extract valid data, whereas the models were able to handle raw input and produce size distributions with accuracy of ± at intermediate magnifications. This data-harvesting fidelity extends to foreign datasets not included in the training, such as micrograph observations of various emulsified systems. Implementing these neural networks as the sole feature extraction tools in these microfluidic systems not only makes the data pipelining more efficient but opens the door for live detection and the development of autonomous microfluidic experimental platforms, owing to inference rates greater than 100 frames per second.
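The abstract describes turning detector output into droplet size distributions. As an illustration only (the paper's own post-processing is not given here), the sketch below assumes detections arrive as pixel-space bounding boxes, approximates each near-circular droplet's diameter as the mean of its box width and height, converts to micrometres with an assumed calibration factor, and summarizes the population by mean diameter and coefficient of variation (CV), a common monodispersity measure. All function names and the `um_per_px` calibration are hypothetical.

```python
import numpy as np

def droplet_diameters(boxes, um_per_px):
    """Convert detector bounding boxes (x1, y1, x2, y2, in pixels) to droplet
    diameters in micrometres. Assumes near-circular droplets, so the diameter
    is approximated as the mean of each box's width and height."""
    boxes = np.asarray(boxes, dtype=float)
    widths = boxes[:, 2] - boxes[:, 0]
    heights = boxes[:, 3] - boxes[:, 1]
    return 0.5 * (widths + heights) * um_per_px

def size_distribution(diameters):
    """Summarize a droplet population: mean diameter (um) and coefficient of
    variation (%). A CV below roughly 5% is often taken as monodisperse."""
    d = np.asarray(diameters, dtype=float)
    mean = d.mean()
    cv = d.std(ddof=1) / mean * 100.0  # sample standard deviation
    return mean, cv
```

For example, three boxes of 10, 12, and 11 pixels on a side at 2 um per pixel give diameters of 20, 24, and 22 um, a mean of 22 um, and a CV of about 9%, which this toy population would classify as polydisperse.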