
dc.contributor.author: Rutkowski, Gregory Philip
dc.contributor.author: Azizov, Ilgar
dc.contributor.author: Unmann, Evan
dc.contributor.author: Dudek, Marcin
dc.contributor.author: Grimes, Brian Arthur
dc.date.accessioned: 2023-05-22T12:28:07Z
dc.date.available: 2023-05-22T12:28:07Z
dc.date.created: 2021-12-15T10:38:43Z
dc.date.issued: 2022
dc.identifier.issn: 2666-8270
dc.identifier.uri: https://hdl.handle.net/11250/3068536
dc.description.abstract: As the complexity of microfluidic experiments and the associated image-data volumes scale, traditional feature-extraction approaches begin to struggle with both detection and analysis-pipeline throughput. Deep neural networks trained to detect specific objects are rapidly emerging as data-gathering tools that can match or outperform the conventional analysis methods used in microfluidic emulsion science. We demonstrate that two types of neural network, You Only Look Once (YOLOv3, YOLOv5) and Faster R-CNN, can be trained on a dataset comprising droplets generated across several microfluidic experiments and systems. The breadth of droplets used for training and validation produces model weights that transfer readily to emulsion systems at large, while completely circumventing any need for manual feature extraction. In flow-cell experiments comprising more than 10,000 mono- or polydisperse droplets, the models show statistical parity with, or superiority to, classical implementations of the Hough transform and widely used ImageJ plugins. In more complex chip architectures that simulate porous media, the produced image data typically requires heavy pre-processing before valid data can be extracted; here the models were able to handle raw input and produce size distributions with accuracy of ± for intermediate magnifications. This data-harvesting fidelity extends to foreign datasets not included in the training, such as micrographs of various emulsified systems. Implementing these neural networks as the sole feature-extraction tools in these microfluidic systems not only makes the data pipeline more efficient but also opens the door to live detection and the development of autonomous microfluidic experimental platforms, owing to inference rates of more than 100 frames per second. [en_US]
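The abstract describes converting neural-network detections into droplet size distributions. As a minimal illustrative sketch (not the authors' code): given YOLO-style bounding boxes, one can estimate an equivalent diameter per droplet and summarize the distribution. The box coordinates, the `um_per_px` scale, and the helper names are all hypothetical.

```python
# Illustrative sketch, not from the paper: turn YOLO-style [x1, y1, x2, y2]
# bounding boxes into an approximate droplet size distribution, assuming
# near-circular droplet outlines (boxes roughly square).
import statistics


def droplet_diameters(boxes, um_per_px=1.0):
    """Convert pixel-coordinate boxes to equivalent diameters (microns).

    The diameter is taken as the mean of box width and height, a simple
    proxy for the outline of a near-spherical droplet.
    """
    diams = []
    for x1, y1, x2, y2 in boxes:
        w, h = x2 - x1, y2 - y1
        diams.append(0.5 * (w + h) * um_per_px)
    return diams


def size_stats(diams):
    """Mean diameter and coefficient of variation (CV), a common
    monodispersity metric in droplet microfluidics."""
    mean = statistics.fmean(diams)
    cv = statistics.stdev(diams) / mean if len(diams) > 1 else 0.0
    return mean, cv


# Hypothetical detections from a single frame (pixel coordinates):
boxes = [(10, 12, 50, 52), (60, 60, 102, 100), (5, 70, 44, 110)]
diams = droplet_diameters(boxes, um_per_px=0.8)
mean_um, cv = size_stats(diams)
```

At the inference rates quoted in the abstract, a per-frame pass like this could run in the acquisition loop itself, which is what enables the live-detection use case the authors mention.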
dc.language.iso: eng [en_US]
dc.publisher: Elsevier B.V. [en_US]
dc.rights: Navngivelse 4.0 Internasjonal (Attribution 4.0 International)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Microfluidic droplet detection via region-based and single-pass convolutional neural networks with comparison to conventional image analysis methodologies. [en_US]
dc.title.alternative: Microfluidic droplet detection via region-based and single-pass convolutional neural networks with comparison to conventional image analysis methodologies. [en_US]
dc.type: Peer reviewed [en_US]
dc.type: Journal article [en_US]
dc.description.version: publishedVersion [en_US]
dc.source.volume: 7 [en_US]
dc.source.journal: Machine Learning with Applications (MLWA) [en_US]
dc.identifier.doi: 10.1016/j.mlwa.2021.100222
dc.identifier.cristin: 1968705
dc.relation.project: Norges forskningsråd: 237893 [en_US]
dc.source.articlenumber: 100222 [en_US]
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1



Navngivelse 4.0 Internasjonal (Attribution 4.0 International)
Except where otherwise noted, this item's license is described as Navngivelse 4.0 Internasjonal (Attribution 4.0 International).