Show simple item record

dc.contributor.author	Fraser, Nicholas J.
dc.contributor.author	Umuroglu, Yaman
dc.contributor.author	Gambardella, Giulio
dc.contributor.author	Blott, Michaela
dc.contributor.author	Leong, Philip W.
dc.contributor.author	Vissers, Kees
dc.contributor.author	Jahre, Magnus
dc.date.accessioned	2018-02-01T13:35:52Z
dc.date.available	2018-02-01T13:35:52Z
dc.date.created	2017-03-21T15:41:57Z
dc.date.issued	2017
dc.identifier.isbn	978-1-4503-4877-5
dc.identifier.uri	http://hdl.handle.net/11250/2481250
dc.description.abstract	Binarized neural networks (BNNs) are gaining interest in the deep learning community due to their significantly lower computational and memory cost. They are particularly well suited to reconfigurable logic devices, which contain an abundance of fine-grained compute resources and can result in smaller, lower power implementations, or conversely in higher classification rates. Towards this end, the FINN framework was recently proposed for building fast and flexible field programmable gate array (FPGA) accelerators for BNNs. FINN utilized a novel set of optimizations that enable efficient mapping of BNNs to hardware and implemented fully connected, non-padded convolutional and pooling layers, with per-layer compute resources being tailored to user-provided throughput requirements. However, FINN was not evaluated on larger topologies due to the size of the chosen FPGA, and exhibited decreased accuracy due to lack of padding. In this paper, we improve upon FINN to show how padding can be employed on BNNs while still maintaining a 1-bit datapath and high accuracy. Based on this technique, we demonstrate numerous experiments to illustrate flexibility and scalability of the approach. In particular, we show that a large BNN requiring 1.2 billion operations per frame running on an ADM-PCIE-8K5 platform can classify images at 12 kFPS with 671 μs latency while drawing less than 41 W board power and classifying CIFAR-10 images at 88.7% accuracy. Our implementation of this network achieves 14.8 trillion operations per second. We believe this is the fastest classification rate reported to date on this benchmark at this level of accuracy.	nb_NO
dc.language.iso	eng	nb_NO
dc.publisher	Association for Computing Machinery (ACM)	nb_NO
dc.relation.ispartof	Proceedings of the 8th Workshop and 6th Workshop on Parallel Programming and Run-Time Management Techniques for Many-core Architectures and Design Tools and Architectures for Multicore Embedded Computing Platforms
dc.relation.uri	https://www.researchgate.net/profile/Yaman_Umuroglu/publication/311791831_Scaling_Binarized_Neural_Networks_on_Reconfigurable_Logic/links/585aa85c08aeffd7c4fe9369.pdf
dc.title	Scaling Binarized Neural Networks on Reconfigurable Logic	nb_NO
dc.type	Chapter	nb_NO
dc.description.version	acceptedVersion	nb_NO
dc.source.pagenumber	25-30	nb_NO
dc.identifier.doi	10.1145/3029580.3029586
dc.identifier.cristin	1460138
dc.description.localcode	© ACM, 2017. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in the Proceedings of the 8th Workshop and 6th Workshop on Parallel Programming and Run-Time Management Techniques for Many-core Architectures and Design Tools and Architectures for Multicore Embedded Computing Platforms, https://dl.acm.org/citation.cfm?id=3029586	nb_NO
cristin.unitcode	194,63,10,0
cristin.unitname	Institutt for datateknikk og informasjonsvitenskap
cristin.ispublished	true
cristin.fulltext	postprint
cristin.qualitycode	1
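The abstract above hinges on BNN arithmetic fitting fine-grained FPGA logic: a dot product of {-1, +1} vectors reduces to an XNOR followed by a popcount, since XNOR marks agreeing bit positions and the dot product equals agreements minus disagreements. A minimal NumPy sketch of this identity (illustrative only; the names `binarize` and `bnn_dot` are mine, and this is not the paper's hardware implementation):

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} by sign (ties at 0 go to +1)."""
    return np.where(np.asarray(x) >= 0, 1, -1).astype(np.int8)

def bnn_dot(a_bits, w_bits):
    """Dot product of two {-1, +1} vectors stored as 0/1 bits (1 -> +1, 0 -> -1).

    With n elements: dot = (#agreements) - (#disagreements)
                         = 2 * popcount(XNOR(a, w)) - n.
    """
    a_bits = np.asarray(a_bits, dtype=np.uint8)
    w_bits = np.asarray(w_bits, dtype=np.uint8)
    n = a_bits.size
    agree = int(np.count_nonzero(a_bits == w_bits))  # popcount of the XNOR result
    return 2 * agree - n

# Cross-check against an ordinary dot product in the {-1, +1} domain.
a = np.array([0.3, -1.2, 0.7, 2.1])
w = np.array([0.5, 0.4, -0.9, 1.0])
a_pm, w_pm = binarize(a), binarize(w)
a_bits = (a_pm > 0).astype(np.uint8)
w_bits = (w_pm > 0).astype(np.uint8)
assert bnn_dot(a_bits, w_bits) == int(np.dot(a_pm, w_pm))
```

Because the datapath then carries only single bits, multiply-accumulates become LUT-friendly logic, which is what lets per-layer compute resources be scaled to a throughput target.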

