dc.contributor.author | Lange, Stanislav | |
dc.contributor.author | Linguaglossa, Leonardo | |
dc.contributor.author | Geissler, Stefan | |
dc.contributor.author | Rossi, Dario | |
dc.contributor.author | Zinner, Thomas Erich | |
dc.date.accessioned | 2020-01-22T07:12:07Z | |
dc.date.available | 2020-01-22T07:12:07Z | |
dc.date.created | 2019-12-20T15:08:24Z | |
dc.date.issued | 2019 | |
dc.identifier.isbn | 978-1-7281-0515-4 | |
dc.identifier.uri | http://hdl.handle.net/11250/2637348 | |
dc.description.abstract | Network Functions Virtualization (NFV) is among the latest network revolutions, bringing flexibility and avoiding network ossification. At the same time, all-software NFV implementations on commodity hardware raise performance issues compared to ASIC solutions. To address these issues, numerous software acceleration frameworks for packet processing have appeared in the last few years. Common among these frameworks is the use of batching techniques, in which packets are processed in groups as opposed to individually. This is required at high speed to minimize the framework overhead, reduce interrupt pressure, and leverage instruction-level cache hits. Whereas several system implementations have been proposed and experimentally benchmarked, the scientific community has so far made only limited attempts to model the system dynamics of modern NFV routers exploiting batching acceleration. In this paper, we fill this gap by proposing a simple generic model for such batching-based mechanisms, which enables detailed prediction of highly relevant performance indicators. These include the distributions of the processed batch size and the queue size, which can be used to identify lossless operational regimes or to quantify the packet loss probability in high-load scenarios. We contrast the model predictions with experimental results gathered in a high-speed testbed including an NFV router, showing that the model correctly captures system performance not only under simple conditions, but also in more realistic scenarios in which traffic is processed by a mixture of functions. | nb_NO |
dc.language.iso | eng | nb_NO |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | nb_NO |
dc.relation.ispartof | Conference on Computer Communications, INFOCOM 2019 | |
dc.title | Discrete-Time Modeling of NFV Accelerators that Exploit Batched Processing | nb_NO |
dc.type | Chapter | nb_NO |
dc.description.version | acceptedVersion | nb_NO |
dc.source.pagenumber | 64-72 | nb_NO |
dc.identifier.doi | 10.1109/INFOCOM.2019.8737428 | |
dc.identifier.cristin | 1763465 | |
dc.description.localcode | © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | nb_NO |
cristin.unitcode | 194,63,30,0 | |
cristin.unitname | Institutt for informasjonssikkerhet og kommunikasjonsteknologi | |
cristin.ispublished | true | |
cristin.fulltext | preprint | |
cristin.qualitycode | 1 | |