Show simple item record

dc.contributor.author: Lange, Stanislav
dc.contributor.author: Linguaglossa, Leonardo
dc.contributor.author: Geissler, Stefan
dc.contributor.author: Rossi, Dario
dc.contributor.author: Zinner, Thomas Erich
dc.date.accessioned: 2020-01-22T07:12:07Z
dc.date.available: 2020-01-22T07:12:07Z
dc.date.created: 2019-12-20T15:08:24Z
dc.date.issued: 2019
dc.identifier.isbn: 978-1-7281-0515-4
dc.identifier.uri: http://hdl.handle.net/11250/2637348
dc.description.abstract: Network Functions Virtualization (NFV) is among the latest network revolutions, bringing flexibility and avoiding network ossification. At the same time, all-software NFV implementations on commodity hardware raise performance issues with respect to ASIC solutions. To address these issues, numerous software acceleration frameworks for packet processing have appeared in the last few years. Common among these frameworks is the use of batching techniques. In this context, packets are processed in groups as opposed to individually, which is required at high speed to minimize the framework overhead, reduce interrupt pressure, and leverage instruction-level cache hits. Whereas several system implementations have been proposed and experimentally benchmarked, the scientific community has so far made only limited attempts to model the system dynamics of modern NFV routers exploiting batching acceleration. In this paper, we fill this gap by proposing a simple generic model for such batching-based mechanisms, which allows a very detailed prediction of highly relevant performance indicators. These include the distribution of the processed batch size as well as queue size, which can be used to identify loss-less operational regimes or quantify the packet loss probability in high-load scenarios. We contrast the model prediction with experimental results gathered in a high-speed testbed including an NFV router, showing that the model not only correctly captures system performance under simple conditions, but also in more realistic scenarios in which traffic is processed by a mixture of functions.
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.ispartof: Conference on Computer Communications, INFOCOM 2019
dc.title: Discrete-Time Modeling of NFV Accelerators that Exploit Batched Processing
dc.type: Chapter
dc.description.version: acceptedVersion
dc.source.pagenumber: 64-72
dc.identifier.doi: 10.1109/INFOCOM.2019.8737428
dc.identifier.cristin: 1763465
dc.description.localcode: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
cristin.unitcode: 194,63,30,0
cristin.unitname: Institutt for informasjonssikkerhet og kommunikasjonsteknologi
cristin.ispublished: true
cristin.fulltext: preprint
cristin.qualitycode: 1
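
The abstract above describes a discrete-time model of batching-based packet processing that predicts the distribution of the processed batch size, the queue size distribution, and the packet loss probability under high load. The paper's analytical model is not reproduced in this record; the snippet below is only a minimal, hypothetical Python simulation sketch of a batch-serving queue with a finite buffer, meant to illustrate those three quantities. The Poisson arrival process, maximum batch size, and buffer size are illustrative assumptions, not parameters taken from the paper.

```python
import math
import random
from collections import Counter


def poisson(rng, lam):
    """Draw a Poisson(lam) sample with Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1


def simulate_batch_queue(slots=100_000, lam=6.0, max_batch=8,
                         buffer_size=64, seed=1):
    """Discrete-time simulation of a batch-serving, finite-buffer queue.

    Per time slot: Poisson(lam) packets arrive (excess over the buffer is
    dropped), then the server removes up to `max_batch` packets in one batch.
    Returns empirical histograms of queue size and served batch size, plus
    the overall packet loss probability. All parameters are illustrative.
    """
    rng = random.Random(seed)
    queue = 0
    arrived = dropped = 0
    queue_hist, batch_hist = Counter(), Counter()

    for _ in range(slots):
        # Arrivals: packets that do not fit in the finite buffer are lost.
        a = poisson(rng, lam)
        arrived += a
        admitted = min(a, buffer_size - queue)
        dropped += a - admitted
        queue += admitted

        # Service: the framework pulls at most `max_batch` packets per slot.
        batch = min(queue, max_batch)
        batch_hist[batch] += 1
        queue -= batch

        # Sample the queue length at the slot boundary.
        queue_hist[queue] += 1

    loss_prob = dropped / arrived if arrived else 0.0
    return queue_hist, batch_hist, loss_prob


if __name__ == "__main__":
    queue_hist, batch_hist, loss_prob = simulate_batch_queue()
    print(f"packet loss probability ~ {loss_prob:.4f}")
    total = sum(batch_hist.values())
    for b in sorted(batch_hist):
        print(f"P(batch size = {b}) ~ {batch_hist[b] / total:.3f}")
```

With the stable load chosen here (mean arrival rate 6 packets per slot against a maximum batch of 8), the printed batch-size distribution concentrates below the maximum and the loss probability stays small; raising `lam` toward or beyond `max_batch` pushes the system into the high-load regime where losses become significant, which is the kind of behavior the paper's model is designed to quantify.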

