dc.description.abstract | The performance gap between processors and main memory has been growing over
the last decades. Fast memory structures known as caches were introduced to mitigate
some of the effects of this gap. After processor manufacturers reached the
limits of single-core processor performance in the early 2000s, multicore processors
have become common. Multicore processors commonly share cache space between
cores, and algorithms that manage access to shared cache structures have become
an important research topic. Many researchers have presented algorithms that are
supposed to improve the performance of multicore processors by modifying cache
policies. In this thesis, we present and evaluate several recent and important works
in the cache management field. We present a simulation framework for evaluation
of various cache management algorithms, based on the Sniper simulation system.
Several of the presented algorithms are implemented: Thread-Aware Dynamic Insertion
Policy (TADIP), Dynamic Re-Reference Interval Prediction (DRRIP), Utility
Cache Partitioning (UCP), Promotion/Insertion Pseudo-Partitioning (PIPP), and
Probabilistic Shared Cache Management (PriSM). The implemented algorithms are
evaluated against the commonly used Least Recently Used (LRU) replacement policy
and each other. In addition, we perform five sensitivity analysis experiments,
exploring algorithm sensitivity to changes in the simulated architecture. In total,
data from almost 9,000 simulation runs is used in our evaluation.
Our results suggest that all implemented algorithms mostly perform as well
as or better than LRU in 4-core architectures. In 8- and 16-core architectures,
some of the algorithms, especially PIPP, perform worse than LRU. Throughout all
our experiments, UCP, the oldest of the evaluated alternatives to LRU, is the best
performer, with an average performance increase of about 5%. We also show that
UCP's performance improvement grows to more than 20% when available cache and
memory resources are reduced. | |