Measuring Program Similarity: Experiments with SPEC CPU Benchmark Suites

Aashish Phansalkar, Ajay Joshi, Lieven Eeckhout, and Lizy K. John. IEEE International Symposium on Performance Analysis of Systems and Software, 2005. ISPASS 2005., pages 10--20. (March 2005)
DOI: 10.1109/ISPASS.2005.1430555

Abstract

It is essential that a subset of benchmark programs used to evaluate an architectural enhancement is well distributed within the target workload space rather than clustered in specific areas. Past efforts for identifying subsets have primarily relied on microarchitecture-dependent metrics of program performance, such as cycles per instruction and cache miss rate. The shortcoming of this technique is that the results can be biased by the idiosyncrasies of the chosen configurations. The objective of this paper is to present a methodology for measuring the similarity of programs based on their inherent, microarchitecture-independent characteristics, which makes the results applicable to any microarchitecture. We apply our methodology to the SPEC CPU2000 benchmark suite and demonstrate that a subset of 8 programs can be used to effectively represent the entire suite. We validate the usefulness of this subset by using it to estimate the average IPC and L1 data cache miss rate of the entire suite. The average IPC of 8-way and 16-way issue superscalar processor configurations could be estimated with 3.9% and 4.4% error, respectively. This methodology is applicable not only for finding subsets of a benchmark suite, but also for identifying programs for a benchmark suite from a list of potential candidates. Studying the four generations of SPEC CPU benchmark suites, we find that, other than a dramatic increase in the dynamic instruction count and increasingly poor temporal data locality, the inherent program characteristics have more or less remained the same.
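
The subsetting idea in the abstract can be illustrated with a short sketch: cluster benchmarks on microarchitecture-independent features and keep the program nearest each cluster centroid as the representative. The Python below is a minimal sketch in that spirit, assuming a standardize + PCA + k-means pipeline and scikit-learn; the benchmark list, the random placeholder feature values, and the component and cluster counts are illustrative assumptions, not the paper's exact procedure or data.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows are benchmarks, columns are microarchitecture-independent
# characteristics (e.g. instruction mix, branch predictability,
# data locality); the values here are random placeholders.
benchmarks = ["gzip", "vpr", "gcc", "mcf", "crafty", "parser",
              "eon", "perlbmk", "gap", "vortex", "bzip2", "twolf"]
rng = np.random.default_rng(0)
features = rng.random((len(benchmarks), 6))

# Normalize so no single characteristic dominates, then reduce the
# feature space to its principal components.
X = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(features))

# Group similar programs; k sets the subset size (the paper arrives
# at 8 for the full SPEC CPU2000 suite).
k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Keep the benchmark closest to each cluster centroid as the representative.
subset = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
    subset.append(benchmarks[members[np.argmin(dists)]])
print("representative subset:", sorted(subset))

In the same spirit, suite-wide averages such as IPC or cache miss rate can then be estimated from the representatives alone, weighting each one by the size of its cluster.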

