SPEC CPU2017 is an industry-standardized CPU benchmarking suite introduced by SPEC in 2017 as a replacement for the previous CPU2006 suite.

Overview

CPU2017 consists of 43 benchmarks organized into four categories:

              Integer               Floating Point        Score formula
  Latency     SPECspeed2017_int_*   SPECspeed2017_fp_*    time_on_reference_machine / time_on_SUT
  Throughput  SPECrate2017_int_*    SPECrate2017_fp_*     number_of_copies * (time_on_reference_machine / time_on_SUT)
  • Latency ("speed"; SPECspeed2017_*) - the time it takes to run one task at a time. The shorter the time, the faster the execution and thus the higher the score.
  • Throughput ("rate"; SPECrate2017_*) - the amount of work that can be done in a unit of time. The more work done in a given time, the higher the throughput and thus the higher the score. The tester chooses how many copies to run (ideally the highest sustainable number).

Regardless of the metric, higher performance yields a higher score.
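To make the two formulas concrete, here is a minimal Python sketch. The function names and timings are illustrative assumptions, not part of the suite or of any real run:

  def specspeed_ratio(time_on_reference_machine, time_on_sut):
      # Latency ("speed") score: reference time divided by SUT time.
      return time_on_reference_machine / time_on_sut

  def specrate_ratio(number_of_copies, time_on_reference_machine, time_on_sut):
      # Throughput ("rate") score: the same ratio, scaled by the copy count.
      return number_of_copies * (time_on_reference_machine / time_on_sut)

  # A system that finishes in a quarter of the reference time scores 4.0
  # on the speed metric:
  print(specspeed_ratio(1000.0, 250.0))    # 4.0
  # Eight copies, each taking half the reference time, score 16.0 on rate:
  print(specrate_ratio(8, 1000.0, 500.0))  # 16.0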

Compilation

Because the test suite must be compiled on the tester's machine, the way compilation is conducted becomes a point of dispute (e.g., no optimization? default optimization? higher optimization?). CPU2017 defines two compilation possibilities:

  • base metric (*_base) - this metric is required for all reports. The base metric requires that all modules in the suite be compiled using the same flags, in the same order, for each language.
  • peak metric (*_peak) - this metric is optional. The peak metric provides flexibility for testers who want to extract additional performance through higher compiler optimization, allowing different compiler options per benchmark.

Note that the base metric is a subset of the peak metric. That is, all base results are valid peak results, but not all peak results are valid base results.
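As a rough sketch of how the base rule constrains a build, the following Python snippet checks whether every module was compiled with identical flags. The benchmark names and flags are hypothetical, and the actual SPEC run rules are more detailed than this:

  def satisfies_base_rule(flags_by_module):
      # Base requires the same flags, in the same order, for every module
      # (per language), so all flag lists must be identical.
      flag_lists = list(flags_by_module.values())
      return all(flags == flag_lists[0] for flags in flag_lists)

  base_build = {"bench_a": ["-O2"], "bench_b": ["-O2"]}
  peak_build = {"bench_a": ["-O3", "-funroll-loops"],  # per-benchmark tuning
                "bench_b": ["-O2"]}

  print(satisfies_base_rule(base_build))  # True  -> usable as base (and peak)
  print(satisfies_base_rule(peak_build))  # False -> peak only

Because any build that satisfies the base rule is also an acceptable peak build, the True case above is exactly the sense in which base results are a subset of peak results.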

Reference Machine

As mentioned above, performance is normalized using a reference machine. The reference machine is:

  • Vendor: Sun Microsystems
  • Machine: Sun Fire V490
      • CPU: UltraSPARC-IV+
      • Frequency: 2,100 MHz
  • Introduction: 2006
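In other words, a system under test that takes exactly as long as this 2006-era machine on a benchmark scores 1.0, and one that finishes in half the time (say, 500 seconds against a 1,000-second reference time; the numbers are illustrative) scores 2.0.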