{{title|SPEC CPU2017}}
 
'''SPEC CPU2017''' is an industry-standardized CPU benchmarking suite introduced by [[SPEC]] in 2017 as a replacement for the previous {{\|CPU2006}} suite. CPU2017 was officially released on June 20, 2017.
  
 
== Overview ==
Introduced in 2017, CPU2017 consists of 43 benchmarks organized into four categories:
  
 
<table class="wikitable" style="text-align: center;">
<tr><th></th><th>{{\|Integer}}</th><th>{{\|Floating Point}}</th></tr>
<tr><th rowspan="2">Latency</th><td>SPECspeed2017_int_base<br>SPECspeed2017_int_peak</td><td>SPECspeed2017_fp_base<br>SPECspeed2017_fp_peak</td></tr>
<tr><td colspan="2"><code>= time_on_reference_machine / time_on_SUT</code></td></tr>
<tr><th rowspan="2">Throughput</th><td>SPECrate2017_int_base<br>SPECrate2017_int_peak</td><td>SPECrate2017_fp_base<br>SPECrate2017_fp_peak</td></tr>
<tr><td colspan="2"><code>= number_of_copies * (time_on_reference_machine / time_on_SUT)</code></td></tr>
</table>

* '''Latency''' ("speed"; SPECspeed2017_*) - the time it takes to run one task at a time. The shorter the time, the faster the execution and thus the higher the score.
* '''Throughput''' ("rate"; SPECrate2017_*) - the amount of work that can be done in a unit of time. The more work done in a given amount of time, the higher the throughput and thus the higher the score. The tester chooses how many concurrent copies to run (ideally the highest sustainable number).

Regardless of the metric, higher performance yields a higher score.
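The two scoring formulas can be made concrete with a small example. The following sketch (Python, with made-up timings, copy count, and function names; none of this is part of the SPEC tool set) computes a single benchmark's speed and rate ratios exactly as defined in the table above.

<syntaxhighlight lang="python">
def speed_ratio(time_on_reference_machine, time_on_sut):
    """SPECspeed-style ratio for one benchmark: reference time / SUT time."""
    return time_on_reference_machine / time_on_sut


def rate_ratio(number_of_copies, time_on_reference_machine, time_on_sut):
    """SPECrate-style ratio: number of copies * (reference time / SUT time)."""
    return number_of_copies * (time_on_reference_machine / time_on_sut)


# Hypothetical numbers: a workload that takes 1000 s on the reference machine
# and 125 s on the system under test (SUT).
print(speed_ratio(1000.0, 125.0))      # 8.0   -> the SUT is 8x faster
print(rate_ratio(64, 1000.0, 125.0))   # 512.0 -> with 64 concurrent copies
</syntaxhighlight>

The published SPECspeed2017 and SPECrate2017 scores aggregate per-benchmark ratios of this form across the suite (SPEC reports a geometric mean), so a higher ratio on each benchmark translates into a higher overall score.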
  
 
== Compilation ==
Because the test suite must be compiled on the tester's machine, the way compilation is conducted becomes a point of dispute (e.g., no optimization? default optimization? higher optimization?). CPU2017 defines two compilation possibilities:

* '''base''' metric (<code>*_base</code>) - this metric is required for all reports. The base metric requires that all modules in the suite be compiled using the same flags and in the same order for each language.
* '''peak''' metric (<code>*_peak</code>) - this metric is optional. The peak metric provides flexibility for testers who want to extract additional performance through higher compiler optimization by using additional or different compiler options.

Note that the ''base'' metric is a subset of the ''peak'' metric. That is, all ''base'' results are valid ''peak'' results, but not all ''peak'' results are valid ''base'' results.
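To illustrate the distinction, here is a minimal sketch (hypothetical compiler flags and module names; real runs are configured through SPEC config files rather than Python) of the ''base'' rule that every module of a given language is built with an identical flag list, whereas ''peak'' may tune flags per benchmark.

<syntaxhighlight lang="python">
# Illustration only: hypothetical flags and modules, not an actual SPEC config.

# base: one flag list per language, applied to every module in the same order.
BASE_FLAGS = {
    "C":       ["-O2"],
    "C++":     ["-O2"],
    "Fortran": ["-O2"],
}

# peak: flags may additionally be tuned per individual benchmark.
PEAK_FLAGS = {
    "some_int_benchmark": ["-O3", "-flto"],
    "some_fp_benchmark":  ["-O3", "-funroll-loops"],
}


def is_valid_base_build(flags_per_module, language_of_module):
    """Check the base rule: every module of a given language is compiled with
    the identical flag list (same options, in the same order)."""
    return all(flags == BASE_FLAGS[language_of_module[module]]
               for module, flags in flags_per_module.items())


# Both C modules use the same flag list, so this build qualifies for base.
print(is_valid_base_build(
    {"module_a.c": ["-O2"], "module_b.c": ["-O2"]},
    {"module_a.c": "C", "module_b.c": "C"},
))  # True
</syntaxhighlight>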

=== Reference Machine ===
As mentioned above, performance is normalized using a reference machine. The reference machine is:

* '''Vendor:''' [[Sun Microsystems]]
* '''Machine:''' Sun Fire V490
** '''CPU:''' {{sun|UltraSPARC-IV+}}
** '''Frequency:''' 2,100 MHz
* '''Introduction:''' 2006

== External links ==
* [https://www.spec.org/ SPEC]
* [https://www.spec.org/cpu2017/ SPEC CPU2017]