{{title|SPEC CPU2006}}
'''SPEC CPU2006''' is an industry-standardized CPU benchmarking suite introduced by [[SPEC]] in 2006 as a replacement for the previous [[CPU2000]] suite. CPU2006 was officially released on August 24, 2006.

== Overview ==
Introduced in 2006, CPU2006 consists of 29 benchmarks organized into four categories:

<table class="wikitable" style="text-align: center;">
<tr><th></th><th>Integer</th><th>Floating Point</th></tr>
<tr><th rowspan="2">Latency</th><td>SPECint2006<br>SPECint_base2006</td><td>SPECfp2006<br>SPECfp_base2006</td></tr>
<tr><td colspan="2"><code>= time_on_reference_machine / time_on_SUT</code></td></tr>
<tr><th rowspan="2">Throughput</th><td>SPECint_rate2006<br>SPECint_rate_base2006</td><td>SPECfp_rate2006<br>SPECfp_rate_base2006</td></tr>
<tr><td colspan="2"><code>= number_of_copies * (time_on_reference_machine / time_on_SUT)</code></td></tr>
</table>

* '''Latency''' ("speed"; '''SPEC*2006''') - the time it takes to complete a single task. The shorter the time, the faster the execution and thus the higher the score.
* '''Throughput''' ("rate"; '''SPEC*_rate*2006''') - the amount of work that can be done per unit of time. The more work done in a given time, the higher the throughput and thus the higher the score. The tester chooses how many concurrent copies to run (ideally the highest sustainable number).

Regardless of the metric, higher performance yields a higher score.
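The two score formulas in the table above can be sketched in Python. The numbers below are made up for illustration only; they are not taken from any real SPEC submission:

```python
# Illustrative sketch of the two CPU2006 score formulas.
# "SUT" = system under test; times are wall-clock seconds.

def speed_score(ref_seconds: float, sut_seconds: float) -> float:
    """Latency ("speed") metric: reference-machine time divided by SUT time."""
    return ref_seconds / sut_seconds

def rate_score(copies: int, ref_seconds: float, sut_seconds: float) -> float:
    """Throughput ("rate") metric: number of copies times the speed ratio."""
    return copies * (ref_seconds / sut_seconds)

# A SUT that finishes a workload in half the reference time scores 2.0 ...
print(speed_score(1000.0, 500.0))    # 2.0
# ... and 8 concurrent copies at that same ratio score 16.0.
print(rate_score(8, 1000.0, 500.0))  # 16.0
```

Note how both formulas are dimensionless ratios, which is why a slower reference time or a faster SUT both raise the score.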

=== Compilation ===
Because the test suite must be compiled on the tester's machine, the way compilation is conducted becomes a point of dispute (e.g., no optimization? default optimization? higher optimization?). CPU2006 defines two compilation possibilities:

* '''base''' metric (''_base'') - this metric is required for all reports. The base metric requires that all modules in the suite be compiled using the same flags, in the same order, for each language.
* '''peak''' metric (''no suffix'') - this metric is optional. The peak metric provides flexibility for testers who want to extract additional performance through more aggressive compiler optimization, allowing different compiler options per benchmark.

Note that ''base'' is a subset of the ''peak'' metric. That is, all ''base'' results are valid ''peak'' results, but not all ''peak'' results are valid ''base'' results.
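The base/peak distinction shows up in the tester's build configuration. The fragment below is a hypothetical sketch of how a SPEC-style config file might express it: the compiler names, flags, and section syntax are illustrative assumptions, not copied from any published result:

```ini
; Hypothetical sketch: one set of flags for every base build ...
default=base=default=default:
CC        = gcc
COPTIMIZE = -O2

; ... while a peak section may override flags for a single benchmark.
401.bzip2=peak=default=default:
COPTIMIZE = -O3 -funroll-loops
```

The key idea is that the base section applies uniformly to all benchmarks of a language, while peak sections may tune each benchmark individually.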

=== Reference Machine ===
As mentioned above, the performance is normalized using a reference machine. The reference machine is:

* '''Vendor:''' [[Sun Microsystems]]
* '''Machine:''' Ultra Enterprise 2
** '''CPU:''' {{sun|UltraSPARC II}}
** '''Frequency:''' 296 MHz
* '''Introduction:''' 1997
Revision as of 15:37, 23 July 2017