
What criteria were used to select the benchmarks?

For SPEC MPI2007, SPEC considered the following criteria in the process of selecting applications to use as benchmarks:

• well-known applications or application areas
• available workloads that represent real problems
• portability to a variety of CPU architectures (32- and 64-bit, including AMD64, Intel IA32, Itanium, PA-RISC, PowerPC, SPARC, etc.)
• portability to various operating systems, particularly UNIX and Windows
• nearly all of the run time is spent compute bound
• little time is spent in I/O and system services (except MPI)
• minimal benchmarking time should be spent processing code not provided by SPEC (other than the MPI library), e.g. in math libraries or the operating system
• benchmarks should run in about 1 GB of RAM per rank without swapping or paging
• reasonably smooth scaling properties
• guaranteed to work with 4 or more ranks
• not "embarrassingly parallel", i.e. with no blocking communication between subcomputations

Note that not every benchmark satisfies every criterion; 122.tachyon, for example, is "embarrassingly parallel."


A13: In the process of selecting applications to use as benchmarks, SPEC considered the following criteria:

• portability to all SPEC hardware architectures (64-bit, including Alpha, Intel Architecture, MIPS, SPARC, etc.)
• portability to various operating systems
• benchmarks should produce scalable parallel performance over several architectures
• benchmarks should not include measurable I/O
• benchmarks should not include networking or graphics
• benchmarks should run in 2 GB of RAM for SPEC OMPM2001 and 8 GB of RAM for SPEC OMPL2001 without swapping for a single CPU
• no more than five percent of benchmarking time should be spent processing code not provided by SPEC
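"Scalable parallel performance" is commonly quantified as parallel speedup and efficiency. A minimal sketch (illustrative formulas, not SPEC's actual scoring):

```python
# Sketch: speedup and parallel efficiency, the usual measures behind a
# "scalable parallel performance" requirement. Illustrative only.

def speedup(t_serial, t_parallel):
    """How many times faster the parallel run is than the serial run."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_threads):
    """Fraction of ideal linear speedup achieved on n_threads threads."""
    return speedup(t_serial, t_parallel) / n_threads

# Example: 100 s serially, 16 s on 8 threads ->
# 6.25x speedup at ~78% parallel efficiency.
print(speedup(100.0, 16.0))        # 6.25
print(efficiency(100.0, 16.0, 8))  # 0.78125
```

A benchmark whose efficiency collapses as threads are added would fail this criterion even if its absolute run time is reasonable.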


A18: In the process of selecting applications to use as benchmarks, SPEC considered the following criteria:

• portability to all SPEC hardware architectures (32- and 64-bit, including Alpha, Intel Architecture, PA-RISC, Rxx00, SPARC, etc.)
• portability to various operating systems, particularly UNIX and NT
• benchmarks should not include measurable I/O
• benchmarks should not include networking or graphics
• benchmarks should run in 256 MB of RAM without swapping (SPEC is assuming this will be a minimal memory requirement for the life of CPU2000; the emphasis is on compute-intensive performance, not disk activity)
• no more than five percent of benchmarking time should be spent processing code not provided by SPEC



Experts123