SPEC Benchmarks
SPECjEnterprise2010 (Source: spec.org)
The SPECjEnterprise2010 benchmark is a full-system benchmark that allows performance measurement and characterization of Java EE 5.0 servers and supporting infrastructure such as the JVM, database, CPU, disk, and server hardware.
The workload consists of an end-to-end web-based order processing domain, an RMI- and Web Services-driven manufacturing domain, and a supply chain model utilizing document-based Web Services. The application is a collection of Java classes, Java Servlets, JavaServer Pages, Enterprise JavaBeans, Java Persistence entities (POJOs), and Message Driven Beans.
SPECjEnterprise2010 is the third generation of the SPEC organization’s J2EE end-to-end industry-standard benchmark application. The benchmark has been redesigned and developed to cover the Java EE 5.0 specification’s significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real-world workload that drives the application server’s implementation of the Java EE specification to its full potential, placing maximum stress on the underlying hardware and software systems.
SPECjms2007 (Source: spec.org)
The SPECjms2007 benchmark is the first industry-standard benchmark for evaluating the performance of enterprise message-oriented middleware servers based on JMS (Java Message Service). It provides a standard workload and performance metrics for competitive product comparisons, as well as a framework for in-depth performance analysis of enterprise messaging platforms.
The benchmark measures the end-to-end performance of all components that make up the application environment, including hardware, JMS server software, JVM software, database software if used for message persistence, and the system network. The benchmark provides two metrics, SPECjms2007@Horizontal for the horizontal topology and SPECjms2007@Vertical for the vertical topology.
SPECjvm2008 (Source: spec.org)
SPECjvm2008 (Java Virtual Machine Benchmark) is a benchmark suite for measuring the performance of a Java Runtime Environment (JRE), containing several real-life applications and benchmarks focusing on core Java functionality. The suite focuses on the performance of the JRE executing a single application; it reflects the performance of the hardware processor and memory subsystem, but has low dependence on file I/O and includes no network I/O across machines. The SPECjvm2008 workload mimics a variety of common general-purpose application computations. These characteristics reflect the intent that the benchmark be applicable to measuring basic Java performance on a wide variety of both client and server systems.
SPEC also considers the user experience of Java important, so the suite includes startup benchmarks and has a required run category called base, which must be run without any tuning of the JVM so that it reflects out-of-the-box performance.
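The throughput benchmarks in the suite run a workload for a fixed measurement interval and report operations completed per unit of time. A minimal sketch of that timed-interval idea follows; the workload and interval here are invented for illustration and are not part of the actual SPECjvm2008 harness:

```java
// Minimal sketch of fixed-interval throughput measurement, in the spirit of
// SPECjvm2008's ops-per-interval reporting. The workload (summing an array
// of integers) is a placeholder, not an actual SPECjvm2008 benchmark.
public class ThroughputSketch {

    // Placeholder workload: sum the integers 0..99999.
    public static long workload() {
        long sum = 0;
        for (int i = 0; i < 100_000; i++) sum += i;
        return sum;
    }

    // Run the workload repeatedly for intervalMillis and report ops/second.
    public static double opsPerSecond(long intervalMillis) {
        long ops = 0;
        long end = System.currentTimeMillis() + intervalMillis;
        while (System.currentTimeMillis() < end) {
            workload();
            ops++;
        }
        return ops / (intervalMillis / 1000.0);
    }

    public static void main(String[] args) {
        System.out.println("ops/s: " + opsPerSecond(1000));
    }
}
```

A real harness additionally runs warmup iterations so the JIT compiler has stabilized before measurement begins; this sketch omits that step.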
SPEC SFS 2014 (Source: spec.org)
The SPEC SFS 2014 benchmark is the latest version of the Standard Performance Evaluation Corporation benchmark suite measuring file server throughput and response time, providing a standardized method for comparing performance across different vendor platforms. The suite is the successor to the SFS 2008 benchmark, but has been completely rewritten to shift focus from measuring performance at the component level to measuring an end-to-end storage solution for specific applications.
SPECpower_ssj2008 (Source: spec.org)
SPECpower_ssj2008 is the first industry-standard benchmark that evaluates the power and performance characteristics of single-node and multi-node servers.
The drive to create the power and performance benchmark came from the recognition that the IT industry, computer manufacturers, and governments are increasingly concerned with the energy use of servers. This benchmark provides a means to measure power (at the AC input) in conjunction with a performance metric. This helps IT managers to consider power characteristics along with other selection criteria to increase the efficiency of data centers.
The workload exercises the CPUs, caches, memory hierarchy and the scalability of shared memory processors (SMPs) as well as the implementations of the JVM (Java Virtual Machine), JIT (Just-In-Time) compiler, garbage collection, threads and some aspects of the operating system. The benchmark runs on a wide variety of operating systems and hardware architectures and should not require extensive client or storage infrastructure.
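The benchmark’s overall metric, “overall ssj_ops/watt”, is the sum of the ssj_ops scores across the target load levels divided by the sum of the average power (in watts) across those levels plus active idle. A small sketch of that calculation; the figures below are invented for illustration (a real run uses ten graduated load levels plus active idle), not published results:

```java
// Sketch of the SPECpower_ssj2008 overall metric: sum of ssj_ops across the
// measured load levels divided by the sum of average power across those
// levels plus active idle. All figures here are invented.
public class SsjOpsPerWatt {

    public static double overallMetric(double[] ssjOps, double[] avgWatts) {
        double opsSum = 0, wattSum = 0;
        for (double ops : ssjOps) opsSum += ops;
        for (double w : avgWatts) wattSum += w; // includes active idle
        return opsSum / wattSum;
    }

    public static void main(String[] args) {
        // Invented load levels: 100% down to 20%, plus active idle (0 ops).
        double[] ops = {300000, 240000, 180000, 120000, 60000, 0};
        double[] watts = {220, 200, 180, 160, 140, 90};
        System.out.printf("overall ssj_ops/watt = %.1f%n",
                overallMetric(ops, watts)); // prints overall ssj_ops/watt = 909.1
    }
}
```

Note that active idle contributes power but no operations, which is why idle efficiency matters to the final score.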
SPEC VIRT (Source: spec.org)
The SPEC VIRT benchmark suites are used to measure performance of virtualized platforms. These benchmark suites are targeted for use by hardware vendors, virtualization software vendors, application software vendors, datacenter managers, and academic researchers.
The SPEC virt_sc 2013 benchmark addresses performance evaluation of datacenter servers used in virtualized server consolidation (sc). The benchmark measures the end-to-end performance of all system components, including the hardware, the virtualization platform, and the virtualized guest operating system and application software. It supports hardware virtualization, operating system virtualization, and hardware partitioning.
The benchmark utilizes several SPEC workloads representing applications that are common targets of virtualization and server consolidation. Each of these standard workloads was modified to match a typical server consolidation scenario in terms of CPU, memory, disk I/O, and network utilization; they are modified versions of the SPECweb 2005, SPECjAppServer 2004, SPECmail 2008, and SPEC INT 2006 benchmarks. The client-side SPEC virt_sc 2013 harness controls the workloads. Scaling is achieved by running additional sets of virtual machines, called “tiles”, until overall throughput reaches a peak, while all VMs must continue to meet the required quality-of-service (QoS) criteria.
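The tile-scaling procedure described above can be sketched as a simple loop: keep adding tiles while overall throughput improves, and stop at the peak. The throughput model below is invented purely for illustration (a real run measures actual workload throughput and also checks per-tile QoS):

```java
// Illustration of "tile" scaling in the spirit of SPEC virt_sc 2013: add
// sets of VMs (tiles) until overall throughput stops improving. The
// throughput model is hypothetical, chosen only so that a peak exists.
public class TileScalingSketch {

    // Hypothetical model: each tile adds load, but per-tile throughput
    // degrades as the host saturates, so total throughput eventually peaks.
    public static double throughputWithTiles(int tiles) {
        return 1000.0 * tiles - 30.0 * tiles * tiles;
    }

    // Add tiles one at a time; stop when throughput no longer improves.
    public static int peakTiles(int maxTiles) {
        double best = 0;
        int bestTiles = 0;
        for (int t = 1; t <= maxTiles; t++) {
            double tp = throughputWithTiles(t);
            if (tp <= best) break; // throughput peaked
            best = tp;
            bestTiles = t;
        }
        return bestTiles;
    }

    public static void main(String[] args) {
        System.out.println("peak at tiles = " + peakTiles(20));
    }
}
```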
SPEC CPU (Source: spec.org)
The SPEC CPU 2006 benchmark is SPEC’s next-generation, industry-standardized, CPU-intensive benchmark suite, stressing a system’s processor, memory subsystem and compiler.
This benchmark suite includes the SPECint benchmarks and the SPECfp benchmarks. The SPECint 2006 benchmark contains 12 benchmark tests and the SPECfp 2006 benchmark contains 17. SPEC designed the suite to provide a comparative measure of compute-intensive performance across the widest practical range of hardware, using workloads developed from real user applications. The benchmarks are provided as source code, so users need to be comfortable invoking compilers and other tools from a console or command prompt in order to build the executable binaries.
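Each individual SPEC CPU result is a ratio of the reference machine’s run time to the measured run time, and the suite-level metric is the geometric mean of those ratios. A small sketch of that aggregation, with invented run times:

```java
// Sketch of how a SPEC CPU suite score is aggregated: per-benchmark ratio =
// reference time / measured time, and the suite metric is the geometric
// mean of the ratios. All times below are invented for illustration.
public class SpecCpuScore {

    public static double geometricMeanOfRatios(double[] refSeconds,
                                               double[] measuredSeconds) {
        // Sum logs of the ratios, then exponentiate the average:
        // exp((1/n) * sum(log(ref_i / measured_i))).
        double logSum = 0;
        for (int i = 0; i < refSeconds.length; i++) {
            logSum += Math.log(refSeconds[i] / measuredSeconds[i]);
        }
        return Math.exp(logSum / refSeconds.length);
    }

    public static void main(String[] args) {
        double[] ref = {9000, 10000, 12000};  // reference machine times (s)
        double[] measured = {450, 400, 600};  // system under test times (s)
        // Ratios are 20, 25, 20; geometric mean is cbrt(10000) ~ 21.54.
        System.out.printf("score = %.2f%n",
                geometricMeanOfRatios(ref, measured)); // prints score = 21.54
    }
}
```

The geometric mean is used so that no single benchmark’s ratio dominates the suite score the way it would under an arithmetic mean.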
Trevor Warren is passionate about challenging the status quo and finding reasons to innovate. Over the past 16 years he has delivered complex systems, worked with very large clients across the world, and constantly looked for opportunities to bring about change. Trevor strives to combine his passion for delivering outcomes with his ability to build long-lasting professional relationships. You can learn more about the work he does at LinkedIn, download a copy of his CV at VisualCV, and visit his GitHub page for details of the projects he’s been hacking on.