JEP 230: Microbenchmark Suite

Authors: Staffan Friberg, Aleksey Shipilev
Owner: Mikael Vidstedt
Created: 2014/07/17 00:07
Updated: 2016/04/05 17:29
Discussion: platform dash jep dash discuss at openjdk dot java dot net
Reviewed by: Joe Darcy, Mikael Vidstedt
Endorsed by: Mikael Vidstedt


Summary

Add a basic suite of microbenchmarks to the JDK source code, and make it easy for developers to run existing microbenchmarks and create new ones.




Description

The microbenchmark suite will be co-located with the JDK source code in a single repository or directory and, when built, will produce a single JAR file. A single repository will simplify adding and locating benchmarks during development. When running benchmarks, the user can rely on the powerful filtering capabilities of the Java Microbenchmark Harness (JMH) to run only the benchmarks that are currently of interest. The exact location remains to be determined.
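
For illustration, such a filtered run could be expressed programmatically through JMH's Runner API. The following sketch is hypothetical; the class name and the selecting regular expression are illustrative only, not part of the planned suite:

   import org.openjdk.jmh.runner.Runner;
   import org.openjdk.jmh.runner.RunnerException;
   import org.openjdk.jmh.runner.options.Options;
   import org.openjdk.jmh.runner.options.OptionsBuilder;

   public class RunStringBenchmarks {
       public static void main(String[] args) throws RunnerException {
           // Run only benchmarks whose fully qualified names match the regex.
           Options opts = new OptionsBuilder()
                   .include("java\\.lang\\.String.*")
                   .build();
           new Runner(opts).run();
       }
   }

The same kind of filtering is available on the command line when launching a JMH benchmark JAR directly.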

Benchmarking generally requires comparing against an earlier build, or even an earlier release, so the microbenchmarks must support both JDK(N), for benchmarks targeting features in the new JDK, and JDK(N-1), for benchmarks targeting features that exist in an earlier release. For JDK 9 this means that the structure and build scripts must support compiling benchmarks for both JDK 9 and JDK 8. The benchmarks will be further organized using Java package names that describe the area of the JDK they test.

After discussion on the jdk9-dev mailing list (repository location and naming), the following repository structure has been decided. The source structure will be discussed once the repository is in place.

   .../make (Shared folder for Makefiles)
   .../src/jdk9
      .../jdk (subdirectories similar to JDK packages and modules)
      .../hotspot (subdirectories similar to HotSpot components)
      .../resources (if needed)
   .../src/jdk8 (same as under JDK 9)

Building the microbenchmark suite will be integrated with the normal JDK build system. It will be a separate target that is not executed during normal JDK builds, in order to keep build times low for developers and others who are not interested in the microbenchmark suite. To build the suite, the user will have to explicitly run a target such as make microbenchmarks. Exactly how this is best integrated into the build system is not yet determined and will require further discussion with those responsible for the build system. The benchmarks will all depend on JMH in much the same way that some unit tests depend on TestNG or jtreg; while the dependence on JMH is new, other parts of the build have similar dependencies that can be reviewed to determine an appropriate solution. One difference compared to jtreg is that JMH is both used during the build and packaged as part of the resulting JAR file.

A Maven POM file will be added to the microbenchmark directory to enable easy development using IDEs. The target directory when using the POM file will be created under the regular JDK build directory to avoid mixing source code with the build output.

All added benchmarks will be thoroughly tested and reviewed to verify that they test what they are supposed to test and return predictable results, so that they can be used in regression-testing scenarios. All required benchmark parameters will be configured as part of the benchmarks, using JMH annotations, to ensure trusted and easy comparison between builds. Users are still expected to make sure that other parameters, such as the execution machine and the JDK, are stable and comparable when doing analysis.

Benchmarks are expected, in the common case, to finish a complete run in less than a minute. This is not a wrapper framework for large or long-running benchmarks; the goal is to provide a suite of fast and targeted benchmarks. In exceptional cases a benchmark may require a longer warmup or runtime to achieve stable results, but that should be avoided as much as possible. It is not a goal of the suite to act as a general wrapper for larger workloads, such as Java EE benchmarks running on top of GlassFish; the intent, rather, is to extract a critical component or method from such a larger benchmark and stress only that part as a microbenchmark.
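
As a sketch of the kind of small, self-contained benchmark intended here, the following hypothetical example pins all run parameters with JMH annotations so that results are comparable between builds; the package and class names are illustrative only:

   package org.openjdk.bench.java.lang; // hypothetical package name

   import java.util.concurrent.TimeUnit;

   import org.openjdk.jmh.annotations.Benchmark;
   import org.openjdk.jmh.annotations.BenchmarkMode;
   import org.openjdk.jmh.annotations.Fork;
   import org.openjdk.jmh.annotations.Measurement;
   import org.openjdk.jmh.annotations.Mode;
   import org.openjdk.jmh.annotations.OutputTimeUnit;
   import org.openjdk.jmh.annotations.Scope;
   import org.openjdk.jmh.annotations.Setup;
   import org.openjdk.jmh.annotations.State;
   import org.openjdk.jmh.annotations.Warmup;

   // All run parameters are fixed by annotations; 5 + 5 one-second
   // iterations in a single fork keep a full run well under a minute.
   @BenchmarkMode(Mode.AverageTime)
   @OutputTimeUnit(TimeUnit.NANOSECONDS)
   @Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
   @Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
   @Fork(1)
   @State(Scope.Thread)
   public class StringConcatBench {

       private String left;
       private String right;

       @Setup
       public void setup() {
           left = "microbench";
           right = "mark";
       }

       @Benchmark
       public String concat() {
           // Returning the result lets JMH consume it, preventing dead-code elimination.
           return left + right;
       }
   }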

As part of this project, a new page will be created to explain how to develop new benchmarks and to describe the requirements for adding one. The requirements will mandate adherence to coding standards, reproducible performance, and clear documentation of each benchmark and what it measures.


Alternatives

We considered maintaining the microbenchmark suite as a separate project and repository, but co-locating the microbenchmarks with the JDK source code, in the JDK forest, is the preferred solution. Co-location simplifies adding benchmarks for new features and removing obsolete ones, while still keeping the suite stable for benchmarking different JDK releases.


Testing

The microbenchmarks will be validated by the performance team as part of the performance testing of JDK 9, to ensure that only stable, tuned, and accurate microbenchmarks are added. Each benchmark will also be evaluated and profiled to verify that it tests the intended functionality. All tests must be run multiple times on all applicable platforms to ensure that they are stable.



Dependences

The microbenchmark suite will depend on the Java Microbenchmark Harness (JMH), version 1.0 or later.