JEP 233: Generate Run-Time Compiler Tests Automatically
|Component||hotspot / compiler|
|Discussion||hotspot-compiler-dev at openjdk.java.net|
|Reviewed by||Alexandre Iline, Igor Veresov, Mikael Vidstedt, Vladimir Kozlov|
|Endorsed by||Mikael Vidstedt|
Develop a tool to test the run-time compilers by automatically generating test cases.
Generated tests should be syntactically and semantically correct, so that a failure points to a compiler defect rather than to the test itself.
The tool should be configurable in terms of the language of final result (Java source code, Java bytecode), use of language constructs, control-flow and expressions complexity, etc.
Tests should be generated randomly but reproducibly.
As we add new platforms, leverage new CPU instructions, introduce new optimizations, and make other enhancements to the run-time compilers, it becomes increasingly infeasible to test the compilers effectively with direct, targeted tests.
The tool will randomly generate syntactically and semantically correct Java source code or byte code, compile it if necessary, run it in both interpreted (-Xint) and compiled (-Xcomp) modes, and verify that the results are identical.
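The interpreted-vs-compiled comparison can be sketched as follows. This is an illustrative harness only, not the tool's actual code; the class and method names are assumptions:

```java
import java.io.IOException;
import java.nio.file.Path;

// Hypothetical sketch of the differential check: run the same generated test
// class once interpreted (-Xint) and once fully compiled (-Xcomp), then
// compare the captured output of the two runs.
public class DiffRunner {

    // Launch `java <vmFlag> -cp <classDir> <mainClass>` and capture its
    // combined stdout/stderr output.
    static String runWith(String vmFlag, String mainClass, Path classDir)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "java", vmFlag, "-cp", classDir.toString(), mainClass)
                .redirectErrorStream(true)
                .start();
        // Drain the output before waiting, to avoid blocking on a full pipe.
        String out = new String(p.getInputStream().readAllBytes());
        p.waitFor();
        return out;
    }

    // The verification step: a generated test passes only if both modes
    // produce identical output.
    static boolean sameResults(String interpreted, String compiled) {
        return interpreted.equals(compiled);
    }
}
```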
The tool will work automatically, without human interaction. The generated tests will cover as many combinations as possible in a reasonable amount of time.
The Java source-code compiler, javac, does not use all of Java's byte codes, so generating only Java source code would leave some byte codes uncovered. Generating only byte code for all types of tests would be a much more complicated task, so we will adopt a hybrid approach that generates both Java source code and byte code.
Compiling source code during test execution is problematic for embedded platforms, where a full JDK might not be available, so the tool will provide a way to pre-compile source-code tests.
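Pre-compilation of source-code tests could, for instance, use the JDK's built-in compiler API. The sketch below is an assumption about how such a step might look, not the tool's actual implementation; `PreCompiler` and its methods are hypothetical names:

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class PreCompiler {

    // Compile one generated source file into outDir using the compiler
    // bundled with the JDK; returns false if no compiler is available
    // (e.g. a JRE-only runtime) or if compilation fails.
    static boolean preCompile(Path source, Path outDir) {
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        if (javac == null) return false;
        int rc = javac.run(null, null, null,
                "-d", outDir.toString(), source.toString());
        return rc == 0;
    }

    // Self-contained demo: write a trivial "generated" test to a temp
    // directory and pre-compile it, so only the .class file needs to be
    // shipped to a target that lacks javac.
    static boolean demo() {
        try {
            Path dir = Files.createTempDirectory("jit-tester");
            Path src = dir.resolve("T0.java");
            Files.writeString(src,
                "public class T0 { public static void main(String[] a) {"
                + " System.out.println(1 + 1); } }");
            return preCompile(src, dir)
                && Files.exists(dir.resolve("T0.class"));
        } catch (IOException e) {
            return false;
        }
    }
}
```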
The generated test cases will include complicated expressions and control-flow graphs and will make use of intrinsics, floating-point arithmetic, try-catch-finally constructs, etc. There will be a way to adjust the tool's configuration.
The tool will generate tests randomly, but for reproducibility it will report its randomization seed and accept such a seed in order to replay the exact same sequence of generated tests.
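Seed-driven generation of this kind can be illustrated with a minimal sketch: the same seed always yields the same source text, so a failing test can be regenerated exactly by replaying the reported seed. The generator below is a toy example under assumed names, not the tool's actual code:

```java
import java.util.Random;

public class ExprGen {
    private final Random rnd;

    ExprGen(long seed) {
        // All randomness flows from this single seeded source,
        // which is what makes the output reproducible.
        this.rnd = new Random(seed);
    }

    // Recursively build a random arithmetic expression of bounded depth.
    String expr(int depth) {
        if (depth == 0) return Integer.toString(rnd.nextInt(100));
        char op = "+-*".charAt(rnd.nextInt(3));
        return "(" + expr(depth - 1) + " " + op + " " + expr(depth - 1) + ")";
    }

    // Emit a tiny test class around the generated expression; replaying
    // the same seed reproduces the identical source text.
    static String generate(long seed) {
        ExprGen g = new ExprGen(seed);
        return "public class T" + seed + " {\n"
             + "    public static int test() { return " + g.expr(3) + "; }\n"
             + "}\n";
    }
}
```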
The tool's source code will be placed in the hotspot/test/testlibrary/jit-tester directory. Tests can be generated via targets provided in the tool's makefile. The result of test generation is a complete jtreg test suite which can be run from the same makefile or via jtreg directly. The tool makefiles will not be integrated into the HotSpot/JDK build infrastructure.
Given that the test-generation process takes a significant amount of time, generating and running these tests is not expected to be part of pre-integration testing. It makes sense, however, to run pre-generated tests regularly for reliability testing, and newly generated tests for better code coverage. Generated tests which find bugs should be integrated as regular regression tests into an appropriate test suite and run in the same way as other regression tests.
Running existing tests in compiled mode could be considered a viable alternative to this tool, but such an approach has several drawbacks:
It doesn't guarantee coverage of all language constructs and combinations of different optimizations.
A test failure doesn't always indicate a defect in the run-time compilers, so additional engineering time is needed for investigation and reproduction.
Some existing tests are themselves run-time compiler tests: they may force run-time compilation or require a specific compiler state, so forcing compiled mode can change their behavior and lead to false positives.
Creating a regression test from such a failure is comparatively difficult.
Due to these drawbacks, this approach cannot fully replace the proposed tool.