JEP draft: JWarmup: Precompile Java Hot Methods at Application Startup

Author: Wenqian Pei
Owner: Yumin Qi
Component: hotspot / compiler
Created: 2018/05/25 16:52
Updated: 2018/05/25 16:52


Summary

JWarmup addresses the Java application warmup problem: when peak load arrives at the same time the JIT compiler kicks in, JIT threads compete with application threads for CPU. By precompiling hot Java methods at startup, JWarmup reduces the performance degradation otherwise seen at peak time.


Goals

Precompile hot Java methods so that the application's CPU usage is reduced at peak load time.


Non-Goals

Compilation information is recorded into a file during a normal application run. The file format is not standardized and is used only by JWarmup.

Success Metrics

In a normal run of the application, collect compilation information and record it into a file. In a subsequent run with this recorded file, compile the recorded hot methods to native code ahead of the load peak, so those methods execute as fast native code instead of being interpreted first. A successful run should throw no exceptions, not crash, produce the same output as a normal run, and use less CPU.


Motivation

For a normal Java method, C2, the JIT server compiler, uses profile data collected on the target method at runtime and decides when to compile it to native code. For a large Java application, load typically arrives in a large burst within a short period, and that same burst causes hot methods to be compiled by the JIT into fast native versions. The JIT threads then consume many CPU cycles for compilation tasks exactly when the application threads need them most, so application throughput drops and response times rise noticeably. The solution is to precompile the hot Java methods before the real load arrives.
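The CPU cost of JIT compilation during warmup can be observed from within a running JVM. The following self-contained sketch (class and method names are illustrative, not part of JWarmup) hammers a small hot method and reports how much time the JIT compiler spent, using the standard CompilationMXBean:

```java
import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

public class WarmupDemo {
    // A deliberately hot method: after enough invocations the JIT
    // compiles it to native code. This is the kind of method a
    // JWarmup pre-run would record as hot.
    static long mix(long x) {
        return x * 2654435761L ^ (x >>> 13);
    }

    public static void main(String[] args) {
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        boolean timed = jit != null && jit.isCompilationTimeMonitoringSupported();
        long before = timed ? jit.getTotalCompilationTime() : 0;

        long acc = 0;
        for (int i = 0; i < 5_000_000; i++) {
            acc = mix(acc + i);          // drive the method hot
        }

        if (jit != null) {
            System.out.println("JIT compiler: " + jit.getName());
        }
        if (timed) {
            // Time the JIT threads spent compiling while we warmed up;
            // at application peak this competes with request threads.
            System.out.println("Compilation time spent (ms): "
                    + (jit.getTotalCompilationTime() - before));
        }
        System.out.println("checksum=" + acc);
    }
}
```

Running this with and without a load burst makes the competition visible: the compilation time is charged during the same window in which the hot loop runs.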


Description

JWarmup has two phases: a pre-run and a normal run. The pre-run typically executes under heavy test load; its purpose is to record compilation information (profiling data) for the hot Java methods and store it in a file on disk. In a normal run, the application starts with the previously recorded file, and the JIT threads first compile the methods listed in the file to native code. From startup onward, those methods execute as native code rather than in the interpreter.

Flags:
-XX:+CompilationWarmUp                       enables JWarmUp
-XX:CompilationWarmUpRecordMinLevel=<n>      sets the minimal record level
-XX:+CompilationWarmUpRecording              starts JWarmUp recording
-XX:CompilationWarmUpLogFile=<path>          sets the log file path
-XX:CompilationWarmUpRecordTime=<seconds>    sets the time at which to flush the log to the file (default: at VM exit)
-XX:+PrintCompilationWarmUpDetail            prints detailed information

For the pre-run, enable recording with -XX:+CompilationWarmUpRecording, set the recording file with -XX:CompilationWarmUpLogFile, and set how long to record with -XX:CompilationWarmUpRecordTime. For example, -XX:+CompilationWarmUpRecording -XX:CompilationWarmUpLogFile="jwarmup.log" -XX:CompilationWarmUpRecordTime=1200 records compilation information for 1200 seconds and stores it in jwarmup.log.

With the recorded file available, starting the application with -XX:+CompilationWarmUp -XX:CompilationWarmUpLogFile="jwarmup.log" will precompile the methods recorded in the log file after startup.
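Putting the two phases together, the workflow might look like the following sketch. It assumes a JWarmup-enabled JDK and a hypothetical application jar app.jar; flag spellings follow the list above:

```shell
# Phase 1: pre-run under representative test load, recording
# compilation information for 1200 s into jwarmup.log.
java -XX:+CompilationWarmUpRecording \
     -XX:CompilationWarmUpLogFile=jwarmup.log \
     -XX:CompilationWarmUpRecordTime=1200 \
     -jar app.jar

# Phase 2: normal run; the recorded methods are precompiled
# to native code right after startup, before the load peak.
java -XX:+CompilationWarmUp \
     -XX:CompilationWarmUpLogFile=jwarmup.log \
     -jar app.jar
```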


Alternatives

AOT (ahead-of-time) compilation with Graal has been integrated into OpenJDK and can generate native code for Java methods. It improves startup performance and partially avoids the problem described in this JEP. Why, then, develop another technique to solve it?

AOT requires a JDK with the Graal compiler, while JWarmup is based on the existing JIT compiler (C2), so it can be ported to older JDKs. AOT and native images also impose runtime limits, such as restrictions on GC policy and a closed-world assumption; JWarmup supports most runtime options. Finally, JWarmup is driven by profile data, so it knows the hot methods, the inline tree, and branch profiles, which help it generate better-optimized code than an AOT compiler.

Based on these considerations, we believe JWarmup complements the AOT and JIT compilers.

A normal run may behave differently from the testing pre-run, so the recorded profile data can be inaccurate; we have observed some JWarmup-compiled methods being deoptimized for this reason. In a high-load environment this should be avoided. For the same reason, a JWarmup-compiled method is not as optimized as a method the JIT compiles at a higher tier. To prevent deoptimization during the load peak, a flag lets the user control when deoptimization may happen (-XX:CompilationWarmUpDeoptTime=<seconds>), and another controls how many methods may be deoptimized per iteration (-XX:CompilationWarmUpDeoptNumOfMethodsPerIter=<n>). With these flags, the user can choose a time roughly after the peak to allow deoptimization to take place.
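As a sketch, deoptimization of JWarmup-compiled methods could be deferred past an expected 30-minute load peak like this (the numeric values and app.jar are illustrative assumptions, not recommendations from this JEP):

```shell
# Allow deoptimization only after 1800 s of uptime, and then
# deoptimize at most 10 methods per iteration to spread the cost.
java -XX:+CompilationWarmUp \
     -XX:CompilationWarmUpLogFile=jwarmup.log \
     -XX:CompilationWarmUpDeoptTime=1800 \
     -XX:CompilationWarmUpDeoptNumOfMethodsPerIter=10 \
     -jar app.jar
```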


Testing

The implementation is platform independent, so it applies to all platforms. Besides the existing HotSpot tests for the JIT, the change includes multiple test cases written specifically for this JEP.

Risks and Assumptions

JWarmup uses the C2 compiler to generate native code from the recorded profiling information. The generated code may not be as good as code C2 compiles during a normal run. Since real applications are complicated, their class relationships are complicated too; the class loading order is therefore recorded, and a method is safe to compile only when the class loading order observed in the current run is greater than or equal to the recorded order number. This may leave many methods uncompiled at startup when classes load in a different order.