Trying out GraalVM’s Native Image functionality
In OpenRefine, we have been enforcing a code style for our Java files using a linter, which reformats source files according to a configuration expressed in Eclipse’s internal format.
Because the linter reuses Eclipse’s internal libraries, it is of course written in Java. Also, we invoke it via Maven, meaning that
the startup time of the linter is quite long: we need to boot a Java virtual machine (JVM), which then boots Maven, which finally runs
the formatter. So that’s not exactly fast. On my laptop it takes about 11 seconds. It’s not the end of the world, but it means it’s
a bit annoying to have it as a git pre-commit hook, and it’s definitely
not suitable as a git filter driver. Those filter drivers are processes which take
the contents of a file on standard input and output a normalized version of it. Given that they are run on the fly for things as basic as doing a git diff or git status, they need to be really quick.
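For context, a filter driver is declared in the git configuration and attached to paths via .gitattributes. A minimal sketch, with a hypothetical filter name and binary path (the clean filter is the one git runs to normalize file contents on their way into the repository):

$ git config filter.java-lint.clean /path/to/filterlinter
$ echo "*.java filter=java-lint" >> .gitattributes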
GraalVM
For a while I have been looking at Oracle’s GraalVM project, mostly with an interest in the inter-language interoperability and interpreter optimization features, because they could be useful for OpenRefine’s integration of Python and other expression languages. I won’t reproduce the sales pitch here; there are plenty of resources out there which explain it much better than I’d be able to. But it seems that the most popular feature of this technology is a fairly different one: the ability to generate “native images”, meaning the compilation of a Java program to native code. This makes it possible to boot the program much faster, so it seems to be useful in cloud environments where processes are started on demand, for instance when an HTTP request comes in.
So this felt like a good opportunity to try out this technology: can we turn this linter into a native image, such that it’s fast enough to be run as a git filter?
Putting something together
As a quick experiment, I put together a small Java program to lint Java files according to OpenRefine’s style. It does basically the same thing as the Maven plugin we use, but without going through Maven at all, so that we at least avoid that source of slowness. It calls Eclipse’s own formatting code with a hard-coded configuration that matches OpenRefine’s settings. Because we also started normalizing our import order, I went ahead and added calls to another library which does that (that library is actually a Maven plugin, but luckily we can avoid pulling in the Maven dependencies by excluding them explicitly, and only call Maven-free code).
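The core of the formatting step looks roughly like this. It is a minimal sketch rather than the linter’s actual code: the option map is abbreviated (a real Eclipse configuration contains hundreds of keys) and the Java source level is an arbitrary pick:

import java.util.Map;

import org.eclipse.jdt.core.JavaCore;
import org.eclipse.jdt.core.ToolFactory;
import org.eclipse.jdt.core.formatter.CodeFormatter;
import org.eclipse.jdt.core.formatter.DefaultCodeFormatterConstants;
import org.eclipse.jface.text.Document;
import org.eclipse.text.edits.TextEdit;

public class Formatter {

    public static String format(String source) throws Exception {
        // start from Eclipse's built-in defaults and override what we need
        Map<String, String> options = DefaultCodeFormatterConstants.getEclipseDefaultSettings();
        options.put(JavaCore.COMPILER_SOURCE, JavaCore.VERSION_11);
        options.put(JavaCore.COMPILER_COMPLIANCE, JavaCore.VERSION_11);
        options.put(JavaCore.COMPILER_CODEGEN_TARGET_PLATFORM, JavaCore.VERSION_11);

        CodeFormatter formatter = ToolFactory.createCodeFormatter(options);
        // format the whole file as a compilation unit
        TextEdit edit = formatter.format(CodeFormatter.K_COMPILATION_UNIT,
                source, 0, source.length(), 0, "\n");
        if (edit == null) {
            throw new IllegalArgumentException("source file could not be parsed");
        }
        // the formatter returns a patch, which we apply to the original text
        Document document = new Document(source);
        edit.apply(document);
        return document.get();
    }
}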
When running this Java program normally, via a .jar file which includes all necessary dependencies, it takes about 0.8 seconds to format a fairly big Java file. That’s still too long for a filter driver.
Turning the program into a binary
To turn this program into a binary, we first need to download GraalVM, which is a sort of alternative Java distribution with all those fancy features enabled. There is an open source version, called the “Community Edition” (released under the GPL v2 with the Classpath Exception). Once we are running that, there is a Maven plugin which helps generate the native image as part of Maven’s build process, which is quite convenient. It even supports compiling and running the Java tests as a native binary. To generate the native image (skipping the tests), one can just run:
mvn -Pnative -DskipTests -Dagent package
Well, “just” is a big word, because this step is actually really intense: it takes 2m 22s in total to compile my little linter (on a beefier desktop than my laptop), producing a 61 MB binary. We get an overview of the compilation phases:
[1/8] Initializing... (4,1s @ 0,17GB)
Java version: 21.0.1+12, vendor version: Oracle GraalVM 21.0.1+12.1
Graal compiler: optimization level: 2, target machine: x86-64-v3, PGO: ML-inferred
C compiler: gcc (linux, x86_64, 13.2.0)
Garbage collector: Serial GC (max heap size: 80% of RAM)
1 user-specific feature(s):
- com.oracle.svm.thirdparty.gson.GsonFeature
-----------------------------------------------------------------------------------------------------------------------
Build resources:
- 11,75GB of memory (75,6% of 15,54GB system memory, determined at start)
- 8 thread(s) (100,0% of 8 available processor(s), determined at start)
[2/8] Performing analysis... [***] (34,0s @ 0,95GB)
9 142 reachable types (83,5% of 10 948 total)
16 427 reachable fields (61,2% of 26 857 total)
55 394 reachable methods (61,8% of 89 703 total)
2 749 types, 466 fields, and 1 313 methods registered for reflection
59 types, 56 fields, and 53 methods registered for JNI access
4 native libraries: dl, pthread, rt, z
[3/8] Building universe... (4,0s @ 1,43GB)
[4/8] Parsing methods... [****] (13,1s @ 1,51GB)
[5/8] Inlining methods... [****] (1,7s @ 1,89GB)
[6/8] Compiling methods... [********] (75,4s @ 1,92GB)
[7/8] Layouting methods... [***] (5,8s @ 1,25GB)
[8/8] Creating image... [**] (3,4s @ 1,69GB)
36,34MB (59,58%) for code area: 33 145 compilation units
22,27MB (36,50%) for image heap: 249 379 objects and 85 resources
2,39MB ( 3,92%) for other data
60,99MB in total
Now let’s start it!
$ ./target/filterlinter < Example.java
Exception in thread "main" java.lang.ExceptionInInitializerError
at java.base@21.0.1/java.lang.Class.ensureInitialized(DynamicHub.java:595)
at org.eclipse.jdt.core.dom.CompilationUnitResolver.parse(CompilationUnitResolver.java:604)
at org.eclipse.jdt.core.dom.ASTParser.internalCreateAST(ASTParser.java:1264)
at org.eclipse.jdt.core.dom.ASTParser.createAST(ASTParser.java:868)
at org.eclipse.jdt.internal.formatter.DefaultCodeFormatter.parseSourceCode(DefaultCodeFormatter.java:317)
at org.eclipse.jdt.internal.formatter.DefaultCodeFormatter.prepareFormattedCode(DefaultCodeFormatter.java:221)
at org.eclipse.jdt.internal.formatter.DefaultCodeFormatter.format(DefaultCodeFormatter.java:185)
at eu.delpeuch.antonin.filterlinter.Formatter.format(Formatter.java:49)
at eu.delpeuch.antonin.filterlinter.App.formatString(App.java:41)
at eu.delpeuch.antonin.filterlinter.App.main(App.java:35)
at java.base@21.0.1/java.lang.invoke.LambdaForm$DMH/sa346b79c.invokeStaticInit(LambdaForm$DMH)
Caused by: java.lang.NullPointerException
at java.base@21.0.1/java.text.MessageFormat.applyPattern(MessageFormat.java:468)
at java.base@21.0.1/java.text.MessageFormat.<init>(MessageFormat.java:382)
at java.base@21.0.1/java.text.MessageFormat.format(MessageFormat.java:882)
at org.eclipse.jdt.internal.compiler.util.Messages.bind(Messages.java:173)
at org.eclipse.jdt.internal.compiler.util.Messages.bind(Messages.java:150)
at org.eclipse.jdt.internal.compiler.parser.Parser.readTable(Parser.java:816)
at org.eclipse.jdt.internal.compiler.parser.Parser.initTables(Parser.java:658)
at org.eclipse.jdt.internal.compiler.parser.Parser.<clinit>(Parser.java:175)
... 11 more
Oops, that does not look like the linted Java code I wanted!
The catch
After searching the web to understand what this could be due to, I realized that the GraalVM compiler needs a bit of help to handle some corner cases. There are aspects of the Java language that it cannot simply compile ahead of time by itself, such as the use of reflection. Reflection is the dynamic inspection of Java classes by the Java code itself, which can be used to do all sorts of funky things. For this to work in a native image, you basically need to tell the compiler ahead of time which classes will be inspected, so that it can pre-compute and store the output of those introspection calls.
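To make that concrete, here is the kind of pattern that cannot be resolved ahead of time. This is an illustrative sketch, not code from the linter:

import java.lang.reflect.Method;

public class ReflectiveCall {
    // the class and method names are only known at run time, so an
    // ahead-of-time compiler cannot tell what must be kept in the image
    public static Object call(String className, String methodName) throws Exception {
        Class<?> clazz = Class.forName(className);
        Method method = clazz.getMethod(methodName);
        return method.invoke(clazz.getDeclaredConstructor().newInstance());
    }
}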
GraalVM offers an “agent” to solve that problem. You can run the original Java program with this agent enabled, and it will record which of those reflection calls are made (along with other sorts of special calls). That generates a set of configuration files to be used by the compiler, which in my case looked like this (the agent invocation itself is sketched after the JSON):
[
  {
    "name": "com.github.javaparser.ast.body.FieldDeclaration",
    "allDeclaredFields": true
  },
  {
    "name": "com.github.javaparser.ast.expr.VariableDeclarationExpr",
    "allDeclaredFields": true
  },
  {
    "name": "java.util.concurrent.ForkJoinTask",
    "fields": [
      {
        "name": "aux"
      },
      {
        "name": "status"
      }
    ]
  },
  {
    "name": "java.util.concurrent.atomic.AtomicBoolean",
    "fields": [
      {
        "name": "value"
      }
    ]
  },
  {
    "name": "jdk.internal.misc.Unsafe"
  },
  {
    "name": "org.eclipse.core.internal.runtime.Messages",
    "allDeclaredFields": true
  },
  {
    "name": "org.eclipse.jdt.internal.compiler.util.Messages",
    "allDeclaredFields": true
  }
]
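For reference, the agent is attached to a regular JVM run of the program, along these lines (the output directory is my choice; META-INF/native-image on the classpath is a location the native image build picks up automatically):

$ java -agentlib:native-image-agent=config-output-dir=src/main/resources/META-INF/native-image \
    -jar target/filterlinter-0.0.1-SNAPSHOT-jar-with-dependencies.jar < Example.java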
Great! If we compile again and run the binary on the same file we ran the agent on, we do get linted code out this time. And it’s indeed faster: 17 ms of total run time for a small file that takes 375 ms with the .jar instead. Nice.
But if I try running the linter again on a new file… I get a new error!
$ ./target/filterlinter < Example2.java
Exception in thread "main" java.lang.NoSuchFieldError: levels
at com.github.javaparser.metamodel.PropertyMetaModel.getValue(PropertyMetaModel.java:260)
at com.github.javaparser.ast.validator.language_level_validations.chunks.CommonValidators.lambda$new$7(CommonValidators.java:63)
at com.github.javaparser.ast.validator.TreeVisitorValidator.accept(TreeVisitorValidator.java:38)
at com.github.javaparser.ast.validator.TreeVisitorValidator.accept(TreeVisitorValidator.java:40)
at com.github.javaparser.ast.validator.TreeVisitorValidator.accept(TreeVisitorValidator.java:40)
at com.github.javaparser.ast.validator.TreeVisitorValidator.accept(TreeVisitorValidator.java:40)
at com.github.javaparser.ast.validator.TreeVisitorValidator.accept(TreeVisitorValidator.java:40)
at com.github.javaparser.ast.validator.TreeVisitorValidator.accept(TreeVisitorValidator.java:40)
at com.github.javaparser.ast.validator.TreeVisitorValidator.accept(TreeVisitorValidator.java:40)
at com.github.javaparser.ast.validator.TreeVisitorValidator.accept(TreeVisitorValidator.java:40)
at com.github.javaparser.ast.validator.Validators.lambda$accept$0(Validators.java:64)
at java.base@21.0.1/java.util.ArrayList.forEach(ArrayList.java:1596)
at com.github.javaparser.ast.validator.Validators.accept(Validators.java:64)
at com.github.javaparser.ast.validator.Validators.lambda$accept$0(Validators.java:64)
at java.base@21.0.1/java.util.ArrayList.forEach(ArrayList.java:1596)
at com.github.javaparser.ast.validator.Validators.accept(Validators.java:64)
at com.github.javaparser.ParserConfiguration$2.postProcess(ParserConfiguration.java:308)
at com.github.javaparser.JavaParser.parse(JavaParser.java:128)
at com.github.javaparser.JavaParser.parse(JavaParser.java:305)
at net.revelc.code.impsort.ImpSort.parseFile(ImpSort.java:129)
at eu.delpeuch.antonin.filterlinter.Formatter.sortImports(Formatter.java:67)
at eu.delpeuch.antonin.filterlinter.App.formatString(App.java:43)
at eu.delpeuch.antonin.filterlinter.App.formatFile(App.java:53)
at eu.delpeuch.antonin.filterlinter.App.main(App.java:30)
at java.base@21.0.1/java.lang.invoke.LambdaForm$DMH/sa346b79c.invokeStaticInit(LambdaForm$DMH)
That’s where I think the technology starts to get much less convincing. The problem here, as the stack trace might suggest, is that the Java parser used by our import-sorting library relies on reflection internally.
Because the agent only observed a run on one example file, it captured the uses of reflection only for the syntactic constructs present in that file: in the JSON configuration above, you see for instance a mention of com.github.javaparser.ast.body.FieldDeclaration, which must be the class that represents Java field declarations in an abstract syntax tree. So whenever we run our binary on another file with constructs the agent has not seen, we are missing the information required to resolve the reflection calls, and we just fail.
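That would also explain the NoSuchFieldError: JavaParser’s metamodel reads the fields of AST node classes reflectively, so the failing call presumably has the general shape of this sketch (not JavaParser’s actual code):

import java.lang.reflect.Field;

public class PropertySketch {
    // look up an AST node's property by field name and read it reflectively;
    // in a native image this fails unless the field was registered beforehand
    public static Object getValue(Object node, String fieldName) throws Exception {
        Field field = node.getClass().getDeclaredField(fieldName); // e.g. "levels"
        field.setAccessible(true);
        return field.get(node);
    }
}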
I really wonder what the expected workaround is for this sort of situation. Of course, it is natural to try and execute the agent on a set of files that’s as diverse as possible, but I don’t want the correctness of my program to rely on me finding a set of Java files which cover the entire set of possible AST nodes! Even if my code base had Java tests featuring 100% code coverage, executing my tests with the agent would not be enough, since those reflection calls are done by a dependency, not my own code.
Because this was just an experiment, I decided to go for a hacky route: just manually adding all the AST node classes to the JSON file. I can first generate the list of AST nodes by inspecting the .jar file:
$ jar -tf target/filterlinter-0.0.1-SNAPSHOT-jar-with-dependencies.jar | grep com.github.javaparser.ast | grep -P "\.class$"
com/github/javaparser/ast/AccessSpecifier.class
com/github/javaparser/ast/AllFieldsConstructor.class
com/github/javaparser/ast/ArrayCreationLevel.class
com/github/javaparser/ast/body/AnnotationDeclaration.class
com/github/javaparser/ast/body/AnnotationMemberDeclaration.class
com/github/javaparser/ast/body/BodyDeclaration.class
...
That’s 282 classes in total. Then, with a bit more bash scripting (one possible incantation is sketched below), I can translate this list into the required JSON objects:
[
  {
    "name": "com.github.javaparser.ast.AccessSpecifier",
    "allDeclaredFields": true
  },
  {
    "name": "com.github.javaparser.ast.AllFieldsConstructor",
    "allDeclaredFields": true
  },
  {
    "name": "com.github.javaparser.ast.ArrayCreationLevel",
    "allDeclaredFields": true
  },
  ...
And recompile my native image with this new configuration. That’s incredibly ugly, but it seems to work, and it still runs fast.
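For the record, that translation can be done with a short pipeline; something like this, assuming jq is available (not necessarily the exact commands I used, which were equally disposable):

$ jar -tf target/filterlinter-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
    | grep -P "^com/github/javaparser/ast/.*\.class$" \
    | sed -e 's#/#.#g' -e 's#\.class$##' \
    | jq -nR '[inputs | {name: ., allDeclaredFields: true}]' > reflect-config.json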
Of course, it would be much easier if you could at least ask for all the classes in a given Java package to have reflection enabled. I was pleased to add the 42nd thumbs up on this GitHub issue. But even if that were possible, I really wonder what the expected solution to this problem is. How can I be really sure there isn’t another reachable use of reflection somewhere that I just haven’t encountered yet?
Needless to say, the resulting binary is not something I can propose to use as a linter in the OpenRefine project: I would need to distribute one for all reasonable operating systems and CPU architectures, and embedding even a single 61 MB binary in the git repository is not going to fly. It was fun to try this out regardless, and I might still use the binary myself, as it is actually fast enough to be run as a filter, in my opinion.