Build your own framework using a Java annotation processor
Jacek Dubikowski
Senior Software Engineer
Published: Jan 11, 2023 | 10 min read
A majority of developers in the JVM world work on various web applications, most of which are based on a framework like Spring or Micronaut. However, some people claim that frameworks introduce too much overhead. I decided to see how valid such claims are and how much work it takes to replicate what frameworks give us out of the box.
This article isn’t about whether it is feasible to use a framework or when to use one. It is about writing your own framework – tinkering is the best way of learning!
For the sake of simplicity, we will use a demo application.
The starting point of an application without a framework would look like the code below:
public class NoFrameworkApp {
    public static void main(String[] args) {
        ParticipationService participationService = new ManualTransactionParticipationService(
                new ParticipantRepositoryImpl(),
                new EventRepositoryImpl(),
                new TransactionalManagerStub()
        );
        participationService.participate(new ParticipantId(), new EventId());
    }
}
As we can see, the application’s main method is responsible for providing the implementation of interfaces that ManualTransactionParticipationService depends on. The developer must know which ParticipationService implementation should be created in the main method. When using a framework, programmers typically don’t need to create instances and dependencies on their own. They rely on the core feature of the frameworks – Dependency Injection.
So, let’s take a look at a simple implementation of the dependency injection container based on annotation processing.
Dependency Injection, or DI, is a pattern in which an object receives its instance variables (its dependencies) from the outside instead of creating them itself.
But how is this done? The pattern separates responsibility for object creation from its usage. The required objects are provided (“injected”) during runtime, and the pattern’s implementation handles the creation and lifecycle of the dependencies.
The pattern has advantages, like decreased coupling, simplified testing, and increased flexibility, but also drawbacks: dependence on a framework, harder debugging, or more work at the beginning of the project.
One well-known example is Java/Jakarta CDI, the standard DI framework that originated in Java EE 6.
Most of these DI frameworks use annotations as one of the possible ways to configure the bindings. By bindings, I mean the configuration of which implementations should be used for interfaces or which dependencies should be provided to create objects.
Spring, the most popular Java framework, processes annotations at runtime. The solution is heavily based on the reflection mechanism. The reflection-based approach is one possible way to handle annotations, and if you would like to follow that lead, please refer to Java Own Framework – step by step.
Compile-time handling
In addition to runtime handling, there is another approach: part of the dependency injection work can happen during annotation processing, which occurs at compile time. It has become popular lately thanks to Micronaut and Quarkus, as they both utilise this approach.
Annotation processing isn’t just for dependency injection. It is part of various tools, for example libraries like Lombok or MapStruct.
Annotation Processing and Processors
The purpose of annotation processing is to generate new files, not to modify existing ones. It can also perform compile-time checks, like ensuring that all class fields are final. If something is wrong, the processor may fail the compilation and provide the programmer with information about the error.
Annotation processors are written in Java and are used by javac during compilation. However, a processor must be compiled before it can be used – it cannot directly process itself.
The processing happens in rounds. In every round, the compiler searches for annotated elements. Then the compiler matches annotated elements to the processors that declared being interested in processing them. Any generated files become input for the next round of the compilation. If there are no more files to process, the compilation ends.
How to observe the work of annotation processors
There are two compiler flags, -XprintProcessorInfo and -XprintRounds, that will print information about the processing and the compilation rounds.
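For example, assuming the processor is packaged as processor.jar and we compile a single annotated source file, the invocation could look like this (the paths are illustrative):

javac -XprintRounds -XprintProcessorInfo -processorpath processor.jar io/jd/Data.java

The output below comes from such a compilation: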
Round 1:
    input files: {io.jd.Data}
    annotations: [io.jd.AllFieldsFinal]
    last round: false
Processor io.jd.AnnotationProcessor matches [/io.jd.SomeAnnotation] and returns true.
Round 2:
    input files: {}
    annotations: []
    last round: true
We can make some assumptions based on the code above. First, we need the framework to provide annotations for marking classes. I decided to use the standardised jakarta.inject.* library for annotations – to be more precise, just jakarta.inject.Singleton. The same annotation is used by Micronaut.
The second thing we can be sure about is that we need a BeanProvider. The frameworks like to refer to it using the word Context, like ApplicationContext.
The third necessary thing is an annotation processor that will process the mentioned annotation(s). It should produce classes allowing the framework to provide the expected dependencies at runtime.
The framework should use the reflection mechanism as little as possible.
For the sake of simplicity, we will assume that the framework:
handles concrete classes annotated with @Singleton that have one constructor only,
utilises the singleton scope (each bean will have only one instance for a given BeanProvider).
How should the framework work?
The annotation processing approach is powerful and offers many ways to achieve the goal. Therefore, the design is the point where we should start. We will begin with a basic version and develop it gradually as the article progresses.
The diagram below shows the high-level architecture of the desired solution.
In application code, getting and using a bean would look like this:

var bean = beanProvider.provide(SoftDrink.class);
System.out.println(bean.name()); // prints "Bubbles"
As you can see, we need a BeanProcessor to generate implementations of the BeanDefinition for each bean. Then the BeanDefinitions are picked by BaseBeanProvider, which implements the BeanProvider (not in the diagram). In the application code, we use the BaseBeanProvider, created for us by the BeanProviderFactory. We also use the ScopeProvider interface that is supposed to handle the scope of the bean lifespan. In the example, as mentioned, we only care about the singleton scope.
Implementation of the framework
The framework itself is placed in the Gradle subproject called framework.
The BeanDefinition interface only has two methods: type(), which provides a Class object for the bean class, and create(…), which builds the bean itself. The create(…) method accepts a BeanProvider, from which it gets the dependencies it needs during bean construction – it is not supposed to create them itself, hence the DI.
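A minimal sketch of the interface, consistent with the generated definitions shown later in the article:

package io.jd.framework;

public interface BeanDefinition<T> {
    // Builds the bean, resolving its dependencies through the given provider.
    T create(BeanProvider beanProvider);

    // Returns the Class object for the bean type.
    Class<T> type();
}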
The framework will also need the BeanProvider interface with just two methods.
package io.jd.framework;

public interface BeanProvider {
    <T> T provide(Class<T> beanType);

    <T> Iterable<T> provideAll(Class<T> beanType);
}
The provideAll(…) method provides all beans that match the parameter Class<T> beanType. By match, I mean that the given bean is a subtype of, or the same type as, the given beanType. The provide(…) method is almost the same but provides exactly one matching bean; an exception is thrown in the case of no matching beans or more than one.
Annotation processor
We expect the annotation processor to find classes annotated with @Singleton, then check that they are valid (not interfaces or abstract classes, and with just one constructor). The final step is creating an implementation of BeanDefinition for each annotated class.
So we should start by implementing it, right?
Test-driven development would object. We will get back to the tests later. For now, let's focus on the implementation.
Our processor will extend the provided AbstractProcessor instead of fully implementing the Processor interface.
The actual implementation in the repository differs from what you see here. Don't worry; the full version will be used in the next step. The simplified version shown here is enough to do the actual DI work.
Thanks to the use of AbstractProcessor, we don't have to override any methods. We can use annotations instead:
1. @SupportedAnnotationTypes corresponds to Processor.getSupportedAnnotationTypes and is used to build the returned value. As declared, the processor cares only about @jakarta.inject.Singleton.
2. @SupportedSourceVersion(SourceVersion.RELEASE_17) corresponds to Processor.getSupportedSourceVersion and is used to build the returned value. The processor will support the language up to the level of Java 17.
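Put together, the declaration looks roughly like this (a simplified sketch; the full version is in the repository):

@SupportedAnnotationTypes("jakarta.inject.Singleton") // 1
@SupportedSourceVersion(SourceVersion.RELEASE_17)     // 2
class BeanProcessor extends AbstractProcessor {
    // process(...) is covered in the next step
}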
Step 3 – Override the process method
Please assume that the code below is included in the BeanProcessor class body.
The annotations parameter provides the set of annotations to process, represented as Elements. Annotations are represented at least by the TypeElement interface. It may seem unusual, as everyone is used to java.lang.Class or the broader java.lang.reflect.Type, which are runtime representations.
On the other hand, there is also the compile-time representation.
Let me introduce the Element interface, the common interface for all language-level compile-time constructs such as classes, modules, variables and packages. It is worth mentioning that there are subtypes corresponding to the constructs like PackageElement or TypeElement.
The processor code is going to use the Elements a lot.
As the processor should catch any exception and log it rather than blow up, we use a try-catch here. The BeanProcessor.processBeans method provides the actual annotation processing.
The annotation processing framework provides a Messager instance to the user through the processingEnv field of AbstractProcessor. The Messager is the way to report errors, warnings, and other diagnostics. It defines four overloaded printMessage(…) methods, whose first parameter defines the message type using the Diagnostic.Kind enum. The code shows an example of an error message. If a processor throws an exception instead, the compilation will fail without that extra diagnostic data.
There is no need to claim the annotations, so the method returns false.
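A sketch of the method along those lines (assuming the processBeans name used below):

@Override
public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
    try {
        processBeans(roundEnv); // the actual annotation processing
    } catch (Exception e) {
        // report the failure as a compiler diagnostic instead of failing without information
        processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR, "Exception occurred %s".formatted(e));
    }
    return false; // we don't claim the annotations
}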
First, the RoundEnvironment is used to provide all elements from the compilation round annotated with @Singleton.
Then the ElementFilter is used to get only the TypeElements out of annotated. It could be wise to fail here when annotated differs in size from types, but one can annotate anything with @Singleton, and we don't want to handle those cases. Therefore, we won't care for anything other than TypeElements, which represent class and interface elements during compilation.
The ElementFilter is a utility class that filters Iterable<? extends Element> or Set<? extends Element> to get elements matching criteria with type narrowed to matching Element implementation.
As the next step, we instantiate the TypeDependencyResolver, which is part of our framework. The class is responsible for getting the type element, checking if it has only one constructor and what are the constructor parameters. We will cover its code later on.
Then we resolve our dependencies using the TypeDependencyResolver to be able to build our BeanDefinition instances.
The last thing to do is write Java files with definitions. We will cover it in Step 5.
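Putting those steps together, processBeans can be sketched as follows (a reconstruction; writeDefinition is covered in Step 5):

private void processBeans(RoundEnvironment roundEnv) {
    Set<? extends Element> annotated = roundEnv.getElementsAnnotatedWith(Singleton.class); // find annotated elements
    Set<TypeElement> types = ElementFilter.typesIn(annotated);                             // keep only TypeElements
    var dependencyResolver = new TypeDependencyResolver();
    types.stream()
            .map(type -> dependencyResolver.resolve(type, processingEnv.getMessager()))    // resolve dependencies
            .forEach(this::writeDefinition);                                               // write the definitions
}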
Step 4 – Resolving dependencies
Getting back to the TypeDependencyResolver, the code below shows the implementation:
public class TypeDependencyResolver {

    public Dependency resolve(TypeElement element, Messager messager) {
        var constructors = ElementFilter.constructorsIn(element.getEnclosedElements()); // 1
        if (constructors.size() != 1) { // 2
            return failOnTooManyConstructors(element, messager, constructors); // 4
        }
        var constructor = constructors.get(0); // 3
        return new Dependency(element, constructor.getParameters().stream().map(VariableElement::asType).toList());
    }
    ...
}
1. The ElementFilter, which we're already familiar with, gets the constructors of the element.
2. A check is carried out to ensure our element has just one constructor.
3. If there is one constructor, we follow the process and create a Dependency object holding the element and the types of its constructor parameters. It will be used for writing the actual Java code.
4. In case there is more than one, the compilation fails. You can see the failOnTooManyConstructors method implementation here.
Seeing the Dependency implementation would be beneficial, so please take a look:
public final class Dependency {
    private final TypeElement type;
    private final List<TypeMirror> dependencies;

    ...

    public TypeElement type() {
        return type;
    }

    public List<TypeMirror> dependencies() {
        return dependencies;
    }
    ...
}
You may have noticed the unfamiliar type TypeMirror. It represents a type in the Java language (literally the language, as this is a compile-time representation).
Step 5 – Writing definitions
How can I write Java source code?
To write Java code during annotation processing, you can use almost anything. You are good to go as long as you end up with CharSequence/String/byte[].
In examples on the Internet, you will find that it is popular to use StringBuffer. Honestly, I find it inconvenient to write any source code like that. There is a better solution available for us.
JavaPoet is a library for writing Java source code using a Java API. You will see it in action in the next section.
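As a taste of the API, here is the canonical example from the JavaPoet documentation, generating a HelloWorld class (unrelated to our framework):

MethodSpec main = MethodSpec.methodBuilder("main")
        .addModifiers(Modifier.PUBLIC, Modifier.STATIC)
        .returns(void.class)
        .addParameter(String[].class, "args")
        .addStatement("$T.out.println($S)", System.class, "Hello, JavaPoet!")
        .build();

TypeSpec helloWorld = TypeSpec.classBuilder("HelloWorld")
        .addModifiers(Modifier.PUBLIC, Modifier.FINAL)
        .addMethod(main)
        .build();

JavaFile javaFile = JavaFile.builder("com.example.helloworld", helloWorld).build();
javaFile.writeTo(System.out); // prints the generated source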
Missing part of BeanProcessor
Some parts of the BeanProcessor file have not been revealed yet. Let us get back to them:
...
        processingEnv.getMessager().printMessage(ERROR, "Failed to write definition %s".formatted(javaFile));
    }
}
The writing is done in two steps:
1. The DefinitionWriter creates the BeanDefinition source and returns a JavaFile instance containing it.
2. The implementation is written to an actual file using the Filer instance provided via processingEnv. Should the writing fail, the compilation will fail, and the compiler will print the error message.
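Combined, the missing fragment can be sketched like this (only the catch block above comes verbatim from the original; the method name and createDefinition are assumptions based on the description):

private void writeDefinition(Dependency dependency) {
    // Step 1: DefinitionWriter builds the BeanDefinition source as a JavaFile
    JavaFile javaFile = new DefinitionWriter(dependency.type(), dependency.dependencies()).createDefinition();
    // Step 2: write it out through the compiler-provided Filer
    try {
        javaFile.writeTo(processingEnv.getFiler());
    } catch (IOException e) {
        processingEnv.getMessager().printMessage(ERROR, "Failed to write definition %s".formatted(javaFile));
    }
}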
Filer is an interface that supports file creation for an annotation processor. The location for the generated files is configured through the -s javac flag. However, most of the time, build tools handle it for you; in that case, the files are stored in a directory like build/generated/sources/annotationProcessor/java for Gradle, or similar for other tools.
The creation of Java code takes place in DefinitionWriter, and you will see the implementation in a moment. However, the question is what such a definition looks like. I think an example will show it best.
An example of what should be written
For the bean below:

@Singleton
public class ServiceC {
    private final ServiceA serviceA;
    private final ServiceB serviceB;

    public ServiceC(ServiceA serviceA, ServiceB serviceB) {
        this.serviceA = serviceA;
        this.serviceB = serviceB;
    }
}
The definition should look like the code below:
public class $ServiceC$Definition implements BeanDefinition<ServiceC> { // 1
    private final ScopeProvider<ServiceC> provider = // 2
            ScopeProvider.singletonScope(beanProvider -> new ServiceC(beanProvider.provide(ServiceA.class), beanProvider.provide(ServiceB.class)));

    @Override
    public ServiceC create(BeanProvider beanProvider) { // 3
        return provider.apply(beanProvider);
    }

    @Override
    public Class<ServiceC> type() { // 4
        return ServiceC.class;
    }
}
There are four elements here:
1. An inconvenient name, to discourage using the class directly. The class implements BeanDefinition<BeanType>.
2. A field of type ScopeProvider, responsible for instantiating the bean and ensuring its lifetime (scope). Singleton scope is the only scope the framework covers, so the ScopeProvider.singletonScope() method is the only one used. The Function<BeanProvider, Bean> used to instantiate the bean is passed to ScopeProvider.singletonScope(). I will cover the implementation of ScopeProvider later; for now, it is enough to know that it ensures just one instance of the bean in our DI context. However, if you are curious, the source code is available here.
3. The actual create method uses the provider and connects it with the beanProvider through the apply method.
4. The implementation of the type method is a simple task.
The example shows that the only bean-specific things are the type passed to BeanDefinition declaration, new call, and field/returned types.
Implementation of the DefinitionWriter
To keep this concise, I will omit the private methods' code, the constructor and some small snippets. Let us see an overview of the Java code that writes Java code. Here is a link to the full code.
class DefinitionWriter {
    private final TypeElement definedClass; // 1
    private final List<TypeMirror> constructorParameterTypes; // 2
    private final ClassName definedClassName; // 3
    ...
}
Phew, that is a lot. Don’t be afraid; it’s simpler than it looks.
There are three instance fields:
1. TypeElement definedClass is our bean,
2. List<TypeMirror> constructorParameterTypes contains the types of the bean constructor's parameters (who would have guessed, right?),
3. ClassName definedClassName is the JavaPoet object created out of definedClass. It represents a fully qualified name for classes.
TypeSpec is a JavaPoet class representing a Java type definition (a class or an interface). It is created using the classBuilder static method, to which we pass our strange name, constructed from the actual bean type name.
ParameterizedTypeName.get(ClassName.get(BeanDefinition.class), definedClassName) creates code that represents BeanDefinition<BeanTypeName>, which is applied as a super interface of our class through the addSuperinterface method.
The create() method implementation is not that hard, and it’s quite self-explanatory. Please look at the createMethodSpec() method and its application.
The same applies to the type() method as for the create().
The scopeProvider() is similar to the previous methods. However, the tricky part is to invoke the constructor. The singletonScopeInitializer() is responsible for creating a constructor call wrapped in ScopeProvider.singletonScope(beanProvider -> …). We call BeanProvider.provide for every parameter to get the dependency and keep the calls in the order of the constructor parameters.
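Putting those pieces together, the top-level method could look roughly like this (a sketch: createMethodSpec() and scopeProvider() are the methods mentioned above, while typeMethodSpec() and the exact builder chain are my assumptions):

JavaFile createDefinition() {
    TypeSpec typeSpec = TypeSpec.classBuilder("$%s$Definition".formatted(definedClass.getSimpleName())) // the inconvenient name
            .addModifiers(Modifier.PUBLIC)
            .addSuperinterface(ParameterizedTypeName.get(ClassName.get(BeanDefinition.class), definedClassName))
            .addField(scopeProvider())     // the ScopeProvider field
            .addMethod(createMethodSpec()) // create(BeanProvider)
            .addMethod(typeMethodSpec())   // type()
            .build();
    return JavaFile.builder(definedClassName.packageName(), typeSpec).build();
}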
Ok, the BeanDefinitions are ready. Now, we move on to the ScopeProvider.
    public synchronized T apply(BeanProvider beanProvider) {
        if (value == null) {
            value = delegate.apply(beanProvider);
        }
        return value;
    }
}
1. You can see the sealed interface definition that extends Function<BeanProvider, T>, so the Function.apply() method is available.
2. The singletonScope(…) static method is the factory method for SingletonProvider.
3. The implementation of SingletonProvider is based on a typical lazy-value implementation in Java. In the synchronized apply method, we create the instance of our bean only if there isn't one yet. The value field is marked volatile to prevent issues in a multithreaded environment.
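For reference, the whole type can be sketched as follows, assuming the names used above (the repository version may differ in details):

public sealed interface ScopeProvider<T> extends Function<BeanProvider, T> permits SingletonProvider {

    // The factory method for the only scope the framework supports.
    static <T> ScopeProvider<T> singletonScope(Function<BeanProvider, T> delegate) {
        return new SingletonProvider<>(delegate);
    }
}

final class SingletonProvider<T> implements ScopeProvider<T> {
    private final Function<BeanProvider, T> delegate;
    private volatile T value; // volatile guarantees safe publication of the lazily created bean

    SingletonProvider(Function<BeanProvider, T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public synchronized T apply(BeanProvider beanProvider) {
        if (value == null) {
            value = delegate.apply(beanProvider);
        }
        return value;
    }
}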
Now we are ready. It is time for the runtime part of the framework.
Step 6 – Runtime provisioning of beans
Runtime provisioning is the last part of the framework to work on. The BeanProvider interface has already been defined. Now we just need the implementation to do the actual provisioning.
The BaseBeanProvider must have access to all the instantiated BeanDefinitions. However, the BaseBeanProvider itself shouldn't be responsible for finding and instantiating them.
The BeanProviderFactory
Due to the mentioned fact, the BeanProviderFactory takes that responsibility via its static BeanProvider getInstance(String… packages) method, where the packages parameter defines which packages to scan for BeanDefinitions present on the classpath. This is the code:
...
        throw new FailedToInstantiateBeanDefinitionException(e, ex);
    }
}
The method is responsible for getting an instance of the BeanProvider.
Here is where it gets interesting. I define the constant TYPE_QUERY with a very specific type from the Reflections library. The project README.md describes the library as follows:
Reflections scans and indexes your project’s classpath metadata, allowing reverse transitive query of the type system on runtime.
I encourage you to read more about it, but here I will just explain how it is used in the code. The defined QueryFunction will be used to scan the classpath at runtime to find all subtypes of BeanDefinition.
The configuration is created for the Reflections object. It will be used in the next part of the code.
The configuration consists of the packages parameter and a package filter, so the BeanProviderFactory scans the io.jd package and the passed packages. Thanks to that, the framework only provides beans from the expected packages.
The Reflections object is created. It will be responsible for performing our query later in the code.
The reflections object performs the TYPE_QUERY. All the BeanDefinition instances are then created using the static BeanDefinition<?> getInstance(Class<?> e) method.
The method that creates the BeanDefinition instances uses reflection. When there's an exception, the code wraps it in a custom RuntimeException. The code of the custom exception is here.
Finally, an instance of the BeanProvider interface is returned, in the form of a BaseBeanProvider, whose source will be presented in the next few paragraphs.
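A condensed sketch of the factory along those lines, assuming the Reflections 0.10 query API (Scanners.SubTypes) and eliding the package filter; the repository code may differ in details:

public class BeanProviderFactory {

    private static final QueryFunction<Store, Class<?>> TYPE_QUERY =
            Scanners.SubTypes.of(BeanDefinition.class).asClass(); // query for all BeanDefinition subtypes

    public static BeanProvider getInstance(String... packages) {
        var configuration = new ConfigurationBuilder()
                .forPackages("io.jd") // the framework's own package
                .forPackages(packages);
        var reflections = new Reflections(configuration);
        var definitions = reflections.get(TYPE_QUERY).stream()
                .map(BeanProviderFactory::getInstance) // instantiate each definition reflectively
                .toList();
        return new BaseBeanProvider(definitions);
    }

    private static BeanDefinition<?> getInstance(Class<?> e) {
        try {
            return (BeanDefinition<?>) e.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException ex) {
            throw new FailedToInstantiateBeanDefinitionException(e, ex);
        }
    }
}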
BaseBeanProvider
So, how is the BaseBeanProvider implemented? It is easy to grasp. The source code in the repository is very similar, but (spoiler alert!) changed to handle @Transactional in Part 4.
class BaseBeanProvider implements BeanProvider {
    private final List<? extends BeanDefinition<?>> definitions;

    public BaseBeanProvider(List<? extends BeanDefinition<?>> definitions) {
        this.definitions = definitions;
    }

    @Override
    public <T> List<T> provideAll(Class<T> beanType) { // 1
        return definitions.stream()
                .filter(definition -> beanType.isAssignableFrom(definition.type()))
                .map(definition -> beanType.cast(definition.create(this)))
                .toList();
    }

    @Override
    public <T> T provide(Class<T> beanType) { // 2
        var beans = provideAll(beanType);
        if (beans.isEmpty()) { // 3
            throw new IllegalStateException("No bean of given type: '%s'".formatted(beanType.getCanonicalName()));
        } else if (beans.size() > 1) { // 4
            throw new IllegalStateException("More than one bean of given type: '%s'".formatted(beanType.getCanonicalName()));
        } else {
            return beans.get(0); // 5
        }
    }
}
1. provideAll(Class<T> beanType) goes through all the BeanDefinitions and finds those whose type() returns a Class<?> that is a subtype of, or exactly, the provided beanType. Thanks to that, it can collect all matching beans.
2. provide(Class<T> beanType) is also simple. It reuses the provideAll method and then takes the matching beans.
3. This piece of code checks whether there is any bean matching the beanType and throws an exception if not.
4. This one checks whether there is more than one bean matching the beanType and throws an exception if so.
5. If there is exactly one matching bean, it is returned.
That’s it!
We got all the parts. Now we should check if the code works.
Did we miss something?
Shouldn’t we have started with tests of the annotation processor? How can the annotation processor be tested?
Annotation processor testing
Annotation processors are rather poorly suited to being tested. One way to test one is to create a separate project or Gradle/Maven submodule that uses the annotation processor, where a compilation failure means something is wrong. That doesn't sound great, right?
The other option is to utilise the compile-testing library created by Google. It simplifies the testing process, even though the tool isn't perfect. You can find a tutorial on how to use it here.
I introduced both approaches in the article’s repository. The compile-testing was used for “unit tests”, and the integrationTest module was used for “integration tests”.
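For illustration, a compile-testing "unit test" could look roughly like this (a sketch; the inlined bean source and the expected generated name follow the conventions described earlier):

@Test
void shouldCompileSingletonBean() {
    // compile an in-memory source file with our processor attached
    Compilation compilation = Compiler.javac()
            .withProcessors(new BeanProcessor())
            .compile(JavaFileObjects.forSourceString("io.jd.test.ServiceA", """
                    package io.jd.test;

                    @jakarta.inject.Singleton
                    public class ServiceA {
                    }
                    """));

    // the compilation should succeed and produce a generated definition
    CompilationSubject.assertThat(compilation).succeeded();
    CompilationSubject.assertThat(compilation).generatedSourceFile("io.jd.test.$ServiceA$Definition");
}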
You can find the test implementation and configuration in the framework subproject.
With the framework in place, the starting point of our application can use the BeanProvider to get the ParticipationService instance (see the repository for the exact code):

public class FrameworkApp {
    public static void main(String[] args) {
        var provider = BeanProviderFactory.getInstance();
        var participationService = provider.provide(ParticipationService.class);
        participationService.participate(new ParticipantId(), new EventId());
    }
}
However, to make it work, we have to add @Singleton here and there. Please refer to the source code in the repository. If we run that main, we get the same result as before:
Begin transaction
Participant: 'Participant[]' takes part in event: 'Event[]'
Commit transaction
Therefore, we can call it a success. The framework works like a charm!
What’s next?
When you ran the code from the previous paragraph, you saw additional messages about beginning and committing a transaction.
Handling transactions is also typical for frameworks. I will cover how to handle them in the next article of this series.