



A step-by-step guide for Java developers on how to build a framework using an annotation processor. This is part one of the series.
A majority of developers in the JVM world work on various web applications, most of which are based on a framework like Spring or Micronaut. However, some people claim that frameworks introduce too much overhead. I decided to check how valid such claims are and how much work it takes to replicate what frameworks give us out of the box.
This article isn’t about whether or not it is feasible to use a framework or when to use one. It is about writing your own framework – tinkering is the best way of learning!
For the sake of simplicity, we will use demo app code. The application consists of a ParticipationService that depends on two repositories and a transaction manager.
The starting point of an application without a framework would look like the code below:
public class NoFrameworkApp {
    public static void main(String[] args) {
        ParticipationService participationService = new ManualTransactionParticipationService(
                new ParticipantRepositoryImpl(),
                new EventRepositoryImpl(),
                new TransactionalManagerStub()
        );

        participationService.participate(new ParticipantId(), new EventId());
    }
}
As we can see, the application’s main method is responsible for providing implementations of the interfaces that ManualTransactionParticipationService depends on. The developer must know which ParticipationService implementation should be created in the main method. When using a framework, programmers typically don’t have to create instances and wire dependencies on their own; they rely on the core feature of frameworks – Dependency Injection.
So, let’s take a look at a simple implementation of the dependency injection container based on annotation processing.
Dependency Injection, or DI, is a pattern in which an object receives the objects it depends on (its instance variables) from the outside instead of creating them itself.
But how is this done? The pattern separates responsibility for object creation from its usage. The required objects are provided (“injected”) during runtime, and the pattern’s implementation handles the creation and lifecycle of the dependencies.
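A minimal sketch of the idea, using hypothetical Engine and Car types that are not part of the demo app:

interface Engine {
    void start();
}

class Car {
    private final Engine engine; // the dependency

    // The Car does not construct its Engine – it receives ("is injected with") one.
    // A DI container would pick an Engine implementation and call this constructor.
    Car(Engine engine) {
        this.engine = engine;
    }

    void drive() {
        engine.start();
    }
}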
The pattern has advantages, like decreased coupling, simplified testing, and increased flexibility. But it also has drawbacks: framework dependence, harder debugging, and more work at the beginning of the project.
NOTE: Dependency Injection is an implementation of Inversion of Control!
There are at least a few DI frameworks widely adopted in the Java world.
Most of these DI frameworks use annotations as one of the possible ways to configure the bindings. By bindings, I mean the configuration of which implementations should be used for interfaces or which dependencies should be provided to create objects.
In fact, DI is so popular that a Java Specification Request (JSR 330) was made for it.
Spring, the most popular Java framework, processes annotations at runtime. The solution is heavily based on the reflection mechanism. The reflection-based approach is one of the possible ways to handle annotations, and if you would like to follow that lead, please refer to Java Own Framework – step by step.
In addition to runtime handling, there is another approach: part of the dependency injection work can happen during annotation processing, which takes place at compile time. It has become popular lately thanks to Micronaut and Quarkus, as they utilise this approach.
Annotation processing isn’t just for dependency injection; it is part of various tools, for example, libraries like Lombok or MapStruct.
The purpose of annotation processing is to generate new files, not to modify existing ones. It can also perform compile-time checks, like ensuring that all class fields are final. If something is wrong, the processor can fail the compilation and provide the programmer with information about the error.
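For illustration, here is a minimal sketch of such a check. It assumes a hypothetical io.jd.AllFieldsFinal annotation (the same name appears in the compilation rounds output below) and fails the compilation when an annotated class contains a non-final field:

import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Modifier;
import javax.lang.model.element.TypeElement;
import javax.lang.model.element.VariableElement;
import javax.lang.model.util.ElementFilter;
import javax.tools.Diagnostic;

@SupportedAnnotationTypes("io.jd.AllFieldsFinal")
@SupportedSourceVersion(SourceVersion.RELEASE_17)
class AllFieldsFinalProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (var element : roundEnv.getElementsAnnotatedWith(annotation)) {
                // Inspect every field of the annotated type.
                for (VariableElement field : ElementFilter.fieldsIn(element.getEnclosedElements())) {
                    if (!field.getModifiers().contains(Modifier.FINAL)) {
                        // Reporting an ERROR through the Messager fails the compilation.
                        processingEnv.getMessager().printMessage(
                                Diagnostic.Kind.ERROR,
                                "Field '%s' must be final".formatted(field.getSimpleName()),
                                field);
                    }
                }
            }
        }
        return false;
    }
}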
Annotation processors are written in Java and are used by javac during compilation. However, a processor must be compiled before it can be used – it cannot process itself directly.
The processing happens in rounds. In every round, the compiler searches for annotated elements. Then the compiler matches annotated elements to the processors that declared being interested in processing them. Any generated files become input for the next round of the compilation. If there are no more files to process, the compilation ends.
There are two compiler flags, -XprintProcessorInfo and -XprintRounds, that will print information about the processing and the compilation rounds.
Round 1:
input files: {io.jd.Data}
annotations: [io.jd.AllFieldsFinal]
last round: false
Processor io.jd.AnnotationProcessor matches [/io.jd.SomeAnnotation] and returns true.
Round 2:
input files: {}
annotations: []
last round: true
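The output above can be produced with an invocation along these lines (the JAR name and source path are illustrative):

javac -XprintRounds -XprintProcessorInfo -cp annotation-processor.jar io/jd/Data.java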
You can find an example config for Gradle here.
To write an annotation processor, you must create the Processor interface implementation.
The Processor interface defines six methods, which is a lot to implement. Fortunately, the JDK authors prepared the AbstractProcessor to be extended, simplifying the programmer’s job. The AbstractProcessor’s API differs slightly from the Processor’s and provides default implementations of some of the methods.
Once the implementation is ready, you must make the compiler aware of your processor. javac has some flags for annotation processing, but that is not how you should work with it. To register the processor, you must specify its fully qualified name in the META-INF/services/javax.annotation.processing.Processor file; the file can contain more than one processor. This approach works well with build tools – no one builds their project with bare javac, right?
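For our framework, the registration file could look like this (the package name is an assumption for illustration; one fully qualified name per line):

# META-INF/services/javax.annotation.processing.Processor
io.jd.framework.processor.BeanProcessor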
The build tools like Maven or Gradle have support for using the processors.
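For instance, a Gradle module consuming our processor might declare it like this (a sketch; the module name is illustrative):

dependencies {
    implementation(project(":framework"))        // the annotations and runtime classes
    annotationProcessor(project(":framework"))   // runs the processor during compilation
}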
As mentioned above, the Java Own Framework – step by step article covers how runtime annotation processing for DI works. As a counterpart, I will gladly show a basic compile-time framework. This approach has some advantages over the ‘classic’ one; you can read more about them in the Micronaut release notes. Neither the framework we are building nor Micronaut is reflection-free, but they rely on reflection only partially and in a limited manner.
Note: An annotation processor is a flexible tool. The presented solution is highly unlikely to be the only option.
Here comes the main dish of the repository. We are going to build our DI framework together. The goal is to make the code below work.
interface Water {
    String name();
}

@Singleton
class SparklingWater implements Water {

    @Override
    public String name() {
        return "Bubbles";
    }
}

public class App {
    public static void main(String[] args) {
        BeanProvider provider = BeanProviderFactory.getInstance();
        var bean = provider.provide(Water.class);
        System.out.println(bean.name()); // prints "Bubbles"
    }
}
We can make some assumptions based on the code above. First, we need the framework to provide annotations for marking classes. I decided to use the standardised jakarta.inject.* library for annotations – to be precise, just jakarta.inject.Singleton. The same annotation is used by Micronaut.
The second thing we can be sure about is that we need a BeanProvider. Frameworks like to refer to this concept using the word Context, as in ApplicationContext.
The third necessary thing is an annotation processor that will process the mentioned annotation(s). It should produce classes allowing the framework to provide the expected dependencies at runtime.
The framework should use the reflection mechanism as little as possible.
For the sake of simplicity, we will assume the framework:
The annotation processing approach is powerful and offers many ways to achieve the goal. Therefore, design is the point where we should start. We will begin with a basic version, which we will extend gradually as the article progresses.
The diagram below shows the high-level architecture of the desired solution.
As you can see, we need a BeanProcessor to generate implementations of the BeanDefinition for each bean. Then the BeanDefinitions are picked by BaseBeanProvider, which implements the BeanProvider (not in the diagram). In the application code, we use the BaseBeanProvider, created for us by the BeanProviderFactory. We also use the ScopeProvider interface that is supposed to handle the scope of the bean lifespan. In the example, as mentioned, we only care about the singleton scope.
The framework itself is placed in the Gradle subproject called framework.
Let’s start with the BeanDefinition interface.
package io.jd.framework;

public interface BeanDefinition<T> {
    T create(BeanProvider beanProvider);

    Class<T> type();
}
The interface has only two methods: type(), which provides a Class object for the bean class, and create(…), which builds the bean itself. The create(…) method accepts a BeanProvider so it can obtain the dependencies it needs while constructing the bean; the definition is not supposed to create them itself – hence the DI.
The framework will also need the BeanProvider interface with just two methods.
package io.jd.framework;

public interface BeanProvider {
    <T> T provide(Class<T> beanType);

    <T> Iterable<T> provideAll(Class<T> beanType);
}
The provideAll(…) method provides all beans that match the parameter Class<T> beanType. By match, I mean that the given bean is a subtype of, or the same type as, the given beanType. The provide(…) method is almost the same but provides exactly one matching bean; an exception is thrown when there is no matching bean or more than one.
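To make the contract concrete, here is a sketch in terms of the earlier Water example (the Juice type is hypothetical, with no registered bean):

BeanProvider provider = BeanProviderFactory.getInstance();

Water water = provider.provide(Water.class);                  // exactly one matching bean: SparklingWater
Iterable<Water> allWaters = provider.provideAll(Water.class); // every matching bean; here just one

// provider.provide(Juice.class);  // would throw: no bean of the given type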
We expect the annotation processor to find classes annotated with @Singleton, then check if they are valid (not interfaces or abstract classes, and with exactly one constructor). The final step is creating an implementation of the BeanDefinition for each annotated class.
So we should start by implementing it, right? Test-driven development would object, but we will get back to the tests later. For now, let’s focus on the implementation.
Let’s define our processor:
import javax.annotation.processing.AbstractProcessor;
class BeanProcessor extends AbstractProcessor {
}
Our processor will extend the provided AbstractProcessor instead of fully implementing the Processor interface.
The actual implementation differs from what you see here; don’t worry, it will be used to its full extent in the next step. The simplified version shown here is enough to do the actual DI work.
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
@SupportedAnnotationTypes({"jakarta.inject.Singleton"}) // 1
@SupportedSourceVersion(SourceVersion.RELEASE_17) // 2
class BeanProcessor extends AbstractProcessor {
}
Thanks to the use of the AbstractProcessor, we don’t have to override any methods. The annotations can be used instead: @SupportedAnnotationTypes (1) declares which annotation types the processor is interested in, and @SupportedSourceVersion (2) declares the latest source version the processor supports.
Please assume that the code below is included in the BeanProcessor class body.
@Override
public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) { // 1
    try {
        processBeans(roundEnv); // 2
    } catch (Exception e) {
        processingEnv.getMessager() // 3
                .printMessage(ERROR, "Exception occurred %s".formatted(e));
    }
    return false; // 4
}
private void processBeans(RoundEnvironment roundEnv) {
    Set<? extends Element> annotated = roundEnv.getElementsAnnotatedWith(Singleton.class); // 1
    Set<TypeElement> types = ElementFilter.typesIn(annotated); // 2
    var typeDependencyResolver = new TypeDependencyResolver(); // 3
    types.stream().map(t -> typeDependencyResolver.resolve(t, processingEnv.getMessager())) // 4
            .forEach(this::writeDefinition); // 5
}
Getting back to the TypeDependencyResolver, the code below shows the implementation:
public class TypeDependencyResolver {

    public Dependency resolve(TypeElement element, Messager messager) {
        var constructors = ElementFilter.constructorsIn(element.getEnclosedElements()); // 1
        return constructors.size() == 1 // 2
                ? resolveDependency(element, constructors) // 3
                : failOnTooManyConstructors(element, messager, constructors); // 4
    }

    private Dependency resolveDependency(TypeElement element, List<ExecutableElement> constructors) { // 5
        ExecutableElement constructor = constructors.get(0);
        return new Dependency(element, constructor.getParameters().stream().map(VariableElement::asType).toList());
    }
    ...
}
public final class Dependency {
    private final TypeElement type;
    private final List<TypeMirror> dependencies;
    ...

    public TypeElement type() {
        return type;
    }

    public List<TypeMirror> dependencies() {
        return dependencies;
    }
    ...
}
You may have noticed the peculiar type TypeMirror. It represents a type in the Java language (literally the language, as this is a compile-time construct).
To write Java code during annotation processing, you can use almost anything. You are good to go as long as you end up with CharSequence/String/byte[].
In examples on the Internet, you will find that it is popular to use a StringBuffer. Honestly, I find it inconvenient to write source code that way – and there is a better solution available to us.
JavaPoet is a library for writing Java source code using a fluent Java API. You will see it in action in the next section.
Getting back to the BeanProcessor – some parts of the file have not been revealed yet:
private void writeDefinition(Dependency dependency) {
    JavaFile javaFile = new DefinitionWriter(dependency.type(), dependency.dependencies()).createDefinition(); // 1
    writeFile(javaFile);
}

private void writeFile(JavaFile javaFile) { // 2
    try {
        javaFile.writeTo(processingEnv.getFiler());
    } catch (IOException e) {
        processingEnv.getMessager().printMessage(ERROR, "Failed to write definition %s".formatted(javaFile));
    }
}
The writing is done in two steps: creating the JavaFile with the DefinitionWriter (1) and writing it through the Filer (2).
Filer is an interface that supports file creation for an annotation processor. The place for the generated files to be stored is configured through the -s javac flag. However, most of the time, build tools handle it for you. In that case, the files are stored in a directory like build/generated/sources/annotationProcessor/java for Gradle or similar for different tools.
The creation of Java code takes place in DefinitionWriter, and you will see the implementation in a moment. However, the question is what such a definition looks like. I think an example will show it best.
For the bean below:
@Singleton
public class ServiceC {
    private final ServiceA serviceA;
    private final ServiceB serviceB;

    public ServiceC(ServiceA serviceA, ServiceB serviceB) {
        this.serviceA = serviceA;
        this.serviceB = serviceB;
    }
}
The definition should look like the code below:
public class $ServiceC$Definition implements BeanDefinition<ServiceC> { // 1
    private final ScopeProvider<ServiceC> provider = // 2
            ScopeProvider.singletonScope(beanProvider -> new ServiceC(beanProvider.provide(ServiceA.class), beanProvider.provide(ServiceB.class)));

    @Override
    public ServiceC create(BeanProvider beanProvider) { // 3
        return provider.apply(beanProvider);
    }

    @Override
    public Class<ServiceC> type() { // 4
        return ServiceC.class;
    }
}
There are four elements here: the class implements BeanDefinition<ServiceC> (1); a ScopeProvider field handles the construction and scope of the bean (2); the create(…) method delegates to the provider (3); and the type() method returns the ServiceC class object (4).
Singleton scope is the only scope the framework covers, so the ScopeProvider.singletonScope() method will be the only one used. The Function<BeanProvider, Bean> used to instantiate the bean is passed to ScopeProvider.singletonScope().
I will cover the implementation of the ScopeProvider later. For now, it is enough to know that it will ensure just one instance of the bean in our DI context.
However, if you are curious, the source code is available here.
The example shows that the only bean-specific things are the type passed to the BeanDefinition declaration, the new call, and the field/returned types.
To keep this concise, I will omit the code of the private methods, the constructor, and some small snippets. Let us see an overview of Java code that writes Java code. Here is a link to the full code.
class DefinitionWriter {
    private final TypeElement definedClass; // 1
    private final List<TypeMirror> constructorParameterTypes; // 1
    private final ClassName definedClassName; // 1

    public JavaFile createDefinition() {
        ParameterizedTypeName parameterizedBeanDefinition = ParameterizedTypeName.get(ClassName.get(BeanDefinition.class), definedClassName); // 3
        var definitionSpec = TypeSpec.classBuilder("$%s$Definition".formatted(definedClassName.simpleName())) // 2
                .addSuperinterface(parameterizedBeanDefinition) // 3
                .addMethod(createMethodSpec()) // 4
                .addMethod(typeMethodSpec()) // 5
                .addField(scopeProvider()) // 6
                .build();
        return JavaFile.builder(definedClassName.packageName(), definitionSpec).build(); // 7
    }

    private MethodSpec createMethodSpec() { ... } // 4

    private MethodSpec typeMethodSpec() { ... } // 5

    private FieldSpec scopeProvider() { ... } // 6

    private CodeBlock singletonScopeInitializer() { ... } // 6
}
Phew, that is a lot. Don’t be afraid; it’s simpler than it looks.
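As a taste of what the omitted methods contain, below is my sketch of typeMethodSpec(). It is a reconstruction under the assumption that it mirrors the generated type() method shown earlier, not necessarily the repository’s exact code:

private MethodSpec typeMethodSpec() {
    // Generates, e.g. for ServiceC:
    //   @Override
    //   public Class<ServiceC> type() {
    //       return ServiceC.class;
    //   }
    ParameterizedTypeName classType = ParameterizedTypeName.get(ClassName.get(Class.class), definedClassName);
    return MethodSpec.methodBuilder("type")
            .addAnnotation(Override.class)
            .addModifiers(Modifier.PUBLIC)
            .returns(classType)
            .addStatement("return $T.class", definedClassName)
            .build();
}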
Ok, the BeanDefinitions are ready. Now, we move on to the ScopeProvider.
public interface ScopeProvider<T> extends Function<BeanProvider, T> { // 1

    static <T> ScopeProvider<T> singletonScope(Function<BeanProvider, T> delegate) { // 2
        return new SingletonProvider<>(delegate);
    }
}

final class SingletonProvider<T> implements ScopeProvider<T> { // 3
    private final Function<BeanProvider, T> delegate;
    private volatile T value;

    SingletonProvider(Function<BeanProvider, T> delegate) {
        this.delegate = delegate;
    }

    public synchronized T apply(BeanProvider beanProvider) {
        if (value == null) {
            value = delegate.apply(beanProvider);
        }
        return value;
    }
}
The SingletonProvider creates the bean lazily: the synchronized apply(…) method, together with the volatile field, guarantees that only one instance is ever created, even when multiple threads request the bean concurrently. Now we are ready – it is time for the runtime part of the framework.
Runtime provisioning is the last part of the framework to work on. The BeanProvider interface has already been defined. Now we just need the implementation to do the actual provisioning.
The BaseBeanProvider must have access to all the instantiated BeanDefinitions, because it shouldn’t be responsible for finding and creating them itself.
Due to that, the BeanProviderFactory takes the responsibility via its static BeanProvider getInstance(String… packages) method, where the packages parameter defines which packages to scan for BeanDefinitions on the classpath. This is the code:
public class BeanProviderFactory {

    private static final QueryFunction<Store, Class<?>> TYPE_QUERY = SubTypes.of(BeanDefinition.class).asClass(); // 2

    public static BeanProvider getInstance(String... packages) { // 1
        ConfigurationBuilder reflectionsConfig = new ConfigurationBuilder() // 3
                .forPackages("io.jd") // 4
                .forPackages(packages) // 4
                .filterInputsBy(createPackageFilter(packages)); // 4
        var reflections = new Reflections(reflectionsConfig); // 5
        var definitions = definitions(reflections); // 6
        return new BaseBeanProvider(definitions); // 8
    }

    private static FilterBuilder createPackageFilter(String[] packages) { // 4
        var filter = new FilterBuilder().includePackage("io.jd");
        Arrays.asList(packages).forEach(filter::includePackage);
        return filter;
    }

    private static List<? extends BeanDefinition<?>> definitions(Reflections reflections) { // 6
        return reflections
                .get(TYPE_QUERY)
                .stream()
                .map(BeanProviderFactory::getInstance) // 7
                .toList();
    }

    private static BeanDefinition<?> getInstance(Class<?> e) { // 7
        try {
            return (BeanDefinition<?>) e.getDeclaredConstructors()[0].newInstance();
        } catch (InstantiationException | IllegalAccessException | InvocationTargetException ex) {
            throw new FailedToInstantiateBeanDefinitionException(e, ex);
        }
    }
}
The BeanProviderFactory uses the Reflections library to scan the classpath. I encourage you to read more about it, but here I will just explain how it is used in the code. The defined QueryFunction will be used to scan the classpath at runtime to find all subtypes of BeanDefinition.
So, how is the BaseBeanProvider implemented? It is easy to grasp. The source code in the repository is very similar but (spoiler alert!) changed to handle @Transactional in Part 4.
class BaseBeanProvider implements BeanProvider {
    private final List<? extends BeanDefinition<?>> definitions;

    public BaseBeanProvider(List<? extends BeanDefinition<?>> definitions) {
        this.definitions = definitions;
    }

    @Override
    public <T> List<T> provideAll(Class<T> beanType) { // 1
        return definitions.stream().filter(def -> beanType.isAssignableFrom(def.type()))
                .map(def -> beanType.cast(def.create(this)))
                .toList();
    }

    @Override
    public <T> T provide(Class<T> beanType) { // 2
        var beans = provideAll(beanType); // 2
        if (beans.isEmpty()) { // 3
            throw new IllegalStateException("No bean of given type: '%s'".formatted(beanType.getCanonicalName()));
        } else if (beans.size() > 1) { // 4
            throw new IllegalStateException("More than one bean of given type: '%s'".formatted(beanType.getCanonicalName()));
        } else {
            return beans.get(0); // 5
        }
    }
}
That’s it!
We got all the parts. Now we should check if the code works.
Shouldn’t we have started with tests of the annotation processor? And how can an annotation processor be tested, anyway?
Annotation processors are rather poorly suited to testing. One way is to create a separate project – a Gradle or Maven submodule – that uses the annotation processor, so a compilation failure means something is wrong. That doesn’t sound great, right?
The other option is to utilise the compile-testing library created by Google. It simplifies the testing process, even though the tool isn’t perfect. Please find the tutorial on how to use it here.
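A test using compile-testing could look roughly like this (a sketch; the source string, test name, and assertions are illustrative):

import static com.google.testing.compile.CompilationSubject.assertThat;
import static com.google.testing.compile.Compiler.javac;

import com.google.testing.compile.Compilation;
import com.google.testing.compile.JavaFileObjects;
import org.junit.jupiter.api.Test;

class BeanProcessorTest {

    @Test
    void shouldGenerateDefinitionForSingleton() {
        // Compile an in-memory source file with the processor under test.
        Compilation compilation = javac()
                .withProcessors(new BeanProcessor())
                .compile(JavaFileObjects.forSourceString("io.jd.test.SomeSingleton", """
                        package io.jd.test;

                        @jakarta.inject.Singleton
                        public class SomeSingleton {
                        }
                        """));

        assertThat(compilation).succeededWithoutWarnings();
        // The processor should have generated a matching BeanDefinition.
        assertThat(compilation).generatedSourceFile("io.jd.test.$SomeSingleton$Definition");
    }
}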
I used both approaches in the article’s repository: compile-testing for “unit tests” and the integrationTest module for “integration tests”.
You can find the test implementation and configuration in the framework subproject’s files below:
In the beginning, there was NoFrameworkApp:
public class NoFrameworkApp {
    public static void main(String[] args) {
        ParticipationService participationService = new ManualTransactionParticipationService(
                new ParticipantRepositoryImpl(),
                new EventRepositoryImpl(),
                new TransactionalManagerStub()
        );

        participationService.participate(new ParticipantId(), new EventId());
    }
}
If the main method is run, we get three lines printed:
Begin transaction
Participant: 'Participant[]' takes part in event: 'Event[]'
Commit transaction
It looks like this with FrameworkApp:
public class FrameworkApp {
    public static void main(String[] args) {
        BeanProvider provider = BeanProviderFactory.getInstance();
        ParticipationService participationService = provider.provide(ParticipationService.class);
        participationService.participate(new ParticipantId(), new EventId());
    }
}
However, to make it work, we had to add @Singleton here and there. Please refer to the source code in the directory. If we run that main method, we get the same result:
Begin transaction
Participant: 'Participant[]' takes part in event: 'Event[]'
Commit transaction
Therefore, we can call it a success. The framework works like a charm!
Once you check the result of running the code from the previous paragraph, you will see additional messages about beginning and committing a transaction.
Handling the transactions is also typical for frameworks. I will cover how to handle transactions in the next article of this series.