Our client’s current process of billing suppliers is not compliant with a newly passed law. The deadline is tight, and missing it carries harsh financial consequences. The existing ways of working also raise questions about the provenance of charges, especially those raised retrospectively and without any audit trail. To make matters worse, the process is a serious bottleneck, blocking a substantial part of the client’s income.
The development needs to happen in parallel with a business transformation whose scope is not yet fully understood. This is a huge standardization effort, as the resulting solution will have to work across many regions and businesses. As a consequence, the requirements evolve constantly, and it is difficult to formulate a clear backlog of work.
The software needs to satisfy strict audit requirements on both the functional and non-functional level. We start by creating a simple domain model that documents our current understanding, following Behaviour-Driven Development (BDD) combined with Domain-Driven Design (DDD).
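To make this concrete, the following is a minimal sketch of what such a BDD-style domain model might look like. All names here (`Charge`, `BillingPeriod`, the closing-date rule) are illustrative assumptions, not the client's actual ubiquitous language; the scenario at the bottom follows the Given/When/Then shape of a BDD specification.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical domain model for illustration only; the real model and
# its invariants were agreed with stakeholders in BDD workshops.

@dataclass(frozen=True)
class Charge:
    supplier: str
    amount_pence: int
    raised_on: date

@dataclass
class BillingPeriod:
    closes_on: date
    charges: list = field(default_factory=list)

    def raise_charge(self, charge: Charge) -> None:
        # Example invariant: no retrospective, untraceable charges
        # after the billing period has closed.
        if charge.raised_on > self.closes_on:
            raise ValueError("charge raised after the billing period closed")
        self.charges.append(charge)

# BDD-style scenario: Given an open billing period,
# When a charge is raised in time, Then it is recorded.
period = BillingPeriod(closes_on=date(2024, 3, 31))
period.raise_charge(Charge("ACME", 12_50, date(2024, 3, 1)))
assert len(period.charges) == 1
```

Even a naive model like this gives stakeholders something concrete to object to, which is exactly the point of the next step.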
This model is intentionally simple and naive. By iteratively presenting invalid models of the problem, we capture stakeholders’ attention and pinpoint the main areas of interest. By addressing these areas up front and holistically, rather than focusing on their details individually, we gain insight into the workflow, which allows us to select the optimal architecture.
This allows us to identify the invariants of the architecture. If picked correctly, they remain valid under any change in requirements and enable incremental evolution and rapid experimentation:
The core requirement, keeping an audit log of user actions exactly as they happened, is at the heart of the design. As a result, any new functionality is hard to get wrong and easy to get right. This has allowed our client to pass the external audit and demonstrate full compliance with the law.
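The essence of putting the audit log at the heart of the design can be sketched as an append-only store that every action must pass through. This is a simplified in-memory illustration under assumed names, not the production implementation, which would persist entries durably.

```python
from datetime import datetime, timezone

# Minimal sketch of an append-only audit log: every user action is
# recorded verbatim as an immutable entry. Names are illustrative.

class AuditLog:
    def __init__(self):
        self._entries = []  # append-only: entries are never updated or deleted

    def record(self, actor: str, action: str, payload: dict) -> None:
        self._entries.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "payload": dict(payload),
        })

    def entries(self) -> list:
        # Expose copies only, so history cannot be rewritten by callers.
        return [dict(e) for e in self._entries]

audit = AuditLog()
audit.record("alice", "RaiseCharge", {"supplier": "ACME", "amount_pence": 1250})
assert audit.entries()[0]["action"] == "RaiseCharge"
```

Because every feature writes through this single funnel, compliance is a property of the architecture rather than of each individual feature.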
The solution is inherently event-based and asynchronous, which gives us proper consistency, with at least monotonic read consistency, on the one hand, and easy integration with both upstream and downstream services on the other. The system makes no distinction between calls from users and calls from upstream systems: both are served by the same command-based mechanism. Downstream systems consume events, which can easily be propagated to an enterprise data integration platform such as Apache Kafka.
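The single command-based entry point can be sketched as follows: a command expresses an intent, handling it produces an event, and the event is handed to a publisher. The class and field names are assumptions for illustration; in production the `publish` callable would be, for example, a Kafka producer rather than an in-memory list.

```python
from dataclasses import dataclass
from typing import Callable

# Sketch of one command pipeline serving both users and upstream
# systems, emitting events for downstream consumers.

@dataclass(frozen=True)
class RaiseCharge:        # a command: an intent to change state
    supplier: str
    amount_pence: int

@dataclass(frozen=True)
class ChargeRaised:       # an event: a fact that has happened
    supplier: str
    amount_pence: int

class CommandBus:
    def __init__(self, publish: Callable[[object], None]):
        self._publish = publish  # e.g. a message-broker producer in production

    def handle(self, cmd: RaiseCharge) -> ChargeRaised:
        # Validate and apply the command, then publish the resulting event.
        event = ChargeRaised(cmd.supplier, cmd.amount_pence)
        self._publish(event)
        return event

published = []
bus = CommandBus(publish=published.append)
bus.handle(RaiseCharge("ACME", 1250))
assert published == [ChargeRaised("ACME", 1250)]
```

Because users and upstream systems share this one path, every caller gets the same validation, auditing and event propagation for free.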
The architecture automatically handles almost all concerns common to any new functionality, such as observability and correct execution. For example, there is no need to write log messages by hand: each command, event and event consumption is logged automatically, based on the structure of the code, and can be easily correlated.
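One common way to achieve such structure-driven logging, sketched here under assumed names, is a single decorator applied to every command handler: it derives the log lines from the command and event types and threads a correlation id through the call, so no handler ever writes its own log messages.

```python
import functools
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("pipeline")

# Sketch: one decorator logs every command and its resulting event with
# a shared correlation id (decorator and field names are illustrative).

def logged(handler):
    @functools.wraps(handler)
    def wrapper(command, correlation_id=None):
        correlation_id = correlation_id or str(uuid.uuid4())
        logger.info("command=%s correlation=%s", type(command).__name__, correlation_id)
        result = handler(command, correlation_id)
        logger.info("event=%s correlation=%s", type(result).__name__, correlation_id)
        return result
    return wrapper

class Ping: pass       # illustrative command
class Ponged: pass     # illustrative event

@logged
def handle_ping(cmd, correlation_id):
    return Ponged()

assert isinstance(handle_ping(Ping()), Ponged)
```

Since the logging lives in one place, every handler, present and future, is observable by construction.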
The system is able to operate at scale across multiple geographical regions and to support different functional requirements. The architecture never needs to be bent: any new functionality is realized by adding extensions rather than modifying existing code, which greatly shortens delivery time.
The solution has broadened the horizon of what is achievable by the business itself. The system quickly gained features that greatly shortened the turnaround time of new deals by explicitly and automatically monitoring bottlenecks. One example is reminders sent to offending individuals and automatically escalated to their managers if no action is taken. Such side effects can be safely plugged into this architecture at any time, without affecting the rest of the system.
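The extension point that makes such plug-in side effects safe can be sketched as an event bus with subscribers: the core publishes events and never knows which side effects are attached. The `ActionOverdue` event and the reminder handler below are illustrative assumptions standing in for the real reminder and escalation features.

```python
from collections import defaultdict

# Sketch of the plug-in mechanism: side effects subscribe to events and
# can be added at any time without modifying the core (names illustrative).

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event):
        for handler in self._subscribers[type(event)]:
            handler(event)

class ActionOverdue:
    def __init__(self, owner):
        self.owner = owner

sent = []
bus = EventBus()
# A plugged-in side effect: remind the owner of an overdue action.
# Escalation to a manager would subscribe to a later event the same way.
bus.subscribe(ActionOverdue, lambda e: sent.append(f"reminder:{e.owner}"))
bus.publish(ActionOverdue("bob"))
assert sent == ["reminder:bob"]
```

Removing or adding a subscriber touches none of the publishing code, which is why such features can be bolted on at any time without risk to the core.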
Our system processed close to a billion pounds’ worth of deals within its first year of operation.