Reasons behind the creation of Enterprise Architecture for Front-End applications
Many companies, regardless of their size or agility, have started investing in in-house software solutions that support their daily routines, reduce operating costs, or increase revenue.
What if it does not have to be a single solution, but a whole internal ecosystem? Building one without a well-designed, scalable, and understandable architecture that stays a step ahead is short-sighted, because the future is neither defined nor known.
Building software on top of a Front-End Enterprise Architecture (FEEA) significantly increases its technical value and helps secure your business in the digital era.
Front-End Enterprise Architecture is based on four concepts that, joined together, give an End-to-End solution that secures, scales, and pulls and pushes large amounts of data.
Every software solution has to offer a level of security appropriate to its end-users, which is impossible to achieve on the client side alone. One of the FEEA principles is a Back-End For Front-End (BFF) layer that validates, hashes, and secures all communication between the end-user and downstream systems or APIs, using JWT or trusted Single Sign-On providers.
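As a sketch of the kind of token check the BFF layer performs before forwarding a request, here is a minimal HS256 JWT signature verification in TypeScript. The helper names are illustrative, and a production system would rely on a vetted library such as `jose` or `jsonwebtoken`:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Encode bytes or text as base64url, the alphabet JWT uses.
function base64url(input: Buffer | string): string {
  return Buffer.from(input)
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// Verify the signature of an HS256-signed token against a shared secret,
// as a BFF middleware might do before calling downstream APIs.
function verifyHs256(token: string, secret: string): boolean {
  const parts = token.split(".");
  if (parts.length !== 3) return false;
  const [header, payload, signature] = parts;
  const expected = base64url(
    createHmac("sha256", secret).update(`${header}.${payload}`).digest()
  );
  // Constant-time comparison to avoid leaking timing information.
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  return a.length === b.length && timingSafeEqual(a, b);
}

// Sign a payload the same way, to demonstrate a round trip.
function signHs256(payload: object, secret: string): string {
  const header = base64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const body = base64url(JSON.stringify(payload));
  const sig = base64url(
    createHmac("sha256", secret).update(`${header}.${body}`).digest()
  );
  return `${header}.${body}.${sig}`;
}
```

Because only the BFF holds the secret, the signature never has to be checked (or forged) on the client.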
Even a small company thinks about scaling. If the software does not follow the same path, at some point it will no longer be efficient enough to support the expected growth. Environment and hardware independence, and as a result scalability, is the main advantage of the Cloud-Native, Kubernetes-based platform on which the FEEA runs.
Every solution should be designed with subsequent maintenance and knowledge transfer in mind, since the supplier is not necessarily an external company that will participate in the system's development indefinitely. An easily maintained, well-engineered architecture is the crucial factor here. The proposed Front-End Enterprise Architecture combines common technologies and libraries into understandable, clean End-to-End concepts that can be used across multiple solutions and domains at once. Such an approach makes it possible to distribute responsibility and knowledge across a number of independent teams without restricting their scope to the apps they own.
In an ecosystem of in-house applications that cover a number of areas in separation, each application may evolve in a different direction depending on its purpose. With a properly defined integration interface, a package-as-a-plugin approach becomes a powerful solution that offers nearly unlimited options for extensions and system mutations.
Multi-Level Code Sharing
Code is shared on multiple levels:
- Open Source – As in any JS/TS project, we leverage the rich front-end ecosystem with carefully chosen, high-quality libraries. Stars on GitHub and downloads on npm aren’t the main factor.
- Company – There are use cases where Open Source doesn’t provide solutions for our unique needs. We may want to create our own abstractions for data fetching, authentication, or authorization, or even a component library. Those are ambitious, long-term projects for a company, but all current and future projects benefit from them. Further down the road, those libraries may even become Open Source projects.
- Monorepo – Projects with shared domains are gathered in one monorepo, together with shared utilities specific to that domain. In some cases those utilities evolve into more universal solutions and are moved to company-wide packages. This way other projects benefit from battle-proven solutions developed by other teams.
- Project – Solutions unique to a given project implemented with the help of libraries from all layers above.
At the core of our architecture lie three libraries:
- React – responsible for the view layer and local state
- Redux – responsible for global state and the client-side cache
- Redux-First-Router – responsible for routing and navigation
URL-driven state is a forgotten lesson in the front-end world. In web applications, everything starts with the URL. Treating routing as a first-class citizen opens up a great number of possibilities:
- Shareable link to every part of the application
- Server-Side Rendering
- Clean separation of data fetching and rendering
- Access control
- Features like Breadcrumbs or Last Viewed history
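As a minimal sketch of treating the URL as the source of truth, the snippet below parses a pathname into a plain route action that the rest of the app (data fetching, access control, breadcrumbs) can key off. The route names are hypothetical:

```typescript
// The URL is parsed first, into a plain action; everything else in the
// application reacts to that action rather than to the URL directly.
type RouteAction = { type: string; params: Record<string, string> };

// Hypothetical routes map: action type -> path pattern.
const routes: Record<string, string> = {
  USER_LIST: "/users",
  USER_DETAIL: "/users/:id",
};

function matchUrl(pathname: string): RouteAction | null {
  for (const [type, pattern] of Object.entries(routes)) {
    const keys: string[] = [];
    // Turn "/users/:id" into the regex ^/users/([^/]+)$ and collect keys.
    const regex = new RegExp(
      "^" +
        pattern.replace(/:(\w+)/g, (_, key) => {
          keys.push(key);
          return "([^/]+)";
        }) +
        "$"
    );
    const m = pathname.match(regex);
    if (m) {
      const params: Record<string, string> = {};
      keys.forEach((k, i) => (params[k] = m[i + 1]));
      return { type, params };
    }
  }
  return null; // no route matched -> e.g. a NOT_FOUND action
}
```

This is essentially what redux-first-router does for us: the matched action flows through Redux, so routing, state, and data fetching all stay in sync.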
Structuring is all about designing the optimal routing, state, and view configuration with maximum usability and findability in mind. When designing the structure, the point is to combine best practices with a flexible, individual approach to achieve a structure that scales well, is open for extension, and is effortless to maintain.
For CRUD-like, model-oriented systems, the structure emerges naturally from the shape of the domain and its internal dependencies. Following that flow, the solution that has proved scalable and maintainable is to stick to the main domain entities.
Packages, each referring to one of the domain resources, consist of view, model, and routing subcomponents, each exposing an entry point for the global application-scope configuration.
Having all these subcomponents gathered under a single package helps to stay in continuous sync and optimize the number of interdependencies.
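One possible layout of such a resource package (all names are illustrative):

```
packages/
  order/                 # one package per domain resource
    model/               # reducers, selectors, data fetching
      index.ts
    view/                # React components for this resource
      index.ts
    routing/             # route definitions exposed to the app scope
      index.ts
    index.ts             # the package's public entry point
  customer/
    ...                  # same shape, different resource
  app/                   # application scope: joins routing, model, view
    routes.ts
    store.ts
```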
For more feature-oriented systems, where the domain patterns do not straightforwardly match the business goals, structuring around features is an obvious move.
Domain models live separately from the view and routing configuration, causing minimum interference. This gives a lot of freedom and flexibility for feature design, as feature packages can consume domain model dependencies in any combination imaginable. Each feature lives in a separate space, along with business logic and corresponding view part.
Just as with resources, the routing configuration is not distributed across the whole system but lives as a separate entity, gathering all the view entry points into a single application-scope configuration.
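A minimal sketch of that idea, with hypothetical feature names: each feature package exports only its own routes, and the application scope merges them into one configuration:

```typescript
// Illustrative route shape; a real app would point at React components.
interface Route {
  path: string;
  component: () => string; // entry point the view layer mounts
}

// Each feature package exports only its own routes...
const searchRoutes: Record<string, Route> = {
  SEARCH: { path: "/search", component: () => "<SearchPage />" },
};
const reportRoutes: Record<string, Route> = {
  REPORTS: { path: "/reports", component: () => "<ReportsPage />" },
};

// ...and the application scope gathers them into a single configuration,
// so no feature needs to know about any other feature's routing.
const appRoutes: Record<string, Route> = {
  ...searchRoutes,
  ...reportRoutes,
};
```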
As mentioned in Code Sharing, the Monorepo and Company levels evolve into a kind of in-house framework, gathering reusable solutions under a common space that becomes the foundation of every project.
As the codebase grows, the chaos inside inevitably grows too. No quality standards or clean-code practices can make up for the developer experience that statically typed languages offer. This is exactly what TypeScript delivers:
- reduces the pain of maintaining the code
- creates a very safe environment for any kind of refactors or extensions
- helps developers to avoid a huge number of bugs
- provides rich tooling support
- boosts the developers’ productivity
- offers enough flexibility to adopt many paradigms, whether object-oriented, functional, or even reactive
All this comes essentially for free: the types are a zero-cost abstraction, erased at compile time.
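A small illustration of the kind of safety this buys, using a discriminated union (the types here are illustrative, not part of any specific library):

```typescript
// A discriminated union makes illegal states unrepresentable: `data`
// exists only on success, `message` only on error, and the compiler
// flags any branch a refactor forgets to handle.
type RequestState<T> =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: T }
  | { status: "error"; message: string };

function describe(state: RequestState<number[]>): string {
  switch (state.status) {
    case "idle":
      return "Nothing requested yet";
    case "loading":
      return "Loading...";
    case "success":
      return `Got ${state.data.length} items`; // `data` narrowed here
    case "error":
      return `Failed: ${state.message}`; // `message` narrowed here
  }
}
```

Adding a new variant to `RequestState` turns every unhandled `switch` into a compile error instead of a runtime bug, which is what makes large refactors safe.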
Node.js was designed with streaming and low latency in mind. To achieve this, it uses asynchronous, non-blocking, event-driven IO. It proves to be a perfect fit for building efficient, real-time, data-intensive applications, and it also serves as a good foundation for a web library or framework.
When Node.js entered the market, it took an approach different from the existing traditional solutions. Instead of following the thread-per-request model, which severely exhausts hardware resources, Node.js adopted a single-threaded model able to handle tens of thousands of concurrent requests via non-blocking IO.
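The model can be sketched in a few lines: many concurrent "requests" each await a non-blocking delay standing in for IO, so a single thread interleaves all of them instead of dedicating a thread to each:

```typescript
// Non-blocking stand-in for IO (a DB query, an HTTP call, ...).
const delay = (ms: number) => new Promise<void>((res) => setTimeout(res, ms));

async function handleRequest(id: number): Promise<string> {
  await delay(20); // the thread is free to serve others while we wait
  return `response ${id}`;
}

async function main(): Promise<number> {
  const started = Date.now();
  // 1000 requests in flight at once on one thread; total time stays
  // close to a single 20ms delay, not 1000 * 20ms, because no request
  // ever blocks the thread.
  await Promise.all(Array.from({ length: 1000 }, (_, i) => handleRequest(i)));
  return Date.now() - started;
}
```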
Node.js created a huge community with a vast sharing space, which now owns the largest package registry in the world, gathering well over a million packages.
Full-Stack TS/JS brought unification to web development, optimized developer resource usage, and greatly expanded the boundaries of code sharing.
Fluid Design System
Design systems have proved to be an excellent solution for building UI in an efficient and consistent way. By introducing more domain language at the level of UI components, communication between designers and developers becomes much clearer. A library of reusable, domain-specific UI components becomes a wide sharing space where the amount of duplicated work drops to zero and quality standards can be constantly monitored. All this creates a positive boost for UI development at a large scale.
Following best practices, we design the component library with reusability and responsiveness in mind, and then take one step further. Instead of sticking to a predefined set of breakpoints, as traditional approaches do, the library components are fluid, adapting to any device and any scale imaginable.
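One way to sketch the fluid approach is CSS `clamp()`, interpolating a value linearly between two viewport widths instead of jumping at breakpoints. The helper and its numbers below are illustrative:

```typescript
// Build a CSS clamp() expression that scales a value linearly from
// minPx (at minViewportPx) to maxPx (at maxViewportPx), with no
// breakpoints in between.
function fluidClamp(
  minPx: number,
  maxPx: number,
  minViewportPx: number,
  maxViewportPx: number
): string {
  // Linear interpolation: value = slope * viewport + intercept.
  const slope = (maxPx - minPx) / (maxViewportPx - minViewportPx);
  const interceptPx = minPx - slope * minViewportPx;
  const vw = (slope * 100).toFixed(4); // 1vw = 1% of viewport width
  const base = interceptPx.toFixed(4);
  return `clamp(${minPx}px, ${base}px + ${vw}vw, ${maxPx}px)`;
}
```

For example, `fluidClamp(16, 24, 320, 1280)` yields `clamp(16px, 13.3333px + 0.8333vw, 24px)`: 16px at a 320px viewport, 24px at 1280px, and linear in between.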
Server Side Plugin System
Support for Server-Side Rendering opens up the possibility of moving more responsibility to the server side. Many crucial functionalities can be implemented there, such as SSO or any other kind of authentication, analytics, logging, and so on. All these features can be implemented in the form of server-side plugins, which are easy to integrate with, highly customizable, and thus reusable. By standardizing the approach, new features can be quickly introduced across the company in a consistent shape.
So instead of integrating many different tools individually for each application, the process is greatly simplified. And naturally, moving more responsibility to the server side reduces the exposed front-end surface, which is far more vulnerable.
The Final Design
Joining it all together, we get an end-to-end solution built essentially from two parts: the end-user interface, which handles the whole experience and acts as the first line, talking to the Back-end For Front-end layer.
Digging into the detailed design of the Back-end For Front-end Node.js layer, the extensibility concept gives us the option to join a number of small pieces (plugins) dedicated to specific features or domain purposes, depending on our needs, rather than building a closed, monolithic solution.
By design, each plugin contains at least one service block and no more than one “external world” connector to pull and push data across the system borders, if required. This approach makes the BFF easy to maintain, because all downstream changes can be reflected in the versioned plugins and services that we plug in or out.
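The plugin contract described above can be sketched as a TypeScript interface; all the names here are illustrative, not a real API:

```typescript
// At most one connector per plugin: the single door to the outside world.
interface Connector {
  name: string;
  fetch(path: string): Promise<unknown>;
}

// A unit of business logic inside the plugin.
interface Service {
  name: string;
  handle(input: unknown): unknown;
}

interface BffPlugin {
  name: string;
  version: string; // versioned, so plugins can be swapped in and out
  services: [Service, ...Service[]]; // at least one service block
  connector?: Connector; // no more than one "external world" connector
}

// Register plugins under name@version keys so downstream changes land
// as new plugin versions rather than edits to a monolith.
function registerPlugins(plugins: BffPlugin[]): Map<string, BffPlugin> {
  const registry = new Map<string, BffPlugin>();
  for (const p of plugins) registry.set(`${p.name}@${p.version}`, p);
  return registry;
}
```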
By default, all encryption and security happens here, so that no sensitive or secret tokens are exposed to the end-user, and we can trust the environment provider, such as AWS, Azure, Google Cloud, or even our own private cloud.
The high-level approach looks similar regardless of whether a system is feature- or model-oriented, which allows keeping all the BFF and FE code within the same packages in the monorepo.
The benefits are visible at first sight when we take the Fluid Design System and code sharing between packages into consideration: building new parts of our application becomes quick and efficient. Combined with maintainability and security, the FEEA is the right choice for a business of any scale. The framework did not materialize overnight; it is the output of years of experience with enterprise delivery.