Modern software development has moved far beyond simply writing application code, copying the artifact to a server, and restarting the service. We have entered and then embraced the Cloud Age. Today, entire operations workflows are automated through infrastructure-as-code solutions, where provisioning and configuring servers, containers, and other resources becomes a fully programmable process.
This shift is accelerating rapidly. Platforms like public cloud providers and Kubernetes, together with tools such as Terraform, have made provisioning repeatable at scale. More recently, frameworks like Pulumi and Kubernetes Operators have inched us closer to a reality where the entire infrastructure lifecycle can be managed through code, often without direct human oversight.
Each year, operations become less manual, more automated, and faster to iterate on. Despite these advances, the DevOps space remains relatively immature in how it enforces correctness and consistency.
On paper, YAML configurations or Terraform plans look like neat solutions: you declare your desired state, apply it to a cluster or cloud provider, and watch your infrastructure come online. The reality is more nuanced.
Kubernetes will readily accept a directory of YAML manifests and fail on the last one, leaving behind a broken environment. Even worse, it might report a successful deployment while the application remains silently misconfigured and fails to serve traffic.
Terraform offers a more robust approach than hand-edited or even template-driven YAML, but its domain-specific language can still be limiting, especially when you need rich logic or dynamic integrations.
Either way, both tools can report a “successful” deployment that doesn’t actually work in practice, or only reveal the failure after your cloud environment has spun up and begun incurring costs.
The deeper challenge is that most modern systems pull together multiple applications with intricate dependency graphs. They require specific configurations, credentials, network rules, and resource policies that often exist only in scattered documentation, such as JIRA tickets, or in the collective heads of team members.
As people leave or switch projects, this “tribal knowledge” fragments further. Adding to the complexity is the slow iteration cycle on real cloud environments: every misconfiguration, discovered at deployment time, eats away precious hours of a DevOps specialist’s day and inflates infrastructure bills. It’s not just a question of risk – it’s also a question of time and money.
That burden is poised to grow. In an era where AI tools increasingly generate the same YAML manifests or HCL files, it’s easy to produce massive amounts of infrastructure code quickly, but each new iteration still requires validation. The more code that’s written, the more chance there is of introducing subtle errors, and if the primary guardrail is “does it deploy or not,” then a lot of costly trial-and-error lies ahead.
In application development, many of these pitfalls have already been addressed by encoding invariants and domain knowledge into the program. By leveraging compilers, developers quickly verify useful and critical properties across entire monorepos containing thousands upon thousands of lines of code.
Programming languages that initially lacked strong typing now incorporate it, while others have introduced optional static type checkers or robust type inference. These features reduce or even prevent entire classes of errors – like passing a string where an integer is expected, mixing up parameter orders, or omitting crucial fields – before the code ever runs. They also make continuous integration pipelines faster and more reliable. A simple compile-time error is cheaper to fix than a late-stage production bug.
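As a trivial Scala illustration (the PortMapping class here is a stand-in invented for the example), the compiler rejects this kind of mistake before anything runs:

case class PortMapping(containerPort: Int, hostPort: Int)

val ok = PortMapping(containerPort = 80, hostPort = 80)

// This, however, would not compile:
// val broken = PortMapping(containerPort = "80", hostPort = 80)
// error: Found: ("80" : String), Required: Int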
Bringing these same principles to DevOps processes seems logical and essential. That’s where TypeOps comes in. The use of type systems for infrastructure will let us scale our efforts with confidence, catching mistakes before they get deployed.
Such compile-time checks help both humans and AI-based code generators, because they provide instant feedback on whether the code truly satisfies the requirements. By extending the power of type checking into the domain of deployment and operations, we can move from manual configuration and trial-and-error evaluation to a flow well known to programmers: even remote errors show up in the IDE as changes are introduced to the code.
To achieve this, two ingredients are necessary:
First, an infrastructure-as-code environment extensible enough to allow type checking.
Second, a modern language with a robust type system, focused on correctness and expressiveness, that makes building domain-specific languages a breeze.
VirtusLab invests heavily in Scala, so it is no surprise that we have chosen Scala as the language to build such a solution. This leaves the question of the infrastructure-as-code tool, and fortunately, there’s a perfect candidate for the role.
Pulumi is one of the next-generation infrastructure-as-code platforms that support full-blown programming languages for defining infrastructure. Instead of writing YAML or HCL, you can use Java, C#, Go, Python, or TypeScript.
For Scala enthusiasts such as us, there’s Besom. Besom is a Pulumi SDK for Scala created by VirtusLab. With Besom, you can define your infrastructure in Scala, reaping some of the same benefits of static checks and type inference you’re used to in your application code.
For example:
// appImage, taskExecutionRole and the container definition classes are defined elsewhere
val appName = "scala-app"
val httpPortInContainer = 80
val httpPort = httpPortInContainer

val taskDefinition = ecs.TaskDefinition(
  "scala-app-task",
  ecs.TaskDefinitionArgs(
    family = appName,
    containerDefinitions = List(
      AppDefinition(
        name = appName,
        image = appImage,
        memory = 512,
        cpu = 256,
        portMappings = List(
          PortMapping(httpPort, httpPortInContainer)
        )
      )
    ).toJson,
    networkMode = "bridge",
    requiresCompatibilities = List("EC2"),
    executionRoleArn = taskExecutionRole.arn
  )
)
We use familiar Scala features for everything from memory configurations to port mappings. That’s already a step up from plain YAML or HCL, as we get proper type checks that YAML simply can’t provide. We also get easy-to-use, robust base types and collections with a plethora of helpful methods that make the necessary data transformations a breeze.
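For instance, rather than copy-pasting near-identical YAML blocks, port mappings can be derived from plain data with standard collection methods (reusing PortMapping from the snippet above):

val exposedPorts = List(80, 443, 9090)
val portMappings = exposedPorts.map(port => PortMapping(port, port))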
But it still doesn’t tell us what the application needs in terms of runtime configuration, or the exact shapes of the service interfaces. Besom by itself covers only part of the journey: it ensures that once you define your resources they’re type-checked, but it doesn’t automatically ensure that your application’s interfaces align with the infrastructure it relies on.
Yaga is a code-generation tool designed to bridge the gap between application code and infrastructure. Rather than manually declaring which interfaces or configuration parameters each service needs, Yaga collects that information directly from your compiled Scala artifacts via a dedicated SDK.
The SDK tracks type details (for requests, responses, and config) and embeds these details into each artifact, so every version of your service carries its own accurate metadata. This means you can easily pinpoint and prevent version drift when updating your infrastructure code. If anything no longer matches, you catch it during compilation rather than after deployment.
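Conceptually, you can think of the embedded metadata as a serialized, versioned description of the service’s interface. The shape below is purely illustrative, not Yaga’s actual format:

case class LambdaInterfaceMetadata(
  requestSchema: String,   // structure of the request type, e.g. WorkerRequest
  responseSchema: String,  // structure of the response type, e.g. WorkerResponse
  configSchema: String     // runtime configuration the Lambda expects
)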
Yaga uses this embedded information to generate the necessary integration code and configuration definitions. As a result, whenever one service calls another, or whenever you provision a matching infrastructure resource, everything lines up with the exact types your application expects. There’s no guesswork, and no need to manually configure environment variables or IAM roles. If something isn’t right, your code simply won’t compile.
Because everything is written in Scala, we rely on the compiler to catch any misalignments early. If you change a Lambda’s request type, for instance, and forget to update the service that calls it, the build will fail immediately, long before you spend time (and money) on a half-deployed cloud environment.
This way, Yaga extends the benefits of strong typing all the way into the infrastructure layer, letting the compiler handle checks we’d otherwise have to manage through brittle manual processes or additional smoke tests, and turning costly rollback-and-redeploy cycles into preventive compile errors.
For the prototype implementation of Yaga, we chose to focus on the AWS Lambda API. AWS Lambdas have a narrow, easily manageable interface that consists of requests, responses, and configuration. Lambdas also invoke other Lambdas through a simple interface that takes request objects and returns response objects.
This limited scope allowed us to refine Yaga’s core concepts - embedding versioned metadata, generating code automatically, and enforcing compile-time checks - without the complexity of more elaborate integration points. By proving these ideas on a straightforward platform, we could validate and polish the user experience before extending support to a broader spectrum of cloud services.
Instead of managing services and infrastructure in isolation, the Yaga sbt plugin automatically handles code generation and dependency tracking for you. By defining each service as a dedicated sbt project and indicating which other services or infrastructure it depends on, Yaga builds a graph of relationships. Whenever one project changes, any downstream project that depends on it regenerates and recompiles its Yaga-generated code.
Below is an example sbt configuration for a small system with one “worker” Lambda service, a “router” Lambda that depends on it by way of invoking the “worker”, and an infrastructure module that provisions both (the dependency-wiring calls shown below are schematic; the exact plugin API may differ):
ThisBuild / scalaVersion := "3.3.5"

lazy val workerLambda =
  project.in(file("worker-lambda")).awsLambda

lazy val routerLambda = project.in(file("router-lambda"))
  .awsLambda
  .withYagaDependency(workerLambda) // dependency-declaration API name assumed

lazy val infra = project.in(file("infra"))
  .withYagaDependency(workerLambda, routerLambda) // assumed; consumes both modules' metadata
When you run sbt ~infra/compile, the plugin watches all modules in this graph for changes. For instance, if you edit the workerLambda code (say, to add or remove a request field), Yaga regenerates the associated model artifacts and facades. Then the routerLambda and infra modules recompile, ensuring they stay in sync with the updated types. Ultimately, all changes flow into the infra module, which produces the Pulumi (Besom) code reflecting the correct environment setup for each Lambda.
Here’s a simplified view of the relationships:
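workerLambda ──► routerLambda
      │                │
      └─────► infra ◄──┘

(arrows point from each module to the modules that consume its generated code)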
In watch mode, each time you hit “save,” sbt automatically checks whether the new code still satisfies all type requirements across the entire system. If something breaks, like the router’s request parameters no longer matching the worker’s updated model, the compiler flags it immediately. This ensures you catch broken contracts early, avoiding the delays and costs of a failed deployment. The entire system is “shaped” by the same typed discipline, from application code through to infrastructure code.
It’s also important to note that this integration allows one to define modules based on remote artifacts like Docker images or published jars. In this case, Yaga pulls the artifact, extracts the interface metadata from it, and then uses it the same way as when the module contained sources. Thanks to this feature, it’s easy to switch versions of services and still be certain that everything lines up across the stack.
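As a hypothetical sketch (the exact plugin API for remote artifacts is assumed, not verbatim), a module could be declared from a published jar instead of local sources:

lazy val workerLambda =
  awsLambdaFromArtifact("com.example" % "worker-lambda" % "1.2.0") // assumed helper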
The Yaga SDK integrates directly into your application’s code to capture the “shape” of each service’s interface - its requests, responses, and configuration requirements. Once compiled, these shapes become part of the artifact’s metadata that Yaga extracts and uses to generate type-safe integration stubs.
WorkerLambda example
Below is a simplified sketch of a WorkerLambda, showing how Yaga’s LambdaHandler makes its input-output contract explicit (the exact SDK names and derivation used here are illustrative):
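// Imports from Yaga's Lambda SDK and a JSON library omitted; handler and
// derivation names follow the description in the text, not necessarily the
// exact API. The first type parameter is assumed to be the config (none here).
case class Foo(str: String) derives JsonCodec
case class WorkerRequest(foo: Foo) derives JsonCodec
case class WorkerResponse(message: String) derives JsonCodec

class WorkerLambda extends LambdaHandler[Unit, WorkerRequest, WorkerResponse]:
  override def handleInput(req: WorkerRequest): WorkerResponse =
    WorkerResponse(s"Received: ${req.foo.str}")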
Notice how WorkerRequest specifies precisely what data the Lambda needs, in this case a Foo object containing a string, while WorkerResponse declares what the Lambda outputs. Both automatically derive JSON serialization formats, so their input and output are consistently parsed and validated.
Once compiled, Yaga’s SDK stores these request and response definitions in the Lambda artifact’s metadata, which means any service wanting to call WorkerLambda can learn the exact structure of WorkerRequest and WorkerResponse before it even tries to build or deploy.
RouterLambda example
Below is a RouterLambda that calls the WorkerLambda safely, thanks to a strongly typed facade that Yaga generates based on the WorkerLambda’s metadata:
// RouterRequest, Config and the class header below are reconstructed from the surrounding description; exact Yaga SDK names are assumptions
case class RouterRequest(str: String)

case class Config(
  workerLambda: LambdaHandle[worker_lambda.WorkerRequest, worker_lambda.WorkerResponse]
)

class RouterLambda extends LambdaHandler[Config, RouterRequest, String]:
  val underlyingClient = UnshapedLambdaClient.create()
  val lambdaClient = LambdaClient(underlyingClient)

  override def handleInput(req: RouterRequest) =
    val workerLambdaInput = worker_lambda.WorkerRequest(worker_lambda.Foo(str = req.str))
    val workerResponse = lambdaClient.invokeSyncUnsafe(config.workerLambda, workerLambdaInput)
    println(s"Response from worker-lambda: $workerResponse")
    "Processing completed"
In this code, the Config class declares that the RouterLambda has a handle to another Lambda expecting a WorkerRequest and returning a WorkerResponse. Because the handle is generated by Yaga, the compiler knows how to call WorkerLambda correctly.
When you compile, Yaga cross-checks the metadata from both Lambdas to ensure the call structure matches exactly. If WorkerRequest changes in the worker but the router doesn’t adapt, the build fails before you ever attempt to deploy.
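For example, suppose the worker’s request type gains a new required field (a hypothetical change, shown for illustration):

case class WorkerRequest(foo: Foo, retries: Int) derives JsonCodec // new required field

// The router's existing call no longer type-checks:
// worker_lambda.WorkerRequest(worker_lambda.Foo(str = req.str))
// error: missing argument for parameter retries of method apply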
All of this means errors show up far sooner than if you were juggling JSON schemas or environment variables in a wiki somewhere. Each release of WorkerLambda ships with the interface definitions needed to talk to it, and Yaga ensures that if something shifts, you’ll know right away. By letting the compiler handle these checks, Yaga elevates the type system from the application layer into the infrastructure layer, keeping every part of your system - from calls between Lambdas to the cloud resources that host them - in sync.
Once our Lambdas have defined, type-checked interfaces, the next step is to provision them in a way that respects those types. This is where Yaga’s integration with Pulumi (via Besom) truly shines. Instead of manually juggling resource IDs and environment variables, we create well-typed resource constructors and hand them the correct handles.
Below is a small snippet showing how we might define our WorkerLambda and RouterLambda in infrastructure code. The details of custom AWS IAM roles are skipped for brevity.
// custom AWS IAM roles skipped for brevity

val workerLambdaResource = WorkerLambda(
  "workerLambda",
  FunctionArgs(role = lambdaAssumeRole.arn)
)

// ...

val routerLambdaResource = RouterLambda(
  "routerLambda",
  FunctionArgs(role = routerLambdaRole.arn),
  config =
    for workerLambda <- workerLambdaResource
    yield RouterLambdaConfig(
      workerLambda = workerLambda.lambdaHandle
    )
)
Notice how we’re creating the WorkerLambda resource - which itself is a type-safe artifact produced by Yaga - using the minimal configuration needed to deploy a Lambda on AWS. Then, for RouterLambda, we pull in the type-checked handle from the workerLambdaResource and embed it into the RouterLambdaConfig.
Because these objects come from code generation based on introspection of the artifacts, the compiler knows exactly what is required; if you accidentally pass in the wrong handle or leave out a required parameter, your code will not compile.
Moreover, we’re free to integrate more dynamic logic here if our application calls for it. For example, we could compute additional settings or pass values from other services. The resulting infrastructure definition is straightforward and type-safe: any mismatch between the worker and router Lambdas is caught at compile time, rather than after your cloud environment spins up. By the time you run pulumi up, you already know everything aligns with the precise contract each Lambda expects.
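As a hypothetical illustration (memorySize is a standard Lambda FunctionArgs parameter; the sizing map and deploymentSize value are invented for the example), such dynamic logic is just ordinary Scala:

val memoryBySize = Map("small" -> 512, "large" -> 1024)
val workerMemory = memoryBySize(deploymentSize)

val tunedWorker = WorkerLambda(
  "workerLambda",
  FunctionArgs(
    role = lambdaAssumeRole.arn,
    memorySize = workerMemory
  )
)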
Although Yaga started with Scala, the underlying concept isn’t Scala-specific. Any language that stores or generates metadata about types – for example, Java with annotation processing, C# with source generators, or Rust with procedural macros – could apply a similar technique. The value proposition – ensuring robust “shapes” at the infrastructure level – is relevant to any strongly typed ecosystem.
TypeOps builds on the advantages of strong typing – compile-time checks, self-documenting interfaces, and enforced invariants – and brings them to a part of our stack that needs it more than ever: the infrastructure.
Tools like Pulumi already allow us to avoid the pitfalls of sprawling YAML by letting us use languages like Scala for Infrastructure-as-Code, but integrating Pulumi with Yaga raises the bar again. Now, every aspect of your application’s shape can inform how your infrastructure is provisioned, driving down the potential for human error.
This approach reduces the risk of mismatched request/response definitions or missing environment variables and makes iterating on changes much faster. In many cases, watching the compiler catch mistakes is far quicker than waiting for a full deployment cycle against cloud APIs – time that really adds up when you’re making frequent tweaks.
Looking ahead, cloud-based systems and programmable infrastructure will only become more intricate. To harness that complexity without sacrificing development velocity, we need abstractions that seamlessly bridge the gap between application requirements and the resources on which those applications depend. Yaga takes a bold step in that direction, demonstrating how these ideas work in a language widely used for building robust and safe systems. The result is faster feedback, clearer contracts, and fewer hard-to-diagnose runtime issues.
If you’re tired of discovering errors too late in the development process, TypeOps might just offer the more reliable, streamlined workflow you’ve been waiting for. As more ecosystems adopt type-safe infrastructure practices, we can expect cloud deployments to become significantly simpler and far less error-prone.