The role of AI agents in accelerating the insurance underwriting process
Krzysztof Korbacz
Staff Software Engineer
Published: Sep 16, 2025 | 24 min read
Underwriting remains one of the clearest examples of the insurance industry's need for technology modernization. Specialist underwriters spend far too much of their valuable time piecing together highly fragmented information instead of managing the true complexity of risk profiles. This leaves insufficient time to apply their judgment effectively, and the wider business struggles to move critical insurance transactions forward.
Recent advancements in Agentic AI are finally showing promise in changing this dynamic. By automating the labor-intensive tasks and uncovering additional insights, this technology empowers underwriters to make faster, smarter decisions.
In this article, we’ll explore a reference architecture and the key considerations for applying Agentic AI in the underwriting space.
Traditionally, the insurance underwriting process consists of the following elements:
Application Gathering & Submission Intake: the process begins with a prospective client submitting a coverage application, often through a broker.
Initial Assessment & Triage: an initial screening ensures submissions fit the insurer’s risk appetite and are winnable, allowing underwriters to focus on the best opportunities.
Data Gathering & Enrichment: a labor-intensive hunt for missing details across internal and external sources to complete the risk profile.
Risk Analysis & Evaluation: where underwriters transform raw data into actionable risk insights that predict the likelihood and financial severity of future claims.
Pricing & Quoting: risk insights are translated into premiums, terms, and coverage tailored to the client’s profile.
Decision Making & Policy Issuance: the underwriter makes the final call - approve, decline, or modify - and, if accepted, issues and binds the policy.
Anyone who has seen the underwriting process from the inside knows that it is largely manual and relies on fragmented data, making it slow and inefficient.
The disconnect between a slow, outdated process and the modern market’s demands for speed and efficiency has created a significant bottleneck. This bottleneck prevents insurers from scaling, limits profits, and impacts customer satisfaction.
Here are a few key observations that highlight the challenge:
A considerable amount of insurance data is unstructured, locked away in broker emails, PDFs, and forms
Underwriters spend a significant portion of their workday on manual, non-value-added tasks like data entry or re-keying information into disconnected legacy systems
Many underwriters feel their pricing models lack sufficient insight, and they are dissatisfied with the lack of real-time portfolio visibility
The clerical nature of work contributes to low job satisfaction among underwriters
50% of the current workforce is projected to retire by 2028
The consequences are clear: profitable business is often "left on the table" because submissions cannot be handled in a timely manner, and the entire operation hits a capacity ceiling, limiting the insurer's ability to scale or innovate. The talent cost of this bottleneck is equally concerning.
The evolution of solutions for the "underwriting bottleneck" didn't start today. For decades, Underwriting 1.0 relied on physical documents, Excel spreadsheets, and siloed reporting systems. The process was entirely manual, and the analytics were purely descriptive.
Then came Underwriting 2.0, the era of predictive AI, where ML models were used for enhanced risk scoring, sophisticated fraud detection, and predicting customer behavior. However, these were point solutions, designed to solve one specific, narrow problem at a time. Success in this era depended on investing in a solid data foundation - capable of managing increasing data volumes and supporting expanding reporting needs.
We are currently entering the emerging phase of Underwriting 3.0 and the production use of Gen AI and Agentic AI. Gen AI tackles the unstructured data problem by understanding, summarizing, and processing human language from documents. And Agentic AI provides the intelligence to autonomously act on that understanding, streamlining and orchestrating large parts of the underwriting process while keeping the human expert firmly in control.
Before we dive deeper into the solution, let’s define two key terms used throughout the article:
AI Agent - a system designed to perceive its environment, create a plan, and use tools to achieve a specific goal. Unlike a standard language model, an agent can be given a high-level objective, like "process this new submission," and it will autonomously orchestrate various tools - from GenAI to predictive models - to achieve it. It’s like a smart assistant that learns on the job.
Agentic AI - a team of specialized agents that work together to achieve a common goal.
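To make the distinction concrete, here is a minimal sketch of an agent loop in plain Python. Everything in it - the tool names, the goal predicate, the fixed plan - is an invented illustration of the perceive/plan/act idea, not the platform's implementation:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """A minimal agent: it follows a plan, calls tools to update its
    state, and stops as soon as the goal is satisfied."""
    tools: dict[str, Callable[[dict], dict]]
    state: dict = field(default_factory=dict)

    def run(self, goal: Callable[[dict], bool], plan: list[str]) -> dict:
        for tool_name in plan:          # step through the plan
            if goal(self.state):        # stop early once the goal is met
                break
            self.state |= self.tools[tool_name](self.state)
        return self.state

# Hypothetical tools for a "process this submission" goal
def extract(state):  return {"address": "10 Main St"}
def enrich(state):   return {"fire_class": 3} if "address" in state else {}

agent = Agent(tools={"extract": extract, "enrich": enrich})
result = agent.run(goal=lambda s: {"address", "fire_class"} <= s.keys(),
                   plan=["extract", "enrich"])
# result now holds both the extracted and the enriched fields
```

A real agent would derive the plan itself (typically via an LLM) rather than receive it; the point here is only the loop of observing state, picking a tool, and checking the goal.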
At VirtusLab, we have collaborated with many companies in the insurance industry, gaining deep insights into the common challenges and solution space across underlying technologies, off-the-shelf products, and the broader data landscape.
Following the recent advancements in the Agentic AI space, we focused our efforts on exploring the practical application of AI agents in underwriting. This work resulted in a reference architecture of the Agentic Underwriting Platform, supported by technology evaluation and a working proof-of-value prototype.
The outcome serves as a blueprint for a comprehensive system designed to automate and streamline a large portion of the underwriting workflow, from initial submission to the final quote. It's built to handle the complexities of real-world insurance data while keeping the human underwriter fully in control.
Although the current focus is on the Property domain, the architecture is designed to be easily adaptable to other lines of business, like specialty risks.
The following diagram provides a high-level view of this architecture, illustrating how the Agentic Underwriting Platform interacts with the key actors and systems in its environment.
Source: VirtusLab's Reference architecture for Agentic AI in Insurance Underwriting
1. Rethinking the underwriter’s workbench
One of the most innovative aspects of an agentic platform is the opportunity to completely rethink the underwriter's primary tool: the workbench. This is a perfect example of where borrowing ideas from different domains pays off. In the tech industry, tools like the Cursor IDE or the interactive canvas in ChatGPT have become integral to the daily workflow, creating a more fluid and enjoyable way of working.
Now, imagine that same dynamic experience for the underwriter. Instead of a static dashboard, they have a conversational partner. They can chat with the AI agents about specific parts of a submission, e.g., "The proposed premium is 15% higher than last year. Generate a client-ready justification that breaks down the key drivers for the increase" or "Summarize the client's claims history." The underwriter can navigate between different revisions of the AI's output, track the provenance of every important piece of information to ensure transparency, and step in to help the agents when human judgment is needed. This new workbench doesn't remove the underwriter's authority; it enhances it. They are the ones who review the complete picture, apply their expertise, and ultimately "pull the trigger" on the final decision.
2. Modularity and extensibility: built for the real world
A modern underwriting platform should be modular so it can adapt to each insurer’s needs and evolve over time.
New agents can be seamlessly added to the workflow as needed. For more complex challenges, a team of specialized agents can collaborate to tackle the problem. This approach also respects existing infrastructure; fully algorithmic parts of a workflow can remain in place, with agents handing off tasks to them where appropriate. The agentic workbench integrates with existing systems, coexisting with them rather than forcing a disruptive, all-or-nothing migration.
The platform's ability to integrate with various systems via APIs and the Model Context Protocol (MCP) means that adding new enrichment sources, underwriting rules, or new Machine Learning models can be done without disrupting current processes.
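As an illustration of this plug-in style, the toy registry below shows how adding a new enrichment source becomes a single registration rather than an orchestrator change. The tool names and return values are hypothetical:

```python
from typing import Callable

# A minimal tool registry: the orchestrator iterates whatever is
# registered, so a new source is one register() call, nothing more.
TOOLS: dict[str, Callable[[dict], dict]] = {}

def register(name: str):
    def wrap(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        TOOLS[name] = fn
        return fn
    return wrap

@register("property_risk")        # existing third-party lookup
def property_risk(submission: dict) -> dict:
    return {"flood_zone": "X"}

@register("roof_condition")       # added later; no other code touched
def roof_condition(submission: dict) -> dict:
    return {"roof_condition": "good"}

enriched = {}
for tool in TOOLS.values():       # orchestration code is unchanged
    enriched |= tool({"address": "10 Main St"})
```

In a production setup the registry would sit behind MCP or an API gateway, but the design principle is the same: new capabilities register themselves, and existing workflows pick them up without modification.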
Finally, this modularity enables an iterative approach to technology adoption. An organization doesn't need to commit to a full-scale transformation overnight. They can start by automating a single, high-impact area, like submission intake, prove its value, and then progressively expand the agentic capabilities across the entire underwriting lifecycle. This makes the transition to the agentic world both manageable and scalable.
3. The evolving role of the underwriter
The shift to agentic AI fundamentally redefines the underwriter's role: their focus evolves from performing repetitive, manual tasks to becoming a high-value decision orchestrator - an "agents' manager." The platform's human-in-the-loop capability is available at any point in the process, not just at predefined steps, giving the underwriter the precise level of control they need, when they need it. This is a core element of augmented underwriting.
With agents handling the data-heavy lifting, underwriters can finally focus on their most valuable skills: analyzing complex risks, negotiating terms, and building client relationships. Their role also expands to continuously improve how the AI agents support them. This includes validating agent outputs, providing feedback to train the models, setting operational guardrails, and managing exceptions that require human expertise.
For low-complexity, pre-approved risk types, purely algorithmic underwriting can and will still be used. Some carriers also view augmented underwriting as a natural step towards pure algorithmic underwriting, with straight-through processing as an eventual end-state for certain cases.
This evolution in the underwriter's daily work is a practical example of a broader, systemic shift in business operating models that companies need to embrace to stay relevant in the AI era.
4. Tackling the “shifted complexity” concern
From an engineering perspective, in a multi-agent system we're moving away from a fully deterministic workflow in which we define every single path (and in which, if an unforeseen situation arises, the process breaks and requires development work).
Instead, we're giving the agent a certain degree of autonomy so it can achieve its goal. We are counting on the agent to adapt, improvise, and provide additional insights when necessary. To quote a classic: "Good, adaptation, improvisation. But your weakness is not your technique."
In other words, we're shifting from defining the "how" to defining the "what" - and that is a significant change.
It’s crucial to realize that we are dealing with “constrained autonomy” here: the agent’s freedom is bounded by the selection of tools it can use, business guardrails, and output schemas; all of its actions are made transparent through investment in observability, data provenance, and explainability; and the human remains the ultimate decision maker.
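A minimal sketch of what such guardrails can look like in code. The field names, allowed values, and thresholds are illustrative assumptions; the point is that agent output must pass schema and business checks before anyone acts on it, and failures are escalated to a human:

```python
# Output schema and business guardrails for a hypothetical pricing agent.
REQUIRED = {"premium": float, "risk_level": str}
ALLOWED_RISK = {"low", "medium", "high"}

def validate(output: dict) -> list[str]:
    """Return a list of guardrail violations; empty means safe to proceed."""
    issues = []
    for name, typ in REQUIRED.items():              # schema check
        if not isinstance(output.get(name), typ):
            issues.append(f"missing or mistyped field: {name}")
    if output.get("risk_level") not in ALLOWED_RISK:  # business guardrails
        issues.append("risk_level outside allowed values")
    if isinstance(output.get("premium"), float) and output["premium"] <= 0:
        issues.append("premium must be positive")
    return issues

ok = validate({"premium": 1250.0, "risk_level": "medium"})   # passes
bad = validate({"premium": -5.0, "risk_level": "extreme"})   # two violations
# any non-empty result pauses the workflow and loops in the underwriter
```

In practice this role is usually played by a data-contract library such as Pydantic plus a rules engine, but the shape of the check is the same.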
It is a nice exercise to look at the agents from the following perspective:
When giving a task to a smart junior analyst, you tell them: here's a new submission. I need you to handle this for me. Here are the databases you can use (the tools), and here's my phone number if you get stuck. The analyst will figure out the best order to do things, handle a database being temporarily offline, and know when a set of facts just doesn't look right and requires your expertise.
A common and valid concern in any automation initiative is that complexity rarely disappears - it just shifts. Underwriters may worry that, instead of spending hours rekeying data, they will now spend those same hours “babysitting” AI agents, validating outputs, and troubleshooting new kinds of errors. This perspective echoes insights from systems theory, which teaches us that in complex adaptive systems, changes often shift - rather than eliminate - complexity. The key challenge is not to deny this shift, but to manage it intelligently.
VirtusLab's approach to tackling the complexity shift:
Structured evaluation and self-improvement
As part of this article series, we will dive deeper into how insurers can adopt a thoughtful, structured approach to evaluating AI agents. Rather than relying on ad-hoc testing, evaluation frameworks ensure that agents are continuously monitored against how the policies actually perform. Over time, this evaluation loop also fuels self-improvement, so the agents become more reliable and reduce the supervision burden.
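A toy version of such an evaluation loop might score agent recommendations against how the policies eventually performed. The labels, cases, and metric below are illustrative assumptions, not a real evaluation framework:

```python
# Each case pairs an agent recommendation with the policy's eventual
# outcome (hypothetical labels for illustration).
cases = [
    {"recommendation": "approve", "outcome": "profitable"},
    {"recommendation": "approve", "outcome": "loss"},
    {"recommendation": "decline", "outcome": "would_be_loss"},
    {"recommendation": "approve", "outcome": "profitable"},
]

# Recommendation/outcome pairs we count as correct calls.
GOOD = {("approve", "profitable"), ("decline", "would_be_loss")}

def accuracy(cases: list[dict]) -> float:
    hits = sum((c["recommendation"], c["outcome"]) in GOOD for c in cases)
    return hits / len(cases)

score = accuracy(cases)   # 3 of 4 calls matched the eventual outcome
```

Tracking a metric like this per agent release turns "does the agent still work?" from an ad-hoc question into a regression test, which is what reduces the supervision burden over time.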
Observability and auditability at the core
The reference architecture for the Agentic Underwriting Platform has been designed with observability and auditability as first-class citizens. Every data transformation, model output, and agentic decision can be traced back, audited, and explained. This not only gives underwriters confidence in the system but also helps pinpoint and resolve issues quickly, reducing the “babysitting” effect.
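For illustration, an audit trail can be as simple as an append-only log of structured events; the event shape below is an assumption for this sketch, not the platform's actual schema:

```python
import json
import time

# Append-only audit log: every agent action becomes an immutable,
# timestamped event, so any decision can be replayed and explained.
AUDIT_LOG: list[str] = []

def audit(agent: str, action: str, inputs: dict, output: dict) -> None:
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(), "agent": agent, "action": action,
        "inputs": inputs, "output": output,
    }, sort_keys=True))

audit("enrichment", "lookup_fire_class",
      inputs={"address": "10 Main St"}, output={"fire_class": 3})

event = json.loads(AUDIT_LOG[-1])   # the full chain can be traced back later
```

A production system would ship these events to a tracing backend rather than an in-memory list, but the principle - one structured, queryable event per agentic action - is what makes auditing and debugging tractable.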
Pragmatic awareness of imperfection
We are mindful that no system is flawless - not legacy workflows, and not AI-driven ones. However, if implemented thoughtfully, the net gain is undeniable. By removing repetitive manual tasks and embedding robust oversight mechanisms, we can move the needle strongly towards efficiency. Underwriters spend less time on clerical work, and while some new tasks emerge, they are higher-value, more engaging, and less draining.
In short, yes - complexity shifts. But if you play it right, the shift is toward higher value: from low-value manual work to higher-order supervision and agent management. This evolution, supported by rigorous evaluation and strong observability, ensures that the net result is a visible, measurable gain in speed, accuracy, and underwriter satisfaction.
To make the concepts we've discussed more tangible and demonstrate the real-world acceleration, let's dive into two key agents we implemented as part of our prototype: the Extraction Agent and the Enrichment Agent.
The extraction agent
The Extraction Agent's primary role is to tackle the very first and most time-consuming step in the underwriting process: getting accurate data out of the initial submission. Here’s how it works:
Cataloging the submission: After downloading an email from the underwriter's inbox, the agent carefully catalogs all its contents, including the email body and any attachments, whether they are PDFs, Excel spreadsheets, or even images.
Intelligent text recognition: Files that require Optical Character Recognition (OCR) are passed through a layout-preserving text recognition service. This is a critical step. We care not only about extracting character strings but also about their precise location on the page - coordinates, table boundaries, and headers. This detailed spatial information is essential, because evidence for data points will be built on these exact offsets.
Schema-driven extraction: Based on a solid foundation of structured text, we can begin the extraction. The extraction is powered by LLMs but is strictly bound by data contracts (like JSON Schema or Pydantic models) and domain-specific validators to ensure accuracy and consistency. The model is forced to return values in canonical formats (e.g., standardized dates and addresses) along with confidence scores and precise provenance, linking each piece of data back to its source in the original document.
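The ideas of canonical formats, confidence scores, and provenance can be sketched with a simple data contract. The field names, date formats, and coordinates below are illustrative, not the prototype's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ExtractedField:
    """One extracted value with its confidence and provenance,
    so every data point links back to the source document."""
    name: str
    value: str
    confidence: float
    source_doc: str
    page: int
    bbox: tuple[float, float, float, float]  # offsets from the OCR layout

def canonical_date(raw: str) -> str:
    """Force dates into ISO 8601, whatever format the broker used."""
    for fmt in ("%d/%m/%Y", "%m-%d-%Y", "%B %d, %Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unparseable date: {raw!r}")

extracted = ExtractedField(
    name="inception_date",
    value=canonical_date("March 1, 2025"),   # normalized to "2025-03-01"
    confidence=0.97,
    source_doc="submission.pdf",
    page=2,
    bbox=(72.0, 340.5, 180.0, 352.0),
)
```

In the actual pipeline the LLM is constrained to emit records of this shape (e.g., via Pydantic models or JSON Schema), and validators like `canonical_date` reject anything that cannot be normalized instead of letting it flow downstream.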
The enrichment agent
Once the Extraction Agent has done its job, the Enrichment Agent takes over. Its purpose is to take the initial, often incomplete, data and build a comprehensive, 360-degree view of the risk. This is where the agent moves from simply processing information to actively seeking it out.
Gap analysis: The agent's first step is to analyze the extracted data and compute a "gap list" - a precise inventory of all the missing information required to make the underwriting decision.
Tool selection and planning: To fill these gaps, the agent accesses a library of available "tools." These tools, exposed via the Model Context Protocol, can be anything from internal databases to external APIs. For our prototype, a key tool was an integration with a third-party property risk data provider. The agent intelligently selects the right tools for the job and formulates a step-by-step plan to retrieve the necessary data. This process is highly extensible; adding a new tool, like an integration with a system that analyzes satellite imagery to check property roof conditions, is a straightforward process.
Executing the plan: The agent then proceeds with its plan, systematically querying the selected sources to gather the missing information.
Intelligent human handoff: The agent doesn't operate in a vacuum. It knows when to ask for help. If it encounters a situation it can't resolve - such as persistently missing fields, multiple valid candidates for a given address, or a violation of a critical underwriting rule - it will pause its plan and loop in the human underwriter for guidance.
Data correction: In some cases, the enrichment process may uncover discrepancies that lead to the correction of the originally extracted data. For example, if the initial submission lists a building's construction as 'brick' but a third-party data source clarifies it as 'joisted masonry' and provides a more accurate Fire Protection Class, the agent can update the record with this more precise and critical risk information.
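The enrichment loop described above - gap list, tool selection, human handoff - can be sketched as follows. The required fields, the tool mapping, and the returned values are hypothetical stand-ins for the real integrations:

```python
from typing import Callable

# Fields the underwriting decision needs (illustrative).
REQUIRED_FIELDS = ["address", "year_built", "construction", "fire_class"]

def lookup_property_db(record: dict) -> dict:
    """Stand-in for a third-party property-data tool exposed via MCP."""
    return {"year_built": 1998, "construction": "joisted masonry"}

TOOL_FOR_GAP: dict[str, Callable[[dict], dict]] = {
    "year_built": lookup_property_db,
    "construction": lookup_property_db,
}

def enrich(record: dict) -> tuple[dict, list[str]]:
    # 1. Gap analysis: which required fields are still missing?
    gaps = [f for f in REQUIRED_FIELDS if record.get(f) is None]
    # 2. Execute the plan: fill each gap with a matching tool, if any.
    for gap in gaps:
        tool = TOOL_FOR_GAP.get(gap)
        if tool and record.get(gap) is None:
            record |= {k: v for k, v in tool(record).items()
                       if record.get(k) is None}
    # 3. Whatever is still missing goes to the human underwriter.
    unresolved = [f for f in REQUIRED_FIELDS if record.get(f) is None]
    return record, unresolved

record, handoff = enrich({"address": "10 Main St", "year_built": None,
                          "construction": None, "fire_class": None})
# no tool covers fire_class, so it lands in the human-handoff list
```

The real agent plans tool order dynamically and handles failures and conflicting candidates, but the control flow - compute gaps, fill what tools can fill, escalate the rest - is the essence of the Enrichment Agent.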
After the enrichment, submission data is passed to the Risk & Pricing Agent, which calculates the suggested premium and the risk level. All information gathered up to this point is then analyzed by the Summarization Agent, which creates an executive summary for the underwriter with a decision recommendation.
Effectively, the underwriter receives a one-pager presenting the results of the agentic workflow; it can be easily modified and streamlines the decision-making process.
We began this exploration by examining the underwriting bottleneck - a critical friction point born from a reliance on manual processes, unstructured data, and fragmented systems. This bottleneck doesn't just slow down quote-to-bind; it limits growth, burns out talented underwriters, and puts a hard ceiling on an insurer's capacity to scale and innovate. The challenges are clear: too much time spent on low-value tasks, not enough insight from available data, and a looming talent crisis.
The agentic underwriting platform we've outlined is a direct response to these challenges. It tackles the unstructured data overload head-on, transforming a sea of emails, PDFs, and forms into clean, actionable insights. It acts as a universal translator for old, fragmented systems, ensuring data flows smoothly without major re-engineering. Most importantly, it automates the mundane, repetitive work that has historically consumed the underwriter's day, freeing them to become what they were always meant to be: strategic risk experts.
The question is no longer if insurers should modernize, but how they can do so on their own terms. In today's landscape, doing nothing is a slow, silent exit from relevance. But this technological shift is not an unstoppable force to be feared; it's an unprecedented opportunity to be seized. The future of underwriting belongs to those who embrace the new, symbiotic relationship between human expertise and agentic AI.
By approaching this transition sensibly and iteratively, insurers can tackle this transformation in a way that works for them, moving from a position of defense to one of empowered control.
For a deeper look at how Agentic AI can be applied to underwriting, reach out to us for a guided walkthrough of our reference architecture.