
Why do companies see 24% higher losses after AI automation?

And how to correct it...

In this edition, we’ll cover:

  • Why full AI automation often increases losses instead of reducing them

  • Hidden decision gaps automation creates inside underwriting teams

  • How Underwrite.In helps augment human judgment

When companies rush into full automation in underwriting, something counterintuitive often happens: losses go up instead of down.

On paper, it makes sense: AI can automate routine tasks such as data extraction, document parsing, and claims scoring, and studies show automation can cut processing times by up to 50% in claims and underwriting workflows.

But in practice, automation without thoughtful integration can strip out the very human judgment that prevents costly mistakes.

One of the most common pitfalls is overreliance on algorithmic rules without sufficient human oversight. 

AI systems trained on historical data can reproduce past biases, miss contextual nuance, or fail to catch emerging patterns that don’t match their training sets, all while creating a false sense of confidence that everything “looks fine.”

In insurance specifically, automation can excel at processing volume: machine learning models have been shown to improve underwriting accuracy by over 30% and to reduce cycle times significantly.

But when underwriting decisions are fully automated without human context, companies can misprice risk, misinterpret nuanced exposures, or reject borderline but profitable business, ultimately leading to higher loss ratios or portfolio shrinkage.

Another consequence of going “full auto” is model brittleness.

AI systems tend to perform well in familiar conditions, but struggle with anomalies or shifts in patterns, such as emerging risk categories or sudden market changes.

Insurance leaders have noted that while a majority (62%) see AI as a way to improve quality and reduce fraud, only 43% of underwriters trust and use automated recommendations regularly.

Hidden gaps automation creates inside underwriting teams

One of the biggest gaps automation creates is context loss.

AI evaluates submissions based on the data it’s fed: revenue, sector codes, historical claims, exposure metrics.

What it doesn’t inherently understand is why those inputs look the way they do right now.

That’s where your underwriting team has traditionally added enormous value.

They’ve been the ones to notice when a broker’s submission behavior subtly shifts quarter to quarter.

Or when a sudden surge of business starts coming from a specific geography, not enough to trip alarms, but enough to raise an eyebrow.

So, when decision flow becomes fully automated, those weak signals tend to get flattened.

Underwrite.In is designed to close that exact gap, not by replacing judgment, but by surfacing context back to your team at the moment it matters.

Our AI-generated insights pull together key details across documents, claims history, and submission patterns into a concise narrative, so your underwriters see the story of the risk, not just the fields.

That allows them to focus on judgment calls sooner, not later.

Another gap that creeps in is risk ownership.

This one is far more dangerous than most leaders realize.

When decisions become heavily automated, responsibility doesn’t disappear overnight. It diffuses.

  • Underwriters begin approving outcomes they didn’t fully shape.

  • Pricing teams start trusting scores they didn’t interrogate.

  • Leadership sees dashboards that look clean and confident, but when something goes wrong, explanations are thin.

Over time, this changes behavior inside your team.

Instead of asking, “Does this risk actually make sense for our portfolio right now?” the question subtly shifts to, “Did the system approve it?”

That shift may sound small, but at scale, it’s costly.

Studies on human-automation interaction consistently show that when people are removed from decision ownership, automation bias creeps in: teams are more likely to accept recommendations without challenge, even when something feels off.

This is pure human psychology, and you cannot blame anyone for it.

If your team doesn’t feel ownership over decisions, they’re less likely to review outcomes deeply, question assumptions, or feed insights back into the process.

Surveys show that while over 70% of insurers are increasing AI investment, fewer than 45% feel confident that their teams can explain or challenge automated underwriting decisions in real time.

Thus, the strongest underwriting organizations do the opposite. They use technology to strengthen ownership, not dilute it. Decisions remain clearly human-led, with AI acting as support, surfacing insights, highlighting concerns, and making rationale explicit.
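
To make that concrete, here is a minimal sketch, in Python, of what “human-led, with AI as support” can look like in practice. It is purely illustrative and is not Underwrite.In’s implementation; every class, function, and field name below is a hypothetical assumption.

```python
# Minimal sketch (illustrative only): the AI layer may attach insights and
# concerns, but the final decision requires a named underwriter and an
# explicit, human-written rationale. All names here are hypothetical.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Submission:
    submission_id: str
    ai_score: Optional[float] = None          # model output, advisory only
    ai_highlights: list[str] = field(default_factory=list)
    decision: Optional[str] = None            # "accept" / "decline" / "refer"
    decided_by: Optional[str] = None          # underwriter who owns the call
    rationale: Optional[str] = None           # explicit reasoning, written by a human


def attach_ai_insights(sub: Submission, score: float, highlights: list[str]) -> None:
    """The AI layer can only add context; it never sets the decision."""
    sub.ai_score = score
    sub.ai_highlights = highlights


def record_decision(sub: Submission, underwriter: str, decision: str, rationale: str) -> None:
    """Every decision must carry a named owner and a written rationale."""
    if not underwriter or not rationale.strip():
        raise ValueError("Decisions require a named underwriter and a written rationale.")
    sub.decision = decision
    sub.decided_by = underwriter
    sub.rationale = rationale


if __name__ == "__main__":
    sub = Submission(submission_id="SUB-001")
    attach_ai_insights(sub, score=0.72,
                       highlights=["Claims frequency rising over the last two quarters"])
    record_decision(sub, underwriter="J. Rivera", decision="refer",
                    rationale="Score is borderline and the broker's submission mix shifted this quarter.")
    print(sub.decided_by, sub.decision)
```

The point of the sketch is the constraint, not the code: insights flow from the system to the person, while ownership and rationale stay with the person.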

Lastly, the most overlooked gap is skill atrophy.

When underwriters stop actively evaluating risk and start supervising systems, something subtle changes in how they think.

  • Over time, instincts weaken.

  • Then, their pattern recognition dulls.

  • And then portfolio intuition slowly fades.

Junior underwriters, in particular, feel this shift first.

Instead of learning why a risk makes sense or doesn’t, they learn how to move submissions through workflows.

So, they become fluent in tools, but less confident in judgment.

That’s dangerous for your business.

Because underwriting isn’t static. Markets move every day, sometimes several times a day, causing risks to mutate and entire categories to emerge where historical data is thin or misleading.

That’s why you want a team that can step in, reason through ambiguity, and make a call even when the system hesitates.

Studies in decision science show that prolonged reliance on automated recommendations can reduce independent decision quality by over 20% over time.


In insurance specifically, leaders consistently report that while automation boosts throughput, it can unintentionally weaken underwriting capability if humans are pushed too far out of the loop.

That’s why the smartest insurers are changing course.

They’re no longer asking, “How much can we automate?”
They’re asking, “How do we make our underwriters better decision-makers?”

This is where augmentation, not automation, matters.

That’s the philosophy behind Underwrite.In.

Instead of turning underwriters into system supervisors, Underwrite.In is designed to act as an AI-powered research and skills-assist layer.

It surfaces clear summaries, highlights key signals, and points your team toward what actually deserves attention, without taking the decision away from them.

Underwriters still decide.

And that difference shows up in outcomes.

Companies that keep humans as decision owners supported by explainable AI insights consistently report stronger risk selection, better loss control, and healthier portfolios over time.

Remember: In underwriting, the most expensive mistakes aren’t slow decisions.

They’re fast, confident decisions made without context, and without a human who feels accountable for the outcome.

AI should never take that accountability away.
It should make it easier to uphold.

Team Underwrite.In