AI is opening insurance doors you didn’t know existed
Your competitors are already in there
Today, we’re diving into:
Why traditional underwriting no longer works
Real ways AI is rewriting the rules of eligibility and pricing
What your team should not do with AI

Hello!
You know what’s funny?
For decades, risk modeling was sold as “objective.” At least, that’s how it looked on the slide decks.
But somewhere between the spreadsheets and the actuarial tables, something got lost.
Founders can relate, right? Build for the “average” user too long, and suddenly your team is blind to the ones who don’t fit the model.
That’s the flaw AI is now exposing. And, before we jump to the fix…
Answer this:
If underwriting models were rebuilt from scratch today, what should define fairness?
Now that we have your answer locked in, let’s give you the fix:
How old risk models exclude millions of customers:
Traditional risk pools leaned heavily on actuarial tables and static rating factors: age, occupation, postal code, medical disclosures, and credit scores.
While effective for mass-market segmentation, these inputs created adverse selection traps for anyone who didn’t fit the “standard risk” archetype.
Applicants with thin-file credit histories, non-traditional employment (like gig workers or seasonal labor), or incomplete medical records were disproportionately flagged as substandard.
For carriers, this translated into either punitive pricing (loading premiums to hedge “unknown” risk) or outright declinations.
In emerging markets, where formal credit bureaus and longitudinal health data are limited, the impact was even more stark.
Entire categories like smallholder farmers, micro-entrepreneurs, and informal economy workers were rendered “uninsurable” by legacy models.
Did you know?
From 2016 to 2023, nearly 70% of global losses from natural disasters went uninsured, equating to roughly $260 billion in annual uninsured losses.
So, how is AI impacting eligibility and fair pricing?
There are 3 main pain points that AI addresses:
Data signals
AI expands the definition of risk beyond traditional actuarial inputs.
By ingesting alternative data, ranging from transaction histories, mobile wallet usage, and utility bill payments to IoT telematics and even satellite imagery, underwriters can build far more granular risk profiles.
For instance:
Satellite data is already being used in agricultural insurance to track rainfall patterns and crop cycles, while telematics in auto insurance monitors driving behavior in real time.
Similarly, in emerging markets, mobile payment data has become a credible proxy for creditworthiness, expanding microinsurance access to millions of low-income households.
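To make the idea concrete, here is a minimal sketch of blending alternative signals (utility payment behavior, mobile-wallet history, transaction activity) into a single risk score for a thin-file applicant. The signal names, weights, and thresholds are all illustrative assumptions, not calibrated to any real book of business or to Underwrite.In's models.

```python
from dataclasses import dataclass


@dataclass
class ApplicantSignals:
    """Hypothetical alternative-data signals for one applicant."""
    on_time_utility_ratio: float   # share of utility bills paid on time (0-1)
    mobile_wallet_months: int      # months of active mobile-wallet history
    avg_monthly_txn_count: float   # average transactions per month


def thin_file_risk_score(s: ApplicantSignals) -> float:
    """Blend alternative signals into a 0-1 risk score (lower = safer).

    Weights are illustrative only.
    """
    payment_risk = 1.0 - s.on_time_utility_ratio
    # Longer wallet history reduces uncertainty; cap the benefit at 24 months.
    history_factor = max(0.0, 1.0 - min(s.mobile_wallet_months, 24) / 24)
    # Very low activity is weak evidence either way, so it carries extra risk.
    activity_factor = 1.0 if s.avg_monthly_txn_count < 5 else 0.2
    return 0.5 * payment_risk + 0.3 * history_factor + 0.2 * activity_factor


# An applicant with no credit file but a strong payment footprint
applicant = ApplicantSignals(on_time_utility_ratio=0.95,
                             mobile_wallet_months=30,
                             avg_monthly_txn_count=40)
score = thin_file_risk_score(applicant)
```

The point of the sketch: an applicant a bureau-based model would decline for having no file can still land in a low-risk band once behavioral signals are admitted as evidence.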

Price models
Legacy underwriting wasn’t just data-poor; it was pricing-rigid.
Carriers leaned on generalized linear models (GLMs) and credibility theory, assigning risk via broad rating factors (age, ZIP, occupation class).
Those models assumed risk exposures were linear and independent, which often isn’t true.
A 25-year-old gig driver and a 25-year-old accountant in the same city don’t carry the same risk, but legacy pricing often lumps them together.
That’s where things started to break.
Take Progressive’s Snapshot Program in the US. Instead of locking drivers into static rating buckets, they layered telematics into the underwriting process.
Every hard brake, late-night trip, or extra mile behind the wheel became a behavioral exposure indicator, feeding directly into pricing decisions.
Here’s what happened then:
Safer drivers, who had been subsidizing higher-risk segments under blunt rating factors, finally saw 20–25% premium relief.
Loss frequency among Snapshot cohorts dropped by roughly 30%.
For underwriters, the takeaway is:
When they move beyond rigid GLMs and embrace experience-based credibility, they’re not just offering fairer rates; they’re also tightening risk selection and stabilizing loss ratios.
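The Snapshot idea above can be sketched in a few lines: start from a static bucket premium, then adjust it with behavioral exposure indicators. The multipliers and bounds below are invented for illustration; they are not Progressive's actual rating plan.

```python
def telematics_adjusted_premium(base_premium: float,
                                hard_brakes_per_100mi: float,
                                night_mile_share: float,
                                annual_miles: float) -> float:
    """Adjust a static rating-bucket premium with telematics signals.

    All multipliers are illustrative assumptions.
    """
    factor = 1.0
    factor += 0.04 * hard_brakes_per_100mi    # harsh-braking surcharge
    factor += 0.30 * night_mile_share         # late-night driving surcharge
    factor *= annual_miles / 12_000           # mileage-proportional exposure
    # Bound the adjustment so no driver pays more than 2x or less than 0.6x base.
    factor = min(max(factor, 0.6), 2.0)
    return round(base_premium * factor, 2)


# Two 25-year-olds in the same static bucket, very different behavior:
safe_driver = telematics_adjusted_premium(1200.0, 0.5, 0.05, 8_000)
risky_driver = telematics_adjusted_premium(1200.0, 4.0, 0.30, 15_000)
```

Under a pure bucket model both drivers pay the same base premium; once behavior enters the rate, the safe driver gets relief and the risky driver carries their own exposure instead of being subsidized.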

Where Underwrite.In fits
Here’s a reality check: all the AI breakthroughs in the world fall apart if the data feeding them is a mess.
Eligibility models, dynamic pricing, bias checks: they’re only as strong as their inputs.
And let’s be honest, most underwriting teams are still drowning in scattered inboxes, messy PDFs, and half-filled broker portals.
That’s exactly where Underwrite.In comes in.
We capture at the edge: Every submission from broker emails, PDFs, or portals is automatically ingested the moment it arrives, with no manual forwarding.
You get structure: Attachments are parsed, key fields extracted, and data mapped into structured formats that your team (and your models) can actually query.
We help you trace everything: Every datapoint links back to its original source doc, creating a watertight audit trail that satisfies regulators and keeps bias checks transparent.
You get workflow-ready data: Instead of raw files sitting in inboxes, your team sees a single, clean record they can underwrite, price, or feed into AI models.
Underwrite.In doesn’t replace your team’s judgment or your AI tools; it makes sure they’re both running on clean, defensible, ready-to-use data.
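The capture-structure-trace flow above boils down to one data-modeling idea: every extracted field carries a pointer back to its source document. Here is a minimal sketch of such a record; the class and field names are hypothetical, not Underwrite.In's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class Provenance:
    """Where an extracted datapoint came from."""
    source_doc: str   # e.g. a broker email attachment filename
    page: int


@dataclass
class SubmissionField:
    name: str
    value: str
    provenance: Provenance   # every datapoint links back to its source


@dataclass
class StructuredSubmission:
    """One clean, queryable record built from scattered submission files."""
    submission_id: str
    fields: list[SubmissionField] = field(default_factory=list)

    def audit_trail(self) -> dict[str, str]:
        """Map each extracted field name to its original source document."""
        return {f.name: f.provenance.source_doc for f in self.fields}


sub = StructuredSubmission("SUB-001")
sub.fields.append(SubmissionField("insured_name", "Acme Ltd",
                                  Provenance("broker_email_attachment.pdf", 1)))
trail = sub.audit_trail()
```

Because provenance travels with the value, a compliance review can walk from any priced datapoint straight back to the broker document it came from.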

Bias reduction
Advanced AI frameworks embed bias-detection guardrails that identify spurious correlations (e.g., gender, ZIP code, or ethnicity proxies) before they calcify into systemic pricing disparities.
In practice, this involves:
Fairness audits at the model-training stage
Explainability dashboards that show why certain risk factors are weighted more heavily
Regulatory compliance alignment, ensuring adherence to NAIC’s AI principles and the EU’s AI Act
And the Underwrite.In team has all of this covered!
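A fairness audit like the one described above can start very simply: compare approval rates across two groups and flag large gaps. The sketch below uses the "four-fifths rule" heuristic (flagging ratios below 0.8), a common screening threshold; treat it as an illustrative check, not a legal standard or a complete audit.

```python
def approval_rate(decisions: list[bool]) -> float:
    """Share of approvals in a list of approve/decline decisions."""
    return sum(decisions) / len(decisions)


def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower group approval rate to the higher one (0-1)."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    low, high = min(rate_a, rate_b), max(rate_a, rate_b)
    return low / high if high > 0 else 1.0


def fails_fairness_audit(group_a: list[bool],
                         group_b: list[bool],
                         threshold: float = 0.8) -> bool:
    """Flag the model if the disparate impact ratio falls below the threshold."""
    return disparate_impact_ratio(group_a, group_b) < threshold


# Illustrative audit: 80% approvals in group A vs 50% in group B
group_a = [True] * 8 + [False] * 2
group_b = [True] * 5 + [False] * 5
ratio = disparate_impact_ratio(group_a, group_b)
flagged = fails_fairness_audit(group_a, group_b)
```

In a real pipeline this check would run at the model-training stage, per protected attribute and per proxy (like ZIP code), with the results surfaced on the explainability dashboard.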

Don’ts when using AI in underwriting
AI is rewriting underwriting, but it’s not a free pass.
Teams that rush adoption without guardrails often end up in compliance trouble or, worse, with reputational damage.
Here are some clear don’ts your team should keep in mind:
Don’t treat AI as a black box
If your pricing or eligibility models can’t explain their outputs, regulators will ask the hard questions your team can’t answer.
Don’t feed it messy data
Scattered submissions and unstructured broker packs will bias results before models even start learning.
Don’t ignore bias audits
AI can unintentionally replicate historical discrimination (gender, ZIP code, income class).
Don’t over-automate judgment
AI should accelerate triage and pricing, not replace your team’s expertise in nuanced or complex cases.
Don’t skip the audit trail
Without traceable source links, your team is exposed in every compliance review or legal dispute.
So, if your team is serious about scaling underwriting with AI without drowning in rework or compliance risk, Underwrite.In is the connective tissue you’ve been missing.
Team Underwrite.In