
Published February 13, 2026 in For Teams

Table of Contents
  • Quick Summary
  • Introduction: Why PD/LGD/EAD still drives better lending decisions
  • A simple PD–LGD–EAD framework
  • Step-by-step implementation
  • Real-world example
  • Common mistakes
  • FAQs
  • Next Steps

Credit Risk Modeling Explained: PD, LGD, EAD and Expected Loss in Plain English

  • Updated February 2026
  • 11–15 minute read
  • Lending Analytics
  • banking analytics
  • loan underwriting
  • risk management

🧠 Quick Summary

  • Credit risk modeling is how lenders turn uncertainty into measurable inputs: probability of default (PD), loss given default (LGD), exposure at default (EAD), and expected loss (EL).
  • In practice, EL is the “risk cost” you can plan for: EL = PD × LGD × EAD. The formula is simple, but it is only useful if the assumptions behind it are disciplined and consistent.
  • Strong lending analytics separates measurement (what the risk is) from decisioning (what you’ll do about it), so pricing, covenants, limits, and approvals stay aligned.
  • The fastest path to a usable model is to start with portfolio segmentation, pick a time horizon (12-month vs lifetime), and standardise definitions before you tune accuracy.
  • PD is about likelihood, LGD is about severity, and EAD is about timing and usage. Most errors happen when teams blend these concepts or mix horizons.
  • A modern AI lending platform can speed up data prep and monitoring, but you still need clear credit policy rules and model governance to avoid “black box drift.”
  • If you want results you can defend, validate with back-testing, stress scenarios, and consistent cutoffs, then operationalise through reporting and review cadences.
  • If you’re short on time, remember this: accurate EL isn’t “perfect math”; it’s consistent assumptions that your pricing, approvals, and monitoring can all use end-to-end.

📌 Introduction: Why PD/LGD/EAD still drives better lending decisions

If you’re building lending analytics to make faster, more consistent credit decisions, PD, LGD, and EAD are the core building blocks that keep everything linked. They translate messy real-world borrower behaviour (missed payments, collateral recovery, line utilisation) into inputs you can use across underwriting, pricing, and portfolio monitoring.

The reason this matters now is simple: credit teams are expected to move quicker, defend decisions more clearly, and update views of risk as conditions change. That’s exactly what disciplined credit risk modeling enables, especially when the model is built around definitions your team agrees on and can maintain. And because these inputs feed multiple workflows, it’s worth investing in clean data pipelines and consistent calculations (for example, pulling exposures from core systems via integrations).

🧩 A simple PD–LGD–EAD framework you can apply today

Use this framework to keep your credit risk modeling practical and decision-ready:

  1. Define the unit of risk (loan, borrower, facility, segment) and the horizon (12-month, lifetime).
  2. Estimate PD by segment using your strongest available signals (ratings, financials, behaviour).
  3. Estimate LGD based on seniority, collateral, recovery timelines, and workout costs.
  4. Estimate EAD based on scheduled balances and utilisation behaviour (especially for revolvers).
  5. Calculate expected loss and validate it with reality: back-tests, overrides, and governance.

This approach scales: you can start with a simple expected loss calculator and improve accuracy over time without rewriting the model every quarter. It also plugs directly into downstream workflows like provisioning and reporting, especially if you standardise outputs early and keep assumptions transparent.
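The framework above can be sketched as a minimal expected loss calculator. The segment names and input values below are invented for illustration; in practice each would come from your own portfolio segmentation.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """Risk inputs for one portfolio segment (illustrative names)."""
    name: str
    pd: float    # probability of default over the chosen horizon (e.g. 12-month)
    lgd: float   # loss given default, as a fraction of exposure
    ead: float   # exposure at default, in currency units

def expected_loss(seg: Segment) -> float:
    # EL = PD x LGD x EAD, all three defined over the same horizon
    return seg.pd * seg.lgd * seg.ead

portfolio = [
    Segment("secured_sme", pd=0.02, lgd=0.35, ead=1_000_000),
    Segment("unsecured_corporate", pd=0.01, lgd=0.60, ead=2_500_000),
]

portfolio_el = sum(expected_loss(s) for s in portfolio)
print(f"Portfolio expected loss: {portfolio_el:,.0f}")
```

Starting this simple makes the later refinements (better PD segmentation, recovery-based LGD, behavioural EAD) drop-in replacements rather than rewrites.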

Step-by-step implementation

🧱 Step 1: Set your scope, definitions, and data “source of truth”

Start by deciding exactly what your model is estimating and for whom. In lending analytics, most confusion comes from unclear definitions: what counts as “default,” what the recovery window is, and whether you’re modelling at borrower level or facility level. Lock those down first, then segment your portfolio into groups that behave differently (e.g., secured vs unsecured, SME vs corporate, revolving vs term).

Next, list the minimum data you need: origination terms, balances over time, delinquency/default events, recoveries, and collateral realisations. Even if you’re using an AI lending platform later, your inputs still need a clean lineage and permissions model, especially when multiple teams touch the same assumptions. Treat access control and auditability as part of model design, not an afterthought.

✅ Step 2: Estimate PD in a way your credit team can explain

PD should answer one question: “How likely is default over the chosen horizon for this segment?” A practical way to start is with historical default rates by segment and rating band, then refine using leading indicators (financial ratios, payment behaviour, industry stress). Keep PD interpretable: credit committees trust models they can reason about.

Avoid “precision theatre.” If your dataset is thin, a simple rating-to-PD mapping with conservative calibration is often better than an overfit model. Where an AI lending platform helps is prioritising signals, detecting drift, and automating monitoring, but you still need policy guardrails: when overrides are allowed, how exceptions are documented, and how frequently calibration is reviewed. The goal is a PD you can use consistently in approvals, monitoring, and reporting, not just a PD that scores well in a lab.
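A rating-to-PD mapping of this kind can be sketched as below. The default counts, rating bands, and the floor value are all hypothetical; the point is the shape: observed default rates per band, with a conservative floor so thin data never implies near-zero risk.

```python
# Historical default counts by rating band (hypothetical numbers).
history = {
    "A": {"defaults": 2, "obligors": 1000},
    "B": {"defaults": 15, "obligors": 800},
    "C": {"defaults": 40, "obligors": 500},
}

PD_FLOOR = 0.003  # conservative floor applied when observed rates are very low

def rating_to_pd(band: str) -> float:
    """Map a rating band to a PD via historical default rates, floored."""
    h = history[band]
    observed = h["defaults"] / h["obligors"]
    return max(observed, PD_FLOOR)
```

A mapping like this is easy for a credit committee to audit band by band, which is exactly the interpretability the text argues for.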

🛡️ Step 3: Model LGD based on recovery reality, not optimism

LGD is the share of exposure you don’t recover after default, net of costs. Build it from real recovery outcomes: collateral values at liquidation (not origination), time-to-recovery, seniority, guarantees, and workout expenses. Segment LGD where recoveries behave differently: secured loans typically have distinct recovery curves versus unsecured.

A common approach is to estimate recovery rates by segment and then translate them to LGD (LGD = 1 − recovery rate). But the value comes from being explicit about assumptions: haircut rates, liquidation timelines, legal costs, and whether you include interest and fees. If your LGD feels “too stable,” you may be missing the operating reality that recoveries worsen under stress, something you’ll want to link to scenario testing later. Strong financial risk analytics treats LGD as a disciplined assumption with ranges, not a single magic number.
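One way to make those LGD assumptions explicit is to compute LGD from its components. The function below is a sketch; the haircut, discount rate, and recovery timeline defaults are placeholder assumptions, not calibrated values.

```python
def lgd_from_recovery(
    collateral_value: float,      # value at liquidation, not at origination
    haircut: float,               # forced-sale discount, e.g. 0.25
    workout_costs: float,         # legal and administration costs
    exposure: float,
    discount_rate: float = 0.08,  # placeholder discounting assumption
    years_to_recovery: float = 2.0,
) -> float:
    """LGD = 1 - recovery rate, with explicit haircut, cost, and timing inputs."""
    gross = collateral_value * (1 - haircut) - workout_costs
    # Discount because cash recovered in two years is worth less today.
    pv_recovery = max(gross, 0.0) / (1 + discount_rate) ** years_to_recovery
    recovery_rate = min(pv_recovery / exposure, 1.0)
    return 1.0 - recovery_rate
```

Because each assumption is a named input, a stress run is just the same function with a bigger haircut and a longer recovery timeline, which keeps base and stressed LGD directly comparable.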

💳 Step 4: Get EAD right, especially for revolving facilities

EAD is where many models quietly break, because it depends on product structure and borrower behaviour. For amortising term loans, EAD often follows the schedule (with prepayment assumptions). For revolvers and lines, EAD depends on utilisation patterns: borrowers may draw more as they approach distress, which can materially increase loss.

Start simple: use current balance for term loans, and for revolvers use current drawn plus a conservative credit conversion factor on undrawn commitments. Then refine using historical utilisation increases prior to default in similar segments. If you’re modelling facilities with different repayment structures, your EAD logic should reference the underlying schedule mechanics so you don’t misstate exposure at the worst possible time. In smart lending technology, this is the difference between “a model that looks clean” and “a model that prevents surprises.”
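The “start simple” approach above can be sketched as two EAD rules, one per product structure. The credit conversion factor default is an illustrative placeholder, not a regulatory or calibrated value.

```python
def ead_term_loan(current_balance: float) -> float:
    # Amortising term loan: start from the current balance
    # (refine later with schedule and prepayment assumptions).
    return current_balance

def ead_revolver(drawn: float, limit: float, ccf: float = 0.75) -> float:
    """Drawn amount plus a credit conversion factor on the undrawn commitment.

    ccf=0.75 is a conservative placeholder; refine it from historical
    utilisation increases observed before default in similar segments.
    """
    undrawn = max(limit - drawn, 0.0)
    return drawn + ccf * undrawn
```

Keeping the CCF as an explicit parameter makes it easy to tighten per segment once you have observed pre-default utilisation creep in your own data.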

🔄 Step 5: Calculate expected loss, validate, and connect it to decisions

Once PD, LGD, and EAD are defined, expected loss is straightforward: EL = PD × LGD × EAD. The hard part is ensuring the output is decision-ready. Validate by comparing predicted EL to observed loss outcomes over comparable periods, then document where the model is conservative or aggressive.
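A minimal back-test of the kind described above compares predicted EL to observed losses over matching periods. The structure below (and the "coverage ratio" label) is an illustrative convention, not a standard metric name.

```python
def backtest_el(predicted: list[float], observed: list[float]) -> dict:
    """Compare predicted expected loss with observed losses, period by period."""
    total_pred = sum(predicted)
    total_obs = sum(observed)
    return {
        "predicted": total_pred,
        "observed": total_obs,
        # ratio > 1 means the model was conservative over the window
        "coverage_ratio": total_pred / total_obs if total_obs else float("inf"),
    }

# Hypothetical quarterly figures (same currency units, same horizon).
result = backtest_el(predicted=[120.0, 130.0, 110.0], observed=[100.0, 150.0, 90.0])
```

Documenting this ratio per segment over time gives you the "where is the model conservative or aggressive" record the validation step calls for.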

Next, connect EL to the workflows that matter: pricing, limit setting, approvals, and monitoring. This is where lending analytics becomes operational leverage, because the same assumptions inform both “Should we do this deal?” and “What should we do if performance changes?” A practical move is to embed EL outputs into a lending decision model so relationship managers and credit teams see the same picture. If you want to avoid spreadsheet sprawl, centralise drivers, assumptions, and scenario outputs in Model Reef so updates don’t fragment across versions and inboxes.

Real-world example: turning EL into a defensible approval

A mid-market lender is reviewing a secured working-capital facility. The credit memo reads well, but the portfolio has seen rising delinquencies in the borrower’s sector. Using credit risk modeling, the team segments the borrower into a peer set, estimates PD based on updated financials and sector stress signals, models LGD with collateral haircuts and workout costs, and calculates EAD with a conservative utilisation uplift.

The result: expected loss is higher than last quarter, but still acceptable if the loan is structured correctly. The lender updates pricing and introduces tighter monitoring triggers, then runs stress scenarios to understand how quickly headroom could compress under revenue shocks. Instead of debating opinions, the team debates assumptions: faster, clearer, and easier to document.
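A numeric version of this example might look like the following. Every figure below is invented for illustration; the shape of the calculation (conservative utilisation uplift on the undrawn portion, then EL = PD × LGD × EAD) is the point.

```python
# Hypothetical inputs for the working-capital facility in the example above.
pd_rate = 0.04      # segment PD, uplifted for sector stress
lgd = 0.40          # after collateral haircuts and workout costs
drawn, limit = 600_000, 1_000_000
ccf = 0.80          # conservative utilisation uplift on the undrawn portion

ead = drawn + ccf * (limit - drawn)   # exposure if the borrower draws down
el = pd_rate * lgd * ead              # expected loss in currency units
print(f"EAD = {ead:,.0f}, EL = {el:,.0f}")
```

With the assumptions written down like this, the pricing and covenant discussion becomes "which input do you disagree with?" rather than a debate over the headline number.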

⚠️ Common mistakes to avoid (and what to do instead)

The most common mistake in credit risk modeling is mixing horizons: using a lifetime PD with a 12-month LGD (or vice versa) and ending up with an EL that nobody can interpret. Another frequent issue is over-averaging: one LGD for every secured loan, even when collateral type and seniority clearly drive different recoveries. Teams also underestimate EAD for revolvers by ignoring utilisation creep as distress approaches.

Avoid these by standardising definitions early, segmenting where behaviour differs, and creating a simple validation routine that compares predicted losses to observed outcomes. Finally, don’t leave the model disconnected from decisions: if EL isn’t reflected in pricing and structure, you’re doing analytics without impact (a gap you can close by tying it into loan pricing mechanics).

❓ FAQs

Is expected loss the same as loan loss provisions?

Not exactly. Expected loss is a risk estimate based on PD, LGD, and EAD; provisions are an accounting outcome shaped by standards, governance, and portfolio classification. Use EL as an input to better forecasting and decisioning, then align it to your provisioning methodology with consistent horizons and documentation. If you’re unsure, start by making your EL calculation transparent and repeatable, then map it into reporting.

What if we don’t have enough internal default data to estimate PD?

Use the best available proxy: external benchmarks, rating mappings, peer segments, and conservative calibration. The objective is consistency and defendability, not perfection. As your dataset grows, refine segmentation and recalibration rather than rebuilding from scratch. A clear override policy and review cadence will protect the model from false precision.

How should PD and LGD change under stress scenarios?

Treat stress as a controlled adjustment: PD typically increases as borrower resilience drops, while LGD often worsens due to lower collateral values and longer recovery timelines. Define scenarios, set adjustment ranges, and document logic so stakeholders can challenge assumptions constructively. The key is to keep “base” and “stress” outputs comparable so decisions remain consistent.
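One way to keep base and stressed outputs comparable is to express the stress as a documented transformation of the base inputs. The multiplier and add-on below are illustrative placeholders; in practice they come from your defined scenarios and agreed ranges.

```python
def stress(pd_base: float, lgd_base: float,
           pd_mult: float = 1.5, lgd_add: float = 0.10) -> tuple[float, float]:
    """Apply a documented stress adjustment: scale PD, add to LGD.

    The pd_mult and lgd_add values are illustrative; real ones come from
    defined scenarios with ranges stakeholders can challenge. Both outputs
    are capped at 1.0 so stressed inputs stay valid probabilities/fractions.
    """
    return min(pd_base * pd_mult, 1.0), min(lgd_base + lgd_add, 1.0)

base_pd, base_lgd = 0.02, 0.35
stressed_pd, stressed_lgd = stress(base_pd, base_lgd)
```

Because the stressed numbers are derived from the base numbers by a named rule, any EL comparison across scenarios is an apples-to-apples one.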

Does an AI lending platform replace credit policy?

No. An AI lending platform can accelerate signal detection and monitoring, but policy defines what risks you accept and why. Use AI to surface patterns, triage reviews, and flag drift, then keep humans accountable for the decision rules and exception handling. If you want a scalable workflow, centralise assumptions, approvals, and scenario versions so the organisation learns over time rather than repeating one-off analyses.

🚀 Next Steps: make your model usable every week

Start small and operational: build a clean PD/LGD/EAD sheet for one segment, then validate it against observed outcomes. Once the logic is stable, connect it to decisions (pricing, covenants, and monitoring) so lending analytics becomes a workflow, not a slide.

A strong next move is to productionise the model so updates are fast and controlled. Model Reef helps here by keeping drivers, assumptions, and outputs in one governed environment, so when you refresh inputs weekly, you don’t create a dozen conflicting spreadsheet versions across teams. From there, expand segmentation, add scenario layers, and formalise governance so your financial risk analytics scales with portfolio growth.

Start using automated modeling today.

Discover how teams use Model Reef to collaborate, automate, and make faster financial decisions, or start your own free trial to see it in action.


Trusted by clients with over US$40bn under management.