📌 Introduction: Why PD/LGD/EAD still drives better lending decisions
If you’re building lending analytics to make faster, more consistent credit decisions, probability of default (PD), loss given default (LGD), and exposure at default (EAD) are the core building blocks that keep everything linked. They translate messy real-world borrower behaviour (missed payments, collateral recovery, line utilisation) into inputs you can use across underwriting, pricing, and portfolio monitoring.
The reason this matters now is simple: credit teams are expected to move quicker, defend decisions more clearly, and update views of risk as conditions change. That’s exactly what disciplined credit risk modeling enables, especially when the model is built around definitions your team agrees on and can maintain. And because these inputs feed multiple workflows, it’s worth investing in clean data pipelines and consistent calculations (for example, pulling exposures from core systems via integrations).
🧩 A simple PD–LGD–EAD framework you can apply today
Use this framework to keep your credit risk modeling practical and decision-ready:
- Define the unit of risk (loan, borrower, facility, segment) and the horizon (12-month, lifetime).
- Estimate PD by segment using your strongest available signals (ratings, financials, behaviour).
- Estimate LGD based on seniority, collateral, recovery timelines, and workout costs.
- Estimate EAD based on scheduled balances and utilisation behaviour (especially for revolvers).
- Calculate expected loss and validate it with reality: back-tests, overrides, and governance.
This approach scales: you can start with a simple expected loss calculator and improve accuracy over time without rewriting the model every quarter. It also plugs directly into downstream workflows like provisioning and reporting, especially if you standardise outputs early and keep assumptions transparent.
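To see how small that starting point can be, here’s a minimal Python sketch of an expected loss calculator. The segment names, rates, and exposures are illustrative placeholders, not benchmarks:

```python
# Minimal expected loss sketch: EL = PD x LGD x EAD, per segment.
# All segment names, rates, and exposures below are illustrative placeholders.

segments = [
    {"name": "secured_sme_term", "pd": 0.025, "lgd": 0.35, "ead": 4_000_000},
    {"name": "unsecured_corp_revolver", "pd": 0.015, "lgd": 0.60, "ead": 2_500_000},
]

for seg in segments:
    el = seg["pd"] * seg["lgd"] * seg["ead"]
    print(f"{seg['name']}: expected loss = {el:,.0f}")
```

Even at this size, the structure forces the useful questions: where do these rates come from, and who owns them?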
Step-by-step implementation
🧱 Step 1: Set your scope, definitions, and data “source of truth”
Start by deciding exactly what your model is estimating and for whom. In lending analytics, most confusion comes from unclear definitions: what counts as “default,” what the recovery window is, and whether you’re modelling at borrower level or facility level. Lock those down first, then segment your portfolio into groups that behave differently (e.g., secured vs unsecured, SME vs corporate, revolving vs term).
Next, list the minimum data you need: origination terms, balances over time, delinquency/default events, recoveries, and collateral realisations. Even if you’re using an AI lending platform later, your inputs still need a clean lineage and permissions model, especially when multiple teams touch the same assumptions. Treat access control and auditability as part of model design, not an afterthought.
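One way to make those definitions hard to drift from is to encode them in a single configuration object every pipeline reads. The sketch below is one possible shape, assuming Python; the field values are examples of decisions to lock in, not recommendations:

```python
from dataclasses import dataclass

# A single "source of truth" for model scope. Frozen so nobody mutates
# definitions mid-pipeline; values shown are examples, not recommendations.

@dataclass(frozen=True)
class ModelScope:
    unit_of_risk: str            # "facility" or "borrower"
    horizon_months: int          # e.g. 12 for a 12-month PD
    default_definition: str      # what counts as "default"
    recovery_window_months: int  # how long post-default recoveries count

scope = ModelScope(
    unit_of_risk="facility",
    horizon_months=12,
    default_definition="90+ days past due or unlikely to pay",
    recovery_window_months=24,
)
print(scope)
```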
✅ Step 2: Estimate PD in a way your credit team can explain
PD should answer one question: “How likely is default over the chosen horizon for this segment?” A practical way to start is with historical default rates by segment and rating band, then refine using leading indicators (financial ratios, payment behaviour, industry stress). Keep PD interpretable: credit committees trust models they can reason about.
Avoid “precision theatre.” If your dataset is thin, a simple rating-to-PD mapping with conservative calibration is often better than an overfit model. Where an AI lending platform helps is in prioritising signals, detecting drift, and automating monitoring, but you still need policy guardrails: when overrides are allowed, how exceptions are documented, and how frequently calibration is reviewed. The goal is a PD you can use consistently in approvals, monitoring, and reporting, not just a PD that scores well in a lab.
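As a sketch of that starting point, the snippet below derives historical default rates per segment and rating band from a stand-in observation list. The data is invented, and the Laplace smoothing is just one simple way to keep thin cells from producing overconfident PDs:

```python
from collections import defaultdict

# Illustrative PD estimation: default rate per (segment, rating band).
# `history` stands in for loan-level observations; every row is made up.
history = [
    # (segment, rating_band, defaulted_within_horizon)
    ("secured_sme", "BB", False), ("secured_sme", "BB", True),
    ("secured_sme", "BB", False), ("unsecured_sme", "B", True),
    ("unsecured_sme", "B", False), ("unsecured_sme", "B", False),
]

counts = defaultdict(lambda: [0, 0])  # (segment, band) -> [defaults, observations]
for segment, band, defaulted in history:
    counts[(segment, band)][0] += int(defaulted)
    counts[(segment, band)][1] += 1

for key, (d, n) in sorted(counts.items()):
    # Laplace smoothing keeps thin cells away from 0% or overconfident PDs.
    pd_hat = (d + 1) / (n + 2)
    print(f"{key}: {d}/{n} defaults -> smoothed PD of {pd_hat:.1%}")
```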
🛡️ Step 3: Model LGD based on recovery reality, not optimism
LGD is the share of exposure you don’t recover after default, net of costs. Build it from real recovery outcomes: collateral values at liquidation (not origination), time-to-recovery, seniority, guarantees, and workout expenses. Segment LGD where recoveries behave differently-secured loans typically have distinct recovery curves versus unsecured.
A common approach is to estimate recovery rates by segment and then translate them to LGD (LGD = 1 − recovery rate). But the value comes from being explicit about assumptions: haircut rates, liquidation timelines, legal costs, and whether you include interest and fees. If your LGD feels “too stable,” you may be missing the operating reality that recoveries worsen under stress, something you’ll want to link to scenario testing later. Strong financial risk analytics treats LGD as a disciplined assumption with ranges, not a single magic number.
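Here is a hedged sketch of that build-up: collateral is haircut, cost-loaded, and discounted for time-to-recovery before being converted to LGD. Every input, including the discount rate, is a hypothetical value to replace with your own assumptions:

```python
# Illustrative LGD from explicit recovery assumptions (all inputs hypothetical).
# LGD = 1 - net recovery rate.

def lgd_from_recovery(
    collateral_value: float,   # value at expected liquidation, not origination
    haircut: float,            # e.g. 0.30 = realise 70% of appraised value
    workout_costs: float,      # legal and administrative costs, absolute
    exposure: float,           # exposure at default
    months_to_recover: int,    # assumed recovery timeline
    annual_discount_rate: float = 0.08,  # assumed cost of carrying the workout
) -> float:
    gross = collateral_value * (1 - haircut) - workout_costs
    discounted = max(gross, 0.0) / (1 + annual_discount_rate) ** (months_to_recover / 12)
    recovery_rate = min(discounted / exposure, 1.0)
    return 1.0 - recovery_rate

# Example: 1.0m exposure, 0.8m collateral, 30% haircut, 50k costs, 18 months.
print(f"LGD of roughly {lgd_from_recovery(800_000, 0.30, 50_000, 1_000_000, 18):.1%}")
```

Making the timeline and discount rate explicit is what lets you express LGD as a range later: stress the haircut and the months, and the range falls out.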
💳 Step 4: Get EAD right, especially for revolving facilities
EAD is where many models quietly break, because it depends on product structure and borrower behaviour. For amortising term loans, EAD often follows the schedule (with prepayment assumptions). For revolvers and lines, EAD depends on utilisation patterns: borrowers may draw more as they approach distress, which can materially increase loss.
Start simple: use current balance for term loans, and for revolvers use current drawn plus a conservative credit conversion factor on undrawn commitments. Then refine using historical utilisation increases prior to default in similar segments. If you’re modelling facilities with different repayment structures, your EAD logic should reference the underlying schedule mechanics so you don’t misstate exposure at the worst possible time. In smart lending technology, this is the difference between “a model that looks clean” and “a model that prevents surprises.”
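A minimal sketch of that split is below; the flat 75% credit conversion factor is purely an assumption for illustration and should be calibrated to your own pre-default utilisation history:

```python
# Illustrative EAD logic: balance-driven for term loans, CCF-based for revolvers.

def ead_term_loan(current_balance: float) -> float:
    # Simple starting point: current balance. Refine later with prepayment
    # and amortisation assumptions from the repayment schedule.
    return current_balance

def ead_revolver(drawn: float, limit: float, ccf: float = 0.75) -> float:
    # Drawn amount plus a conversion factor on the undrawn commitment,
    # reflecting utilisation creep as borrowers approach distress.
    undrawn = max(limit - drawn, 0.0)
    return drawn + ccf * undrawn

print(f"Term loan EAD: {ead_term_loan(1_200_000):,.0f}")
print(f"Revolver EAD:  {ead_revolver(drawn=600_000, limit=1_000_000):,.0f}")
```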
🔄 Step 5: Calculate expected loss, validate, and connect it to decisions
Once PD, LGD, and EAD are defined, expected loss is straightforward: EL = PD × LGD × EAD. The hard part is ensuring the output is decision-ready. Validate by comparing predicted EL to observed loss outcomes over comparable periods, then document where the model is conservative or aggressive.
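That comparison is easy to make routine. In the sketch below, all predicted and observed figures are placeholders; the point is the shape of the check, not the numbers:

```python
# Illustrative back-test: predicted EL vs observed loss per (segment, period).
# All figures are placeholders.

predicted = {("secured_sme", "2023"): 120_000, ("corporate_revolver", "2023"): 310_000}
observed  = {("secured_sme", "2023"): 141_000, ("corporate_revolver", "2023"): 255_000}

for key in sorted(predicted):
    pred, obs = predicted[key], observed.get(key, 0.0)
    ratio = obs / pred if pred else float("inf")
    # ratio < 1: losses came in under prediction (model ran conservative);
    # ratio > 1: losses exceeded prediction (model ran aggressive).
    leaning = "conservative" if ratio < 1 else "aggressive"
    print(f"{key}: predicted {pred:,.0f} vs observed {obs:,.0f} "
          f"(ratio {ratio:.2f}, model ran {leaning})")
```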
Next, connect EL to the workflows that matter: pricing, limit setting, approvals, and monitoring. This is where lending analytics becomes operational leverage, because the same assumptions inform both “Should we do this deal?” and “What should we do if performance changes?” A practical move is to embed EL outputs into a lending decision model so relationship managers and credit teams see the same picture. If you want to avoid spreadsheet sprawl, centralise drivers, assumptions, and scenario outputs in Model Reef so updates don’t fragment across versions and inboxes.
Real-world example: turning EL into a defensible approval
A mid-market lender is reviewing a secured working-capital facility. The credit memo reads well, but the portfolio has seen rising delinquencies in the borrower’s sector. Using credit risk modeling, the team assigns the borrower to a peer segment, estimates PD based on updated financials and sector stress signals, models LGD with collateral haircuts and workout costs, and calculates EAD with a conservative utilisation uplift.
The result: expected loss is higher than last quarter, but still acceptable if the loan is structured correctly. The lender updates pricing and introduces tighter monitoring triggers, then runs stress scenarios to understand how quickly headroom could compress under revenue shocks. Instead of debating opinions, the team debates assumptions: faster, clearer, and easier to document.
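One lightweight way to run that kind of stress pass is to apply scenario multipliers to baseline PD and LGD and recompute EL. The baseline values and multipliers below are hypothetical, not calibrated:

```python
# Illustrative stress layer: scale baseline PD/LGD per scenario, recompute EL.
# Baseline values and multipliers are hypothetical, not calibrated.

baseline = {"pd": 0.030, "lgd": 0.45, "ead": 5_000_000}
scenarios = {"base": (1.0, 1.0), "mild_downturn": (1.5, 1.10), "severe": (2.5, 1.25)}

for name, (pd_mult, lgd_mult) in scenarios.items():
    stressed_pd = min(baseline["pd"] * pd_mult, 1.0)   # cap probabilities at 100%
    stressed_lgd = min(baseline["lgd"] * lgd_mult, 1.0)
    el = stressed_pd * stressed_lgd * baseline["ead"]
    print(f"{name}: EL = {el:,.0f}")
```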
⚠️ Common mistakes to avoid (and what to do instead)
The most common mistake in credit risk modeling is mixing horizons: using a lifetime PD with a 12-month LGD (or vice versa) and ending up with an EL that nobody can interpret. Another frequent issue is over-averaging: one LGD for every secured loan, even when collateral type and seniority clearly drive different recoveries. Teams also underestimate EAD for revolvers by ignoring utilisation creep as distress approaches.
Avoid these by standardising definitions early, segmenting where behaviour differs, and creating a simple validation routine that compares predicted losses to observed outcomes. Finally, don’t leave the model disconnected from decisions: if EL isn’t reflected in pricing and structure, you’re doing analytics without impact (a gap you can close by tying it into loan pricing mechanics).
🚀 Next Steps: make your model usable every week
Start small and operational: build a clean PD/LGD/EAD sheet for one segment, then validate it against observed outcomes. Once the logic is stable, connect it to decisions (pricing, covenants, and monitoring) so lending analytics becomes a workflow, not a slide.
A strong next move is to productionise the model so updates are fast and controlled. Model Reef helps here by keeping drivers, assumptions, and outputs in one governed environment, so when you refresh inputs weekly, you don’t create a dozen conflicting spreadsheet versions across teams. From there, expand segmentation, add scenario layers, and formalise governance so your financial risk analytics scales with portfolio growth.