How to Run Sensitivities for an Investment Case (What to Flex First) | ModelReef

Published February 13, 2026 in For Teams

Table of Contents
  • Overview
  • Before You Begin
  • Step-by-Step Instructions
  • Cases & Gotchas
  • Quick Illustration
  • FAQs
  • Next Steps

How to Run Sensitivities for an Investment Case (What to Flex First)

  • Updated February 2026
  • 11–15 minute read
  • Investment Screening
  • Decision-making
  • Financial modelling
  • Sensitivity analysis

🧭 Overview / What This Guide Covers

Sensitivity analysis tells you which assumptions your investment screening case is truly betting on. This guide shows a practical investment screening method for running sensitivities quickly: what to flex first, how to set ranges, and how to interpret outputs, so you can separate a robust deal from a fragile one. It’s for finance teams, investors, and corporate development teams who need fast, defensible investment evaluation before deeper diligence. You’ll build a shortlist of value drivers, run one-way and two-way tests, and summarise results in decision-ready language that fits your overall investment screening process. Outcome: clearer go/no-go decisions and better questions for management.

✅ Before You Begin

Before you begin, ensure you have a base-case cash flow model, however light, that reflects the economics you plan to underwrite. For investment opportunity screening, this can be a driver sketch; for project investment screening, it might be a capex schedule plus a savings/benefits profile. Define the output you will judge (NPV, IRR, payback, DSCR, headroom) and the time horizon. List the assumptions that materially move value: price, volume, churn, utilisation, gross margin, working capital days, capex timing, or cost inflation. Next, decide what “movement” is meaningful (e.g., an NPV swing greater than 20%, or payback shifting beyond an approval threshold) so your investment screening steps are consistent. Finally, set sensible ranges: use historical variability, market benchmarks, or credible downside constraints, not arbitrary ±10%. You’re ready when your drivers are explicit, your formulas are transparent, and you can trace each input back to evidence. If you’re building a repeatable investment screening model across deals, lock the structure first so only assumptions change.

🛠️ Step-by-Step Instructions

Step 1: Choose the outputs and define the base case

Start by selecting the single primary metric that will decide the case (usually NPV or IRR), plus one supporting metric (payback, downside cash minimum, or covenant headroom). Write the base-case assumptions in plain English before you touch sensitivities: what volumes grow to, what margins converge to, what reinvestment is required. This prevents you from “moving numbers” without noticing you changed the story. Keep the base case coherent: if revenue increases, working capital and operating costs often move too. Then confirm the method you’re using to judge value (NPV for absolute value, IRR for rate-of-return comparisons, payback for liquidity constraints) so your investment evaluation stays aligned with governance. If you need a quick decision rule for method selection, anchor it to your project investment appraisal standard so stakeholders don’t debate metrics instead of risk and returns.
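The three metrics named above can be sketched in a few lines of Python. This is a minimal illustration, not ModelReef functionality: the cash flows, discount rate, and bisection bounds below are invented for the example.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the upfront (time-0) flow."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    """Internal rate of return via bisection (assumes one sign change
    in the cash flows, so NPV falls as the rate rises)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # NPV still positive: the breakeven rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cash_flows):
    """First period at which cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None  # never pays back within the horizon

flows = [-1000, 300, 400, 500, 600]  # hypothetical project
base_npv = npv(0.10, flows)          # absolute value at a 10% hurdle
```

Note how the three metrics answer different questions for the same cash flows: NPV gives absolute value at your hurdle rate, IRR gives the breakeven rate for comparisons, and payback flags the liquidity constraint.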

Step 2: Identify the first assumptions to flex

Not all assumptions deserve a sensitivity. Start with the value drivers that are both uncertain and material. A practical ranking is: (1) unit economics (price, volume, churn, conversion), (2) margin levers (COGS, labour efficiency, supplier terms), (3) timing levers (capex phasing, ramp speed, payment terms), then (4) macro levers (inflation, rates, FX) if relevant. For each driver, write a high and low case with a short rationale explaining why it is plausible. Avoid generic ranges; use evidence-based bounds so the sensitivity tells you something real. This is disciplined investment analysis: you’re stress-testing assumptions, not hunting for a target answer. If you want a standard way to select drivers, ranges, and outputs for repeated financial investment screening, use a sensitivity template that keeps inputs consistent and auditable.
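One way to keep bounds explicit and auditable is to store each driver with its low/base/high values and a one-line rationale, then validate the ranges before running anything. The driver names, numbers, and rationales below are placeholders, not recommendations.

```python
# Illustrative driver shortlist with evidence-based bounds and rationale.
drivers = {
    "price":        {"low": 90,   "base": 100,  "high": 110,
                     "why": "contracted floor vs. list-price ceiling"},
    "volume":       {"low": 800,  "base": 1000, "high": 1200,
                     "why": "worst observed quarter vs. capacity limit"},
    "gross_margin": {"low": 0.35, "base": 0.40, "high": 0.45,
                     "why": "supplier quote range"},
}

def check_bounds(drivers):
    """Reject any range that is not ordered low <= base <= high."""
    return [name for name, d in drivers.items()
            if not (d["low"] <= d["base"] <= d["high"])]

bad_ranges = check_bounds(drivers)  # empty list means all ranges are usable
```

Keeping the rationale next to the numbers means every sensitivity result can be traced back to evidence, which is what makes the analysis defensible in review.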

Step 3: Run one-way sensitivities and rank the drivers

Run one-way sensitivities first: change one driver, hold everything else constant, and record the impact on your primary metric. Capture the results as a simple tornado ranking: biggest swing at the top, smallest at the bottom. The goal is to learn what the case is actually underwriting. If two drivers explain most of the movement, you’ve found your diligence priorities. Be careful with correlated drivers: price may affect volume, and volume may affect costs; note dependencies rather than forcing everything independent. In a structured investment screening model, it helps to centralise drivers and feed them into formulas so each sensitivity is a single edit, not a manual rework. If you’re using Model Reef, you can build a reusable sensitivity pack and clone it across opportunities, keeping the investment screening method consistent every time.
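The one-way loop and tornado ranking described above can be sketched as follows. The toy NPV model, base values, and ranges are invented for illustration; the point is that with centralised drivers, each sensitivity is a single dictionary edit.

```python
def model_npv(d, rate=0.10, years=5):
    """Toy model: a level annual cash flow from unit economics, less capex."""
    annual = d["price"] * d["volume"] * d["margin"] - d["fixed_cost"]
    return sum(annual / (1 + rate) ** t for t in range(1, years + 1)) - d["capex"]

base = {"price": 100, "volume": 1000, "margin": 0.40,
        "fixed_cost": 10_000, "capex": 80_000}
ranges = {"price": (90, 110), "volume": (800, 1200), "margin": (0.35, 0.45)}

def tornado(base, ranges):
    """One-way flex: change one driver at a time, hold the rest at base,
    and record the absolute NPV swing between the low and high cases."""
    rows = []
    for name, (lo, hi) in ranges.items():
        lo_npv = model_npv({**base, name: lo})
        hi_npv = model_npv({**base, name: hi})
        rows.append((name, abs(hi_npv - lo_npv)))
    return sorted(rows, key=lambda r: r[1], reverse=True)  # biggest swing first

ranking = tornado(base, ranges)
```

With these toy numbers, volume dominates, margin is second, and price is third: the ranking, not the exact figures, is what tells you where diligence effort should go.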

Step 4: Expand to two-way sensitivities and scenario bundles

After one-way tests, choose the top two drivers and run a two-way grid (e.g., volume vs margin, or price vs churn). This shows interaction risk: a deal can look fine when drivers move separately and still break when both move together. Next, bundle assumptions into scenarios: Base / Upside / Downside, plus one “stress” scenario that represents your failure mode (slower ramp and higher capex, or margin compression and churn spike). This is where sensitivities blend into investment risk screening: you’re mapping what could realistically go wrong and how badly. Keep scenario definitions explicit so people debate assumptions, not spreadsheets. With Model Reef, you can set scenarios at the driver level and instantly compare outcomes without duplicating files, which accelerates investment screening under time pressure.
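A two-way grid is just a nested loop over the top two drivers. The sketch below uses an invented volume-vs-margin model; all figures are illustrative.

```python
def model_npv(volume, margin, price=100, fixed_cost=10_000,
              capex=80_000, rate=0.10, years=5):
    """Toy model: level annual cash flow from unit economics, less capex."""
    annual = price * volume * margin - fixed_cost
    return sum(annual / (1 + rate) ** t for t in range(1, years + 1)) - capex

volumes = [800, 1000, 1200]   # low / base / high
margins = [0.35, 0.40, 0.45]  # low / base / high

# Every (volume, margin) pairing in one pass.
grid = {(v, m): round(model_npv(v, m)) for v in volumes for m in margins}

worst = grid[(800, 0.35)]  # both drivers at their low ends together
```

With these toy numbers, each driver at its low end alone leaves NPV positive, but both together turn it negative — exactly the interaction risk a two-way grid exposes and a one-way tornado misses.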

Step 5: Translate sensitivity results into a decision and next actions

Convert charts into decisions by writing three statements: (1) “The case is most sensitive to…”, (2) “The minimum acceptable outcome requires…”, and (3) “To de-risk this, we must validate…”. Turn the top drivers into diligence questions (retention evidence, supplier quotes, implementation timelines, pricing power). If the downside case breaches thresholds, be explicit: reprice, restructure, defer, or decline. This closes the loop from investment opportunity screening to action. For internal projects, add an owner and KPI for each risk so post-approval tracking is clear; good investment project evaluation doesn’t end at approval. When communicating sensitivities, use a one-page summary format with a base vs downside comparison and a short interpretation so stakeholders can approve with confidence.

⚠️ Tips, Edge Cases & Gotchas

Sensitivities break when the model is poorly structured. If changing one input requires editing multiple places, you’ll introduce errors; centralise drivers before you start. Watch for non-linearities and step changes (minimum order quantities, capacity constraints, covenant cliffs) that make a simple range misleading. In those cases, treat the variable as a discrete scenario with triggers rather than a smooth sensitivity. For projects with large upfront capex and delayed benefits, timing is often the biggest lever: flex ramp timing and capex phasing before you obsess over small margin deltas. For cases with negative early cash flows, add a liquidity lens alongside NPV so investment evaluation doesn’t ignore runway. Finally, document assumptions and outcomes as you go; otherwise teams re-run the same investment screening steps every month. If you need a clean workflow for building, toggling, and reviewing scenarios with traceable changes, follow a scenario analysis process.

🧪 Example / Quick Illustration

You’re evaluating a subscription business add-on. The base case assumes 20% ARR growth, 90% gross margin, and churn at 1.5% monthly. Action: run one-way sensitivities on churn (1.0%–3.0%) and growth (10%–30%), then a two-way grid of churn vs growth. Output: NPV is highly sensitive to churn; moving from 1.5% to 2.5% wipes out most of the upside, while growth changes are secondary. Decision: diligence focuses on retention drivers (cohorts, contract terms, product adoption) and the integration plan that could worsen churn. In Model Reef, you can keep churn and growth as explicit drivers and apply scenarios in seconds rather than duplicating spreadsheets for every case. That keeps investment screening fast and comparable across targets.
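A toy version of this illustration shows why churn dominates. The starting ARR, discount rate, and five-year horizon below are invented assumptions; churn compounds monthly, so a one-point move hits every future year.

```python
def addon_npv(monthly_churn, annual_growth, arr0=1_000_000,
              gross_margin=0.90, rate=0.10, years=5):
    """Toy add-on model: ARR compounds by gross growth each year,
    then decays by twelve months of churn; margin on ARR is discounted."""
    net_mult = (1 + annual_growth) * (1 - monthly_churn) ** 12
    return sum(gross_margin * arr0 * net_mult ** t / (1 + rate) ** t
               for t in range(1, years + 1))

base = addon_npv(0.015, 0.20)               # 1.5% monthly churn, 20% growth
churn_hit = base - addon_npv(0.025, 0.20)   # churn worsens to 2.5%
growth_hit = base - addon_npv(0.015, 0.10)  # growth halves to 10%
```

Under these assumptions, the one-point churn deterioration destroys more NPV than halving the growth rate, matching the narrative above: retention is the diligence priority.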

❓ FAQs

How many drivers should I flex?

Start with 5–8 drivers and expect only 2–3 to matter. Most cases are dominated by a small set of uncertain, high-impact assumptions (retention, margin, ramp speed, capex timing, working capital). Sensitivity analysis is valuable because it helps you stop arguing about low-impact inputs and focus diligence on what actually moves value. If your tornado chart shows lots of “medium” drivers instead of a clear top few, it usually means your base case is inconsistent or your ranges are not evidence-based. Keep it lean, and expand only when new information changes the driver ranking.

How do I set ranges when I don’t have deep history?

Use defensible bounds tied to what could realistically happen, not a generic percentage. If you don’t have deep history, triangulate: comparable benchmarks, constraints in contracts, capacity limits, or management’s own downside framing. The point is to express uncertainty honestly so investment analysis is informative. A practical approach is to define “low” as the point where the thesis starts to break, and “high” as the point where execution is strong but still plausible. If you want to formalise driver ranges and keep them consistent across cases, build a standard driver library and formula structure first. That makes your ranges reusable and easier to audit.

How do I avoid errors when running sensitivities?

Centralise drivers, keep formulas transparent, and validate outputs after each change. Sensitivity mistakes usually come from hidden hard-codes, duplicated inputs, or formula logic that isn’t robust to edge cases (like negative values or step changes). Run basic checks: do the direction and magnitude make sense, do cash flows reconcile, and do the results match the narrative? If the model supports it, use built-in validation to catch errors before you circulate results. A strong safeguard is using tools that flag formula issues and inconsistent references automatically. With that in place, you can move faster without sacrificing accuracy.

How should I share results with non-finance stakeholders?

Share interpretations and decisions, not spreadsheets. Non-finance stakeholders need to know what the case is sensitive to, what “bad” looks like, and what you plan to validate, not the entire grid. Use one chart (tornado or base vs downside) plus three bullets: key drivers, threshold, and next actions. If someone wants deeper detail, provide it as an appendix. For clean handoffs, use an export format that preserves the narrative and the visuals in one package. That keeps the conversation focused on decisions and accountability, not model navigation.

🚀 Next Steps

Next, embed sensitivities into your standard investment screening workflow: decide your default drivers, default ranges, and the one-page format you’ll use to report outcomes. Then run the same sensitivity pack across your last few deals to benchmark what “normal” looks like for your pipeline. If you want to scale this without duplicating spreadsheets, standardise your templates and keep drivers, scenarios, and approvals in a single system so every case is comparable. Model Reef’s template ecosystem can support repeatable modelling and scenario comparisons across teams and opportunities. Once sensitivities are consistent, your next upgrade is tightening decision thresholds so “proceed” and “decline” become faster and less subjective.

Start using automated modeling today.

Discover how teams use Model Reef to collaborate, automate, and make faster financial decisions, or start your own free trial to see it in action.
