Key Risk Indicators (KRIs): How to Define Them with Examples
Key risk indicators (KRIs) are metrics that signal changes in risk exposure before an event occurs. Learn how to define KRIs, set thresholds, and build a KRI library with examples across cybersecurity, operational, compliance, and financial risk categories.
A risk score tells you where a risk sits today. A key risk indicator tells you where it's heading. KRIs are the operational heartbeat of a risk management program — the metrics that turn a static risk register into a live system that signals when attention is needed.
What Makes a Good KRI
Not every metric is a KRI. A key risk indicator must have four properties:
- Measurable — it can be expressed as a number, percentage, or frequency
- Threshold-linked — it has a defined level that triggers a specific action
- Owned — a named person is responsible for monitoring it and escalating when thresholds are breached
- Timely — it can be measured frequently enough to provide actionable warning
A metric without a threshold is just a data point. A threshold without an owner is just a number. All four elements must be in place.
KRI Structure
Define each KRI with the following fields:
| Field | Description | Example |
|---|---|---|
| Risk | The risk this KRI monitors | Unauthorized system access |
| KRI Name | Short descriptive name | Unpatched Critical Vulnerabilities |
| Metric | What is measured | Count of critical CVEs open > 30 days |
| Frequency | How often measured | Weekly |
| Green | Within appetite | ≤ 5 |
| Amber | Approaching tolerance | 6–9 |
| Red | Tolerance breached | ≥ 10 |
| Owner | Who monitors and escalates | Head of IT Security |
| Escalation | What happens at Red | Immediate report to CISO; patch sprint initiated |
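The structure above maps naturally to a small record type. This is a minimal sketch, not a prescribed schema — the field names, example values, and the assumption that higher values mean more risk are all illustrative:

```python
from dataclasses import dataclass

@dataclass
class KRI:
    """One KRI record, mirroring the fields in the table above."""
    risk: str        # the risk this KRI monitors
    name: str
    metric: str
    frequency: str
    amber: float     # lower bound of the Amber band
    red: float       # tolerance limit; at or above this, status is Red
    owner: str
    escalation: str  # action taken at Red

    def status(self, value: float) -> str:
        """Map a measured value to Green/Amber/Red."""
        if value >= self.red:
            return "Red"
        if value >= self.amber:
            return "Amber"
        return "Green"

vulns = KRI(
    risk="Unauthorized system access",
    name="Unpatched Critical Vulnerabilities",
    metric="Count of critical CVEs open > 30 days",
    frequency="Weekly",
    amber=6, red=10,
    owner="Head of IT Security",
    escalation="Immediate report to CISO; patch sprint initiated",
)

print(vulns.status(4))   # Green
print(vulns.status(7))   # Amber
print(vulns.status(12))  # Red
```

For KRIs where lower is worse (system uptime, training completion), the comparisons would simply be inverted.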
KRI Examples by Risk Category
Cybersecurity
| KRI | Metric | Amber | Red | Type |
|---|---|---|---|---|
| Unpatched critical vulnerabilities | Count of critical CVEs open > 30 days | 6–9 | ≥ 10 | Leading |
| Privileged account review | Days since last PAM access review | 75–89 | ≥ 90 | Leading |
| Failed login attempts | Count per day across critical systems | 500–999 | ≥ 1,000 | Leading |
| Phishing click rate | % of employees clicking simulated phishing | 8–14% | ≥ 15% | Lagging |
| Security training completion | % of employees current on training | 80–89% | < 80% | Leading |
| Mean time to detect (MTTD) | Avg hours from incident start to detection | 24–47h | ≥ 48h | Lagging |
| Third-party with elevated access | Count of vendors with admin/privileged access | 5–9 | ≥ 10 | Leading |
Operational
| KRI | Metric | Amber | Red | Type |
|---|---|---|---|---|
| Critical role vacancy | Days key roles are unfilled | 30–59 | ≥ 60 | Leading |
| Attrition in critical functions | % turnover in key roles (rolling 12 months) | 15–24% | ≥ 25% | Lagging |
| System uptime (critical systems) | % availability of Tier 1 systems | 99.0–99.4% | < 99.0% | Lagging |
| Change failure rate | % of changes causing incidents | 5–9% | ≥ 10% | Lagging |
| Disaster recovery test | Days since last DR test | 270–364 | ≥ 365 | Leading |
| Overdue risk reviews | Count of risks past review date | 3–9 | ≥ 10 | Leading |
Compliance
| KRI | Metric | Amber | Red | Type |
|---|---|---|---|---|
| Open audit findings | Count of unresolved audit findings | 3–9 | ≥ 10 | Lagging |
| Overdue control tests | % of controls with overdue testing | 10–19% | ≥ 20% | Leading |
| Policy exception requests | Count open > 30 days | 3–4 | ≥ 5 | Leading |
| Access review completion | % of access reviews completed on schedule | 80–89% | < 80% | Leading |
| Regulatory change backlog | Count of new regulations not yet assessed | 2–4 | ≥ 5 | Leading |
| Control effectiveness score | % of controls rated Effective or higher | 70–79% | < 70% | Lagging |
Financial and Third-Party
| KRI | Metric | Amber | Red | Type |
|---|---|---|---|---|
| Concentration risk | Revenue % from single customer | 20–29% | ≥ 30% | Leading |
| Overdue vendor assessments | Count of Tier 1 vendors past assessment date | 1–2 | ≥ 3 | Leading |
| Vendor security score | Avg security rating of Tier 1 vendors | 6–6.9 / 10 | < 6 / 10 | Leading |
| Budget variance | Actual vs. budget for risk program | 10–19% | ≥ 20% | Lagging |
| Insurance coverage gap | % of cyber exposure covered by policy | 60–74% | < 60% | Leading |
Setting Thresholds
Anchor to risk tolerance, not intuition
Thresholds should come from your risk appetite framework. If your documented tolerance is "no more than 10 critical vulnerabilities open past 30 days," your Red threshold is 10. Your Amber threshold is typically 70–80% of that — providing enough runway to investigate and remediate before breaching tolerance.
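The Amber-as-a-fraction-of-Red rule can be expressed as a one-line calculation. This is a sketch of that arithmetic only; the function name and the 75% default runway are illustrative choices within the 70–80% range mentioned above:

```python
def amber_from_tolerance(red_limit: float, runway: float = 0.75) -> int:
    """Derive an Amber threshold as a fraction of the documented Red limit.

    Setting Amber at 70-80% of Red leaves time to investigate and
    remediate before tolerance is actually breached.
    """
    return round(red_limit * runway)

# Tolerance: no more than 10 critical vulnerabilities open past 30 days.
print(amber_from_tolerance(10))             # 8 -> Amber band starts around 7-8
print(amber_from_tolerance(10, runway=0.7)) # 7
```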
Use context to calibrate
The right threshold varies by organization. A fintech with strict regulatory requirements will set tighter thresholds than a startup without compliance mandates. A 15% phishing click rate might be Red for a financial institution but merely Amber for a company that has not yet launched a security awareness program.
Adjust over time
Thresholds set at program launch are rarely perfect. Track how often each KRI triggers Amber and Red. If a KRI is always Green and never requires attention, either the metric isn't sensitive enough or the threshold is too loose. If a KRI is chronically Red, investigate whether the threshold reflects actual risk tolerance or needs to be recalibrated — and whether the underlying risk has been adequately addressed.
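Tracking how often each status fires is a simple tally over the measurement history. A minimal sketch, assuming a higher-is-worse metric and illustrative weekly readings:

```python
from collections import Counter

def status_history(values, amber, red):
    """Classify a series of measurements to see how often a KRI triggers."""
    def status(v):
        return "Red" if v >= red else "Amber" if v >= amber else "Green"
    return Counter(status(v) for v in values)

# Twelve weekly readings for a KRI with Amber >= 6, Red >= 10.
weekly_counts = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2, 4, 3]
print(status_history(weekly_counts, amber=6, red=10))
# Always Green -> the metric may not be sensitive enough,
# or the threshold may be too loose.
```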
KRI Review Cadence
| KRI Level | Review Frequency | Owner |
|---|---|---|
| Red | Immediately, then daily until resolved | Risk owner + senior leadership |
| Amber | Weekly | Risk owner |
| Green | Monthly | Risk owner |
| Full KRI set | Quarterly | Risk committee |
Connecting KRIs to Risk Scores
KRIs and risk scores are complementary:
- Risk score reflects the current assessed level of a risk (inherent and residual)
- KRI tracks whether that level is stable or trending
A risk with a stable Medium score (8) but an Amber KRI (phishing click rate rising from 5% to 12%) should trigger a review of the residual risk score. The KRI is telling you the control effectiveness assumption behind that score may be changing.
The best-run risk programs automatically flag risks for reassessment when their KRIs breach Amber — treating KRI movement as evidence that the risk register may be out of date. For guidance on writing the appetite statements that KRI thresholds enforce, see How to Write a Risk Appetite Statement.
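The reassessment trigger described above amounts to filtering the register by latest KRI status. A hedged sketch — the risk names and the tuple shape of the readings are hypothetical:

```python
def risks_needing_reassessment(kri_readings):
    """Return risks whose latest KRI status is Amber or Red.

    KRI movement past Amber is treated as evidence that the residual
    risk score in the register may be out of date.
    """
    return sorted({risk for risk, status in kri_readings
                   if status in ("Amber", "Red")})

readings = [
    ("Unauthorized system access", "Green"),
    ("Phishing susceptibility", "Amber"),  # click rate rising from 5% to 12%
    ("Vendor concentration", "Red"),
]
print(risks_needing_reassessment(readings))
# ['Phishing susceptibility', 'Vendor concentration']
```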
Starting Your KRI Program
If you don't have KRIs today:
- Pick your top 10 risks — start with High and Critical only
- Define 2 KRIs per risk — one leading, one lagging where possible
- Set thresholds based on documented risk tolerance — not guesswork
- Assign owners — the same person accountable for the risk is accountable for the KRI
- Review for one quarter — see which metrics are measurable in practice, which thresholds need adjustment
- Expand to medium risks in the following quarter
A small set of meaningful, actively monitored KRIs outperforms a large KRI library that no one looks at.
Frequently Asked Questions
- What is a key risk indicator (KRI)?
- A key risk indicator is a metric that provides an early warning signal about increasing risk exposure. KRIs measure leading indicators (conditions that predict a future risk event) or lagging indicators (outcomes that confirm a risk materialized). Unlike a KPI, which measures performance against a business objective, a KRI measures movement in risk level — answering 'is this risk trending toward a threshold breach?' Examples: number of unpatched critical vulnerabilities (cybersecurity KRI), days since last access review (compliance KRI), employee attrition rate in critical roles (operational KRI).
- What is the difference between a KRI and a KPI?
- A KPI (Key Performance Indicator) measures how well the organization is achieving a business objective — revenue growth, customer retention, system uptime. A KRI (Key Risk Indicator) measures risk exposure — how close the organization is to a risk threshold or how a risk is trending. The same metric can serve both: system uptime is a KPI (availability objective) and a KRI (operational risk indicator). When a metric's primary purpose is to trigger a risk response rather than report business performance, it's functioning as a KRI.
- How do you set KRI thresholds?
- KRI thresholds should be derived from your risk tolerance, not set arbitrarily. Start by defining three levels: Green (within appetite — monitor as usual), Amber (approaching tolerance — investigate and prepare response), Red (tolerance breached — escalate immediately). Set the Red threshold at your documented risk tolerance limit for that category. Set Amber at 70-80% of the Red threshold to provide warning time. Example: if your tolerance is 'no more than 10 unpatched critical vulnerabilities,' Red = 10, Amber = 7-8.
- How many KRIs should an organization have?
- Focus on 2-3 KRIs per high or critical risk, and 1 KRI per medium risk. A dashboard of 50+ KRIs becomes noise — risk owners stop paying attention. Start with your top 10-15 risks and define 2-3 meaningful KRIs for each. Build the discipline of reviewing and acting on those before expanding. For most mid-size organizations, 20-40 active KRIs across all risk categories is a manageable, meaningful set.
- What is the difference between a leading and lagging KRI?
- Leading KRIs measure conditions that precede a risk event — they provide early warning before the event occurs. Example: number of employees who haven't completed security training (leading indicator for phishing susceptibility). Lagging KRIs measure outcomes after an event has occurred — they confirm whether the risk materialized and at what severity. Example: number of successful phishing attacks in the past quarter. Both types are valuable: leading KRIs enable prevention, lagging KRIs verify whether controls worked.
Related Articles
How to Write a Risk Appetite Statement: Examples and Templates
A risk appetite statement defines how much risk your organization is willing to accept in pursuit of its objectives. Learn the components of an effective statement, with templates and examples by risk category you can adapt for your organization.
Risk Treatment Options Explained: Mitigate, Accept, Transfer, Avoid
The four risk treatment options — mitigate, accept, transfer, and avoid — are the core decision framework for every risk in your register. Learn when to use each, how to document the decision, and the most common mistakes.
What Is Inherent Risk? How to Score and Use It in Risk Assessments
Inherent risk is the raw exposure before any controls are applied. Learn how to define, score, and use inherent risk in assessments — and why assessing it first leads to more accurate residual risk scores.