Improvement Strategy
The EASIER Framework
A diagnostic model for identifying what an improvement initiative is actually trying to accomplish. Six dimensions. One primary center of gravity. Every strategic approach, metric, and tool follows from that choice.
The Problem It Solves
Most improvement efforts don't know what problem they're solving
Organizations adopt new tools, redesign processes, and scale programs without first agreeing on the nature of the constraint. They optimize for speed when the real problem is accuracy. They expand reach when the program isn't working yet. They automate inefficiency into the workflow rather than removing it.
The EASIER framework starts with a diagnosis. Leaders identify one primary dimension: the center of gravity for the initiative. That choice determines the approach, the tools, and the metrics.
There's also a sequencing requirement built in: don't optimize a broken process, don't scale an ineffective program, don't invest in Speed before confirming Accuracy. Skipping those checks is where most improvement efforts go wrong.
The core question
“Which EASIER dimension is the primary constraint on our outcomes?”
How it's used
1. Describe the improvement opportunity or challenge.
2. Work through the diagnostic sequence to identify the primary dimension.
3. Check the prerequisite map: are earlier conditions met?
4. Choose strategies, tools, and metrics aligned to that dimension.
The Six Dimensions
The two E dimensions have a fixed sequence: E¹ (Effectiveness) always precedes E² (Efficiency).
Effectiveness
Achieve the intended outcome.
When to use
The approach itself isn't producing the intended result. Not too slow or too expensive. Just not working.
Effectiveness is the foundation of the framework. Before pursuing Efficiency or Reach, Effectiveness must be confirmed. Do not optimize a broken process. Do not scale an approach that doesn't work.
Examples
- Increasing student graduation rates
- Improving patient health outcomes
- Improving policy impact
- Increasing product reliability
Accuracy
Improve the correctness of information, predictions, or decisions.
When to use
Leaders are making poor decisions because the data they rely on is wrong, incomplete, or inconsistent.
Accuracy is about whether the organization can trust what it knows. The problem is data quality, measurement reliability, or evaluation consistency. Not a lack of causal understanding. The core question is whether the signals are correct. That's different from Insights, which asks whether they're understood.
Examples
- More reliable identification of at-risk students
- Reduced data errors in reporting
- More consistent evaluation and admissions decisions
- More accurate demand forecasting
Speed
Reduce the time required for a process or decision.
When to use
The work is directionally correct, but delay has measurable consequences: missed windows, harm from slow response, lost opportunity.
Speed matters when timing is the binding constraint, not just when things feel slow. Confirm Accuracy before optimizing Speed, and confirm Effectiveness before investing heavily in Speed. Making a broken process faster magnifies the problem.
Examples
- Faster admissions decisions
- Real-time early-alert systems
- Reduced approval cycle times
- Faster crisis response
Insights
Discover why outcomes are what they are.
When to use
Leaders have data but lack understanding. Patterns are invisible, causes are unknown, and interventions keep producing the same inadequate results.
Insights serves a special diagnostic role: when you can't clearly identify which primary EASIER dimension applies, Insights work is how you find out. If the primary dimension isn't clear, Insights comes first.
Examples
- Discovering hidden drivers of student attrition
- Identifying what actually predicts advisor effectiveness
- Understanding why a program works in some contexts but fails in others
- Revealing unmet demand patterns
Efficiency
Reduce the resources required to produce the same outcome.
When to use
Effectiveness is confirmed, but the current cost in time, money, or labor is unsustainable.
Efficiency always follows Effectiveness. That sequencing is not optional. E¹ (Effectiveness) asks: Is the work achieving the right outcome? E² (Efficiency) asks: Is the work sustainable? Efficient execution of the wrong approach is waste. Fix the process first, then optimize it.
Examples
- Automating repetitive administrative tasks
- Reducing cost per enrolled student
- Improving advisor caseload capacity without reducing quality
- Eliminating low-value processes and redundant systems
Reach
Expand who benefits from what the organization does.
When to use
The core work is effective, but scale, geography, or structural access barriers limit who benefits. The question is both how many and who.
Volume and equity are separate questions. An organization can serve large numbers while consistently missing populations by income, race, geography, or first-generation status. Any Reach initiative should name which populations are underserved and why.
Examples
- Serving more students without proportional staffing increases
- Expanding access for first-generation or rural students
- Closing equity gaps in program participation
- Reducing structural access barriers
The Diagnostic Sequence
When the primary dimension isn't obvious, work through this in order. Stop at the first question that reveals the constraint. Insights comes first: if you don't understand why outcomes are what they are, you can't reliably pick the right dimension.
1. Insights: Do we understand WHY current outcomes are what they are?
2. Effectiveness: Is the core approach actually producing the intended outcome?
3. Accuracy: Is the information driving decisions reliable and accurate?
4. Speed: Is timing a binding constraint? Does delay cause direct harm or missed opportunity?
5. Efficiency: Is cost or sustainability the binding constraint?
6. Reach: Is scale or equitable access the binding constraint?
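The sequence above can be sketched as an ordered checklist in code. This is a minimal illustration, assuming each question is answered yes/no; the dimension names and questions come from the framework, while the function and variable names are illustrative, not part of it.

```python
# Sketch of the EASIER diagnostic sequence as an ordered checklist.
# Walk the questions in order and stop at the first one whose answer
# reveals the primary constraint.

DIAGNOSTIC_SEQUENCE = [
    # (dimension, question, the answer that reveals this dimension
    #  as the primary constraint)
    ("Insights",      "Do we understand why current outcomes are what they are?", False),
    ("Effectiveness", "Is the core approach producing the intended outcome?",     False),
    ("Accuracy",      "Is the information driving decisions reliable?",           False),
    ("Speed",         "Is timing a binding constraint?",                          True),
    ("Efficiency",    "Is cost or sustainability the binding constraint?",        True),
    ("Reach",         "Is scale or equitable access the binding constraint?",     True),
]

def diagnose(answers):
    """Return the first dimension whose answer reveals the constraint.

    Unanswered questions are skipped; None means no constraint
    surfaced, which is itself a cue to start with Insights work.
    """
    for dimension, question, revealing_answer in DIAGNOSTIC_SEQUENCE:
        if answers.get(question) is revealing_answer:
            return dimension
    return None

# Example: the team understands its outcomes, the approach works, and
# the data is reliable, but delay causes missed opportunities.
answers = {
    "Do we understand why current outcomes are what they are?": True,
    "Is the core approach producing the intended outcome?": True,
    "Is the information driving decisions reliable?": True,
    "Is timing a binding constraint?": True,
}
print(diagnose(answers))  # Speed
```

Note the varying polarity: for the first three questions a "no" reveals the constraint, while for the last three a "yes" does, which is why each entry carries its own revealing answer.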
The Prerequisite Map
These aren't suggestions. Skipping them is where improvement efforts go wrong.
Do not pursue Efficiency until Effectiveness is confirmed.
Efficient execution of the wrong approach is waste. Automating a broken process makes it worse.
Do not pursue Reach until Effectiveness is confirmed.
Scaling an ineffective program harms more people and wastes more resources.
Confirm Accuracy before investing heavily in Speed.
Fast decisions on bad data don't just fail. They fail confidently.
When the primary dimension is unclear, begin with Insights.
You can't pick the right dimension if you don't understand why outcomes are what they are.
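The prerequisite map can likewise be sketched as a pre-flight check. The three rules are the framework's; the function name and the set-based representation are illustrative assumptions.

```python
# Sketch of the EASIER prerequisite map as a pre-flight check run
# before committing to a primary dimension.

PREREQUISITES = {
    "Efficiency": ["Effectiveness"],  # don't optimize a broken process
    "Reach":      ["Effectiveness"],  # don't scale an ineffective program
    "Speed":      ["Accuracy"],       # fast decisions on bad data fail confidently
}

def unmet_prerequisites(primary, confirmed):
    """Return the prerequisite dimensions not yet confirmed for the
    chosen primary dimension; an empty list means the strategy can start."""
    return [p for p in PREREQUISITES.get(primary, []) if p not in confirmed]

print(unmet_prerequisites("Efficiency", {"Accuracy"}))  # ['Effectiveness']
print(unmet_prerequisites("Speed", {"Accuracy"}))       # []
```

Dimensions with no entry in the map (Effectiveness, Accuracy, Insights) have no prerequisites and always pass the check.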
Framework in Practice
Same problem. Six different strategies.
The dimension you choose determines the work, the investments, and what success looks like. Here's how one common challenge plays out differently depending on which dimension you identify as primary.
Problem: Student advising is falling short.
- Effectiveness: Redesign advising to actually move graduation rates. Test what works.
- Accuracy: Build better risk identification so advisors find the right students.
- Speed: Intervene faster; catching students two weeks earlier matters.
- Insights: Understand the real drivers of attrition, not just the proxies.
- Efficiency: Reduce advisor workload per case so each conversation is higher quality.
- Reach: Support more students, especially first-gen and low-income, without adding headcount.
Each path is valid. Each leads to different work, different investments, and different metrics. The framework doesn't pick the answer. It makes the choice visible so the right people can make it, and so the prerequisites get checked before the strategy starts.
The core principle
Every improvement initiative should identify one primary dimension. That's its center of gravity.
Other dimensions still matter. But the initiative needs a center of gravity. Trying to improve everything at once leads to unclear priorities, mismatched tools, and metrics that don't tell you anything. Pick the dimension that matters most. Check the prerequisites. Let that choice drive the work.
Downloads
EASIER One-Pager
A single-page overview of all six dimensions: when to use each, key metrics, prerequisite rules, and the diagnostic sequence.
Download PDF
Diagnostic Guide
A practical guide for applying the framework: how to identify the right primary dimension, the diagnostic sequence, prerequisite checks, and common pitfalls.
Download PDF
Framework Reference
The complete framework reference: all six dimensions with full definitions, canonical metrics, prerequisite map, Insights diagnostic role, and the E¹/E² relationship.
Download PDF
Try the AI tool
Open the EASIER App
Describe a challenge. The app identifies the right dimension, asks follow-up questions, and generates a structured analysis with strategic approaches and metrics.