🎯 Deciding What to Test (Before You Write a Single Question)
By the time many learning teams start designing a test, the decision has already been made: we’re going to assess knowledge.
That assumption is rarely questioned—and it’s one of the fastest ways to derail a needs assessment.
Deciding what to test is far more important than deciding how to test. When this step is skipped or rushed, even well-written questions produce misleading data, and training decisions are built on shaky ground.
The Hidden Assumption Behind Most Tests
Most baseline assessments quietly assume that a performance problem is caused by missing knowledge.
Sometimes that’s true.
Often, it isn’t.
Performance issues can stem from unclear processes, poor tools, conflicting priorities, environmental constraints, or lack of practice—not a lack of information. When tests are created before the problem is clearly defined, they tend to measure what is easiest to write questions about, not what actually matters.
The result is data that looks legitimate but answers the wrong question.
Start With the Performance, Not the Content
Before deciding what to test, the needs assessment must anchor itself in observable performance.
That means being able to clearly describe:
What people are expected to do
Under what conditions they must do it
What “good” performance looks like
What errors or breakdowns are occurring
If the performance cannot be described in concrete terms, no assessment—test or otherwise—will produce meaningful insight.
Testing should only enter the picture once the performance expectations are clear.
Distinguishing What Can Be Tested
Once performance is defined, the next step is separating testable constructs from those that require other methods.
In a needs assessment, tests are most appropriate for assessing:
Foundational knowledge
Conceptual understanding
Rules, principles, or decision criteria
Recognition of correct vs. incorrect actions
They are far less effective at assessing:
Physical execution of tasks
Troubleshooting under pressure
Adaptation in dynamic environments
Consistency over time
Confusing these categories is how organizations end up testing terminology while the real issue is procedural breakdown.
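The routing logic above can be sketched in code. This is a purely illustrative model, not a standard taxonomy: the construct names and method labels are taken from the lists in this article, and the mapping itself is an assumption about how one team might operationalize the distinction.

```python
# Illustrative sketch: routing performance constructs to an assessment method.
# Construct names mirror the lists above; the mapping is an assumption,
# not an established instrument.

TESTABLE = {
    "foundational knowledge",
    "conceptual understanding",
    "rules and decision criteria",
    "recognition of correct vs. incorrect actions",
}

# Constructs that tests handle poorly, mapped to a better-suited method.
OBSERVATION_METHODS = {
    "physical execution of tasks": "direct observation",
    "troubleshooting under pressure": "simulation",
    "adaptation in dynamic environments": "simulation",
    "consistency over time": "performance data analysis",
}

def assessment_method(construct: str) -> str:
    """Suggest an assessment method for a given construct."""
    if construct in TESTABLE:
        return "written test"
    return OBSERVATION_METHODS.get(construct, "needs further analysis")

print(assessment_method("conceptual understanding"))  # written test
print(assessment_method("consistency over time"))     # performance data analysis
```

The point of making the mapping explicit is that every construct gets a deliberate routing decision, rather than defaulting to a quiz because quiz questions are easy to write.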
Knowledge, Skill, and Decision-Making Are Not the Same
One of the most important distinctions in needs assessment work is between:
Knowing what
Knowing how
Knowing when
A technician may know the steps of a process but struggle to decide which process applies in a given situation. A manager may understand policies but fail to apply them consistently under pressure. A learner may pass a quiz and still perform poorly on the job.
If the gap is about decision-making, testing recall will never reveal it.
This is why needs assessments that rely solely on traditional quizzes often miss the most critical gaps.
Using Job Tasks as the Filter
A reliable way to decide what to test is to use job tasks as a filter.
For each critical task, ask:
What knowledge must be present to perform this task?
What decisions must be made correctly?
What errors are most common or most costly?
Which of these can reasonably be assessed without observing performance?
Only the elements that pass that filter belong in a test.
Everything else belongs in observation, simulation, or performance data analysis.
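The four filter questions can be sketched as a simple routing function. This is a hypothetical model, assuming a team records each task element with yes/no answers to the questions above; the field names and example elements are invented for illustration.

```python
# Hypothetical sketch of the job-task filter. Each critical task is broken
# into elements, and each element is routed by the four filter questions.
# Field names and examples are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TaskElement:
    description: str
    requires_knowledge: bool        # must knowledge be present to perform it?
    critical_decision: bool         # must a decision be made correctly?
    costly_error: bool              # is the associated error common or costly?
    assessable_without_observation: bool  # can it be assessed via a test?

def route(element: TaskElement) -> str:
    """Route an element to a test or to other assessment methods."""
    relevant = (element.requires_knowledge
                or element.critical_decision
                or element.costly_error)
    if not relevant:
        return "exclude from assessment"
    if element.assessable_without_observation:
        return "test"
    return "observation / simulation / performance data"

elements = [
    TaskElement("states the lockout steps in order", True, False, True, True),
    TaskElement("physically executes the lockout", False, False, True, False),
]
for e in elements:
    print(f"{e.description} -> {route(e)}")
```

Only elements routed to "test" become candidate question topics; the rest are handled by the methods the article names, keeping the test deliberately narrow.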
What Not to Test During a Needs Assessment
Knowing what not to test is just as important.
During a needs assessment, tests should generally avoid:
Rare edge cases
Content that is not required for day-to-day performance
Material that has never been formally taught
Trivial facts that do not influence decisions
Information that learners can easily reference on the job
Including these elements inflates the perceived gap and leads to unnecessary training.
Avoiding the “Coverage Trap”
Another common pitfall is the belief that a needs assessment test should “cover everything.”
Coverage feels responsible.
In practice, it creates noise.
Effective needs assessment tests are selective by design. They focus on the most critical knowledge and decisions, not exhaustive content lists. The goal is clarity, not completeness.
If a test tries to measure everything, it usually measures nothing well.
A Decision Point, Not a Writing Task
Deciding what to test is not a writing task—it’s a professional judgment call.
It requires resisting pressure to move quickly, questioning assumptions about knowledge gaps, and being willing to say that a test is not the right tool for the problem at hand.
This is where instructional designers and talent development professionals demonstrate their value—not by producing questions, but by preventing bad decisions upstream.
What Comes Next
Once the right things have been identified for testing, the next challenge is choosing the right type of assessment to surface meaningful data.
In the next article, we’ll explore how to select assessment formats that align with diagnostic needs—and why defaulting to multiple-choice questions often limits what you can learn.
Because even when you’re testing the right thing, the wrong assessment type can still hide the gap.