Usability Testing, Part 2: Making a Testing Plan

This is the second post in a three-part series on developing, executing, and analyzing usability tests for your site. Part 1 of this series focused on preparing for usability testing. In this second post, I will dive deeper into the parts of the usability testing plan.

Usability Testing Plan – The Nitty Gritty

In part one, I briefly outlined the parts and the purpose of the usability testing plan. Now I’d like to get a bit more detailed about its structure and talk about the important questions that your testing plan should answer.

Explain Your Test Plan

To start, your testing plan should provide a concise explanation of the contents of the document — a brief introduction to your testing plan. It should answer questions like: “What is the purpose of this usability test?” and “How will this document assist in the effort?”

This section should also state the goals of this usability testing effort. Some goals of usability testing include:

  • Establishing a baseline of user performance,
  • Establishing and validating user performance measures, and
  • Identifying potential design concerns.

You should also include an executive summary in your plan’s introduction. The executive summary is a recap of the entire testing procedure, what your tests will be evaluating, and the results you are expecting.

Rules and Steps

In your plan, spell out which methodology you plan to follow. There should be no question about which steps you will take and how you will take them.

Your methodology should, at minimum, define:

  • Who your participants are,
  • How many participants you will be evaluating,
  • Your procedure for performing the tests, and
  • Tools you intend to use for testing.

As I noted in Part 1 of this series, it’s best practice to clearly state each person’s role and responsibilities. You should also note that an individual may play multiple roles and that some tests may not require all roles.


Define Your Tasks

This part of your plan is very specific to what you’re testing. This is where you will define each task the test participants will perform. These usability tasks are usually derived from test cases provided by a developer or business analyst, as well as from your own heuristic evaluation. The tasks should be identical for every participant of a given role. For example: if you’re testing an application that can have two different types of roles, Admins and Editors, tasks for all Admins should be identical and tasks for all Editors should be identical.

Having each role perform the exact same tasks will ensure the integrity of your results.


Usability Metrics

Keeping your tasks consistent is vital because your results are determined by the usability metrics you establish in your plan. “Usability metrics” refers to the participant’s performance measured against specific performance goals necessary to satisfy your usability requirements.

These metrics can include:

  • Whether or not the participant completes the task,
  • How long it takes the participant to complete a task,
  • What constitutes critical and non-critical errors, and
  • Subjective evaluations on the ease of use.

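As a concrete illustration, here is a minimal Python sketch of how these metrics might be tallied from raw session data. The record structure and field names are hypothetical, not part of any standard tool:

```python
# Hypothetical session records: one dict per participant for a single task.
sessions = [
    {"participant": "P1", "completed": True,  "errors": 0, "seconds": 42},
    {"participant": "P2", "completed": True,  "errors": 2, "seconds": 75},
    {"participant": "P3", "completed": False, "errors": 1, "seconds": 120},
]

total = len(sessions)

# Completion rate: share of participants who finished the task.
completion_rate = sum(s["completed"] for s in sessions) / total

# Error-free rate: share who finished the task with zero errors.
error_free_rate = sum(
    s["completed"] and s["errors"] == 0 for s in sessions
) / total

# Time-on-task: average time among participants who completed the task.
times = [s["seconds"] for s in sessions if s["completed"]]
avg_time_on_task = sum(times) / len(times)

print(f"Completion rate: {completion_rate:.0%}")     # 67%
print(f"Error-free rate: {error_free_rate:.0%}")     # 33%
print(f"Avg time-on-task: {avg_time_on_task:.1f}s")  # 58.5s
```

Whatever format you collect your data in, defining the formulas this explicitly in your plan leaves no ambiguity when it is time to analyze the results.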

Usability Goals

Next, you will want to define your usability goals. Unlike the goals mentioned earlier, which establish the overall purpose of usability testing in general, these goals are specific to what you’re testing and are expressed in quantitative form.

Here are a few examples of specific goals:

  • Completion Rate:
    How many of the participants were able to successfully complete the tasks?
  • Error-Free Rate:
    What percentage of test participants completed the task without any errors?
  • Time-on-Task:
    How long does it take participants to complete the task from start to finish?
  • Subjective Measures:
    These are usually collected while debriefing the participant after the testing session, and include measures such as how easy or difficult the participant found each task.

Your plan should also include how you measure problem severity in order to prioritize your recommendations for improvement. This is how you will analyze the data collected during your testing, and it will help organize your final report.
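One common way to score problem severity is to weight an issue’s impact by how many participants encountered it. The rating scale, weights, and findings below are assumptions for illustration, not a fixed standard:

```python
# Hypothetical findings: impact rated 1 (cosmetic) to 4 (blocker),
# frequency = fraction of participants who hit the issue.
findings = [
    {"issue": "Save button hidden below the fold",  "impact": 3, "frequency": 0.8},
    {"issue": "Typo in confirmation dialog",        "impact": 1, "frequency": 0.4},
    {"issue": "Login fails after back-navigation",  "impact": 4, "frequency": 0.2},
]

# Severity score: impact weighted by how widespread the issue was.
for f in findings:
    f["severity"] = f["impact"] * f["frequency"]

# Highest-severity issues first, ready to prioritize in the final report.
for f in sorted(findings, key=lambda f: f["severity"], reverse=True):
    print(f'{f["severity"]:.1f}  {f["issue"]}')
```

However you choose to weight impact and frequency, stating the scheme in your plan up front keeps the final prioritization objective.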

Reporting Your Results

The last section of your test plan should include a brief summary of how and when you will report your data. The report can be either a separate document or a presentation, and it will contain the results based on: your metrics and goals, the subjective evaluations, and your recommendations for improvements. Prioritize the results by how much impact they have on the outcome in order to develop a solution strategy.

Having a bulletproof usability testing plan will make your testing a breeze and eliminate any questions about roles and expectations during the testing. Take advantage of this post as one of many helpful resources to guide you in creating your testing plan. Next up in our usability testing series, we’ll discuss ideal testing participants and some of the tools we use to collect data during test exercises.
