One of the biggest obstacles to efficient software development is a lack of clarity surrounding software requirements. As a Quality Assurance Engineer in the software world, I’ve worked for several companies over the past decade and a half, and each development team has its own method for documenting requirements. For the past ten years, I have worked exclusively in Agile. If you are familiar with Agile software development, you probably recognize acceptance criteria. A list of acceptance criteria should provide the information necessary to verify the completeness of a piece of work. Acceptance criteria may be part of a user story or defined within a task description.

Missing Acceptance Criteria

Many of the teams I have worked with have struggled with acceptance criteria that are incomplete, ambiguous, and just not helpful. The potential value of well-written acceptance criteria is often overlooked; at times, there might be just one bullet point that reads, “Feature works as expected.”

When acceptance criteria are incomplete, issues can begin immediately. Let’s run through what happens when this crucial piece of the requirement has been neglected.

First, Quality Assurance (QA) and the development team will attempt to estimate the work based on insufficient requirements. Any requirement that is not explicitly listed leads to assumptions, which means the team estimates against incomplete information, and the estimate comes out either too high or too low. Next, development completes its work according to its own interpretation and assumptions. The code is then passed to QA, which tests it based on its own interpretation of the acceptance criteria.

Question: How often do two people interpret ambiguous instructions exactly the same way?

Answer: Never. (Okay, almost never.)

The Cost of Not Having Enough Criteria

So, here we go…

QA fails the work because its interpretation of what is required for this functionality to be considered done differs from development’s.

Development disagrees.

The ticket gets passed back to Product for clarification or, even worse, a meeting has to be scheduled to talk it through. What usually happens is that Product didn’t think it was necessary to call out that particular case, so more acceptance criteria get added and the ticket goes back to development for rework.

You know how much people love having to redo their work? Development has to rework the code, then QA must retest it. If you are in software testing, you know the overhead required to retest features and bugs while still running through your test cases.

Retesting takes A LOT of time, especially when you must handle the overhead of creating tickets in your tracking system, installing new builds, and regression testing other aspects of the feature because the code around it has changed. It adds up, and none of this extra work was accounted for in the sprint because the initial information didn’t cover the full scope of the change. Now development is behind, QA is behind, and nobody is happy about reworking their tasks.

Here’s the scary part: what you’ve just read is the best-case scenario. Development, QA, and Product all had to put in extra work, but in the end it was resolved in one iteration.

What happens if development and QA misinterpret the acceptance criteria in the same way, so no issues are reported and nothing changes? Hopefully, there will be a demo before the product goes live, giving Product a chance to see that the result isn’t what they had envisioned.

The good news is that at least it didn’t make it into production, but now the team may have to rework the task AND delay the release because the issue was caught so late. Reworking code and retesting at the last second before a release is far from ideal, but this is still not the worst-case scenario.

What if the code makes it to production before the Product team realizes it is wrong? Now we may be required to push out an emergency change and release another build. If this happens, development and QA will be pulled away from regular sprint work to address the fix.

Could all of these issues have been avoided with an extra line or two of acceptance criteria?

Probably.

Here’s how to improve the situation going forward.

How To Remedy Bad Acceptance Criteria

If you are in development or QA, the first thing you need to do is ask a lot of questions. When reviewing requirements, make sure all of your questions are answered before you estimate the effort.

Request that additional clarifications be documented in the form of acceptance criteria.

Ask the product team to include even the most obvious acceptance criteria, because they serve as useful reminders. For example, request that the OS version or the supported hardware be included every time. (These might not change often, but they do need to be accounted for, because certain features may only be supported by an OS version newer than the minimum required for download.)
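To make the contrast concrete, here is a minimal sketch of how explicit criteria map directly to checks QA can run, while “works as expected” maps to nothing testable. The feature (a password-reset link), the function names, and the specific criteria are all hypothetical, invented for illustration:

```python
# Hypothetical ticket: "Add password reset" with explicit acceptance criteria:
#   AC1: The reset link expires 24 hours after it is created.
#   AC2: A reset request for an unknown email still reports success,
#        so the response doesn't reveal which emails are registered.
# Minimal stub of the feature so each criterion can be checked directly:

from datetime import datetime, timedelta

RESET_LINK_TTL = timedelta(hours=24)

def link_is_valid(created_at: datetime, now: datetime) -> bool:
    """AC1: a reset link is valid for 24 hours after creation."""
    return now - created_at <= RESET_LINK_TTL

def request_reset(email: str, known_emails: set) -> str:
    """AC2: always report success, whether or not the email is registered."""
    return "success"

# Each criterion becomes a check development and QA agree on before estimating:
now = datetime(2024, 1, 2, 12, 0)
assert link_is_valid(now - timedelta(hours=23), now)      # AC1: still valid
assert not link_is_valid(now - timedelta(hours=25), now)  # AC1: expired
assert request_reset("nobody@example.com", {"a@b.com"}) == "success"  # AC2
```

Notice that a criterion like “Feature works as expected” gives neither side anything to assert against, which is exactly where the differing interpretations described above creep in.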

Implore the product team to include more acceptance criteria. To support your cause, provide examples of cases where additional acceptance criteria could have saved the team time by reducing bug fixes. A good time for this is during the sprint retrospective meeting.

Oftentimes, gaps in domain knowledge are the reason for bad acceptance criteria. Certain information gets taken for granted: those with more experience around the product already know the criteria they want but don’t list them, either because of those assumptions or because they hurriedly wrote the ticket so development could get underway.

Conclusion

Help improve your team’s acceptance criteria early in the process by asking questions during requirement reviews. Suggest edits or additions to acceptance criteria and report issues to product before work is underway. With repetition, the team will create better requirements with clear acceptance criteria that result in more efficiency and less frustration.