Why you SHOULDN’T Test “Everything”


Have you ever been tasked with a project and thought, “Where do we start?” or “How do I know when we are done?” These are two questions that software teams must be able to answer in order to test their systems effectively. The other nagging question keeping software teams (and their management) up at night is, “Did we really test everything that is important?” I’m here to tell you that the odds are you didn’t; something was missed.

As the size and complexity of software applications continue to grow, our ability to execute test cases covering all possible scenarios is greatly hindered. The well-known computer scientist C. A. R. Hoare put it this way: “There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.”

As users, we tend to tolerate software defects until they have a serious impact on our lives or business. Take the recent example of Knight Capital Group, where a defect in its equity trading software caused erroneous orders and trading losses totaling $440 million. The story shows how much complexity can cost when high-risk defects go untested.

So, the most important question to answer is, “What can we do about it?” Successful IT organizations are using risk-based testing to tackle the problem. The practice of risk-based testing requires a significant amount of analysis, planning, and effort up front. The concept, however, is fairly straightforward.


Risk-based testing starts by clearly defining your requirements and maintaining their clarity throughout development. Changing requirements during the development cycle is like trying to hit a moving target. Once the requirements are in place, the software team can then define when the project is “complete”. Defining “complete” is essential, because once it is established, anything that deviates from that definition can be tracked as a defect in the software. A risk-based approach prioritizes the tests of features and functions based on the probability of a defect occurring.

Risk can be assessed by asking the following questions about the required functionality (a simple scoring sketch follows the list):

  • Which areas implement the core functions of the application?
  • Which modules were built by less experienced programmers?
  • Which modules have ambiguous requirements?
  • Which modules are the most complex to develop?
  • In which modules is a new development tool or methodology being introduced?
  • Which modules must interface with systems outside the one being developed?
  • Which modules have the least time available for testing? (Consider complexity-to-testing-time ratios, etc.)
  • Which functions or scenarios could result in “catastrophe” should a failure occur?

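To make the idea concrete, here is a minimal sketch of how answers to those questions might be folded into a risk score used to order testing. The module names, rating scales, and the likelihood-times-impact heuristic below are illustrative assumptions, not part of the original post.

```python
# Hypothetical risk-based test prioritization sketch.
# Ratings and module names are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class ModuleRisk:
    name: str
    defect_likelihood: int  # 1 (low) to 5 (high): complexity, new tools, vague requirements
    failure_impact: int     # 1 (low) to 5 (high): core function, catastrophic failure modes

    @property
    def score(self) -> int:
        # Common heuristic: risk = likelihood of a defect x impact of a failure
        return self.defect_likelihood * self.failure_impact


modules = [
    ModuleRisk("payment processing", defect_likelihood=4, failure_impact=5),
    ModuleRisk("report formatting", defect_likelihood=2, failure_impact=2),
    ModuleRisk("third-party interface", defect_likelihood=5, failure_impact=4),
]

# Test the highest-risk modules first and most thoroughly.
for m in sorted(modules, key=lambda m: m.score, reverse=True):
    print(f"{m.name}: risk score {m.score}")
```

However the scoring is done, the point is the same: the ordering of test effort should follow the ranked risk, not the order in which features happen to be built.
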
Risk-based testing involves identifying the most critical business functionalities within the application and determining their related vulnerabilities. The software team must engage system analysts with experience in the application area to define those areas of risk so that effective test cases can be created and executed. By prioritizing the tests in this way, software teams can ensure that the most critical functions operate correctly, without failure. As system, business, and operational costs continue to rise, software teams will need to develop solutions in shorter timeframes without sacrificing quality. Risk-based testing is a winning strategy that helps software QA teams balance time, quality, and cost.

Written by Brad Ryba

Please contact Alisa Bigelow at abigelow@sdlcpartners.com with any questions on this blog post or to further discuss risk-based testing.