Ask someone in your QA department the following question: “Which is more important: finding problems in the code you are testing, or making sure that the code you are testing works?”
Just like a sports team, your QA team will either be offensively or defensively oriented. If your team is trying to rack up big bug counts, it is a strong indicator that they lean towards “offense.” If emphasis is placed on presenting code coverage charts and confidence percentage levels, it is a sign of a defensive mindset.
On the field or in an office, a good mixture of offense and defense is needed for success. Adjusting your approach in the middle of the match can actually be a good way to get the most out of your team.
At SDLC Partners, we recently wrapped up a project for a client who had a mobile messaging service. They called upon our high-performing team to test the mobile portion of their suite. For this project, the client used the Waterfall methodology, so we received a single large block of code, which we tested in three passes. Since the code being released would have far-reaching impacts throughout their organization, they wanted to ensure that it would work in a real-time mobile environment.
In this case, the best strategy was to find the bugs early, then methodically work through the fixed code to give our client confidence that it would work regardless of scenario or mobile platform. If we had “played defense” too early in the cycle, we would have been confirming coverage on code the customer would never see, because it would be changed to include the inevitable fixes. In the illustration below, the curve on the left shows the bug-discovery pattern that would have developed organically had we found bugs passively; the curve on the right is what we achieved by making a conscious effort to get bugs to the development team early in the process.
Here are the two techniques we used to find the most bugs in the shortest period of time.
1. Wide and shallow testing
Our team performed extensive smoke testing on many different platforms very early in the cycle. We worked through as many environments as possible, as rapidly as possible, to flush out the common environmental bugs. The testing wasn’t all-encompassing, but it wasn’t intended to be; its purpose was to check the software’s interaction with the environment it was running on. For example, a date control behaves differently on an iPhone than on an Android phone. Would both work with the software we were provided? Text labels may look fine on a simulator but render differently when squeezed onto a small form-factor display. Would everything be visible and usable? These questions were answered very early in the process to avoid bugs and delays later in the project.
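A wide-and-shallow pass like this can be sketched as a small compatibility matrix: a handful of cheap environmental checks run against every platform before any deep testing begins. The platform names, screen widths, and checks below are hypothetical stand-ins for illustration, not the client’s actual suite.

```python
# Hypothetical wide-and-shallow smoke pass: run a few cheap checks
# against every platform before doing any deep testing.

PLATFORMS = [
    {"name": "iPhone", "screen_width": 375, "native_date_picker": True},
    {"name": "Android", "screen_width": 360, "native_date_picker": True},
    {"name": "Simulator", "screen_width": 1024, "native_date_picker": False},
]

def label_fits(label: str, screen_width: int, char_width: int = 9) -> bool:
    """Rough check that a text label fits within the device width."""
    return len(label) * char_width <= screen_width

def smoke_test(platform: dict) -> list[str]:
    """Return the list of environmental problems found on this platform."""
    problems = []
    if not platform["native_date_picker"]:
        problems.append("date control needs a fallback widget")
    long_label = "Send an encrypted message to the entire distribution group"
    if not label_fits(long_label, platform["screen_width"]):
        problems.append("label truncated on small display")
    return problems

# One shallow pass over the whole matrix surfaces per-platform issues early.
results = {p["name"]: smoke_test(p) for p in PLATFORMS}
for name, problems in results.items():
    print(name, "->", problems or "OK")
```

The point of the sketch is the shape, not the checks: each platform gets the same quick battery, and anything that fails goes straight to the development team while the cycle is still young.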
2. Difficult testing
The messaging application we were testing supported multiple message types. Some were very easy to test (like sending a message to yourself); others required extensive setup, with time limits and coordination across different teams. The “easy” message types had no bugs. The really “hard” message types did. The best place to look for bugs is the road less traveled: it is usually the more difficult path, but it pays off in the end.
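One way to act on this is to order test scenarios by setup cost and start the expensive ones first, so their bugs surface in Pass 1 rather than at release. The scenario names and cost estimates below are hypothetical, sketched only to show the ordering idea.

```python
# Hypothetical scenario list: each test case carries a rough setup-cost
# estimate (hours of environment provisioning and cross-team coordination).
scenarios = [
    {"name": "send message to self", "setup_hours": 0.5},
    {"name": "scheduled message with expiry window", "setup_hours": 16},
    {"name": "cross-carrier group broadcast", "setup_hours": 24},
    {"name": "plain one-to-one message", "setup_hours": 1},
]

# "Take the road less traveled": kick off the expensive, rarely exercised
# scenarios first, since that is where the bugs were actually found.
execution_order = sorted(scenarios, key=lambda s: s["setup_hours"], reverse=True)

for s in execution_order:
    print(f'{s["setup_hours"]:>5.1f}h  {s["name"]}')
```

Because the hard scenarios also take the longest to arrange, scheduling them first means their setup overlaps with the easy testing instead of blocking the end of the pass.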
The customer was very satisfied with our contribution to the project. Of the three passes, Pass 1 took the longest. Once all of the bugs had been driven out of the code, we moved to Pass 2; thanks to the upfront work, Pass 2 and Pass 3 were very uneventful. Our insights into mobile testing, thoroughness across platforms, and test strategy ensured that there were no “surprises” just before or after the release. If you are using a testing process where you receive large blocks of code at a time, make every effort to find the bugs as early as possible (play offense). Then, once everything has stabilized, work to systematically verify that the production-level code is ready for production (play defense). With this approach, the code being tested is the code that will be used, and that confidence will allow you to present the best possible product to the customer.
Chris Hasbrouck is a Consultant at SDLC Partners, a leading provider of business and technology solutions. Please feel free to contact Chris at email@example.com with any questions on this blog post or to further discuss software testing.