Ensuring quality when testing mobile applications demands a high bar of entry before the effort produces value. A number of obstacles arise that simply don’t exist when testing a desktop website or program. Once a team starts to consider geo-location, connectivity, interrupt points, OS versions, hardware models…it’s enough to overwhelm even the most experienced and knowledgeable professional. Each new consideration multiplies the number of permutations for every test case covered. How do we know where to begin? The key is to define a sensible and effective coverage set.
First, let’s identify the problem we face. We have a soul-crushing number of independent variables to test. The ones listed above barely scratch the surface; however, not all of them carry equal weight. If we organize our tests by potential risk and coverage, we see a very different story. For example, iOS and Android phones are generally the largest visitor base for any mobile website. A comScore article, “Smartphones and Tablets Drive Nearly 7 Percent of Total U.S. Digital Traffic,” estimates them at a combined 90% share of mobile-device digital traffic.
Research also supports that OS and browser versions within one release of the most recent available will encompass an overwhelming majority of potential users. Many carriers even push mandatory software updates when new OS versions become available. Wouldn’t it be prudent to focus our testing on varying models of Android 2.X and 4.X, as well as the most popular iPhones in everyone’s pocket? With that, we’ve narrowed our test hardware from the roughly 200 mobile devices released this year to just a handful.
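To make the narrowing step concrete, here is a minimal sketch of filtering a device pool down to a coverage set by supported OS line and market share. The device names, OS labels, and share figures are purely illustrative, not real market data:

```python
# Hypothetical sketch: narrow a large device pool to a small coverage set.
# Device names and market-share figures below are illustrative only.

devices = [
    {"name": "Phone A", "os": "Android 4.1", "share": 0.22},
    {"name": "Phone B", "os": "Android 2.3", "share": 0.15},
    {"name": "Phone C", "os": "Android 4.0", "share": 0.18},
    {"name": "Phone D", "os": "iOS 6",       "share": 0.30},
    {"name": "Phone E", "os": "Other 1.0",   "share": 0.02},
]

# OS lines we consider worth covering (within one release of current).
SUPPORTED_OS = ("Android 2", "Android 4", "iOS")

def coverage_set(devices, min_share=0.10):
    """Keep only devices on a supported OS line with meaningful share."""
    return [
        d for d in devices
        if d["share"] >= min_share and d["os"].startswith(SUPPORTED_OS)
    ]

for d in coverage_set(devices):
    print(d["name"], d["os"])
```

The same filter applies to any variable in the matrix: define a threshold, drop what falls below it, and test what remains.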
At this point you’ve chosen which environments and devices you’ll test, but you may still wonder how to use them. Repeating the same test cases over and over on every device at your disposal would be taxing. It is more effective to prioritize your environments based not only on current market share, but also on the expected market in the coming months. I wouldn’t be very good at my job if I tested the iPhone 3GS with the same fervor as the iPhone 5; it’s simply not cost effective. If we know the iPhone 5 will steadily grow in market share over the next year, it should be our default environment. For accuracy, everything must be tested in the default environment. We’ll then pare testing on the 3GS down to the core test cases and key functions, retest the navigation links and general look-and-feel of the product, and let the 3GS quietly slip into obscurity over the next year.
In conclusion, when developing a test plan for mobile:
- Organize tests based on potential risk and coverage
- Limit testing to the most popular and widely used devices
- Use the device with the greatest market share as your default environment
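The tiered approach above can be sketched in a few lines: the default (highest-share) device runs the full suite, while legacy devices run only the core subset. The case IDs, names, and the `core` flag are hypothetical, standing in for whatever tagging scheme your test-management tool provides:

```python
# Hypothetical sketch of tiered test selection. The "default" tier
# (greatest market share) runs every case; "legacy" runs core cases only.
# Test-case names and the `core` flag are illustrative assumptions.

test_cases = [
    {"id": "TC-01", "name": "checkout flow",            "core": True},
    {"id": "TC-02", "name": "navigation links",         "core": True},
    {"id": "TC-03", "name": "promo banner layout",      "core": False},
    {"id": "TC-04", "name": "push-notification opt-in", "core": False},
]

def cases_for(device_tier):
    """Return the full suite for the default tier, core cases otherwise."""
    if device_tier == "default":
        return test_cases
    return [tc for tc in test_cases if tc["core"]]

print([tc["id"] for tc in cases_for("legacy")])
```

As a device’s share shrinks, its tier drops and its test load shrinks with it, which keeps the total effort roughly constant as new devices enter the matrix.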
Using this method, we can approach each variable or consideration in the same fashion, maximizing coverage and efficiency. In the ever-changing mobile frontier, building a test plan that mitigates risk and efficiently increases coverage is the only way to bring a quality product to market on time.
Caleb Abraham is a Consultant II at SDLC Partners, a leading provider of business and technology solutions. Please feel free to contact Caleb at firstname.lastname@example.org with any questions on this blog post or to further discuss mobile testing.