Cross-browser testing is the red-headed stepchild of public-facing software QA, often poorly understood by testers, developers, and management alike. If you are a QA manager, or work closely with one, you might be groaning at the mere mention of it. If any of these grumbles sound familiar –
“Wait, we were supposed to test in IE8?”
“I just finished my test cases. You mean I have to test everything five more times?”
“I checked it in Chrome and Firefox. I’m sure that’s fine…right?”
– then you’re probably acquainted with the challenges involved in cross-browser testing. It’s tedious for your testers, frustrating for your developers, unglamorous for you, and can easily slip through the cracks when time is short or release pressure is heavy. In the past, especially at the enterprise level, it was often easier to dictate a standard approved browser to clients and customers – thou shalt use IE8! – but in 2015, users expect to access any site with the browser of their choosing, and for it to “just work”. And that’s before you even get to the free-for-all that is the mobile browser ecosystem.
So what’s a QA manager to do? If you’re going to support all major browsers, then you need to test all major browsers. But your testers already have their plates full with functional and regression testing, and they certainly don’t have the time to duplicate their whole test plan four or five times over. A risk-based approach could work, but if your team isn’t experienced with browser testing you might not even know where the likely risks are. Catching all the browser defects might not seem humanly possible. But the answer turns out to be simple – you just need to make your testers superhuman.
Automation is hardly a new concept in the testing world, but too often it gets approached as an either/or proposition: this functionality is covered or it isn’t; the automated test script passed or failed; this person is a manual tester or that one is an automation tester. But when it comes to cross-browser testing, SDLC Partners has found that the best method is to adopt a hybrid approach: take the strengths of both human and machine, and combine them to create an unstoppable testing cyborg.
In short, the key to success at cross-browser testing is to automate away the boring parts and save your testers’ time for the things the computer is bad at. An automated test script can manipulate a page and take a screenshot as many times as you like at any hour of the day or night. So let it! Your testers can look at a screenshot like an end user would and can tell right away if there’s a browser issue. So let them! It’s a match made in testing heaven.
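To make the division of labor concrete, here is a minimal sketch of what the automated half might look like: a script that walks every (browser, page) pair and records a screenshot for a human to review later. The browser list, page list, and `capture_screenshot` stub are illustrative assumptions, not a prescribed matrix; in practice the stub would be backed by a driver such as Selenium WebDriver.

```python
from pathlib import Path

# Hypothetical target matrix -- illustrative only, not a recommended list.
BROWSERS = ["ie8", "ie9", "firefox", "chrome", "safari"]
PAGES = ["login", "checkout", "account"]

def screenshot_jobs(browsers, pages, out_dir="screenshots"):
    """Enumerate every (browser, page) pair and the file each capture lands in."""
    jobs = []
    for browser in browsers:
        for page in pages:
            path = Path(out_dir) / browser / f"{page}.png"
            jobs.append((browser, page, str(path)))
    return jobs

def capture_screenshot(browser, page, path):
    """Placeholder for the real capture step.

    With Selenium WebDriver this would be roughly:
        driver.get(url_for(page))       # load the page in the target browser
        driver.save_screenshot(path)    # write the image for human review
    """
    pass

if __name__ == "__main__":
    # The machine grinds through every combination; the humans only look at images.
    for browser, page, path in screenshot_jobs(BROWSERS, PAGES):
        capture_screenshot(browser, page, path)
```

The point of the structure is the split itself: the loop runs unattended at any hour, and the tester’s time is spent only on the part a computer is bad at, judging whether each screenshot actually looks right.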
A recent client was struggling with a critical customer-facing application that was built on years of legacy code and only supported IE8/9. The user complaints were starting to pile up. Due to internal limitations, the client hadn’t even been able to perform manual cross-browser testing, let alone automate it. Moreover, the client hadn’t been aware that cross-browser compatibility was even a problem until alerted to it by their users – not the best way to discover defects! After performing a thorough manual sweep and prioritizing the areas most severely affected by cross-browser defects, we built them a custom automation suite tailored to their application and their most pressing needs. We also delivered a code analysis to help their developers avoid some of the most common cross-browser pitfalls. The result? Testing that previously took two testers six weeks (480 hours) can now be run by one tester in two days – including time for analysis – a 96% time savings over manual testing alone. That’s the edge you, your development team, and your QA manager need in order to make cross-browser testing an integral part of your testing process.
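A code analysis like the one mentioned above can start very simply: scan the front-end source for APIs that only exist in one browser family. The sketch below flags a few well-known IE-only constructs (`document.all`, `attachEvent`, `ActiveXObject`); the pattern list is an illustrative assumption, not the analysis actually delivered, and a real tool would cover far more.

```python
import re

# A few well-known IE-only APIs -- an illustrative list, not exhaustive.
IE_ONLY_PATTERNS = {
    "document.all": re.compile(r"\bdocument\.all\b"),
    "attachEvent": re.compile(r"\battachEvent\b"),
    "ActiveXObject": re.compile(r"\bActiveXObject\b"),
}

def find_pitfalls(source: str):
    """Return (line_number, api_name) for each IE-only API found in the source."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in IE_ONLY_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits
```

Even a crude scan like this gives developers a checklist of exact lines to fix before the testers ever open a browser, which is where much of the time savings comes from.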
The question is not whether you have to do cross-browser testing. You do. It’s already imperative, and will only become more so as time goes on. Even if consumers were once willing to use a dictated “approved” browser version, increasingly they aren’t even able to. The old paradigm of discrete, numbered browser releases is all but dead. With the evolution of Internet Explorer into Microsoft Edge, the model of small, frequent updates pioneered by Chrome and later adopted by Firefox is now totally dominant. More importantly, these browser updates are and will continue to be automatic. New features will be added and old ones will be deprecated – consider Chrome’s recent removal of NPAPI support, which unilaterally ended the Java plugin – and your development and testing will have to keep up.
Putting a tiny disclaimer in your site footer saying “This site best viewed using Internet Explorer 8” isn’t much different from releasing a movie in 2015 exclusively on videotape.
So if you have to do cross-browser testing, the only question left is how. With a hybrid cross-browser testing solution, you can answer that question with confidence that you’re ensuring the best possible user experience for all your customers, regardless of how they’re trying to access your site. You’re also using your resources – testers, money, time – in the most efficient way you can. Cross-browser testing is a challenge too great for humans or computers alone. So don’t rely on one or the other! Instead, let both your testers and your computers do what they do best, and your developers, testers, business team, and – most importantly – customers will thank you.