Software Testing Debates Part 3 – When Not To Automate
By Ted Smillie on Saturday, January 4th, 2014
Features in QESP Newsletter
Volume 25, Issue 4 – ISSN 1325-2070
The previous article, Software Testing Debates Part 2 – Cloud and Mobility Testing, included comments from practitioners with differing views on the use of software test automation tools. This article revisits some of those comments in more detail and gathers opinions from other veteran software testers for a closer look at both sides of the argument.
The previous article quoted from Network World’s “Software Quality” blog, 20th August 2013, where Ole Lensmar, Chief Architect at SmartBear Software, referred to the “anti-tool movement” within software testing. Expanding on that line of thought, Ole says:
“Many testers refrain from using tools in general, as they don’t want to be “trapped” in a tool-imposed line of thought. Many testers feel (with good reason!) that tools hamper their creativity and out-of-the-box mindset which is so essential to successful testing. They have a point; you should be in control of the process and tools, not the other way around.”
In this blog, Ole’s focus is not so much on testing mobile apps as on using mobiles for testing. He says that this “makes extra sense for testers considering the fact that more and more applications have a mobile component. They have to be tested “in the wild” with fragile networks, bad positioning signals and draining batteries. Empowering testers with the ability to perform their testing (be it automated or exploratory) in the same environment as the end user – on trains, in tunnels, in cities, in the country, etc. – is extremely valuable because this is usually where things go wrong in the end, and not in your test lab at the office.”
Ole’s more controversial comments relate to his view of the testing and QA community as a conservative domain, “with testers as a group often slow to adopt many of the ongoing trends in development.”
See http://www.networkworld.com/community/blog/when-will-software-testing-be-truly-mobile.
The previous article included a closer look at mobile app testing, quoting from an article by Reghunath Balaraman, Enterprise QA Transformation Principal Consultant with Infosys. Referring to test automation for mobile apps, the author discusses some of the challenges:
“There are several technologies that drive mobile automation from complex image recognition techniques to a combination of image and text identification. The devices and their form factors have a high level of influence on the way the user interface components are positioned and displayed.” The article notes the implications of these factors in identifying the requirements for an automation tool, the learning curve for using it and the challenge of integrating it with existing test management tools. The author concludes: “My personal preference is to extend the existing functional automation tool rather than introducing a new tool that is aimed solely at automation of mobile components.” See http://www.infosys.com/IT-services/independent-validation-testing-services/Documents/technical-tester.pdf.
A 23 May 2013 TechTarget SearchSoftwareQuality.com article by editor James A. Denman treats the differing views on mobile test automation as a debate between two veteran testers, noting “One veteran recommends automating all mobile software tests. Another expert says to focus on planning and automate only where necessary.”
This is an interesting approach and a good read, but in switching between the protagonists James appears at times to be comparing a project-based testing viewpoint with an enterprise-based software development life cycle viewpoint. The “veteran tester” who “recommends automating all mobile software tests” has a wider role than testing and is talking about more than testing. The other expert also has a wider role than project-based testing – but more of that later.
The article is titled A debate on the merits of mobile software test automation and the introduction includes the teaser “Two veteran software testers face off on the pros and cons of maximizing test automation for mobile application testing efforts.” See http://searchsoftwarequality.techtarget.com/news/2240184658/A-debate-on-the-merits-of-mobile-software-test-automation (requires free membership to view).
The two veteran software testers are JeanAnn Harrison, Software Testing & Services Consultant at Project Realms, Inc., and Denali Lumma (Nicholson), Engineering Manager, Quality at Okta. Both JeanAnn and Denali have impressive professional profiles, and both speak regularly in public forums.
JeanAnn’s LinkedIn page includes a range of her publications, including a link to A debate on the merits of mobile software test automation. The publications include webinars, e.g. a July 31, 2013 XBoSoft/Project Realms webinar titled To Automate or Not to Automate & Exploratory Testing for Mobile Software, along with other XBoSoft webinars from August and September 2013 on aspects of mobile software testing. JeanAnn also lists a range of groups and associations, including the Mobile Test Automation and the Mobile Testing – Automation & Manual LinkedIn discussion groups; see http://www.linkedin.com/pub/jeanann-harrison/4/b55/865.
Denali is Engineering Manager, Quality at Okta, which provides an enterprise-grade identity management service. Okta is now using Sauce Labs’ Selenium-based test cloud, with impressive results. A June 25th Sauce Labs blog, Sauce Labs Liberates Okta Developers with Massive Increase in Productivity, notes that Okta’s use of the Selenium-based test cloud “resulted in boosting developer productivity by 80 percent, while shortening Okta’s key test suite from 24 hours to 10 minutes.” The blog quotes Denali as saying “In deploying Sauce Labs we have reduced the time it takes to just debug Selenium from three days to three hours. Moving our testing system to the cloud also reduced the cost of hardware, electricity and labor. The biggest ROI increase, however, was in developer productivity.”
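For readers unfamiliar with how a Selenium-based test cloud is driven, the sketch below shows the general pattern: the test script itself is unchanged, but the browser session is created on a remote grid rather than on a local machine. This is a minimal illustration only; the endpoint, credentials, capability values and URL are placeholders, not Okta’s or Sauce Labs’ actual configuration.

```python
# Minimal sketch of driving a cloud-hosted browser via Selenium's
# RemoteWebDriver (Python bindings, Selenium 2/3-era API).
# USERNAME/ACCESS_KEY and the target URL are placeholders.
from selenium import webdriver

capabilities = {
    "browserName": "safari",
    "platform": "OS X 10.9",  # the cloud grid provisions this machine
    "version": "7",
}

driver = webdriver.Remote(
    command_executor="https://USERNAME:ACCESS_KEY@ondemand.saucelabs.com/wd/hub",
    desired_capabilities=capabilities,
)
try:
    driver.get("https://example.com/login")  # placeholder URL
    assert "Login" in driver.title
finally:
    driver.quit()
```

Because each session is just a capabilities dictionary sent to a remote hub, many such sessions can run in parallel across the grid, which is what collapses a serial 24-hour suite into minutes.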
In A debate on the merits of mobile software test automation, Denali says “At Okta, we’re dedicated to automation in testing, deployment, operations and every aspect of our application lifecycle.” She notes that due to the need for high scalability and availability, single sign-on cannot be built and tested in a manual way. There are too many tests involved and not enough time to run them all. “We focus most of our efforts on automation and almost none on manual testing.”
This is contrasted with JeanAnn’s comments, which include “People think you can do 90% automation, but that’s ridiculous.” JeanAnn’s own experience on functional testing puts automated tests at about 20% on mobile projects. If testing tools and automation techniques continue to improve, she could see automation reaching the 40% mark, but thinks that “50% would be a bit of a reach.” JeanAnn also thinks “many forms of testing are still much more exploratory on mobile devices than they are with Web or desktop applications.” As noted above, JeanAnn’s webinars include To Automate or Not to Automate & Exploratory Testing for Mobile Software.
Returning to Denali’s heavily automated testing viewpoint, she notes that Okta’s “website is well covered and the testing there is mature, but they’re still building out a lot of new tests with some of the native mobile platforms”. For this her team depends heavily on mobile software test automation tools. However, while very happy with the Sauce Labs toolset (“features like video logging, screenshots, breakpoints and the ability to interact with the browser brought the developers’ debugging time down from three to five days to three to five hours”), Denali also notes that while Sauce Labs is good for the Apple platform, its Windows device support lags. “Automated testing of Safari on Mac was a huge win for us. But with Windows testing, we have a real need to support specific OS versions and service pack combos.” For testing native code on Windows devices, her team uses a separate cloud-based development and testing tool called Skytap. She concludes with a comment on regression testing, a telling point for test automation, noting that some of the focus has shifted from new tests to the existing regression tests. “Test maintenance is an increasing cost over time as applications grow and change, so it’s the biggest area in terms of challenge and reward.”
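The article does not say how Okta contains that maintenance cost, but one widely used technique for doing so is the Page Object pattern: locators for each screen live in a single class, so a UI change is absorbed in one file rather than rippling through every test. A minimal sketch, with all names and locators hypothetical:

```python
# Page Object sketch: tests talk to an abstraction, not to raw locators,
# so a UI change is absorbed in one place. All names are hypothetical.
from selenium.webdriver.common.by import By


class LoginPage:
    # If the markup changes, only these locators need updating.
    USER_FIELD = (By.ID, "username")
    PASS_FIELD = (By.ID, "password")
    SUBMIT_BTN = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USER_FIELD).send_keys(user)
        self.driver.find_element(*self.PASS_FIELD).send_keys(password)
        self.driver.find_element(*self.SUBMIT_BTN).click()


def test_login(driver):  # 'driver' supplied by the test harness
    LoginPage(driver).log_in("alice", "secret")
    assert "Dashboard" in driver.title
```

The more of a suite that is routed through abstractions like this, the flatter the maintenance curve Denali describes.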
The debate winds up with some final points from JeanAnn, who notes that in her experience, “mobile application development projects come with frequent periods of drastic change. While most changes come in a small, iterative fashion, the total change from version 2.0 to version 3.0 is generally quite dramatic… mobile applications can undergo sweeping changes and may even restart the codebase from scratch. When the codebase does change dramatically, few if any automated regression tests can be kept and run on the new application.”
JeanAnn also points out some issues regarding the interdependencies of functions within mobile devices, noting that the notification capabilities of some phones are closely tied to their operating system. Adding notification features to an existing application could introduce the need for the QA team to test mobile operating system functions that were never part of the mobile application before. “That would change the way you test and completely change your automated tests as well.”
In JeanAnn’s view, “If you can test it manually faster than you can write and run the script, you have to ask if it’s worth it to go through the process of automating.” Her advice on the best mobile software test automation tool set is that it depends on what the team in question is going to test, why they’re testing it, and what sort of tests they’ll be running. A solid testing plan is the best testing tool. “Plan. Plan. Plan your tests.”
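JeanAnn’s manual-versus-script question can be made concrete with simple break-even arithmetic. The sketch below is purely illustrative (the numbers are invented, not drawn from the article): it estimates how many times a test must run before the up-front scripting effort pays for itself.

```python
def breakeven_runs(manual_min, build_min, automated_min, upkeep_min=0.0):
    """Runs needed before automating beats manual execution.

    manual_min    -- minutes to execute the test by hand, once
    build_min     -- one-off minutes to write and debug the script
    automated_min -- minutes per automated run (including triage)
    upkeep_min    -- average per-run maintenance cost
    """
    saving_per_run = manual_min - (automated_min + upkeep_min)
    if saving_per_run <= 0:
        return float("inf")  # automation never pays off on time alone
    return build_min / saving_per_run


# Illustrative only: a 10-minute manual test, 4 hours to script it,
# 1 minute per automated run, 1 minute average upkeep per run.
print(breakeven_runs(10, 240, 1, 1))  # -> 30.0 runs to break even
```

On a codebase that is rewritten between major versions, as JeanAnn describes, a scripted test may never reach its break-even count, which is precisely her argument.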
JeanAnn expands on these views in a separate TechTarget SearchSoftwareQuality.com Tip, also by editor James A. Denman, titled Mobile app quality takes more than just software testing automation, see http://searchsoftwarequality.techtarget.com/tip/Mobile-app-quality-takes-more-than-just-software-testing-automation.
Here, JeanAnn notes that in her view “several different types of testing that are important for mobile application software quality are ill-suited to automation. These types of testing include trainability tests, configuration tests, mobile device performance and usability testing.” She believes that these tests are faster, easier and less expensive to run manually than to automate, citing the number of tests that would have to be written and maintained, the frequency with which mobile application code bases change, and the consequent need to reconsider and rewrite any automated tests. In contrast, JeanAnn suggests that manual testers can handle such changes intuitively, with relatively little difference in the overall script used to test multiple devices.
On trainability and configuration testing, JeanAnn notes that for completely Web-based mobile applications, the Web browser takes care of configuration, so trainability is not an issue. However, for mobile applications with native functionality built in, “both configuration and trainability concerns bubble up to the surface.” JeanAnn discusses the difference between a tablet and a phone app. “When we look at Facebook on a tablet and on a mobile phone, side by side, we see two very different user interfaces. The Facebook app not only displays differently on a tablet than it does on a phone, but it has differences in functions such as searching and newsfeed updates.” This difference presents a configuration testing issue: when those configurations behave differently, it becomes necessary to test each configuration separately. JeanAnn provides further comments on why this is also a trainability issue.
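JeanAnn’s point that each configuration must be tested separately is exactly what test parametrization expresses. The pytest sketch below runs the same check once per device profile, using Chrome’s mobile emulation as a stand-in for real hardware; the profile names and URL are hypothetical, and available device names vary by Chrome version.

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical profiles; a real lab would map these to physical
# devices or a device cloud rather than browser emulation.
PROFILES = ["iPhone 6", "iPad", "Nexus 5"]


@pytest.fixture(params=PROFILES)
def driver(request):
    opts = webdriver.ChromeOptions()
    # Chrome's built-in mobile emulation stands in for a device here.
    opts.add_experimental_option(
        "mobileEmulation", {"deviceName": request.param})
    drv = webdriver.Chrome(options=opts)
    yield drv
    drv.quit()


def test_page_renders_heading(driver):
    driver.get("https://example.com")  # placeholder URL
    # The same assertion runs once per profile, so a tablet-only
    # layout difference fails only the tablet configuration.
    assert driver.find_elements(By.TAG_NAME, "h1")
```

Parametrization covers the “run it on every configuration” half of the problem; JeanAnn’s caution is that the behavioural differences between those configurations still have to be discovered and asserted by a human first.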
On performance testing, JeanAnn points out why it works very differently on mobile devices than it does with Web applications, and why it will not be easy to automate. On usability tests, JeanAnn again gives examples of aspects of testing which are simple and intuitive for the manual tester but difficult to automate.
Again, JeanAnn stresses the importance of planning. She acknowledges that automation has a place in mobile software testing, “but it can’t hit all the corner cases yet”. She advises: “Keep the automated tests for the rest, but with specific functionality-combination-type tests, that’s where you’ll need to spend your time in observation and exploratory testing.”
Talking of exploratory testing, there is an old article which is funny as well as informative, and which I think is worth a reprint. Check out Testing Google’s ‘Drunk E-Mail’ Protector.