“Testing is an infinite process of comparing the invisible to the ambiguous in order to avoid the unthinkable happening to the anonymous.” – James Bach

Our original series of articles on the software testing debate began back in October 2013 with Software Testing Debate Becomes Open Warfare. Another outbreak occurred in September 2014, ostensibly about a new Software Testing Standard, but in More Software Testing Warfare we discussed the deeper issues underlying the debate. Now, in August and November 2015, we have further insights from the differing parties. The real eye-opener is the November contribution, but let’s take the August input as a starting point.

In an August 24th, 2015 post, Schools of Software Testing: A Debate with Rex Black, Cem Kaner provides a recording of the debate between Rex and himself at the 2014 Software Test Professionals Conference (STPCon). The recording has been merged with slides to create a video, and Cem provides a link to additional slides and notes. Cem’s notes are a goldmine of information about how software testing has evolved to keep pace first with agile development and then with web-based technology and mobility. Cem credits Bret Pettichord with identifying five schools of software testing:

  • Analytic School: sees testing as rigorous and technical with many proponents in academia
  • Standard School: sees testing as a way to measure progress with emphasis on cost and repeatable standards
  • Quality School: emphasizes process, policing developers and acting as the gatekeeper
  • Context-Driven School: emphasizes people, seeking bugs that stakeholders care about
  • Agile School: uses testing to prove that development is complete; emphasizes automated testing

In the debate, Rex’s main argument is that the different approaches to software testing should be seen as differences in strategy rather than divisions into schools. Cem disagrees, noting that this idea is “plausible, but incomplete in a fundamental way. The problem is that it ignores the social dynamics of the field, which is exactly what we are trying to capture with the idea of ‘schools.’”

Cem expands on the idea of “schools,” and it seems that he and Rex are no closer on this concept than they were back in 2013. However, one thing they have agreed on is that the Context-Driven School (of which Cem and Bret Pettichord are members) has behaved in an insulting way towards Rex. Cem suggests that “as individuals, we get to choose how far we go down the path of divisiveness” and agrees that Rex is “well-justified in feeling that some people are behaving badly and that they have treated him badly.”

The Context-Driven School is represented by the Association for Software Testing (AST), “an international non-profit professional association with members in over 50 countries. AST is dedicated to advancing the understanding of the science and practice of software testing according to Context-Driven principles.”

The WOPR24 Experience Report is a November 19, 2015 blog post relaying the findings of a peer workshop on performance and reliability (WOPR) testing and monitoring, run by the author, Eric Proegler, on October 22-24. Eric chairs the AST Committee on Standards and Professional Practices (CSPP), which aims to “advance the AST as a credible, trusted, cited, and authoritative source of information regarding Testing Standards, Tester Certification, and the Regulation of Software Testing and Software Quality.” Eric is also a candidate for the AST Board of Directors, and his Stump Speech gives details of his professional experience and his work in the testing community.

The WOPR24 Workshop Theme is:

“Production is where performance matters most, as it directly impacts our end users and ultimately decides whether our software will be successful or not. Efforts to create test conditions and environments exactly like Production will always fall short; nothing compares to production!

Modern Application Performance Management (APM) solutions are capturing every transaction, all the time. Detailed monitoring has become a standard operations practice – but is it making an impact in the product development cycle? How can we find actionable information with these tools, and communicate our findings to development and testing? How might they improve our testing?”

Experience Reports (ERs) are narratives supported by charts, graphs, results, and other relevant data, each followed by a facilitated discussion triggered by the ER. In the WOPR24 Experience Report, Eric details the results of ER presentations given by seven attendees over three days, covering a range of applications. Between ERs, the participants conducted several guided discussions to explore specific areas of interest. Workshop exercises included:

What should we alert on? This resulted in a long, though incomplete, list of very useful details, e.g.

| Monitor What | Threshold | Notes |
| --- | --- | --- |
| CPU (user + system; physical and virtual; per core and overall) | > 75% Web/App; > 50-60% DB | For VMs with 4 or fewer CPUs and hyperthreading enabled: warning at 50, alarm at 75, critical at 95. With more than 4 CPUs, or bare metal with HT disabled: warning at 75, critical at 99. Context-dependent. Open questions: how many sequential observations should trigger (two at which level? three for alarm? one for critical)? One-minute or five-minute monitoring interval? |
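The open question about sequential observations lends itself to a short illustration. The following is a minimal, hypothetical Python sketch, not taken from the report: the warning/alarm/critical percentages are borrowed from the table above, while the consecutive-observation counts and all class and function names are assumed values chosen for the example.

```python
from collections import deque
from typing import Optional

# Hypothetical severity bands, loosely based on the numbers discussed at
# WOPR24: (threshold %, consecutive one-minute observations required, label).
# The observation counts answer the workshop's open question in one
# illustrative way; they are not from the report.
LEVELS = [
    (95.0, 1, "critical"),  # a single reading above 95% fires immediately
    (75.0, 3, "alarm"),     # three one-minute readings in a row above 75%
    (50.0, 2, "warning"),   # two one-minute readings in a row above 50%
]

class CpuAlerter:
    """Tracks recent CPU readings and reports the highest severity hit."""

    def __init__(self, history_len: int = 5):
        self.samples = deque(maxlen=history_len)  # recent CPU % readings

    def observe(self, cpu_percent: float) -> Optional[str]:
        """Record one one-minute reading; return a severity if one triggers."""
        self.samples.append(cpu_percent)
        for threshold, needed, severity in LEVELS:
            recent = list(self.samples)[-needed:]
            if len(recent) == needed and all(s > threshold for s in recent):
                return severity  # LEVELS is ordered most to least severe
        return None

if __name__ == "__main__":
    alerter = CpuAlerter()
    for reading in [40, 55, 62, 80, 83, 96]:  # simulated one-minute samples
        level = alerter.observe(reading)
        if level:
            print(f"{reading}% CPU -> {level}")
```

Requiring several consecutive breaches before firing is what keeps a single one-minute spike from paging anyone; how many breaches, and at which severity, is exactly the question the attendees left open.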

What Would an Ideal Dashboard Look Like? Participants broke up into groups to talk about different contexts, producing a dashboard for each, e.g. SaaS Company Dashboard, E-Commerce, BigCorp’s Dashboard, and a Bonus Screenshot of an Ops Dashboard Someone Built.

What Would We Want to Monitor for Mobile Users? Twenty-one attributes were listed for the device and 16 for the app.

Apart from his day job and running WOPRs, Eric has taken the time to review Parts 1 & 2 of the controversial ISO/IEC/IEEE 29119 Software Testing Standards; see Analysis of ISO 29119-1 and Analysis of ISO 29119-2: Test Processes.

Eric’s analyses give a good idea of the gulf between the Context-Driven School and the other testing schools. Cem’s efforts to bridge the gap are admirable, but there is still a long way to go.