In November 2014 I spoke at the Agile Testing Days in Potsdam on the subject of using FitNesse to drive data through a legacy back end system. It was the first time I attended this crazy conference and I will share some of my experiences in another post. For now, here are the slides of my presentation.
On the 18th of March I spoke on the topic of testing as skillful investigation at the Belgium Testing Days conference. It was an honor to be invited to speak at this lively conference. It was also great (and a little unnerving) to find the room filled to capacity with listeners, as Gil Zilberfeld mentions in his tweet.
The slides of my presentation are displayed below.
I consider myself to be a tester of the functions of software. It means my expertise is in looking at how (or why, or when etc…) systems and applications function. It also means that I am less experienced in non-functional aspects of systems such as usability, performance or security. I believe that those aspects can be considered areas of expertise in themselves.
I think most testers start out as functional testers. In some introductory course on software testing they encounter a number of software testing techniques that can be used to evaluate the functions contained in an application or a system. Most commonly these techniques aim at paths and decisions and come from a more or less mathematical examination of the function. Combinations and coverage are important aspects of these techniques.
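To make the combinations-and-coverage idea concrete, here is a minimal sketch of the simplest such technique: enumerating every combination of a function's input parameters. The parameter names and values are invented for illustration; real techniques (such as pairwise testing) then reduce this full set to something manageable.

```python
from itertools import product

# Hypothetical input domains for a function under test.
payment_methods = ["card", "transfer", "voucher"]
currencies = ["EUR", "USD"]
customer_types = ["new", "returning"]

# Full combinatorial coverage: every combination of the three parameters.
# The size grows multiplicatively, which is why reduction techniques exist.
all_combinations = list(product(payment_methods, currencies, customer_types))
print(len(all_combinations))  # 3 * 2 * 2 = 12 test cases
```

Even this toy example shows why technique alone is a dead end: twelve mechanically derived cases say nothing about which combinations matter.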
Now, proficiency in functional testing is not a common trait. For a number of reasons the functional tester is thrown off the path of developing skills in functional testing. One reason is the testing career path. Gaining competence in functional testing is not rewarded as much as climbing the ladder and becoming a test coordinator, a test manager, a test coach, a quality consultant etc… Furthermore, if we define functional testing by a restrictive set of testing techniques, then the only way forward in functional testing is gaining experience with those techniques, which is a dead-end street. Another reason could be that functional testing is not regarded as valuable an area of expertise as performance testing or security testing. It is remarkable that the latter areas of software testing have gurus expanding the craft. I find it hard to come up with the name of a ‘functional testing guru’. Well, actually there are some in the context-driven school of testing.
Most of the time you see functional testers going off on a quest based on a mixture of techniques, instincts and past experiences. The latter two usually provide valuable insight into the functions of a system. They seem to find the better bugs, because the functional tester experiments with the application in ways that had not occurred to the programmer or the designer. So it seems that there is a lot more to functional testing than just techniques, but there is little explicit knowledge of what that more is.
I stumbled across our common lack of proficiency in functional testing quite a number of times. In functional testing there is often that nagging feeling that you’re missing something important, but usually it takes a lot of effort to find out how to get to that important thing. For example: I may have covered the paths of a function using some technique and found no bugs. Yet the function could still cause the application to go wrong in a fascinating way. In one project I developed a test strategy based on scenarios (see, for example, Soap Opera Testing [PDF]), which was quite a valuable and very useful approach in itself. Yet I had the feeling that this approach covered only a minor part of the ways the functionality of the system could be used. I was looking for other perspectives I could use to research the system and experiment with it.
Most of the time I use James Bach’s SFDPOT heuristic to change my perspective. As a side note, I find that many of the functional testers I meet are awkwardly ignorant of this heuristic. On the particular project I mentioned, the heuristic did not include some of the perspectives I was looking for. So through the analysis of a couple of sessions with users on risk assessment, I came up with a couple more. Yesterday I sat down and extended my list to what you see below. This list should probably be turned into a heuristic. Each of the items could use some clarification; I will leave that to later posts. Additions, of course, are welcome!
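For readers who don’t know SFDPOT (Bach’s product-elements heuristic: Structure, Function, Data, Platform, Operations, Time), here is a small sketch of how I use it to generate test charter prompts. The phrasing of each prompt and the `charter_ideas` helper are my own invention, not part of the heuristic itself.

```python
# SFDPOT: James Bach's product-elements heuristic, one prompt per element.
# The prompt wording is a personal paraphrase, not canonical text.
SFDPOT = {
    "Structure": "What is the product made of (code, files, hardware)?",
    "Function": "What does the product do, and what should each function do?",
    "Data": "What does the product process, store, or retrieve?",
    "Platform": "What does the product depend on (OS, browser, services)?",
    "Operations": "How, where, and by whom will the product be used?",
    "Time": "How is the product affected by timing, order, and frequency?",
}

def charter_ideas(feature):
    """Turn each SFDPOT perspective into a test charter prompt for a feature."""
    return [f"Explore {feature} with respect to {perspective}: {prompt}"
            for perspective, prompt in SFDPOT.items()]

for idea in charter_ideas("the invoice export"):
    print(idea)
```

The value is not in the code, of course, but in being forced to look at the same feature from six different angles.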
A final note to close this topic. Yesterday I was reading up on qualitative research (Qualitative Research: An Introduction to Methods and Designs). I got to the part on Grounded Theory and coding. Grounded Theory, in short, is a research method in which a theory is formed from observation. Coding is the abstraction of theory in parallel with the observation (for example, the text of an interview with a person). If we, as testers, consider our bug repositories as raw data and are able to abstract a theory from them by coding, we should be able to come up with numerous functional testing heuristics (such as the list above). One fact is that there is probably no shortage of bug descriptions in repositories worldwide. The other fact is that we hardly ever use them to do valuable research. Such failure to scientifically use the data we generate must be one of the reasons for the immaturity of our craft.
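To sketch what coding bug descriptions might look like in practice: below, each (invented) bug description is assigned a code by hand, and the codes are then tallied. In real grounded-theory work the coding step is iterative and interpretive, not a lookup table; recurring codes would become candidate testing heuristics.

```python
from collections import Counter

# Hypothetical bug descriptions, standing in for a real repository.
bugs = [
    "Export fails when the date field is empty",
    "Totals wrong after switching currency mid-session",
    "Crash when two users edit the same invoice at once",
    "Report shows stale data after the midnight rollover",
]

# Open coding: each description gets an abstract code (done by hand here).
codes = {
    bugs[0]: "missing/empty input",
    bugs[1]: "state change mid-flow",
    bugs[2]: "concurrent use",
    bugs[3]: "time boundary",
}

# Tally the codes; with a real repository, frequent codes would point
# at recurring failure patterns worth turning into heuristics.
print(Counter(codes.values()).most_common())
```

Note how even these four toy codes already read like entries on a functional testing checklist.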