Functional testing heuristics – a systems perspective


In my previous post I promised to get back to the topic of functional testing heuristics. There are several reasons why I am getting back to it now. The main reason is that I was able to expand the list to 41 heuristics. One of the other reasons is that I learned from making this list that most of the items are probably aspects, or characteristics, of systems. That is why I added the phrase ‘a systems perspective’ to the title of the list. I do not claim to be an expert in systems thinking, but I do believe that software systems have common characteristics that we often fail to highlight in our approaches to functional testing. As you go through the list you will notice that most of these aspects are fairly obvious.

Keep in mind that the list is not a checklist. It is not part of a methodology or an approach. It is not a definitive set of ‘functional testing principles’, and if you have other names by which to identify characteristics of systems, please, by all means, go ahead and use those. Or, even better, share them with the testing community. Also, additions are warmly welcomed.

Another reason I am getting back to it now is that I learned from making this list that we, as testers in the field, may want to reflect a little more on the work we do and the things we encounter. I believe that a research methodology such as grounded theory suits that purpose reasonably well.

The list is published on my website with some commentary. If you’re only interested in the PDF file, it can be found here.


A closer look at functional testing heuristics


I consider myself to be a tester of the functions of software. It means that my expertise lies in looking at how (or why, or when, etc.) systems and applications function. It also means that I am less experienced in non-functional aspects of systems such as usability, performance or security. I believe that those aspects can be considered areas of expertise in themselves.

I think most testers start out as functional testers. In some introductory course on software testing they encounter a number of testing techniques that can be used to evaluate the functions contained in an application or a system. Most commonly these techniques aim at paths and decisions and come from a more or less mathematical examination of the function. Combinations and coverage are important aspects of these techniques.
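To make that concrete, here is a minimal sketch of what such a technique-driven approach might look like. The function, its rules and the expected values are all invented for illustration: a hypothetical discount calculation with two decisions, and test cases chosen so that every decision outcome (in fact, every combination of outcomes) is exercised at least once.

```python
# A hypothetical function with two decisions (invented for illustration).
def discount(order_total, is_member):
    """Return the discount percentage for an order."""
    if order_total >= 100:   # decision 1: large order?
        rate = 10
    else:
        rate = 0
    if is_member:            # decision 2: member?
        rate += 5
    return rate

# Decision/combination coverage: every outcome of both decisions is hit.
test_cases = [
    # (order_total, is_member, expected)
    (150, True, 15),   # decision 1 true,  decision 2 true
    (150, False, 10),  # decision 1 true,  decision 2 false
    (50, True, 5),     # decision 1 false, decision 2 true
    (50, False, 0),    # decision 1 false, decision 2 false
]

for total, member, expected in test_cases:
    assert discount(total, member) == expected, (total, member)
print("all decision-coverage cases pass")
```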

Now proficiency in functional testing is not a common trait. For a number of reasons the functional tester is thrown off the path of developing skills in functional testing. One reason is the testing career path: gaining competence in functional testing is not rewarded as much as climbing the ladder and becoming a test coordinator, a test manager, a test coach, a quality consultant and so on. Furthermore, if we define functional testing by a restrictive set of testing techniques, then it seems that the only way forward in functional testing is gaining experience with those techniques, which is a dead-end street. Another reason could be that functional testing is not regarded as a valuable area of expertise in the way that performance testing or security testing is. It is remarkable that the latter areas of software testing have gurus expanding the craft, while I find it hard to come up with the name of a ‘functional testing guru’. Well, actually there are some in the context-driven school of testing.

Most of the time you see functional testers going off on a quest based on a mixture of techniques, instincts and past experiences. The latter two usually provide valuable insight into the functions of a system. Functional testers seem to find the better bugs because they experiment with the application in ways that had not occurred to the programmer or the designer. So it seems that there is a lot more to functional testing than just techniques, but there is little explicit knowledge of what that ‘more’ is.

I have stumbled across our common lack of proficiency in functional testing quite a number of times. In functional testing there is, most of the time, that nagging feeling that you’re missing something important, but it usually takes a lot of effort to find out how to get to that important thing. For example, I may have covered the paths of a function using some technique and found no bugs. Yet despite that, the function could still cause the application to go wrong in a fascinating way. In one project I developed a test strategy based on scenarios (see, for example, Soap Opera Testing [PDF]), which was quite a valuable and very useful approach in itself. Yet I had the feeling that this approach covered only a minor part of the ways the functionality of the system could be used. I was looking for other perspectives I could use to research the system and experiment with it.
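As an illustration of the kind of thing I mean (the function and the data are mine, deliberately simplified): a function whose branches are all covered by passing tests, yet which can still break the application for a perfectly plausible input that the technique never pointed me at.

```python
# Hypothetical example: branch coverage looks complete, yet a failure remains.
def average_spend(totals):
    """Return the average order total, or 0 for a missing history."""
    if totals is None:                 # branch A: no history at all
        return 0
    return sum(totals) / len(totals)   # branch B: compute the average

# Two tests, one per branch: 100% branch coverage, no failures.
assert average_spend(None) == 0
assert average_spend([10, 20, 30]) == 20

# Yet a realistic input still crashes the function:
# average_spend([]) raises ZeroDivisionError, and nothing in a path- or
# decision-based technique forced me to try the empty list.
```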

Most of the time I use James Bach’s SFDPOT heuristic to change my perspective. As a side note, I find that many of the functional testers I meet are awkwardly ignorant of this heuristic. On the particular project I mentioned, the heuristic did not include some of the perspectives I was looking for, so through the analysis of a couple of risk-assessment sessions with users, I came up with a couple more. Yesterday I sat down and extended my list to what you see below. This list should probably be turned into a proper heuristic. Each of the items could use some clarification; I will leave that to later posts, though a small sketch after the list illustrates one of them. Additions, of course, are welcome!

  • Patterns
  • Sequence
  • States
  • Concurrency
  • Confluence
  • Synchronization
  • Sharing
  • Interactions
  • Repetition
  • Hierarchy
  • Dependencies
  • Parameters
  • Rules
  • Configuration
  • Constraints
  • Resources
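To give a feel for the kind of perspective I have in mind, here is a small, made-up sketch for just one item, ‘Sequence’: two operations that are each correct on their own, but whose combination only makes sense in one order. The class and its rules are invented for illustration.

```python
# Made-up illustration of the 'Sequence' aspect: order of operations matters.
class Order:
    def __init__(self):
        self.total = 100.0
        self.shipped = False

    def apply_discount(self, pct):
        self.total *= (1 - pct / 100)

    def ship(self):
        self.shipped = True

# Expected order of use: discount first, then ship.
order_a = Order()
order_a.apply_discount(10)
order_a.ship()
print(order_a.total)   # 90.0 -- fine

# Reversed order: the discount silently changes an already shipped order,
# which a test exercising only each operation in isolation would never show.
order_b = Order()
order_b.ship()
order_b.apply_discount(10)
print(order_b.total)   # 90.0 -- arguably a bug; it should have been rejected
```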

A final note to close this topic. Yesterday I was reading up on qualitative research (Qualitative Research: An Introduction to Methods and Designs) and got to the part on grounded theory and coding. Grounded theory, in short, is a research method in which a theory is formed from observation. Coding is the abstraction of concepts from the raw observations (for example, the text of an interview with a person), carried out in parallel with the observation itself. If we, as testers, consider our bug repositories as raw data and are able to abstract a theory from them by coding, we should be able to come up with numerous functional testing heuristics (such as the list above). One fact is that there is probably no shortage of bug descriptions in repositories worldwide. The other is that we hardly ever use them to do valuable research. Such a failure to make scientific use of the data we generate must be one of the reasons for the immaturity of our craft.
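For what it is worth, a deliberately crude sketch of what a very first pass over a bug repository might look like. Real grounded-theory coding is interpretive work done by a human reader, not keyword matching, and the bug descriptions and the keyword-to-code table below are entirely invented; the sketch only shows the general direction: raw bug descriptions in, candidate categories (codes) out.

```python
# A deliberately crude sketch: tallying candidate 'codes' over bug descriptions.
# (Real grounded-theory coding is interpretive, human work; this keyword tally
# only hints at the direction. All data below is invented.)
from collections import Counter

bug_descriptions = [
    "Totals wrong when two users edit the same invoice at the same time",
    "Report crashes when the export is run twice in a row",
    "Discount still applied after the order has been shipped",
    "Import fails when the configuration file is missing a section",
]

# Candidate codes and the keywords that hint at them (assumptions, not a standard).
codes = {
    "concurrency": ["same time", "simultaneous", "two users"],
    "repetition": ["twice", "again", "repeat"],
    "sequence": ["after", "before", "already"],
    "configuration": ["configuration", "config", "setting"],
}

tally = Counter()
for text in bug_descriptions:
    lowered = text.lower()
    for code, keywords in codes.items():
        if any(keyword in lowered for keyword in keywords):
            tally[code] += 1

for code, count in tally.most_common():
    print(f"{code}: {count}")
```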