Why we do experience reports


Not so long ago a workshop was held at Improve Quality Services. The theme of the workshop was ‘Test strategy’ and the participants were asked to present a testing strategy that they had recently used in a work situation. Mind you, participants were asked to present their test strategy, as in to show and explain it; not to do an experience report on it. Several strategies were presented and the differences were notable. Very broadly, the following strategies were discussed.

  • A description of the organization (people and processes) around the software system.
  • A description of the way testing was embedded in the development lifecycle.
  • A description of the testing principles that were shared by the testers.
  • A description of the test approach, aimed at communicating with integrating parties.
  • A description of the test approach, aimed at communicating with management.
  • A description of the approach and the actual testing of a screen.
  • A description of the approach and the actual testing of a database trigger.

I am not in a position to say whether any of these testing strategies were right or wrong and I am certain my judgement is irrelevant. I furthermore doubt that any of these strategies can be judged without further investigation, and I am sure that each of these contains elements that are great and elements that can be improved upon. It is not my aim to comment on this. However, as the evening went on, I felt a growing frustration about a number of things.

Considering the open-mindedness and critical thinking abilities of the people in the room (all of them had, for example, taken the Rapid Software Testing course), there were remarkably few comments on the strategies as presented. Aside from the occasional remark, the presentations were largely taken at face value. Now, the fact that there were not a lot of remarks can still imply many things about the evening itself, the organization of the meeting, the mental condition of the people present and so on. Still, I like to think that the setup was conducive to feedback and learning, and so I’d like to focus on the presentations themselves to see why they didn’t invite comment, or at the very least, why I did not feel inclined to comment.

My first issue during the evening was that most of the presentations did not discuss what happened when the rubber actually hit the road. If the proof of the pudding is in the eating, we hardly discussed how the strategy went down: whether it cracked under the first strain or whether it was able to stay the course. If there is a way to evaluate a testing strategy (to put it to the test), it is to carefully note what happens to it when it is applied. Evaluation was the part that was largely missing from our presentations.

You see this kind of thing quite a lot at (testing) conferences. The speaker presents an approach, a framework, a general theory or a general solution without getting into how this solution came into being or how it developed when it was actually applied in practice. The mere presentation of a theory does not lend itself to criticism. My reaction to this form of presentation is to shrug and move on with my business. I am unable to criticize it from my specific context, because the presented approach is usually not very specific about context, so I cannot check whether my context applies. The only other form of criticism available to me is to reason about the approach in an abstract way, either by checking the internal logic of the theory, or by comparing the theory to other theories in the same domain to see if it is consistent. This is not ideal, and to do this within the 40 minutes of a conference talk, without access to reference material, is a tall order.

I had this feeling when, a couple of months ago at my work, an ATDD (Acceptance Test Driven Development) framework was presented as the new Agile way of working. The thing I can remember from that presentation is that there was a single image with bits and pieces and connections explaining the framework. The rest is a blur. I have never heard anything about it since.

So the question is: what do we need to do to open up our theories to evaluation and investigation?

My second issue with the presented strategies was initially about distance. Quite a number of the strategies that were presented seemed distanced from the subject under test (SUT). By the subject under test I mean the actual software that is to be tested. And by distance, I mean that there were strategies that did not primarily discuss the subject under test. I was absolutely puzzled to see that some of the presented strategies did not discuss the execution of that strategy. As I stated above, the proof of the strategy should be in its execution. Discussing strategy without execution just didn’t make sense to me.

But looking back at this experience, I think I wrestled with the purpose of offering up the strategies to (more or less) public scrutiny. At least one or two presentations discussed not so much the test strategy itself as the communication of the strategy with management, integrating parties or the test department. This focuses on an entirely different aspect of testing, namely communicating about the test strategy in order to reach common ground or to align people along a common goal. The purpose of such a presentation is not to scrutinize the test strategy, but to invite an examination of the way it was communicated. This purpose should be clear from the start. Otherwise the ensuing discussion is partly consumed by determining that purpose (which may be a waste of time) or, if there is no quest for the purpose, the discussion follows a winding path that has a good chance of leading neither the audience nor the presenter anywhere at all.

The third thing that bothered me was that the displayed strategies rarely, if ever, discussed people. They discussed roles, but not people. That is the reason why, in my very short presentation, I decided (on the spur of the moment) to pay hardly any attention to the actual strategy that I had selected and to focus instead on the characteristics of the individuals in my team. There are two aphorisms that I had in mind while doing this: “No matter what the problem is, it is always a people problem” and “Culture eats strategy for breakfast”. It appears to me that no matter how excellent your plan is, the result of its execution largely depends on the people who are aligned to play a part in this strategy. Wherever the strategy leaves room for personal interpretation, there will be interpretation. And basically, no strategy will ever be executed in the same way twice, not even with the same group of people, because the decisions people make will differ from time to time, influenced by many factors. If this is true, and I think it is, then I wonder why the human factor is not present in a more explicit and defined way in our testing strategies and in our reports in general. We seem prejudiced (primed?) to talk about processes and artifacts, and to fear the description of flesh and bone. This is a general remark on the evaluation of the context. If a report displays people as ‘puppet A’ and ‘puppet B’, that is a sure sign of trouble. I know this from experience, because our famed Dutch testing approach TMap Next exclusively discusses cardboard figures as a replacement for humans.

In conclusion: for an experience report to be open to evaluation and investigation, and for a meaningful discussion to ensue, it should contain at least these three things.

  • The purpose (research question) of the report should be clear,
  • the context of the report should be described (including the hominids!) and
  • the results of applying the approach should be presented.

Hopefully I have been able to clarify these demands by sharing my feelings above. The discussion loops back to the use of experience reports within the peer conferences organized by the Dutch Exploratory Workshop on Testing. The way we look at an experience report is evolving, and the road towards a better understanding of what we do (as a workshop) and how we do it has been a very meaningful one.