On the Value of Test Cases

Something is rotten in the state of Denmark.

William Shakespeare – Hamlet

Over a couple of weeks, I was able to observe the use of test cases in a software development project. The creation of test cases started the moment the functional specifications were declared to be relatively crystallized. The cases were detailed in specific steps and entered into a test management tool, in this case HP Quality Center. They would be reviewed, executed in due time, and the results reported to project management.

During the weeks after the finalization of the functional specifications, not a lot of software was actually built, so the testers involved in the project saw the perfect chance to prepare for the coming release by typing up their test cases. They believed they had been given a blissful moment before the storm, in which they could strengthen their approach and do as much preparatory work as possible, so as to be ready when the first wave of software hit. Unfortunately, preparation, to these testers, meant the detailed specification of test cases for software changes that had yet to be developed, for a system that was partly unknown or unexplored by them, and against functional specifications that proved to be less than ready.

There is no need to guess what happened next. When software eventually started coming down the line, the technical implementation of the changes was not quite as expected, the functional specifications had changed, and the project priorities and scope had shifted because of new demands. It was as if the testers had shored up defenses to combat an army of foot soldiers carrying spears and now found themselves, much to their surprise, facing howitzers. Needless to say, the defenders were scattered and forced to flee.

It is easy to blame our software development methods for these situations. One might argue that this project has the characteristics of a typical waterfall project and that the waterfall model of software development invites failure. Such was argued as early as the 1970s. But instead of blaming the project, we could ask ourselves why we prepare for software development the way we do. My point is that by pouring a huge amount of energy into trying to fixate our experiments in test cases (and rid them of meaning, but that is another point), we willingly and knowingly move ourselves into the spot where we will be hurt the most when something unexpected happens (see Nassim Nicholas Taleb’s The Black Swan for reference). Second, I think we seriously need to reassess the value of drawing up test cases as a method of preparation for the investigation of software. There are dozens of other ways to prepare for the investigation of software. I think even doing nothing beats defining elaborate and specific test cases, mainly because doing nothing causes less damage. It goes without saying that I do not advocate doing nothing in preparation for the investigation of software.

As a side note, among these dozens of other ways of preparing for the investigation of software, we can name investigating the requirements, investigating comparable products, having conversations with stakeholders, domain experts, or users, investigating the current software product, investigating the history of the product, reading the manuals, and so on. An excellent list can be found in Rikard Edgren’s Little Black Book on Test Design. If you’re a professional software tester, this list is not new to you. The point is that testers need to study in order to keep up.

Yet the fact remains that the creation of test cases as the best way to prepare for the investigation of software still seems to be what is passed on to testers starting a career in software testing. This is what is propagated in the testing courses offered by the ISTQB or, in the Netherlands, by TMap. This approach should have perished long ago, for two reasons. First, and I’ve seen this happen, it falsely lures the tester into thinking that once we’re done specifying our test cases, we have exhausted and therefore finalized our tests. It strengthens the fallacy that the brain is only engaged during the test case creation ‘phase’ of the project: we’re done testing when the cases are complete, and what remains is to run them, obviously the most uninspiring part of testing.

The second thing I’ve seen happening is that test case specification draws the inquiring mind away from what it does best, namely to challenge the assumptions that are in the software and the assumptions that are made by the people involved in creating the (software) system — including ourselves. Test case creation is a particular activity that forces the train of thought down a narrowing track of confirmation of requirements or acceptance criteria, specifically at a time when we should be widening our perspectives. By its focus on the confirmation of what we know about the software, it takes the focus away from what is unknown. Test case creation stands in the way of critical thinking and skepticism. It goes against the grain of experimentation, in which we build mental models of the subject we want to test and iteratively develop our models through interaction with the subject under test.

If there is one thing I was forced to look at again during the last couple of weeks, while preparing for the testing of software changes, it was the art of reasoning and of asking meaningful questions. Though I feel confident asking questions, and though I pay a lot of attention to the reasoning that got me to asking exactly that particular set of questions, I still feel that I need to be constantly aware that there are questions I didn’t ask that could lead down entirely different avenues. It is possible to ask only those questions that strengthen your assumptions, even if you’re not consciously looking for confirmation. And, very much so, it is possible that the answers themselves are misleading.

So for the sake of better testing, take your ISTQB syllabus and — by any means other than burning — remove the part on test cases. Replace it with anything by Bacon, Descartes or Dewey.

“Criticism is the examination and test of propositions of any kind which are offered for acceptance, in order to find out whether they correspond to reality or not. The critical faculty is a product of education and training. It is a mental habit and power. It is a prime condition of human welfare that men and women should be trained in it. It is our only guarantee against delusion, deception, superstition, and misapprehension of ourselves and our earthly circumstances. Education is good just so far as it produces well-developed critical faculty. A teacher of any subject who insists on accuracy and a rational control of all processes and methods, and who holds everything open to unlimited verification and revision, is cultivating that method as a habit in the pupils. Men educated in it cannot be stampeded. They are slow to believe. They can hold things as possible or probable in all degrees, without certainty and without pain. They can wait for evidence and weigh evidence. They can resist appeals to their dearest prejudices. Education in the critical faculty is the only education of which it can be truly said that it makes good citizens.”

William Graham Sumner – Folkways: A Study of Mores, Manners, Customs and Morals

11 thoughts on “On the Value of Test Cases”

  1. While I agree pre-documenting has its drawbacks, I just wonder what other means you suggest.
    Should we send these testers to the beach (hoping it’s summertime 🙂 ) and leave it all for post-delivery exploratory testing?
    Will we “think before we act”? And if so, by which means?
    Isn’t there a middle way of spending just enough effort on planning test case names & purposes, detailing just a sample of items, which will be useful for an in-depth review of requirements, our required tools and other needs, etc.?
    And then benefit from both worlds: early feedback as well as testing based on the actual delivered artifacts, while keeping an open mind…

    Kobi Halperin (@halperinko)

    • Hi Kobi,

      Thanks for your reply! I think there are many things you can do before the delivery of the software other than strictly writing detailed test cases. But it depends on the context. If you’re testing in a new domain (like I am at the moment) you can learn as much as possible about the domain. You can read (news) articles about the domain you’re working in. If there is an existing application you can analyze different aspects of it. Right now I’m digging through database stored procedures and table triggers, reading the code comments, and trying to get a general picture of how the code is tied together. You can go about the office and see what other people are doing and how they feel about their work. You can sit in the company restaurant during lunch hours and get a feel for the culture of the organisation. You can see which tools you might use during the project and set them up. You can perhaps come up with some heuristics that you want to try, or create a checklist that you think might be useful. You can think of the quality characteristics that might apply to the project and invent tests for those. You can dig through old test scripts or manuals, or read the opinions of users about the software system. I do not think there is a limit to the things you can do in the hiatus before the delivery of software. Visit a library! All of the activities I mentioned will provide you with some piece of information that will prove useful. However, none of these are regarded as ‘official’ test preparation activities. I think that’s dumb.

      I think we have to stop thinking of testing in terms of artifacts and start thinking in terms of activities. I am not the only one who holds that opinion and I certainly was not the first to come up with it. Any material from the context-driven school will affirm that testing is an act of investigation, not a production line for documents. I also think we should not allow ourselves to be harnessed by traditional structured methods (such as ISTQB and TMap). The straitjackets imposed by the structured methods seem to have been designed to restrain exactly those skills that a software tester nowadays needs: critical thinking, skepticism, creativity, questioning. In order to do these things well we need room to move; we need to be able to switch between mindsets. Creating detailed test cases is an activity that gets the mind into a certain state (confirmation, working towards results, completing a task, covering the acceptance criteria). This state may be useful from time to time, but it certainly isn’t the only state of mind that we need during testing. That is why I think the single-minded focus on creating test cases is damaging.

      Of course there is a middle way if that suits your purpose. But I do not believe there is a lot of good, testing-wise, in doing the confirmatory, mind-locking tasks. From time to time we need those tasks in order to satisfy some project goal. Even in those situations I’d take a very good look at the goal you are trying to satisfy.
      I hope I answered your question. If not, let me know.

      Thanks again and best regards!
      Joris

      • Thanks for the detailed reply Joris (and the trigger on Twitter),
        If you notice, I haven’t once said that writing detailed test cases is the right method.
        But I do think (as you elaborated above) that testing activities start long before we get the 1st drop, and when we stress ET too much, some less experienced people get the notion that no pre-planning is needed at all.
        As some have said lately, all tests are ET at some point.
        I agree with the CDT notion of adapting to the project needs, but that hardly ever means not planning beforehand.
        Writing test ideas during all these activities you elaborated, based on the things learnt through them, is not considered ET by its formal definition (as we do not execute the code), BUT these have major benefits to the process.
        I tried to describe above in which cases I think a *sample* of detailed test cases is beneficial, so I will not repeat it. (In those cases, one who skips that portion will lose important knowledge and will come less prepared.)
        I’m just saying we should focus on what should be done and in which proportion these may be useful, rather than focus on attacking different methods which have their benefits when used in the right proportions.
        @halperinko – Kobi Halperin

    • All testing is exploratory and it can be done at any time. Explicit test cases are just a rather poor device for communicating known requirements to testers. It’s better, easier and cheaper for testers to leverage their own understanding of the requirements, both tacit and explicit.

      Further reading: againsttestcases.pdf

  2. Pingback: Five Blogs – 4 June 2015 | 5blogs

  3. In my humble opinion there are strategic ways of doing this (forgive me for putting UX at the basis of my reply, but I think it’s important and I’ll share my view; it’s only my view). UX is a great way to start off with: interviewing and questioning the future users of the product you are going to build (not the idea of what management or board members think is the solution, which would ultimately have become the spec). Creating user stories and user journeys. Building the visual for the client prior to building the actual product is a great way of eliminating irrelevant test cases. Testers then, instead of building test cases from functional specifications (in which case the testers build test cases from an interpretation of what the BA’s idea of a client’s project is), will take user stories and do boundary value analysis and perhaps equivalence partitioning type test cases on the actual stories of the users. In this event you have prevented your testers from building irrelevant test cases and have subsequently built test cases around a working solution rather than testing the ideas of a spec (which is probably not going to be the end product). Testers should not be a part of these interactions with users, as they need to always use their own objectives when testing and will likely be influenced by another user’s way of interacting if exposed to them on that level (in my opinion). However, having the test lead involved in the project from day one is advantageous, as you can get rid of many defects earlier rather than later. We all know that finding defects (testers) before they are built (by a dev) is much better than finding you wasted your time building a defect. My personal opinion is that UX is the game changer, and I am not a “UX guy”.

    @ClydeCupido

    • Hi Clyde,

      Thanks for your comment! I do not know a lot about UX, but, from your comment, it seems to me that there are UX practices that invite customer collaboration and move design problems to the front. That sure is an improvement over waterfall (which is sort of what I am in at the moment). Also, you say it leaves more room for testers to apply their own thinking processes, which is good. Do you feel that these practices require more skills from a tester? Do you see different types of thinking applied during testing? Do you feel there is more room / time to explore the software?

      Regards!
      Joris

      • Joris – that seems to be a wrong interpretation of waterfall. If you look at the V-model, which is based on dev waterfall with correlated test tasks added, the top brick is acceptance testing – which IS about what the user expects to get.
        I see some problems in Clyde’s definition above, which starts by saying we should consider user needs, but later on restrains testers from being involved in it – which leaves them basing their beliefs on hearsay…
        Instead they should be involved with real users, and review the requirements’ validity.
        @halperinko – Kobi Halperin

  4. Hi Joris

    While I may not think testers are needed at a UX level, UX certainly does have its own methods of testing, but it’s not the same as what our kind of testing would be. It’s more about working with the client on practical solutions (strategy). Many companies use UX after the spec has been signed off, and in some cases this isn’t practical. You may as well have a tester writing their own user stories to test against, which is common practice but clearly does not make for a good test case: you won’t be testing a working solution, but rather will have already introduced errors into a test case that will need to be redone numerous times through the life cycle of that project.

    I would personally rather deliver a working solution to a client than deliver a working product that isn’t the solution for the client. Basically the strategy is to eliminate errors with UX (instead of testers), such as design and user journeys, prior to coding. This saves a lot of time during a project but also takes time prior to the coding. Which leaves the question: would you rather spend more time before coding to eliminate errors early, or spend time fixing/removing code you didn’t need in the first place? I personally would rather have a dev doing upgrades or maintenance that brings in cash flow than have a dev write unnecessary code, wasting resources.

    That being said… yes, this leaves lots of room for testers to do much more in-depth testing, such as functionality. Testers should not be asking whether what the developers built is the solution, or how we can improve the process (that should be UX). Instead, the questions testers should be asking themselves (their objective) are: have the user stories been implemented by the developer? What if I add numbers in this text field? What if I insert an incorrect email address or password? What happens if the Internet connection is lost? Do I lose the data I have entered? What if the device switches off while I am busy? Etc. (Here is the best time to use the skills that ISTQB teaches you: equivalence partitioning, boundary value analysis, and so on.)
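    As an aside, the partition-and-boundary style of questioning mentioned above can be sketched in code. This is a minimal illustration, assuming a hypothetical email field with a toy validation rule; the validator and the chosen classes are examples, not any real product’s logic. The idea of equivalence partitioning is to pick one representative per class and trust it to stand for the rest of its class:

```python
# Equivalence partitioning sketch for a hypothetical email field.
# The validator below is a toy rule for illustration only:
# non-empty local part, exactly one '@', and a dotted domain.
import re

def is_valid_email(value):
    """Toy validator; not a real email RFC check."""
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value))

# One representative per equivalence class.
valid_samples = ["user@example.com", "a.b@sub.example.org"]
invalid_classes = {
    "missing @": "user.example.com",
    "missing domain dot": "user@example",
    "empty string": "",
    "whitespace inside": "us er@example.com",
    "two @ signs": "a@@example.com",
}

for sample in valid_samples:
    assert is_valid_email(sample), sample
for name, sample in invalid_classes.items():
    assert not is_valid_email(sample), name
```

    Each named invalid class is one “what if” question made concrete; adding a class is adding a question, not a step-by-step script.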

    Too often testers are busy creating working solutions (by removing defects and adding additional functionality that’s only thought of after the fact) and not doing enough test scenarios (exploratory). Testers should be asking the what-ifs more often than the how. Like you said, “explore the software”. It is still common practice and part of most software syllabi to test early, and I agree to an extent, but I would not spend my time writing a test case based on a functional spec (unless it’s maintenance on an existing system, or changes are required and no UX was done). Personally I would say: plan early so that you have the materials to execute later. The world of testing is still in its infancy, but growth and improvements are inevitable, so in my opinion don’t “play the game”; “change the game” with impact that matters.

    @ClydeCupido

  5. Hi Joris and Kobi

    User experience design (UX) most frequently defines a sequence of interactions between a user (an individual person) and a system, virtual or physical, designed primarily to meet or support user needs and goals, while also satisfying system requirements and organizational objectives.

    Typical outputs include (but are not limited to):

    Site audit (usability study of existing assets)
    Flows and navigation maps
    User stories or scenarios
    User segmentations and personas (fictitious users to act out the scenarios)
    Site maps and content inventory
    Wireframes (screen blueprints or storyboards)
    Prototypes (for interactive or in-the-mind simulation)
    Written specifications (describing the behavior or design)
    Graphic mockups (precise visuals of the expected end result)

    By definition, UX has covered the common (early) practice of a tester, such as checking validity as you state above and so forth. This does exclude a tester’s role (at this stage of a project), as this is/was a tester’s duty prior to UX. What I think should be done is the following. This is a quote from Michael Bolton’s blog: “consider misuse cases, abuse cases, obtuse cases, abstruse cases, diffuse cases, confuse cases, and loose cases; and then act on them, as real people would” (based on user journeys; not to be confused with use cases).

    In my opinion this opens a new door to test practices, but much adaptation is still needed, and many objections will still be made around this. However, firsthand I can tell you that it makes an immense difference to have a UX team on board.

    @ClydeCupido

  6. Pingback: Five People and Their Thoughts (Part 2) | One Software Tester
