My selection for the TestNet Autumn event


Each year TestNet, the Dutch society for testers, organizes a spring event and an autumn event. Each event is a single-day conference with the morning reserved for workshops and the afternoon and evening reserved for two keynotes and parallel tracks. On Wednesday 14 October the autumn event takes place. Its theme is ‘Trends in Testing’. Most of the presentations are in Dutch, so the descriptions of the sessions may not make a lot of sense to those who do not understand Dutch, but I am going to try and describe some of them, together with my personal selection for this conference.

The conference attracts more than five hundred visitors! Part of its attraction must be that the program is varied, the venue is excellent, and the admission price is zero euros if you’re a member of TestNet. TestNet membership costs next to nothing, so basically you get two big conferences for free each year. There is no reason why you, as a professional tester, would not show up at these conferences, other than personal circumstances.

As I said, the theme of the autumn event is ‘Trends in Testing’. Let’s have a quick look at how this theme is reflected in the presentations. Besides the keynotes there are four presentations about test automation, two about performance testing, two about security, one about big data and one about the internet of things, and that’s just about it for the buzzwords. The other presentations are about the selection of test data, about roles in testing, about information ethics (nice find, Nathalie!), about operational intelligence, about handling functional specifications in a more efficient way, and one (only one?) about exploratory testing. For a conference theme that could easily have ended up facilitating buzzword bingo, it turned out pretty well.

From the track sessions I selected four that I am going to attend. The first one is ‘Subset: Less is more’ by Marten Bakker. I do not know the speaker, but I am drawn to the topic of his talk. Marten is going to talk about test data management and the creation of subsets of data. From personal experience I know that testers easily focus on functional specifications and lose sight of test data. In my current project there is almost complete ignorance of test data. This, of course, is very bad, but when one deals with large sets of data and a large variety of data, it can become quite intimidating to handle it with some form of elegance. I hope Marten has some good suggestions.

The second track session that I am going to attend is ‘Panacea – A test framework for all’ by Adonis Stanislas Sheeban. Again, I do not know the speaker. The talk is about test automation, specifically about the tools Protractor and Cucumber. I have heard of these tools but never worked with them, and I am taking this chance to get to know a little bit more about them.

The track session that I am really looking forward to is ‘Google naar fouten met operational intelligence’ (Google for errors using operational intelligence) by Albert Witteveen. Albert is going to talk about complex, linked systems and how to check if these systems are functioning correctly. In my current project, and in the one before that, I’ve been digging through databases and logs quite a lot in order to establish how well the system is functioning or to find the root cause of a defect. This can be a tedious yet difficult and time-consuming job. Albert envisions software that gathers the relevant data for us and makes this data easily accessible. I am sure such tools can save a lot of time. I tried my hand at building a log analyzer once and loved doing it. So in this talk I am also going to look for opportunities for self-development.

The last track session that I want to visit is ‘Trifolium Repens: de nieuwe testbasis voor Agile en Waterval testen’ (Trifolium repens: the new test basis for Agile and waterfall testing) by Rudi Niemeijer. I am not entirely sure what Rudi is going to talk about. I think he is going to introduce a method to reduce overhead (functional specifications) by combining the strong points of the tester and the developer. At the very least he is going to have to explain why he uses a plant (white clover) as a metaphor. I am looking forward to his explanation.

I do not know if any of the sessions that I am going to attend actually describe trends in testing. I hope they do.

Making Sense of the Legacy Database, Introduction


For the past three years I have spent a lot of time digging through relational databases. Most of the databases I looked at could be considered legacy databases. In one case it was an Oracle database, in another a Sybase database. They are legacy systems in the sense that the technology with which they were built has been around for quite a while. Heck, relational databases were conceived in the 1970s (link opens PDF), so from the perspective of computer history the concept of the relational database is pretty old.

The databases I worked with can be considered ‘legacy’ in another way. At least one of them has been in existence, and thus has been under maintenance, for 15 years. So, in terms of a software life cycle this database is relatively old. It is a monolith that has been crafted and honed by generations of software developers, so to speak.

And there is a third way in which these databases could be considered ‘legacy’. It appears that for at least a couple of these systems, someone was able to make the documentation about the inner workings of the database disappear almost entirely. And not only that; over time the people who knew the database intimately moved away from the company. So what remains is a database that is largely undocumented, with very few people who can tell you the intricacies of its existence. Clearly this is an exceptional situation… or maybe not.

Now, both knowledge and joy can be gained from studying the database. It is reasonable to assume that the database is a reflection (a model, if you will) of the world as the company sees it. In it is stored knowledge about elements of the outside world and the relationships between these elements. As a way of gaining deep knowledge about how its builders classify and structure the world, studying the database is a fine starting point.

Indeed, one of the goals I have when I study the database is to get to know how knowledge is classified. I study in order to gain knowledge about the functional aspects and the meaning of the system. Some may regard the database as a technical thing. Those sentiments may be strengthened by the fact that knowing how to write SQL queries is usually considered a ‘technical’ skill, one that ‘functional’ testers stay away from. That is a very damaging misconception, withholding a fine research instrument from the hands of the budding tester. SQL is nothing anyone should ever be afraid of.

But gaining knowledge about functional aspects is just one object of the study. Another is that mastering a database allows you to verify what information is stored and whether or not it is stored correctly. This, clearly, allows you to check (test) the functioning of the system. Mastering the database also allows you to manipulate data for the purpose of testing and to create new data sets. And finally, it allows you to conjure up answers to questions such as “Show me the top 100 books that were sold last year to customers from the Netherlands above the age of 45” much more quickly than through any GUI. So it can be used for analytical purposes.
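To make that last example concrete, here is a minimal sketch of such a query. The table and column names (book, order_line, orders, customer) are invented for illustration, and the LIMIT clause would need its Oracle or Sybase equivalent (ROWNUM or TOP).

```sql
-- Hypothetical schema: top 100 books sold last year to customers
-- from the Netherlands above the age of 45.
SELECT   b.title,
         COUNT(*) AS copies_sold
FROM     book b
JOIN     order_line ol ON ol.book_id = b.id
JOIN     orders o      ON o.id       = ol.order_id
JOIN     customer c    ON c.id       = o.customer_id
WHERE    c.country = 'NL'
AND      c.age > 45
AND      o.order_date >= DATE '2014-01-01'
AND      o.order_date <  DATE '2015-01-01'
GROUP BY b.title
ORDER BY copies_sold DESC
LIMIT    100;
```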

The question I would like to answer is how we analyze the database. What methods do we employ to get to the core of knowledge that may be spread out over hundreds of tables? There are a few methods that make use of the typical features of a relational database. Other methods are more general in nature. In other blog posts I would like to dig deeper into these methods of investigation. But for now I would like to leave you with a quick overview and a description of the first method.

Organization

When you are going to analyze a database, you will be writing SQL queries. Many of them, probably. If you place all of your queries in a single file, it will become chaos rather quickly. Bad organization will be a heavy dead weight throughout your investigation. I like to organize my queries in folders, where the folders represent broad classifications. For example, I create a folder containing all queries relating to orders, another relating to customers and another relating to financial data. Usually, this division really helps you to quickly zoom in on a specific area.

Within a folder there may be many different files containing queries. Again, I try to group queries by ‘functional’ area. So I may create a file containing queries relating to ‘cancelled orders’, ‘orders that are waiting to be processed’, or ‘orders from returning customers’. I first tried to group queries in files based on the table that was queried (such as a single file containing all queries on the ‘order’ table), but for some reason a ‘functional’ classification is easier to comprehend.

One of the integrated development tools I worked with – Oracle SQL Developer – has a plugin for Subversion, so it allows you to keep and maintain a repository and a version history of the files containing your queries and to share them easily with other team members. I learned the hard way that keeping your laboriously built-up set of queries solely on a single internal hard disk is not such a good idea.

As in other programming languages, in SQL it is possible to add comments to your queries. I found out that comments are an essential part of your investigation. In SQL, comments usually start with two hyphens (--). Queries can quickly become long and complex. If your files contain only SQL statements and nothing else, this will seriously hamper your investigation: you will need to read each query again and again in order to find out what it means. Reading a SQL query that joins many tables can be really tough if you want to find out exactly what information you are pulling from the database. Also, queries may look very similar but differ in a slight detail that you will only find when you read and conceptualize the whole thing. And last but not least, since writing queries takes time and mental effort, you do not want to write duplicate queries. The only way to prevent yourself from doing things many times over is to have an easy way to detect whether you already ‘have a query for that’.
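As a sketch of what this can look like in practice (the table names and status codes below are invented), a header comment states what the query answers and how it differs from its neighbors:

```sql
-- Orders that are waiting to be processed, oldest first.
-- Differs from the 'cancelled orders' query only in the status code:
-- status 20 = waiting, status 40 = cancelled (hypothetical codes).
SELECT c.name,
       o.order_number,
       o.created_at
FROM   orders o
JOIN   customer c ON c.id = o.customer_id
WHERE  o.status = 20
ORDER BY o.created_at;
```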

So comments are essential as a means of identification or indexing. I like to go over my files with queries from time to time, ‘grooming’ and improving the comments and the queries themselves.

So much for the organization of queries. Below are the other topics that I would like to cover as methods for the investigation of the legacy database. I will get back to these in posts to come.

  • Structuring the query
  • Testing your queries
  • What to do with the data model (if it exists)
  • Searching for distinct values
  • Paying attention to numbers
  • Scanning data & data patterns
  • Using dates
  • Complex joins
  • Emptiness (null values)
  • Querying the data model
  • Comparing the query results with the application
  • Looking for names
  • Use of junction tables
  • Use of lookup tables
  • Database tools and integrated development environments

New Nationalities in the Testing Blogosphere


Recently I’ve been adding some new weblogs to my overview of software testing weblogs. The overview now lists 267 weblogs on software testing. I do not claim to have an overview of all software testing weblogs, but I think I have quite a large number. So for me it was strange to notice that in the last few days I added four weblogs from countries that were not yet on my list. Proof that blogging about testing is truly an international sport.

These are the newly added weblogs, their authors and their countries…

Argentina

Martial Testing by Andrés Curcio and Ignacio Esmite

Vietnam

AskTester by Thanh Huynh and others

Philippines

One Software Tester by Jason B. Ogayon

Bulgaria

Mr. Slavchev by Viktor Slavchev

On the Value of Test Cases


Something is rotten in the state of Denmark.

William Shakespeare – Hamlet

Over a period of a couple of weeks, I was able to observe the usage of test cases in a software development project. The creation of test cases started at the moment the functional specifications were declared to be relatively crystallized. The cases were detailed in specific steps and entered into a test management tool, in this case HP Quality Center. They would be reviewed, executed in due time, and the results would be reported to project management.

During the weeks after the finalization of the functional specifications, not a lot of software was actually built, so the testers involved in the project saw the perfect chance to prepare for the coming release by typing their test cases. They believed that they had been given a blissful moment before the storm, in which they would strengthen their approach and do as much of the preparatory work as they could, in order to be ready when the first wave of software would hit. Unfortunately, preparation, to these testers, meant the detailed specification of test cases for software changes that had yet to be developed, against a system that was partly unknown or unexplored by them, and based on functional specifications that proved to be less than ready.

There is no need to guess what happened next. When eventually software started coming down the line, the technical implementation of the changes was not quite as expected, the functional specifications had changed, and the project priorities and scope had shifted because of new demands. It was like the testers had shored up defenses to combat an army of foot soldiers carrying spears and they were now, much to their surprise, facing cannons of the Howitzer type. Needless to say, the defenders were scattered and forced to flee.

It is easy to blame our software development methods for these situations. One might argue that this project has the characteristics of a typical waterfall project and that the waterfall model of software development invites failure. Such was argued in the 1970s (PDF, opens in new window). But instead of blaming the project we could ask ourselves why we prepare for software development the way we do. My point is that by pouring a huge amount of energy into trying to fixate our experiments in test cases (and rid them of meaning — but that’s another point), we willingly and knowingly move ourselves into the spot where we know we will be hurt the most when something unexpected happens (see Nassim Nicholas Taleb’s Black Swan for reference). Second, I think we seriously need to reassess the value of drawing up test cases as a method of preparation for the investigation of software. There are dozens of other ways to prepare for the investigation of software. For one, I think even doing nothing beats defining elaborate and specific test cases, mainly because the former approach causes less damage. It goes without saying that I do not advocate doing nothing in the preparation for the investigation of software.

As a side note, among these dozens of other ways of preparing for the investigation of software, we can name the investigation of the requirements, the investigation of comparable products, having conversations with stakeholders, having conversations with domain experts or users, the investigation of the current software product, the investigation of the history of the product, the reading of manuals, and so on. An excellent list can be found in Rikard Edgren’s Little Black Book on Test Design (PDF, opens in new window). If you’re a professional software tester, this list is not new to you. What it intends to say is that testers need to study in order to keep up.

Yet the fact remains that the creation of test cases as the best way to prepare for the investigation of software still seems to be what is passed on to testers starting a career in software testing. This is what is propagated in the testing courses offered by the ISTQB or, in the Netherlands, by TMap. This approach should have perished long ago for two reasons. On the one hand, and I’ve seen this happen, it falsely lures the tester into thinking that once we’re done specifying our test cases, we have exhausted and therefore finalized our tests. It strengthens the fallacy that the brain is only engaged during the test case creation ‘phase’ of the project. We’re done testing when the cases are complete and what remains is to run them, obviously the most uninspiring part of testing.

The second thing I’ve seen happening is that test case specification draws the inquiring mind away from what it does best, namely to challenge the assumptions that are in the software and the assumptions that are made by the people involved in creating the (software) system — including ourselves. Test case creation is a particular activity that forces the train of thought down a narrowing track of confirmation of requirements or acceptance criteria, specifically at a time when we should be widening our perspectives. By its focus on the confirmation of what we know about the software, it takes the focus away from what is unknown. Test case creation stands in the way of critical thinking and skepticism. It goes against the grain of experimentation, in which we build mental models of the subject we want to test and iteratively develop our models through interaction with the subject under test.

If there is one thing that I was forced to look at again during the last couple of weeks — during which I was preparing for the testing of software changes — it was the art of reasoning and asking meaningful questions. Though I feel confident when asking questions, and though I pay a lot of attention to the reasoning that got me to asking exactly that particular set of questions, I also still feel that I need to be constantly aware that there are questions I didn’t ask that could lead down entirely different avenues. It is possible to ask only those questions that strengthen your assumptions, even if you’re not consciously looking for confirmation. And very much so, it is possible that answers are misleading.

So for the sake of better testing, take your ISTQB syllabus and — by any means other than burning — remove the part on test cases. Replace it with anything by Bacon, Descartes or Dewey.

“Criticism is the examination and test of propositions of any kind which are offered for acceptance, in order to find out whether they correspond to reality or not. The critical faculty is a product of education and training. It is a mental habit and power. It is a prime condition of human welfare that men and women should be trained in it. It is our only guarantee against delusion, deception, superstition, and misapprehension of ourselves and our earthly circumstances. Education is good just so far as it produces well-developed critical faculty. A teacher of any subject who insists on accuracy and a rational control of all processes and methods, and who holds everything open to unlimited verification and revision, is cultivating that method as a habit in the pupils. Men educated in it cannot be stampeded. They are slow to believe. They can hold things as possible or probable in all degrees, without certainty and without pain. They can wait for evidence and weigh evidence. They can resist appeals to their dearest prejudices. Education in the critical faculty is the only education of which it can be truly said that it makes good citizens.”

William Graham Sumner – Folkways: A Study of Mores, Manners, Customs and Morals

On Performing an Autopsy


On Tuesday the 3rd of March 2015, a Quality Boost! Meetup was held by Improve Quality Services and InTraffic in Nieuwegein, the Netherlands. The evening was organized around a session by James Bach, who performed a ‘testing autopsy’ — or ‘testopsy’. Huib Schoots facilitated the questions and the discussion and Ruud Cox created a sketch note. James’ aim was to test a product for ten minutes, narrate his train of thought during that session, and afterwards discuss what happened. He chose this approach in order to be able to do a close examination of what happens during testing.

The definition of an autopsy, according to Merriam-Webster, is as follows.

a critical examination, evaluation, or assessment of someone or something past

On narration and obsession-based testing

The definition above describes pretty accurately what James was trying to do with the testing session. By making explicit the thoughts that guided him during the testing of the application, he made them available for examination. He told the audience about narration — the ability to tell a story — and how important it is for a tester to explain what he is doing and why he is doing it. There are many reasons why narration is important; for example because you want to explain to your team mates what you did. But James’ main reason for narration in this session was to be able to teach us about testing and about the particular skills that are involved in testing.

James said he does not recommend spelling out a testing session word for word. He showed us an example of a report that he created when he was challenged at the Let’s Test conference to test a volume control for a television. In the report he explains everything that is related to his testing. The report contains mental notes, records of conversations, sketches and models and revisions thereof, graphs, experiments and also pathways that eventually proved to be dead ends. The report contains a huge amount of material, but only part of that material would be useful in a practical report out to, for example, management. The full narration of a testing session has its uses, but you’d have to be pretty obsessed with testing to create such an elaborate report. Therefore James dubbed it obsession-based testing.

A very detailed report of testing can serve at least two purposes that were mentioned during the evening.

  • It can be used for teaching testing and to have a discussion about it. The Quality Boost! Meetup that I attended was an example of such usage.
  • It can be used to investigate the skills that are involved in testing. James recently received a detailed test report from Ruud Cox that matched his own obsession-based report. Ruud used the report that he created to find out more about the mental models that testers use when testing.

On survey testing

The tested application
The tool that James tested during his session is Raw. Raw is an open web app to create custom vector-based visualizations.

The subject under test was an online tool that can be used to generate — among other things — Voronoi diagrams. The Voronoi diagram is a mathematical diagram in which a plane is partitioned into regions based on the distance to points in a specific subset of the plane. Through the tool it was possible to provide a data set as input, based on which the tool would generate the diagram. James had prepared some data sets in Excel in advance, and during the ten-minute session he ran the data sets through the tool and examined the generated diagrams with the audience. This way we all got to know a little bit more about Voronoi diagrams and about how we could detect whether the diagrams that were shown were more or less correct.

The type of testing James performed during this particular session is what he himself described as survey testing: a way of learning about the product as fast as possible. He did not focus particularly on, for example, the user interface or on the handling of erroneous data. He just wanted to get to know the application. Later on in the evening, when asked what method he used to explore an application in such a survey, James mentioned the Lévy flight: a random walk that appears to resemble his own type of search. This scanning pattern is made up of long, shallow investigations and short, deep investigations, after which the long, shallow walk is resumed. It seems to be a pattern that is used by animals looking for food (though scientific studies in this direction have been contested), or even by human hunter-gatherers (PDF).

A Lévy flight

On sense making

Because his aim was to learn about the product by examining it through testing, he called his investigation an act of sense making. To make sense of a software product we need a number of skills. Sense making is something we all have to do in software testing. If the application under test does not make sense to us, it will be very hard to test it. Yet sense making is a difficult art. During the evening we discussed how sense making depends on you being able to handle your emotions about complexity. When faced with a complex problem it is not uncommon to become frustrated or to panic. As testers we have to deal with these emotions in order to progress and get closer to the problem. It may take time to get to the core of the problem and it is possible that we make mistakes. In order to make sense of a situation we have to allow for these phenomena. Other tools that help in making sense are guideword heuristics that aid us in remembering what we know.

On breaking down complexity and using a simplified data oracle

In order to make sense of an application or a system, we usually need to break down the complexity of this application. In our craft it is not very helpful to be in awe of, or afraid of complexity; we need to have ways to tackle it. James mentioned how systems thinking (and particularly Gerald Weinberg’s An Introduction to General Systems Thinking) helped him to handle complexity.

Some Voronoi diagrams
Below are some Voronoi diagrams that were generated during the testing session, using data from the following Excel sheet: voronoi data. As you can see, all diagrams except the second display regular patterns that can be checked quite easily for correctness. The titles of the diagrams correspond with the titles of the data in the Excel sheet.
1) Diagonal
2) Random
3) Cartesian Plane w/o Diagonals
4) Widening Spiral

The trick is to break complexity down into simple parts, to find the underlying simplicity of a complex system. There are many ways to find this underlying simplicity. One way is to break down the system until you have parts that you are able to understand. Another way is what James showed during his testing session. Voronoi diagrams may be considered complex subject matter by many, especially by those who do not have a background in mathematics. James tackled the problem by preparing sets of data from which it would be easy to predict what the generated diagram would look like. As James puts it:

to choose input data and configuration parameters that will result in output that is highly patterned or otherwise easy to evaluate by eye.

By simplifying the data you throw at the problem, you are better able to predict what the observed result should look like. James calls this a simplified data oracle. He used, for example, his own tool for generating pairs (I believe it was ALLPAIRS, but I am not 100% sure) to generate a simple set of combinations that would serve as input data (figure 3). He also used his knowledge of mathematics to generate data that would display a spiraling Voronoi pattern. And indeed, a spiraling pattern was displayed (see figure 4).
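James generated his combinations in Excel with a pairing tool; purely to illustrate the idea of highly patterned input, the same kind of data can be sketched in SQL (PostgreSQL-style syntax, values invented):

```sql
-- Hypothetical sketch: a full grid of (x, y) combinations. Fed to the
-- diagram generator, this should yield the regular Cartesian pattern
-- of figure 3, which is easy to evaluate by eye.
WITH coords(v) AS (
  VALUES (10), (20), (30), (40), (50)
)
SELECT x.v AS x,
       y.v AS y
FROM   coords x
CROSS JOIN coords y;
```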

On flow

A couple of loosely connected things were said about the flow of testing — or what James called the ‘tempo of testing’ — during the session. The flow of testing is impacted by the aim of testing. Testing is a deliberate art, but there is room for spontaneity to guide your testing. The balance you strike between deliberation and spontaneity (serendipity?) impacts the flow of testing. Also, a session may be interrupted, or you may want to interrupt your session at certain moments. Furthermore we talked about alternation: switching back and forth between different ideas, different parts of the application, or between the application and the requirements.

On skills

In order to generate the test data for the testing of the application above, James used the following Excel sheet: voronoi data (click to download). He briefly discussed his usage of Excel and mentioned that being skilled with Excel can be a huge advantage for software testers. It is an extremely versatile tool that can be used to generate data, analyze data, gather statistics or draw up reports. I have personally used Excel, for example, to quickly analyze differences between the structures of large database tables in different test environments. Easily learned functions can help you a lot to generate insight into larger data sets.
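That kind of comparison can also be done directly in SQL. A rough sketch, assuming both environments are reachable as schemas in one database (the information_schema views exist in most databases; Oracle exposes the same data through ALL_TAB_COLUMNS):

```sql
-- Hypothetical sketch: columns present in the 'test' environment
-- but missing from 'acceptance', to spot structural differences
-- between the two environments.
SELECT t.table_name,
       t.column_name
FROM   information_schema.columns t
WHERE  t.table_schema = 'test'
AND    NOT EXISTS (
         SELECT 1
         FROM   information_schema.columns a
         WHERE  a.table_schema = 'acceptance'
         AND    a.table_name  = t.table_name
         AND    a.column_name = t.column_name
       );
```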

James furthermore related a story about an assignment in which he was asked to evaluate the process that was used by a group of testers to investigate bugs. When he first asked the testers how they investigated bugs, he was presented with a pretty generic four-step process, such as identify > isolate > reproduce > retest or something similar. But when he investigated further and worked with the testers for some time, he learned that the testers used quite a large number of skills to judge their problems and come up with solutions. The generic process that they described when first questioned about what they did diverted attention from the core skills that they possessed but perhaps were unable to identify and name. Narration, as mentioned above, serves to identify and understand the skills that you use.

On acquiring skills

There are many ways to acquire the skills that are needed for testing. One way is to acquire a skill — for example a tool, a technique, or a programming language — by developing it on the job, while you’re doing the work. However, sometimes we need to have the knowledge beforehand and do not want to spend time on the job learning a tool or a language. For such situations James recommends creating a problem for yourself so that you can practice the tool or the technique. He showed that he is currently learning the programming language R this way. James reminded me of my own work for my website Testing References and having to learn (object-oriented) PHP, CSS, MySQL and the use of Eclipse for software development. This prepared me for learning other programming languages that I can use in projects. Also, I recently bought a Raspberry Pi and I am looking to do something with a NoSQL database (particularly MongoDB) on that machine, just for fun. During the evening James mentioned Apache Hadoop as a possible point of interest.

So much for this summary of the Quality Boost! Meetup with James Bach. I want to thank James Bach and Ruud Cox for providing me with additional material. I hope you enjoyed reading it.

Not a Conference on Test Strategy


A response to this blog post was written by Colin Cherry on his weblog. His article is entitled (In Response to DEWT5) – What Has a Test Strategy Ever Done for Us?


On page one, line two of my notes of the 5th peer conference of the Dutch Exploratory Workshop on Testing — the theme was test strategy — the following is noted:

Test (strategy) is dead!

And scribbled in the margin:

Among a conference of 24 professionals there seems to be no agreement at all on what test strategy is.

In putting together a talk for DEWT5 I struggled to find examples of me creating and handling a test strategy. In retrospect, perhaps this struggle was not as much caused by a lack of strategizing on my part, as it was caused by my inability to recognize a test strategy as such.

Still I find it utterly fascinating that in the field of study that we call ‘software testing’ — which has been in existence since (roughly) the 1960s — we are at a total loss when we try to define even the most basic terms of our craft. During the conference it turned out that there are many ways to think of a strategy. During the open season after the first talk, by the very brave Marjana Shammi, a discussion between the delegates turned into an attempt to come to a common understanding of the concept of test strategy. Luckily this attempt was nipped in the bud by DEWT5 organizers Ruud Cox and Philip Hoeben.

For the rest of the conference we decided to put aside the nagging question of what we mean when we call something a test strategy, and just take the experience reports at face value. In hindsight, I think this was a heroic decision, and it proved to be right because the conference blossomed with colourful takes on strategy. In particular, Richard Bradshaw’s persistent refusal to call his way of working — presented during his experience report — a ‘strategy’ now does not stand out so much as an act of defiance, but as an act of sensibility.

A definition of test strategy that reflects Richard’s point of view, and that was mentioned in other experience reports as well, is that a strategy is “the things (that shape what) I do”.

And yet I couldn’t help overturning the stone one more time during lunch on Sunday with Joep Schuurkes and Maaret Pyhäjärvi. Why is it that we are in a field of study that is apparently in such a mess that even seasoned professionals among themselves are unable to find agreement on definitions and terms? I proposed that the field of surgery, for example, must have very specific and exact definitions of, say, the ways to cut through human tissue. Why don’t we have such a common language?

Maaret offered as an answer that there may have been a time in our field of study when the words ‘test strategy’ meant the same thing to a relatively large number of people. At least we have books that testify to a test strategy in a confident and detailed way. The fact that the participants of the fifth conference of the Dutch Exploratory Workshop on Testing in 2015 are unable to describe ‘strategy’ in a common way perhaps reflects the development of the craft since then.

Tower of Babel, Pieter Bruegel

The Tower of Babel by Pieter Bruegel the Elder (1563)

As a personal thought I would like to add to this that we should not necessarily think of our craft as a thing that progresses (constantly). It goes through upheavals that are powerful enough to destroy it, or to change it utterly. It may turn out that DEWT5 happened in the middle of one of these upheavals; one that forced us to rethink the existence of a common language. The biblical tale of the tower of Babel suggests that without a common language, humans are unable to work together and build greater things. Perhaps the challenge of working together and sharing knowledge without having access to a common language is what context-driven testing is trying to solve by adhering to experience reports. ISTQB and ISO 29119 are trying to fix the very same problem by declaring the language and forcing it upon the testing community. This is a blunt, political move, but, like the reaction from the context-driven community, it is also an attempt to survive.

With regard to my ‘surgery’ analogy, Joep suggested that surgeons deal with physical things and as such have the possibility to offer a physical representation of the definition. Software testing deals with the intangible, and as such our definitions are, forever, abstractions. If we want to look for analogies in other domains then perhaps the field of philosophy is closer to software testing. And in philosophy the struggle with definitions is never-ending; it runs through the heart of that field. Maybe it is something we just need to accept.

On Organization by Circumstance


One of the books that influenced my thinking in the past couple of months is The Peter Principle by the Canadian teacher and author Laurence J. Peter. The book is famous for its principle, which goes as follows:

In a hierarchy every employee tends to rise to his level of incompetence given enough time and enough levels in the hierarchy.

And there is Peter’s Corollary to this principle.

In time, every post tends to be occupied by an employee who is incompetent to carry out its duties.

At first glance it appears that the book is an attempt at satire or parody. In many ‘case studies’ Peter pokes fun at the way employees move upward in an organization to their level of incompetence, and paints a somewhat melancholic and bleak picture of the employee who is caught at this level, like a rat in a cage.

Once you progress through the book and read about the symptoms and syndromes of ‘final placement’, you start to realize that this is actually happening all around you. The principle is viciously simple and Peter shows over and over again that when you try to explain why the hierarchy of the organization is the way it is, the Peter Principle is the only way to account for that.

Though the principle is a philosophic contemplation rather than a scientific fact, it has made me realize that the hierarchy of an organization is not formed by placing individuals in positions because they are the best fit for the job. I know that this, like the realization in my previous post, is stating the obvious. And yet it made me look at the organization as an organism; as an entity consisting of people who are organized along other guiding principles than you might expect or suspect.

In particular, the fact that you expect people in a certain position in the hierarchy to behave in ways, or show traits, that are characteristic of that position reduces your chances of interacting with the organization in a meaningful way. Right now I am looking at the organization as a system in which people move around rather like molecules in a gas, bouncing off other molecules. Thus, the reasons for a person to be in a certain position are circumstantial and should be analyzed through the evolution of his or her environment, rather than from the perspective of organizational intent.