It could be argued that the Dutch have a natural tendency towards methodologies, standards, rules and processes to govern complexity. Although I cannot offer scientific proof for this statement, it feels as though we believe that by putting a tremendous amount of intellectual effort into a complex problem we can abstract from it a generic framework, a mechanized pattern, to govern and master the problem’s domain. I think our Delta Works – the effort to defend our country against the onslaught of the North Sea – counts as such a framework.
In software testing, the seven hundred plus pages of TMap Next outline in very specific and careful detail the steps we should follow to tackle software reality. While I believe that this work is the result of a huge intellectual effort – the amalgamation and sublimation of an extensive body of historical knowledge in software development – I also believe that it has little to do with software testing. I would even go so far as to say that software testing, as defined in our methodologies, does not exist. There! I officially one-upped Alberto Savoia.
The disconnect is the gap between the finely tuned parts of the mechanism described in a methodology and the supreme insight that is a prerequisite for establishing the quality of a solution specified in software. The parts are the scripted test cases, the test plans, the risk matrices, the product risk assessments, the requirements analysis, the test environment intake checklists, the test case prioritization and the test effort estimations. The insight comes from a myriad of sources that are often specific scientific fields in themselves, such as mathematics, systems theory, epistemology, experimentation, analysis, design, language, critical thinking, decision making, communication and learning. While competence in some of these areas is acknowledged as a prerequisite for testing even in the TMap approach, the methodologies are quick to assume that these elements, which are at the core of dealing with complex problems, are inherently present in the software tester.
As soon as we define testing solely by the machine we use to effect it – if we set out by thinking about testing in terms of constituents driving a process – we willingly refuse to notice any of the relevant aspects of software such as complexity, communication, knowledge, change, uncertainty, design, analysis and reasoning at large. From this principal exclusion, which either supposes that the tester inherently possesses all competences to deal with the relevant aspects of software or supposes that such competences are not needed, there is no turning back.
If, for example, we use a test case to test something, then ideally we use the right test case, at the right time, under the right conditions, executed by the right person. If we define testing by specifying what a test case is but omit all the competences that are needed to adequately assess what is ‘right’, then what remains is an empty shell. And it is not safe to assume that the competences needed for that adequate assessment (the “judgment and skill” mentioned in the principles of context-driven testing) grow bountifully on trees in La La Land.
If we want to define testing, let us start by explicating the competences that are required for making an adequate assessment.