Solving a Ten Thousand Piece Puzzle


On the third of March a meeting was organized by Improve Quality Services (my employer) and the Federation of Agile Testers in Utrecht, the Netherlands. The evening featured James Bach as speaker and his talk focused on the paper A Context-Driven Approach to Automation in Testing, which he wrote together with Michael Bolton. My favorite part of the evening was when James tested some functionality of an application and explained his way of working. He provided a similar demonstration a year ago when introducing the test autopsy.

The exercise

This time around the subject under test was the distribution function of the open source drawing tool Inkscape and the focus was on the use of tools to test this functionality. It must be said that Inkscape lends itself to the use of tools, because it stores all the images that are generated with it in the Scalable Vector Graphics (SVG) format, an open standard developed by the World Wide Web Consortium (W3C). This greatly increases the intrinsic testability (link opens PDF) of the product, as we will see.

The SVG format is described in XML and as such, the image is a text file that can be analyzed using different tools. It is also possible to create text files in the SVG format that can then be opened and rendered in Inkscape. As such, the possibilities for creating images by generating the XML with code are virtually limitless. Before the start of the presentation James had created a drawing containing 10,000 squares. He created this drawing using a script (I am not sure whether he mentioned which language it was written in). My initial reaction to James showing the drawing he had generated was one of astonishment. I was impressed by his idea of testing this functionality with 10,000 squares, by the drawing itself and by the fact that it was generated using a script.

Impressed by complexity

Looking back, my amazement may have been caused by my lack of experience with Inkscape and the SVG format. But it also reminded me that it is easy to be impressed by something new, especially if this new thing seems to be complex. I believe that, in testing, if you really want to impress people — for all the wrong reasons — all you need to do is present a certain subject to them as being complex. People will revere you because you are the only one who seems to understand the subject. The exercise, as James walked us through it, seemed complex to me and this is what triggered me to investigate it.

So why use ten thousand squares?

I am sure it was not James’ intention to impress us, so the question is: why would he use ten thousand squares? Actually, this question occurred to me halfway through doing the exercise myself, while tinkering with the distribution function. Distributing, for example, 3 squares is easy; it does not require a file generated with a script. Furthermore, it is easy to draw conclusions from the distribution of 3 squares: equal distribution can be ascertained visually (by looking at the picture) with a reasonable degree of certainty. So if equal distribution functions correctly with 3 squares, why would there be a problem with 10,000 squares? I am assuming that the distribution algorithm does not function differently based on the amount of input. I mean, why would it? So, taking this assumption into account, using 10,000 squares during testing only does the following things:

  1. It complicates testing, because it is no longer possible to ascertain equal distribution visually.
  2. Because of this, it forces the tester to use tools to generate the picture and to analyze the results.
  3. It complicates testing, because the loading of the large SVG file and the distribution function take a significant amount of time.
  4. It tests the performance of the distribution function in Inkscape.

Now testing the performance is not something I want to do as part of this test. But it seems that working with 10,000 squares does add something meaningful to the exercise. A distributed image generated from 10,000 squares does not allow for a quick visual check and therefore simulates a degree of complexity that requires ingenuity and the use of tools if we want to check the functioning of distribution. Working with large data sets and having to distill meaning from them is, I believe, a problem that testers often face. So, as an exercise, it is interesting to see how this can be handled.

A deep dive into the matter

Some of the tools I use

  • Inkscape (for viewing and manipulating images)
  • Python (for writing scripts)
  • Kate (for editing scripts and viewing text files)
  • KSnapshot (for creating screen shots)
  • Google (for looking up examples & info)
  • R (for statistical analysis)

In an attempt to reproduce James’ exercise, I create a script to generate this drawing myself. In order to do so, I need to find out a little bit about the SVG standard. Then I create an Inkscape drawing containing one square, in order to find out the XML format of a square. Now I have an SVG file that I can manipulate, so I have enough to start scripting. I install Python on my old Lubuntu netbook, which is easy to do. I had never done much programming in Python before. I could have written the script in PHP or Java, the two programming languages about which I know a fair amount, but Python seems fairly lightweight and suitable for the job. It can be run from the command line without compilation, which contributes to its ease of use.

So I write a Python script that creates an SVG file with 10,000 squares in it. Part of the script is displayed below. I look up most of the Python code by Googling it and copy-pasting from examples, so the code is not written well, but it works. I can run the script from the command line and it generates the file in the blink of an eye. The file is about 2.4 MB, which is fairly large for a text file, and when I open it in Inkscape, the program becomes unresponsive for a couple of seconds. Apparently the program has some difficulty rendering the drawing, which is understandable, given that the file is large and the netbook on which I run the application is limited in both processing power and internal memory (2 GB). Yet the file opens without errors and shows a nice grid of 10,000 squares.

Python script for creating the squares

# 'begin' and 'end' are defined in the part of the script that is not shown here;
# they hold the SVG header and footer markup, e.g. something like
# begin = '<svg xmlns="http://www.w3.org/2000/svg">' and end = '</svg>'
with open('many_squares.svg', 'w') as f:
    f.write(begin)

    x = 0
    y = 0
    offset = 12
    number_of_squares = 10000

    while number_of_squares > 0:
        # a 10 by 10 pixel square at the current (x, y) position
        square = '''<rect
        style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
        id="rect3336"
        width="10"
        height="10"
        x="%d"
        y="%d" />''' % (x, y)
        # move to the next grid position, 12 pixels apart; start a new row once x passes 800
        if x + offset > 800:
            y = y + offset
            x = 0
        else:
            x = x + offset
        f.write(square)
        number_of_squares = number_of_squares - 1
    f.write(end)

Which results in the following picture.

The regular grid of 10,000 squares created with the Python script

A close up of the grid created with the Python script
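By the way, a quick way to convince myself that the generated file really does contain 10,000 squares is to count the rect elements in it. A small sketch like the one below (assuming the file name used in the script above) would do the trick.

# count the number of <rect elements in the generated SVG file
with open('many_squares.svg') as f:
    print(f.read().count('<rect'))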

 

The Inkscape distribution functions

I now have a grid of 10,000 squares with which I am trying to reproduce James’ exercise. The thing I run into is that Inkscape has a number of distribution options. I am not sure which distribution James applied, so I try a couple. None of them, however, produces as a final result the image that James showed during his presentation – as far as I can remember it was an oval shape with a higher density of objects near the edges. Initially it seems strange that I am unable to reproduce this, but through tinkering with the distribution function I conclude that my inability to reproduce James’ distributed image probably comes down to the input. The grid that I create with the script contains identical squares of 10 by 10 pixels, evenly spaced (12 pixels apart) along the x and the y axes. It may differ in many aspects (for example size, spacing and placement of the objects) from the input that James created.

Developing an expected result

I apply the Inkscape distribution functionality (distribute centers horizontally and vertically) to my drawing containing the 10,000 squares and the result is shown below. The resulting picture looks somewhat chaotic to me. I cannot identify a pattern, and even if I could, I would not be sure whether that pattern is what I should be seeing. There seem to be some lines running through the picture, which seems odd. In order to check the distribution properly, I need to develop an expected result, using oracles, against which I can check whether the distribution is correct.

The entire distributed drawing

A close up of the distributed drawing (it kind of looks like art)

 

I do several things to arrive at a description of what distribution means. First I consult the Inkscape manual with regard to the distribution functions that I used. The description is as follows.

Distribute centers equidistanly horizontally or vertically

Apart from the spelling mistake in the manual, the word that I want to investigate is ‘equidistant’. It means — according to Merriam-Webster —

of equal distance : located at the same distance

Distance is a complex concept. The Wikipedia page on distance is a nice starting point for the subject. I simplify the concept of distance to suit my exercise by assuming a couple of things. My definition is as follows: distance is the space between two points, expressed as the physical length of the shortest possible path through space between these points that could be taken if there were no obstacles. In short, the distance is the length of the straight line between two points.

There are other things I need to consider. The space in which the drawing is made is two-dimensional. This might seem obvious, but it is important to realize that every single point in the picture can be identified with a two-dimensional Cartesian coordinate system. In short, every point has an x and a y coordinate (which we already saw when generating the SVG file) and this realization greatly helps me when I try to analyze the picture. Another question I need to answer is which two points I use. This is tricky, because in my exercise I used the centers of the squares as the reference points for distribution. But since I am dealing with squares, width and height are equal, and since all the squares in my drawing have the same width and height, the center of each square is simply its top left corner shifted by a constant amount. This simplifies my problem: I can use the x and y coordinates of the top left corner of each square (which can be found in the SVG file) for further analysis. There is no need for me to calculate the center of each object and do my analysis on those coordinates.

And lastly I need to clarify what distribution means. It turns out that there are at least two ways to distribute things. I came across an excellent example in a Stack Exchange question. In this question the distinction is made between spreading out evenly and spacing evenly. To spread out evenly means that the centers of all objects are distributed evenly across the space. To space evenly means that the distance between the objects is the same for all objects. The picture below clarifies this.

Types of distribution (source: Stack Exchange)

In my special case — I am working exclusively with squares that are all the same size — to spread out evenly means to space evenly (if all centers are spaced a distance d apart and every square has the same width w, then every gap between squares is d - w). So the distinction, while relevant when talking about distribution, matters less to me. Aside from the investigation described above, I spoke with several co-workers about this exercise and they gave me some useful feedback on how I should regard distribution.

To make a long story short, my expected result is as follows.

Given that all the objects in the drawing are squares of equal size, if the centers of all the squares are distributed equally along the x axis, then I can analyze the x coordinates of the top left corners of all squares. If the x coordinates are sorted in ascending order, I should find that the difference between one x coordinate and the x coordinate immediately following it is the same for all x coordinates. The same should go for the y coordinates (vertical distribution).

This is what I’m looking for in the drawing with the distributed squares.

Some experiments in R

In order to do some analysis, I need the x and y coordinates of the top left corner of all the squares in the drawing. It turns out to be fairly easy to distill these values from the SVG file using Python. Again, I create a Python script by learning from examples found on the internet. The script, as displayed below, extracts from the SVG file the x and y coordinates of the top left corner of each square and then writes these coordinates to a comma-separated values (CSV) file. The CSV file is very suitable as input for R.

Python script for generating the csv file containing the coordinates

# read the distributed SVG file and extract the x and y coordinates
# of the top left corner of each square
svg = open("many_squares_distr.svg", "r")

coordinates = []
for line in svg:
    if line.find(' x=') != -1 or line.find(' y=') != -1:  # line containing an x or y coordinate found
        # the attribute value sits between the first pair of double quotes
        positions = []
        for pos, char in enumerate(line):
            if char == '"':
                positions.append(pos)
        if line.find('x=') != -1:
            x = line[positions[0]+1:positions[1]]
        if line.find('y=') != -1:
            y = line[positions[0]+1:positions[1]]
            # a y line completes a coordinate pair, so store it
            new_coordinate = [x, y]
            coordinates.append(new_coordinate)
svg.close()

# write the coordinate pairs to a csv file, to be used as input for R
with open('coordinates.csv', 'w') as f:
    f.write('X,Y\n')
    for row in coordinates:
        line = '%s,%s\n' % (row[0], row[1])
        f.write(line)

Now we come to the part that is, for me, the toughest part of the exercise and consequently the one on which I spent the most time. The language that I use for the analysis of the data is R, the language that James also used in his exercise. The language is entirely new to me. Furthermore, R is a language for statistical computation and I am not a hero at statistics. What I know about statistics dates back to a university course that I took some twenty years ago. So you’ll have to bear with the simplicity of my attempts.

It is not difficult to load the csv file into R. It can be done using this command.

coor <- read.csv(file="coordinates.csv",head=TRUE,sep=",")

A graph of this dataset can be created (plotted).

plot(coor)

Resulting in the picture below.

plotted x and y
After that, I create a new dataset that contains only the x values, using the command below.

xco = coor[,1]

And then I sort the x values ascending.

xcor <- sort(xco)

Then I use the following command

plot(xcor)

to create a graph of the result as displayed below.

plotted sorted x coordinates

The final result is an almost perfectly straight line because (as I expected) the x value of each data point is increased by the same amount, resulting in a linear function. This satisfies my needs, so this is where I stop testing. I could have created, using a Python script, a dataset containing all the differences between the consecutive x coordinates and I could have checked the distribution of these differences with R. I leave this for another time.
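Should I pick this up later, that check could look something like the sketch below: it reads the coordinates.csv file generated earlier, sorts the x values and verifies that the differences between consecutive values are all (nearly) the same. In R, the same differences could be obtained with diff(xcor).

import csv

# read the x coordinates from the csv file generated earlier
with open('coordinates.csv') as f:
    xs = sorted(float(row['X']) for row in csv.DictReader(f))

# differences between consecutive x coordinates
diffs = [b - a for a, b in zip(xs, xs[1:])]

# if the centers are distributed equally along the x axis,
# all of these differences should be (nearly) identical
print('smallest difference:', min(diffs))
print('largest difference :', max(diffs))
if max(diffs) - min(diffs) < 0.001:  # tolerance chosen arbitrarily
    print('the x coordinates appear to be equally spaced')
else:
    print('the x coordinates are not equally spaced')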

Afterthoughts

One of the questions you might ask is whether I really tested the distribution functionality. My answer would be a downright ‘no’. I used the distribution functionality in an exercise, but the goal of the exercise was not to test the functionality. The goal was to see what tools can do in a complex situation. If I had really investigated the distribution functionality, I would have created a coverage outline and I would certainly have tried different kinds of input. I would also have had to take a more in-depth look at the concept of distribution.

One of the results of this exercise is that I know a little bit more about scripting, about the language R and about vector images. I also learned that the skills related to software testing are manifold and that they are not easy to describe. I particularly liked describing how I arrived at my definition of the expected result, which meant investigating different sources and drawing conclusions from that investigation. I feel that a software tester should be able to do such an investigation and to build the evidence of testing on it. I also learned again that complexity is a many-headed monster that often roams freely in software development. Testers need to master the tools that can tame that beast.

Some exploration took place in the form of ‘tinkering’ with the distribution function of Inkscape. This helped me build a mental model of distribution. Furthermore, I toyed with R on a ‘trial and error’ basis, in order to find out how it works.

The Last Retrospective


The Bustle in a House

The Morning after Death

Is solemnest of industries

Enacted opon Earth –

 

The Sweeping up the Heart

And putting Love away

We shall not want to use again

Until Eternity.

Emily Dickinson

 

Thirty odd sticky notes hang on the whiteboard, carefully categorized this time. Written upon in neat, thick pencil strokes, they await inspection, analysis.

The last retrospective didn’t happen. It was canceled at the last minute by the person who was assigned to be the scrum master, because fixing incidents in production had priority over the retrospective session. The stickies have been on that whiteboard for two weeks now, unattended. I like to believe they serve as a monument, a tombstone for our foray into an Agile way of working. One of these days, though, I am going to have to clean up the board.

It would have been the last retrospective anyway, because we are returning from an Agile way of working to waterfall software development – which is, euphemistically, being sold as Kanban. We tried Agile for some ten months but it never took hold. On the ‘what went well’ side of the retrospective board there are stickies stating ‘At least we try to keep on smiling’ and ‘Somehow we manage to deliver some work’. On the ‘what could have been better’ side there is an exhaustive, almost masochistic description of the swamp that we have been ploughing through for the last couple of months. The stickies state that we do not work as a team, there is no focus, there are no priorities, it is not clear who does what, there is no clear goal, we are continuously interrupted and there is a lack of understanding of quality.

I feel it is okay to move away from Agile. In fact, we (the team) discussed this a couple of weeks ago as a viable option. We decided it would be beneficial to let go of the Agile expectations. I think the main reason for not succeeding at Agile was lack of commitment from management at every bloody step of the way. While the company envisioned charging into the future the Agile way, middle management basically ignored that message. There is an abundance of other reasons as well. I like to think that I tried to make Agile work but that the opposition was just overwhelming. In retrospect, I think it was.

Why we do experience reports


Not so long ago a workshop was held at Improve Quality Services. The theme of the workshop was ‘Test strategy’ and the participants were asked to present a testing strategy that they had recently used in a work situation. Mind you, participants were asked to present, as in to show and explain, their test strategy; not to give an experience report on it. Several strategies were presented and the differences were notable. Very broadly, the following were discussed.

  • A description of the organization (people and processes) around the software system.
  • A description of the way testing was embedded in the development lifecycle.
  • A description of the testing principles that were shared by the testers.
  • A description of the test approach, aimed at communicating with integrating parties.
  • A description of the test approach, aimed at communicating with management.
  • A description of the approach and the actual testing of a screen.
  • A description of the approach and the actual testing of a database trigger.

I am not in a position to say whether any of these testing strategies were right or wrong and I am certain my judgement is irrelevant. I furthermore doubt that any of these strategies can be judged without further investigation, and I am sure that each of these contains elements that are great and elements that can be improved upon. It is not my aim to comment on this. However, as the evening went on, I felt a growing frustration about a number of things.

Considering the open-mindedness and critical thinking abilities of the people in the room (all of them had, for example, taken the Rapid Software Testing course), there was a remarkably low number of comments on the strategies as presented. Aside from the occasional remark, the presentations were largely taken at face value. Now, the fact that there were not a lot of remarks can still imply many things about the evening itself, the organization of the meeting, the mental condition of the people present and so on. Still, I like to think that the setup was conducive to feedback and learning, so I’d like to focus on the presentations themselves to see why they didn’t invite comment, or at the very least, why I did not feel inclined to comment.

My first issue during the evening was that most of the presentations did not discuss what happened when the rubber actually hit the road. If the proof of the pudding is in the eating, we hardly discussed how the strategy went down: whether it cracked under the first strain or whether it was able to stay the course. If there is a way to evaluate a testing strategy (to put it to the test), it is to carefully note what happens to it when it is applied. Evaluation was the part that was largely missing from our presentations.

You see this kind of thing quite a lot at (testing) conferences. The speaker presents an approach, a framework, a general theory or a general solution without getting into how this solution came into being or how it developed when it was actually applied in practice. The mere presentation of a theory does not lend itself to criticism. My reaction to this form of presentation is to shrug and move on with my business. I am unable to criticize it from my specific context, because usually the presented approach is not very specific about context, so I cannot check whether my context applies. The only other form of criticism available to me is to reason about the approach in an abstract way, either by checking the internal logic of the theory, or by comparing it to other theories in the same domain to see if it is consistent. This is not ideal, and doing it within the 40 minutes of a conference talk, without access to reference material, is a tall order.

I had this feeling when a couple of months ago, at my work, an ATDD (Acceptance Test Driven Development) framework was presented as the new Agile way of working. The thing I can remember from that presentation is that there was a single image with bits and pieces and connections explaining the framework. The rest is a blur. I never heard anything of it since.

So the question is: what do we need to do to open up our theories to evaluation and investigation?

My second issue with the presented strategies was initially about distance. Quite a number of the strategies that were presented seemed to be distanced from the subject under test (SUT). By the subject under test I mean the actual software that is to be tested. And by distance, I mean that there were strategies that did not primarily discuss the subject under test. I was absolutely puzzled to see that some of the presented strategies did not discuss the execution of that strategy. As I stated above, the proof of the strategy should be in its execution. Discussing strategy without execution just didn’t make sense to me. But looking back at this experience, I think I wrestled with the purpose of offering up the strategies to (more or less) public scrutiny. At least one or two presentations discussed not so much the test strategy itself as the communication of the strategy with management, integrating parties or the test department. This focuses on an entirely different aspect of testing, namely communicating about the test strategy in order to reach common ground or to align people along a common goal. The purpose of such a presentation is not to scrutinize the test strategy, but to invite an examination of the way it was communicated. This purpose should be clear from the start. Otherwise the ensuing discussion is partly consumed by determining that purpose (which may be a waste of time) or, if there is no quest for the purpose, the discussion follows a winding path that has a good chance of leading neither the audience nor the presenter anywhere at all.

The third thing that bothered me was that the presented strategies rarely, if ever, discussed people. They discussed roles, but not people. That is the reason why, in my very short presentation, I decided (on the spur of the moment) to pay hardly any attention to the actual strategy that I selected and to focus on the characteristics of the individuals in my team. There are two aphorisms that I had in mind while doing this: “No matter what the problem is, it is always a people problem” and “Culture eats strategy for breakfast”. It appears to me that no matter how excellent your plan is, the result of its execution largely depends on the people that are aligned to play a part in this strategy. Wherever the strategy leaves room for personal interpretation, there will be interpretation. And basically, no strategy will ever be executed in the same way twice, not even with the same group of people, because the decisions people make will differ from time to time, influenced by many factors. If this is true, and I think it is, then I wonder why the human factor is not present in a more explicit and defined way in our testing strategies and in our reports in general. We seem prejudiced (primed?) to talk about processes and artifacts, and to fear the description of flesh and bone. This is a general remark on the evaluation of the context. If a report displays people as ‘puppet A’ and ‘puppet B’ then this is a sure sign of trouble. I know this from experience, because our famed Dutch testing approach TMap Next exclusively discusses cardboard figures as a replacement for humans.

In conclusion: for an experience report to be open to evaluation and investigation, and for a meaningful discussion to ensue, it should contain at least these three things.

  • The purpose (research question) of the report should be clear,
  • the context of the report should be described (including the hominids!) and
  • the results of applying the approach should be presented.

Hopefully I have been able to clarify these demands by sharing my feelings above. The discussion loops back to the use of experience reports within the peer conferences organized by the Dutch Exploratory Workshop on Testing. The way we look at an experience report is evolving, and the road towards a better understanding of what we do (as a workshop) and how we do it has been a very meaningful one.

Agile Testing Days 2015 – At the Lab


This year I attended the Agile Testing Days for the second time and I can say for sure that I will be attending in 2016. It is worth returning to this conference, because it has a delightful mix of topics across the Agile spectrum. The program offers technical talks and workshops that focus on coding, debugging, frameworks and test automation. It also features talks and workshops on Agile leadership and collaboration, with a focus on human aspects, motivation and learning. And it offers sessions on many aspects of testing, such as exploratory testing, note-taking, modeling and the relation of testing to Agile methodologies. There is more than enough to go round for the modern tester.

CodeBug

From this year’s conference I took away some particular things. I spent quite some time in the Anything Build Party & TestLab, expertly hosted by James Lyndsay and Bart Knaack. I played around with CodeBug, a programmable and wearable device with 25 LEDs (see picture). The device can be programmed using a Blockly-based programming interface, which makes it easy to build controls for the LEDs. I spent a while thinking about what I would like to create and then got stuck halfway through coding. At that point I paired up with another participant to try to improve the program that I had written. We seemed to run into the limits of the programming interface, but it was fun doing this together and we were both energized by the experience.

I also finally took a serious look at some of the black box puzzles created by James Lyndsay. At the Belgium Testing Days last year I struggled with a black box puzzle that had been turned into a physical puzzle by Altom. Now I wanted to look at the other puzzles. With some hints from James I found a satisfying description of the functional behavior of puzzle 8. After that success I picked up puzzle 7, which I was unable to fully figure out, partly because the conference was drawing to an end.

I learned the following things from the puzzles.

  • Taking notes helps. Using the notes as a guideline, it is easier to explain your testing. At least, that was how I was able to explain my testing when people who stopped by asked what I had been testing so far. Taking notes in a notebook with a pen or pencil is quicker, much more flexible and much more intuitive than taking notes in the form of a mind map. I tried both the mind map and the pencil. Creating a mind map (even though it appears to be a very lightweight tool) seemed to interrupt the flow of thoughts and the flow of testing exactly when I needed that flow. I chucked the mind map and grabbed the pencil. I always thought of a mind map as an ‘informal’ tool that enables creativity. I do so now to a lesser degree.
  • Having a mental model helps you devise theories and experiments. The mental model is the idea you have about how the subject under test might function. Whether the idea is right or wrong doesn’t matter that much. I got pretty far in explaining the behavior of puzzle 8 using a wrong (but very useful!) mental model. The beauty is that once you have an idea about how an application might function, you can test that theory. Often this provides you with more information about its behavior and about the validity of your model. Having gained that information, I was able to move to a more advanced model. I think I moved through several mental models during puzzles 7 and 8.
  • Frequent interaction with the subject under test is important. With regard to the puzzles I tried, I found it not very useful to sit back and philosophize a whole lot about the exercise. Reflection is needed in order to move ahead through the theories about the functioning of the device, but you need data to reflect on. Interacting with the subject under test makes you notice certain behavior and it generates test data. This is the data that you need in order to build a theory. I observed people playing the dice game during the Agile Games Market on Wednesday evening and one participant in this game took long pauses (of up to a minute!) between ‘throws’. I was almost yelling ‘Come on, test, test, test!!’ to urge him to gather data faster. The brain needs something to work with.
  • And yet, taking a break helps. This has been mentioned time and again in testing literature and blog posts. It is true that when one detaches himself from the situation, usually the mind starts offering clues as to how the application functions. I took a cigarette break and it helped me. James told me a story about a man who tried one of his puzzles at a conference and got stuck. During the ensuing keynote he had an epiphany, went back to the lab and presented, in one go, the answer to the puzzle.
  • Use tools. Since puzzles 7 & 8 are about timing, it is nice to have a timer and use it. I took my smartphone and used the timer to at least get an idea of the time between the flashes of the red light. It helped me to be a bit more confident about the reproducibility of certain situations and it helped me to falsify (or confirm) my theories. Yet at my current assignment quite a number of testers seem to be very wary of using tools, whether it be a SQL query tool, a tool to transfer loan data from one database to another, to make notes, to call a SOAP interface, to view logs, to drive a GUI, to debug the environment, etc. I think it is inexperience with such tools that is holding them back.

Since I didn’t ‘solve’ puzzle 7, James gave me a final hint and he urged me to look at it at home. I will do this, but in the meantime I want to thank James for creating these puzzles, and James and Bart for running the testlab at so many conferences, making it possible for testers to reflect on their testing and to learn from it.

My selection for the TestNet Autumn event


Each year TestNet, the Dutch society for testers, organizes a spring event and an autumn event. The event is a single-day conference with the morning reserved for workshops and the afternoon and evening reserved for two keynotes and parallel tracks. On Wednesday 14 October the autumn event takes place. Its theme is ‘Trends in Testing’. Most of the presentations are in Dutch and therefore the descriptions of the sessions may not make a lot of sense to those who do not understand Dutch, but I am going to try to describe some of them, together with my personal selection for this conference.

The conference attracts more than five hundred visitors! Part of its attraction must be that the program is varied, the venue is excellent, and the admission price is zero euros if you’re a member of TestNet. TestNet membership costs next to nothing, so basically you get two big conferences for free each year. There is no reason why you, as a professional tester, would not show up at these conferences, other than personal circumstances.

As I said, the theme of the autumn event is ‘Trends in Testing’. Let’s have a quick look at how this theme is reflected in the presentations. Besides the keynotes there are four presentations about test automation, two about performance testing, two about security, one about big data and one about the internet of things, and that’s just about it for the buzzwords. The other presentations are about the selection of test data, about roles in testing, about information ethics (nice find, Nathalie!), about operational intelligence, about handling functional specifications in a more efficient way, and one (only one?) about exploratory testing. For a conference theme that could easily end up facilitating buzzword bingo, it turned out pretty well.

From the four track sessions I selected the ones that I am going to attend. The first one is ‘Subset: Less is more’ by Marten Bakker. I do not know the speaker, but I am drawn to the topic of his talk. Marten is going to talk about test data management and the creation of subsets of data. From personal experience I know that testers easily focus on functional specifications and easily lose sight of test data. In my current project there is almost complete ignorance of test data. This, of course, is very bad, but when one deals with large sets of data and a large variety of data, it can become quite intimidating to handle this with some form of elegance. I hope Marten has some good suggestions.

The second track session that I am going to attend is ‘Panacea – A test framework for all’ by Adonis Stanislas Sheeban. Again, I do not know the speaker. The talk is about test automation, specifically about the tools Protractor and Cucumber. I heard of these tools, but never worked with them, and I am taking this chance to get to know a little bit more about them.

The track session that I am really looking forward to is ‘Google naar fouten met operational intelligence’ (Google for errors using operational intelligence) by Albert Witteveen. Albert is going to talk about complex, linked systems and how to check whether these systems are functioning correctly. In my current project, and in the one before that, I have been digging through databases and logs quite a lot, in order to establish how well the system is functioning or to find the root cause of a defect. This can be a tedious and yet very difficult, time-consuming job. Albert envisions software that does the gathering of relevant data for us and makes this data easily accessible. I am sure such tools can save a lot of time. I tried my hand at building a log analyzer once and loved doing it. So in this talk I am also going to look for opportunities for self-development.

The last track session that I want to visit is ‘Trifolium Repens: de nieuwe testbasis voor Agile en Waterval testen’ (Trifolium Repens: the new test basis for Agile and waterfall testing) by Rudi Niemeijer. I am not entirely sure what Rudi is going to talk about. I think he is going to introduce a method to reduce overhead (functional specifications) by combining the strong points of the tester and the developer. At the very least he is going to have to explain why he uses a plant (white clover) as a metaphor. I am looking forward to his explanation.

I do not know if any of the sessions that I am going to attend actually describe trends in testing. I hope they do.

Making Sense of the Legacy Database, Introduction


For the past three years I have spent a lot of time digging through relational databases. Most of the databases that I looked at could be considered legacy databases. In one case it was an Oracle database, in another a Sybase database. They are legacy systems in the sense that the technology with which they were built has been around for quite a while. Heck, relational databases were conceived in the 1970s (link opens PDF), so from the perspective of computer history the concept of the relational database is pretty old.

The databases I worked with can be considered ‘legacy’ in another way. At least one of them has been in existence, and thus has been under maintenance, for 15 years. So, in terms of a software life cycle this database is relatively old. It is a monolith that has been crafted and honed by generations of software developers, so to speak.

And there is a third way in which these databases could be considered ‘legacy’. It appears that for at least a couple of these systems, someone was able to make the documentation about the inner workings of the database disappear almost entirely. And not only that; over time the people who knew the database intimately moved away from the company. So what remains is a database that is largely undocumented, with very few people who can tell you the intricacies of its existence. Clearly this is an exceptional situation… or maybe not.

Now, both knowledge and joy can be gained from studying the database. It is reasonable to assume that the database is a reflection (a model, if you will) of the world as the company sees it. In it is stored knowledge about elements of the outside world and the relationships between these elements. As a way of gaining deep knowledge about how its builders classify and structure the world, studying the database is a fine starting point.

Indeed, one of the goals I have when I study the database is to get to know how knowledge is classified. I study in order to gain knowledge about the functional aspects and the meaning of the system. Some may regard the database as a technical thing. That sentiment may be strengthened by the fact that knowing how to write SQL queries is usually considered a ‘technical’ skill, one that ‘functional’ testers stay away from. That is a very damaging misconception, withholding a fine research instrument from the hands of the budding tester. SQL is nothing anyone should ever be afraid of.

But gaining knowledge about functional aspects is just one object of the study. Another is that mastering a database allows you to verify what information is stored and whether it is stored correctly or not. This, clearly, allows you to check (test) the functioning of the system. Mastering the database also allows you to manipulate data for the purpose of testing and to create new data sets. And finally, it allows you to conjure up answers to questions such as “Show me the top 100 books that were sold last year to customers from the Netherlands above the age of 45” much more quickly than through any GUI. So it can be used for analytical purposes.

The question I would like to answer is how we analyze the database. What methods do we employ to get to the core of knowledge that may be spread out over hundreds of tables? There are a few methods that make use of the typical features of a relational database. Other methods are more general in nature. In other blog posts I would like to dig deeper into these methods of investigation. But for now I would like to leave you with a quick overview and a description of the first method.

Organization

When you are going to analyze a database, you will be writing SQL queries. Many of them, probably. If you place all of your queries in a single file, it will become chaos rather quickly. Bad organization will be a really heavy dead weight during the whole of your investigation. I like to organize my queries in folders, in which case these folders represent broad classifications. For example, I create a folder containing all queries relating to orders, another one relating to customers and another one relating to financial data. Usually, this division really helps you to quickly zoom in on a specific area.

Within a folder there may be many different files containing queries. Again I try to group queries by ‘functional’ area. So I may create a file containing queries relating to ‘cancelled orders’ or ‘orders that are waiting to be processed’, or ‘orders from returning customers’. I first tried to group queries in files based on the table that was queried (such as a single file containing all queries on the ‘order’ table) but for some reason a ‘functional’ classification is easier to comprehend.

One of the integrated development tools I worked with – Oracle SQL Developer – has a plugin for Subversion, so it allows you to keep and maintain a repository and a version history of the files containing your queries and to share them easily with other team members. I learned the hard way that keeping your laboriously built-up set of queries solely on a single internal hard disk is not such a good idea.

Like in other programming languages, in SQL it is possible to add comments to your queries. I found out that comments are an essential part of your investigation. In SQL, comments are usually written with two hyphens (--) at the beginning of the line. Queries can quickly become long and complex. If the files that contain your queries contain only SQL statements and nothing else, this will seriously hamper your investigation. You will need to read each query again and again in order to find out what it means. Reading a SQL query that joins many tables can be really tough if you want to find out exactly what information you are pulling from the database. Also, queries may look very similar but may differ in a slight detail that you will only find when you read and conceptualize the whole thing. And last but not least, since writing queries takes time and mental effort, you do not want to write duplicate queries. The only way to prevent yourself from doing things many times over is to have an easy way to detect whether you already ‘have a query for that’.

So comments are essential as a means of identification or indexing. I like to go over my files with queries from time to time, ‘grooming’ and improving the comments and the queries themselves.

So much for the organization of queries. Below are the other topics that I would like to cover as methods for the investigation of the legacy database. I will get back to these in posts to come.

  • Structuring the query
  • Testing your queries
  • What to do with the data model (if it exists)
  • Searching for distinct values
  • Paying attention to numbers
  • Scanning data & data patterns
  • Using dates
  • Complex joins
  • Emptiness (null values)
  • Querying the data model
  • Comparing the query results with the application
  • Looking for names
  • Use of junction tables
  • Use of lookup tables
  • Database tools and integrated development environments

New Nationalities in the Testing Blogosphere


Recently I’ve been adding some new weblogs to my overview of software testing weblogs. The overview now lists 267 weblogs on software testing. I do not claim to have an overview of all software testing weblogs, but I think I have quite a large number. So it was strange for me to notice that in the last few days I added four weblogs from countries that were not yet on my list. Proof that blogging about testing is truly an international sport.

These are the newly added weblogs, their authors and their countries…

Argentina

Martial Testing by Andrés Curcio and Ignacio Esmite

Vietnam

AskTester by Thanh Huynh and others

Philippines

One Software Tester by Jason B. Ogayon

Bulgaria

Mr. Slavchev by Viktor Slavchev