What next?

Our last meeting (in October) was enjoyable, but now we need to choose another topic. I suggest we meet up to discuss it in January – we’ll have the Christmas break to do some reading. So add your suggestions in the comments – 3 books in order of preference, as usual.

Test Driven Development

There were six of us at the Lamb and Flag to discuss test driven development (TDD). I was joined by Miquel, Tom, David, Inigo and new boy Julian. The books chosen to kick off the discussion were two different books called “Test Driven Development”, the first by Kent Beck and the second by Dave Astels.

David revealed that he’d actually read a third book, Test Driven by Lasse Koskela. He said it was not a bad book but he wasn’t convinced by the part which talked about driving development through acceptance tests. The section on test driving the database was fine for simple applications, but in David’s experience it is not always simple to swap the live database for an in-memory database such as HSQL. Often the application will use stored procedures or other specific features of the target database. Inigo pointed out that if you have a requirement to support both Oracle and SQL Server, say, then using a third database would actually not be such a big step.
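To illustrate the kind of design that makes the swap possible at all, here is a minimal sketch of hiding database access behind an interface so that tests can substitute an in-memory implementation. The names (`OrderStore` and so on) are hypothetical, not from the book, and the sketch deliberately sidesteps David’s point: it only works when the code avoids vendor-specific features such as stored procedures.

```java
// Hypothetical sketch: isolate persistence behind an interface so tests can
// swap in an in-memory implementation. In production this would be backed by
// Oracle or SQL Server via JDBC; in tests a plain map (or an in-memory engine
// such as HSQL) stands in for it.
interface OrderStore {
    void save(String orderId);
    boolean exists(String orderId);
}

// Trivial in-memory implementation for tests.
class InMemoryOrderStore implements OrderStore {
    private final java.util.Set<String> ids = new java.util.HashSet<>();
    public void save(String orderId) { ids.add(orderId); }
    public boolean exists(String orderId) { return ids.contains(orderId); }
}

public class OrderStoreDemo {
    public static void main(String[] args) {
        OrderStore store = new InMemoryOrderStore();
        store.save("order-42");
        System.out.println(store.exists("order-42"));
        System.out.println(store.exists("order-99"));
    }
}
```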

Inigo referred to a study by Microsoft which showed that TDD took 15-35% longer than traditional development but was 40-90% better in terms of bug counts.
He suggested that it is the fact that the tests are written at the same time as the code that is important, not that the tests come first. David suggested that you lose some advantages if you don’t follow the rules: writing the tests first means you always know where you are and that every feature has a test. Tom said that he coded test-first when bug fixing, but found that doing true TDD was a bit tedious if followed to the letter. I mentioned Ping Pong programming, where a pair of developers play a game: one writes a failing test, their partner makes it pass and then writes their own failing test, and so on. The idea is to try to write as little code as possible. There was some surprise that this was not mentioned in the books we had read.
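A tiny, invented example of how one ping-pong round might go (plain asserts stand in for JUnit, and the `Calculator` class is purely illustrative): the first test only demands one specific answer, so the cheapest passing code would be to return a constant; the partner’s next test forces the real implementation.

```java
// Illustrative sketch of the "write as little code as possible" game.
// Partner A writes a failing test; partner B makes it pass with the least
// code that could possibly work, then writes their own failing test.
class Calculator {
    // The first test only demanded add(2, 3) == 5, so "return 5;" would have
    // passed it. The second test, add(1, 1) == 2, forces the general version.
    static int add(int a, int b) {
        return a + b;
    }
}

public class PingPongDemo {
    public static void main(String[] args) {
        // Test 1, written by partner A.
        if (Calculator.add(2, 3) != 5) throw new AssertionError("add(2,3) should be 5");
        // Test 2, written by partner B to force generalisation.
        if (Calculator.add(1, 1) != 2) throw new AssertionError("add(1,1) should be 2");
        System.out.println("all tests pass");
    }
}
```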

We talked about the Kent Beck book. Inigo found it to be a little lightweight and quite philosophical while Miquel thought it was easy to read. Inigo thought it had extra sections that were not strictly needed in a book about TDD, for example the chapters on refactoring and patterns. However, he liked the design of the code that is explored in the book, which is a currency conversion application. Inigo had expected some interesting details of JUnit 4, but once he saw the copyright date he realised that this was not possible.
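For anyone who hasn’t read it, a heavily condensed sketch in the spirit of the book’s multi-currency example (the book builds this up over many small test cycles; the details here are simplified from memory, so treat the names and structure as approximate):

```java
// Condensed sketch of a Money value object in the spirit of Kent Beck's
// currency example: immutable, with factory methods per currency and value
// equality that takes the currency into account.
class Money {
    private final int amount;
    private final String currency;

    Money(int amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    static Money dollar(int amount) { return new Money(amount, "USD"); }
    static Money franc(int amount)  { return new Money(amount, "CHF"); }

    Money times(int multiplier) { return new Money(amount * multiplier, currency); }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Money)) return false;
        Money m = (Money) o;
        return amount == m.amount && currency.equals(m.currency);
    }

    @Override
    public int hashCode() { return 31 * amount + currency.hashCode(); }
}

public class MoneyDemo {
    public static void main(String[] args) {
        // Multiplication preserves currency; equality compares both fields.
        System.out.println(Money.dollar(5).times(2).equals(Money.dollar(10)));
        System.out.println(Money.dollar(5).equals(Money.franc(5)));
    }
}
```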

I found that the Dave Astels book felt quite dated. It is quite a few years old now, and many of the technologies it covers, such as JUnit and mocking, have moved on significantly. It could also do with some typesetting love, as it is not the most beautiful book to look at. I also felt that it was rather long – I couldn’t bring myself to read through the extended example which makes up most of the book. Julian (as a newcomer to the subject) did feel that its very practical approach was a good thing, in contrast to Kent Beck’s more theoretical one. So perhaps it is the better of the two for someone new to this area.

We moved on to talking about different testing techniques. I mentioned Jumble, a mutation testing framework which I have described on my work blog. David talked about how he used Fuzzing in his previous job to find bugs in an XML parser. His experience was that it was best to have a smart fuzzer that understood XML as this was more likely to find bugs than just adding random noise to a file.
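David’s point about smart versus dumb fuzzing can be sketched in a few lines. This is a made-up illustration, not his actual tool: the “dumb” fuzzer corrupts a random character anywhere, so most outputs break well-formedness and get rejected before reaching the interesting parser logic; the “smart” one leaves the markup intact and mutates only the content between tags, producing plausible-but-nasty inputs.

```java
import java.util.Random;

// Illustrative contrast between dumb and structure-aware fuzzing of XML input.
public class FuzzDemo {
    static final Random RNG = new Random(42); // fixed seed so the sketch is repeatable

    // Dumb: replace a random character anywhere, often breaking the markup itself.
    static String dumbFuzz(String xml) {
        char[] c = xml.toCharArray();
        c[RNG.nextInt(c.length)] = (char) RNG.nextInt(256);
        return new String(c);
    }

    // "Smart": leave the tags alone and mutate only text content,
    // here by injecting an oversized value between the first pair of tags.
    static String smartFuzz(String xml) {
        return xml.replaceFirst(">([^<]+)<", ">" + "A".repeat(10_000) + "<");
    }

    public static void main(String[] args) {
        String xml = "<order><id>42</id></order>";
        System.out.println(dumbFuzz(xml).length());  // unchanged length, random corruption
        System.out.println(smartFuzz(xml).length()); // much longer: oversized content value
    }
}
```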

Inigo mentioned Hamcrest matchers and the use of assertThat in JUnit. It was generally agreed to be good for readability of code and error messages, but not so good with respect to IDE auto completion. David pointed out that JUnit 4 has this problem over JUnit 3 since the assertion methods need to be statically imported.
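To show why the matcher style reads well, here is a hand-rolled miniature of the idea. This is emphatically not the real Hamcrest API (which lives in `org.hamcrest` and is pulled in via static imports in JUnit 4 – exactly the imports that trip up IDE auto-completion); it just demonstrates how `assertThat(actual, equalTo(expected))` reads like a sentence and yields a descriptive failure message.

```java
// Minimal hand-rolled sketch of the matcher style (not the real Hamcrest API).
interface Matcher<T> {
    boolean matches(T actual);
    String describe();
}

public class MatcherDemo {
    static <T> Matcher<T> equalTo(T expected) {
        return new Matcher<T>() {
            public boolean matches(T actual) { return expected.equals(actual); }
            public String describe() { return "a value equal to " + expected; }
        };
    }

    static <T> void assertThat(T actual, Matcher<T> matcher) {
        if (!matcher.matches(actual)) {
            // The matcher describes itself, which is where the readable
            // error messages come from.
            throw new AssertionError("Expected: " + matcher.describe() + " but was: " + actual);
        }
    }

    public static void main(String[] args) {
        assertThat(2 + 2, equalTo(4));         // reads like a sentence; passes
        try {
            assertThat("cat", equalTo("dog")); // fails with a descriptive message
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```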

Finally Miquel asked about how to test JavaScript. Tom and David described Selenium/WebDriver as the best tool in this area.

How does testing fit in?

Unfortunately it was just Miquel and me at this meeting to discuss testing and how testers fit into an agile project. The discussions were based around Agile Testing by Lisa Crispin and Janet Gregory. I’d managed to read about half of it, while Miquel had managed a little less.

Miquel started off by saying that he found the book quite hard to read and possibly a little repetitive. I agreed that it seemed to be longer than it needed to be. Miquel did like the mind maps that appeared at the start of each chapter though.

We talked about the idea of having the tester be a bridge between the customer and the developers. I liked this alternative approach to the traditional idea of having testers simply being the last line in the chain before software goes into production.

For me, the most important part of the book was the idea of the testing quadrants. They act as a checklist which makes it easy to consider what kinds of testing you are or are not doing, and whether you are doing enough. There are 4 quadrants, along 2 axes: tests are either business-facing or technology-facing, and they either support the team or critique the product. There is a diagram in this blog posting which describes it a little more.

Other things which I found interesting were “Wizard of Oz” testing (p138), Ripple Effects (p143) and Exploratory Testing (p197).

Wizard of Oz usability testing is a prototyping technique where a paper mock-up is animated by testers simply using pieces of paper. So it is like traditional prototyping, but with a limited amount of interactivity added rather than just a static picture.

The idea of “Ripple Effects” is to try to avoid unexpected knock-on effects of changes made to a system. The idea is to explicitly run through a checklist of existing areas of functionality and consider any impacts. This very simple thing is done to avoid the shortsightedness that can sometimes arise when an agile project focusses on each story as it comes up without considering the larger picture.

Finally, I liked the comparison between Exploratory Testing and the Agile Manifesto as explained by Michael Bolton. Since I didn’t know too much about Exploratory Testing before reading this, it was intriguing to see the parallels. There is an article by James Bach at Stickyminds that explains the concepts quite well.