A few weeks ago a copy of ‘Clean Code’ fell into my hands. Actually, it was literally falling off the desk of a colleague, where it had started to collect dust. So I borrowed it and read through it over a couple of nights. This eventually got me started on a few other books which had been on my wishlist for quite a while:
- Working Effectively With Legacy Code
- The Clean Coder
All these books have one thing in common: they recommend writing your tests first and strictly following the three rules of test-driven development (TDD), as defined by Uncle Bob in The Three Rules of TDD. Those are:
- You are not allowed to write any production code unless it is to make a failing unit test pass.
- You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
- You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
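To make the cycle concrete, here is a minimal sketch of one red-green iteration in Python. The `add` function and its test are hypothetical examples of mine, not taken from any of the books:

```python
import unittest

# Rule 2: write no more of a test than is sufficient to fail.
# Before `add` is defined, running this test raises a NameError --
# Python's closest analogue to a compilation failure.
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Rules 1 and 3: now write only enough production code
# to make that one failing test pass.
def add(a, b):
    return a + b

# Run the test: it is now green.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TestResult()
suite.run(result)
print(result.wasSuccessful())  # True
```

The point of the ordering is that the test was observed to fail before `add` existed, so we know it exercises exactly the code written to satisfy it.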
After a bit of reading I convinced myself to follow those three rules explicitly, to finally get a real feeling for the method. Not that I haven’t written tests for the software I’ve created so far; indeed I have. But I never really took the time to try writing tests before production code.
I have now been using TDD for a few weeks, and I don’t feel like I ever want to go back to the old style.
To The Point
Besides all the points mentioned in the books and elsewhere on the web, one thing suddenly occurred to me:
If you write a failing test first and then satisfy it by implementing the production code you will be absolutely sure the test covers the intended production code.
Such a test is very valuable: it is not just another statement in the code base, but rather represents a proven theorem about the production code, and it asserts that the feature stays intact.
In contrast, if you write production code first and only later implement some tests, how do you know which parts of the code actually satisfy the tests? Well, you don’t really know! But you might assume you do.
This might seem rather academic. But what if your test is slightly wrong and actually tests something different? How would you know? You can’t easily know! After all, the test doesn’t fail and has never failed before. So how do you know your test will fail under the intended circumstances? You won’t easily know! Two methods come to mind for showing that a test covers the intended production code and fails if the behavior of that code changes:
- Prove your program correct.
- Remove the production code or change its behavior, then run the test and check whether the test detects the change and actually fails.
If you are not Donald E. Knuth, I don’t expect you to prove your program correct. So you will probably choose method number two. But why would you follow this lengthy procedure:
- Write production code.
- Write a test.
- Run test and get it to succeed.
- Remove production code or alter its behavior in an incompatible way.
- Run test and check if it fails.
- If the test still succeeds start over at step 2.
- If the test fails, revert production code and call it done.
when you could instead just:
- Write a failing test.
- Add production code that satisfies the test and call it done.
Comparing the two procedures, the ‘Write Test First’ paradigm is clearly the winner!