The new millennium has seen growing academic interest in test-driven development. Here, I'll review the papers I've seen so far, as a starting point for further research, a reference for on-line discussions, and a source for resolving bar bets. I've squashed some interesting nuances in each study regarding the exact processes compared, and I'm happy to revise if any reader feels I've mis-quoted or over-simplified.
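For readers who haven't practiced it, here is a minimal sketch of one TDD "red-green" cycle, the process all of these studies examine: the test is written first, fails against a missing or empty implementation, and then just enough code is written to make it pass. The `leap_year` example is my own illustration, not taken from any of the studies below.

```python
import unittest

# Step 2 ("green"): just enough implementation to make the tests pass.
# In real TDD this function would not exist yet when the tests were written.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 1 ("red"): the tests, written before the implementation,
# each encoding one small piece of the required behavior.
class TestLeapYear(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_four_hundredth_year_is_leap(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```

The cycle then repeats: add the next failing test, make it pass, refactor. The studies below compare teams working this way against teams who write code first and test afterward (or not at all).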
First, a couple of studies, both using students, have found either no significant difference or only slight improvement in quality and productivity from using TDD.
- Muller and Hagner found that TDD does not accelerate the implementation, and the resulting programs are not more reliable, but TDD developers end up with better program understanding.
Laurie Williams' group at North Carolina State has conducted several studies of TDD using professional programmers.
- George and Williams found that TDD developers took 16% more time but passed 18% more independently-created tests. They also found that non-TDD teams almost never wrote any tests.
- Maximilien and Williams performed a case study at IBM that showed a TDD project had half the defect rate of a similar non-TDD project, and, again, non-TDD developers never wrote tests.
- Williams, Maximilien, and Youk performed a case study showing a 40% reduction in defects in a TDD project compared to a similar non-TDD project, with similar overall productivity.
Other groups have also found productivity and quality gains from using TDD.
- Erdogmus found that quality increased linearly with the number of programmer tests, that students who wrote more tests were more productive, and that TDD students wrote more tests.
- Bhat and Nagappan found that a TDD project had twice the quality of a non-TDD project, and that its developers actually wrote tests, although writing the tests required 15% more time.
- Geras, Smith, and Miller found no productivity difference between TDD and non-TDD teams, but did find fewer unplanned test failures on the TDD team.
- Janzen found that a TDD team produced code that scored higher on many code metrics than non-TDD teams, and developed more features in the same time, with a similar defect rate. Again, half of the non-TDD developers never found time to write tests.
Many of these papers are well-summarized by Janzen and Saiedian, in a survey paper that takes a positive view of TDD and predicts growing acceptance. Janzen and Saiedian also suggest test-driven learning, an application of TDD to the software engineering classroom.
To draw some tentative conclusions:
- For students, TDD is at worst no better than a disciplined non-TDD approach, and some student groups gain productivity and quality from it.
- All studies on professional developers show productivity or quality gains from using TDD.
- In some cases, TDD teams took longer, but produced higher-quality code. Considering that the cost of finding a bug in QA and sending it back to the developers is greater than the cost of preventing or fixing it during initial development, I consider this an unqualified win for TDD.
- In many cases, although all developers were encouraged to write tests, only TDD teams did. A suite of reliable unit tests has benefits beyond initial development, for catching regressions in functionality and design and as documentation to maintainers, but these benefits were not directly measured in any of the studies above.