Is there hard evidence of the ROI of unit testing?

We've demonstrated, with hard evidence, that it's possible to write crappy software without Unit Testing. I believe there's even evidence of crappy software written with Unit Testing. But this is not the point.

Unit Testing, or Test Driven Development (TDD), is a design technique, not a testing technique. Code that's written test-driven looks completely different from code that is not.
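To make that concrete, here's a minimal sketch of the shape test-driven code tends to take: collaborators are passed in so a test can substitute a fake, instead of being constructed or looked up inside the function. All the names here (InvoiceService, open_invoices) are hypothetical, not from any particular codebase:

```python
import unittest
from unittest.mock import Mock

# Hypothetical service: the repository is injected, so a test can hand in
# a fake instead of standing up a real database.
class InvoiceService:
    def __init__(self, repository):
        self.repository = repository

    def total_due(self, customer_id):
        invoices = self.repository.open_invoices(customer_id)
        return sum(inv["amount"] for inv in invoices)

class InvoiceServiceTest(unittest.TestCase):
    def test_total_due_sums_open_invoices(self):
        repo = Mock()
        repo.open_invoices.return_value = [{"amount": 10}, {"amount": 5}]
        self.assertEqual(InvoiceService(repo).total_due("c42"), 15)

if __name__ == "__main__":
    unittest.main()
```

Code written without tests rarely ends up with that injectable seam; the tests are what push the design there.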

Even though this is not your question, I wonder whether the easiest road is really to answer a question (and gather evidence that can be challenged by other reports) that may be the wrong question in the first place. Even if you find hard evidence for your case, somebody else might find hard evidence against it.

Is it the business of the bean counters to determine how the technical people should work? Are they providing the cheapest tools in all cases because they believe you don't need more expensive ones?

This argument is either won based on trust (one of the fundamental values of agile teams) or decided by the role power of one party. Even if the TDD proponents win based on role power, I'd count it as lost.


Yes. There is a study by Boby George and Laurie Williams at NCSU, and another by Nagappan et al. I'm sure there are more. Dr. Williams's publications on testing may provide a good starting point for finding them.

[EDIT] The two papers above specifically reference TDD and show a 15-35% increase in initial development time after adopting TDD, but a 40-90% decrease in pre-release defects. If you can't get at the full-text versions, I suggest using Google Scholar to see if you can find a publicly available version.


" I have to convice the other programmers and, more importantly, the bean-counters in management, that all the extra time spent learning the testing framework, writing tests, keeping them updated, etc.. will pay for itself, and then some."

Why?

Why not just do it, quietly and discreetly? You don't have to do it all at once. You can do it in tiny pieces.

Learning the framework takes very little time.

Writing one test, just one, takes very little time.

Without unit testing, all you have is some confidence in your software. With one unit test, you still have your confidence, plus proof that at least one test passes.
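As a sketch of how small that first step can be, here's one test using Python's built-in unittest module; parse_port is a hypothetical stand-in for whatever small function you already have:

```python
import unittest

# parse_port is a stand-in for any tiny function already in your codebase.
def parse_port(value):
    return int(value)

class FirstTest(unittest.TestCase):
    def test_parse_port_accepts_digits(self):
        self.assertEqual(parse_port("8080"), 8080)

if __name__ == "__main__":
    unittest.main()  # run with: python test_first.py
```

One file, one assertion, a few minutes. The second test is even cheaper.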

That's all it takes. No one needs to know you're doing it. Just do it.


I take a different approach to this:

What assurance do you have that your code is correct? Or that it doesn't break assumption X when someone on your team changes func1()? Without unit tests keeping you 'honest', I'm not sure you have much assurance.
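As an illustration, a test can pin such an assumption down. The body of func1() here is invented; the point is that the assertion fails loudly the moment a teammate's change breaks the contract:

```python
import unittest

# func1()'s body is hypothetical; the assumption being pinned down is that
# an empty input yields an empty list, never None.
def func1(items):
    return sorted(items)

class Func1ContractTest(unittest.TestCase):
    def test_empty_input_yields_empty_list(self):
        self.assertEqual(func1([]), [])  # fails loudly if a change breaks this

if __name__ == "__main__":
    unittest.main()
```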

The notion of keeping tests updated is interesting: the tests themselves rarely have to change. I've got about 3x as much test code as production code, and the test code has changed very little. It is, however, what lets me sleep well at night, and what allows me to tell the customer with confidence that I can implement the Y functionality without breaking the system.

Perhaps in academia there is evidence, but I've never worked anywhere in the commercial world where anyone would pay for such a study. I can tell you, however, that it has worked well for me: it took little time to get accustomed to the testing framework, and writing tests made me think far harder about my requirements and the design than I ever did when working on teams that wrote no tests.

Here's where it pays for itself: 1) you have confidence in your code, and 2) you catch problems earlier than you would otherwise. The QA guy doesn't get to say, "Hey, you didn't bother bounds-checking the xyz() function, did you?" He doesn't get to find that bug, because you found it a month ago. That's good for him, good for you, good for the company, and good for the customer.
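A hedged sketch of the kind of test that catches that bug a month early; the body of xyz() is invented for illustration:

```python
import unittest

# xyz()'s body is hypothetical: the bounds check is exactly the line a test
# like this forces you to write before QA ever sees the function.
def xyz(values, index):
    if index < 0 or index >= len(values):
        raise IndexError("index %d out of range" % index)
    return values[index]

class XyzBoundsTest(unittest.TestCase):
    def test_rejects_out_of_range_index(self):
        with self.assertRaises(IndexError):
            xyz([1, 2, 3], 5)

if __name__ == "__main__":
    unittest.main()
```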

Clearly this is anecdotal, but it has worked wonders for me. Not sure I can provide you with spreadsheets, but my customer is happy and that is the end goal.