2010/01/18

Spitting out the Kool-Aid

85% code coverage
5,000 unit tests passing
Red. Green. Refactor.

We've all heard the TDD folks spewing comments like these on blogs, Twitter, or Facebook. Quite frankly, I've had enough. I've been on agile teams (little 'a' agile, as there is a difference) and yes, it works much better than the good 'ole waterfall methodology. However, TDD is NOT a requirement of being an agile team. In fact, I've seen how writing your tests up front and coding to the tests can make you less able to make changes to your project.

If you're not driving your design from tests, what is driving your design?

How about the business requirements and the chosen technologies? If you start limiting your design decisions to what is easily testable, you can miss out on some great emerging technologies (try finding anything on unit testing AppFabric caching, a.k.a. Velocity).

Making TDD your meal ticket.

I'd say learn the new features available in your current language, learn a new language, refresh your basic OOP design principles, learn about database design (or UI, or web services, or security, blah, blah, blah). There is always something new to learn, methodologies included, but you really shouldn't limit yourself to one.

Write less code.

I call bull$#!%. Yes, red/green/refactor can potentially limit you to writing exactly the methods you need, but there's the rub: you have to continually make sure you are not future-coding, or you lose the "write less" mentality. For starters, the "less" code comes with all the test code added on. I also see many technologies, tools, and patterns being bundled into the project by the TDD evangelists: mocks, dependency injection, object/relational mapping, and inversion of control all tend to be added as "bonuses" to the end product. So you take your desired functionality and wrap all these layers of fluff around it. Each tech and tool requires bending your project to the way the tool works, and each typically adds new files and projects to the end solution (interfaces, repositories, factories, .hbm.xml, .castle, and on and on). Sounds to me like the promise of "less" leads to more.
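To put a rough shape on that "less leads to more" claim, here's a hypothetical sketch (all class names invented for illustration): the same one-line save behavior written straight, and then again wrapped in the interface/repository/factory ceremony that tends to ride along with mock-friendly design.

```python
from abc import ABC, abstractmethod

# The direct version: one class that does its job.
class Customer:
    def __init__(self, name):
        self.name = name

    def save(self, store):
        store[self.name] = self

# The "testable" version: an interface, a repository, and a factory --
# three extra moving parts to inject and mock, for the same behavior.
class ICustomerRepository(ABC):
    @abstractmethod
    def save(self, customer): ...

class DictCustomerRepository(ICustomerRepository):
    def __init__(self, store):
        self._store = store

    def save(self, customer):
        self._store[customer.name] = customer

class RepositoryFactory:
    @staticmethod
    def create(store) -> ICustomerRepository:
        return DictCustomerRepository(store)

# Both roads end at the same place.
direct, injected = {}, {}
Customer("Billy").save(direct)
RepositoryFactory.create(injected).save(Customer("Billy"))
print(list(direct) == list(injected))  # prints True
```

Same observable result either way; the second version just brings three more types along for the ride.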

Assuming everything else works, does this do its job?

Sounds great, doesn't it? We break our code down into these tiny bits: we put a loader over here, a translator over there, and we dumb the classes down to base objects so we can reuse them in other projects (which NEVER happens). We put the domain classes in one project, the interfaces in another, and the business logic in yet another (because our objects don't know how to do anything with themselves). And since we have mocking set up, we stub out all the inter-connected calls so we never actually have to see the parts work together.

Who cares if Billy plays nice when he's at the park? He gets along so well with his imaginary friends.
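Billy's imaginary friends can be sketched concretely. A hypothetical example (names invented) using Python's `unittest.mock`: the unit test stubs the collaborator to return exactly what the caller hopes for, goes green, and never notices that the real collaborator returns something else entirely.

```python
from unittest.mock import Mock

# The real collaborator returns a dict...
class RealLookup:
    def fetch(self, key):
        return {"value": key.upper()}

# ...but the caller was written expecting a plain string back.
def greet(lookup, name):
    return "Hello, " + lookup.fetch(name)

# The unit test: stub the collaborator to return exactly what the
# caller expects, and the test goes green.
stub = Mock()
stub.fetch.return_value = "BILLY"
unit_test_passed = greet(stub, "billy") == "Hello, BILLY"

# Wire up the real object, and the very same call blows up.
integration_broke = False
try:
    greet(RealLookup(), "billy")
except TypeError:
    integration_broke = True

print(unit_test_passed, integration_broke)  # both True
```

The mock only ever confirms the caller's assumptions back to it; the mismatch lives in the seam the test stubbed out.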

So I shouldn't test my code?

I'm not saying that. There are very good reasons to do testing. Anything behind a service call is a good candidate for putting up some tests; by its nature, the service call is disconnected, so slamming it with numerous inputs is straightforward and easier than testing through a UI. Additionally, items with a great deal of logic branching or computation do well, as does anything that has surfaced as a bug more than once. More importantly, UI testing (automated or manual) covers more of what matters... what the user experiences. It doesn't matter if your Is_it_a_dog tests pass because it has 4 legs and barks. What happens when the user is looking for a cow? TDD evangelists would say you write tests for a cow as needed. What about a horse? A turtle? A lorikeet? The test suite has now become the pig.
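The dog test can be made concrete. A hypothetical `is_it_a_dog` check (invented for illustration) that codifies exactly the attributes the tests happened to cover: it stays green for every animal the suite was written against, and then cheerfully misfires the first time a user brings a fox, because four legs and a bark was all the suite ever asked about.

```python
# Hypothetical classifier: encodes exactly what the unit tests checked.
def is_it_a_dog(legs, sound):
    return legs == 4 and sound == "bark"

# The suite is green for everything it was written against...
assert is_it_a_dog(4, "bark")        # the dog: passes
assert not is_it_a_dog(4, "moo")     # the cow case we bolted on later
assert not is_it_a_dog(4, "neigh")   # ...and the horse
assert not is_it_a_dog(2, "squawk")  # ...and the lorikeet

# ...and confidently wrong the first time the input is a fox,
# which also has four legs and barks.
print(is_it_a_dog(4, "bark"))  # the "fox" still reads as a dog: True
```

Each surprise animal means another test case bolted on, which is the bloat the next section complains about.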

Seeing the forest for the trees.

So we have now bloated our number of tests to cover all these scenarios while adding no real functionality. We've changed the tests and the real code to meet new business requirements or add features. And yet through it all, the TDD tribe will stick to the tests until one crucial point. When a development team is confronted with 200 development hours and only 150 available, they will try to push the timeframe back or shrink the feature count. When neither is possible due to contractual or regulatory requirements, they say screw the testing and start slinging bits every way possible. If TDD were such a time savings, why do you see teams abandon it in what should be its golden hour?

TDD isn't coding with seatbelts. Your tests don't keep you from making mistakes. You'll still misinterpret requirements (and have to change them in at least one additional place). You'll never cover enough fringe cases to match your users' day-to-day interaction. Testing the individual units makes no guarantee that the system, as a whole, works together as needed. So in many cases, TDD is purely a masturbatory exercise.

I am currently working to remove many of these practices from my current project because they have done nothing but degrade system performance and add confusion. I'll take rich domain objects with a real inheritance model that know how to load, save, and translate themselves. Then I know I can do more with less code, and I can make changes to the objects when the requirements change without jumping through some 30 different base, controlling, and god classes after reworking dozens, hundreds, or thousands of tests.

If TDD is working for you, great. If not, don't feel bad. It's not the silver bullet that it has been touted as.