
December 30, 2010

Comments

Telanis_

I think "refactor" has been autocorrected to "refractor".

Anonymouse

Overall, I agree with your main point (TDD pretends we don't need any design), but I disagree with your example.

> We may discover later, however, that maintaining a list of lines might be better.

This should be an implementation detail. Are you saying the API will be different because we're storing text by lines instead of a single string? If so, your API needed fixing anyway. The API will always have different levels of granularity (select word, select line, select paragraph). The whole point of a good API is NOT to expose the representation that's internal to the class.

On an embedded system with little RAM, we may opt for "insert this byte by manual copy/shifting the remaining text" and "count lines via brute-force search for \n". For small text files, the caller will never notice.

The internal representations change for performance/maintenance needs only. If the new internal representation "leaks out" into the API, then that's a leaky abstraction. This should only be allowed when performance targets can't be met with a "pure" API.
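A minimal sketch of this idea in Python (the `TextBuffer` name and its methods are illustrative, not from the post): the API talks about editing operations, and the string-vs-lines decision stays private.

```python
class TextBuffer:
    """Exposes editing operations without leaking the storage format."""

    def __init__(self, text=""):
        # Internal representation: a single string today. It could become
        # a list of lines (or a gap buffer) without changing this API.
        self._text = text

    def insert(self, position, s):
        # "Insert by copy/shift", as in the embedded-systems example.
        self._text = self._text[:position] + s + self._text[position:]

    def line_count(self):
        # Brute-force search for '\n'; a line-list representation could
        # swap in len(self._lines) here without callers noticing.
        return self._text.count("\n") + 1

    def text(self):
        return self._text
```

Callers only ever see `insert`, `line_count`, and `text`, so swapping the representation is invisible to them as long as performance targets are met.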

So if a programmer makes a bad API, he'll be forced to fix it later, and eventually he'll learn not to do that (and start thinking ahead a little bit). But I don't really think that's entirely TDD-specific. After all, people do the same thing outside of TDD all the time.

Grok2

Excellent writeup -- it made me think! You should also separate out the earlier paragraphs about the scientist and engineer types from the later paragraphs about TDD, language granularity, and grain (the latter portion is what I found interesting, btw).

Rachel Blum

Worth considering:

The things that force the large steps are usually two kinds of stories, IME.

1) Stories that push the boundaries of your system. (Performance requirements get you off the string representation. Space requirements for docs larger than available RAM get you to a paged representation. Per-character formatting might push you in yet another direction.)

2) Stories that require deeper insight and that can't be approached incrementally. (Famously, the Sudoku solver. But really, any algorithmic problem)

Both of those seem to require some amount of up-front design, just to gain understanding of the problem. One possible conclusion is that there is a certain class of stories, lurking on the outer perimeter of your system and at the very core, that should be answered on paper/napkin first, before you write large amounts of code.

Iain_nl

Same topic as Uncle Bob. Did you plan this? And what's your opinion about his article?

Link: http://cleancoder.posterous.com/the-transformation-priority-premise

Joel Parker Henderson

@Rachel, I agree and would add a third kind of thing that forces large steps, which is a sea change feature.

A simple example of this for the string editor would be adding internationalization. TDD could be especially valuable for detecting tricky bugs, for example due to string representation, string searching algorithms, and interactions between the app's strings and any of the app's libraries.

Apo

I like the "design grain" and "granularity" concepts. To me they summarize the feeling I've had that relying on plain unit testing is not enough; instead, you should diversify your tests to cover the different granularity levels of the software.

That way you'll have simple unit tests for the internals, functional tests to cover the overall functionality, etc. When you change the internal implementation, ideally you don't have to touch the functional tests; you just need to write unit tests for the new internals.

Planning a good test coverage becomes an act of balancing between the different granularity levels and their pros and cons.
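As a rough illustration of the two granularity levels (hypothetical editor functions; the names are mine, not from the comment):

```python
# Internal helper: fair game for a unit test, and free to change
# when the implementation changes.
def split_lines(text):
    return text.split("\n")

# Public operation: what a functional test should pin down.
def word_count(text):
    return sum(len(line.split()) for line in split_lines(text))

# Unit test: exercises the internal granularity level.
assert split_lines("a\nb") == ["a", "b"]

# Functional test: exercises behavior only, so it survives
# a rewrite of split_lines.
assert word_count("one two\nthree") == 3
```

If `split_lines` is later replaced, the unit test gets rewritten along with it, but the functional test keeps guarding the behavior callers actually depend on.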

Michael Feathers

@Iain_nl It wasn't planned, but Bob's article did inspire mine. I think there's truth in his hypothesis, but I also think that there are real gulfs that we encounter when we work on problems which make cul-de-sacs unavoidable.

Michael Feathers

@Anonymouse I think that's a seductive mode of thought... that we only have to change representation for performance reasons. Sometimes it's just a matter of realizing that new features can be added much more easily if we revisit an earlier decision.

Pbadenski

The way I see it, there's a more general problem here: a lack of linguistic apparatus to constructively discuss software design. In the subject being discussed there's obviously some property, a part of something bigger, and I can't help feeling it is something fundamental (same as you, if I understand correctly). It's a pity that we're not able to talk about these basic things, that we don't have that model, that theory of software design with all the basic forces identified. Is it really true that we haven't moved any further than coupling and cohesion? Or am I just ignorant for not knowing more?

Obviously we're trading one thing for another. The question is: what for what, and why? I'm sure everyone could think of many concepts on the spot, but we've been doing this development, this software evolution, every day for some time now, and we still don't have it figured out, what the hell. It might sound like just "academic stuff", but if at the end of the day you're trying to convince each other by speaking of the beauty of a particular design choice, then guess what: the joke's on you. Moreover, I believe it's just a start, because as soon as more complex ideas come into play, such as TDD in the article, we are lost like a child in the woods. That's the stuff we have not figured out at all.

Maybe, just maybe, it's only because recently I've been fixated on programmers' lack of a methodical grasp on software design... or it is my inner scientist who's responsible for my seeing the problem this way :)

I can't help feeling there are basic forces which shape our design, viewed both as a structure and as a process; we're just not able to see them yet. I would really love to see people studying something like "philosophy of software design", people exploring, giving full rein to their imagination. We had alchemy before chemistry, astrology before astronomy, philosophy before... pretty much every science. Damn, we need that creative process!

Hope I didn't digress too much... :)

Paul Saieg

@Michael
I enjoyed your post, especially the scientist/engineer distinction. Those really are the poles between which developers oscillate.

As far as the "you can't easily get there from here" problem goes, where a fundamental representation needs to change (like the string/array representations in the text editor example), I think you are perhaps a quarter-inch off the mark.

The real problem, it seems to me, is not that incremental design work via TDD can't address this issue; it's just that it's hard to know when to buy the complexity you need to keep the refactoring simple and the cost of change low.

Since this kind of change is a shear along what Uncle Bob calls the "Data/Object Anti-Symmetry", what you really seem to be saying is that, if you have been programming procedurally (with data structures or, worse, primitive obsession), changing the shape of that data structure will cause all of the functions which interact with that data to change. This form of shotgun surgery is what makes the refactoring hard, because BOTH the signatures of your methods must change (which stops potentially thousands of your tests from even compiling) AND the logic of your methods must change (which often invalidates your tests outright). The root of the problem is that deep knowledge of the structure of your data is duplicated throughout not only your production code base, but also the tests you need in order to ensure the production code still works.
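A hedged sketch of that shear, with hypothetical names (none of this code is from the post): in the procedural style every function knows the document's shape, so a representation change is shotgun surgery; behind an object, the knowledge lives in one place.

```python
# Procedural style: every function knows the document is a plain string.
# Switching to a list of lines forces all of them (and their tests) to change.
def char_at(doc, i):
    return doc[i]

def line_count(doc):
    return doc.count("\n") + 1

# Object style: the representation is known in exactly one place.
class Document:
    def __init__(self, text):
        self._lines = text.split("\n")  # change the representation here, and only here

    def char_at(self, i):
        # Naive: rebuilds the flat string on each call; good enough for a sketch.
        return "\n".join(self._lines)[i]

    def line_count(self):
        return len(self._lines)
```

Both styles answer the same questions; the difference shows up only when the shape of the data has to change.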

The trick, I think, is to know when to buy the complexity of a true object that exposes a tested API, because, in all likelihood, by the time the requirements demand the complexity, the cost of change is already high and we need some kind of seam-introducing-acrobatics() && advanced-refactoring-ninjitsu() to move forward safely.

We are thus left in a bit of a pickle, because most programmers (even we "scientists"), when it came time to implement the design that would insulate the code base against these difficult, disparate changes, would call YAGNI and not do it until there was a requirement that made us.

@Pbadenski: I think something we do need, as craftsmen, is a better way to talk about when to introduce complexity. In the present example, the usual TDD minimalist axioms like YAGNI and friends end up costing us in design debt. At present, we have no language or axiom for this intuition, which guides every experienced TDDer to break these rules on occasion, but is hard to describe or justify to a larger organizational context.

Rachel Blum

@Joel I'd almost argue that "sea change features", such as introducing i18n late in the game, mean that you built the wrong product. Unfortunately, while that allows me to complain about management once more missing the boat, it doesn't change the fact that I'll still have to do it ;)

Had we done it at the start, it would be a boundary story....

Michael Feathers

@Rachel I think that's one of the things we're struggling with right now... we're not looking at the major refactorings as part of business as usual, because there's the sense of "well, if we had done it right." To me, it seems that even good codebases end up with hard-to-change assumptions, whether from unanticipated features or from late insight into a better way of structuring things. One of my favorite stories on the latter is in Eric Evans's book, when a team discovered they needed share pies.

Philip Schwarz

@Michael

You said: "I once read that people who end up as developers usually have one of two different temperaments: scientist or engineer. You either get a deep thrill out of learning things or a deep thrill out of creating things."

Here is how Fred Brooks put it: "A scientist builds in order to learn; an engineer learns in order to build".

Luca Minudel

Found those thoughts interesting.
Until now I thought of what is described here as a black box called the "creative process"; curious to see what can come out of digging into it.

I found the title "Making Too Much of TDD" (maybe good to drive some more traffic) a bit misleading.

Luca Minudel


I will rephrase "Red/Green/Refactor is a generative process, but it is extremely dependent upon the quality of our input." like this:

"Red/Green/Refactor is a generative process that depends upon the co-evolution of the ones practicing it and their skills, knowledge, experience, and intelligence."

George Harrison

You guys are all such integers! TDD is not a lifestyle choice, it is genetic.

The comments to this entry are closed.