Sunday, March 07, 2010 

You can't control what you can't …

… measure, Tom DeMarco used to say ("Controlling Software Projects: Management, Measurement, and Estimation", 1982). Tom recently confessed (in "Software Engineering: An Idea Whose Time Has Come and Gone?", IEEE Software, 2009) that he no longer subscribes to that point of view. Now, I like Tom and I've learnt a lot from him, but I don't really agree with most of what he's saying in that paper.

Sure, the overall message is interesting: earth-shaking projects have a ROI so big that you don't really care about spending a little more money. But money isn't the only thing you may need to control (what about time, and your window of opportunity?), and not every project can be an earth-shaking project. If you need to comply with some policy or regulation by a given date, it may well be a mundane project, but you'd better control for time :-). More examples (tons, actually) on demand. Relinquishing control is a fascinating concept, and by all means, if you can redefine your projects so that control is no longer necessary, just do it. But frankly, it's not always an option.

Still, can we control what we can't measure? As usual, it depends: on what you want to control, and on your definition of control. We can watch over some things informally, that is, using a rough, imprecise, perhaps intuitive measure (a "feeling"), and still keep within reasonable boundaries. This might be enough to be "in control". As others have noted (see for instance Managing What You Can’t Measure), sometimes all we need is a feeling that we're going off track, and a sensible set of tactics to get back on.

All that said, I feel adventurous enough today :-) to offer my own version of Tom's (repudiated) law. I just hope I won't have to take it back in 30 years :-).

You can't control what you can't name.

I would have said "define", but a precise definition is almost like a measure. If you can't even name the concept (which, yes, requires at least a very informal definition of the term), you're not consciously aware of it. Without awareness, there is no control.

Let me say that better: you can't control it intentionally. For a long time, people have controlled forces they didn't fully understand, and perhaps couldn't even name - in building construction, for instance. They did that through what Alexander called the unselfconscious process, relying on tradition (which was largely based on trial and error).

I see this very often in software projects too: people doing things because tradition taught them to do so. They don't really understand why, and they would react vehemently if you dared to question their approach or suggest another way. They do so because tradition provides safety, and you're threatening that safety.

The problem with the unselfconscious process is that it doesn't scale well. When the problem is new, when the rate of change in the problem domain increases, whenever the right answer can't be found in tradition, the unselfconscious process doesn't work anymore. We gotta move to the selfconscious process. We gotta learn concepts. Names. Forces. Nonlinear interactions. We gotta think before we do. We gotta ask questions. Question the unquestionable. Move outside our comfort zone. Learn, learn, learn.

Speaking of learning, I've got something to say, which is why I wrote this post in the first place, but I'll save that for tomorrow :-).


Wednesday, July 12, 2006 

Self-fulfilling expectations

A reader reminded me via email that I've used the expression "self-fulfilling expectation" twice when answering comments from the agile crowd (see the recent post on TDD and an older post on project management), without ever really explaining what I meant.
A self-fulfilling expectation (also known as a self-fulfilling prophecy) is a belief that, when held [by enough people, in some cases], works to make itself true. A popular example comes from finance: for some unjustified reason, a number of people start to believe that a traded company is in financial trouble; they start to sell its stock, the company's value begins to drop, they sell at even lower prices, and so on; in the end, the company may experience real financial trouble. For a variation, see Wikipedia.
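
Just to make the feedback loop explicit, here is a toy sketch in Python. Everything in it is made up for illustration - the initial numbers, the linear "selling pressure" rule, the rate at which a falling price recruits new believers - so take it as a sketch of the dynamic, not a market model:

```python
# Toy sketch of a self-fulfilling prophecy: the belief alone
# drives the price down, which in turn "confirms" the belief.
# All parameters and update rules are illustrative assumptions.

def simulate(steps=8, believers=0.1, price=100.0):
    for t in range(steps):
        price *= 1.0 - 0.2 * believers          # believers sell; price drops
        believers = min(1.0, believers * 1.5)   # the drop recruits new believers
        print(f"step {t}: believers={believers:.0%}, price={price:.2f}")

simulate()
```

Nothing about the company changes from one step to the next; only the belief does, and yet the price ends up genuinely lower. That's the whole mechanism.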

What has this to do with software engineering? Well, quite a lot. I'll give an example, using non-code artifacts, say diagrams. Of course, it's just one among many possible examples.

Some people start with an initial belief that diagrams are a useful way to reason about some problems (if they're naive, they may think all problems, but let's skip that). They believe that having an abstract view of the code (without all its details) makes some reasoning simpler. They start to use diagrams, and they hone their ability to create useful ones. They also keep diagrams alive as the project grows: not by using reverse-engineering tools, because that would quickly make the diagrams useless (full of details, basically a graphical view of the code, not a model). Since they believe that diagrams must be abstract views, they make abstract models, so small changes to the code won't change the diagrams. Since they believe that some reasoning is simpler on diagrams than on code, when a change is big enough to impact the diagrams, they will first think on the diagram, find a good solution by playing with it, then implement the change in code. Either way, the diagrams will be kept alive, so at any time they can switch their reasoning from code to diagram. Diagrams will probably also prove to be a good communication device. Naturally, there will be a cost (sometimes you won't even see an immediate value in keeping the diagrams alive), but in the end the diagrams will prove really useful, and will more than pay back that cost, just as expected.

Some people believe that the only useful long-term artifact is code. They may sketch some diagrams in the beginning (or maybe not), but they won't keep them alive for long. As soon as they don't see immediate value in keeping the diagrams alive, they will scrap them. As they don't value diagrams much, they won't invest much in making useful, long-lasting diagrams; in some cases, they won't even spend time learning something like UML in depth, picking up just the basic syntax. Therefore, they won't be able to model much using diagrams (which immediately makes them useless, already fulfilling the expectation). Given their cursory knowledge of the notation, it will be hard for a group of such people to use diagrams as an effective communication tool (again making them pretty useless, and fulfilling the expectation). If the diagrams are not discarded entirely, they will soon get totally out of sync with the code (since even large changes are made directly in code, not thought through on diagrams). Therefore, they will add less and less value to the project, finally becoming useless, just as expected.

Note that you can't just say things like "use diagrams as long as they add value", because that implies that as soon as you don't see short-term value, you'll scrap them. In two weeks (or two months) they could prove exceptionally useful, if you keep them alive (prophecy 1). Of course, they won't be if you don't (prophecy 2).

The fact is, what we believe tends to come true. We reach points where an option is logical only within a belief system, and acting according to our beliefs makes those beliefs true. More on beliefs and values in Software Engineering another time :-).

This is one more reason why I don't like fanaticism of any kind. Fanatics tend to close themselves inside self-fulfilling, self-feeding belief systems, and lose the ability to see beyond them. My simple recipe? Keep your beliefs flexible - it's science and technology, not religion. Understand that there is no single perfect approach to every problem, although we can try to find a specific, almost-perfect approach to any specific problem. Finally, if you really have to believe in something, believe in what you would like to come true :-).
