Tuesday, March 30, 2010 

This blog has moved


This blog is now located at http://carlopescio.blogspot.com/.
You will be automatically redirected in 30 seconds.

For feed subscribers, please update your feed subscriptions to
http://carlopescio.blogspot.com/feeds/posts/default.

Blog Maintenance

I'm using Blogger to manage my blog, and so far I've been publishing to my website using the FTP Publishing feature, which Google has decided to remove.
I don't want to switch to another blogging platform (yet), so I'll have to go with the flow, migrate my blog to a custom domain, and hope Google's FTP Migration Tool works :-).

When all is said and done, you'll no longer be able to reach my blog at www.eptacom.net/blog. It will move to www.carlopescio.com, where I've been hosting a copy of the blog so far. If you subscribed to the Atom feed, you'll have to re-subscribe after the blog is moved (sorry).

A message from Google should appear here after migration. I guess it will take a few days, as some DNS stuff is involved. That will be the right time to update your favorites, re-subscribe to the feed, etc.

See you on www.carlopescio.com (yeah, I'm being optimistic :-))


Sunday, March 21, 2010 

Quote of the Day

Problems worthy of attack prove their worth by hitting back (Piet Hein)


Friday, March 12, 2010 

Where is your Knowledge?

Software development is a process of knowledge acquisition and knowledge encoding (see Phillip Armour, copiously quoted in this blog). Where, and how, do we store that knowledge? In several places, in several ways:

In source code: that's executable knowledge
In models: that's formal knowledge
In other kinds of documents: that's written knowledge
In our brain, consciously: that's explicit knowledge
In our brain, unconsciously: that's tacit knowledge

Knowledge stored in source code has the extremely useful property of being executable, but we can't store all of our development knowledge in executable statements. Design Rationale, for instance, is not present in code (and not even in most UML diagrams, for that matter), and is basically stored at the conscious/unconscious level. My forcefield diagram is much better at formally capturing rationale.

Explicit knowledge is often passed on as oral tradition, while tacit knowledge is often passed on as "a way of doing things", just by working together. Pair programming, reviews, joint design sessions (and so on) help distribute both explicit and tacit knowledge.

Knowledge has value, but that value is not constant over time. In 1962, Fritz Machlup came up with the concept of Half-life of knowledge: the amount of time that has to elapse before half of the knowledge in a particular area is superseded or shown to be untrue.

Moreover, the initial value of a particular piece of knowledge can be very high, like a new algorithm that took you years to get right, or very low, like a trivial validation rule.

Recently, I began to think about the half-life of our knowledge repositories as well. With my usual boldness, I'll go ahead and define the Half-Life of a Knowledge Repository: the amount of time that has to elapse before half of the knowledge in a repository is unrecoverable or just too costly to recover. I could even define "too costly" as "higher than the discounted value of that knowledge when lookup is attempted".
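
To make that definition a little more concrete, here is a minimal sketch (the numbers and names are mine, purely hypothetical): under the usual half-life assumption, the value of a piece of knowledge decays exponentially, and recovery is worth attempting only while the recovery cost stays below the discounted value at lookup time.

    #include <cmath>
    #include <iostream>

    // Discounted value of a piece of knowledge after 'years',
    // assuming exponential decay with the given half-life.
    double discountedValue(double initialValue, double halfLifeYears, double years)
    {
        return initialValue * std::pow(0.5, years / halfLifeYears);
    }

    // "Too costly" as defined above: the recovery cost exceeds the
    // discounted value at the moment lookup is attempted.
    bool worthRecovering(double recoveryCost, double initialValue,
                         double halfLifeYears, double yearsSinceStored)
    {
        return recoveryCost < discountedValue(initialValue, halfLifeYears, yearsSinceStored);
    }

    int main()
    {
        // A (made up) algorithm worth 100 units, with a 5-year half-life,
        // looked up 8 years later at a recovery cost of 20 units.
        std::cout << std::boolalpha
                  << worthRecovering(20, 100, 5, 8) << "\n";   // true: 100 * 0.5^1.6 is about 33
    }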

The concept of recoverable knowledge is slightly deeper than it may seem. Sure, it does cover the obvious problems of losing knowledge for lack of backup procedures, or because it's stored in a proprietary format that is no longer supported, and so on. But it also covers several interesting cases:

- the knowledge is in the brain of an employee, who leaves the company
- the knowledge is in source code, but it's in an obsolete language
- the knowledge is in source code, but it's extremely hard to read
- etc.

I'll leave it up to you to define the half-life of source code, models, documents, brain (conscious and unconscious). Of course, more details are needed: niche languages, for instance, tend to have a shorter half-life.

Now, here is the real boon :-). We can combine the concept of Knowledge Half-Life, Knowledge Value, and Knowledge Repository Half-Life to map the risk of storing a particular piece of knowledge in a particular repository (only). Here is my first-cut map:

Knowledge Half-Life    Knowledge (initial) Value    Repository Half-Life    Result
Long                   Long                         Long                    OK
Long                   Long                         Short                   Risk
Long                   Short                        Long                    Little Waste
Long                   Short                        Short                   Little Risk
Short                  Long                         Long                    Little Waste
Short                  Long                         Short                   Little Risk
Short                  Short                        Long                    Waste
Short                  Short                        Short                   OK
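
If it helps, the same map can be written down as a tiny decision function (a sketch; the enum names are mine, not established terminology). Reading it this way also exposes the pattern behind the table: a short-lived repository trades in risk, a long-lived one in waste, and the severity grows or shrinks with the half-life and value of the knowledge at stake.

    enum class HalfLife     { Short, Long };
    enum class InitialValue { Low,   High };   // "Short" / "Long" in the table above
    enum class Outcome      { OK, LittleRisk, Risk, LittleWaste, Waste };

    // Reproduces the map above. A short-lived repository exposes you to risk
    // (the more so, the longer-lived and more valuable the knowledge); a
    // long-lived repository exposes you to waste (the more so, the shorter-lived
    // and less valuable the knowledge).
    Outcome storageOutcome(HalfLife knowledge, InitialValue value, HalfLife repository)
    {
        const int stake = (knowledge == HalfLife::Long) + (value == InitialValue::High); // 0..2
        if (repository == HalfLife::Short)   // the knowledge may outlive its repository
            return stake == 2 ? Outcome::Risk  : stake == 1 ? Outcome::LittleRisk  : Outcome::OK;
        else                                 // the repository may outlive useful knowledge
            return stake == 0 ? Outcome::Waste : stake == 1 ? Outcome::LittleWaste : Outcome::OK;
    }

For instance, storageOutcome(HalfLife::Long, InitialValue::High, HalfLife::Short) yields Risk, which is the second row of the table.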


It's interesting to review one of the values in the Agile Manifesto (Working software over comprehensive documentation) under this perspective.

Let's say we have a piece of knowledge, and that knowledge can indeed be stored in code (as I said, you can't store everything in code).

If the half-life of knowledge is short, storing it in code only is probably the best economical choice. If the half-life of knowledge is long, we have to worry a little more. If we add relevant unit tests to that piece of code, we increase the repository half-life, as they make it easier to recover knowledge from code. If we use a mainstream language, we can also increase the repository half-life.
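
As a tiny (completely made up) illustration of that point: the bare code below keeps the what of a trivial validation rule, while the named assertions next to it keep a good part of the why, which is exactly the knowledge you would otherwise have to recover the hard way.

    #include <cassert>
    #include <string>

    // A trivial piece of encoded knowledge (hypothetical rule): order codes are
    // 10 characters, start with a supported country prefix, and contain no blanks.
    bool isValidOrderCode(const std::string& code)
    {
        if (code.size() != 10) return false;
        if (code.compare(0, 2, "IT") != 0 && code.compare(0, 2, "DE") != 0) return false;
        return code.find(' ') == std::string::npos;
    }

    // The tests name the rules, so part of the rationale survives next to the code.
    int main()
    {
        assert( isValidOrderCode("IT12345678"));  // a well-formed code
        assert(!isValidOrderCode("IT1234567"));   // length is part of the contract
        assert(!isValidOrderCode("FR12345678"));  // only the supported country prefixes
        assert(!isValidOrderCode("IT12 45678"));  // blanks would break downstream imports
    }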

This may still not be enough. If you had to recover the entire knowledge stored in a non-trivial piece of code (say, an MP4 codec) having only the source code, and no (comprehensive) documentation on what that piece of code is doing, why, and how, it would take you far too long. The half-life of code is shorter than the half-life of code + documents.
Actually, depending on context, given the choice between having just the code and nothing else, or just comprehensive documentation and nothing else, we'd better be careful about what we choose (when knowledge half-life is long, of course).

Of course, the opposite is also true: if you store knowledge with short half-life outside code, you seriously risk wasting your time.

I've often been critical of teaching and applying principles and techniques without the necessary context. I hope that somehow, the table above and the underlying concepts can move our understanding of when to use what a little further.


Monday, March 08, 2010 

Why you should learn AOP

A few days ago, I spent some time reading a critique of AOP (The Paradoxical Success of Aspect-Oriented Programming by Friedrich Steimann). As often happens, I felt compelled to read some of the bibliographical references too, which took me a little more (week-end) time.

Overall, in the last few years I've devoted quite some time to learn, think, and even write a little about AOP. I'm well aware of the problems Steimann describes, and I share some skepticism about the viability of the AOP paradigm as we know it.

Too much literature, for instance, is focused on a small set of pervasive concerns like logging. I believe that as we move toward higher-level concerns, we must make a clear distinction between pervasive concerns and cross-cutting concerns. A concern can be cross-cutting without being pervasive, and in this sense, for instance, I don't really agree that AOP is not for singletons (see my old post Some notes on AOP).
Also, I wouldn't dismiss the distinction between spectators and assistants so easily, especially because many pervasive concerns can be modeled as spectators. Overall, the paradigm seems indeed a little immature when you look at the long-term maintenance effects of aspects as they're known today.

Still, I think the time I've spent pondering on AOP was truly well spent. Actually, I would suggest that you spend some time learning about AOP too, even if you're not planning to use AOP in the foreseeable future.

I don't really mean learning a specific language - unless you want/need to try out a few things. I mean learning the concepts, the AOP perspective, the AOP terminology, the effects and side-effects of an Aspect Oriented solution.

I'm suggesting that you learn all that despite the obvious (or perhaps not so obvious) deficiencies in the current approaches and languages, the excessive hype and the underdeveloped concepts. I'm suggesting that you learn all that because it will make you a better designer.

Why? Because it will expand your mind. It will add a new, alternative perspective through which you can look at your problems. New questions to ask. New concepts. New names. Sometimes, all we need is a name. A beacon in the brainstorm, and a steady hand.

As I've said many times now, as designers we're shaping software. We can choose many shapes, and ideally, we will find a shape that is in frictionless contact with the forcefield. Any given paradigm will suggest a set of privileged shapes, at macro and micro-level. Including the aspect-oriented paradigm in your thinking will expand the set of shapes you can apply and conceive.

Time for a short war story :-). In the past months I've been thinking a lot about some issues in a large CAD system. While shaping a solution, I'm constantly getting back to what I could call aspect-thinking. There are many cross-cutting concerns to be resolved. Not programming-level concerns (like the usual, boring logging stuff). Full-fledged application-domain concerns, that tend to cross-cut the principal decomposition.

Now, you see, even thinking "principal decomposition" and "cross-cutting" is making your first step into aspect-thinking. Then you can think about ways to bring those concerns inside the principal decomposition (if appropriate and/or possible and/or convenient) or think about the best way to keep them outside without code-level tangling. Tangling. Another interesting name, another interesting concept.

Sure, if you ain't using true AOP (for instance, we're using plain old C++), you'll have to give up some obliviousness (another name, another concept!), but it can be done, and it works fine (for a small-scale example, see part 1 and part 2 of my "Can AOP inform OOP?").
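
To make the idea a bit more concrete, here is a toy sketch (not the CAD system's code; the names are hypothetical): the domain class stays inside the principal decomposition and only exposes an explicit hook, so the cross-cutting concern lives elsewhere, untangled. The price is exactly the loss of obliviousness mentioned above: the class has to know that something may be observing it.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // Principal decomposition: a domain class that knows nothing about any
    // specific cross-cutting concern, only that "something" may want to observe it.
    class Document
    {
    public:
        using Hook = std::function<void(const std::string& what)>;

        void addHook(Hook h) { hooks_.push_back(std::move(h)); }

        void rename(const std::string& newName)
        {
            name_ = newName;
            notify("rename");   // the explicit join point: here we give up obliviousness
        }

    private:
        void notify(const std::string& what)
        {
            for (auto& h : hooks_) h(what);
        }

        std::string name_;
        std::vector<Hook> hooks_;
    };

    // The cross-cutting concern (auditing, say) lives here, untangled from
    // Document's own logic.
    int main()
    {
        Document d;
        d.addHook([](const std::string& what) { std::cout << "auditing: " << what << "\n"; });
        d.rename("new drawing");
    }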

So far, the candidate shape is causing some discomfort. That's reasonable. It's not a "traditional" solution. Which is fine, because so far, tradition didn't work so well :-). Somehow, I hope the team will get out of this experience with a new mindset. Nobody used to talk about "principal decomposition" or "cross-cutting concern" in the company. And you can't control what you can't name.

I hope they will gradually internalize the new concepts, as well as the tactics we can use inside traditional languages. That would be a major accomplishment. Much more important than the design we're creating, or the tons of code we'll be writing. Well, we'll see...


Sunday, March 07, 2010 

You can't control what you can't …

… measure, Tom DeMarco used to say ("Controlling Software Projects: Management, Measurement, and Estimation", 1982). Tom recently confessed he no longer subscribes to that point of view. Now, I like Tom and I've learnt a lot from him, but I don't really agree with most of what he's saying in that paper.

Sure, the overall message is interesting: earth-shaking projects have a ROI so big that you don't really care about spending a little more money. But money isn't the only thing you may need to control (what about time, and your window of opportunity?), and not each and every project can be an earth-shaking project. If you need to comply with some policy or regulation by a given date, it may well be a moot project, but you'd better control for time :-). More examples (tons, actually) on demand. Relinquishing control is a fascinating concept, and by all means, if you can redefine your projects so that control is no longer necessary, just do it. But frankly, it's not always an option.

Still, can we control what we can't measure? As usual, it depends. It depends on what you want to control, and on your definition of control. We can watch over some things informally, that is, using a rough, imprecise, perhaps intuitive measure ("feeling") and still keep inside reasonable boundaries. This might be enough to be "in control". As others have noted (see for instance Managing What You Can’t Measure) sometimes all we need is a feeling that we're going off track, and a sensible set of tactics to get back on.

All that said, I feel adventurous enough today :-) to offer my own version of Tom's (repudiated) law. I just hope I won't have to take it back in 30 years :-).

You can't control what you can't name.

I would have said "define", but a precise definition is almost like a measure. But if you can't even name the concept (which, yes, requires at least a very informal definition of the term), you're consciously unaware of it. Without awareness, there is no control.

I can say that better: you can't control it intentionally. For a long time, people have controlled forces they didn't fully understand, and perhaps couldn't even name, for instance in building construction. They did that through what Alexander called the unselfconscious process, by relying on tradition (which was largely based on trial and error).

I see this very often in software projects too. People doing things because tradition taught them to do so. They don't really understand why - and would react vehemently if you dare to question their approach or suggest another way. They do so because tradition provides safety, and you're threatening their safety.

The problem with the unselfconscious process is that it doesn't scale well. When the problem is new, when the rate of change in the problem domain increases, whenever the right answer can't be found in tradition, the unselfconscious process doesn't work anymore. We gotta move to the selfconscious process. You gotta learn concepts. Names. Forces. Nonlinear interactions. We gotta think before we do. We gotta ask questions. Question the unquestionable. Move outside our comfort area. Learn, learn, learn.

Speaking of learning, I've got something to say, which is why I wrote this post in the first place, but I'll save that for tomorrow :-).


Friday, February 12, 2010 

Form vs. Function: a Space Odyssey

I was teaching Object Oriented Design last week, and I mentioned the interplay between form and function (form follows function; function follows form). I'm rather careful not to spend too much time on philosophy, although a little philosophy shouldn't hurt, and people who know me tend to expect a short philosophical digression every now and then.

Function follows form: that is to say, the shape of an object will suggest possible uses. Form follows function: the intended usage of an object will constrain, and therefore guide, its physical shape. This is true for software objects as well. It's just a different material, something we can't immediately see and shape with our hands.

Realizing that function follows form is a pivotal moment in the development of intelligence. You probably remember the opening of 2001: A Space Odyssey. The apes are touching the monolith and, shortly after, one of them is playing with a bone and bang! - it's not just a bone anymore: it's a tool. Function follows form. This chapter is known as "The Dawn of Man", and rightly so.

Watch a little more, and you'll see a doughnut-shaped space station. That's a very good example of form following function (exercise for the reader :-)

By the way, if you have never seen that apes stuff till now :-), here it is, at least until it gets removed from YouTube...


Monday, January 25, 2010 

[Long] Quote of the Day

"I learned this, at least, by my experiment: that if one advances confidently in the direction of his dreams, and endeavours to live the life which he has imagined, he will meet with a success unexpected in common hours. He will put some things behind, will pass an invisible boundary; new, universal, and more liberal laws will begin to establish themselves around and within him; or the old laws be expanded, and interpreted in his favour in a more liberal sense, and he will live with the license of a higher order of beings. In proportion as he simplifies his life, the laws of the universe will appear less complex, and solitude will not be solitude, nor poverty poverty, nor weakness weakness. If you have built castles in the air, your work need not be lost; that is where they should be. Now put the foundations under them."

Henry David Thoreau
