Monday, March 08, 2010 

Why you should learn AOP

A few days ago, I spent some time reading a critique of AOP (The Paradoxical Success of Aspect-Oriented Programming by Friedrich Steimann). As often happens, I felt compelled to read some of the bibliographical references too, which took a little more of my (week-end) time.

Overall, in the last few years I've devoted quite some time to learn, think, and even write a little about AOP. I'm well aware of the problems Steimann describes, and I share some skepticism about the viability of the AOP paradigm as we know it.

Too much literature, for instance, is focused on a small set of pervasive concerns like logging. I believe that as we move toward higher-level concerns, we must make a clear distinction between pervasive concerns and cross-cutting concerns. A concern can be cross-cutting without being pervasive, and in this sense, for instance, I don't really agree that AOP is not for singletons (see my old post Some notes on AOP).
Also, I wouldn't dismiss the distinction between spectators and assistants so easily, especially because many pervasive concerns can be modeled as spectators. Overall, the paradigm seems indeed a little immature when you look at the long-term maintenance effects of aspects as they're known today.

Still, I think the time I've spent pondering on AOP was truly well spent. Actually, I would suggest that you spend some time learning about AOP too, even if you're not planning to use AOP in the foreseeable future.

I don't really mean learning a specific language - unless you want/need to try out a few things. I mean learning the concepts, the AOP perspective, the AOP terminology, the effects and side-effects of an Aspect Oriented solution.

I'm suggesting that you learn all that despite the obvious (or perhaps not so obvious) deficiencies in the current approaches and languages, the excessive hype and the underdeveloped concepts. I'm suggesting that you learn all that because it will make you a better designer.

Why? Because it will expand your mind. It will add a new, alternative perspective through which you can look at your problems. New questions to ask. New concepts. New names. Sometimes, all we need is a name. A beacon in the brainstorm, and a steady hand.

As I've said many times now, as designers we're shaping software. We can choose many shapes, and ideally, we will find a shape that is in frictionless contact with the forcefield. Any given paradigm will suggest a set of privileged shapes, at macro and micro-level. Including the aspect-oriented paradigm in your thinking will expand the set of shapes you can apply and conceive.

Time for a short war story :-). In the past months I've been thinking a lot about some issues in a large CAD system. While shaping a solution, I'm constantly getting back to what I could call aspect-thinking. There are many cross-cutting concerns to be resolved. Not programming-level concerns (like the usual, boring logging stuff). Full-fledged application-domain concerns that tend to cross-cut the principal decomposition.

Now, you see, even thinking in terms of "principal decomposition" and "cross-cutting" is already a first step into aspect-thinking. Then you can think about ways to bring those concerns inside the principal decomposition (if appropriate and/or possible and/or convenient) or think about the best way to keep them outside without code-level tangling. Tangling. Another interesting name, another interesting concept.

Sure, if you ain't using true AOP (for instance, we're using plain old C++), you'll have to give up some obliviousness (another name, another concept!), but it can be done, and it works fine (for a small-scale example, see part 1 and part 2 of my "Can AOP inform OOP?").
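Just to give a tiny, concrete flavor of what I mean, here is a minimal Java sketch (made-up names; the same idea carries over to C++). The cross-cutting concern lives in its own module and the core class stays untangled, but we do give up obliviousness by exposing an explicit hook:

```java
import java.util.ArrayList;
import java.util.List;

// Part of the principal decomposition. Not oblivious: it exposes an explicit
// hook, but it stays untangled from any specific cross-cutting concern.
class Drawing {
    private final List<ChangeListener> listeners = new ArrayList<>();

    void addChangeListener(ChangeListener l) { listeners.add(l); }

    void addEntity(String entity) {
        // ...real work on the drawing would go here...
        for (ChangeListener l : listeners) {
            l.changed(this, "addEntity: " + entity);
        }
    }
}

interface ChangeListener {
    void changed(Drawing source, String what);
}

// The cross-cutting concern lives in its own module, outside the principal
// decomposition; wiring happens in one place, at startup.
class AuditTrail implements ChangeListener {
    @Override
    public void changed(Drawing source, String what) {
        System.out.println("audit: " + what);
    }
}
```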

So far, the candidate shape is causing some discomfort. That's reasonable. It's not a "traditional" solution. Which is fine, because so far, tradition didn't work so well :-). Somehow, I hope the team will get out of this experience with a new mindset. Nobody used to talk about "principal decomposition" or "cross-cutting concern" in the company. And you can't control what you can't name.

I hope they will gradually internalize the new concepts, as well as the tactics we can use inside traditional languages. That would be a major accomplishment. Much more important than the design we're creating, or the tons of code we'll be writing. Well, we'll see...


Sunday, January 10, 2010 

Delaying Decisions

Since microblogging is not my thing, I decided to start 2010 by writing my longest post ever :-). It will start with a light review of a well-known principle and end up with a new design concept. Fasten your seatbelt :-).

The Last Responsible Moment
When we develop a software product, we make decisions. We decide about individual features, we make design decisions, we make coding decisions, we even decide which bugs we really want to fix before going public. Some decisions are taken on the fly; some, at least in the old school, are somewhat planned.

A key principle of Lean Development is to delay decisions, so that:
a) decisions can be based on (yet-to-discover) facts, not on speculation
b) you exercise the wait option (more on this below) and avoid early commitment

The principle is often spelled as "Delay decisions until the last responsible moment", but a quick look at Mary Poppendieck's website (Mary co-created the Lean Development approach) shows a more interesting nuance: "Schedule Irreversible Decisions at the Last Responsible Moment".

Defining "Irreversible" and "Last Responsible" is not trivial. In a sense, there is nothing in software that is truly irreversible, because you can always start over. I haven't found a good definition of "irreversible decision" in the literature, but I would define it as follows: if you make an irreversible decision at time T, undoing the decision at a later time will entail a complete (or almost complete) waste of everything that has been created after time T.

There are some documented definitions for "last responsible moment". A popular one is "The point when failing to decide eliminates an important option", which I found rather unsatisfactory. I've also seen some attempts to quantify that better, as in this funny story, except that in the real world you never have a problem which is that simple (very few ramifications in the decision graph) and that detailed (you know the schedule beforehand). I would probably define the Last Responsible Moment as follows: time T is the last responsible moment to make a decision D if, by postponing D, the probability of completing on schedule/budget (even when you factor in the hypothetical learning effect of postponing) decreases below an acceptable threshold. That, of course, allows us to scrap everything and restart, if schedule and budget allow for it, and in this sense it's kinda coupled with the definition of irreversible.

Now, irreversibility is bad. We don't want to make irreversible decisions. We certainly don't want to make them too soon. Is there anything we can do? I've got a few important things to say about modularity vs. irreversibility and passive vs. proactive option thinking, but right now, it's useful to recap the major decision areas within a software project, so that we can clearly understand what we can actually delay, and what is usually suggested that we delay.

Major Decision Areas
I'll skip a few very high-level, strategic decisions here (scope, strategy, business model, etc.). It's not that they can't be postponed, but I need to give some focus to this post :-). So I'll get down to the decisions we make more routinely.

People
Choosing the right people for the project is a well-known ingredient for success.

Approach/Process
Are we going XP, Waterfall, something in between? :-).

Feature Set
Are we going to include this feature or not?

Design
What is the internal shape (form) of our product?

Coding
Much like design, at a finer granularity level.

Now, "design" is an overly general concept. Too general to be useful. Therefore, I'll split it into a few major decisions.

Architectural Style
Is this going to be an embedded application, a rich client, a web application? This is a rather irreversible decision.

Platform
Goes somewhat hand in hand with Architectural Style. Are we going with an embedded application burnt into an FPGA? Do you want to target a PIC? Perhaps an embedded PC? Is the client a Windows machine, or do you want to support Mac/Linux? A .NET server side, or maybe Java? It's all rather irreversible, although not completely irreversible.

3rd-Party Libraries/Components/Etc
Are we going to use some existing components (of various scales)? Unless you plan on wrapping everything (which may not even be possible), this often ends up being an irreversible decision. For instance, once you commit yourself to using Hibernate for persistence, it's not trivial to move away.

Programming Language
This is the quintessential irreversible decision, unless you want to play with language converters. Note that this is not a coding decision: coding decisions are made after the language has been chosen.

Structure / Shape / Form
This is what we usually call "design": the shape we want to impose on our material (or, if you live on the "emergent design" side, the shape that our material will take as the final result of several incremental decisions).

So, what are we going to delay? We can't delay all decisions, or we'll be stuck. Sure, we can delay something in each and every area, but truth is, every popular method has been focusing on just a few of them. Of course, different methods tried to delay different choices.

A Little Historical Perspective
Experience brings perspective; at least, true experience does :-). Perspective allows us to look at something and see more than is usually seen. For instance, perspective allows us to look at the old, outdated, obsolete waterfall approach and see that it (too) was meant to delay decisions, just different decisions.

Waterfall was meant to delay people decisions, design decisions (which include platform, library, and component decisions) and coding decisions. The people decision was delayed by specialization: you only have to pick the analyst first; everyone else can be chosen later, when you know what you gotta do (it even makes sense :-)). The design decision was delayed because platforms, including languages, OSes, etc., were way more balkanized than today. Also, architectural styles and patterns were much less understood, and it made sense to look at a larger picture before committing to an overall architecture.
Although this may seem rather ridiculous from the perspective of a 2010 programmer working on Java corporate web applications, most of this stuff is still relevant for (e.g.) mass-produced embedded systems, where choosing the right platform may radically change the total development and production cost, yet choosing the wrong platform may over-constrain the feature set.

Indeed, open systems (another legacy term from late '80s - early '90s) were born exactly to lighten up that choice. Choose the *nix world, and forget about it. Of course, the decision was still irreversible, but granted you some latitude in choosing the exact hw/sw. The entire multi-platform industry (from multi-OS libraries to Java) is basically built on the same foundations. Well, that's the bright side, of course :-).

Looking beyond platform independence, the entire concept of "standard" allows us to delay some decisions. TCP/IP, for instance, allows me to choose modularly (a concept I'll elaborate later). I can choose TCP/IP as the transport mechanism, and then delay the choice of (e.g.) the client side, and focus on the server side. Of course, a choice is still made (the client must have TCP/IP support), so let's say that widely adopted standards allow for some modularity in the decision process, and therefore to delay some decisions, mostly design decisions, but perhaps others as well (like people).

It's already going to be a long post, so I won't look at each and every method/principle/tool ever conceived, but if you do your homework, you'll find that a lot of what has been proposed in the last 40 years or so (from code generators to MDA, from spiral development to XP, from stepwise refinement to OOP) includes some magic ingredient that allows us to postpone some kind of decision.

It's 2010, guys
So, if you ain't agile, you are clumsy :-)) and c'mon, you don't wanna be clumsy :-). So, seriously, which kind of decisions are usually delayed in (e.g.) XP?

People? I must say I haven't seen much on this. Most literature on XP seems based on the concept that team members are mostly programmers with a wide set of skills, so there should be no particular reason to delay decision about who's gonna work on what. I may have missed some particularly relevant work, however.

Feature Set? Sure. Every incremental approach allows us to delay decisions about features. This can be very advantageous if we can play the learning game, which includes rapid/frequent delivery, or we won't learn enough to actually steer the feature set.
Of course, delaying some decisions on feature set can make some design options viable now, and totally bogus later. Here is where you really have to understand the concept of irreversible and last responsible moment. Of course, if you work on a settled platform, things get simpler, which is one more reason why people get religiously attached to a platform.

Design? Sure, but let's take a deeper look.

Architectural Style: not much. Quoting Booch, "agile projects often start out assuming a given platform and environmental context together with a set of proven design patterns for that domain, all of which represent architectural decisions in a very real sense". See my post Architecture as Tradition in the Unselfconscious Process for more.
Seriously, nobody ever expected to start with a monolithic client and end up with a three-tier web application built around an MVC pattern just by coding and refactoring. The architectural style is pretty much a given in many contemporary projects.

Platform: sorry guys, but if you want to start coding now, you gotta choose your platform now. Another irreversible decision made right at the beginning.

3rd-Party Libraries/Components/Etc: some delay is possible for modularized decisions. If you wanna use Hibernate, you gotta choose pretty soon. If you wanna use Seam, you gotta choose pretty soon. Pervasive libraries are so entangled with architectural styles that it's relatively hard to delay some decisions here. Modularized components (e.g. the choice of a PDF rendering library) are simple to delay, and can be proactively delayed (see later).
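As a sketch of what proactively delaying a modularized decision can look like (hypothetical names, no real library implied): the rest of the system codes against a narrow, project-owned interface, and the concrete library is chosen, or swapped, behind a single factory.

```java
import java.io.OutputStream;

// A narrow, project-owned abstraction; the names are hypothetical.
interface PdfRenderer {
    void render(String documentId, OutputStream out);
}

// The only place that knows which concrete library we eventually commit to.
final class PdfRendererFactory {
    static PdfRenderer create() {
        // Start with a trivial stub; replace it with an adapter over the real
        // library (whichever we pick) at the last responsible moment.
        return (documentId, out) -> {
            throw new UnsupportedOperationException("PDF rendering not chosen yet");
        };
    }
}
```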

Programming Language: no way guys, you have to choose right here, right now.

Structure / Shape / Form: of course!!! Here we are. This is it :-). You can delay a lot of detailed design choices. Of course, we always postpone some design decision, even when we design before coding. But let's say that this is where I see a lot of suggestions to delay decisions in the agile literature, often using the dreaded Big Upfront Design as a straw man argument. Of course, the emergent design (or accidental architecture) may or may not be good. If I had to compare the design and code coming out of the XP Episode with my own, I would say that a little upfront design can do wonders, but hey, you know me :-).

Practicing
OK guys, what follows may sound a little odd, but in the end it will prove useful. Have faith :-).
You can get better at everything by doing anything :-), so why not get better at delaying decisions by playing Windows Solitaire? All you have to do is set the options in the hardest possible way:

Now, play a little, until you have to make some decision, like here:

I could move the 9 of spades or the 9 of clubs over the 10 of hearts. It's an irreversible decision (well, not if you use undo, but that's lame :-)). There are some ramifications for both choices.
If I move the 9 of clubs, I can later move the king of clubs and uncover a new card. After that, it's all unknown, and no further speculation is possible. Here, learning requires an irreversible decision; this is very common in real-world projects, but seldom discussed in literature.
If I move the 9 of spades, I uncover the 6 of clubs, which I can move over the 7 of diamonds. Then, it's kinda unknown, meaning: if you're a serious player (I'm not) you'll remember the previous cards, which would allow you to speculate a little better. Otherwise, it's just as above: you have to make an irreversible decision to learn the outcome.

But wait: what about the last responsible moment? Maybe we can delay this decision! Now, if you delay the decision by clicking on the deck and moving further, you're not delaying the decision: you're wasting a chance. In order to delay this decision, there must be something else you can do.
Well, indeed, there is something you can do. You can move the 8 of diamonds above the 9 of clubs. This will uncover a new card (learning) without wasting any present opportunity (it could still waste a future opportunity; life is tough). Maybe you'll get a 10 of diamonds under that 8, at which point there won't be any choice to be made about the 9. Or you might get a black 7, at which point you'll have a different way to move the king of clubs, so moving the 9 of spades would be a more attractive option. So, delay the 9 and move the 8 :-). Add some luck, and it works:

and you get some money too (total at decision time Vs. total at the end)


Novice solitaire players are also known to make irreversible decisions without necessity. For instance, in similar cases:

I've seen people eagerly moving the 6 of diamonds (actually, whatever they got) over the 7 of spades, because "that will free up a slot". Which is true, but irrelevant. This is a decision you can easily delay. Actually, it's a decision you must delay, because:
- if you happen to uncover a king, you can always move the 6. It's not the last responsible moment yet: if you do nothing now, nothing bad will happen.
- you may uncover a 6 of hearts before you uncover a king. And moving that 6 might be more advantageous than moving the 6 of diamonds. So, don't do it :-). If you want to look good, quote Option Theory, call this a Deferral Option and write a paper about it :-).

Proactive Option Thinking
I've recently read an interesting paper in IEEE TSE ("An Integrative Economic Optimization Approach to Systems Development Risk Management", by Michel Benaroch and James Goldstein). Although the real meat starts in chapter 4, chapters 1-3 are probably more interesting for the casual reader (including myself).
There, the authors recap some literature about Real Options in Software Engineering, including the popular argument that delaying decisions is akin to a deferral option. They also make important distinctions, like the one between passive learning through deferral of decisions and proactive learning, but also between responsiveness to change (a central theme in the agility literature) and manipulation of change (relatively less explored), and so on. There is a lot of food for thought in those 3 chapters, so if you can get a copy, I suggest that you spend a little time pondering over it.
Now, I'm a strong supporter of Proactive Option Thinking. Waiting for opportunities (and then reacting quickly) is not enough. I believe that options should be "implanted" in our project, and that can be done by applying the right design techniques. How? Keep reading :-).

The Invariant Decision
If you look back at those pictures of Solitaire, you'll see that I wasn't really delaying irreversible decisions. All decisions in solitaire are irreversible (real men don't use CTRL-Z). Many decisions in software development are irreversible as well, especially when you are on a tight budget/schedule, so starting over is not an option. Therefore, irreversibility can't really be the key here. Indeed, I was trying to delay Invariant Decisions. Decisions that I can take now, or I can take later, with little or no impact on the outcomes. The concept itself may seem like a minor change from "irreversible", but it allows me to do some magic:
- I can get rid of the "last responsible moment" part, which is poorly defined anyway. I can just say: delay invariant decisions. Period. You can delay them as much as you want, provided they are still invariant. No ambiguity here. That's much better.
- I can proactively make some decisions invariant. This is so important I'll have to say it again, this time in bold: I can proactively make some decisions invariant.

Invariance, Design, Modularity
If you go back to the Historical Perspective paragraph, you can now read it under a different... perspective :-). Several tools, techniques, methods can be adopted not just to delay some decision, but to create the option to delay the decision. How? Through careful design, of course!

Consider the strong modularity you get from service-oriented architecture, and the platform independence that comes through (well-designed) web services. This is a powerful weapon to delay a lot of decisions on one side or another (client or server).

Consider standard protocols: they are a way to make some decision invariant, and to modularize the impact of some choices.

Consider encapsulation, abstraction and interfaces: they allow you to delay quite a few low-level decisions, and to modularize the impact of change as well. If your choice turns out to be wrong, but it's highly localized (modularized), you may be able to afford undoing your decision, therefore turning irreversible into reversible. A bare-bones example can be found in my old post (2005!) Builder [pattern] as an option.
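Here is a bare-bones sketch in that spirit (made-up names, not the actual code from that post): clients depend only on abstract interfaces, so the concrete choice stays localized, and localized means cheap to undo.

```java
// The product and its builder are interfaces; clients never name a concrete type.
interface Report {
    String title();
}

interface ReportBuilder {
    ReportBuilder withTitle(String title);
    ReportBuilder addSection(String text);
    Report build();
}

// Client code depends on the interfaces only: picking (or changing) the concrete
// builder, plain text, HTML, whatever, stays a localized, reversible decision.
class ReportClient {
    Report buildSummary(ReportBuilder builder) {
        return builder.withTitle("Summary").addSection("...").build();
    }
}
```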

Consider a very old OOA/OOD principle, now somehow resurrected under the "ubiquitous language" umbrella. It states that you should try to reflect the real-world entities that you're dealing with in your design, and then in your code. That includes avoiding primitive types like integer, and creating meaningful classes instead. Of course, you have to understand what you're doing (that is, you gotta be a good designer) to avoid useless overengineering. See part 4 of my digression on the XP Episode for a discussion about adding a seemingly useless Ball class (that is: implanting a low cost - high premium option).
Names alter the forcefield. A named concept stands apart. My next post on the forcefield theme, by the way, will explore this issue in depth :-).
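A trivial illustration of the principle (this is not the Ball class from that post, just a made-up example): a tiny Speed class instead of a raw double.

```java
// A tiny domain type instead of a raw double: the name carries meaning, the
// unit is explicit, and the class is a natural home for future behavior.
final class Speed {
    private final double metersPerSecond;

    Speed(double metersPerSecond) { this.metersPerSecond = metersPerSecond; }

    double inMetersPerSecond() { return metersPerSecond; }
    double inKilometersPerHour() { return metersPerSecond * 3.6; }

    boolean isFasterThan(Speed other) { return metersPerSecond > other.metersPerSecond; }
}
```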

And so on. I could go on forever, but the point is: you can make many (but not all, of course!) decisions invariant, if you apply the right design techniques. Most of those techniques will also modularize the cost of rework if you make the wrong decision. And sure, you can try to do this on the fly as you code. Or you may want to do some upfront design. You know what I'm thinking.

OK guys, it took quite a while, but now we have a new concept to play with, so more on this will follow, randomly as usual. Stay tuned.


Tuesday, December 15, 2009 

A little more on DSM and Gravity

In a recent paper ("The Golden Age of Software Architecture" Revisited, IEEE Software, July/August 2009), Paul Clements and Mary Shaw conclude by talking about Conformance Checking. Indeed, although many would say that the real design/architecture is represented by code, a few :-) of us still think that code should reflect design, and that conformance of code to design should be automatically checked when possible (not necessarily in any given project; not all projects are equal).
Conformance checking is not always simple; quoting Clements and Shaw: "Many architectural patterns, fundamental to the system’s design taken forward into code, are undetectable once programmed. Layers, for instance, usually compile right out of existence."

The good news is that layers can be easily encoded in a DSM. In doing so, I would use an extension of the traditional yes/no DSM, as I've anticipated in a comment to the previous post. While the traditional DSM is basically binary (yes/no), in many cases we are better off with a ternary DSM. That way, we can encode three different decisions (a small sketch follows the list):
Yes-now: there is a dependency, and it's here, right now.
Not-now: there is no dependency right now, but it wouldn't be wrong to have one.
Never: adding this dependency would violate a fundamental design rule.
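Here is a minimal sketch of how a ternary DSM could be encoded and checked (hypothetical code, not any real tool):

```java
import java.util.Map;

// Three-valued cell: the dependency is there now, allowed later, or forbidden.
enum Cell { YES_NOW, NOT_NOW, NEVER }

class TernaryDsm {
    // cells.get(from).get(to); missing entries default to NOT_NOW.
    private final Map<String, Map<String, Cell>> cells;

    TernaryDsm(Map<String, Map<String, Cell>> cells) { this.cells = cells; }

    // A dependency found in code violates the design only if the cell says NEVER;
    // a NOT_NOW cell simply becomes a YES_NOW when the dependency shows up.
    boolean isViolation(String from, String to) {
        Cell cell = cells.getOrDefault(from, Map.of()).getOrDefault(to, Cell.NOT_NOW);
        return cell == Cell.NEVER;
    }
}
```

A conformance checker would then extract the actual dependencies from code and call isViolation on each pair; a NEVER cell should prompt a design review.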

A strong layered system requires some kind of isolation between layers. Remember gravity: new things are naturally attracted to existing things.
Attraction is stronger in the direction of simplicity and lack of effort: if no effort is required to violate architectural integrity, sooner or later it will be violated. Sure, conformance checking may help, but it would be better to set up the gravitational field so that things are naturally attracted to the right place.

The real issue, therefore, is the granularity of the DSM for a layered system. Given the fractal nature of software, a DSM can be applied at any granularity level: between functions, classes, "logical" components, "physical" components. Unless your system is quite small, you probably want to apply the DSM at the component level, which also means your layers should appear at the component level.

Note the distinction between logical and physical components. If you're working in a modern language/environment (like .NET or Java), creating a physical component is just a snap. Older languages, like C++, never got the idea of component into the standard, for a number of reasons; in fact, today this is one of the most limiting factors when working on large C++ systems. In that case, I've often seen designers/programmers creating "logical" components out of namespaces and discipline. I've done that myself too, and it kinda works.

Here is the catch: binary separation between physical components is stronger than the logical separation granted by using different namespaces, which in turn is stronger than the separation between two classes in the same namespace, which is much stronger than the separation between two members of the same class.
More exactly, as we'll see in a forthcoming post, a binary component may act as a better shield and provide stronger isolation.

If a binary component A uses binary component B, and B uses binary component C, but does not reveal that in its interface (that is, public/protected members of public classes in B do not mention types defined in C), then A knows precious little about C.
Using C from A requires that you discover C's existence, then the existence of some useful class inside C. Most likely, to do so, you have to look inside B. At that point, adding a new service inside B might just be more convenient. This is especially true if your environment does not provide you with free indirect references (that is, importing B does not inject a reference to C "for free").
Here is again the interplay between good software design and properly designed languages: a better understanding of software forces could eventually help to design better languages as well, where violating a design rule should be harder than following the rule.
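In code, the shielding looks roughly like this (a bare-bones Java sketch with invented names):

```java
// Inside component C (a separate binary): something B uses internally.
class LegacyDb {
    String rawLookup(String orderId) { return "raw:" + orderId; }
}

// Inside component B (another binary): B's public surface mentions only B's own
// types, so a client component A never learns about LegacyDb (i.e. C) from B.
class OrderSummary {
    private final String data;
    OrderSummary(String data) { this.data = data; }
    String data() { return data; }
}

class OrderService {
    private final LegacyDb db = new LegacyDb(); // C stays an implementation detail of B

    OrderSummary find(String orderId) {
        // C's types are translated into B's own types before crossing the boundary
        return new OrderSummary(db.rawLookup(orderId));
    }
}
```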

Now, if A and B are logical components (inside a larger, physical component D), then B won't usually act as a shield, mostly because the real (physical) dependency will be placed between D and C, not between B and C. Whatever B can access, A can access as well, without any additional effort. The gravitational field of B is weaker, and some code might be attracted to A, which is not what the designer wanted.

Therefore, inasmuch as your language allows you to, a physical component is always the preferred way to truly isolate one system from another.

OK, this was quite simple :-). Next time, I'll go back to the concept of frequency and then move to isolation!


Wednesday, October 21, 2009 

The Dependency Structure Matrix

Design is about making decisions; diagrams encode some of those decisions. Consider this simple component diagram:



We have 3 "physical" components (e.g. DLLs): X, C, D. X is further partitioned into 2 logical components: in this real-world case, the designer used namespaces to identify separate logical components inside a single physical component. The designer is also telling us that A and B depend on D, B depends on C, and C depends on D. So far, so good.

UML diagrams, however, cannot easily convey some part of the reasoning. In a sense, to fully grasp the designer's intention, we have to understand not only what is in the diagram, but also what is not in the diagram. This may seem unusual, but is easily explained. Consider the picture above again. There is no dependency between A and C. Now, maybe A doesn't currently need to access C (and therefore there is no dependency) but if we need to access C from A tomorrow, it's just fine to add a dependency. Or maybe the designer's intent was to shield A from C, possibly using B as a man-in-the-middle.
That's not obvious from the diagram, and there is no place in the diagram to say that (not with a formal, standard UML syntax). Of course, good names may help. Replacing B with something more meaningful, maybe mentioning a bridge or proxy pattern, may suggest that A is not supposed to interact with C.

Is there a better way? Maybe something that can be actually checked against code? Checking code compliance with diagrams may seem so passé or even plain absurd, given the current trend of discarding diagrams and/or reverse-engineering diagrams from code. Still, here is a real-world story:
The design above (which is, of course, largely simplified) was handed out from the original designers-implementers to a larger (offshore) team. They explained some of the design rationale (informally), and after a while, they left the company. Months later, the offshore team needed a new service from C inside A, so they did the simplest thing that can possibly work: they called C from A. After all, A and B are inside the same physical component. Whatever B can do, A can do too.
Unfortunately, a cornerstone of the original design was that A should never talk to C. The dependency was not in the diagram, because it was not supposed to exist, ever.

The team manager knew that, but given the size of the real X (about 500 KLOC) she couldn't possibly review all the changes from the offshore team. Of course, at least someone in the offshore team didn't fully grasp the designer's intent.

So, back to the original question: is there a better way? I could say "a forcefield diagram" :-), but in this specific case, there is also a well-known engineering tool: the Dependency Structure Matrix (also known as the Design Structure Matrix). A DSM encodes dependencies between "things". Not just dependencies, but also forbidden dependencies. See the following picture:



The 5 green "Y" cells correspond to the 5 existing dependencies; the "N" cells correspond to the "missing" dependencies, but they say something more: that those dependencies are forbidden. Now, this is a useful piece of information, something that can be easily checked against code. That does not mean that we can't change the design: it simply means we don't want to change the design inadvertently, just by typing in some code that was not supposed to be there. Checking code against the abstract design should just prompt a review; the design could be wrong, in which case, it should be changed (along with the DSM).
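Just to make the checking idea concrete, here is a toy sketch in Java (not a real tool; I'm assuming the five allowed dependencies are A->B, A->D, B->C, B->D, C->D):

```java
import java.util.Set;

// A toy conformance check, not a real tool: the DSM is reduced to a set of
// allowed dependencies; anything else found in code is flagged for review.
class DsmCheck {
    // The five "Y" cells; every other pair is an "N", i.e. forbidden.
    static final Set<String> ALLOWED = Set.of("A->B", "A->D", "B->C", "B->D", "C->D");

    static void check(Set<String> foundInCode) {
        for (String dep : foundInCode) {
            if (!ALLOWED.contains(dep)) {
                System.out.println("DSM violation, please review: " + dep);
            }
        }
    }

    public static void main(String[] args) {
        // The offshore change: calling C from A shows up as an A->C dependency.
        check(Set.of("A->B", "B->C", "A->C"));
    }
}
```

Running it flags A->C, which is exactly the change that should have prompted a review.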

There is some interesting literature about DSM in software, most from Baldwin and Clark of "Design Rules" fame, but also from others (like one I mentioned back in 2005). There are also quite a few tools to reverse-engineer a DSM from code, which makes checking code against the designed DSM relatively trivial (the bad side is that some languages, like C++, are notably hard to reverse engineer, so tools are lacking; Java and C# have both free and commercial tools available). I'm not aware of any UML tool that can generate a DSM from the diagrams, but that's theoretically trivial, and could even be built as a plug-in for some CASE tools.

As usual, there is more to say about the DSM, gravity, and the forcefield. I'll save that for my next post!


Monday, October 05, 2009 

A ForceField Diagram

The Design Rationale Diagram I discussed in my previous post is hardly complete, and it could be vastly improved by asking slightly different questions, leading to different decision paths. Still, it's a reasonable first-cut attempt to model the decision process. It can be used to communicate the reasoning behind a specific decision, in a specific context.

That, however, is not the way I really think. Sure, I can rationalize things that way, but it's not the way I store, recall, organize information inside my head. It's not the way I see the decision space.
In the end, software design is about things going together and things staying apart, at all the granularity levels (see also my post on partitioning).
As I progress in my understanding of forces, I tend to form clusters. Clusters are born out of attraction and rejection inside the decision space. I've found that thinking this way helps me reach a better understanding of my design instinct, and to communicate my thoughts more clearly.

Now, although I've been thinking about this for a long while (not full-time, lucky me :-), I can't say I have found the perfect representation. The decision space is inherently multi-dimensional, and I always end up needing more dimensions than I can fit either in 2D or 3D. Over time, I tried several notations, inventing things from scratch or borrowing from other domains. Most were dead ends. In the end, I've chosen (so far :-) a very simple representation, based on just 3 concepts (possibly 4 or 5).

- nodes
Nodes represent information, which is our material. Information has fractal nature, and I don't bother if I'm mixing up levels. Therefore, a node may represent a business goal, or the adoption of a tool or library, or a nonfunctional requirement, or a specific component, class, function. While most methods are based on a strict separation of concepts, I find that very limiting.

- an attraction relationship
Nodes can attract each other. For instance, a node labeled "reliable" may attract a node labeled "redundant" when reasoning about the large display problem. I just connect the two nodes using a thick line with little "hands" on the ends. I place attracted nodes close to each other.

- a rejection relationship
Nodes can reject each other. For instance, stateful most clearly rejects stateless :-). Some technology might be at odds with another. A subsystem must not depend on another. And so on. Nodes that reject each other are placed at some distance.

It's all very simple and unsophisticated. Here is an example based on the large display problem, inspired by the discussion on design rationale:



and here are two diagrams I've used in real-world projects recently, scaled down to protect the innocent:





The relationship between a node, a cluster, and an Alexandrian center is better left for another time. Still, a node in one diagram may represent an entire cluster, or an entire diagram. Right now I'm tempted to use a slightly different symbol (which would be the fourth) to represent "expandable" nodes, although I'm really trying to keep symbols to a bare minimum. I'm also using colors, but so far in a very informal way.

As simple as it is, I've found this diagram to be very effective as a reasoning device, while too many diagrams end up being mere documentation devices. I can sit in front of my (large :-) screen, think and draw, and the drawing helps me focus. I can draw this on a whiteboard in a meeting, and everyone gets up to speed very quickly on what I'm doing.

This, however, is just half the story. We can surely work with informal concepts and diagrams, and that's fine, but what I'm trying to do is to add precision to the diagram. Precision is often confused with details, like "a class diagram is more precise if you show all the parameters and types". I'm not looking for that kind of "precision". Actually, I don't want this diagram to be redundant with code at all; we already have many code-like diagrams, and they all go down the same roads (generate code from diagrams or generate diagrams from code). I want a reasoning device: when I want to code, I'm comfortable with code :-).

I mostly want to add precision about relationships. Why, for instance, is there an attraction between Slow Client and Stateful? Informally, because if we have a stateful system, the slow client can poll on its own terms, or alternatively, because the client may use a sophisticated subscription based on the previous state. Those options, by the way, could be represented on the forcefield diagram itself (adding more nodes, or a nested diagram); but that's still the "informal" reasoning. Can we make it any more formal, precise, grounded on sound principles?

This is where the ongoing work on concepts like gravity, frequency, and so on kicks in. Slow Client and Stateful are attracted because on a finer granularity (another, perhaps better, diagram) "Slow Client" means a publisher and a subscriber operating at different frequencies, and a stateful repository is a well-known strategy (a pattern!) to provide Isolation between systems operating at different frequencies (together with synchronization or transactions).

Now, I haven't introduced the concept of Isolation yet (though I mentioned something on my Facebook page :-), so this is sort of a spoiler :-)), but in the end I hope to come up with a simple reasoning system, where you can start with informal concepts and refine nodes and forces until you reach the "universal", fractal forces I'm discussing in the "Notes on Software Design" posts. That would give a solid ground to the entire diagram.

A final note on the forcefield diagram: at this stage, I'm just using Visio, or more exactly, I'm abusing some stencils in the Visio library. I wanted something relatively organic, mindmap-like. Maybe one day I'll move back to some 3D ideas (molecular structures come to mind), but I've yet to see how this scales to newer concepts, larger problems, and so on. If you want to play with it, I can send you the VSS file with the stencils.

Ok, I'll get back to Frequency (and Interference and Isolation and more :-) soon. Before that, however, I'd like to take a diversion on the Dependency Structure Matrix. See ya!


Monday, August 31, 2009 

Representing Design Rationale Inside Activity Diagrams

Design is about making choices. We often do so on the fly, leaning on experience and intuition, by talking about the problem with colleagues, or borrowing from literature (e.g. patterns). We also make some choices by habit, which is a different form of experience, one that has a higher risk of becoming disconnected from the real problem.
Most of this process is tacit, and even when we discuss choices openly, it doesn't get recorded. Sometimes, a list of pros/cons is made when there is some disagreement about the best option.

This all works well when the problem is simple, but sometimes even experienced designers feel like they're not grasping the essential issues, that something has not yet been found, named, disentangled. This is when having yet one more tool can prove useful.
Now, I don't usually go through the effort to model and transcribe the rationale behind each and every design choice I make. It could be interesting, also from a pedagogical point of view, but it would take a lot of time and would probably disrupt my thought processes. However, when the issues are particularly thorny/unclear, or when there is a large disagreement on the best choice (or even on the goals and criteria), I've found that getting design rationale out of our individual heads and talking over a shared representation can move things a little forward.

Over the years, I've tried out a number of tools, approaches, and so on; lately, I've tried using Activity Diagrams in a rather unorthodox way, to represent my reasoning about design, not design itself. The idea is not to encode your decisions ex-post, but ex-ante, while you're thinking (that is, while they're still options, not decisions). Also, the diagram must be considered quite fluid, as it shows our current understanding, and we're building the diagram to improve our understanding.

Enough talk, let's see a realistic example. I'll refer to the Large Display problem I discussed a few months ago. Actually, I'll just cover the initial choice between using a real-time database or an IPC/messaging system. It's gonna be quite a mouthful anyway!

To start, I'll have to draw a line between a messaging system and a RTDB, and that in itself is not easy. I'll go for a very simple distinction, because my goal here is not really to talk about RTDBs, but about design rationale (the usual "look at the moon, not at the finger" concept).
So, consider a control system that reads some data from the field and then needs to publish those data for other processes. It could just send data through a messaging (publish/subscribe) system. Here I define a messaging system as stateless, meaning it simply keeps track of subscriptions, and sends everything that is published to the subscribers (according to some criteria, like message type or tag). It does not keep a history, or a snapshot of what has been last sent. Therefore, it cannot apply some filters, like "notify me only if the difference between the previous value and the current value is above a threshold", because the previous value is just not stored. Also, when a subscriber is started, it cannot get the current snapshot of the system, because it is not there: it will have to wait for messages to come, incrementally. In short, a RTDB will keep a snapshot of the system, and well, you can figure out the difference. Of course, a RTDB is also more complex.
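To keep the distinction in front of us, here are two bare-bones Java interfaces (purely hypothetical, just to anchor the discussion):

```java
// Stateless messaging: the middleware only routes; no history, no snapshot.
interface MessageBus {
    void publish(String topic, double value);
    void subscribe(String topic, Subscriber s); // filtering limited to topic/type/tag
}

interface Subscriber {
    void onUpdate(String topic, double value);
}

// Real-time database: keeps the last value per tag, so it can serve snapshots,
// support client-initiated polling, and apply stateful filters.
interface RealTimeDb {
    void write(String tag, double value);
    double read(String tag); // current snapshot, available at client startup
    void subscribe(String tag, double relativeThreshold, Subscriber s);
}
```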

So, how do we choose between a messaging system and a RTDB? We may write down a long list of pro/cons, but that's really unstructured, and that's not the way our brain works. To provide more structure, I use an Activity Diagram with orthogonal swimlanes (all the following pictures are taken from Star UML, a free tool that is rather fast and unobtrusive).
The vertical swimlanes are flexible: they represent the main concerns. The horizontal swimlanes are fixed: they provide structure.


For the Large Display problem, we could start with a few main concerns like performance, reliability, cost, and so on. We just drop the names on the vertical swimlanes. My template is then partitioned into 3 horizontal bands: the root question, the reasoning, the outcome. Everything inside is dynamic, and changes as we understand more: even the root questions may change, as we discover larger or smaller, independent problems. Sometimes, even the main concerns change, as we discover options or issues we didn't consider before.

We can focus on just one concern right now, let's say performance (don't we all like performance? :-). A first-cut, interesting top-level question could be: is the published data rate high or low? If the rate is low and we have no persistent state, when you turn on the large display you see nothing: you have to wait till some data gets published. On the other hand, if the rate is high, it may even overwhelm the display system: there is little need to refresh a value a thousand times per second. That actually depends on the display: if it's a real-time plot, you may want a high refresh rate too.

Ok, we could start modeling this part of our reasoning using the familiar activity diagram symbols. Actually, since most of the nodes here would be decision nodes, I just omit the diamond and use an activity node with multiple outgoing paths to show choices.


Note: The empty boxes are just placeholders for some later reasoning. It's just laziness on my side :-) and they wouldn't appear in a real diagram.

Now, this seems just like a decision tree, but it's slightly different. First, it's a decision graph: common choices between paths are shared, and this is precious information because it shows crucial choices (more on this later). Second, it's a multifaceted graph: every vertical swimlane shows a facet of a more complex reasoning; for instance, what is good for performance might not be good for reliability or cost.

Let's try to move ahead a little. When the incoming data rate is higher than what [most] clients need, we have basically two choices:
1) smarter subscriptions; they could still be rather dumb, like "no more than 3 times per second", or much smarter, like "when relative change is higher than 5%, but no more than 5 times per second" (both flavors are sketched below). Note that the latter is more suited to a RTDB than to a stateless messaging system.
2) change paradigm and move to client-initiated polling. The clients will ask for data with their own timing. Of course, at this point we give up the possibility of not asking for data if the value has not changed. Anyway, this again requires some kind of stateful middleware; a messaging system won't do.
When data rate is low, but high startup time for clients is not an option, we can't wait for data to come: we have to poll, at least at startup. So, polling can solve two problems, of course at the expense of bandwidth if it is the only available option.
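Here is a small Java sketch of the two flavors of subscription filter mentioned above (hypothetical code): the rate limit only needs a timestamp per subscription, while the relative-change filter needs the last published value, which is exactly the kind of state a stateless messaging system, as defined earlier, does not keep.

```java
// "No more than one notification every N milliseconds": needs only a timestamp.
class RateLimitFilter {
    private final long minIntervalMillis;
    private long lastSentMillis = Long.MIN_VALUE / 2; // avoid overflow on first check

    RateLimitFilter(long minIntervalMillis) { this.minIntervalMillis = minIntervalMillis; }

    boolean shouldNotify(long nowMillis) {
        if (nowMillis - lastSentMillis >= minIntervalMillis) {
            lastSentMillis = nowMillis;
            return true;
        }
        return false;
    }
}

// "Relative change above 5%": needs the last published value, i.e. state that a
// stateless messaging system (as defined above) simply does not keep.
class RelativeChangeFilter {
    private final double threshold; // e.g. 0.05 for 5%
    private Double lastSent = null;

    RelativeChangeFilter(double threshold) { this.threshold = threshold; }

    boolean shouldNotify(double value) {
        if (lastSent == null || Math.abs(value - lastSent) > threshold * Math.abs(lastSent)) {
            lastSent = value;
            return true;
        }
        return false;
    }
}
```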



While drawing this, we may come to the conclusion that we need to ask better questions: are we building a publisher-driven or a client-driven system? If it's client-driven, it cannot be stateless! What do we really know about clients? How many will there be? What about publishers? What is the typical data rate and configuration? What are we aiming for? Do we need to narrow the expectations? This might change the top question (client vs. publisher driven) or even some concern. That's fine: it means the technique is working :-) and that it's helping us think.

Now, it would take quite a lot of time to explore all the facets of even a simple system like this. Actually, most people won't even do it in real life: they will fall in love with one idea, spend most of their time preaching and rationalizing about the virtues of their idea, and never really take the time to go through this kind of process. Still, trying to work out the "Reliability" swimlane would prove interesting. For instance, a common technique to achieve reliability is redundancy. Redundancy is much easier for a stateless system. Redundancy is easier when clients don't have to subscribe at all, but can simply poll. And so on. If you have some spare time, you may want to give it a try.

The notation I use is quite informal. I could improve that easily: UML is fairly flexible; so far I didn't, because people can grasp it anyway, even when I drop in the << or >> to represent options, or when I have just one arrow coming out, meaning that I've just decomposed a choice and a consequence. It's just a reasoning workflow, and I haven't felt the need to make it any more precise than that.

Back to the forcefield: the rationale is not the forcefield. The rationale, however, is talking about forces and centers. Outcomes (messaging and RTDB) are centers. Main choices, like "client driven" or "stateless", are again centers. Those centers are attracting or rejecting each other. This is the forcefield. This is closer to the way I think in the back of my mind, how I "see" the system, how I keep options open. Now, I just need a way to show this. That's for my next post :-).


Wednesday, July 08, 2009 

When in doubt, do the right thing

The bright side of spending most of my professional time on real-world projects is that I have an endless stream of inspiration, and what is even more important, the possibility of trying out new ideas, concepts, and methods. The dark side is that the same source of inspiration is taking away the precious time I would need to encode, structure, articulate knowledge, which therefore remains largely implicit, tacit, intuitive. The pitch black side is that quite often I'd like to share some real-world story, but I can't, as the details are kinda classified, or held back to protect the innocent. Sometimes, however, the story can be told with just a little camouflage.

Weeks ago, I was trying to figure out the overall architecture of a new system, intended to replace an obsolete framework. I could see a few major problems, two of which were truly hard to solve without placing a burden on everyone using the framework. Sure, we had other details to work out, but I could see no real showstoppers except for those two. The project manager, however, didn't want to face those problems. She wanted to start with the easy stuff, basically re-creating structures she was familiar with. I tried to insist about the need to figure out an overall strategy first, but to no avail. She wanted progress, right here, right now. That was a huge mistake.

Now, do not misunderstand me: I'm not proposing to stop any kind of development before you work every tiny detail out. Also, in some cases, the only real way to understand a system is by building it. However, building the wrong parts first (or in this case, building the easy parts first) is always a big mistake.

Expert designers know that in many cases, you have to face the most difficult parts early on. Why? Because if you do it too late, you won't have the same options anymore; previous decisions will act like constraints on late work.

Diomidis Spinellis has recently written a very nice essay on this subject (IEEE Software, March/April 2009). Here is a relevant quote: On a blank sheet of paper, the constraints we face are minimal, but each design decision imposes new restrictions. By starting with the most difficult task, we ensure that we’ll face the fewest possible constraints and therefore have the maximum freedom to tackle it. When we then work on the easier parts, the existing constraints are less restraining and can even give us helpful guidance.

I would add more: even if you take the agile stance against upfront design and toward emergent design, the same reasoning applies. If you start with the wrong part, the emergent design will work against you later. Sure, if you're going agile, you can always refactor the whole thing. But this reasoning is faulty, because in most cases, the existing design will also limit your creativity. It's hard to come up with new, wild ideas when those ideas conflict with what you have done up to that moment. It's just human. And yeah, agile is about humans, right? :-)

Expert designers start with the hard parts, but beginners don't. I guess I can quote another nice work, this time from Luke Hohmann (Journey of the Software Professional - a Sociology of Software Development): Expert developers do tend to work on what is perceived to be the hard part of the problem first because their cognitive libraries are sufficiently well developed to know that solving the "hard part first" is critical to future success. Moreover, they have sufficient plans to help them identify what the hard part is. Novices, as noted, often fail to work on the hard-part-first for two reasons. First, they may not know the effectiveness of the hard part first strategy. Second, even if they attempt to solve the hard part first, they are likely to miss it.

Indeed, an expert analyst, or designer, knows how to look at problems, how to find the best questions before looking for answers. To do this, however, we should relinquish preconceived choices. Sure, experts bring experience to the table, hopefully in several different fields, as that expands our library of mental plans. But (unlike many beginners) we don't approach the problem with pre-made choices. We first want to learn more about the forces at play. Any choice is a constraint, and we don't want artificial constraints. We want to approach the problem from a clean perspective, because freedom gives us the opportunity to choose the best form, as a mirror of the forcefield. By the way, that's why zealots are often mediocre designers: they come with too many pre-made choices, or as a Zen master would say, with a full cup.

Of course, humans being humans, it's better not to focus exclusively on the hard stuff. For instance, in many of my design sessions with clients, I try to focus on a few simple things as we start, then dig into some hard stuff, switch back to something easy, and so on. That gives us a chance to take a mental break, reconsider things in the back of our mind, and still make some progress on simpler stuff. Ideally, but this should be kinda obvious by now, the easy stuff should be chosen to be as independent/decoupled as possible from the following hard stuff, or we would be back to square one :-).

In a sense, this post is also about the same thing: writing about some easy stuff, to take a mental break from the more conceptual stuff on the forcefield. While, I hope, still making a little progress in sharing some useful design concept. See you soon!


Tuesday, June 09, 2009 

Design Rationale

In the past few weeks I've taken a little time to write down more about the concept of frequency; while doing so, I realized I had to explore the concept of forcefield better, and while doing so (yeap :-)) I realized there was a rather large overlap between the notion of forcefield and the notion of design rationale.

Design rationale extends beyond software engineering, and aims to capture design decisions and the reasoning behind those decisions. Now, design decisions are (ideally) taken as trade-offs between several competing forces. Those forces create the forcefield, hence the large overlap between the two subjects.

The concept of design rationale has been around for quite a few years, but I haven't seen much progress either in tools or notations. Most often, tools fall into the “rationalize after the fact” family, while I'm more interested in reasoning tools and notations, that would help me (as a designer) get a better picture about my own thoughts while I'm thinking. That resonates with the concept of reflection in action that I've discussed in Listen to Your Tools and Materials a few years ago.

So, as I was reading a recent issue of IEEE Software (March/April 2009), I found a list of recent (and not so recent) tools dealing with design rationale in a paper by Philippe Kruchten, Rafael Capilla, Juan Carlos Dueñas (The Decision View’s Role in Software Architecture Practice), and I decided to take a quick ride. Here is a very quick summary of what I've found.

Seurat
Seurat (see also the PDF tutorial on the same website) is based on a very powerful language / model, but the tool (as implemented) is very limiting. It's based on a tree structure, which makes for a nice todo list, but makes visual reasoning almost impossible. Actually, in the past I've investigated using the tree format myself (and while doing so, I discovered others have done the same: see for instance the Reasoning Tree pattern), but restricting visualization to (hyperlinked) nodes in a tree just does not work when you're facing difficult problems.

Sysiphus
Sysiphus seems to have recently morphed into another tool (UniCase), but from the demo of UniCase it's hard to appreciate any special support for design rationale (so far).

AREL
(see also some papers from Antony Tang on the same page; Antony also had an excellent paper on AREL in the same issue of IEEE Software)
AREL is integrated with Enterprise Architect. Integration with existing case tools (either commercial or free) seems quite a good idea to me. AREL uses a class diagram (through a UML profile) to model design rationale, so it's not limited to a tree format. Still, I've found the results rather hard to read. It seems more like a tool to give structure to design knowledge than a tool to reason about design. As I go through the examples, I have to study the diagram; it doesn't just talk back to me. I have to click around and look at other artifacts. The reasoning is not in the diagram, it's only accessible through the diagram.

PAKME
Honestly, PAKME seems more like an exercise in building a web-based collaboration tool for software development than a serious attempt at providing a useful / usable tool to record design rationale. It does little more than organize artifacts, and it requires so many clicks / page refresh to get anything done that I doubt a professional designer could ever use it (sorry guys).

ADDSS
ADDSS is very much like PAKME, although it adds a useful Patterns section. It's so far from what I consider a useful design tool (see my previous posts for more) that I can't really think of using it (sorry, again).

Knowledge Architect
Again, a tool with some good ideas (like Word integration) but far from what I'm looking for. It's fine to create a structured design document, but not to reason about difficult design problems.

In the end, it seems like most of those tools suffer from the same problems:
- The research is good; a nice metamodel is built, some of the problems faced by professional designers seem to be well understood.
- The tool does little more than organize knowledge; it would get in the way of a designer thinking about thorny issues, does not help through visualization, and is at best useful at the end of the design process, possibly to fake some rationality, à la Parnas/Clements.

That said, AREL is probably the most promising tool of the pack, but in the end I've been doing pretty much the same for years now, using (well, abusing :-) plain old use case diagrams to model goals and issues, with a few ideas taken from KAOS and the like.

Recently, I began experimenting with another standard UML diagram (the activity diagram) to model some portion of design reasoning. I'll show an example in my next post, and then show how we can change our perspective and move from design reasoning to the forcefield.


Tuesday, June 02, 2009 

Good Design

I rarely (if ever) blog about technology, mostly because once you cut the marketing cr@p, consumer technology is often so moot. Still, a few days ago I read about local dimming in the news section of IEEE Computer. A good designer should be quick to spot good (or intriguing) design, and that idea struck me as an excellent use of technology.

It's also interesting to look at it from a forcefield perspective. CCFLs had several drawbacks as light sources for LCD displays. Some of those issues have been resolved using LED backlighting instead, but if we stop there, we're just using new technology to solve the exact same problem we solved with yesterday's technology. That's usually the wrong approach, as the old technology was part of a larger design, a larger forcefield, and it managed to resolve only some of those forces.

Back to local dimming, the idea is amazingly simple from the forcefield perspective: instead of using lamps for lighting and the LCD for contrast, color, etc., split some of the work between the LEDs and the LCD. This can be done because once we introduce a LED matrix, the forcefield itself changes. This has long been known: when we introduce technology, we can even change the problem itself.

Of course, we face similar issues in software all the time. I wrote something along the same lines in IEEE Software back in 1997 (When Past Solutions Cause Future Problems). I wasn't talking forcefield back then, but the "ask why" suggestion is very much forcefield friendly. More on this shortly, as I'm trying to catch up with many ideas I didn't have time to blog about, and write them down in small chunks...


Sunday, May 10, 2009 

Interesting paper

While looking for something else, I stumbled on a paper with an intriguing title: The Ambiguity Criterion in Software Design by Álvaro García and Nelson Medinilla.

I encourage readers interested in the concepts of design and form to take a look. Although I don't really like the term "ambiguity" (it makes for a catchy title, but it's commonly used with quite different semantics) I think the paper is dealing with an interesting, pervasive attribute of software.

If you have read my previous posts on software design, you may recognize (although not spelled out that way) the [almost] fractal nature of "ambiguity". Actually, as I spoke of "n-degrees of separation" in a previous post, I had some overlapping concepts in mind. Curiously enough, subtyping is also mentioned in another article I recommended some time ago about symmetry and symmetry breaking.

I think there is something even more primitive than that at play here, something more fractal in nature, something that has to do with names and identities or (as the authors note) abstractions and instances. I also mentioned a problem with compile-time names in the post above, so there is a lot of stuff pointing in the same direction!

I have to think more about that, but first I'll have to write down what's left about frequency...


Sunday, April 26, 2009 

Bad Luck, or "fighting the forcefield"

In my previous post, I used the expression "fighting the forcefield". This might be a somewhat uncommon terminology, but I used it to describe a very familiar situation: actually, I see people fighting the forcefield all the time.

Look at any troubled project, and you'll see people who made some wrong decision early on, and then stood by it, digging and digging. Of course, any decision may turn out to be wrong. Software development is a knowledge acquisition process. We often take decisions without knowing all the details; if we didn't, we would never get anything done (see analysis paralysis for more). Experience should mitigate the number of wrong decisions, but there are going to be mistakes anyway; we should be able to recognize them quickly, backtrack, and take another way.

Experience should also bring us in closer contact with the forcefield. Experienced designers don't need to go through each and every excruciating detail before they can take a decision. As I said earlier, we can almost feel, or see the forcefield, and take decisions based on a relatively small number of prevailing forces (yes, I dare to consider myself an experienced designer :-).
This process is largely unconscious, and sometimes it's hard to rationalize all the internal reasoning; in many cases, people expect carefully argued explanations, while all we have to offer on the fly is aesthetics. Indeed, I'm often very informal when I design; I tend to use colorful expressions like "oh, that sucks", or "that brings bad luck" to indicate a flaw, and so on.

Recently, I've found myself saying that "bad luck" thing twice, while reviewing the design of two very different systems (a business system and a reactive system), for two different clients.
I noticed a pattern: in both cases, there was a single entity (a database table, an in-memory structure) storing data with very different timing/life requirements. In both cases, my clients were a little puzzled, as they thought those data belonged together (we can recognize gravity at play here).
Most naturally, they asked me why I would keep the data apart. Time to rationalize :-), once again.

Had they all been more familiar with my blog, I would have pointed to my recent post on multiplicity. After all, data with very different update frequency (like: the calibration data for a sensor, and the most recent sample) have a different fourth-dimensional multiplicity. Sure, at any given point in time, a sensor has one most recent sample and one set of calibration data; therefore, in a static view we'll have multiplicity 1 for both, suggesting we can keep the two of them together. But bring in the fourth dimension (time) and you'll see an entirely different picture: they have a completely different historical multiplicity.

Different update frequencies also hint at the fact that data is changing under different forces. By keeping together things that are influenced by more than one force, we expose them to both. More on this another time.

Hard-core programmers may want more than that. They may ask for more familiar reasons not to put data with different update frequencies in the same table or structure. Here are a few:

- In multi-threaded software, in-memory structures require locking. If your structure contains data that is seldom updated, that means it's being read more than written (if it's seldom read and seldom written, why keep it around at all?).
Unfortunately, the high-frequency data is written quite often. Therefore, either we accept to slow down everything using a simple mutex, or we aim for higher performance through a more complex locking mechanism (a reader/writer lock), which may or may not work, depending on the exact read/write pattern. Separate structures can adopt a simpler locking strategy, as one is mostly read and the other mostly written; even if you go with a R/W lock, it's almost guaranteed to perform well there (see the sketch after this list).

- Even on a database, high-frequency writes may stall low-frequency reads. You even risk a lock escalation from record to table. Then you either go with dirty reads (betting on your good luck) or you just move the data into another table, where it belongs.

- If you decide to cache database data to improve performance, you'll have to choose between a larger cache with the same structure as the database (low-frequency data included) or a smaller and more efficient cache with just the high-frequency data (revealing once more that those data do not belong together).

- And so on: I encourage you to find more reasons!
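To make the locking argument a little more concrete, here is a minimal C++ sketch along the lines of the sensor example above. The structures and names are mine, purely hypothetical; the point is simply that once the rarely-updated calibration data and the frequently-updated sample live in separate structures, each one can adopt the simplest locking strategy that fits its own read/write pattern.

#include <mutex>
#include <shared_mutex>

// Hypothetical structures, just to illustrate the idea.

struct Calibration {
    double offset = 0.0;
    double gain = 1.0;
};

struct Sample {
    double value = 0.0;
    long long timestampMs = 0;
};

// Calibration: seldom written, often read. A reader/writer lock pays off,
// since concurrent readers don't block each other.
class CalibrationStore {
public:
    Calibration Get() const {
        std::shared_lock<std::shared_mutex> lock(mutex_);
        return calibration_;
    }
    void Set(const Calibration& c) {          // rare event
        std::unique_lock<std::shared_mutex> lock(mutex_);
        calibration_ = c;
    }
private:
    mutable std::shared_mutex mutex_;
    Calibration calibration_;
};

// Latest sample: written at acquisition frequency. A plain mutex is simpler
// and just as good, since writers would serialize on a R/W lock anyway.
class LatestSample {
public:
    void Set(const Sample& s) {               // high frequency
        std::lock_guard<std::mutex> lock(mutex_);
        sample_ = s;
    }
    Sample Get() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return sample_;
    }
private:
    mutable std::mutex mutex_;
    Sample sample_;
};

Had the two been fused into a single structure behind a single lock, every high-frequency write would contend with every calibration read; keeping them apart makes that trade-off disappear.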

In most cases, I tend to avoid this kind of problem instinctively: this is what I really call experience. Indeed, Donald Schön reminds us that good design is not for everyone, and that you have to develop your own sense of aesthetics (see "Reflective Conversation with Materials. An interview with Donald Schön by John Bennett", in Bringing Design to Software, Addison-Wesley, 1996). Aesthetics may not sound too technical, but consider it a shortcut for: you have to develop your own ability to perceive the forcefield, and instinctively know what is wrong (misaligned) and right (aligned).

Ok, next time I'll get back to the notion of multiplicity. Actually, although I've initially chosen "multiplicity" because of its familiarity, I'm beginning to think that the whole notion of fourth-dimensional multiplicity, which is indeed quite important, might be confusing for some. I'm therefore looking for a better term, which can clearly convey both the traditional ("static") and the extended (fourth-dimensional, historical, etc) meaning. Any good idea? Say it here, or drop me an email!


Monday, March 30, 2009 

Notes on Software Design, Chapter 5: Multiplicity

Gravity, as we have seen, provides a least-resistance path, leading to monolithic software. If gravity were the only force at play, all software would be a monolithic blob. Since that is not the case, there must be other forces at play. Pervasive, primitive forces just like gravity, setting up a different forcefield, so that it's more convenient to keep things apart.

Consider an amateur programmer, writing a simple program to keep track of his numerous books. He starts with a database-centric approach, and without much knowledge of conceptual modeling, he jumps into creating tables. He creates a Book table, and adds a few fields:
AuthorFirstName, AuthorLastName, Title, Publisher, ISBN, …
It doesn't take much for him to realize that an author could be present several times in his database. He may begin to realize that he could perhaps add an Author table and move AuthorFirstName and AuthorLastName to that table.

Why? He doesn't know squat about database normalization. It's just a simple matter of multiplicity. One author - many books. Different multiplicity suggests to keep things apart. It is quite a good suggestion, as different multiplicity basically requires different gravitational centers, lest we end up with an unfavorable forcefield.
Consider what happens when our amateur programmer discovers he wants to add more biographical data about authors. Without an Author table, there is no good gravitational center that could possibly attract those data. There is only the Book table, so there they go - adding more data redundancy.

Our amateur programmer, however, might not be so eager to give in. A single table is easier to manage. No foreign keys, no referential integrity, no nothing. It's just simpler, and he doesn't live in the future. He wants to do the simplest thing that could possibly work, so he keeps the Author fields inside the Book table.

He doesn't need much more, however, to realize that many books have more than one author. One book - many authors. That's a different forcefield again, with a many-to-many relationship. Now, our amateur is rather stubborn. He wants to keep things inside a single table anyway. So he goes on and adds more fields:

AuthorFirstName1, AuthorLastName1, AuthorFirstName2, AuthorLastName2, AuthorFirstName3, AuthorLastName3, Title, Publisher, ISBN, …

Of course, at this point he can basically feel he's no longer going along the path of least resistance. Actually, he's fighting the forcefield. Sure, gravity wants him to keep things together, but multiplicity doesn't. The form he's trying to give to the Book table is not in frictionless contact with the forcefield. The forcefield wants Book and Author to stay on their own.
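Just to make the two shapes visible, here is a tiny sketch (in C++ rather than in database terms, and with field names of my own choosing): the single-structure form our amateur is fighting for, and the form the forcefield is suggesting.

#include <string>
#include <vector>

// The shape gravity suggests: everything in one place.
struct BookDenormalized {
    std::string authorFirstName1, authorLastName1;
    std::string authorFirstName2, authorLastName2;
    std::string authorFirstName3, authorLastName3;  // and the fourth author?
    std::string title;
    std::string publisher;
    std::string isbn;
};

// The shape multiplicity suggests: one gravitational center per concept.
struct Author {
    int id = 0;
    std::string firstName;
    std::string lastName;
    // a natural place for biographical data later on
};

struct Book {
    std::string title;
    std::string publisher;
    std::string isbn;
    std::vector<int> authorIds;   // many-to-many: any number of authors
};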

Multiplicity is the primordial force that keeps [software] things apart. It shouldn't come as a surprise, then, that a great emphasis is given to multiplicity in the Entity-Relationship model and also in the static view of OO models (class diagram).
Multiplicity, however, goes much deeper than that. Reusability is a special case of multiplicity. What? :-). Well, if that sounds odd, you're not thinking fourth dimensionally (as Doc said in "Back to the Future").

Consider a different problem, at a different granularity. Our amateur programmer is writing another small application, to keep track of who has borrowed some of his precious books. He's doing the simplest thing again, so he's basically going GUI-centered, and he's putting all the business logic inside the form itself. When you click on "Ok", the form will validate data and store a record into some table. The form requires, among other things, a phone number, which must be validated. It's the only place where he has to validate a phone number, so he puts the validation logic right inside the OnOk method generously provided by his RAD tool.

What's wrong? Apparently, there is no multiplicity at play here. There is one function, where he's doing two distinct things (validation and insertion), and inside validation he's doing different things, but each one is intended to validate one field, so it wouldn't pay to move the field validation logic elsewhere. Gravity keeps things together.

Multiplicity is hidden in the fourth dimension: time. Reusability means being able to take something you have already written (in the past) and use it again, unchanged, in the future. It means you have multiple callers, just not at the same time. If you think fourth dimensionally, multiplicity comes out quite clearly.
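A small sketch may help here as well. Names and validation rules are hypothetical (the original OnOk lives in some RAD-generated form class); the point is simply that once the validation logic stands on its own, it's ready for the callers that time will bring.

#include <cctype>
#include <string>

// Hypothetical validation rule, good enough for the example:
// accept digits plus a few separators, and require at least 7 digits.
bool IsValidPhoneNumber(const std::string& phone) {
    int digits = 0;
    for (char c : phone) {
        if (std::isdigit(static_cast<unsigned char>(c)))
            ++digits;
        else if (c != ' ' && c != '-' && c != '+' && c != '(' && c != ')')
            return false;   // unexpected character
    }
    return digits >= 7;
}

// The OnOk handler (sketched as a comment, since the form class is RAD-generated)
// shrinks to orchestration:
//
// void BorrowForm::OnOk() {
//     if (!IsValidPhoneNumber(phoneField.Text())) { ShowValidationError(); return; }
//     StoreBorrowRecord();
// }

Today IsValidPhoneNumber has a single caller; fourth-dimensionally, it already has many.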

Multiplicity is an interesting force, one we need to be very familiar with. It will take a few posts to do it justice. Right now, it's time for me to put my running shoes on and hit the road :-). Still, here are a few pointers to some important issues that I'm going to cover in the next weeks (or months :-)

The fractal nature of multiplicity
Maintainability
Scalability
Conway's Law
Tools and Languages - lowering costs
Good questions to ask while doing analysis and design.
Is multiplicity stronger than gravity?
Examples from patterns. On truly understanding Abstract Factory.
N-degrees of separation.
Interfaces and Multiplicity - what is separation, anyway?
Cross-cutting concerns.
Down-to-earth guidelines.
The Display problem, once again.


Sunday, February 22, 2009 

Notes on Software Design, Chapter 4: Gravity and Architecture

In my previous posts, I described gravity and inertia. At first, gravity may seem to have a negative connotation, like a force we constantly have to fight. In a sense, that's true; in a sense, it's also true for its physical counterpart: every day, we spend a lot of energy fighting earth gravity. However, without gravity, life as we know it would never exist. There is always a bright side :-).

In the software realm, gravity can be exploited by setting up a favorable force field. Remember that gravity is a rather dumb :-) force, merely attracting things. Therefore, if we come up with the right gravitational centers early on, they will keep attracting the right things. This is the role of architecture: to provide an initial, balanced set of centers.

Consider the little thorny problem I described back in October. Introducing Stage 1, I said: "the critical choice [...] was to choose where to put the display logic: in the existing process, in a new process connected via IPC, in a new process connected to a [RT] database".
We can now review that decision within the framework of gravitational centers.

Adding the display logic into the existing process is the path of least resistance: we have only one process, and gravity is pulling new code into that process. Where is the downside? A bloated process, sure, but also the practical impossibility of sharing the display logic with other processes.
Reuse requires separation. This, however, is just the tip of the iceberg: reuse is just an instance of a much more general force, which I'll cover in the forthcoming posts.

Moving the display logic inside a separate component is a necessary step toward [independent] reusability, and also toward the rarely understood concept of a scaled-down architecture.
A frequently quoted paper from David Parnas (one of the most gifted software designers of all times) is properly titled "Designing Software for Ease of Extension and Contraction" (IEEE Transactions on Software Engineering, Vol. 5 No. 2, March 1979). Somehow, people often forget the contraction part.
Indeed, I've often seen systems where the only chance to provide a scaled-down version to customers is to hide the portion of the user interface that exposes the "optional" functionality, often with questionable aesthetics, and always with more trouble than one could possibly want.

Note how, once we have a separate module for display, new display models are naturally attracted into that module, leaving the acquisition system alone. This is gravity working for us, not against us, because we have provided the right center. That's also the bright side of the thorny problem, exactly because (at that point, that is, stage 2) we [still] have the right centers.

Is the choice of using an RTDB to further decouple the data acquisition system and the display system any better than having just two layers?
I encourage you to think about it: it is not necessarily trivial to understand what is going on at the forcefield level. Sure, the RTDB becomes a new gravitational center, but is a 3-pole system any better in this case? Why? I'll get back to this in my next post.

Architecture and Gravity
Within the right architecture, features are naturally attracted to the "best" gravitational center.
The "right" architecture, therefore, must provide the right gravitational centers, so that features are naturally attracted to the right place, where (if necessary) they will be kept apart from other features at a finer granularity level, through careful design and/or careful refactoring.
Therefore, the right architecture is not just helping us cope with gravity: it's helping us exploit gravity to our own advantage.

The wrong architecture, however, will often conspire with gravity to preserve itself.
As part of my consulting activity, I’ve seen several systems where the initial partitioning of responsibility wasn’t right. The development team didn’t have enough experience (with software design and/or with the problem domain) to find out the core concepts, the core issues, the core centers.
The system was partitioned along the wrong lines, and as mass increased, gravity kicked in. The system grew with the wrong form, which was not in frictionless contact with the context.
At some point, people considered refactoring, but it was too costly, because mass brings Inertia, and inertia affects any attempt to change direction. Inertia keeps a bad system in a bad state. In a properly partitioned system, instead, we have many options for change: small subsystems won’t put up much of a fight. That’s the dream behind the SOA concept.
I already said this, but it is worth repeating: gravity is working at all granularity levels, from distributed computing down to the smallest function. That's why we have to keep both design and code constantly clean. Architecture alone is not enough. Good programmers are always essential for quality development.

What about patterns? Patterns can lower the amount of energy we have to spend to create the right architecture. Of course, they can do so because someone else spent some energy re-discovering good ideas, cleaning them up, going through shepherding and publishing, and because we spent some time learning about them. That said, patterns often provide an initial set of centers, balancing out some forces (not restricted to gravity).
Of course, we can't just throw patterns against a problem: the form must be in frictionless contact with the real problem we're facing. I've seen too many well-intentioned (and not so experienced :-) software designers start with patterns. But we have to understand forces first, and adopt the right patterns later.

Enough with mass and gravity. Next time, we're gonna talk about another primordial force, pushing things apart.

See you soon, I hope!


Wednesday, January 14, 2009 

Notes on Software Design, Chapter 3: Mass, Gravity and Inertia

I thought I could discuss the whole concept of Gravity and its implications in 2 or 3 (long) posts. While writing, I realized I'll need at least 4 or 5. So, this time I'll talk a little about how we can cope with gravity, and about the concept of Inertia. Next time, I'll discuss how we can exploit gravity, and why (despite the obvious cost) it is important that we do not surrender to (or ignore) gravity.

How do we cope with gravity? Needless to say, we have to spend some energy to move away from the amorphous big blob. As usual, we can also borrow some of that energy from someone (or something) else. Here are a few well-proven ideas:

- Architecture. I used to define architecture as "an overall structure, providing a natural place for features and concepts". I could now say that architecture must provide the right centers, or (from the viewpoint of mass and gravity) the right gravitational centers, so that the system can grow harmoniously. The right architecture is also the key to exploit gravity. More about this (and about the role of design patterns) next time.

- Refactoring. While architecture requires some kind of upfront investment, refactoring fights gravity in a more piecemeal, continuous fashion.
Although Refactoring and Emergent Design are often seen as the arch-enemies of Architecture, they are not. Experienced developers know that both are needed, as they work at different scales.
No amount of architecture, for instance, will ever prevent small-scale gravity from attracting more code into existing functions. When we add a new feature (maybe under a tight deadline), gravity suggests adding that feature in place, often without even breaking out the smallest separation unit – the function.
Conversely, gravity (and even more so Inertia) does not allow refactoring to scale economically beyond some (hard to identify) threshold.

- Measurement and Correction. While refactoring is often performed on-the-fly by programmers, fixing bad smells as they go, we can also use automatic tools to help us keep the code within some quality bounds. See Simple Metrics and More on Code Clones for a few ideas. Of course, measures provide guidance, but then the usual refactoring techniques must be applied.

- Visualization. More on this another time.

- Better Languages and Technologies. At some granularity level, technology becomes either a boon or a hindrance. Consider components: creating binary, release-to-release compatible components in C++ is a nightmare. .NET, for instance, does a much better job. Languages with a simple grammar, like Java and C#, or with strong support for reflection, also allow better tools to be built (see the next point).

- Better Tools. Consider web services. They provide a relatively painless way to create a distributed system. The lack of pain doesn't really come from SOAP (which isn't such a stroke of genius), but from the underlying HTTP/XML infrastructure and from the widely available, easily interoperable WSDL tools. Consider also refactoring: without good tools, it's a relatively error-prone activity. Refactoring tools make it much easier to fight gravity, moving code around with relatively little effort.


On Inertia
Mass brings gravity. Gravitational attraction works to preserve the existing structure (at the fractal levels I discussed in Chapter 1). In the physical world, however, we have another interesting manifestation of mass, called Inertia. There are many formulations of the concept (see the wikipedia page for details), but what is most interesting here is the simple F=m*a equation. We apply external forces (human work) to a system, but systems with a large mass won't easily change their state of rest or motion (including their current direction).

What is, then, the state of rest/motion for a software system? We could provide several analogies. To find the best analogy for acceleration, we need the best analogy for speed. To find the best analogy for speed, we need the best analogy for space.

The underlying idea must be that we apply some effort to move our software through space. What is the nature of that space? A few real-world examples are needed. Consider a C++/MFC application; we want to migrate the GUI layer to C#/.NET (interestingly, "migration" is commonly used to indicate motion in space). Consider a monolithic, legacy application that must be exposed as a service; or a web application that requires some performance improvement. Sure, all this may require some change in mass too (as some code will be added, some removed), but what is required is to move the software to a different place. What is that place, or, inside which kind of space do we want to move? I encourage you to think about this on your own for a while, before reading further.

My answer is rather simple: that space is the decision space. Software is built by making a number of decisions: we choose languages, technologies, architectural styles, coding styles (e.g. error handling styles, readability/efficiency trade offs, etc.), and so on. We also choose a development process, a team, etc.
Some of those decisions are explicit and carefully worked out. Some are taken on the fly as we code. At any given time, our software is located in a specific (albeit difficult to define) place inside a huge, multi-dimensional decision space. Each decision affects some portion of code. Some are clearly separated. Some are pervasive or cross-cutting.

Software development is a learning process; therefore, some of those decisions will be wrong. Some will be right for a while, but since real-world software does not live in a vacuum, we'll have to change them anyway later.
Changing a decision requires moving our software through the decision space: every decomposition unit affected by that decision will be touched, therefore adding to the mass to be moved (hence the deadly cost of cross-cutting, pervasive concerns).

Inertia explains why some decisions are so hard to change. Any decision we change is bound to require a change in the state of rest, or motion, of our software, because we want to move it into another place.
Some of those decisions impact a large mass of software, and therefore a strong force must be applied. Experience shows that after a critical mass is reached, it becomes so hard to even understand what to do, that software becomes an immovable object (therefore requiring an irresistible force :-).

Of course, small systems won't show much inertia, which explains why the dynamics of programming in the small are different from the dynamics of programming in the large.

Also, speed and acceleration depend on time. I'll save this for a later time, as I still have to understand a few things better :-)

Enough for today. See you guys soon!


Saturday, December 06, 2008 

Notes on Software Design, Chapter 2: Mass and Gravity

Mass is a simple concept, which is better understood by comparison. For instance, a long function has bigger mass than a short one. A class with several methods and fields has bigger mass than a class with just a few methods and fields. A database with a large number of tables has bigger mass than a database with a few. A database table with many fields has bigger mass than a table with just a few. And so on.

Mass, as discussed above, is a static concept. We don't look at the number of records in a database, or at the number of instances for a class. Those numbers are not irrelevant, of course, but they do not contribute to mass as discussed here.

Although we can probably come up with a precise definition of mass, I'll not try to. I'm fine with informal concepts, at least at this time.

Mass exerts gravitational attraction, which is probably the most primitive force we (as software designers) have to deal with. Gravitational attraction makes large functions or classes attract more LOCs, large components attract more classes and functions, monolithic programs keep growing as monoliths, and 1-tier or 2-tier applications fight back as we try to add one more tier. Along the same lines, a single large database will get more tables; a table with many fields will attract more fields, and so on.

We achieve low mass, and therefore smaller and balanced gravity, through careful partitioning. Partitioning is an essential step in software design, yet separation always entails a cost. It should not surprise you that the cost of [fighting] gravity has the same fractal nature as separation.

A first source of cost is performance loss:
- Hardware separation requires serialization/marshaling, network transfer, synchronization, and so on.
- Process separation requires serialization/marshaling, synchronization, context switching, and so on.
- In-process component separation requires indirect function calls or load-time fix-up, and may require some degree of marshaling (depending on the component technology you choose)
- Interface – Implementation separation requires (among other things) data to be hidden (hence more function calls), prevents function inlining (or makes it more difficult), and so on (see the sketch after this list).
- In-component access protection prevents, in many cases, exploitation of the global application state. This is a complex concept that I need to defer to another time.
- Function separation requires passing parameters, jumping to a different instruction, jumping back.
- Mass storage separation prevents relational algebra and query optimization.
- Different tables require a join, which can be quite costly (here the number of records resurfaces!).
- (the overhead of in-memory separation is basically subsumed by function separation).
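Before moving on to the other sources of cost, here is a minimal C++ sketch of the interface/implementation bullet above (the classes are mine, purely illustrative): with the concrete class, the compiler sees the data and can inline the calls away; behind the interface, every access goes through an indirect (virtual) call, unless the optimizer manages to devirtualize it.

class ConcreteCounter {
public:
    void Increment() { ++count_; }          // trivially inlinable
    int Value() const { return count_; }
private:
    int count_ = 0;
};

class ICounter {                            // interface: implementation hidden
public:
    virtual ~ICounter() = default;
    virtual void Increment() = 0;
    virtual int Value() const = 0;
};

class HiddenCounter : public ICounter {
public:
    void Increment() override { ++count_; }
    int Value() const override { return count_; }
private:
    int count_ = 0;
};

int SumConcrete(ConcreteCounter& c, int n) {
    for (int i = 0; i < n; ++i)
        c.Increment();                      // calls can be inlined away
    return c.Value();
}

int SumThroughInterface(ICounter& c, int n) {
    for (int i = 0; i < n; ++i)
        c.Increment();                      // indirect calls through the vtable
    return c.Value();
}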

A second source of cost is scaffolding and plumbing:
- Hardware separation requires network services, more robust error handling, protocol design and implementation, bandwidth estimation and control, more sophisticated debugging tools, and so on.
- Process separation requires most of the same.
- And so on (useful exercise!)

A third source of cost is human understanding:
Unfortunately, many people don’t have the ability to reason at different abstraction levels, yet this is exactly what we need to work effectively with a distributed, component-based, multi-database, fine-grained architecture with polymorphic behavior. The average programmer will find a monolithic architecture built around a single (albeit large) database, with a few large classes, much easier to deal with. This is only partially related to education, experience, and tools.

The ugly side of gravity is that it’s a natural, incremental, attractive, self-sustaining force.
It starts with a single line of code. The next line is attracted to the same function, and so on. It takes some work to create yet another function; yet another class; yet another component (here technology can help or hurt a lot); yet another process.
Without conscious appreciation of other forces, gravity makes sure that the minimum resistance path is followed, and that’s always to keep things together. This is why so much software is just a big ball of mud.

Enough for today. Still, there is more to say about mass, gravity and inertia, and a lot more about other (balancing) forces, so see you guys soon...

Breadcrumb trail: instance/record count cannot be ignored at design time. Remember to discuss the underlying forces.


Sunday, November 30, 2008 

Notes on Software Design, Chapter 1: Partitioning

In a previous post, I discussed Alexander’s theory of Centers from a software design perspective. My (current) theory is that a Center is (in software) a locus of highly cohesive information.

It is worth noting that in order to create highly cohesive units, we must be able to separate things. This may seem odd at first, since cohesion (as a force) is about keeping things together, not apart, but it is easily explained.
Without some way to partition knowledge, we would have to keep everything together. In the end, conceptual cohesion would be low, because a multitude of concepts, abstractions, etc. would all mash up into an incoherent mess.

Let’s focus on "executable knowledge", and therefore leave some artifacts (like requirements documents) aside for a while. We can easily see that we have many ways to separate executable knowledge, and that those ways apply at different granularity levels.

- Hardware separation (as in distributed computing).
- Process separation (a lightweight form of distributed computing, with co-located processes).
- In-process component separation (e.g. DLLs).
- Interface – Implementation separation (e.g. interface inheritance in OO languages).
- In-component access protection, like public/private class members, or other visibility mechanisms like modules in Modula-2.
- Function separation (simply different functions).

Knowledge is not necessarily encoded in code – it can be encoded in data too. We have several ways to partition data as well, and they apply to the entire hierarchy of storage.

- Mass storage separation (that is, using different databases).
- Different tables (or equivalent concept) within the same mass storage.
- Module or class static data (inaccessible outside the module).
- Data member (inaccessible outside the instance).
- Local / stack based variables (inaccessible outside the function).

It is interesting to see how poor data separation can harm code separation. Sharing tables works against hardware separation. Shared memory works against process separation. Global data with extern visibility works against module separation. Get/Set functions work against in-component access protection.
Code and data separation are not orthogonal concepts, and therefore they can interfere with each other.
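Here is a small, deliberately naive C++ sketch of the Get/Set point (the Account class is just a placeholder of mine): with pervasive getters and setters the data is formally private but practically public, since every caller can reach in and the invariants end up scattered among the clients; keeping behavior next to the data preserves the separation that private was supposed to buy.

// Formally private, practically public: callers own the invariants.
class LeakyAccount {
public:
    double GetBalance() const { return balance_; }
    void SetBalance(double b) { balance_ = b; }
private:
    double balance_ = 0.0;
};

// Behavior stays with the data: the invariant lives in one place.
class Account {
public:
    bool Withdraw(double amount) {
        if (amount <= 0.0 || amount > balance_)
            return false;
        balance_ -= amount;
        return true;
    }
    void Deposit(double amount) {
        if (amount > 0.0)
            balance_ += amount;
    }
    double Balance() const { return balance_; }
private:
    double balance_ = 0.0;
};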

There is more to say about separation and its relationship with old concepts like coupling (straight from the '70s). More on this another time; right now, I need to set things up for Chapter 2.

In the same post above, I mentioned the idea that centers have fractal nature, that is, they appear at different abstraction and granularity levels. If there are primordial forces in software, it seems reasonable that they follow the same fractal nature: in other words, they should apply at all abstraction levels, perhaps with a different slant.

The first force we have to deal with is Gravity. Gravity works against separation, and as such, is a force we cannot ignore. Gravity, as in physics, has to do with Mass, and another manifestation of Mass is Inertia. Gravity, like in the physical world, is a pervasive force, and therefore, separation always entails a cost. Surrendering to gravity, however, won't make your software fly :-). I’ll talk about all this very soon.

On a more personal note, I haven’t said much about running lately. I didn’t give up; I just have nothing big to tell :-). Anyway: there is still a little snow around here, but I was beginning to feel like a couch potato today, so I geared up and went for a 10Km (slow :-) run. At Km 4 it started raining :-)), but not so much to require an about face. At Km 8 the rain stopped, and I ran my last 2 Km slightly faster. It feels so great to be alive :-).
