Monday, March 08, 2010 

Why you should learn AOP

A few days ago, I spent some time reading a critique of AOP (The Paradoxical Success of Aspect-Oriented Programming by Friedrich Steimann). As often happens, I felt compelled to read some of the bibliographical references too, which took a little more of my (week-end) time.

Overall, in the last few years I've devoted quite some time to learning, thinking, and even writing a little about AOP. I'm well aware of the problems Steimann describes, and I share some of his skepticism about the viability of the AOP paradigm as we know it.

Too much literature, for instance, is focused on a small set of pervasive concerns like logging. I believe that as we move toward higher-level concerns, we must make a clear distinction between pervasive concerns and cross-cutting concerns. A concern can be cross-cutting without being pervasive, and in this sense, for instance, I don't really agree that AOP is not for singletons (see my old post Some notes on AOP).
Also, I wouldn't dismiss the distinction between spectators and assistants so easily, especially because many pervasive concerns can be modeled as spectators. Overall, the paradigm seems indeed a little immature when you look at the long-term maintenance effects of aspects as they're known today.

Still, I think the time I've spent pondering on AOP was truly well spent. Actually, I would suggest that you spend some time learning about AOP too, even if you're not planning to use AOP in the foreseeable future.

I don't really mean learning a specific language - unless you want/need to try out a few things. I mean learning the concepts, the AOP perspective, the AOP terminology, the effects and side-effects of an Aspect Oriented solution.

I'm suggesting that you learn all that despite the obvious (or perhaps not so obvious) deficiencies in the current approaches and languages, the excessive hype and the underdeveloped concepts. I'm suggesting that you learn all that because it will make you a better designer.

Why? Because it will expand your mind. It will add a new, alternative perspective through which you can look at your problems. New questions to ask. New concepts. New names. Sometimes, all we need is a name. A beacon in the brainstorm, and a steady hand.

As I've said many times now, as designers we're shaping software. We can choose many shapes, and ideally, we will find a shape that is in frictionless contact with the forcefield. Any given paradigm will suggest a set of privileged shapes, at macro and micro-level. Including the aspect-oriented paradigm in your thinking will expand the set of shapes you can apply and conceive.

Time for a short war story :-). In the past months I've been thinking a lot about some issues in a large CAD system. While shaping a solution, I'm constantly getting back to what I could call aspect-thinking. There are many cross-cutting concerns to be resolved. Not programming-level concerns (like the usual, boring logging stuff). Full-fledged application-domain concerns, that tend to cross-cut the principal decomposition.

Now, you see, even thinking "principal decomposition" and "cross-cutting" is making your first step into aspect-thinking. Then you can think about ways to bring those concerns inside the principal decomposition (if appropriate and/or possible and/or convenient) or think about the best way to keep them outside without code-level tangling. Tangling. Another interesting name, another interesting concept.

Sure, if you ain't using true AOP (for instance, we're using plain old C++), you'll have to give up some obliviousness (another name, another concept!), but it can be done, and it works fine (for a small-scale example, see part 1 and part 2 of my "Can AOP inform OOP?").
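
Just to make this concrete, here's a minimal C++ sketch, deliberately much simpler than the code in those posts (names like Document and attachAudit are made up): the core class exposes an explicit, generic hook, and the audit concern subscribes from its own module. That hook is exactly the obliviousness we give up.

```cpp
// Minimal sketch, not the code from the "Can AOP inform OOP?" posts:
// a cross-cutting concern (audit) kept in its own module by giving the
// core class an explicit, generic extension point.
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

class Document {
public:
    using Listener = std::function<void(const std::string&)>;
    void subscribe(Listener l) { listeners_.push_back(std::move(l)); }
    void save(const std::string& path) {
        // ...core persistence logic would go here...
        for (const auto& l : listeners_) l(path);   // explicit "join point"
    }
private:
    std::vector<Listener> listeners_;
};

// The concern lives in its own module, untangled from Document.
void attachAudit(Document& doc) {
    doc.subscribe([](const std::string& path) {
        std::cout << "audit: document saved to " << path << '\n';
    });
}

int main() {
    Document doc;
    attachAudit(doc);            // wiring is done in one place, at startup
    doc.save("specs/plan.txt");
}
```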

So far, the candidate shape is causing some discomfort. That's reasonable. It's not a "traditional" solution. Which is fine, because so far, tradition didn't work so well :-). Somehow, I hope the team will get out of this experience with a new mindset. Nobody used to talk about "principal decomposition" or "cross-cutting concern" in the company. And you can't control what you can't name.

I hope they will gradually internalize the new concepts, as well as the tactics we can use inside traditional languages. That would be a major accomplishment. Much more important than the design we're creating, or the tons of code we'll be writing. Well, we'll see...


Sunday, January 10, 2010 

Delaying Decisions

Since microblogging is not my thing, I decided to start 2010 by writing my longest post ever :-). It will start with a light review of a well-known principle and end up with a new design concept. Fasten your seatbelt :-).

The Last Responsible Moment
When we develop a software product, we make decisions. We decide about individual features, we make design decisions, we make coding decisions, we even decide which bugs we really want to fix before going public. Some decisions are taken on the fly; some, at least in the old school, are somewhat planned.

A key principle of Lean Development is to delay decisions, so that:
a) decisions can be based on (yet-to-discover) facts, not on speculation
b) you exercise the wait option (more on this below) and avoid early commitment

The principle is often spelled as "Delay decisions until the last responsible moment", but a quick look at Mary Poppendieck's website (Mary co-created the Lean Development approach) shows a more interesting nuance: "Schedule Irreversible Decisions at the Last Responsible Moment".

Defining "Irreversible" and "Last Responsible" is not trivial. In a sense, there is nothing in software that is truly irreversible, because you can always start over. I haven't found a good definition for "irreversible decision" in literature, but I would define it as follows: if you make an irreversible decision at time T, undoing the decision at a later time will entail a complete (or almost complete) waste of everything that has been created after time T.

There are some documented definitions of "last responsible moment". A popular one is "The point when failing to decide eliminates an important option", which I find rather unsatisfactory. I've also seen some attempts to quantify that better, as in this funny story, except that in the real world you never have a problem which is that simple (very few ramifications in the decision graph) and that detailed (you know the schedule beforehand). I would probably define the Last Responsible Moment as follows: time T is the last responsible moment to make a decision D if, by postponing D, the probability of completing on schedule/budget (even when you factor in the hypothetical learning effect of postponing) decreases below an acceptable threshold. That, of course, allows us to scrap everything and restart, if schedule and budget allow for it, and in this sense it's kinda coupled with the definition of irreversible.

Now, irreversibility is bad. We don't want to make irreversible decisions. We certainly don't want to make them too soon. Is there anything we can do? I've got a few important things to say about modularity vs. irreversibility and passive vs. proactive option thinking, but right now, it's useful to recap the major decision areas within a software project, so that we can clearly understand what we can actually delay, and what is usually suggested that we delay.

Major Decision Areas
I'll skip a few very high-level, strategic decisions here (scope, strategy, business model, etc.). It's not that they can't be postponed, but I need to give some focus to this post :-). So I'll get down to the decisions we take more routinely.

People
Choosing the right people for the project is a well-known ingredient for success.

Approach/Process
Are we going XP, Waterfall, something in between? :-).

Feature Set
Are we going to include this feature or not?

Design
What is the internal shape (form) of our product?

Coding
Much like design, at a finer granularity level.

Now, "design" is an overly general concept. Too general to be useful. Therefore, I'll split it into a few major decisions.

Architectural Style
Is this going to be an embedded application, a rich client, a web application? This is a rather irreversible decision.

Platform
Goes somewhat hand in hand with Architectural Style. Are we going with an embedded application burnt into an FPGA? Do you want to target a PIC? Perhaps an embedded PC? Is the client a Windows machine, or do you want to support Mac/Linux? A .NET server side, or maybe Java? It's all rather irreversible, although not completely so.

3rd-Party Libraries/Components/Etc
Are we going to use some existing components (of various scales)? Unless you plan on wrapping everything (which may not even be possible), this often ends up being an irreversible decision. For instance, once you commit yourself to using Hibernate for persistence, it's not trivial to move away.

Programming Language
This is the quintessential irreversible decision, unless you want to play with language converters. Note that this is not a coding decision: coding decisions are made after the language has been chosen.

Structure / Shape / Form
This is what we usually call "design": the shape we want to impose on our material (or, if you live on the "emergent design" side, the shape that our material will take as the final result of several incremental decisions).

So, what are we going to delay? We can't delay all decisions, or we'll be stuck. Sure, we can delay something in each and every area, but truth is, every popular method has been focusing on just a few of them. Of course, different methods tried to delay different choices.

A Little Historical Perspective
Experience brings perspective; at least, true experience does :-). Perspective allows us to look at something and see more than is usually seen. For instance, perspective allows us to look at the old, outdated, obsolete waterfall approach and see that it (too) was meant to delay decisions, just different decisions.

Waterfall was meant to delay people decisions, design decisions (which include platform, library, and component decisions) and coding decisions. People decisions were delayed through specialization: you only have to pick the analyst first; everyone else can be chosen later, when you know what you gotta do (it even makes sense :-)). Design decisions were delayed because platforms, including languages, OS, etc., were way more balkanized than they are today. Also, architectural styles and patterns were much less understood, and it made sense to look at the larger picture before committing to an overall architecture.
Although this may seem rather ridiculous from the perspective of a 2010 programmer working on Java corporate web applications, most of this stuff is still relevant for (e.g.) mass-produced embedded systems, where choosing the right platform may radically change the total development and production cost, yet choosing the wrong platform may over-constrain the feature set.

Indeed, open systems (another legacy term from the late '80s - early '90s) were born exactly to ease that choice. Choose the *nix world, and forget about it. Of course, the decision was still irreversible, but it granted you some latitude in choosing the exact hw/sw. The entire multi-platform industry (from multi-OS libraries to Java) is basically built on the same foundations. Well, that's the bright side, of course :-).

Looking beyond platform independence, the entire concept of "standard" allows us to delay some decisions. TCP/IP, for instance, allows me to choose modularly (a concept I'll elaborate on later). I can choose TCP/IP as the transport mechanism, and then delay the choice of (e.g.) the client side, and focus on the server side. Of course, a choice is still made (the client must have TCP/IP support), so let's say that widely adopted standards allow for some modularity in the decision process, and therefore allow us to delay some decisions, mostly design decisions, but perhaps others as well (like people).

It's already going to be a long post, so I won't look at each and every method/principle/tool ever conceived, but if you do your homework, you'll find that a lot of what has been proposed in the last 40 years or so (from code generators to MDA, from spiral development to XP, from stepwise refinement to OOP) includes some magic ingredient that allows us to postpone some kind of decision.

It's 2010, guys
So, if you ain't agile, you are clumsy :-)) and c'mon, you don't wanna be clumsy :-). So, seriously, which kind of decisions are usually delayed in (e.g.) XP?

People? I must say I haven't seen much on this. Most literature on XP seems based on the concept that team members are mostly programmers with a wide set of skills, so there should be no particular reason to delay decisions about who's gonna work on what. I may have missed some particularly relevant work, however.

Feature Set? Sure. Every incremental approach allows us to delay decisions about features. This can be very advantageous if we can play the learning game, which includes rapid/frequent delivery, or we won't learn enough to actually steer the feature set.
Of course, delaying some decisions on feature set can make some design options viable now, and totally bogus later. Here is where you really have to understand the concept of irreversible and last responsible moment. Of course, if you work on a settled platform, things get simpler, which is one more reason why people get religiously attached to a platform.

Design? Sure, but let's take a deeper look.

Architectural Style: not much. Quoting Booch, "agile projects often start out assuming a given platform and environmental context together with a set of proven design patterns for that domain, all of which represent architectural decisions in a very real sense". See my post Architecture as Tradition in the Unselfconscious Process for more.
Seriously, nobody ever expected to start with a monolithic client and end up with a three-tier web application built around a MVC pattern just by coding and refactoring. The architectural style is pretty much a given in many contemporary projects.

Platform: sorry guys, but if you want to start coding now, you gotta choose your platform now. Another irreversible decision made right at the beginning.

3rd-Party Libraries/Components/Etc: some delay is possible for modularized decisions. If you wanna use Hibernate, you gotta choose pretty soon. If you wanna use Seam, you gotta choose pretty soon. Pervasive libraries are so entangled with architectural styles that it's relatively hard to delay some decisions here. Modularized components (e.g. the choice of a PDF rendering library) are simple to delay, and can be proactively delayed (see later).

Programming Language: no way guys, you have to choose right here, right now.

Structure / Shape / Form: of course!!! Here we are. This is it :-). You can delay a lot of detailed design choices. Of course, we always postpone some design decision, even when we design before coding. But let's say that this is where I see a lot of suggestions to delay decisions in the agile literature, often using the dreaded Big Upfront Design as a straw man argument. Of course, the emergent design (or accidental architecture) may or may not be good. If I had to compare the design and code coming out of the XP Episode with my own, I would say that a little upfront design can do wonders, but hey, you know me :-).

Practicing
OK guys, what follows may sound a little odd, but in the end it will prove useful. Have faith :-).
You can get better at everything by doing anything :-), so why not get better at delaying decisions by playing Windows Solitaire? All you have to do is set the options in the hardest possible way:

now, play a little, until you have to make some decision, like here:

I could move the 9 of spades or the 9 of clubs over the 10 of hearts. It's an irreversible decision (well, not if you use the undo, but that's lame :-). There are some ramifications for both choices.
If I move the 9 of clubs, I can later move the king of clubs and uncover a new card. After that, it's all unknown, and no further speculation is possible. Here, learning requires an irreversible decision; this is very common in real-world projects, but seldom discussed in literature.
If I move the 9 of spades, I uncover the 6 of clubs, which I can move over the 7 of diamonds. Then, it's kinda unknown, meaning: if you're a serious player (I'm not) you'll remember the previous cards, which would allow you to speculate a little better. Otherwise, it's just as above: you have to make an irreversible decision to learn the outcome.

But wait: what about the last responsible moment? Maybe we can delay this decision! Now, if you delay the decision by clicking on the deck and moving further, you're not delaying the decision: you're wasting a chance. In order to delay this decision, there must be something else you can do.
Well, indeed, there is something you can do. You can move the 8 of diamonds above the 9 of clubs. This will uncover a new card (learning) without wasting any present opportunity (it could still waste a future opportunity; life is tough). Maybe you'll get a 10 of diamonds under that 8, at which point there won't be any choice to be made about the 9. Or you might get a black 7, at which point you'll have a different way to move the king of clubs, so moving the 9 of spades would be a more attractive option. So, delay the 9 and move the 8 :-). Add some luck, and it works:

and you get some money too (total at decision time Vs. total at the end)


Novice solitaire players are also known to make irreversible decisions without necessity. For instance, in similar cases:

I've seen people eagerly moving the 6 of diamonds (actually, whatever they got) over the 7 of spades, because "that will free up a slot". Which is true, but irrelevant. This is a decision you can easily delay. Actually, it's a decision you must delay, because:
- if you happen to uncover a king, you can always move the 6. It's not the last responsible moment yet: if you do nothing now, nothing bad will happen.
- you may uncover a 6 of hearts before you uncover a king. And moving that 6 might be more advantageous than moving the 6 of diamonds. So, don't do it :-). If you want to look good, quote Option Theory, call this a Deferral Option and write a paper about it :-).

Proactive Option Thinking
I've recently read an interesting paper in IEEE TSE ("An Integrative Economic Optimization Approach to Systems Development Risk Management", by Michel Benaroch and James Goldstein). Although the real meat starts in chapter 4, chapters 1-3 are probably more interesting for the casual reader (including myself).
There, the authors recap some of the literature about Real Options in Software Engineering, including the popular argument that delaying decisions is akin to a deferral option. They also make important distinctions, like the one between passive learning through deferral of decisions and proactive learning, but also between responsiveness to change (a central theme in agility literature) and manipulation of change (relatively less explored), and so on. There is a lot of food for thought in those 3 chapters, so if you can get a copy, I suggest that you spend a little time pondering over it.
Now, I'm a strong supporter of Proactive Option Thinking. Waiting for opportunities (and then reacting quickly) is not enough. I believe that options should be "implanted" in our project, and that can be done by applying the right design techniques. How? Keep reading : ).

The Invariant Decision
If you look back at those pictures of Solitaire, you'll see that I wasn't really delaying irreversible decisions. All decisions in solitaire are irreversible (real men don't use CTRL-Z). Many decisions in software development are irreversible as well, especially when you are on a tight budget/schedule, so starting over is not an option. Therefore, irreversibility can't really be the key here. Indeed, I was trying to delay Invariant Decisions. Decisions that I can take now, or I can take later, with little or no impact on the outcomes. The concept itself may seem like a minor change from "irreversible", but it allows me to do some magic:
- I can get rid of the "last responsible moment" part, which is poorly defined anyway. I can just say: delay invariant decisions. Period. You can delay them as much as you want, provided they are still invariant. No ambiguity here. That's much better.
- I can proactively make some decisions invariant. This is so important I'll have to say it again, this time in bold: I can proactively make some decisions invariant.

Invariance, Design, Modularity
If you go back to the Historical Perspective paragraph, you can now read it under a different... perspective :-). Several tools, techniques, methods can be adopted not just to delay some decision, but to create the option to delay the decision. How? Through careful design, of course!

Consider the strong modularity you get from service-oriented architecture, and the platform independence that comes through (well-designed) web services. This is a powerful weapon to delay a lot of decisions on one side or another (client or server).

Consider standard protocols: they are a way to make some decision invariant, and to modularize the impact of some choices.

Consider encapsulation, abstraction and interfaces: they allow you to delay quite a few low-level decisions, and to modularize the impact of change as well. If your choice turns out to be wrong, but it's highly localized (modularized), you may be able to afford undoing your decision, therefore turning irreversible into reversible. A barebone example can be found in my old post (2005!) Builder [pattern] as an option.
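
Just to give that a concrete shape, here is a hedged, barebone sketch (invented names, not the code from that post): client code is written against an interface today, and the concrete choice stays confined to a single factory function, so changing it later is a local edit, not a campaign.

```cpp
// Sketch of "interface as an option": the concrete renderer is a delayed,
// localized decision. Renderer, StubRenderer and makeRenderer are invented
// names for illustration.
#include <iostream>
#include <memory>
#include <string>

class Renderer {
public:
    virtual ~Renderer() = default;
    virtual void render(const std::string& doc) = 0;
};

// Placeholder implementation, good enough while the real decision is delayed.
class StubRenderer : public Renderer {
public:
    void render(const std::string& doc) override {
        std::cout << "[stub] rendering " << doc << '\n';
    }
};

std::unique_ptr<Renderer> makeRenderer() {
    return std::make_unique<StubRenderer>();   // the only line to change later
}

int main() {
    auto renderer = makeRenderer();   // client code never names the concrete type
    renderer->render("report.pdf");
}
```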

Consider a very old OOA/OOD principle, now somehow resurrected under the "ubiquitous language" umbrella. It states that you should try to reflect the real-world entities that you're dealing with in your design, and then in your code. That includes avoiding primitive types like integer, and creating meaningful classes instead. Of course, you have to understand what you're doing (that is, you gotta be a good designer) to avoid useless overengineering. See part 4 of my digression on the XP Episode for a discussion about adding a seemingly useless Ball class (that is: implanting a low cost - high premium option).
Names alter the forcefield. A named concept stands apart. My next post on the forcefield theme, by the way, will explore this issue in depth :-).
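
As a textbook-style illustration (not the Ball class from those posts, just the same principle): trade a naked double for a small, named class. The name carries the rules, and gives future behavior a natural place to live.

```cpp
// A named concept instead of a primitive type. Money is an illustrative
// example, not taken from the XP Episode discussion.
#include <stdexcept>
#include <string>
#include <utility>

class Money {
public:
    Money(long long cents, std::string currency)
        : cents_(cents), currency_(std::move(currency)) {}

    Money operator+(const Money& other) const {
        if (currency_ != other.currency_)
            throw std::invalid_argument("currency mismatch");  // a rule a double can't enforce
        return Money(cents_ + other.cents_, currency_);
    }

    long long cents() const { return cents_; }
    const std::string& currency() const { return currency_; }

private:
    long long cents_;
    std::string currency_;
};
```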

And so on. I could go on forever, but the point is: you can make many (but not all, of course!) decisions invariant, if you apply the right design techniques. Most of those techniques will also modularize the cost of rework if you make the wrong decision. And sure, you can try to do this on the fly as you code. Or you may want to do some upfront design. You know what I'm thinking.

OK guys, it took quite a while, but now we have a new concept to play with, so more on this will follow, randomly as usual. Stay tuned.


Friday, January 01, 2010 

Inspirational reading

The Self as a Center of Narrative Gravity

By the way guys, happy new year : )


Tuesday, December 15, 2009 

A little more on DSM and Gravity

In a recent paper ("The Golden Age of Software Architecture" Revisited, IEEE Software, July/August 2009), Paul Clements and Mary Shaw conclude by talking about Conformance Checking. Indeed, although many would say that the real design/architecture is represented by code, a few :-) of us still think that code should reflect design, and that conformance of code to design should be automatically checked when possible (not necessarily in any given project; not all projects are equal).
Conformance checking is not always simple; quoting Clements and Shaw: "Many architectural patterns, fundamental to the system’s design taken forward into code, are undetectable once programmed. Layers, for instance, usually compile right out of existence."

The good news is that layers can be easily encoded in a DSM. While doing so, I would use an extension of the traditional yes/no DSM, as I've anticipated in a comment to the previous post. While the traditional DSM is basically binary (yes/no), in many cases we are better off with a ternary DSM. That way, we can encode three different decisions:
Yes-now: there is a dependency, and it's here, right now.
Not-now: there is no dependency right now, but it wouldn't be wrong to have one.
Never: adding this dependency would violate a fundamental design rule.
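
As a minimal sketch of how such a ternary DSM could be encoded and queried for conformance checking (module names and the API are hypothetical, and extracting the actual dependencies from the code base is a separate problem):

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>

enum class Dependency { YesNow, NotNow, Never };

class TernaryDsm {
public:
    void set(const std::string& from, const std::string& to, Dependency d) {
        cells_[{from, to}] = d;
    }
    // A dependency found in the code violates the design only if marked Never;
    // cells that were never declared are simply treated as "not stated" here.
    bool violates(const std::string& from, const std::string& to) const {
        auto it = cells_.find({from, to});
        return it != cells_.end() && it->second == Dependency::Never;
    }
private:
    std::map<std::pair<std::string, std::string>, Dependency> cells_;
};

int main() {
    TernaryDsm dsm;
    dsm.set("Presentation", "Domain", Dependency::YesNow);
    dsm.set("Domain", "Persistence", Dependency::NotNow);
    dsm.set("Domain", "Presentation", Dependency::Never);   // the layering rule
    std::cout << std::boolalpha
              << dsm.violates("Domain", "Presentation") << '\n';   // true
}
```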

A strong layered system requires some kind of isolation between layers. Remember gravity: new things are naturally attracted to existing things.
Attraction is stronger in the direction of simplicity and lack of effort: if no effort is required to violate architectural integrity, sooner or later it will be violated. Sure, conformance checking may help, but it would be better to set up the gravitational field so that things are naturally attracted to the right place.

The real issue, therefore, is the granularity of the DSM for a layered system. Given the fractal nature of software, a DSM can be applied at any granularity level: between functions, classes, "logical" components, "physical" components. Unless your system is quite small, you probably want to apply the DSM at the component level, which also means your layers should appear at the component level.

Note the distinction between logical and physical components. If you're working in a modern language/environment (like .NET or Java), creating a physical component is just a snap. Older languages, like C++, never got the idea of a component into the standard, for a number of reasons; in fact, today this is one of the most limiting factors when working on large C++ systems. In that case, I've often seen designers/programmers creating "logical" components out of namespaces and discipline. I've done that myself too, and it kinda works.

Here is the catch: binary separation between physical components is stronger than the logical separation granted by using different namespaces, which in turn is stronger than the separation between two classes in the same namespace, which is much stronger than the separation between two members of the same class.
More exactly, as we'll see in a forthcoming post, a binary component may act as a better shield and provide stronger isolation.

If a binary component A uses binary component B, and B uses binary component C but does not reveal this in its interface (that is, public/protected members of public classes in B do not mention types defined in C), then A knows precious nothing about C.
Using C from A requires that you discover C's existence, then the existence of some useful class inside C. Most likely, to do so, you have to look inside B. At that point, adding a new service inside B might just be more convenient. This is especially true if your environment does not provide you with free indirect references (that is, importing B does not inject a reference to C "for free").
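
Here is a single-file sketch of the shielding effect (hypothetical names; in a real build each namespace below would live in its own binary component). B's public interface never mentions types from C, so a client of B has no cheap path toward C:

```cpp
#include <iostream>
#include <string>

namespace C {                       // low-level component
class Store {
public:
    void put(const std::string& k, const std::string& v) {
        std::cout << "stored " << k << "=" << v << '\n';
    }
};
}

namespace B {                       // middle component: its interface never mentions C
class Repository {
public:
    void save(const std::string& key, const std::string& value);
};
// In the real thing this definition sits in B's own .cpp, behind the binary
// boundary: the only place where C is visible at all.
inline void Repository::save(const std::string& key, const std::string& value) {
    C::Store store;
    store.put(key, value);
}
}

int main() {                        // "component A": knows precious nothing about C
    B::Repository repo;
    repo.save("answer", "42");
}
```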
Here is again the interplay between good software design and properly designed languages: a better understanding of software forces could eventually help to design better languages as well, where violating a design rule should be harder than following the rule.

Now, if A and B are logical components (inside a larger, physical component D), then B won't usually act as a shield, mostly because the real (physical) dependency will be placed between D and C, not between B and C. Whatever B can access, A can access as well, without any additional effort. The gravitational field of B is weaker, and some code might be attracted to A, which is not what the designer wanted.

Therefore, inasmuch as your language allows you to, a physical component is always the preferred way to truly isolate one system from another.

OK, this was quite simple :-). Next time, I'll go back to the concept of frequency and then move to isolation!


Wednesday, July 08, 2009 

When in doubt, do the right thing

The bright side of spending most of my professional time on real-world projects is that I have an endless stream of inspiration, and what is even more important, the possibility of trying out new ideas, concepts, and methods. The dark side is that the same source of inspiration is taking away the precious time I would need to encode, structure, and articulate knowledge, which therefore remains largely implicit, tacit, intuitive. The pitch-black side is that quite often I'd like to share some real-world story, but I can't, as the details are kinda classified, or simply to protect the innocent. Sometimes, however, the story can be told with just a little camouflage.

Weeks ago, I was trying to figure out the overall architecture of a new system, intended to replace an obsolete framework. I could see a few major problems, two of which were truly hard to solve without placing a burden on everyone using the framework. Sure, we had other details to work out, but I could see no real showstoppers except for those two. The project manager, however, didn't want to face those problems. She wanted to start with the easy stuff, basically re-creating structures she was familiar with. I tried to insist on the need to figure out an overall strategy first, but to no avail. She wanted progress, right here, right now. That was a huge mistake.

Now, do not misunderstand me: I'm not proposing to stop any kind of development before you work every tiny detail out. Also, in some cases, the only real way to understand a system is by building it. However, building the wrong parts first (or in this case, building the easy parts first) is always a big mistake.

Expert designers know that in many cases, you have to face the most difficult parts early on. Why? Because if you do it too late, you won't have the same options anymore; previous decisions will act like constraints on late work.

Diomidis Spinellis has recently written a very nice essay on this subject (IEEE Software, March/April 2009). Here is a relevant quote: On a blank sheet of paper, the constraints we face are minimal, but each design decision imposes new restrictions. By starting with the most difficult task, we ensure that we’ll face the fewest possible constraints and therefore have the maximum freedom to tackle it. When we then work on the easier parts, the existing constraints are less restraining and can even give us helpful guidance.

I would add more: even if you take the agile stance against upfront design and toward emergent design, the same reasoning applies. If you start with the wrong part, the emergent design will work against you later. Sure, if you're going agile, you can always refactor the whole thing. But this reasoning is faulty, because in most cases, the existing design will also limit your creativity. It's hard to come up with new, wild ideas when those ideas conflict with what you have done up to that moment. It's just human. And yeah, agile is about humans, right? :-)

Expert designers start with the hard parts, but beginners don't. I guess I can quote another nice work, this time from Luke Hohmann (Journey of the Software Professional - a Sociology of Software Development): Expert developers do tend to work on what is perceived to be the hard part of the problem first because their cognitive libraries are sufficiently well developed to know that solving the "hard part first" is critical to future success. Moreover, they have sufficient plans to help them identify what the hard part is. Novices, as noted, often fail to work on the hard-part-first for two reasons. First, they may not know the effectiveness of the hard part first strategy. Second, even if they attempt to solve the hard part first, they are likely to miss it.

Indeed, an expert analyst, or designer, knows how to look at problems, how to find the best questions before looking for answers. To do this, however, we should relinquish preconceived choices. Sure, experts bring experience to the table, hopefully in several different fields, as that expands our library of mental plans. But (unlike many beginners) we don't approach the problem with pre-made choices. We first want to learn more about the forces at play. Any choice is a constraint, and we don't want artificial constraints. We want to approach the problem from a clean perspective, because freedom gives us the opportunity to choose the best form, as a mirror of the forcefield. By the way, that's why zealots are often mediocre designers: they come with too many pre-made choices, or as a Zen master would say, with a full cup.

Of course, humans being humans, it's better not to focus exclusively on the hard stuff. For instance, in many of my design sessions with clients, I try to focus on a few simple things as we start, then dig into some hard stuff, switch back to something easy, and so on. That gives us a chance to take a mental break, reconsider things in the back of our mind, and still make some progress on simpler stuff. Ideally, but this should be kinda obvious by now, the easy stuff should be chosen to be as independent/decoupled as possible from the following hard stuff, or we would be back to square one :-).

In a sense, this post is also about the same thing: writing about some easy stuff, to take a mental break from the more conceptual stuff on the forcefield. While, I hope, still making a little progress in sharing some useful design concept. See you soon!


Tuesday, June 09, 2009 

Design Rationale

In the past few weeks I've taken a little time to write down more about the concept of frequency; while doing so, I realized I had to explore the concept of forcefield better, and while doing so (yeap :-)) I realized there was a rather large overlap between the notion of forcefield and the notion of design rationale.

Design rationale extends beyond software engineering, and aims to capture design decisions and the reasoning behind those decisions. Now, design decisions are (ideally) taken as trade-offs between several competing forces. Those forces create the forcefield, hence the large overlap between the two subjects.

The concept of design rationale has been around for quite a few years, but I haven't seen much progress either in tools or notations. Most often, tools fall into the “rationalize after the fact” family, while I'm more interested in reasoning tools and notations, that would help me (as a designer) get a better picture about my own thoughts while I'm thinking. That resonates with the concept of reflection in action that I've discussed in Listen to Your Tools and Materials a few years ago.

So, as I was reading a recent issue of IEEE Software (March/April 2009), I found a list of recent (and not so recent) tools dealing with design rationale in a paper by Philippe Kruchten, Rafael Capilla, Juan Carlos Dueñas (The Decision View’s Role in Software Architecture Practice), and I decided to take a quick ride. Here is a very quick summary of what I've found.

Seurat
Seurat (see also the PDF tutorial on the same website) is based on a very powerful language / model, but the tool (as implemented) is very limiting. It's based on a tree structure, which makes for a nice to-do list, but makes visual reasoning almost impossible. Actually, in the past I've investigated using the tree format myself (and while doing so, I discovered others have done the same: see for instance the Reasoning Tree pattern), but restricting visualization to (hyperlinked) nodes in a tree just does not work when you're facing difficult problems.

Sysiphus
Sysiphus seems to have recently morphed into another tool (UniCase), but from the demo of UniCase it's hard to appreciate any special support for design rationale (so far).

AREL
(see also some papers from Antony Tang on the same page; Antony also had an excellent paper on AREL in the same issue of IEEE Software)
AREL is integrated with Enterprise Architect. Integration with existing case tools (either commercial or free) seems quite a good idea to me. AREL uses a class diagram (through a UML profile) to model design rationale, so it's not limited to a tree format. Still, I've found the results rather hard to read. It seems more like a tool to give structure to design knowledge than a tool to reason about design. As I go through the examples, I have to study the diagram; it doesn't just talk back to me. I have to click around and look at other artifacts. The reasoning is not in the diagram, it's only accessible through the diagram.

PAKME
Honestly, PAKME seems more like an exercise in building a web-based collaboration tool for software development than a serious attempt at providing a useful / usable tool to record design rationale. It does little more than organize artifacts, and it requires so many clicks / page refresh to get anything done that I doubt a professional designer could ever use it (sorry guys).

ADDSS
ADDSS is very much like PAKME, although it adds a useful Patterns section. It's so far from what I consider a useful design tool (see my for more) that I can't really think of using it (sorry, again).

Knowledge Architect
Again, a tool with some good ideas (like Word integration) but far from what I'm looking for. It's fine to create a structured design document, but not to reason about difficult design problems.

In the end, it seems like most of those tools suffer from the same problems:
- The research is good; a nice metamodel is built, some of the problems faced by professional designers seem to be well understood.
- The tool does little more than organize knowledge, would get in the way of the designer thinking about thorny issues, does not help through visualization, and is at best useful at the end of the design process, possibly to fake some rationality, a-la Parnas/Clements.

That said, AREL is probably the most promising tool of the pack, but in the end I've been doing pretty much the same thing for years now, using (well, abusing :-) plain old use case diagrams to model goals and issues, with a few ideas taken from KAOS and the like.

Recently, I began experimenting with another standard UML diagram (the activity diagram) to model some portion of design reasoning. I'll show an example in my next post, and then show how we can change our perspective and move from design reasoning to the forcefield.


Tuesday, June 02, 2009 

Good Design

I rarely (if ever) blog about technology, mostly because once you cut the marketing cr@p, consumer technology is often so moot. Still, a few days ago I read about local dimming in the news section of IEEE Computer. A good designer should be quick to spot good (or intriguing) design, and that idea struck me as an excellent use of technology.

It's also interesting to look at it from a forcefield perspective. CCFLs had several drawbacks as light sources for LCD displays. Some of those issues have been resolved using LED backlighting instead, but if we stop there, we're just using new technology to solve the exact same problem we solved with yesterday's technology. That's usually the wrong approach, as the old technology was part of a larger design, a larger forcefield, and it managed to resolve only some of those forces.

Back to local dimming, the idea is amazingly simple from the forcefield perspective: instead of using lamps for lighting and the LCD for contrast, color, etc., split some of the work between the LEDs and the LCD. This can be done because once we introduce an LED matrix, the forcefield itself changes. This has long been known: when we introduce technology, we can even change the problem itself.

Of course, we face similar issues in software all the time. I wrote something along the same lines in IEEE Software back in 1997 (When Past Solutions Cause Future Problems). I wasn't talking forcefield back then, but the "ask why" suggestion is very much forcefield friendly. More on this shortly, as I'm trying to catch up with many ideas I didn't have time to blog about, and write them down in small chunks...


Sunday, May 10, 2009 

Interesting paper

While looking for something else, I stumbled on a paper with an intriguing title: The Ambiguity Criterion in Software Design by Álvaro García and Nelson Medinilla.

I encourage readers interested in the concepts of design and form to take a look. Although I don't really like the term "ambiguity" (it makes for a catchy title, but it's commonly used with quite a different semantics) I think the paper is dealing with an interesting, pervasive attribute of software.

If you have read my previous posts on software design, you may recognize (although not spelled that way) the [almost] fractal nature of "ambiguity". Actually, as I spoke of "n-degrees of separation" in a previous post, I had some overlapping concepts in mind. Curiously enough, subtyping is also mentioned in another article I recommended some time ago about symmetry and symmetry breaking.

I think there is something even more primitive than that at play here, something more fractal in nature, something that has to do with names and identities or (as the authors note) abstractions and instances. I also mentioned a problem with compile-time names in the post above, so there is a lot of stuff pointing in the same direction!

I have to think more about that, but first I'll have to write down what's left about frequency...


Sunday, February 22, 2009 

Notes on Software Design, Chapter 4: Gravity and Architecture

In my previous posts, I described gravity and inertia. At first, gravity may seem to have a negative connotation, like a force we constantly have to fight. In a sense, that's true; in a sense, it's also true for its physical counterpart: every day, we spend a lot of energy fighting earth gravity. However, without gravity, life as we know it would never exist. There is always a bright side :-).

In the software realm, gravity can be exploited by setting up a favorable force field. Remember that gravity is a rather dumb :-) force, merely attracting things. Therefore, if we come up with the right gravitational centers early on, they will keep attracting the right things. This is the role of architecture: to provide an initial, balanced set of centers.

Consider the little thorny problem I described back in October. Introducing Stage 1, I said: "the critical choice [...] was to choose where to put the display logic: in the existing process, in a new process connected via IPC, in a new process connected to a [RT] database".
We can now review that decision within the framework of gravitational centers.

Adding the display logic into the existing process is the path of least resistance: we have only one process, and gravity is pulling new code into that process. Where is the downside? A bloated process, sure, but also the practical impossibility of sharing the display logic with other processes.
Reuse requires separation. This, however, is just the tip of the iceberg: reuse is just an instance of a much more general force, which I'll cover in the forthcoming posts.

Moving the display logic inside a separate component is a necessary step toward [independent] reusability, and also toward the rarely understood concept of a scaled-down architecture.
A frequently quoted paper from David Parnas (one of the most gifted software designers of all times) is properly titled "Designing Software for Ease of Extension and Contraction" (IEEE Transactions on Software Engineering, Vol. 5 No. 2, March 1979). Somehow, people often forget the contraction part.
Indeed, I've often seen systems where the only chance to provide a scaled-down version to customers is to hide the portion of user interface that is exposing the "optional" functionality, often with questionable aesthetics, and always with more trouble than one could possibly want.

Note how, once we have a separate module for display, new display models are naturally attracted into that module, leaving the acquisition system alone. This is gravity working for us, not against us, because we have provided the right center. That's also the bright side of the thorny problem, exactly because (at that point, that is, stage 2) we [still] have the right centers.

Is the choice of using an RTDB to further decouple the data acquisition system and the display system any better than having just two layers?
I encourage you to think about it: it is not necessarily trivial to understand what is going on at the forcefield level. Sure, the RTDB becomes a new gravitational center, but is a 3-pole system any better in this case? Why? I'll get back to this in my next post.

Architecture and Gravity
Within the right architecture, features are naturally attracted to the "best" gravitational center.
The "right" architecture, therefore, must provide the right gravitational centers, so that features are naturally attracted to the right place, where (if necessary) they will be kept apart from other features at a finer granularity level, through careful design and/or careful refactoring.
Therefore, the right architecture is not just helping us cope with gravity: it's helping us exploit gravity to our own advantage.

The wrong architecture, however, will often conspire with gravity to preserve itself.
As part of my consulting activity, I’ve seen several systems where the initial partitioning of responsibility wasn’t right. The development team didn’t have enough experience (with software design and/or with the problem domain) to find out the core concepts, the core issues, the core centers.
The system was partitioned along the wrong lines, and as mass increased, gravity kicked in. The system grew with the wrong form, which was not in frictionless contact with the context.
At some point, people considered refactoring, but it was too costly, because mass brings Inertia, and inertia affects any attempt to change direction. Inertia keeps a bad system in a bad state. In a properly partitioned system, instead, we have many options for change: small subsystems won’t put up much of a fight. That’s the dream behind the SOA concept.
I already said this, but it's worth repeating: gravity is working at all granularity levels, from distributed computing down to the smallest function. That's why we have to keep both design and code constantly clean. Architecture alone is not enough. Good programmers are always essential for quality development.

What about patterns? Patterns can lower the amount of energy we have to spend to create the right architecture. Of course, they can do so because someone else spent some energy re-discovering good ideas, cleaning them up, going through shepherding and publishing, and because we spent some time learning about them. That said, patterns often provide an initial set of centers, balancing out some forces (not restricted to gravity).
Of course, we can't just throw patterns against a problem: the form must be in effortless contact with the real problem we're facing. I've seen too many well-intentioned (and not so experienced :-) software designers start with patterns. But we have to understand forces first, and adopt the right patterns later.

Enough with mass and gravity. Next time, we're gonna talk about another primordial force, pushing things apart.

See you soon, I hope!


Saturday, December 06, 2008 

Notes on Software Design, Chapter 2: Mass and Gravity

Mass is a simple concept, which is better understood by comparison. For instance, a long function has bigger mass than a short one. A class with several methods and fields has bigger mass than a class with just a few methods and fields. A database with a large number of tables has bigger mass than a database with a few. A database table with many fields has bigger mass than a table with just a few. And so on.

Mass, as discussed above, is a static concept. We don't look at the number of records in a database, or at the number of instances for a class. Those numbers are not irrelevant, of course, but they do not contribute to mass as discussed here.

Although we can probably come up with a precise definition of mass, I'll not try to. I'm fine with informal concepts, at least at this time.

Mass exerts gravitational attraction, which is probably the most primitive force we (as software designers) have to deal with. Gravitational attraction makes large functions or classes attract more LOCs, large components attract more classes and functions, monolithic programs keep growing as monoliths, and 1-tier or 2-tier applications fight back as we try to add one more tier. Along the same lines, a single large database will get more tables; a table with many fields will attract more fields, and so on.

We achieve low mass, and therefore smaller and balanced gravity, through careful partitioning. Partitioning is an essential step in software design, yet separation always entails a cost. It should not surprise you that the cost of [fighting] gravity has the same fractal nature of separation.

A first source of cost is performance loss:
- Hardware separation requires serialization/marshaling, network transfer, synchronization, and so on.
- Process separation requires serialization/marshaling, synchronization, context switching, and so on.
- In-process component separation requires indirect function calls or load-time fix-up, and may require some degree of marshaling (depending on the component technology you choose)
- Interface – Implementation separation requires (among other things) data to be hidden (hence more function calls), prevents function inlining (or makes it more difficult), and so on.
- In-component access protection prevents, in many cases, exploitation of the global application state. This is a complex concept that I need to defer to another time.
- Function separation requires passing parameters, jumping to a different instruction, jumping back.
- Mass storage separation prevents relational algebra and query optimization.
- Different tables require a join, which can be quite costly (here the number of records resurfaces!).
- (the overhead of in-memory separation is basically subsumed by function separation).

A second source of cost is scaffolding and plumbing:
- Hardware separation requires network services, more robust error handling, protocol design and implementation, bandwidth estimation and control, more sophisticated debugging tools, and so on.
- Process separation requires most of the same.
- And so on (useful exercise!)

A third source of cost is human understanding:
Unfortunately, many people don’t have the ability to reason at different abstraction levels, yet this is exactly what we need to work effectively with a distributed, component-based, multi-database, fine-grained architecture with polymorphic behavior. The average programmer will find a monolithic architecture built around a single (albeit large) database, with a few large classes, much easier to deal with. This is only partially related to education, experience, and tools.

The ugly side of gravity is that it’s a natural, incremental, attractive, self-sustaining force.
It starts with a single line of code. The next line is attracted to the same function, and so on. It takes some work to create yet another function; yet another class; yet another component (here technology can help or hurt a lot); yet another process.
Without conscious appreciation of other forces, gravity makes sure that the minimum resistance path is followed, and that’s always to keep things together. This is why so much software is just a big ball of mud.

Enough for today. Still, there is more to say about mass, gravity and inertia, and a lot more about other (balancing) forces, so see you guys soon...

Breadcrumb trail: instance/record count cannot be ignored at design time. Remember to discuss the underlying forces.


Sunday, October 26, 2008 

Microblogging is not my thing...

A few weeks ago I got a phone call from a client. They want to insource a mission-critical piece of code. I talked about the concept of Habitable Software and thought I could write something here.

As I started to write, words unfolded in unexpected rivers. Apparently, I've got too much to say [and too little time].

So, I tried to use a mind map to get an overview of what I was trying to say.

Here it is (click to get a full-scale pdf):



Strictly speaking, it's not even a mind map, as I drew a graph, not a tree. I find the tree format very limiting, which is probably a side-effect of keeping a lot of connections in my mind.

Another side effect is that I find micro-blogging unsatisfactory. Sure, I could post something like:

interesting book, take a look

and get over, but it's just not my style.

Anyway, I'll try to keep this short and just add a link to the presentation on form Vs. function that I mentioned in the mind map: Integrating Form and Function. Don't mind the LISP stuff :-). That thing about the essential and contingent interpreter is great.

More on all this another time, as I manage to unravel the fabric of my mind :-)


Wednesday, September 03, 2008 

Horizontal scaling

I finally took the time to read BASE: An ACID Alternative by Dan Pritchett in ACM Queue (the paper is free also for non-members). It's short and simple, but highlights a frequent problem in horizontal scaling: when you start partitioning databases (or services, for that matter) you step into distributed transactions, and that's kinda slow, man.

Of course, we get into distributed transactions because we want to preserve transactional thinking and the ACID properties. However, as Dan notes, we can trade some (short-term) consistency for higher availability and performance. As you'll see, that raises the bar for infrastructure, requiring at least a persistent queuing mechanism.
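
Here is a small sketch of the shape of the idea (not code from Pritchett's paper; the queue is an in-memory stand-in for a persistent one, and the names are invented): commit the user-visible change locally, enqueue the cross-partition update, and let a worker apply it later, eventually.

```cpp
#include <deque>
#include <iostream>
#include <string>
#include <utility>

struct UpdateMessage { std::string account; long long amountCents; };

class PersistentQueue {             // stand-in; the real one must survive crashes
public:
    void enqueue(UpdateMessage m) { messages_.push_back(std::move(m)); }
    bool dequeue(UpdateMessage& out) {
        if (messages_.empty()) return false;
        out = messages_.front();
        messages_.pop_front();
        return true;
    }
private:
    std::deque<UpdateMessage> messages_;
};

void placeOrder(PersistentQueue& q) {
    // 1) local transaction: insert the order row, commit, answer the user now
    std::cout << "order committed locally\n";
    // 2) defer the cross-partition update instead of opening a distributed transaction
    q.enqueue({"seller-123", 4999});
}

void settlementWorker(PersistentQueue& q) {
    UpdateMessage m;
    while (q.dequeue(m))            // applied later: eventually consistent
        std::cout << "credited " << m.amountCents << " cents to " << m.account << '\n';
}

int main() {
    PersistentQueue q;
    placeOrder(q);
    settlementWorker(q);
}
```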

The author comes from eBay, so we can assume he knows a thing or two about scalability :-). Also, the paper is very readable, and the same techniques can be successfully used in very complex systems (I remember using similar techniques in quite a few banking applications). The trick is to let go of some of the orthodoxy about transactions, and design your data layer for scalability.

Still on the issue of scalability, I can also recommend another article from ACM Queue (again, free to non-members): Learning from THE WEB by Adam Bosworth (VP of engineering at Google).
I especially like point 3 because it's so damn unorthodox: It is acceptable to be stale much of the time. Again, it's not easy to accept this, and to adapt our applications (and requirements!) accordingly. In many cases, we just can't do it. In many (many more than people are inclined to accept) we can.

There is also a strong connection with the BASE approach above, and both require a little out-of-the-box thinking to be applied. We may have to tweak requirements a little, to move closer to the sweet spot between technology and business. More on this another time :-).


Wednesday, June 25, 2008 

More on Code Clones

I've been talking about code clones before. It's a simple metric that I've used in several projects with encouraging results.

Till not long ago, however, I thought code clone detection was useful mostly to:

1) Assess and monitor an interesting quality aspect of a product
This requires that we constantly monitor code clones. If some code already exists, we can create a baseline and enforce a rule that things can only get better, not worse. I usually monitor several internal quality attributes at build time, because that's a fairly flexible moment, where most tools allow you to insert some custom steps.

2) Identify candidates for refactoring, mostly in large, pre-existing projects.
This requires, of course, a certain willingness to act on your knowledge, that is, to actually go ahead and refactor duplicated code.

Sometimes, when the codebase is large, resources are scarce, or the company's interest in software quality is mostly a marketing statement disconnected from reality, a commitment to refactor the code is never made, or never taken seriously, which is about the same.

Here comes the third use of code clones. It is quite obvious, and I should have considered it earlier, but for some reason I didn't. I guess I was somehow blinded by the idea that if you care about quality, you must get in there and refactor the damn code. Strong beliefs are always detrimental to creativity :-).

Now: clones are bad because (in most cases) you have to keep them in synch during maintenance. If you don't, something bad is gonna happen (and yes, if you do, you waste a lot of time anyway, so you might as well refactor; but this is that strong belief rearing its head again :-).
So, if you don't want to use a code clones list to start a refactoring campaign, what else can you do? Use it to make sure you didn't forget to update a clone!

Unfortunately, with the tools I know, a large part of this process can't be easily automated. You would have to run a clone detection tool and keep the log somewhere. Then, whenever you change some portion of code, you'll have to check if that portion is cloned elsewhere (from the log). You then port your change to the other clones (and test everything). The clones list must be updated periodically, also to account for changes coming from other programmers.
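Just to give the flavor of what I mean, here is a small sketch of that check. The clone log format (file;start;end;group) is made up, since every detection tool has its own; think of it as pseudo-tooling wrapped around a real detector.

// A minimal sketch: given a (hypothetical) clone log with lines like
//   Billing.cs;120;145;group-7
// report the other members of the group whenever the edited line falls inside a clone.
using System;
using System.IO;
using System.Linq;

class CloneEntry
{
  public string File;
  public int Start, End;
  public string Group;
}

static class CloneChecker
{
  static void Main( string[] args )
  {
    string editedFile = args[ 0 ];
    int editedLine = int.Parse( args[ 1 ] );

    var clones = File.ReadAllLines( "clones.log" )
      .Select( line => line.Split( ';' ) )
      .Select( f => new CloneEntry { File = f[ 0 ], Start = int.Parse( f[ 1 ] ), End = int.Parse( f[ 2 ] ), Group = f[ 3 ] } )
      .ToList();

    var hit = clones.FirstOrDefault( c => c.File == editedFile && editedLine >= c.Start && editedLine <= c.End );
    if( hit == null ) return;

    Console.WriteLine( "You are editing cloned code (group {0}). Also check:", hit.Group );
    foreach( var other in clones.Where( c => c.Group == hit.Group && c != hit ) )
      Console.WriteLine( "  {0} lines {1}-{2}", other.File, other.Start, other.End );
  }
}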

Better tools can be easily conceived. Ideally, this could be integrated in your IDE: as I suggested in Listen to Your Tools and Materials, editors could provide unobtrusive backtalk, highlighting the fact that you're changing a portion of code that has been cloned elsewhere. From there, you could jump into the other files, or ask the editor to apply the same change automatically. In the end, that would make clones more tolerable; while this is arguably bad, it's still much better than leaving them out of sync.

From that perspective, I would say that another interesting place in our toolchain where we would benefit from an open, customizable process is the version control system. Ideally, we may want to verify and enforce rules right at check-in time, without the need to delay checks until build time. Open source tools are an obvious opportunity to create a better breed of version control systems, which so far (leaving a few religious issues aside) have been more or less leveled in terms of available features.

Note: I've been writing this post on an EEE PC (the Linux version), and I kinda like it. Although I'm not really into tech toys, and although the EEE looks and feels :-) like a toy, it's just great to carry around while traveling. The tiny keyboard is a little awkward to use, but I'll get used to it...


Tuesday, May 13, 2008 

Natural language

Some (most :-) of my clients are challenging. Sometimes the challenge comes from the difficult technical problems they face. That's the best kind of challenge.
Sometimes the challenge comes from people: that's the worst kind of challenge, and one that right now is better left alone.
Sometimes the challenge comes from the organization, which means it also comes from people, but with a different twist. Challenges coming from the organization are always tough, but overcoming those challenges can really make a difference.

One of my challenging clients is a rather large company in the financial domain. They are definitely old-school, and although upper management can perfectly see how software is permeating and enabling their business, middle management tends to see software as a liability. In their eternal search for lower costs, they moved most of the development offshore, keeping only a handful of designers and all the analysts in-house. Most often, design is done offshore as well, for lack of available designers on this side of the world.

Analysts have a tough job there. On one side, they have to face the rest of the company, which is not software-friendly. On the other side, they have to communicate clear requirements to the offshore team, especially to the designers, who tend to be very technology-oriented.
To make things more complicated, the analysts often find themselves working on unfamiliar sub-domains, with precise regulations but also with large gray areas that must be somehow understood and communicated.
Icing on the cake: some of those financial instruments do not even exist in the local culture of the offshore team, making communication all the more difficult.

Given this overall picture, I've often recommended that analysts spend some time creating a good domain model (usually, a UML class diagram, occasionally complemented by some activity diagrams).
The model, with unambiguous associations, dependencies, multiplicities, and so on, will force them to ask the right questions, and will make it easier for the offshore designer to acquaint himself with the problem. Over time, this suggestion has been quite helpful.
However, as I said, the organization is challenging. Some of the analysts complained that their boss is not satisfied with a few diagrams. He wants a lengthy, wordy explanation, so he can read it over and see if they got it right (well, that's his theory anyway). The poor analyst can't possibly do everything in the allotted time.

Now, I always keep an eye on software engineering research. I've seen countless attempts to create UML diagrams from natural language specifications. The results are usually unimpressive.
In this case, however, I would need exactly the opposite: a tool to generate a precise, yet verbose domain description out of a formal domain model. The problem is much easier to solve, especially because analysts can help the tool, by using the appropriate wording.

Guess what, the problem must be considered unworthy, because there is a dearth of work in that area. In practice, the only relevant paper I've been able to find is Generating Natural Language specifications from UML class diagrams by Farid Meziane, Nikos Athanasakis and Sophia Ananiadou. There is also Nikos' thesis online, with a few more details.
The downside is that (as usual) the tool they describe does not seem to be generally available. I've yet to contact the authors: I just hope it doesn't turn out to be one of those Re$earch Tool$ that never get to be used.

From the paper above, I've also learnt about ModelExplainer, a similar tool from a commercial company. Again, the tool doesn't seem to be generally available, but I'll get in touch with people there and see.

Overall, the problem doesn't seem so hard, especially if we accept the idea that the analyst will help the tool, choosing appropriate wording. An XMI-to-NL (Natural Language) generator would make for a perfect open source project. Any takers? :-)
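Just to show how small the core of such a tool could be, here is a sketch that assumes a drastically simplified, XMI-like input, where classes and associations have been flattened into <Class> and <Association> elements (real XMI is far more convoluted, and the element and attribute names below are entirely made up). Note how the analyst helps the tool by providing a verb for each association.

// A minimal sketch of model-to-text generation over a simplified, XMI-like file:
//   <Class name="Customer"/>
//   <Association from="Customer" to="Order" verb="places" multiplicity="0..*"/>
using System;
using System.Xml.Linq;

static class ModelToText
{
  static void Main()
  {
    XDocument model = XDocument.Load( "model.xml" );

    foreach( XElement c in model.Descendants( "Class" ) )
      Console.WriteLine( "The system deals with the concept of {0}.", c.Attribute( "name" ).Value );

    foreach( XElement a in model.Descendants( "Association" ) )
    {
      string count = a.Attribute( "multiplicity" ).Value == "0..*" ? "any number of" : "exactly one";
      Console.WriteLine( "A {0} {1} {2} {3}.",
        a.Attribute( "from" ).Value,
        a.Attribute( "verb" ).Value,   // wording provided by the analyst
        count,
        a.Attribute( "to" ).Value );
    }
  }
}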


Friday, April 25, 2008 

Can AOP inform OOP (toward SOA, too? :-) [part 2]

Aspect-oriented programming is still largely code-centric. This is not surprising, as OOP went through the same process: early emphasis was on coding, and it took quite a few years before OOD was ready for prime time. The truth about OOA is that it never really got its share (guess use cases just killed it).

This is not to say that nobody is thinking about the so-called early aspects. A notable work is the Theme Approach (there is also a good book about Theme). Please stay away from the depressing idea that use cases are aspects; as I said a long time ago, it's just too lame.

My personal view on early aspects is quite simple: right now, I mostly look for cross-cutting business rules as candidate aspects. I guess it's quite obvious that the whole "friend gift" concept is a business rule cutting through User and Subscription, and therefore a candidate aspect. Although I'm not saying that all early aspects are cross-cutting business rules (or vice-versa), so far this simple guideline has served me well in a number of cases.

It is interesting to see how early aspects tend to be cross-cutting (that is, they hook into more than one class) but not pervasive. An example of a pervasive concern is the ubiquitous logging. Early aspects tend to cut through a few selected classes, and tend to be non-reusable (while a logging aspect can be made highly reusable).
This seems at odds with the idea that "AOP is not for singletons", but I've already expressed my doubts about the validity of this suggestion a long time ago. It seems to me that AOP is still in its infancy when it comes to good principles.

Which brings me to obliviousness. Obliviousness is an interesting concept, but just as it happened with inheritance in the early OOP days, people tend to get carried away.
Remember when white-box inheritance was applied without understanding (for instance) the fragile base class problem?
People may view inheritance as a way to "patch" a base class and change its behaviour in unexpected ways. But the truth is, a base class must be designed to be extended, and extension can take place only through well-defined extension hot-points. It is not a rare occurrence to refactor a base class to make extension safer.

Aspects are not really different. People may view aspects as a way to "patch" existing code and change its behaviour in unexpected ways. But the truth is, when you move outside the safe realm of spectators (see my post above for more), your code needs to be designed for interception.
Consider, for instance, the initial idea of patching the User class through aspects, adding a data member, and adding the corresponding data to the database. Can your persistence logic be patched through an aspect? Well, it depends!
Existing AOP languages can't advise any given line: there is a fixed grammar for pointcuts, like method call, data member access, and so on. So if your persistence code was (trivially)

class User
{
  void Save()
  {
    // open a transaction
    // do your SQL stuff
    // close the transaction
  }
}
there would be no way to participate in the transaction from an aspect. You would have to refactor your code, e.g. by moving the SQL part into a separate method, taking the transaction as a parameter. Can we still call this obliviousness? That's highly debatable! I may not know the details of the advice, but I damn sure know I'm being advised, as I refactored my code to be pointcut-friendly.
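For instance, the refactoring could be as simple as this sketch (Transaction here is a stand-in for whatever transaction API you're actually using):

// The SQL part moves into a separate method taking the transaction as a
// parameter; an aspect (or a derived class) can now hook that method and
// take part in the same transaction. Transaction is a hypothetical type.
class User
{
  public void Save()
  {
    using( Transaction tx = Transaction.Begin() )
    {
      SaveInTransaction( tx );
      tx.Commit();
    }
  }

  protected virtual void SaveInTransaction( Transaction tx )
  {
    // do your SQL stuff, using tx
  }
}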

Is this really different from exposing a CreateSubscription event? Yeah, well, it's more code to write. But in many data-oriented applications, a well-administered dose of events in the CRUD methods can take you a long way toward a more flexible architecture.
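Here is the event-based flavor of the same idea, again just a sketch (same hypothetical Transaction type as above): the "friend gift" logic can subscribe to Saving without the persistence code ever being touched again.

// A well-administered dose of events in the CRUD methods: subscribers can
// participate in the same transaction without patching Save itself.
using System;

class UserSavingEventArgs : EventArgs
{
  public Transaction Tx;
  public User User;
}

class User
{
  public event EventHandler<UserSavingEventArgs> Saving;

  public void Save()
  {
    using( Transaction tx = Transaction.Begin() )
    {
      if( Saving != null )
        Saving( this, new UserSavingEventArgs { Tx = tx, User = this } );
      // do your SQL stuff, using tx
      tx.Commit();
    }
  }
}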

A closing remark on the SOA part. SOA is still much of a buzzword, and many good [design] ideas are still in the decontextualized stage, where people are expected to blindly follow some rule without understanding the impact of what they're doing.

In my view, a crucial step toward SOA is modularity. Modularity has to take place at all levels, even (this will sound like heresy to some) at the database level. Ideally (this is not a constraint to be forced, but a force to be considered) every service will own its own tables. No more huge SQL statements traversing every tidbit in the database.

Therefore, if you consider the "friend gift" as a separate service, it is only natural to avoid tangling the User class, the Subscription class, and the User table with information that just doesn't belong there. In a nutshell, separating a cross-cutting business rule into an aspect-like class will bring you to a more modular architecture, and modularity is one of the keys to true SOA.


Friday, April 04, 2008 

Asymmetry

I'm working on an interesting project, trying to squeeze all the available information from sampled data and make that information useful for non-technical users. I can't provide details, but in the end it boils down to reading a log file from a device (amounting to about 1 hour of sampled data from multiple channels), doing the usual statistics, noise filtering, whatever :-), calculating some pretty useful stuff, and creating a report that makes all that accessible to a business expert.

The log file is (guess what :-) in XML format, meaning it's really huge. However, thanks to modern technology, we just generated a bunch of classes from the XSD and let .NET do the rest. Parsing is actually pretty fast, and took basically no time to write.
In the end, we just get a huge collection of SamplingPoint objects. Each SamplingPoint is basically a structure-like class, synthesized from the XSD:

class SamplingPoint
{
  public DateTime Timestamp { get; set; }
  public double V1 { get; set; }
  // ...
  public double Vn { get; set; }
}

Each value (V1...Vn) comes from a different channel and may have a different unit of measurement. They're sampled synchronously, so it made sense for whoever developed the data acquisition module to group them together and dump them in a single SamplingPoint tag.

We extract many interesting facts from those data, but for each Vi (i=1...N) we also show some "traditional" statistics, like average, standard deviation and so on.
Reasoning about average and standard deviation is not for everyone: I usually consider a histogram of the distribution much easier to understand (and to compare with other histograms):

[Figure: histogram of the distribution of V1 over time]
Here we see the distribution of V1 over time: for instance, V1 had a value between 8 and 9 for about 6% of the time. Histograms are easy to read, and users quickly asked to see histograms for each V1..Vn over time. Actually, since one of the Vj is monotonically increasing with time, they also asked to see the histogram of the remaining Vi against Vj too. So far, so good.

Now, sometimes I hate writing code :-). It usually happens when my language doesn't allow me to write beautiful code. Writing a function to calculate the histogram of (e.g.) V1 against time is trivial: you end up with a short piece of code taking an array of SamplingPoints and using the V1 and Timestamp properties to calculate the histogram. No big deal.
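The trivial version looks more or less like this (bucket handling simplified, no error checking; each sample is weighted by the time elapsed until the next one):

// Hardwired to V1 and Timestamp: fine once, useless for V2..Vn or for Vi against Vj.
double[] HistogramOfV1( SamplingPoint[] points, double bucketWidth )
{
  double min = points[ 0 ].V1, max = points[ 0 ].V1;
  foreach( SamplingPoint p in points )
  {
    if( p.V1 < min ) min = p.V1;
    if( p.V1 > max ) max = p.V1;
  }

  double[] histogram = new double[ (int)( ( max - min ) / bucketWidth ) + 1 ];
  double totalTime = ( points[ points.Length - 1 ].Timestamp - points[ 0 ].Timestamp ).TotalSeconds;

  // weight each sample by the time elapsed until the next one
  for( int i = 0; i < points.Length - 1; ++i )
  {
    double weight = ( points[ i + 1 ].Timestamp - points[ i ].Timestamp ).TotalSeconds;
    histogram[ (int)( ( points[ i ].V1 - min ) / bucketWidth ) ] += 100.0 * weight / totalTime;
  }

  return histogram; // percentage of time spent in each bucket
}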

However, that function is not reusable, exactly because it's using V1 and Timestamp. You can deal with this in at least 3 unpleasant :-) ways:

1) you don't care: you just copy/paste the whole thing over and over. If N = 10, you get 19 almost-identical functions (10 for time, 9 for Vj).

2) you restructure your data before processing. Grouping all the sampled data at a given time in a single SamplingPoint structure makes a lot of sense from a producer point of view, but it's not very handy from a consumer point of view. Having a structure of arrays (of double) instead of an array of structures would make everything so much simpler.

3) you write an "accessor" interface and N "accessor" classes, one for each Vi. You write your algorithms using accessors. Passing the right accessors (e.g. for time and V1) will get you the right histogram.

All these options have some pros and cons. In the end, I guess most people would go with (2), because that brings us into the familiar realm of array-based algorithms.

However, stated like this, it seems more like a "data impedance" problem between two subsystems than a language problem. Why did I say it's the language's fault? Because the language forces me to access data members with compile-time names, and does not (immediately) allow me to access data members using run-time names.

Don't get me wrong: I like static typing, and I like compiled languages. I know from experience that I tend to make little stupid mistakes, like typing the wrong variable name and stuff like that. Static typing and compiled languages catch most of those stupid mistakes, and that makes my life easier.

Still, the fact that I like something doesn't mean I want to use that thing all the time. I want to have options. Especially when those options would be so simple to provide.

In a heavily reflective environment like .NET, every class can easily be considered an associative container, mapping property/data member names to property/data member values. So I should be able to write (if I wanted):

SamplingPoint sp = ... ;
double d1 = sp[ "V1" ] ;

which should be equivalent to

double d1 = sp.V1 ;

Of course, that would make my histogram code instantly reusable: I'll just pass the run-time names of the two axes. You can consider this equivalent to built-in accessors.

Now, I could implement something like that on my own, using reflection. It's not really difficult: you just have to gracefully handle collections, nested objects, and so on. Unfortunately, C# (.NET) does not allow a nice implementation of the concept, mostly because of a bunch of constraints they added to conversion operators: no generic conversion operators (unlike C++), no conversion to/from Object, and so on. In the end you may need a few more casts than you'd like, but it can be done.
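Here is the gist of it, leaving out collections, nested objects, caching, and all the conversion niceties (it also covers only the double-valued channels; a real version would return object, or convert, so that Timestamp could be accessed the same way):

// A minimal sketch of the reflective accessor: no PropertyInfo caching,
// no nested objects, no collections, doubles only.
using System.Reflection;

class SamplingPoint
{
  // ... Timestamp, V1 .. Vn as before ...

  public double this[ string propertyName ]
  {
    get
    {
      PropertyInfo p = GetType().GetProperty( propertyName );
      return (double)p.GetValue( this, null );
    }
  }
}

With that in place, the histogram function can take the two axis names as strings at run-time, and the 19 almost-identical copies collapse into one.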

I'll also have to evaluate the performance implications for this kind of application, but I know it would make my life much easier in other applications (like binding smart widgets to a variety of classes, removing the need for braindead "controller" classes). It's just a pity that we don't have this as a built-in language feature: it would be much easier to get this right (and efficient) at the language level, not at the library level (at least, given C# as it is now).

Which brings me to the concept of symmetry. A few months ago I stumbled upon a paper by Jim Coplien and Zhao Liping (Understanding Symmetry in Object-Oriented Languages, published in the Journal of Object Technology, an interesting, free publication that's filling the void left by the demise of JOOP). Given my interest in the concept of form in software, the paper caught my attention, but I postponed further thinking till I could read more on the subject (there are several papers on symmetry in Cope's bibliography, but I need a little more time than I have). A week ago or so, I've also found (and read) another paper from Zhao in Communications of the ACM, March 2008: Patterns, Symmetry, and Symmetry breaking.

Some of their concepts sound odd to me. The concept of symmetry is just fine, and I think it may help to unravel some issues in language design.
However, right now the idea that patterns are a way to break symmetry doesn't feel so good. I would say exactly the opposite, but I really have to read their more mathematically-inclined papers before I say more, because words can be misleading, while theorems usually are not :-).

Still, the inability to have built-in, natural access to fields and properties through run-time names struck me as a lack of symmetry in the language. In this sense, the Accessor would simply be a way to deal with that lack of symmetry. Therefore it seems to me that patterns are a way to bring back symmetry, not to break symmetry. In fact, I can think of many cases where patterns "expose" some semantic symmetry that was not accessible because of (merely) syntactic asymmetry.

More on this as I dig deeper :-).


Friday, March 07, 2008 

Problem frames and the DNC

If you've read any of my posts before, you probably know I'm not a fan of pre-packaged, one-size-fits-all ideas. Methods, technology, languages, models, even specific values (design values; business values; etc.) must always be considered in context.

With this background, it's hardly surprising that I've always found the notion of a Domain Neutral Component quite uncomfortable. It really sounded like an attempt to shoehorn the world into a predefined model, while we should carefully look for the relevant portion of the world we want to represent in our model.

Still, in many cases (especially for a junior analyst) starting with the DNC might be better than starting with a blank page. How could this be? Does it work all the time? If not, when? Honestly, in the past years I haven't spent too much time trying to answer those questions. The DNC was part of my bag of tricks, but I didn't use it often.

Recently, however, I was thinking (once more :-) about colors and UML, and while looking into some of Peter Coad's works for a specific reference, I stumbled on the DNC again. So I thought, maybe I've learnt something in the past few years that could shed some light on the inner quality of the DNC, and its suitability in any (?) given context.

The DNC can be considered as an overengineered "standard" model representing something (an event / moment / interval) happening somewhere (a place) involving one or more parties (originally, an actor), usually exchanging or dealing with some good (a thing). The party plays a role, hence the later shift from actor to party + role. Indeed, you can start with a very simple model, and "derive" the DNC by following a very reasonable line of reasoning: see From Associations To Domain Neutral Component for the full story.

Of course, in many cases, the DNC might be overengineered. But you can always simplify the unnecessary parts. The real question, however, is when the DNC can give you a head start, and when it won't (context, context, context :-).

That's where Problem Frames Patterns can shed some light. I recommend that you keep the PFP paper at hand while reading what follows.

Consider, for instance, the Commanded Behavior problem frame. In short, the problem is stated as:
There is some part of the world whose behavior is to be controlled in accordance with commands issued by an operator. The problem is to build a machine that will accept the operator's commands and impose the control accordingly.

and the frame concerns are:
1. When the Operator issues a Command
2. AND the Machine rejects invalid Commands
3. AND the Machine either ignores it if unviable, OR issues Control Events
4. AND the Control Events change the Controlled Domain
5. ENSURE the changed state meets the Commanded Behavior in every case.

That's not at odds with the DNC. Control Events map nicely to moment-interval; moreover, the text above suggests that multiple events might be issued for a single command (MomentInterval-MomentIntervalDetails). The Control Events change the Controlled Domain. Therefore, they must describe external entities (each probably having a Role) that must somehow influence internal entities (Party, having a Role, or Places, having a Role).
Using the Email Client example, "Email Retrieval" is an event, composed of individual retrieval events (one for each email message). Each Message is a Thing, although with a dubious Role. Retrieval needs (at least) an Account, which is a Party playing a specific Role (Receiver). Retrieval takes place on a specific Server, playing a Role (POP3 or IMAP server). Not so bad.

What if we look into another problem frame? Let's try Transformation:
There are some given inputs which must be transformed to give certain required outputs. The output data must be in a particular format, and it must be derived from the input data according to certain rules. The problem is to build a machine that will produce the required outputs from the inputs.

Concerns:
1. BY traversing the input in sequence, and simultaneously traversing the outputs in sequence
2. AND finding values in the input domain, and creating values in the output domain
3. AND that the input values produce the correct output values
4. ENSURES the I/O relation is satisfied.

Hmmm. Doesn't map so nicely, but the text is really too abstract. Let's try the actual problem: an HTML email to be converted to plain text, to be shown on a limited device (I added some context to the equally abstract problem described in the original paper). Well, there is hardly a dominant MomentInterval here. Hardly a party, place, thing triad gravitating around the central MI. Hardly any value in adopting the DNC as a starting point. What can be helpful here? Concepts from grammars, taxonomies, language theory. We're basically modeling a translator, and language theory will give you the head start.

So here it is. The DNC is an interesting concept because it maps nicely to some recurring problem frames (if you have time to spare, you may want to investigate which frames are a good fit for the DNC). Some problem frames, however, just don't match the DNC. It's not just about individual problems: it's about a whole class of problems, all those within the mismatched problem frames.

For me, this is actually good news. Once we know which problem frames map nicely to the DNC, I would say the DNC itself becomes a more powerful tool, one that can be applied wisely and not blindly.

Winding down: since it all began with colors, it's interesting to see how people used the DNC to reason about more general issues. For instance, in Whole Part Relationships in Color Models, David J. Anderson starts with the DNC and ends up recommending that we avoid some whole-part colors, like a green whole with a yellow part. There is probably more to investigate along those lines. Next time I get back to my idea of coloring associations and dependencies, I'll give it a deeper thought.


Sunday, February 17, 2008 

Cognitive Dimensions of Notations

I constantly hear that the blogosphere is highly self-referential, so what's better than starting a post by quoting yourself? Here is an excerpt from Listen to Your Tools and Materials:
Our material is knowledge, or information. We acquire, understand, filter, and structure information; and we encode it in a variety of formats: text, diagrams, code, and so on. [...] the diagram isn't a material, just a representation. We use a tool to represent the material, which is intangible knowledge. [...] What's peculiar with software is that, in many cases, the tools and the materials have the same nature.

Note that I'm using "tool" in a very broad sense there. UML is not our material; it is a tool we use to encode our material (and yeah, we may also use a software tool to draw the UML). Source code is not our material; it is a tool we use to encode our material.
The fact that we often confuse tools and materials, and that we can get along with that just fine, empirically proves that they are somehow similar in nature.

Now, if our tools and materials have a similar nature, then perhaps by better understanding the properties of our tools we can also better understand the properties of our materials, and ultimately of our creations. Assuming, of course, that we can somehow define and classify some interesting properties of our tools (that is, notations).

Turns out that people have been doing so for quite a while now. There is an interesting body of knowledge about the so-called Cognitive Dimensions of notations. I'll quote some portions from a short paper (Cognitive Dimensions of Notations: Design Tools for Cognitive Technology), but if you're interested, I recommend you dig deeper by reading Cognitive Dimensions of Information Artefacts: a tutorial.

When we use a notation, we are both limited and enabled by its own peculiarities. Our own style further emphasizes some of those attributes. For instance, a class diagram is relatively easy to change: you can quickly reshape your design into a very different one. In the Cognitive Dimensions vocabulary, we would say that its viscosity is low. However, if you have also modeled some scenarios on that class diagram (using, for instance, a sequence diagram) you have to work much harder to make changes. Sequence diagrams have high viscosity.

Now, I really like the choice of viscosity as an attribute. I have to confess: I like it because it's not nerdy. When we're dealing with intangible information, it's easy to find ourselves lacking the necessary words. In software development, there is then a widespread tendency to adopt weird, geeky terms.
Perhaps that's why I've immediately appreciated the Cognitive Dimensions framework. They put considerable effort into creating the vocabulary itself. Here is a quote from the aforementioned paper: We believe that this problem is best addressed by providing a vocabulary for discussing the design problems that might arise – a vocabulary informed by research in cognitive psychology, but oriented toward the understanding of a system developer. The Cognitive Dimensions of Notations are such a vocabulary. Bingo! :-)

If you spend some time familiarizing yourself with the vocabulary, you'll see how useful it can be to settle some long-standing debates, like code vs. diagrams or even static typing vs. dynamic typing. Consider, for instance:
Premature commitment: constraints on the order of doing things.
Hidden dependencies: important links between entities are not visible.
Error-proneness: the notation invites mistakes and the system gives little protection.

and so on. They provide a sound reference against which the pros and cons of different notations can be evaluated.

The real boon (for me) is: some of those concepts can be extended from the tool to the material! Actual code may or may not have Hidden Dependencies. Actual code may have different degrees of Viscosity. Actual code may or may not have Closeness of Mapping. And so on. Of course, the notation itself is providing a bottom line on some properties. But ultimately, the form you give to your material may move quite far from the bottom line.
Indeed, I believe the form itself will be heavily influenced by the process and notation you used when you conceived the form. If you conceive your form using diagrammatic reasoning, it will keep some of the inherent properties of that notation even when it's translated into another notation (like source code). If you start with source code, your form will be more heavily influenced by the properties of source code.

I also like some of the candidate dimensions. For instance, I'm especially fond of Creative Ambiguity. This, I would say, is one of the properties that code lacks the most. It's also something academics are trying hard to remove :-) from UML. And in a sense, it's what makes some practices like TDD so limited.
Code doesn't afford much creative ambiguity, yet what we call "modeling" should probably be called "incubation". Shaping a great form ain't easy. We need low viscosity, a good degree of creative ambiguity, high provisionality, and so on. Code alone just won't cut it.
Of course, we might not be after great form, just run-of-the-mill form. Then anything sensible is gonna work: see also my recent post on old-fashioned architectures "designed" for mass adoption.


Sunday, January 27, 2008 

Being 10 Years Behind (part 2)

Do you remember "Windows DNA"? If you can't, don't blame yourself, because the MSDN doesn't remember either :-).
Indeed, it seems like Microsoft took good care of removing most of the material on Windows DNA from its developer-oriented website.
However, here comes the TechNet website to the rescue (well, at least until they realize it :-). As you can see, the much touted "Distributed interNet Applications Architecture" was the usual 3-tier blurb. There is no date on that web page, but the "Windows DNA" stuff is about ten years old.

Sure, Windows DNA was all based on COM+ components, most likely implemented in Visual Basic, maybe glued to a presentation logic written in old-style ASP (VBScript all around). But look at that architecture again. Does it look familiar?

Let's take a look at some recent Microsoft-oriented paper on application architecture. For instance, in Microsoft .NET Pet Shop 3.x: Design Patterns and Architecture of the .NET Pet Shop the Microsoft-flavored "Data Access Layer" is introduced, along with a general architecture (see fig. 3) which looks absolutely identical to the old "Windows DNA" stuff.

Dig deeper (fig. 5, 6, 8, 9) and you'll also realize that the DAL structure is a mirror of the DB structure (that is, basically one class for each table). It looks a lot like the decade-old, fragile architecture I described in my previous post, except this paper is "just" 5 years old. Particularly dreadful is the "business entities" yellow box in fig. 8, spanning the 3 tiers with a set of hard-coded structures (which end up being a mirror of the database tables).

Fast forward to the present (sort of), and you get introductory papers like Creating a Data Access Layer, where again the same basic architecture is rehashed with the newfangled .NET 2.0 classes and wizards.

And oh yeah, if you really wanna feel up-to-date, LINQ will take care of the DB, no more SQL, thank you. Except they've just embedded SQL in C#, thereby exposing your code to the same fragility WRT changes in the database schema.

Now, why is Microsoft pushing (through authors and evangelists) old stuff like that? I've partially answered in a comment to a previous post, but I'll add a little more. It's not that they're not smart enough to do better. It's that they think we are not smart enough to do better (Sun doesn't think much differently either).
Indeed, the architecture they're selling is easy to explain, easy to understand, easy to implement piecemeal, without much thinking. It's almost a Marketecture (short for Marketing Architecture, contrast with Technical Architecture).

Here are a few half-baked thoughts for those of you with a little time to spare :-) and a sincere interest about creating modern (or post-modern) architectures:

A) Information Hiding is about hiding likely changes. Likely changes in a database-oriented architecture are:
1) the database engine itself (oracle, sql server, etc). That includes the SQL-dialect of the database, so don't rely entirely on odbc, ado and the like.
2) the data access technology (remember odbc, rdo, dao, ado, ado.net, ado.net 2.0, linq, all have been sold as the ultimate technology, yet every 2 years or so we get a new one).
3) the database schema itself.
The old-style architecture may do something about 1 and 2, but does precious little about 3 (which is going to consume most of your time anyway).

Now, the database schema may change for several reasons. Over time, you will:
- normalize
- denormalize
- add/drop fields
- add/drop tables
- re-route relationships
- change cardinality in relationships
You need to understand the most likely changes, as these shape your context (and therefore influence the best form).

B) The interface between a (well-designed) Data Layer and the Business Layer must be loose. It shouldn't break when the database schema changes because you added a field. Therefore, if the interface is based on strongly typed entities which mirror the database schema, you're doomed (see the sketch after this list).

C) The interface between a (well-designed) Business Layer and the UI Layer or Service Layer must be loose. See above.

D) Don't lock the architecture on the worst case. We all know that a lot of code behind the UI is not that smart.
In many cases, given a robust validation layer, which can be designed to be very flexible and dynamic, the business layer won't do much except routing data to / from the data layer.
Don't make the business layer a necessary burden. Make it an important, yet optional component that kicks in only when important business logic is needed.

E) Reflection is the key to flexible DB applications.

F) You can only get so far with language-based reflection at the Data Layer level, because SQL is too old/primitive. Sooner or later, you'll need to attach more semantics to each field than your DB wants you to (especially if you don't want to tie yourself to a single DB vendor). Be creative :-), as this would take too much space for a single post.

G) Static typing is great inside each layer. It's also great at the interface level when the structure we're talking about is stable. It's truly bad when you want to expose a flexible or changing structure.
Remember why we conceived XML in the first place? Data are fluid!
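Going back to point B: to give a feeling of what "loose" means, here is a sketch of the kind of interface I have in mind, as opposed to strongly typed entities mirroring the tables. The names are made up, and in practice you would want something richer than a string-keyed record.

// A sketch of a loose(r) Data Layer interface: adding a column to the Users
// table does not break any of these signatures; only the code that actually
// cares about the new field has to change.
using System.Collections.Generic;

interface IRecord
{
  object this[ string field ] { get; set; }
  bool Has( string field );
}

interface IDataLayer
{
  IRecord Load( string entity, object key );    // e.g. Load( "User", 42 )
  void Save( string entity, IRecord record );
  IList<IRecord> Query( string entity, IDictionary<string, object> criteria );
}

Of course, you give up some compile-time checking at the interface (that's the trade-off in point G); the stable parts of the schema can still travel through strongly typed facades built on top of this.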

Ok, there would be more to say about semistructured data, service-oriented architectures and the like, but that will have to wait.

I'll just repeat my caveat: be wary of buying an architecture from your vendor. Apply a good dose of critical thinking and look for the real value in your specific context.
You wouldn't buy the architectural blueprint of your house from a bricks or pipes vendor, no matter the quality of those bricks and pipes. You normally shouldn't buy your application architecture from your language, tools, or operating system vendor either.
