Monday, March 08, 2010 

Why you should learn AOP

A few days ago, I spent some time reading a critique of AOP (The Paradoxical Success of Aspect-Oriented Programming by Friedrich Steimann). As often happens, I felt compelled to read some of the bibliographical references too, which took a bit more of my (weekend) time.

Overall, in the last few years I've devoted quite some time to learning, thinking, and even writing a little about AOP. I'm well aware of the problems Steimann describes, and I share some skepticism about the viability of the AOP paradigm as we know it.

Too much literature, for instance, is focused on a small set of pervasive concerns like logging. I believe that as we move toward higher-level concerns, we must make a clear distinction between pervasive concerns and cross-cutting concerns. A concern can be cross-cutting without being pervasive, and in this sense, for instance, I don't really agree that AOP is not for singletons (see my old post Some notes on AOP).
Also, I wouldn't dismiss the distinction between spectators and assistants so easily, especially because many pervasive concerns can be modeled as spectators. Overall, the paradigm seems indeed a little immature when you look at the long-term maintenance effects of aspects as they're known today.

Still, I think the time I've spent pondering on AOP was truly well spent. Actually, I would suggest that you spend some time learning about AOP too, even if you're not planning to use AOP in the foreseeable future.

I don't really mean learning a specific language - unless you want/need to try out a few things. I mean learning the concepts, the AOP perspective, the AOP terminology, the effects and side-effects of an Aspect Oriented solution.

I'm suggesting that you learn all that despite the obvious (or perhaps not so obvious) deficiencies in the current approaches and languages, the excessive hype and the underdeveloped concepts. I'm suggesting that you learn all that because it will make you a better designer.

Why? Because it will expand your mind. It will add a new, alternative perspective through which you can look at your problems. New questions to ask. New concepts. New names. Sometimes, all we need is a name. A beacon in the brainstorm, and a steady hand.

As I've said many times now, as designers we're shaping software. We can choose many shapes, and ideally, we will find a shape that is in frictionless contact with the forcefield. Any given paradigm will suggest a set of privileged shapes, at macro and micro-level. Including the aspect-oriented paradigm in your thinking will expand the set of shapes you can apply and conceive.

Time for a short war story :-). In the past months I've been thinking a lot about some issues in a large CAD system. While shaping a solution, I keep coming back to what I could call aspect-thinking. There are many cross-cutting concerns to be resolved. Not programming-level concerns (like the usual, boring logging stuff), but full-fledged application-domain concerns that tend to cross-cut the principal decomposition.

Now, you see, even thinking in terms of "principal decomposition" and "cross-cutting" is taking your first step into aspect-thinking. Then you can think about ways to bring those concerns inside the principal decomposition (if appropriate and/or possible and/or convenient), or about the best way to keep them outside without code-level tangling. Tangling. Another interesting name, another interesting concept.

Sure, if you ain't using true AOP (for instance, we're using plain old C++), you'll have to give up some obliviousness (another name, another concept!), but it can be done, and it works fine (for a small-scale example, see part 1 and part 2 of my "Can AOP inform OOP?").
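To make that a bit more concrete, here's a minimal sketch in C# (our project is actually C++, but the idea carries over) of keeping a cross-cutting concern out of the principal decomposition without AOP. All the names here (PlotEntity, DirtyTracker) are made up for illustration; note how we trade some obliviousness for an explicit join point:

```csharp
using System;

// The principal decomposition: a domain class that gives up some
// obliviousness by exposing an explicit event (a hand-made join point).
public class PlotEntity
{
    public event Action<PlotEntity> Changed;

    public void Move(double dx, double dy)
    {
        // ... actual domain logic here ...
        Changed?.Invoke(this); // explicit notification, no weaving
    }
}

// The cross-cutting concern lives in one place, as a "spectator":
// it observes the domain object, but never alters its behavior.
public static class DirtyTracker
{
    public static void Attach(PlotEntity entity)
    {
        entity.Changed += e => Console.WriteLine("changed: mark document dirty");
    }
}
```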

So far, the candidate shape is causing some discomfort. That's reasonable. It's not a "traditional" solution. Which is fine, because so far, tradition didn't work so well :-). Somehow, I hope the team will get out of this experience with a new mindset. Nobody used to talk about "principal decomposition" or "cross-cutting concern" in the company. And you can't control what you can't name.

I hope they will gradually internalize the new concepts, as well as the tactics we can use inside traditional languages. That would be a major accomplishment. Much more important than the design we're creating, or the tons of code we'll be writing. Well, we'll see...


Friday, February 12, 2010 

Form vs. Function: a Space Odyssey

I was teaching Object Oriented Design last week, and I mentioned the interplay between form and function (form follows function; function follows form). I'm rather cautious not to spend too much time on philosophy, although a little philosophy shouldn't hurt, and people who know me tend to expect a short philosophical digression every now and then.

Function follows form: that is to say, the shape of an object will suggest possible uses. Form follows function: the intended usage of an object will constrain, and therefore guide, its physical shape. This is true for software objects as well. It's just a different material, something we can't immediately see and shape with our hands.

Realizing that function follows form is a pivotal moment in the development of intelligence. You probably remember the opening of 2001: A Space Odyssey. The apes are touching the monolith and, shortly after, one of them is playing with a bone and bang! - it's not just a bone anymore: it's a tool. Function follows form. This chapter is known as "The Dawn of Man", and rightly so.

Watch a little more, and you'll see a doughnut-shaped space station. That's a very good example of form following function (exercise for the reader :-)

By the way, if you have never seen that apes stuff till now :-), here it is, at least until it gets removed from YouTube...


Wednesday, October 21, 2009 

The Dependency Structure Matrix

Design is about making decisions; diagrams encode some of those decisions. Consider this simple component diagram:



We have 3 "physical" components (e.g. DLLs): X, C, D. X is further partitioned into 2 logical components: in this real-world case, the designer used namespaces to identify separate logical components inside a single physical component. The designer is also telling us that A and B depend on D, B depends on C, and C depends on D. So far, so good.

UML diagrams, however, cannot easily convey some part of the reasoning. In a sense, to fully grasp the designer's intention, we have to understand not only what is in the diagram, but also what is not in the diagram. This may seem unusual, but it is easily explained. Consider the picture above again. There is no dependency between A and C. Now, maybe A doesn't currently need to access C (and therefore there is no dependency), but if we need to access C from A tomorrow, it's just fine to add a dependency. Or maybe the designer's intent was to shield A from C, possibly using B as a man-in-the-middle.
That's not obvious from the diagram, and there is no place in the diagram to say that (not with a formal, standard UML syntax). Of course, good names may help. Replacing B with something more meaningful, maybe mentioning a bridge or proxy pattern, may suggest that A is not supposed to interact with C.

Is there a better way? Maybe something that can be actually checked against code? Checking code compliance with diagrams may seem so passé, or even plain absurd, given the current trend of discarding diagrams and/or reverse-engineering diagrams from code. Still, here is a real-world story:
The design above (which is, of course, largely simplified) was handed off from the original designers-implementers to a larger (offshore) team. They explained some of the design rationale (informally), and after a while, they left the company. Months later, the offshore team needed a new service from C inside A, so they did the simplest thing that could possibly work: they called C from A. After all, A and B are inside the same physical component. Whatever B can do, A can do too.
Unfortunately, a cornerstone of the original design was that A should never talk to C. The dependency was not in the diagram, because it was not supposed to exist, ever.

The team manager knew that, but given the size of the real X (about 500 KLOC) she couldn't possibly review all the changes from the offshore team. Of course, at least someone in the offshore team didn't fully grasp the designer's intent.

So, back to the original question: is there a better way? I could say "a forcefield diagram" :-), but in this specific case, there is also a well-known engineering tool: the Dependency Structure Matrix (also known as the Design Structure Matrix). A DSM encodes dependencies between "things". Not just dependencies, but also forbidden dependencies. See the following picture:



The 5 green "Y" cells correspond to the 5 existing dependencies; the "N" cells correspond to the "missing" dependencies, but they say something more: that those dependencies are forbidden. Now, this is a useful piece of information, something that can be easily checked against code. That does not mean that we can't change the design: it simply means we don't want to change the design inadvertently, just by typing in some code that was not supposed to be there. Checking code against the abstract design should just prompt a review; the design could be wrong, in which case, it should be changed (along with the DSM).
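Here is a minimal sketch of how such a check might work, assuming the dependencies have already been extracted from code by some tool; the DependencyMatrix class and its methods are mine, not from any specific product:

```csharp
using System.Collections.Generic;

public class DependencyMatrix
{
    // true = "Y" (allowed), false = "N" (forbidden); absent = undecided
    private readonly Dictionary<(string From, string To), bool> cells =
        new Dictionary<(string, string), bool>();

    public void Allow(string from, string to) { cells[(from, to)] = true; }
    public void Forbid(string from, string to) { cells[(from, to)] = false; }

    // Returns every extracted dependency that the DSM explicitly forbids.
    public IEnumerable<(string From, string To)> Violations(
        IEnumerable<(string From, string To)> extracted)
    {
        foreach (var dep in extracted)
            if (cells.TryGetValue(dep, out var allowed) && !allowed)
                yield return dep;
    }
}
```

In the story above, a single dsm.Forbid("A", "C") would have been enough to flag the offending change at build time, prompting a review.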

There is some interesting literature about DSM in software, most from Baldwin and Clark of "Design Rules" fame, but also from others (like one I mentioned back in 2005). There are also quite a few tools to reverse-engineer a DSM from code, which makes checking code against the designed DSM relatively trivial (the bad side is that some languages, like C++, are notably hard to reverse engineer, so tools are lacking; Java and C# have both free and commercial tools available). I'm not aware of any UML tool that can generate a DSM from the diagrams, but that's theoretically trivial, and could even be built as a plug-in for some CASE tools.

As usual, there is more to say about the DSM, gravity, and the forcefield. I'll save that for my next post!


Monday, October 05, 2009 

A ForceField Diagram

The Design Rationale Diagram I discussed in my previous post is hardly complete, and it could be vastly improved by asking slightly different questions, leading to different decision paths. Still, it's a reasonable first-cut attempt to model the decision process. It can be used to communicate the reasoning behind a specific decision, in a specific context.

That, however, is not the way I really think. Sure, I can rationalize things that way, but it's not the way I store, recall, organize information inside my head. It's not the way I see the decision space.
In the end, software design is about things going together and things staying apart, at all the granularity levels (see also my post on partitioning).
As I progress in my understanding of forces, I tend to form clusters. Clusters are born out of attraction and rejection inside the decision space. I've found that thinking this way helps me reach a better understanding of my design instinct, and to communicate my thoughts more clearly.

Now, although I've been thinking about this for a long while (not full-time, lucky me :-), I can't say I have found the perfect representation. The decision space is inherently multi-dimensional, and I always end up needing more dimensions than I can fit either in 2D or 3D. Over time, I tried several notations, inventing things from scratch or borrowing from other domains. Most were dead ends. In the end, I've chosen (so far :-) a very simple representation, based on just 3 concepts (possibly 4 or 5).

- nodes
Nodes represent information, which is our material. Information has a fractal nature, and I don't bother if I'm mixing up levels. Therefore, a node may represent a business goal, or the adoption of a tool or library, or a nonfunctional requirement, or a specific component, class, or function. While most methods are based on a strict separation of concepts, I find that very limiting.

- an attraction relationship
Nodes can attract each other. For instance, a node labeled "reliable" may attract a node labeled "redundant" when reasoning about the large display problem. I just connect the two nodes using a thick line with little "hands" on the ends. I place attracted nodes close to each other.

- a rejection relationship
Nodes can reject each other. For instance, stateful most clearly rejects stateless :-). Some technology might be at odds with another. A subsystem must not depend on another. And so on. Nodes that reject each other are placed at some distance.
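If you prefer code to pictures, the whole notation boils down to something like this little C# model (the names are mine, purely illustrative):

```csharp
using System.Collections.Generic;

public class Node
{
    public string Label;
    public Node(string label) { Label = label; }
}

public class ForceField
{
    public List<Node> Nodes = new List<Node>();
    public List<(Node, Node)> Attractions = new List<(Node, Node)>();
    public List<(Node, Node)> Rejections = new List<(Node, Node)>();

    public void Attract(Node a, Node b) { Attractions.Add((a, b)); }
    public void Reject(Node a, Node b) { Rejections.Add((a, b)); }
}
```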

It's all very simple and unsophisticated. Here is an example based on the large display problem, inspired by the discussion on design rationale:



and here are two diagrams I've used in real-world projects recently, scaled down to protect the innocent:





The relationship between a node, a cluster, and an Alexandrian center is better left for another time. Still, a node in one diagram may represent an entire cluster, or an entire diagram. Right now I'm tempted to use a slightly different symbol (which would be the fourth) to represent "expandable" nodes, although I'm really trying to keep symbols to a bare minimum. I'm also using colors, but so far in a very informal way.

As simple as it is, I've found this diagram to be very effective as a reasoning device, while too many diagrams end up being mere documentation devices. I can sit in front of my (large :-) screen, think and draw, and the drawing helps me focus. I can draw this on a whiteboard in a meeting, and everyone gets up to speed very quickly on what I'm doing.

This, however, is just half the story. We can surely work with informal concepts and diagrams, and that's fine, but what I'm trying to do is to add precision to the diagram. Precision is often confused with details, like "a class diagram is more precise if you show all the parameters and types". I'm not looking for that kind of "precision". Actually, I don't want this diagram to be redundant with code at all; we already have many code-like diagrams, and they all go down the same roads (generate code from diagrams or generate diagrams from code). I want a reasoning device: when I want to code, I'm comfortable with code :-).

I mostly want to add precision about relationships. Why, for instance, is there an attraction between Slow Client and Stateful? Informally, because if we have a stateful system, the slow client can poll on its own terms, or alternatively, because the client may use a sophisticated subscription based on the previous state. Those options, by the way, could be represented on the forcefield diagram itself (adding more nodes, or a nested diagram); but that's still the "informal" reasoning. Can we make it any more formal, precise, grounded on sound principles?

This is where the ongoing work on concepts like gravity, frequency, and so on kicks in. Slow Client and Stateful are attracted because on a finer granularity (another, perhaps better, diagram) "Slow Client" means a publisher and a subscriber operating at different frequencies, and a stateful repository is a well-known strategy (a pattern!) to provide Isolation between systems operating at different frequencies (together with synchronization or transactions).

Now, I haven't introduced the concept of Isolation yet (though I mentioned something on my Facebook page :-), so this is sort of a spoiler :-)), but in the end I hope to come up with a simple reasoning system, where you can start with informal concepts and refine nodes and forces until you reach the "universal", fractal forces I'm discussing in the "Notes on Software Design" posts. That would give a solid ground to the entire diagram.

A final note on the forcefield diagram: at this stage, I'm just using Visio, or more exactly, I'm abusing some stencils in the Visio library. I wanted something relatively organic, mindmap-like. Maybe one day I'll move back to some 3D ideas (molecular structures come to mind), but I've yet to see how this scales to newer concepts, larger problems, and so on. If you want to play with it, I can send you the VSS file with the stencils.

Ok, I'll get back to Frequency (and Interference and Isolation and more :-) soon. Before that, however, I'd like to take a diversion on the Dependency Structure Matrix. See ya!


Wednesday, July 08, 2009 

When in doubt, do the right thing

The bright side of spending most of my professional time on real-world projects is that I have an endless stream of inspiration, and what is even more important, the possibility of trying out new ideas, concepts, and methods. The dark side is that the same source of inspiration is taking away the precious time I would need to encode, structure, and articulate knowledge, which therefore remains largely implicit, tacit, intuitive. The pitch-black side is that quite often I'd like to share some real-world story, but I can't, as the details are kinda classified, or best left vague to protect the innocent. Sometimes, however, the story can be told with just a little camouflage.

Weeks ago, I was trying to figure out the overall architecture of a new system, intended to replace an obsolete framework. I could see a few major problems, two of which were truly hard to solve without placing a burden on everyone using the framework. Sure, we had other details to work out, but I could see no real showstoppers except for those two. The project manager, however, didn't want to face those problems. She wanted to start with the easy stuff, basically re-creating structures she was familiar with. I tried to insist on the need to figure out an overall strategy first, but to no avail. She wanted progress, right here, right now. That was a huge mistake.

Now, do not misunderstand me: I'm not proposing to stop any kind of development before you work every tiny detail out. Also, in some cases, the only real way to understand a system is by building it. However, building the wrong parts first (or in this case, building the easy parts first) is always a big mistake.

Expert designers know that in many cases, you have to face the most difficult parts early on. Why? Because if you do it too late, you won't have the same options anymore; previous decisions will act like constraints on late work.

Diomidis Spinellis has recently written a very nice essay on this subject (IEEE Software, March/April 2009). Here is a relevant quote: On a blank sheet of paper, the constraints we face are minimal, but each design decision imposes new restrictions. By starting with the most difficult task, we ensure that we’ll face the fewest possible constraints and therefore have the maximum freedom to tackle it. When we then work on the easier parts, the existing constraints are less restraining and can even give us helpful guidance.

I would add more: even if you take the agile stance against upfront design and toward emergent design, the same reasoning applies. If you start with the wrong part, the emergent design will work against you later. Sure, if you're going agile, you can always refactor the whole thing. But this reasoning is faulty, because in most cases, the existing design will also limit your creativity. It's hard to come up with new, wild ideas when those ideas conflict with what you have done up to that moment. It's just human. And yeah, agile is about humans, right? :-)

Expert designers start with the hard parts, but beginners don't. I guess I can quote another nice work, this time from Luke Hohmann (Journey of the Software Professional - a Sociology of Software Development): Expert developer's do tend to work on what is perceived to be the hard part of the problem first because their cognitive libraries are sufficiently well developed to know that solving the "hard part first" is critical to future success. Moreover, they have sufficient plans to help them identify what the hard part is. Novices, as noted often fail to work on the hard-part-first for two reasons. First, they may not know the effectiveness of the hard part first strategy. Second, even if they attempt to solve the hard part first, they are likely to miss it.

Indeed, an expert analyst, or designer, knows how to look at problems, how to find the best questions before looking for answers. To do this, however, we should relinquish preconceived choices. Sure, experts bring experience to the table, hopefully in several different fields, as that expands our library of mental plans. But (unlike many beginners) we don't approach the problem with pre-made choices. We first want to learn more about the forces at play. Any choice is a constraint, and we don't want artificial constraints. We want to approach the problem from a clean perspective, because freedom gives us the opportunity to choose the best form, as a mirror of the forcefield. By the way, that's why zealots are often mediocre designers: they come with too many pre-made choices, or as a Zen master would say, with a full cup.

Of course, humans being humans, it's better not to focus exclusively on the hard stuff. For instance, in many of my design sessions with clients, I try to focus on a few simple things as we start, then dig into some hard stuff, switch back to something easy, and so on. That gives us a chance to take a mental break, reconsider things in the back of our mind, and still make some progress on simpler stuff. Ideally, but this should be kinda obvious by now, the easy stuff should be chosen to be as independent/decoupled as possible from the following hard stuff, or we would be back to square one :-).

In a sense, this post is also about the same thing: writing about some easy stuff, to take a mental break from the more conceptual stuff on the forcefield. While, I hope, still making a little progress in sharing some useful design concept. See you soon!


Friday, May 01, 2009 

Einstellung

As I mentioned in previous posts, one of the projects I've been recently involved with is a complete rewriting of the GUI layer for a rather large system. We want to move from an MFC-based framework to .NET, mostly to improve productivity.
Initially, we'll basically move the GUI as-is, without re-designing the human-computer interaction. Therefore, it would pay to recover as much information as possible from the existing system, and do it automatically.

Among other things, we have about 250 dialog boxes to port, so I thought it would be a good idea to write a translator from the Win32 RC format to whatever new format we need. This way, we can recover layout (positioning and sizes) and also translate each control to its nearest equivalent.

That means, of course, that we know the target, and today, the .NET game boils down to choosing between Windows Forms and WPF. The choice is rather hard, although I know many programmers would jump immediately on the WPF bandwagon. Anyway, as we discussed the translator above, the project manager observed that WinForms stores everything in code. If we ever have to do this kind of change again, she said, we would miss the simplicity of RC. XAML would make layout and controls easier to move to another technology, just as RC does.

That's true; I don't particularly like the idea of having to parse C# to recover layout information, control initialization parameters, and so on.

Funny thing is, for a while I got trapped in this parsing concept. I guess it has to do with education. Any computer scientist will recognize this as a parsing and translation problem. It's a well known problem frame. And that calls for a parser, of course :-).

It took me a while to realize I didn't have to write a parser at all: I could just use reflection! To test the idea, I wrote a simple C# program (about 60 lines of code) which takes a form and recursively dumps layout and initialization parameters in an XML format.

For instance, given this form:



where the blue rectangle is a panel, and the label and button are nested controls, I'll get this XML.

The idea is pretty simple: I dump every property without a Browsable(false) attribute, that is, everything that you can change at design-time. If the Controls collection is not empty, I'll recurse into it. The nice part is that it could be made to work also for dynamic controls, created at run-time and not at design-time. Just call the translator after all the controls have been created, and that's it.
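The original program is about 60 lines; here is a condensed sketch of the same idea (not the original code), using TypeDescriptor to honor Browsable(false) and recursing into Controls:

```csharp
using System.ComponentModel;
using System.Windows.Forms;
using System.Xml.Linq;

public static class FormDumper
{
    public static XElement Dump(Control control)
    {
        var node = new XElement(control.GetType().Name);

        // Every browsable property is, by definition, visible at design-time.
        foreach (PropertyDescriptor p in TypeDescriptor.GetProperties(control))
        {
            if (!p.IsBrowsable) continue;
            node.Add(new XElement(p.Name, p.GetValue(control)?.ToString()));
        }

        // Recurse into nested controls (panels, group boxes, and so on).
        foreach (Control child in control.Controls)
            node.Add(Dump(child));

        return node;
    }
}
```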

Things could be easily improved. Right now, I don't handle collections (see bindings) or non-visual components, and I dump every single property. It would be useful, perhaps, to dump only values that have been changed. That's easy: just create a control of the same class on the fly, and check for differences. Piece of cake.

Now, I wish I could say I thought of this through my understanding of the forcefield :-). But I can't. It just came to me. Dunno how. The problem, of course, is moving past the Einstellung effect of education. What can I say? Keep your mind open, practice lateral thinking, never give up :-). And yeah, well, keep an eye on that forcefield, as that may help too...


Sunday, April 26, 2009 

Bad Luck, or "fighting the forcefield"

In my previous post, I used the expression "fighting the forcefield". This might be a somewhat uncommon terminology, but I used it to describe a very familiar situation: actually, I see people fighting the forcefield all the time.

Look at any troubled project, and you'll see people who made some wrong decision early on, and then stood by it, digging and digging. Of course, any decision may turn out to be wrong. Software development is a knowledge acquisition process. We often take decisions without knowing all the details; if we didn't, we would never get anything done (see analysis paralysis for more). Experience should mitigate the number of wrong decisions, but there are going to be mistakes anyway; we should be able to recognize them quickly, backtrack, and take another way.

Experience should also bring us in closer contact with the forcefield. Experienced designers don't need to go through each and every excruciating detail before they can take a decision. As I said earlier, we can almost feel, or see the forcefield, and take decisions based on a relatively small number of prevailing forces (yes, I dare to consider myself an experienced designer :-).
This process is largely unconscious, and sometimes it's hard to rationalize all the internal reasoning; in many cases, people expect carefully argued explanations, while all we have to offer on the fly is aesthetics. Indeed, I'm often very informal when I design; I tend to use colorful expressions like "oh, that sucks", or "that brings bad luck" to indicate a flaw, and so on.

Recently, I've found myself saying that "bad luck" thing twice, while reviewing the design of two very different systems (a business system and a reactive system), for two different clients.
I noticed a pattern: in both cases, there was a single entity (a database table, an in-memory structure) storing data with very different timing/life requirements. In both cases, my clients were a little puzzled, as they thought those data belonged together (we can recognize gravity at play here).
Most naturally, they asked me why I would keep the data apart. Time to rationalize :-), once again.

Had they all been more familiar with my blog, I would have pointed to my recent post on multiplicity. After all, data with very different update frequency (like: the calibration data for a sensor, and the most recent sample) have a different fourth-dimensional multiplicity. Sure, at any given point in time, a sensor has one most recent sample and one set of calibration data; therefore, in a static view we'll have multiplicity 1 for both, suggesting we can keep the two of them together. But bring in the fourth dimension (time) and you'll see an entirely different picture: they have a completely different historical multiplicity.

Different update frequencies also hint at the fact that data is changing under different forces. By keeping together things that are influenced by more than one force, we expose them to both. More on this another time.

Hard-core programmers may want more than that. They may ask for more familiar reasons not to put data with different update frequencies in the same table or structure. Here are a few:

- In multi-threaded software, in-memory structures require locking. If your structure contains data that is seldom updated, that means it's being read more than written (if it's seldom read and seldom written, why keep it around at all?).
Unfortunately, the high-frequency data is written quite often. Therefore, either we accept slowing everything down with a simple mutex, or we aim for higher performance through a more complex locking mechanism (a reader/writer lock), which may or may not work, depending on the exact read/write pattern. Separate structures can adopt a simpler locking mechanism, as one is mostly read and the other mostly written; even if you go with a R/W lock, it's almost guaranteed to perform well here (see the sketch after this list).

- Even on a database, high-frequency writes may stall low-frequency reads. You even risk a lock escalation from record to table. Then you either go with dirty reads (betting on your good luck) or you just move the data to another table, where it belongs.

- If you decide to cache database data to improve performance, you'll have to choose between a larger cache with the same structure of the database (with low-frequency data too) or a smaller and more efficient cache with just the high-frequency data (therefore revealing once more that those data do not belong together).

- And so on: I encourage you to find more reasons!
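As promised, here is a sketch of the first point in C#; the sensor-flavored names are illustrative, but the structure is the general one: once the data is split by update frequency, each half gets the locking scheme that fits its own read/write pattern.

```csharp
using System.Threading;

// Written rarely, read often: a reader/writer lock pays off here.
public class SensorCalibration
{
    private readonly ReaderWriterLockSlim rw = new ReaderWriterLockSlim();
    private double offset, gain;

    public (double Offset, double Gain) Read()
    {
        rw.EnterReadLock();
        try { return (offset, gain); }
        finally { rw.ExitReadLock(); }
    }

    public void Update(double newOffset, double newGain)
    {
        rw.EnterWriteLock();
        try { offset = newOffset; gain = newGain; }
        finally { rw.ExitWriteLock(); }
    }
}

// Written at high frequency: a plain mutex is simple and good enough.
public class SensorSample
{
    private readonly object gate = new object();
    private double lastValue;

    public double Last
    {
        get { lock (gate) { return lastValue; } }
    }

    public void Push(double value)
    {
        lock (gate) { lastValue = value; }
    }
}
```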

In most cases, I tend to avoid this kind of problem instinctively: this is what I really call experience. Indeed, Donald Schön reminds us that good design is not for everyone, and that you have to develop your own sense of aesthetics (see "Reflective Conversation with Materials. An interview with Donald Schön by John Bennett", in Bringing Design To Software, Addison-Wesley, 1996). Aesthetics may not sound too technical, but consider it a shortcut for: you have to develop your own ability to perceive the forcefield, and instinctively know what is wrong (misaligned) and right (aligned).

Ok, next time I'll get back to the notion of multiplicity. Actually, although I initially chose "multiplicity" because of its familiarity, I'm beginning to think that the whole notion of fourth-dimensional multiplicity, which is indeed quite important, might be confusing for some. I'm therefore looking for a better term, which can clearly convey both the traditional ("static") and the extended (fourth-dimensional, historical, etc.) meaning. Any good idea? Say it here, or drop me an email!


Sunday, February 22, 2009 

Notes on Software Design, Chapter 4: Gravity and Architecture

In my previous posts, I described gravity and inertia. At first, gravity may seem to have a negative connotation, like a force we constantly have to fight. In a sense, that's true; in a sense, it's also true for its physical counterpart: every day, we spend a lot of energy fighting earth gravity. However, without gravity, life as we know it would never exist. There is always a bright side :-).

In the software realm, gravity can be exploited by setting up a favorable force field. Remember that gravity is a rather dumb :-) force, merely attracting things. Therefore, if we come up with the right gravitational centers early on, they will keep attracting the right things. This is the role of architecture: to provide an initial, balanced set of centers.

Consider the little thorny problem I described back in October. Introducing Stage 1, I said: "the critical choice [...] was to choose where to put the display logic: in the existing process, in a new process connected via IPC, in a new process connected to a [RT] database".
We can now review that decision within the framework of gravitational centers.

Adding the display logic into the existing process is the path of least resistance: we have only one process, and gravity is pulling new code into that process. Where is the downside? A bloated process, sure, but also the practical impossibility of sharing the display logic with other processes.
Reuse requires separation. This, however, is just the tip of the iceberg: reuse is just an instance of a much more general force, which I'll cover in the forthcoming posts.

Moving the display logic inside a separate component is a necessary step toward [independent] reusability, and also toward the rarely understood concept of a scaled-down architecture.
A frequently quoted paper from David Parnas (one of the most gifted software designers of all times) is properly titled "Designing Software for Ease of Extension and Contraction" (IEEE Transactions on Software Engineering, Vol. 5 No. 2, March 1979). Somehow, people often forget the contraction part.
Indeed, I've often seen systems where the only chance to provide a scaled-down version to customers is to hide the portion of user interface that is exposing the "optional" functionality, often with questionable aesthetics, and always with more trouble than one could possibly want.

Note how, once we have a separate module for display, new display models are naturally attracted into that module, leaving the acquisition system alone. This is gravity working for us, not against us, because we have provided the right center. That's also the bright side of the thorny problem, exactly because (at that point, that is, stage 2) we [still] have the right centers.

Is the choice of using an RTDB to further decouple the data acquisition system and the display system any better than having just two layers?
I encourage you to think about it: it is not necessarily trivial to understand what is going on at the forcefield level. Sure, the RTDB becomes a new gravitational center, but is a 3-pole system any better in this case? Why? I'll get back to this in my next post.

Architecture and Gravity
Within the right architecture, features are naturally attracted to the "best" gravitational center.
The "right" architecture, therefore, must provide the right gravitational centers, so that features are naturally attracted to the right place, where (if necessary) they will be kept apart from other features at a finer granularity level, through careful design and/or careful refactoring.
Therefore, the right architecture is not just helping us cope with gravity: it's helping us exploit gravity to our own advantage.

The wrong architecture, however, will often conspire with gravity to preserve itself.
As part of my consulting activity, I’ve seen several systems where the initial partitioning of responsibility wasn’t right. The development team didn’t have enough experience (with software design and/or with the problem domain) to find out the core concepts, the core issues, the core centers.
The system was partitioned along the wrong lines, and as mass increased, gravity kicked in. The system grew with the wrong form, which was not in frictionless contact with the context.
At some point, people considered refactoring, but it was too costly, because mass brings Inertia, and inertia affects any attempt to change direction. Inertia keeps a bad system in a bad state. In a properly partitioned system, instead, we have many options for change: small subsystems won’t put up much of a fight. That’s the dream behind the SOA concept.
I already said this, but it is worth repeating: gravity is working at all granularity levels, from distributed computing down to the smallest function. That's why we have to keep both design and code constantly clean. Architecture alone is not enough. Good programmers are always essential for quality development.

What about patterns? Patterns can lower the amount of energy we have to spend to create the right architecture. Of course, they can do so because someone else spent some energy re-discovering good ideas, cleaning them up, going through shepherding and publishing, and because we spent some time learning about them. That said, patterns often provide an initial set of centers, balancing out some forces (not restricted to gravity).
Of course, we can't just throw patterns at a problem: the form must be in effortless contact with the real problem we're facing. I've seen too many well-intentioned (and not so experienced :-) software designers start with patterns. But we have to understand forces first, and adopt the right patterns later.

Enough with mass and gravity. Next time, we're gonna talk about another primordial force, pushing things apart.

See you soon, I hope!


Monday, December 08, 2008 

DisableProcessWindowsGhosting

I tend to think that I know the Windows API quite well. In the past few years I've been using mostly the Kernel API, since .NET has made most (but not all) of the others kinda useless. Still, there are times when only knowledge of the right API can save the day. There are also times when the right API can save the day, but nobody (myself included) knows about it :-).

A few days ago, while working on a large, legacy application that we're slowly moving into modern times, we faced an unexpected challenge. The process was busy doing some math. There was only one thread involved, so the GUI was supposed to be frozen, as we do not dispatch Windows messages while doing math. We actually counted on that, for reasons too long to be explained here.

Unfortunately, as we know, in Windows XP and Vista we can move a top-level window around even if the process is not responding, and that broke our expectations.

Now, armed with some background on Windows internals, that XP/Vista feature always seemed odd. Moving a window requires, among other things, that the window itself answers the WM_NCHITTEST message. If our process is stuck doing math, it's unlikely to answer that message.

Looking around with Spy++ we discovered that (most reasonably) the movable window was a fake (ghost) window, hosted by Windows itself in another process. Great: we just needed a way to disable that behavior for our application. Unfortunately, I didn't know any :-).

It took some careful digging to discover the magic API: DisableProcessWindowsGhosting. We never stop learning :-).

As an aside, that old app is supposed to run on Windows 2000 as well (yeap :-), and the magic API requires Windows XP or above, but good old GetProcAddress will take care of that...
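For illustration, here is the same guarded-call idea in C# (the real app is C++, where you'd go through LoadLibrary/GetProcAddress): the P/Invoke declaration matches the documented signature, and on Windows 2000 the call simply degrades to a no-op.

```csharp
using System;
using System.Runtime.InteropServices;

static class Ghosting
{
    // Available in user32.dll from Windows XP onward.
    [DllImport("user32.dll")]
    private static extern void DisableProcessWindowsGhosting();

    public static void TryDisableGhosting()
    {
        try
        {
            DisableProcessWindowsGhosting();
        }
        catch (EntryPointNotFoundException)
        {
            // Windows 2000: the entry point isn't there; just carry on.
        }
    }
}
```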


Tuesday, November 25, 2008 

A Tale of Two Methods

Once again, I've been absent for quite a while. Lots of work to do, many things to ponder, and some fun too (far away from computers :-) all kept me busy. Still, I'm coming back with a few new insights, which I'd like to share over the next few weeks.

Some of you may remember when, in the late ‘90s, I proposed a transformational approach to object oriented design, which I called Systematic Object Oriented Design, or SysOOD. Most of my writings on the subject are now online, some in English, some in Italian.

The reasoning behind the method was quite simple: in many cases, it's rather easy to devise a working solution to any given problem. However, there is a large gap between a working solution and an elegant solution. Elegance is an elusive concept, which is often mapped to a large set of nonfunctional attributes like separation of concerns, information hiding, reusability, scalability, and so on. Still, in practice, we can often go through a transformational process and turn our first-cut solution into a carefully crafted design.

Experienced designers apply those transformations on the fly, as part of their conversation with the material. My idea was to make those transformations explicit, to give them a name, a context and a purpose. I drew heavily from the design patterns movement: ideally, I thought, we should be able to generate patterns by transforming a trivial design. To some extent, I succeeded, as documented in some of my works.

I kept exploring. My ultimate goal is to understand “what we really do as we design”, which is quite ambitious. Hence my investigation of Schon, Alexander, the concept of form and force field, etc.

In the last month or so, I've found myself walking a familiar path, one that I already walked during the SysOOD days. Although I tend to design intuitively, borrowing on several years of experience, I started asking myself (again) the familiar questions: how did this come to my mind? Is there a systematic process behind this reasoning? Can I make this reasoning explicit?

This time, however, my focus is different. I'm not looking for transformations anymore. I know I could do more on that side, but I also know the limits of a transformational approach. This time I'm after "primordial" forces, and hopefully after a way to describe and reason upon those forces.

It's a difficult endeavor, and the probability of failure is high. But it's also an excellent learning opportunity, and in a sense, an excellent teaching opportunity, as I'm sure I'll learn a few things worth teaching along the way.

I don't expect to discover anything revolutionary. It's about understanding what we do, not what we don't know how to do. But the same understanding, I believe, can help us when we don't know what to do. If anything comes out of it, it will be an explorative approach, a way to frame and understand our own ideas while we design.

What I've collected so far are a few ideas about partitioning and the fractal nature of software (as I hinted at in a previous post), and a few early attempts to visually model the force field. At some point, I'll probably use the Large Display problem to show how some of those concepts can point us toward a better solution.

More on this very soon, I hope :-)


Sunday, October 12, 2008 

Some Small Design Issues (part 1)

In a previous post, I talked about some small, yet thorny design problems I was facing. As I started writing about them, it became clear that real-world design problems are never so small: there is always a large context that is somehow needed to fully understand the issues.

Trying to distill the problem to fit a blog post is a nightmare: it takes forever, it slows me down to the point that I'm not blogging anymore, and it's exactly the opposite of what I meant when I wrote Blogging as Destructuring a few years ago. On a related note, Ed Yourdon (at his venerable age :-) is moving toward microblogging for similar reasons.

Still, there is little sensible design talk around, so what does a [good] man gotta do? Simplify the problem even more, split the tale into a few episodes, and so on.
I said "tale" because I'll frame the design problem as a story. I don't mean to imply that things went exactly this way. Actually, I wasn't even there at the time. However, looking at the existing artifacts, it seems reasonable that they somehow evolved that way.
Also, an incremental story is an interesting narrative device for a design problem, as it allows us to put every single decision in perspective, and to reason about the non-linear impact of some choices.

Stage 1 - the beginning
We have some kind of industrial process control system. We want to show a few process variables on a large-size numeric display, like in the picture below:

At this point the problem is quite simple, yet we have one crucial choice to make: where do we put the new logic?
We have basically three alternatives:
1) inside an existing module/process, that is, "close to the data"
2) in a new/external process, connected via IPC to the existing process[es]. The connection might operate in push or pull mode, depending on update rate and so on (we'll ignore this for the sake of simplicity).
3) in a new/external process, obtaining data through a [real-time] database or a similar middleware. The existing processes would have to publish the process variables on the database. The new process might pull data or be pushed data, depending on the data source.
Even at this early stage, we need to make an important architectural decision. It's interesting to see that in very small systems, where all the data is stored inside one process, alternative (1) is simpler. Everything we need "is just there", so why don't we add the display logic too?
This is how simple, lean systems get to be complex, fragile, bulky systems: it's just easier to add code where the critical mass is.
So, let's say our guys went for alternative (3). We have one data source where all the relevant variables are periodically stored.

Now, we just need to know which variables we want to show, and in which row. For simplicity, the configuration could be stored inside the database itself, like this (through an OO perspective):

Using an ugly convention, "-1" as a row value indicates that the process variable isn't shown at all.
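The original class diagram isn't reproduced here, but based on the description, the descriptor would look more or less like this (a guess at its shape, not the actual class):

```csharp
public class ProcessVariableDescriptor
{
    public string VariableName { get; set; }
    public int Row { get; set; } = -1;   // the ugly convention: -1 = not shown

    public bool IsShown
    {
        get { return Row >= 0; }
    }
}
```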

Stage 2 - into the real world
Customers may already have a display, or the preferred supplier may discontinue a model, or sell a better/cheaper/more reliable one, and so on. Different displays have different protocols, but they're just multi-line displays nonetheless.
Polymorphism is just what the doctor ordered: looking inside the Display component, we might find something like this:

It's trivial to keep most of the logic to get and format data unchanged. Only the protocol needs to become an extension point. Depending on the definition of protocol (does it extend to the physical layer too?) we may have a slightly more complex, layered design, but let's keep it simple - there are no challenges here anyway.
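Again as a guess at the diagram's shape: the data retrieval and formatting logic stays put, and only the protocol is polymorphic (AcmeLedProtocol is a hypothetical vendor-specific subclass):

```csharp
public abstract class DisplayProtocol
{
    // Each concrete protocol knows how to push a line to its display.
    public abstract void WriteLine(int row, string text);
}

public class AcmeLedProtocol : DisplayProtocol
{
    public override void WriteLine(int row, string text)
    {
        // vendor-specific framing, checksums, serial I/O, and so on
    }
}
```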

Stage 3 - more data, more data!
Processes get more and more complex, and customers want to see more data. More than one display is needed. Well, it's basically trivial to modify the database to store the display number as well.
A "display number" field is then added to the Process Variable Descriptor. Note that at this point, we need a better physical display, as the one in the picture above has hard-coded labels. We may want to add one more field to the descriptor (a user-readable name), and our protocol class may or may not need some restyling to account for this [maybe optional] information. The multiplicity between "Everything Else" and "Display Protocol" is no longer 1. Actually, we have a qualified association, using the display number as a key (diagram not shown). No big deal.
Note: at this stage, a constraint has been added, I guess by software engineers, not process engineers: the same process variable can't be shown on two different displays. Of course, a different database design could easily handle this limitation, but it wasn't free, and it wasn't done.

Hmmm, OK, so far, so good. No thorny issues. See you soon for part 2 :-).


Thursday, August 07, 2008 

Do we need a Theory of Wrapping?

I think it was about 10 years ago. I was teaching OOD principles to a .COM company, and a senior developer said we should really develop a theory of wrapping. We talked a little about it, but then we moved to explore other concepts. I guess I totally forgot the idea till a few weeks ago, when 3 different customers came up with thorny issues, and they all had something in common: some kind of wrapper.

Wrappers are routinely created during software development, yet not all forms of wrapping are benign. Some may look convenient in the short term, but will come back to haunt us in the long term.
Would a theory of wrapping help? Indeed, we already have some organized knowledge on wrapping: several patterns are built upon the idea of wrapping an object or a subsystem. Looking at the GoF patterns alone, Adapter is a wrapper, Facade is a wrapper, Proxy is a wrapper. Decorator, although somehow similar to proxy in implementation, isnt't really a wrapper, as it adds new responsibilities.
Looking at wrapping through patterns, however, doesn't provide much guidance about the long-term consequences of wrappers. So, let's move to anedoctal knowledge for a while.

My first troubled customer is trying to solve an architectural problem by wrapping a few functions, and perform some magic inside those functions. The magic involves threads, fibers, apartments, and stuff like that. As any true form of magic, it must not be revealed to the general public :-)), so I won't.
Magic isn't well known for reliability, but there is a chance that the magic might indeed work, which in a sense is even worse. We know the real problem is elsewhere: it has been known for more than 10 years, but the troubled area has never been fixed, just hidden under several layers. The real fix would entail changing 100 to 150 classes. Personally, I would work toward making that change as painless as possible, and then do it. They would rather go the wrapping way.

My second, less troubled customer has a much simpler problem: a legacy database, which in turn is mostly a mirror of the in-memory data structures of a legacy application (still maintained and developed). We need to provide a web front end, and the general agreement is to outsource that development. We basically have three choices:
1) ask the contractor to write the web app against the existing, ugly db. That requires some documentation effort on the company side, as the db can't be used without a detailed knowledge about fields (and bit-fields inside fields).
2) clean the database (in-house), clean the legacy app (in-house), let the contractor write the web app against the clean db. Sounds nice, but requires a lot of in-house work, and even worse, it would delay the contractor. Cleaning the legacy app seems daunting, and sooner or later we want to scrap that code anyway.
3) keep the ugly db, but provide a wrapper, in the form of a facade object model. Ask the contractor to write the web app against the facade. Delay cleaning the db for a while (possibly forever) and hope that a quickly developed facade will withstand the test (or ravages) of time. Because yeah, well, we ain't got much time, or we would go with (2). By the way, we could write the facade in-house, or write the same kind of documents as in (1) and ask the contractor to write the facade.

I would love to recommend (2), but being reality-based, I recommended (3), with the facade written by the contractor. So (dare I say it!!) I actually recommended that we spend our valuable in-house time writing documentation. How non-agile is that :-)). But the thing is, there is only one guy who knows the db, and that's bad risk management. Also, if at some point we want to redesign and clean the db, documentation of what is currently hidden inside bit-fields stored as integers would be quite valuable. Oh, by the way: we did a little experiment to estimate how long it's gonna take to write that doc: about 10 man/days, easily broken into short tasks, which can partially overlap the calendar time consumed by the contractor to build the facade object model. Not so bad after all, except we still keep a crappy db and application.

My third customer doesn't really feel troubled :-). Well, in fact, he's not really troubled: I am troubled :-). Years ago, a small team ventured into building an application-specific framework. Although I often preached about creating a set of small, loosely coupled mini-frameworks, people on that team weren't so interested in listening. So they went on and created a large, tightly coupled framework (which is much easier to build, and much harder to learn, change, etc).
When the company decided to build the first (web) application based on that framework, the application developers found out (guess what :-) that the ambitious design of the large framework was quite hard to understand, to the point that their main goal was surprisingly hard to reach. I proposed that the framework developers could create a simple facade, closer to our problem and therefore easier to use. They did, the application was completed, and is still in use today. So far so good :-).
A few years went by, and more applications have been written against that framework. The original framework developers moved to application development (or left the company), and are now actually leading application development. Recently, I discovered that all the subsequent applications have been written against "my" facade, which was never designed to be general-purpose. However, it was so much simpler to use than the framework that people opted to use it anyway. They tweaked the facade when necessary, so we now have multiple versions of the facade around.
Again, the "right" thing to do would have been to make the framework easy to use on the first place. The facade was my backup plan because I never managed to make a dent in the framework developers' minds. Sure, it was instrumental in writing the applications, but somehow, it backfired: it became a superstructure of its own, living well beyond my expectations. Unfortunately, it was never designed to be a general-purpose component.

Of course, not every form of wrapping is evil. For instance, I often add a small facade between my code and any third party components and libraries. That shields me from many instabilities in interfaces, and in many cases allows me to change supplier if needed. That's harder with GUI components making heavy use of design-time properties, but there are techniques to deal with that as well (if warranted). Again, some wisdom is required: it makes little sense to wrap a pervasive library or framework; it makes little sense to invest in wrapping when you're developing a short-lived corporate application, and so on. Still, as someone said (exact quote needed :-), every programming problem can be solved by adding one level of indirection, and in some cases a wrapper provides a useful level of indirection.
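As a tiny example of that benign kind of wrapping, here's a C# sketch using a serializer as the third-party component (today's System.Text.Json, purely as a stand-in for any external library):

```csharp
using System.Text.Json;

// My code talks only to this interface; if the supplier changes,
// only the small facade below has to change.
public interface ISerializer
{
    string Serialize<T>(T value);
    T Deserialize<T>(string text);
}

public class JsonSerializerFacade : ISerializer
{
    public string Serialize<T>(T value) => JsonSerializer.Serialize(value);
    public T Deserialize<T>(string text) => JsonSerializer.Deserialize<T>(text);
}
```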

Can we learn a general lesson from experience? From the pattern perspective, my first customer is just trying to add some kind of proxy between subsystems. The second and third are using a plain old facade. Nothing wrong with that. Yet no one is making the "right" choice: the wrapper is used to avoid fixing the real problem. We often say words like "postpone", but in software development, "postpone" usually means "don't do it".

So, again, would a theory of wrapping help? And what should this theory tell us? As is often the case, a good theory of software design should help us to make informed decisions, and like it or not, that means informed economical decisions. All too often, sound engineering choices are discarded in favor of cheap solutions, because we have no real economic model, or even a rough economic model, to calculate the (possibly negative :-) ROI of what we propose.

The real, long-term economical consequences for my first customer are way too complex, and I have little hope that we could ever develop a theory that could even come close to help with that. However, I believe the bigger long-term cost factor of the proxy-based solution would be in the increased debugging effort that would fall on the shoulders of all the other programmers working on the same project (about 100 people, I guess). This problem, somehow, could be highlighted in a theory of wrapping. A proxy, especially when doing threads-fibers-apartments magic, makes it hard to get a clear picture of what's going on just by looking at the stack (which is a per-thread, per-fiber structure). Unfortunately, the impact of this problem seems very hard to quantify.

My second customer is facing a much simpler scenario. I think we could eventually build an economic theory that could model the maintenance cost of keeping the legacy code as-is, the risk of building a leaky abstraction as we quickly build a facade over the legacy db, thereby increasing the mass of code that we'll have to change later, as we finally get to create a better database, and compare that with the savings of not doing it. We should also be able to factor in the increased maintenance cost of keeping the facade in-synch with any changes to the db required by the evolving application. It's not rocket science. Or maybe it is, but we can actually send a rocket to Mars :-). A good economic theory should recommend, in this specific context, to go with (3). Why? Because that's what I did :-)).

The third problem, I think, is more of a process problem. There was nothing wrong in the original wrapper. However, it wasn't designed for reuse, and it should not have been reused as-is, or by tweaking. This is a direct consequence of a lack of control over design choices; the team is, well, wrapped :-)) on itself. Unfortunately, lack of control over design issues is exactly what the team wants. A good process is meant to keep that kind of dysfunctional behaviour out. Note that it doesn't have to be a heavyweight process. Just a good process :-).

Unfortunately, I don't have a Theory of Wrapping to offer. At least, not yet :-). So, for the time being, I'll keep dealing with wrapping through informal reasoning, mostly based on experience and intuition. Got a good idea? Leave me a comment here!

Labels: , , ,

Wednesday, July 09, 2008 

Quote of the Day

"Out of clutter find simplicity;
From discord find harmony;
In the middle of difficulty lies opportunity."

Albert Einstein

Note: I was teaching requirements engineering today, and I spent more time than usual talking about viewpoints, task descriptions, personas, the need for the analyst to invent requirements and so on.
For some reason, while walking to the hotel those words of wisdom came back to my mind. Indeed, those principles apply equally well to analysis, design, or coding, and in a sense, they are at the core of quality software development.
The pursuit of simplicity and harmony is a recurring theme in Einstein's writings, some of which I've quoted before.

Labels: ,

Wednesday, June 25, 2008 

More on Code Clones

I've been talking about code clones before. It's a simple metric that I've used in several projects with encouraging results.

Until not long ago, however, I thought code clone detection was useful mostly to:

1) Assess and monitor an interesting quality aspect of a product
This requires that we constantly monitor code clones. If some code already exists, we can create a baseline and enforce a rule that things can only get better, not worse. I usually monitor several internal quality attributes at build time, because that's a fairly flexible moment, where most tools allow you to insert custom steps (for the flavor of such a step, see the sketch right after this list).

2) Identify candidates for refactoring, mostly in large, pre-existing projects.
This requires, of course, a certain willingness to act on your knowledge, that is, to actually go ahead and refactor duplicated code.
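
Here is what that build-time step might boil down to. A minimal sketch, assuming a (hypothetical) clone detection tool has already run, writing one clone pair per line into clones.log, and that baseline.txt holds the count we committed not to exceed:

using System;
using System.IO;

class CloneGate
  {
  static int Main()
    {
    int baseline = int.Parse(File.ReadAllText("baseline.txt").Trim());
    int current = File.ReadAllLines("clones.log").Length;

    if (current > baseline)
      {
      // things got worse: break the build
      Console.Error.WriteLine("Clone check failed: {0} clones, baseline is {1}", current, baseline);
      return 1;
      }

    if (current < baseline)
      File.WriteAllText("baseline.txt", current.ToString()); // ratchet: things can only get better

    return 0;
    }
  }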

Sometimes, when the codebase is large, resources are scarce, or the company's interest in software quality is mostly a marketing statement disconnected from reality, a commitment to refactor the code is never made, or never taken seriously, which is about the same.

Here comes the third use of code clones. It is quite obvious, and I should have considered it earlier, but for some reason I didn't. I guess I was somehow blinded by the idea that if you care about quality, you must get in there and refactor the damn code. Strong beliefs are always detrimental to creativity :-).

Now: clones are bad because (in most cases) you have to keep them in synch during maintenance. If you don't, something bad is gonna happen (and yes, if you do, you waste a lot of time anyway, so you might as well refactor; but that's the strong belief rearing its head again :-).
So, if you don't want to use a code clones list to start a refactoring campaign, what else can you do? Use it to make sure you didn't forget to update a clone!

Unfortunately, with the tools I know, a large part of this process can't be easily automated. You would have to run a clone detection tool and keep the log somewhere. Then, whenever you change some portion of code, you'd have to check (against the log) whether that portion is cloned elsewhere. You would then port your change to the other clones (and test everything). The clone list must also be updated periodically, to account for changes coming from other programmers.
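
Even with today's tools, a tiny utility can take some of the pain out of the lookup step. A minimal sketch, assuming a hypothetical log format of file;firstLine;lastLine;groupId per clone instance:

using System;
using System.IO;
using System.Linq;

class CloneLookup
  {
  // lists the clone groups overlapping the lines you just changed
  static string[] GroupsFor(string logPath, string file, int firstLine, int lastLine)
    {
    return File.ReadAllLines(logPath)
      .Select(line => line.Split(';'))
      .Where(f => f[0] == file && int.Parse(f[1]) <= lastLine && int.Parse(f[2]) >= firstLine)
      .Select(f => f[3])
      .Distinct()
      .ToArray();
    }

  static void Main()
    {
    foreach (string g in GroupsFor("clones.log", "parser.cpp", 120, 135))
      Console.WriteLine("Changed code is also in clone group " + g);
    }
  }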

Better tools can easily be conceived. Ideally, this could be integrated into your IDE: as I suggested in Listen to Your Tools and Materials, editors could provide unobtrusive backtalk, highlighting the fact that you're changing a portion of code that has been cloned elsewhere. From there, you could jump into the other files, or ask the editor to apply the same change automatically. In the end, that would make clones more tolerable; while this is arguably bad, it's still much better than leaving them out of synch.

From that perspective, I would say that another interesting place in our toolchain where we would benefit from an open, customizable process is the version control system. Ideally, we may want to verify and enforce rules right at check-in time, without the need to delay checks until build time. Open source tools are an obvious opportunity to create a better breed of version control systems, which so far (leaving a few religious issues aside) have been more or less leveled in terms of available features.

Note: I've been writing this post on an EEE PC (the Linux version), and I kinda like it. Although I'm not really into tech toys, and although the EEE looks and feels :-) like a toy, it's just great to carry around while traveling. The tiny keyboard is a little awkward to use, but I'll get used to it...

Labels: , , ,

Tuesday, May 13, 2008 

Natural language

Some (most :-) of my clients are challenging. Sometimes the challenge comes from the difficult technical problems they face. That's the best kind of challenge.
Sometimes the challenge comes from people: that's the worst kind of challenge, and one that right now is better left alone.
Sometimes the challenge comes from the organization, which means it also comes from people, but with a different twist. Challenges coming from the organization are always tough, but overcoming those challenges can really make a difference.

One of my challenging clients is a rather large company in the financial domain. They are definitely old-school, and although upper management can perfectly see how software is permeating and enabling their business, middle management tends to see software as a liability. In their eternal search for lower costs, they moved most of the development offshore, keeping only a handful of designers and all the analysts in-house. Most often, design is done offshore as well, for lack of available designers on this side of the world.

Analysts have a tough job there. On one side, they have to face the rest of the company, which is not software-friendly. On the other side, they have to communicate clear requirements to the offshore team, especially to the designers, who tend to be very technology-oriented.
To make things more complicated, the analysts often find themselves working on unfamiliar sub-domains, with precise regulations but also with large gray areas that must be somehow understood and communicated.
Icing on the cake: some of those financial instruments do not even exist in the local culture of the offshore team, making communication as difficult as ever.

Given this overall picture, I've often recommended that analysts spend some time creating a good domain model (usually a UML class diagram, occasionally complemented by some activity diagrams).
The model, with unambiguous associations, dependencies, multiplicities, and so on, will force them to ask the right questions, and will make it easier for the offshore designer to acquaint himself with the problem. Over time, this suggestion has been quite helpful.
However, as I said, the organization is challenging. Some of the analysts complained that their boss is not satisfied by a few diagrams. He wants a lengthy, wordy explanation, so he can read it over and see if they got it right (well, that's his theory anyway). The poor analyst can't possibly do everything in the allotted time.

Now, I always keep an eye on software engineering research. I've seen countless attempts to create UML diagrams from natural language specifications. The results are usually unimpressive.
In this case, however, I would need exactly the opposite: a tool to generate a precise, yet verbose domain description out of a formal domain model. The problem is much easier to solve, especially because analysts can help the tool, by using the appropriate wording.
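
Just to show that the core is indeed simple, here is a minimal sketch of the generation step. The model is hard-coded (a real tool would read it from XMI), and the class and association names are of course made up; the point is that the analyst provides the verbs, and the tool just turns structure into sentences:

using System;
using System.Collections.Generic;

class Association
  {
  public string Source, Verb, Multiplicity, Target;
  }

class ModelVerbalizer
  {
  static void Main()
    {
    List<Association> model = new List<Association>
      {
      new Association { Source = "Customer", Verb = "holds", Multiplicity = "one or more", Target = "Accounts" },
      new Association { Source = "Account", Verb = "is governed by", Multiplicity = "exactly one", Target = "Contract" }
      };

    // the analyst helps the tool by choosing a meaningful verb for each
    // association; multiplicities translate into boilerplate prose
    foreach (Association a in model)
      Console.WriteLine("Each {0} {1} {2} {3}.", a.Source, a.Verb, a.Multiplicity, a.Target);
    }
  }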

Guess what, the problem must be considered unworthy, because there is a dearth of work in that area. In practice, the only relevant paper I've been able to find is Generating Natural Language specifications from UML class diagrams by Farid Meziane, Nikos Athanasakis and Sophia Ananiadou. There is also Nikos' thesis online, with a few more details.
The downside is that (as usual) the tool they describe does not seem to be generally available. I've yet to contact the authors: I just hope it doesn't turn out to be one of those Re$earch Tool$ that never get to be used.

From the paper above, I've also learnt about ModelExplainer, a similar tool from a commercial company. Again, the tool doesn't seem to be generally available, but I'll get in touch with the people there and see.

Overall, the problem doesn't seem so hard, especially if we accept the idea that the analyst will help the tool by choosing appropriate wording. An XMI-to-NL (Natural Language) generator would make for a perfect open source project. Any takers? :-)

Labels: , , , ,

Saturday, February 09, 2008 

Do Less

In many software projects lies some dreaded piece of code. Legacy code nobody wants to touch, dealing with a problem nobody wants to look at. In the past few days, I've been involved in a (mostly unpleasant :-) meeting, at the end of which I learnt that some "new" code was actually dealing with a dreaded problem by calling into legacy code. Dealing with that problem inside the "new" code was deemed risky, dirty, hard work.

Now, as I mentioned in my Work Smarter, Not Harder post, I don't believe in hard work. I believe in smart work.

After thinking for a while about the problem, I formulated an alternative solution. It didn't come out of nowhere: I could see the problem frame, and that was a language compiler/interpreter frame, although most of the participants didn't agree when I said that.
They had to agree, however, that my proposed solution was simple, effective, risk-free, and could be actually implemented in a few days of light :-) work. Which I take as an implicit confirmation that the problem frame was right.
I also had to mix in some creativity, but not too much. So I guess the dreaded problem can now be solved by doing much less than expected.

This could explain the title of this post, but actually, it doesn't. More than 20 years ago, good ol' Gerald Weinberg wrote:
Question: How do you tell an old consultant from a new consultant?
Answer: The new consultant complains, "I need more business." The old consultant complains, "I need more time."


I guess I'm getting old :-), and I should probably take Seth Godin's advice. Seth is best known for "Purple Cow", a wonderful book that ought to be mandatory reading for everyone involved in creating a new product. The advice I'm thinking of, though, comes from Do Less, a short free pamphlet he wrote a few years ago. There he writes:
What would happen if you fired half of your clients?

Well, the last few days really got me thinking :-)).

Labels: , ,

Monday, October 29, 2007 

On the concept of Form (1)

I've been postponing writing on Form (vs. Function) for a while now, mostly because I was struggling to impose a rational structure on my thoughts. Form is a complex subject, barely discussed in the software design literature, with a few exceptions I'll quote when relevant, so I'm sort of charting unknown waters here.

The point is, doing so would take too much [free] time, delaying my writings for several more weeks or even months. This is exactly the opposite of what I want to do with this blog (see my post blogging as destructuring), so I'll commit the ultimate sin :-) and talk about form in a rather unstructured way. I'll start a random sequence of posts on form, writing down what I think is relevant, but without any particular order. I won't even define the concept of form in this first instalment.

Throughout these posts, I'll borrow extensively from a very interesting book by Christopher Alexander (warmly recommended to any software designer). Alexander is better known for his work on patterns, which ultimately inspired software designers and created the pattern movement. However, his most relevant work on form is Notes on the Synthesis of Form. When quoting, I'll try to remember page numbers; when I do, I'll refer to the paperback edition, which is easier to find.

Alexander defines two processes through which form can be obtained: the unselfconscious and the selfconscious process. I'll get back to these two concepts, but in a few words, the unselfconscious process is the way some ancient cultures proceeded, without an explicit set of principles, but relying instead on rigid tradition mediated by immediate, small scale adaptation upon failure. It's more complex than that, but let's keep it simple right now.

Tradition provides viscosity. Without tradition, and without explicit design principles, the byproducts of the unselfconscious process will quickly degenerate. However, merely repeating a form won't keep up with a changing environment. Yet change happens, maybe shortly after the form has been built. Here is where we need immediate action, correcting any failure using the materials at hand. Of course, small scale changes are sustainable only when the rate of change (or rate of failure) is slow.

Drawing parallels to software is easy, although subjective. Think of quick, continuous, small-scale adaptation. The immediate software counterpart is, very naturally, refactoring. As soon as a bad smell emerges, you fix it. Refactoring is usually small-scale, using the "materials at hand" (which I could roughly translate into "changing only a small fraction of the code"). Refactoring, by definition, does not change the function of code. Therefore, it can only change its form.

Now, although some people in the XP/agile camp might disagree, refactoring is a viable solution only when the desired rate of change is slow, and only when the gap to fill is small. In other words, only when the overall architecture (or plain structure) is not challenged: maybe it's dictated by the J2EE way of doing things, or by the Company One True Way of doing things, or by the Model View Controller police, and so on. Truth is, without an overall architecture resisting change, a never-ending sequence of small-scale refactorings may even have a negative large-scale impact.

I've recently said that we can't reasonably turn an old-style client-server application into a modern web application by applying a sequence of small-scale changes. It would be, if not unfeasible, hardly economical, and the final architecture might be far from optimal. The gap is too big. We're expecting to complete a big change in a comparatively short time, hence the rate of change is too high. The viscosity of the previous solution will fight that change and prevent it from happening. We need to apply change at a higher granularity level, as the dynamics in the small are not the dynamics in the large.

Curiously enough (or maybe not :-), I'll be talking about refactoring over the next two days. As usual, I'll try to strike a balance, and get back often to good design principles. After all, as we'll see, when the rate of change grows, and/or when the solution space grows, the unselfconscious process must be replaced by the selfconscious process.

Labels: , , ,

Sunday, October 14, 2007 

Evolving (or rewriting) existing applications

I've been conspicuously absent from my blog in the last two weeks. Hectic life takes its toll :-), and it's not always possible to talk here about what I'm actually doing.
I'm often involved in very different activities, from weird bugs at the hardware/software interface, to coding tricky parts, to designing [small] application-specific frameworks, to making sense of nonsensical requirements. Recently, however, I've been helping a few customers make some rather fundamental decisions about how to evolve (or rewrite) their applications.

Indeed, a significant subset of existing applications has been developed with now-obsolete technologies ("old" languages, frameworks, components, architectures), or even obsolete business concepts (e.g. a client-side application sold as a product, instead of a web application sold as a service).

Now, when you're planning the next generation of a successful application, you often end up trapped in a very natural, logical, linear line of thinking: start from the core of the application, build a new/better/modern one, then keep adding features till you have a complete application.
Sometimes, it's not even about starting at the core: you start with a large framework, and expect that you'll be able to build your applications in a snap (when the humongous framework is completed).

Now, this seems very logical, and if you look at it from a technology perspective, it makes a lot of sense. By dealing with the core first, you have the best chance to make a huge impact on the foundations, making them so much better. All those years on the market taught you a lot, so you know how to make the core better.
It's also easier this way: the new application will be entirely based on the new technology, from the bottom up. No need to mix old and new stuff.

As usual, a successful development strategy is completely context-dependent! If your application is small, the natural strategy above is also the best overall strategy. In a relatively short time (since the application is small) you'll have a clean, brand-new application.
Unfortunately, the strategy does not scale at all to large applications. Let's look at a few (well-known, and often ignored) problems:

- The value delivery curve is pretty flat. It looks more like fig. 1 in an old post of mine, as you can't really sell the new application till it's finished.
People usually argue that by choosing the "right" core, they can start selling the core before the whole application has been ported. Yeah, sure, maybe in some alternative reality, but truth is, in many cases your existing customers won't downgrade to a less powerful, usually incompatible application (unless they're having major headaches from the old app).
New prospects won't be so exhilarated by a stripped-down core, either. Over the years, for a number of reasons, it's very likely that most innovation happened at the fringes of the old application, not at the core. You're now taking this stuff away for a potentially long time. Your old product will be competing with your new product, and guess what, lack of features tends to be rather visible.

- Although management may seem initially inclined to support your decision to rebuild everything from scratch, they won't stay silent as the market erodes. Soon they will (very reasonably) ask you to add some stuff to the existing application as well.
Now, by doing so, you'll slow down the development of the new application (resource contention), and you'll also create a backlog of features to be ported to the new application once it is finished.

- All this conspires to produce very long development times. If you're working with intrinsically unstable technologies (like web applications), there is even a significant chance that your new application will be technologically obsolete before it gets to market!

Let's try to model these (and a few more) issues using a diagram of effects:

[diagram of effects]

You may want to spend a little time looking at the different arrows, and especially at the self-sustaining feedback loops. It's not hard to see that this is a recipe for hard times. Yet companies routinely embark on this, because redoing the core is the most logical, most technologically sound thing to do. Unfortunately, it doesn't always make business sense.
As usual, there are always other choices. As usual, they have their share of problems, and often require better project management practices and skilled software designers. They also happen to be very context-dependent, as you have to find a better balance between your business, your current application, and your new application.
Ok, time and space are up :-), I'll add a few details later.

Labels: , ,

Tuesday, June 26, 2007 

Got Multicore? Think Asymmetric!

Multicore CPUs are now widely available, yet many applications are not tapping into their true potential. Sure, web applications, and more generally container-based applications, have an inherent degree of coarse parallelism (basically at the request level), and they will scale fairly well on new CPUs. However, most client-side applications don't fall into the same pattern. Also, some server-side applications (like batch processing) aren't intrinsically parallel either. Or maybe they are?

A few months ago, I was consulting on the design of the next generation of a (server-side) banking application. One of the modules was a batch processor, basically importing huge files into a database. For several reasons (file format, business policies), the file had to be read sequentially, processed sequentially, and imported into the database. The processing time was usually dominated by a single huge file, so the obvious technique to exploit a multicore CPU (use several instances to import different files in parallel) would not have been effective.
Note that when we think of parallelism in this way, we're looking for symmetric parallelism, where each thread performs basically the same job (process a request, or import a file, or whatever). There is only so much you can do with symmetric parallelism, especially on a client (more on this later). Sometimes (of course, not all the time), it's better to think asymmetrically, that is, to model the processing as a pipeline.

Even for the batch application, we can see at least three stages in the pipeline:
- reading from the file
- doing any relevant processing
- storing into the database
You can have up to three different threads performing these tasks in parallel: while thread 1 is reading record 3, thread 2 will process record 2, and thread 3 will store [the processed] record 1. Of course, you need some buffering in between (more on this in a short while).
Actually, in our case, it was pretty obvious that the processing wasn't taking enough CPU to justify a separate thread: it could be merged with the file-reading stage. What was actually funny (almost exhilarating :-) was to discover that despite the immensely powerful database server, storing into the database was much slower than reading from the file (truth be told, the file was stored on an immensely powerful file server as well). A smart guy at the bank quickly realized that it was our fault: we could have issued several parallel store operations, basically turning stage two of the pipeline into a symmetric parallel engine. That worked like a charm, and the total time dropped by a factor of about 6 (more than I expected: we were also making better use of the multi-processor, multi-core DB server, not just the batch server's multicore CPU).

Just a few weeks later (meaningful coincidence?), I stumbled across a nice paper: Understand packet-processing performance when employing multicore processors by Edwin Verplanke (Embedded Systems Design Europe, April 2007). Guess what, their design is quite similar to ours, an asymmetric pipeline with a symmetric stage.

Indeed, the pipeline model is extremely useful also when dealing with legacy code which was never designed to be thread-safe. I know that many projects aimed at squeezing some degree of parallelism out of that kind of code fail, because the programmers quickly find themselves adding locks and semaphores everywhere, thus slowing down the beast so much that there is either no gain, or even a loss.
This is often due to an attempt to exploit symmetric parallelism, which on legacy, client-side code is a recipe for resource contention. Instead, thinking of pipelined, asymmetric parallelism often brings good results.
For instance, I've recently overheard a discussion on how to make a graphical application faster on multicore. One of the guys contended that since the rendering stage is not thread-safe, there is basically nothing they can do (except doing some irrelevant background stuff just to keep a core busy). Of course, that's because he was thinking of symmetric parallelism. There are actually several logical stages in the pipeline before rendering takes place: we "just" have to model the pipeline explicitly, and allocate stages to different threads.

As I've anticipated, pipelines need some kind of buffering between stages. Those buffers must be thread safe. The banking code was written in C#, and so we simply used a monitor-protected queue, and that was it. However, in high-performance C/C++ applications we may want to go a step further, and look into lock-free data structures.
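
For the record, here is roughly what that kind of plumbing looks like, in the spirit of .NET 2.0-era C# (no off-the-shelf blocking queue back then). The record-reading, processing and storing functions below are placeholders, not the actual banking code:

using System;
using System.Collections.Generic;
using System.Threading;

class BlockingQueue<T>
  {
  private readonly Queue<T> queue = new Queue<T>();
  private bool closed;

  public void Enqueue(T item)
    {
    lock (queue)
      {
      queue.Enqueue(item);
      Monitor.Pulse(queue);
      }
    }

  public void Close()
    {
    lock (queue)
      {
      closed = true;
      Monitor.PulseAll(queue);
      }
    }

  // returns false when the queue has been closed and drained
  public bool TryDequeue(out T item)
    {
    lock (queue)
      {
      while (queue.Count == 0 && !closed)
        Monitor.Wait(queue);
      if (queue.Count == 0)
        {
        item = default(T);
        return false;
        }
      item = queue.Dequeue();
      return true;
      }
    }
  }

class Pipeline
  {
  static IEnumerable<string> ReadRecords() // placeholder for the file-reading stage
    {
    for (int i = 0; i < 10; ++i)
      yield return "record " + i;
    }

  static string Process(string record) { return record.ToUpper(); } // placeholder
  static void Store(string record) { Console.WriteLine(record); }   // placeholder

  static void Main()
    {
    BlockingQueue<string> toStore = new BlockingQueue<string>();

    // stage 1: read and process (merged, as processing was cheap)
    Thread reader = new Thread(delegate()
      {
      foreach (string r in ReadRecords())
        toStore.Enqueue(Process(r));
      toStore.Close();
      });

    // stage 2: store (the stage that later became N parallel writers)
    Thread writer = new Thread(delegate()
      {
      string r;
      while (toStore.TryDequeue(out r))
        Store(r);
      });

    reader.Start(); writer.Start();
    reader.Join(); writer.Join();
    }
  }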

A nice example comes from Bjarne Stroustrup himself: Lock-free Dynamically Resizable Arrays. The paper also has a great bibliography, and I must say that the concept of descriptor (by Harris) is so simple and effective that I would call it a stroke of genius. I just wish a better name than "descriptor" had been adopted :-).

For more predictable environments, like packet processing above, we should also keep in mind a simple, interesting pattern that I always teach in my "design patterns" course (actually in a version tailored for embedded/real-time programming, which does not [yet] appear on my website [enquiries welcome :-)]). You can find it in Pattern Languages of Program Design Vol. 2, under the name Resource Exchanger, and it can easily be made lock-free. I don't know of an online version of that paper, but there is a reference in the online Pattern Almanac.
If you plan to adopt the Resource Exchanger, make sure to properly tweak the published design to suit your needs (most often, you can scale it down quite a bit). Indeed, over the years I've seen quite a few hard-core C programmers slowing themselves down with endless memcpy calls where a resource exchanger would have done the job oh so nicely.
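
To give just the flavor of the underlying idea (this is my own bare-bones reduction, not the published design), the core trade is a buffer swap: ownership changes hands without a single memcpy. A one-slot, lock-free version can be as small as:

using System.Threading;

class BufferExchanger
  {
  private byte[] slot; // the buffer currently parked in the exchanger

  public BufferExchanger(int bufferSize)
    {
    slot = new byte[bufferSize];
    }

  // trade your buffer for the parked one: no data is copied, a reference
  // is swapped atomically, hence no locks either
  public byte[] Exchange(byte[] mine)
    {
    return Interlocked.Exchange(ref slot, mine);
    }
  }

A producer fills a buffer, exchanges it for an empty one, and keeps filling; the consumer does the opposite. The real pattern deals with pools of buffers and multiple clients, but the trade above is the heart of it.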

A final note: I want to highlight the fact that symmetric parallelism can still be quite effective in many cases, including some kinds of batch processing or client-side applications. For instance, back in Pentium II times, I implemented a parallel sort algorithm for a multiprocessor (not multicore) machine. Of course, there were significant challenges, as the threads had to work on the same data structure, without locks, and (that was kinda hard) without having one processor invalidate the cache lines of the other (which happens quite naturally in discrete multiprocessing if you do nothing about it). The algorithm was then retrofitted into an existing application. So, yes, of course it's often possible to go symmetric; we just have to know when to use what, and at which cost :-).

Labels: , , , , , , ,

Tuesday, June 19, 2007 

Client Side-Augmented Web Applications

In the last few posts I've been writing a lot about AOP, and very little about what I'm doing every other day. It's plain impossible to catch up, but here is something that has kept me busy for quite a few days lately: Client Side Augmented Web Applications (I should file for a trademark here :-). What I mean is a regular web application that can be used as a stand-alone application or, when you install some additional modules on the client, can also interact closely with other applications on the client side (e.g. the usual office suite, and so on).

Naturally, that means web pages must have a way to send data to the client-side application, and to obtain data from it. For several (good) reasons, we wanted this data exchange to be as transparent as possible to the web app developers. Also, we didn't want to write two different web applications (regular and augmented): that would have had a negative impact on development and maintenance times, and it could also have proven to be an inconvenience for the users. This had some impact on page navigation design, which could be an interesting subject for a future post.

Now, I can't get into the specifics of the project, or disclose much about the design of the whole infrastructure I've built (yes, I still enjoy writing code :-). However, I can show you the final result. If you want your ASP.NET page to obtain (e.g.) the filename and title of your Word document, and send back to Word a corporate-wide document number, all you have to do is add a few decorated properties to your .aspx.cs source file, like:

public partial class DocumentProperties : System.Web.UI.Page
  {
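  // values obtained from the client application (e.g. Word) before the page logic runs: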
  [AskClient("filename")]
  public string Filename
    {
    get
      {
      return filename;
      }
    set
      {
      filename = value;
      }
    }

  [AskClient( "title" )]
  public string Title
    {
    get 
      { 
      return title; 
      }
    set 
      { 
      title = value; 
      }
    }

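  // value sent back to the client application (e.g. the corporate-wide document number):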
  [SendClient("description")]  
  public string Description
    {
    get
      {
      return description;
      }
    set
      {
      description = value;
      }
    }

  // here goes the usual stuff (methods, data members)
  }

That's it. The attributes define the property name as known on the client side; the invisible infrastructure will take care of everything else. In a sense, communication between the web application and the client-side application has been modeled as a virtual machine concern, and the attributes are used to tell the virtual machine where interception is needed. Of course, this is only the tip of the iceberg. There is a lot going on under the hood, also on the client side, as your browser and your favorite application are usually not good friends, and to be honest, not even acquaintances.
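
I can't show the actual infrastructure, but a sketch may clarify the mechanics: attributes like these can be discovered via reflection, which is how the server-side layer can build the list of values to ask from, and send to, the client. The attribute classes below are my guess at their minimal shape, and the real layer would of course move property values around instead of printing:

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
class AskClientAttribute : Attribute
  {
  public readonly string Name;
  public AskClientAttribute(string name) { Name = name; }
  }

[AttributeUsage(AttributeTargets.Property)]
class SendClientAttribute : Attribute
  {
  public readonly string Name;
  public SendClientAttribute(string name) { Name = name; }
  }

static class ClientBinding
  {
  // scan a page for decorated properties: the discovery step only
  public static void Inspect(object page)
    {
    foreach (PropertyInfo p in page.GetType().GetProperties())
      {
      object[] ask = p.GetCustomAttributes(typeof(AskClientAttribute), true);
      if (ask.Length > 0)
        Console.WriteLine("ask client for '{0}', store into property {1}", ((AskClientAttribute)ask[0]).Name, p.Name);

      object[] send = p.GetCustomAttributes(typeof(SendClientAttribute), true);
      if (send.Length > 0)
        Console.WriteLine("send '{0}' to the client, reading property {1}", ((SendClientAttribute)send[0]).Name, p.Name);
      }
    }
  }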

Is my abstraction leaky? Sure it is. To make all that stuff work, you also have to drop a non-visual component into your page at design time. That component, if you follow the virtual machine metaphor, is indeed the implementation of the (server-side) virtual machine layer that will deal with the communication concern.
If you don't drop the component into the page, the virtual machine layer just won't kick in, and your attributes will stay there silently, stone cold. This is a functional leak that I'm aware of, and that as a designer I have accepted. In fact, all the alternatives I've considered to avoid this leak had some undesirable consequences, and keeping that leak brought the best overall balance. Besides, there are ways to somehow hide the need to drop the control into the page (like using page inheritance or a master page), so it's really no big deal.

A final reflection. I do not believe that something like this could come out of refactoring code that was simply meant "to work". It's relatively trivial to make a web application and a client-side application talk. It's quite different to make that talk transparent. Having prototyped the first option (just to make sure a few ideas could actually work), I can honestly say that without the necessary design effort (and skill), it's extremely unlikely you'd come close to the final result I got.

Time to drop a few numbers: overall, I've spent roughly 20% of total development time experimenting with ideas (throwaway code), 35% designing, 30% coding (this includes some unit testing), 10% doing end-to-end testing, 5% debugging. Given the relative novelty of several techniques I adopted, I should actually consider the 20% prototyping an inherent portion of the design activity: you can't effectively design much beyond the extent of your knowledge. Sometimes, the best way to gather more knowledge is to talk; sometimes, to read; sometimes, to write code. Of course, I was looking for knowledge, not for code to keep and cherish, so I happily scrapped it away to build much better code. In the end, that's often the difference between intentional and accidental architecture.

Labels: , , ,

Monday, October 31, 2005 

Teaching SOFTWARE Project Management

Over the years, a number of customers asked me to teach project management techniques to their team leaders and project/product managers. The reason is quite simple: traditional project management techniques don't work so well for software projects. Sure, you are better off if you know how to use PERT and Gantt charts, and you may still benefit from some traditional lecturing on risk management, but software is different. The best explanation of why software is different comes from Armour: software development is a knowledge acquisition process, not a product manufacturing process. That's a huge difference.
Despite the large number of requests, I've never committed myself to creating a set of PM slides. For more than a few years, I've been firm in saying that I knew enough to run a successful project, and even enough to advise on how to run a specific project, but not enough to teach how to do it in general (which is what I would expect from a PM course).
In the last year, I've spent more and more time thinking about what I could actually teach - valuable, modern, software-specific techniques that I've tried in the real world and that I can trust to work. It turned out that I knew more than I thought, but also that I couldn't teach those techniques without first teaching some (even more fundamental) conceptual tools, like Armour's Ignorance Orders, Project Portfolios, Option Thinking, and so on.
These days I'm polishing the slides I've created, and trying to create a natural bridge between those slides and some of my material on Requirements Analysis. This is probably a good chance to review that material as well, along the lines I envisioned a few months ago. So, very soon I'll have a new, short, hopefully fun course on PM appearing in my course catalogue.

Labels: , ,

Wednesday, September 14, 2005 

Kicking reuse in the A$$

I've spent the last two days (and tomorrow as well) on the design of a new product. The whole project has been an exercise in fighting complexity and "featuritis" or, put another way, in shrinking requirements and technology down to a bare minimum. The goal was clear (at least to me :-): provide the core benefits to the user, avoid any useless and costly feature, squeeze development times by allocating features where they belong.
We obtained a dramatic improvement in development time (and, in my opinion, in product marketability) by moving some features from the PC to the embedded side (!). Another big gain came from kicking out a standard protocol and using a tailored command (within the protocol framework, but still entirely custom); this improved reliability as well. The final, substantial gain came from kicking out a large, reusable infrastructure we built years ago for similar applications, and designing a (much smaller) custom solution.
Now, this is not the kind of suggestion people would expect from me. But reuse is not a value per se. It's a value when it provides an economic advantage. In this case, the initial investment needed to fit the problem inside the large, complex architecture just wouldn't pay off. Besides, the custom solution is not badly designed. We have a clear point where we could theoretically sweep out the custom part and plug in a bridge to the more powerful infrastructure. This is, in my opinion, quite sensible design. We have the minimum overengineering needed to move to a higher level later, at low cost, if we ever need to. Meanwhile, the project will sell, and hopefully sustain itself.
Economy 101 :-), or a careful use of Real Options theory? More the former than the latter, but again today, when designing a reporting subsystem, there was a clear separation between what is useful to design now (everything down to an XML document) and what is better (economically better) to postpone until a more careful evaluation of alternatives has been carried out. This is easily explained by Real Options: the option to wait on this part has more economic value than the option of designing it straight away. This (as well as kicking out code reuse) is somewhat entangled with Armour's notion of Second Order Ignorance, but I'll leave that as the dreaded exercise for the reader :-))).

Labels: , ,