Sunday, March 07, 2010 

You can't control what you can't …

… measure, Tom DeMarco used to say ("Controlling Software Projects: Management, Measurement, and Estimation", 1982). Tom recently confessed he no longer subscribes to that point of view. Now, I like Tom and I've learnt a lot from him, but I don't really agree with most of what he's saying in that paper.

Sure, the overall message is interesting: earth-shaking projects have a ROI so big that you don't really care about spending a little more money. But money isn't the only thing you may need to control (what about time, and your window of opportunity?), and not each and every project can be an earth-shaking project. If you need to comply with some policy or regulation by a given date, it may well be a moot project, but you better control for time :-). More examples (tons, actually) on demand. Relinquishing control is a fascinating concept, and by all means, if you can redefine your projects so that control is no longer necessary, just do it. But frankly, it's not always an option.

Still, can we control what we can't measure? As usual, it depends. It depends on what you want to control, and on your definition of control. We can watch over some things informally, that is, using a rough, imprecise, perhaps intuitive measure ("feeling") and still keep inside reasonable boundaries. This might be enough to be "in control". As others have noted (see for instance Managing What You Can’t Measure) sometimes all we need is a feeling that we're going off track, and a sensible set of tactics to get back on.

All that said, I feel adventurous enough today :-) to offer my own version of Tom's (repudiated) law. I just hope I won't have to take it back in 30 years :-).

You can't control what you can't name.

I would have said "define", but a precise definition is almost like a measure. But if you can't even name the concept (which, yes, requires at least a very informal definition of the term), you're consciously unaware of it. Without awareness, there is no control.

I can say that better: you can't control it intentionally. For a long time, people have controlled forces they didn't fully understand, and perhaps couldn't even name, for instance in building construction. They did that through what Alexander called the unselfconscious process, by relying on tradition (which was largely based on trial and error).

I see this very often in software projects too: people doing things because tradition taught them to do so. They don't really understand why - and would react vehemently if you dared to question their approach or suggest another way. They do so because tradition provides safety, and you're threatening their safety.

The problem with the unselfconscious process is that it doesn't scale well. When the problem is new, when the rate of change in the problem domain increases, whenever the right answer can't be found in tradition, the unselfconscious process doesn't work anymore. We gotta move to the selfconscious process. We gotta learn concepts. Names. Forces. Nonlinear interactions. We gotta think before we do. We gotta ask questions. Question the unquestionable. Move outside our comfort area. Learn, learn, learn.

Speaking of learning, I've got something to say, which is why I wrote this post in the first place, but I'll save that for tomorrow :-).


Wednesday, July 08, 2009 

When in doubt, do the right thing

The bright side of spending most of my professional time on real-world projects is that I have an endless stream of inspiration and, what is even more important, the possibility of trying out new ideas, concepts, and methods. The dark side is that the same source of inspiration takes away the precious time I would need to encode, structure, and articulate knowledge, which therefore remains largely implicit, tacit, intuitive. The pitch black side is that quite often I'd like to share some real-world story, but I can't, as the details are kinda classified, or must stay vague to protect the innocent. Sometimes, however, the story can be told with just a little camouflage.

Weeks ago, I was trying to figure out the overall architecture of a new system, intended to replace an obsolete framework. I could see a few major problems, two of which were truly hard to solve without placing a burden on everyone using the framework. Sure, we had other details to work out, but I could see no real showstoppers except for those two. The project manager, however, didn't want to face those problems. She wanted to start with the easy stuff, basically re-creating structures she was familiar with. I tried to insist on the need to figure out an overall strategy first, but to no avail. She wanted progress, right here, right now. That was a huge mistake.

Now, do not misunderstand me: I'm not proposing that you halt development until every tiny detail has been worked out. Also, in some cases, the only real way to understand a system is by building it. However, building the wrong parts first (or in this case, building the easy parts first) is always a big mistake.

Expert designers know that in many cases, you have to face the most difficult parts early on. Why? Because if you do it too late, you won't have the same options anymore; previous decisions will act like constraints on late work.

Diomidis Spinellis has recently written a very nice essay on this subject (IEEE Software, March/April 2009). Here is a relevant quote: On a blank sheet of paper, the constraints we face are minimal, but each design decision imposes new restrictions. By starting with the most difficult task, we ensure that we’ll face the fewest possible constraints and therefore have the maximum freedom to tackle it. When we then work on the easier parts, the existing constraints are less restraining and can even give us helpful guidance.

I would add more: even if you take the agile stance against upfront design and toward emergent design, the same reasoning applies. If you start with the wrong part, the emergent design will work against you later. Sure, if you're going agile, you can always refactor the whole thing. But this reasoning is faulty, because in most cases, the existing design will also limit your creativity. It's hard to come up with new, wild ideas when those ideas conflict with what you have done up to that moment. It's just human. And yeah, agile is about humans, right? :-)

Expert designers start with the hard parts, but beginners don't. I guess I can quote another nice work, this time from Luke Hohmann (Journey of the Software Professional - a Sociology of Software Development): Expert developers do tend to work on what is perceived to be the hard part of the problem first because their cognitive libraries are sufficiently well developed to know that solving the "hard part first" is critical to future success. Moreover, they have sufficient plans to help them identify what the hard part is. Novices, as noted, often fail to work on the hard-part-first for two reasons. First, they may not know the effectiveness of the hard part first strategy. Second, even if they attempt to solve the hard part first, they are likely to miss it.

Indeed, an expert analyst, or designer, knows how to look at problems, how to find the best questions before looking for answers. To do this, however, we should relinquish preconceived choices. Sure, experts bring experience to the table, hopefully in several different fields, as that expands our library of mental plans. But (unlike many beginners) we don't approach the problem with pre-made choices. We first want to learn more about the forces at play. Any choice is a constraint, and we don't want artificial constraints. We want to approach the problem from a clean perspective, because freedom gives us the opportunity to choose the best form, as a mirror of the forcefield. By the way, that's why zealots are often mediocre designers: they come with too many pre-made choices, or as a Zen master would say, with a full cup.

Of course, humans being humans, it's better not to focus exclusively on the hard stuff. For instance, in many of my design sessions with clients, I try to focus on a few simple things as we start, then dig into some hard stuff, switch back to something easy, and so on. That gives us a chance to take a mental break, reconsider things in the back of our mind, and still make some progress on simpler stuff. Ideally, but this should be kinda obvious by now, the easy stuff should be chosen to be as independent/decoupled as possible from the following hard stuff, or we would be back to square one :-).

In a sense, this post is also about the same thing: writing about some easy stuff, to take a mental break from the more conceptual stuff on the forcefield. While, I hope, still making a little progress in sharing some useful design concept. See you soon!


Sunday, April 26, 2009 

Bad Luck, or "fighting the forcefield"

In my previous post, I used the expression "fighting the forcefield". This might be a somewhat uncommon terminology, but I used it to describe a very familiar situation: actually, I see people fighting the forcefield all the time.

Look at any troubled project, and you'll see people who made some wrong decision early on, and then stood by it, digging and digging. Of course, any decision may turn out to be wrong. Software development is a knowledge acquisition process. We often take decisions without knowing all the details; if we didn't, we would never get anything done (see analysis paralysis for more). Experience should reduce the number of wrong decisions, but there are going to be mistakes anyway; we should be able to recognize them quickly, backtrack, and take another way.

Experience should also bring us in closer contact with the forcefield. Experienced designers don't need to go through each and every excruciating detail before they can take a decision. As I said earlier, we can almost feel, or see the forcefield, and take decisions based on a relatively small number of prevailing forces (yes, I dare to consider myself an experienced designer :-).
This process is largely unconscious, and sometimes it's hard to rationalize all the internal reasoning; in many cases, people expect very argumentative explanations, while all we have to offer on the fly is aesthetics. Indeed, I'm often very informal when I design; I tend to use colorful expressions like "oh, that sucks", or "that brings bad luck" to indicate a flaw, and so on.

Recently, I've found myself saying that "bad luck" thing twice, while reviewing the design of two very different systems (a business system and a reactive system), for two different clients.
I noticed a pattern: in both cases, there was a single entity (a database table, an in-memory structure) storing data with very different timing/life requirements. In both cases, my clients were a little puzzled, as they thought those data belonged together (we can recognize gravity at play here).
Most naturally, they asked me why I would keep the data apart. Time to rationalize :-), once again.

Had they all been more familiar with my blog, I would have pointed to my recent post on multiplicity. After all, data with very different update frequency (like: the calibration data for a sensor, and the most recent sample) have a different fourth-dimensional multiplicity. Sure, at any given point in time, a sensor has one most recent sample and one set of calibration data; therefore, in a static view we'll have multiplicity 1 for both, suggesting we can keep the two of them together. But bring in the fourth dimension (time) and you'll see an entirely different picture: they have a completely different historical multiplicity.

Different update frequencies also hint at the fact that data is changing under different forces. By keeping together things that are influenced by more than one force, we expose them to both. More on this another time.

Hard-core programmers may want more than that. They may ask for more familiar reasons not to put data with different update frequencies in the same table or structure. Here are a few:

- In multi-threaded software, in-memory structures require locking. If your structure contains data that is seldom updated, that means it's being read more than written (if it's seldom read and seldom written, why keep it around at all?). Unfortunately, the high-frequency data is written quite often. Therefore, either we accept slowing everything down with a simple mutex, or we aim for higher performance through a more complex locking mechanism (a reader/writer lock), which may or may not work, depending on the exact read/write pattern. Separate structures can each adopt a simpler locking mechanism, as one is mostly read and the other mostly written; even if you go with a R/W lock, it's almost guaranteed to perform well there (see the sketch after this list).

- Even on a database, high-frequency writes may stall low-frequency reads. You even risk lock escalation from record to table. Then you either go with dirty reads (betting on your good luck) or you just move the data into another table, where it belongs.

- If you decide to cache database data to improve performance, you'll have to choose between a larger cache with the same structure as the database (low-frequency data included) or a smaller and more efficient cache with just the high-frequency data (therefore revealing once more that those data do not belong together).

- And so on: I encourage you to find more reasons!
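
To make the locking point more concrete, here is a minimal C# sketch (names like CalibrationData and LatestSample are mine, purely illustrative): once the two kinds of data live in separate structures, each can adopt the locking policy that best fits its own read/write pattern.

using System.Threading;

// seldom written, often read: a R/W lock is almost guaranteed to pay off
class CalibrationData
{
    private ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    private double offset;
    private double gain;

    public void Update( double newOffset, double newGain )
    {
        rwLock.EnterWriteLock();
        try
        {
            offset = newOffset;
            gain = newGain;
        }
        finally { rwLock.ExitWriteLock(); }
    }

    public double Apply( double rawValue )
    {
        rwLock.EnterReadLock();
        try { return rawValue * gain + offset; }
        finally { rwLock.ExitReadLock(); }
    }
}

// written at high frequency: a plain mutex is simpler and good enough
class LatestSample
{
    private object sync = new object();
    private double sample;

    public double Value
    {
        get { lock( sync ) { return sample; } }
        set { lock( sync ) { sample = value; } }
    }
}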

In most cases, I tend to avoid this kind of problem instinctively: this is what I really call experience. Indeed, Donald Schön reminds us that good design is not for everyone, and that you have to develop your own sense of aesthetics (see "Reflective Conversation with Materials. An interview with Donald Schön by John Bennett", in Bringing Design To Software, Addison-Wesley, 1996). Aesthetics may not sound too technical, but consider it a shortcut for: you have to develop your own ability to perceive the forcefield, and instinctively know what is wrong (misaligned) and right (aligned).

Ok, next time I'll get back to the notion of multiplicity. Actually, although I've initially chosen "multiplicity" because of its familiarity, I'm beginning to think that the whole notion of fourth-dimensional multiplicity, which is indeed quite important, might be confusing for some. I'm therefore looking for a better term, which can clearly convey both the traditional ("static") and the extended (fourth-dimensional, historical, etc) meaning. Any good idea? Say it here, or drop me an email!


Sunday, October 26, 2008 

Microblogging is not my thing...

A few weeks ago I got a phone call from a client. They want to insource a mission-critical piece of code. I talked about the concept of Habitable Software and thought I could write something here.

As I started to write, words unfolded in unexpected rivers. Apparently, I've got too much to say [and too little time].

So, I tried to use a mind map to get an overview of what I was trying to say.

Here it is (click to get a full-scale pdf):

[mind map image]

Strictly speaking, it's not even a mind map, as I drew a graph, not a tree. I find the tree format very limiting, which is probably a side-effect of keeping a lot of connections in my mind.

Another side effect is that I find micro-blogging unsatisfactory. Sure, I could post something like:

interesting book, take a look

and be done with it, but it's just not my style.

Anyway, I'll try to keep this short and just add a link to the presentation on form Vs. function that I mentioned in the mind map: Integrating Form and Function. Don't mind the LISP stuff :-). That thing about the essential and contingent interpreter is great.

More on all this another time, as I manage to unravel the fabric of my mind :-)


Saturday, September 13, 2008 

Lost

I’ve been facing some small, tough design problems lately: relatively simple cases where finding a good solution is surprisingly hard. As usual, it’s trivial to come up with something that “works”; it’s also quite simple to come up with a reasonably good solution. It’s hard to come up with a great solution, where all forces are properly balanced and something beautiful takes shape.

I like to think visually, and since standard notations weren’t particularly helpful, I tried to represent the problem using a richer, non-standard notation, somehow resembling Christopher Alexander’s sketches. I wish I could say it made a huge difference, but it didn’t. Still, it was quite helpful in highlighting some forces in the problem domain, like an unbalanced multiplicity between three main concepts, and a precious-yet-fragile information hiding barrier. The same forces are not so visible in (e.g.) a standard UML class diagram.

Alexander, even in his early works, strongly emphasized the role of sketches while documenting a pattern. Sketches should convey the problem, the process to generate or build a solution, and the solution itself. Software patterns are usually represented using a class diagram and/or a sequence diagram, which can’t really convey all that information at once.

Of course, I’m not the first to spend some time pondering the issue of [generative] diagrams. Most notably, in the late ‘90s Jim Coplien wrote four visionary articles dealing with sketches, the geometrical properties of code, alternative notations for object diagrams, and some (truly) imponderable questions. Those papers appeared in the long-dead C++ Report, but they are now available online:

Space-The final frontier (March 1998)
Worth a thousand words (May 1998)
To Iterate is Human, To Recurse, Divine (July/August 1998)
The Geometry of C++ Objects (October 1998)

Now, good ol’ Cope has always been one of my favorite authors. I’ve learnt a lot from him, and I’m still reading most of his works. Yet, ten years ago, when I read that stuff, I couldn’t help thinking that he lost it. He was on a very difficult quest, trying to define what software is really about, what beauty in software is really about, trying to adapt theories firmly grounded in physical space to something that is not even physical. Zen and the Art of Motorcycle Maintenance all around, some madness included :-).

I re-read those papers recently. That weird feeling is still there. Lights and shadows, nice concepts and half-baked ideas, a lot of code-centric reasoning, overall confusion, not a single strong point. Yeah, I still think he lost it, somehow :-), and as far as I know, the quest ended there.
Still, his questions, some of his intuitions, and even some of his most outrageous :-) ideas were too good to go to waste.

The idea of center, which he got from The Nature of Order (Alexander’s latest work), is particularly interesting. Here is a quote from Alexander:
Centers are those particular identified sets, or systems, which appear within the larger whole as distinct and noticeable parts. They appear because they have noticeable distinctness, which makes them separate out from their surroundings and makes them cohere, and it is from the arrangements of these coherent parts that other coherent parts appear.

Can we translate this concept into the software domain? Or, as Jim said, What kind of x is there that makes it true to say that every successful program is an x of x's? I’ll let you read what Jim had to say about it. And then (am I losing it too? :-) I’ll tell you what I think that x is.

Note: guys, I know some of you already think I lost it :-), and would rather read something about (e.g.) using variadic templates in C++ (which are quite cool, actually :-) to implement SCOOP-like concurrency in a snap. Bear with me. There is more to software design than programming languages and new technologies. Sometimes, we gotta stretch our mind a little.

Anyway, once I get past the x of x thing, I’d like to talk about one of those wicked design problems. A bit simplified, down to the essential. After all, as Alexander says in the preface of “Notes on the Synthesis of Form”: I think it’s absurd to separate the study of designing from the practice of design. Practice, practice, practice. Reminds me of another book I read recently, an unconventional translation of the Analects of Confucius. But I’ll save that for another time :-).


Friday, April 25, 2008 

Can AOP inform OOP (toward SOA, too? :-) [part 2]

Aspect-oriented programming is still largely code-centric. This is not surprising, as OOP went through the same process: early emphasis was on coding, and it took quite a few years before OOD was ready for prime time. The truth about OOA is that it never really got its share (guess use cases just killed it).

This is not to say that nobody is thinking about the so-called early aspects. A notable work is the Theme Approach (there is also a good book about Theme). Please stay away from the depressing idea that use cases are aspects; as I said a long time ago, it's just too lame.

My personal view on early aspects is quite simple: right now, I mostly look for cross-cutting business rules as candidate aspects. I guess it's quite obvious that the whole "friend gift" concept is a business rule cutting through User and Subscription, and therefore a candidate aspect. Although I'm not saying that all early aspects are cross-cutting business rules (or vice-versa), so far this simple guideline has served me well in a number of cases.

It is interesting to see how early aspects tend to be cross-cutting (that is, they hook into more than one class) but not pervasive. An example of pervasive concern is the ubiquitous logging. Early aspects tend to cut through a few selected classes, and tend to be non-reusable (while a logging aspect can be made highly reusable).
This seems at odds with the idea that "AOP is not for singleton", but I've already expressed my doubts about the validity of that suggestion a long time ago. It seems to me that AOP is still in its infancy when it comes to good principles.

Which brings me to obliviousness. Obliviousness is an interesting concept, but just as it happened with inheritance in the early OOP days, people tend to get carried away.
Remember when white-box inheritance was applied without understanding (for instance) the fragile base class problem?
People may view inheritance as a way to "patch" a base class and change its behaviour in unexpected ways. But truth is, a base class must be designed to be extended, and extension can take place only through well-defined extension hot-points. It is not a rare occurrence to refactor a base class to make extension safer.
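
Just to fix the idea, here is a tiny sketch (the class and its hot-point are made up for the example) of what "designed to be extended" looks like in code:

// the base class owns the overall flow...
abstract class ReportGenerator
{
    public void Generate()
    {
        WriteHeader();
        WriteBody();    // ...and extension happens only here
        WriteFooter();
    }

    // the well-defined extension hot-point
    protected abstract void WriteBody();

    // fixed parts: private, so extensions can't break them
    private void WriteHeader() { /* ... */ }
    private void WriteFooter() { /* ... */ }
}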

Aspects are not really different. People may view aspects as a way to "patch" existing code and change its behaviour in unexpected ways. But truth is, when you move outside the safe realm of spectators (see my post above for more), your code needs to be designed for interception.
Consider, for instance, the initial idea of patching the User class through aspects, adding a data member, and adding the corresponding data to the database. Can your persistence logic be patched through an aspect? Well, it depends!
Existing AOP languages can't advise any given line: there is a fixed grammar for pointcuts, like method call, data member access, and so on. So if your persistence code was (trivially)

class User
{
    void Save()
    {
        // open a transaction
        // do your SQL stuff
        // close the transaction
    }
}
there would be no way to participate in the transaction from an aspect. You would have to refactor your code, e.g. by moving the SQL part in a separate method, taking the transaction as a parameter. Can we still call this obliviousness? That's highly debatable! I may not know the details of the advice, but I damn sure know I'm being advised, as I refactored my code to be pointcut-friendly.
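
For instance, the refactored, pointcut-friendly version could look like this (just a sketch of mine, using ADO.NET-style interfaces; the original code says nothing about the actual persistence API):

using System.Data;

class User
{
    private IDbConnection connection; // set elsewhere

    public void Save()
    {
        using( IDbTransaction tx = connection.BeginTransaction() )
        {
            DoSave( tx );   // a pointcut can now target this call...
            tx.Commit();
        }
    }

    // ...and the advice can participate in the transaction through tx
    protected virtual void DoSave( IDbTransaction tx )
    {
        // do your SQL stuff
    }
}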

Is this really different from exposing a CreateSubscription event? Yeah, well, it's more code to write. But in many data-oriented applications, a well-administered dose of events in the CRUD methods can take you a long way toward a more flexible architecture.
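
Here is the kind of thing I mean (again a sketch of mine; SubscriptionService and the event names are made up): the "friend gift" rule hooks into a plain event, and neither User nor Subscription gets tangled with gift logic.

using System;

class User { /* ... */ }

class SubscriptionEventArgs : EventArgs
{
    public readonly User User;
    public SubscriptionEventArgs( User user ) { User = user; }
}

class SubscriptionService
{
    public event EventHandler<SubscriptionEventArgs> SubscriptionCreated;

    public void CreateSubscription( User user )
    {
        // ... the usual CRUD logic ...
        EventHandler<SubscriptionEventArgs> handler = SubscriptionCreated;
        if( handler != null )
            handler( this, new SubscriptionEventArgs( user ) );
    }
}

// the cross-cutting business rule lives in its own class
class FriendGiftRule
{
    public void Attach( SubscriptionService service )
    {
        service.SubscriptionCreated += OnSubscriptionCreated;
    }

    private void OnSubscriptionCreated( object sender, SubscriptionEventArgs e )
    {
        // grant the gift to the friend of e.User
    }
}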

A closing remark on the SOA part. SOA is still much of a buzzword, and many good [design] ideas are still in the decontextualized stage, where people are expected to blindly follow some rule without understanding the impact of what they're doing.

In my view, a crucial step toward SOA is modularity. Modularity has to take place at all levels, even (this will sound like heresy to some) at the database level. Ideally (this is not a constraint to be forced, but a force to be considered) every service will own its own tables. No more huge SQL statements traversing every tidbit in the database.

Therefore, if you consider the "friend gift" as a separate service, it is only natural to avoid tangling the User class, the Subscription class, and the User table with information that just doesn't belong there. In a nutshell, separating a cross-cutting business rule into an aspect-like class will bring you to a more modular architecture, and modularity is one of the keys to true SOA.


Saturday, February 09, 2008 

Do Less

In many software projects lies some dreaded piece of code. Legacy code nobody wants to touch, dealing with a problem nobody wants to look at. In the past few days, I've been involved in a (mostly unpleasant :-) meeting, at the end of which I learnt that some "new" code was actually dealing with a dreaded problem by calling into legacy code. Dealing with that problem inside the "new" code was deemed risky, dirty, hard work.

Now, as I mentioned in my Work Smarter, Not Harder post, I don't believe in hard work. I believe in smart work.

After thinking for a while about the problem, I formulated an alternative solution. It didn't come out of nowhere: I could see the problem frame, and that was a language compiler/interpreter frame, although most of the participants didn't agree when I said that.
They had to agree, however, that my proposed solution was simple, effective, risk-free, and could be actually implemented in a few days of light :-) work. Which I take as an implicit confirmation that the problem frame was right.
I also had to mix in some creativity, but not too much. So I guess the dreaded problem can now be solved by doing much less than expected.

This could explain the title of this post, but actually, it doesn't. More than 20 years ago, good ol' Gerald Weinberg wrote:
Question: How do you tell an old consultant from a new consultant?
Answer: The new consultant complains, "I need more business." The old consultant complains, "I need more time."


I guess I'm getting old :-), and I should probably take Seth Godin's advice. Seth is best known for "Purple Cow", a wonderful book that ought to be a mandatory reading for everyone involved in creating a new product. The advice I'm thinking of, though, comes from Do Less, a short free pamphlet he wrote a few years ago. There he writes:
What would happen if you fired half of your clients?

Well, the last few days really got me thinking :-)).


Monday, January 14, 2008 

Being 10 Years Behind (part 1)

In the last two years I've been working quite closely with a company, designing the fourth generation of a successful product. Indeed, a few of my posts have been inspired by the work I did on that project.
What we have now is a small, flexible, fast web application where we definitely pushed the envelope using AOP-like techniques pervasively, although in .NET/C#.

Compared with the previous generation, our application has more features, is much easier to customize (a must), is much easier to use thanks to the task-oriented design of the HCI (the previous generation was more slanted toward the useless computer approach) and also about 10 times faster (thanks to better database design and a smarter business layer).
Guess what, the source code size is just about 1/30 of what we had before (yeap, 30 times less), excluding generated code to read/write some XML files. The previous application was written in a mixture of C++, VB6, Perl, Python, C#.

Now, the company is considering a strict partnership with an Asian corporation. They have a similar product, in ASP.NET / C# as well. It took them something like 30 times our man-months to write it, so the general feeling was that it should have been "more powerful". Time to look at the features, but hey, features are basically the same, although their product is not task-oriented. If anything, they lack a lot of our customization interface.

Time to look at the code, as the code never lies.
The code is probably 50 times bigger, with no flexibility whatsoever. If you need to attach one more field to a business concept, just to track some custom information, you probably have to change the database, 5-8 classes between the database and the GUI, and the GUI itself.
In most cases, in our application we just need to open the administration console, add the field, specify validation rules, and that's it.
If you have special needs, you write a custom class, decorate the class with attributes, and we take care of all the plumbing to instantiate / call your class in the critical moments (that's one of the several places where the AOP-like stuff kicks in).
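
I can't show the real code, of course, but a hypothetical sketch may convey the flavor (the attribute and interface names are made up, not the actual ones):

using System;

// framework side: a marker attribute and a contract for custom field logic
[AttributeUsage( AttributeTargets.Class )]
class CustomFieldAttribute : Attribute
{
    public string Entity { get; set; }
    public string Name { get; set; }
}

interface IFieldLogic
{
    void Validate( object value );
}

// application side: attach a custom field without touching
// the database-to-GUI plumbing
[CustomField( Entity = "Customer", Name = "RiskRating" )]
class RiskRatingField : IFieldLogic
{
    public void Validate( object value )
    {
        // custom rules; the framework finds this class via reflection
        // and calls it in the critical moments
    }
}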

What struck me when I looked at that code was the (depressing :-) similarity with a lot of old-style Java code I've been seeing over the years, especially in banking environments.

There is an EntityDAO (data access object) package with basically one class for each business entity. That class is quite stupid, basically dealing with persistence (CRUD) and exposing properties. Those classes are used "at the bottom" of the system, that is, near to the database.

Then there is an Entity package where (again!) there is basically one class for each (business) entity. The class is completely stupid, offering only get/set methods. Those classes are used "at the top" of the system, that is, near to the GUI or external services.
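
In code, the duplication looks more or less like this (a sketch with hypothetical names):

// EntityDAO package, "at the bottom": persistence plus properties
class UserDAO
{
    public int Id;
    public string Name;

    public void Insert() { /* SQL here */ }
    public void Update() { /* SQL here */ }
    public void Delete() { /* SQL here */ }
}

// Entity package, "at the top": the same shape again, minus the SQL
class UserEntity
{
    private int id;
    private string name;

    public int Id { get { return id; } set { id = value; } }
    public string Name { get { return name; } set { name = value; } }
}

Add one field, and you get to touch both classes, plus everything that copies data between them.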

There is a BusinessLogic package where Entities get passed to various classes as parameters, and EntityDAO objects are used to carry out persistence-related tasks.
Actually, inside the BusinessLogic lies a lot of database-related code, sometimes with explicit manipulation of DataRow classes. The alternative would have been to create many more EntityDAO classes.

Here and there, the coupling between the BusinessLogic and the database must have seemed too strong, so an EntityReader package has been created, where more sophisticated (yet still stupid) entities (or collections) are built using EntityDAO classes.

Finally, you just need :-) something to call from your service or GUI layer. The ubiquitous BusinessFacade package is therefore introduced, implementing a very large, use-case driven interface (put in yet another package), taking Entity instances as parameters and using the BusinessLogic.

At that point, people invariably realize that services need much more logic than what is provided by the BusinessLogic package, and so go ahead and create a (very sad) BusinessHelper package, where they complement all the missing parts in the BusinessLogic, most often by direct database access.

Then we have other subsystems (cache, SQL, and so on) all built around XXXManager classes, which we can ignore.

Of course, in the end everything is so coupled with the database schema that just adding a field results in a nightmare. And you get a lot of code to maintain as well. Good luck. Meanwhile, the Ruby On Rails guys are creating (simple) applications faster than the other guys can spell BusinessHelper. Say good-bye to productivity.

We can blame it on static typing, but reality is much simpler. That architecture is wrong. It's at least 10 years behind the state of practice, which means it's probably 15 to 20 years behind the state of the art.
The problem is, that ancient architecture was popularized years ago mostly by language and tools vendors, or by people who think architects don't have to understand code or the real problem being solved, just replicate a trivial-yet-humongous structure everywhere. It's basically a decontextualized architecture (more on this next time).

Indeed, if you look at the Java literature, you can find good books dating back to the early 2000s (like "EJB Design Patterns" by Floyd Marinescu, year 2002), where the pros and cons of several choices adopted in that overblown yet fragile architectural model are discussed. When a Patterns book appears, a few years of practice have already gone by. That was 2002; several more years have passed since, and yet developers still fall into the same trap.

It gets worse. While even the most outdated Java applications are gradually moving away from that model (see Untangling Enterprise Java by Chris Richardson for a few ideas), Microsoft evangelists are all excited about it. They happily go around (re)selling an architecture that is remarkably similar to the 10-years-behind behemoth above.

Which brings me to "Being 10 Years Behind (part 2)". Stay tuned :-).


Tuesday, December 18, 2007 

Problem Frames

I became aware of problem frames in the late 90s, while reading an excellent book from Michael Jackson (Software Requirements and Specifications; my Italian readers may want to read my review of the book, straight from 1998).

I found the concept interesting, and a few years later I decided to dig deeper by reading another book from Jackson (Problem Frames: Analyzing and structuring software development problems). Quite interesting, although definitely not light reading. I cannot say it changed my life or heavily influenced the way I work while I analyze requirements, but it added a few interesting concepts to my bag of tricks.

Lately, I've found an interesting paper by Rebecca Wirfs-Brock, Paul Taylor and James Noble: Problem Frame Patterns: An Exploration of Patterns in the Problem Space. I encourage you to read the paper, even if you're not familiar with the concept of problem frame: in fact, it's probably the best introduction on the subject you can get anywhere.

In the final section (Assessment and Conclusions) the authors compare Problem Frames and Patterns. I'll quote a few lines here:

A problem frame is a template that arranges and describes phenomena in the problem space, whereas a pattern maps forces to a solution in the solution space.

Patterns are about designing things. The fact that we put problem frames into pattern form demonstrates that when people write specifications, they are designing too—they are designing the overall system, not its internal structure. And while problem frames are firmly rooted in the problem space, to us they also suggest solutions.


If you read that in light of what I've discussed in my latest post on Form, you may recognize that Problem Frames are about structuring and discovering context, while Design Patterns help us structure a fitting form.
When Problem Frames suggest solutions, there is a good chance that they're helping us in the elusive game of (to quote Alexander again) bringing harmony between two intangibles: a form which we have not yet designed, and a context which we cannot properly describe.

Back to the concept of Problem Frames, I certainly hope that restating them in pattern form will foster their adoption. Indeed, the paper above describes what is probably the closest thing to true Analysis Patterns, and may help analysts look more closely at the problem before jumping into use cases to describe the external behaviour of the system.


Wednesday, November 14, 2007 

Process as the company's homeostatic system

I was learning about EPOC (Excess Post-Exercise Oxygen Consumption) a few weeks ago, for no particular reason (as I often do :-). If you ever exercised at medium-to-high intensity, you've probably experienced EPOC: after you stop training, your oxygen intake stays higher than your usual resting intake for quite a while, declining over several hours.
The reason for EPOC "is the general disturbance to homeostasis brought on by exercise" (Brooks and Fahey, "Exercise physiology. Human bioenergetics and its applications", Macmillan Publishing Company, 1984).

In the next few days, inspired by different readings, I began thinking of process as the company's homeostatic system.
I've often claimed that companies have their own immune system, actively killing ideas, concepts and practices misaligned with the true nature of the company ("true" as opposed to, for instance, a theoretical statement of the company's values).
However, the homeostatic system has a different purpose, as it is basically designed to (wikipedia quote) "allow an organism to function effectively in a broad range of environmental conditions". Once you replace "organism" with "team", this becomes the best definition of a good development process that I've ever (or never :-) been able to formulate.

It is interesting to understand that the homeostatic system is very sophisticated. It works smoothly, but can leverage several different strategies to adapt to different external conditions. In fact, when good old Gerald Weinberg classified company's cultural patterns on the 1-5 scale, he called level-2 companies "Routine" (as they always use the same approach, without much care for context), level-3 "Steering" (dynamically picking a routine from a larger repertoire) and level-4 "Anticipating" (you figure :-). Of course, any company using the same process every time, no matter the process (Waterfall, RUP-inspired, XP, whatever), is at level 2.

For instance, if we find ourselves working with stakeholders with highly conflicting requirements, and we don't apply a more "formal" process based on an initial assessment of viewpoints and concerns (see, for instance, my posts Value and Priorities and Requirements and Viewpoints for a few references) because "we usually gather user stories, implement, and get feedback quickly", then we're stuck in a Routine culture. Of course, if we always apply the viewpoints, we're stuck in a Routine culture too.

Well, there would be much more to say, especially about contingent methodologies and about congruency (another of Weinberg's favorite concepts) between a company's process and its strategy, but time is up again, so I'll leave you with a question. We all know that some physical activity is good for our health. Although high-intensity exercise brings disturbance to homeostasis, in the end we get a better homeostatic system. So the question is, if process is the company's homeostatic system, what is the equivalent of good training? How do we bring a dose of periodic, healthy disturbance to make our process stronger?


Monday, October 29, 2007 

On the concept of Form (1)

I postponed writing on Form (vs. Function) for a while now, mostly because I was struggling to impose a rational structure on my thoughts. Form is a complex subject, barely discussed in the software design literature, with a few exceptions I'll quote when relevant, so I'm sort of charting new waters here.

The point is, doing so would take too much [free] time, delaying my writings for several more weeks or even months. This is exactly the opposite of what I want to do with this blog (see my post blogging as destructuring), so I'll commit the ultimate sin :-) and talk about form in a rather unstructured way. I'll start a random sequence of postings on form, writing down what I think is relevant, but without any particular order. I won't even define the concept of form in this first instalment.

Throughout these posts, I'll borrow extensively from a very interesting book by Christopher Alexander (warmly recommended to any software designer). Alexander is better known for his work on patterns, which ultimately inspired software designers and created the pattern movement. However, his most relevant work on form is Notes on the Synthesis of Form. When quoting, I'll try to include page numbers; when I do, I'll refer to the paperback edition, which is easier to find.

Alexander defines two processes through which form can be obtained: the unselfconscious and the selfconscious process. I'll get back to these two concepts, but in a few words, the unselfconscious process is the way some ancient cultures proceeded, without an explicit set of principles, but relying instead on rigid tradition mediated by immediate, small scale adaptation upon failure. It's more complex than that, but let's keep it simple right now.

Tradition provides viscosity. Without tradition, and without explicit design principles, the byproducts of the unselfconscious process will quickly degenerate. However, merely repeating a form won't keep up with a changing environment. Yet change happens, maybe shortly after the form has been built. Here is where we need immediate action, correcting any failure using the materials at hand. Of course, small scale changes are sustainable only when the rate of change (or rate of failure) is slow.

Drawing parallels to software is easy, although subjective. Think of quick, continuous, small scale adaptation. The immediate software counterpart is, very naturally, refactoring. As soon as a bad smell emerges, you fix it. Refactoring is usually small scale, using the "materials at hand" (which I could roughly translate into "changing only a small fraction of code"). Refactoring, by definition, does not change the function of code. Therefore, it can only change its form.

Now, although some people in the XP/agile camp might disagree, refactoring is a viable solution only when the desired rate of change is slow, and only when the gap to fill is small. In other words, only when the overall architecture (or plain structure) is not challenged: maybe it's dictated by the J2EE way of doing things, or by the Company One True Way of doing things, or by the Model View Controller police, and so on. Truth is, without an overall architecture resisting change, a neverending sequence of small-scale refactoring may even have a negative large-scale impact.

I've recently said that we can't reasonably turn an old-style, client-server application into a modern web application by applying a sequence of small-scale changes. It would be, if not unfeasible, hardly economic, and the final architecture might be far from optimal. The gap is too big. We're expecting to complete a big change in a comparatively short time, hence the rate of change is too big. The viscosity of the previous solution will fight that change and prevent it from happening. We need to apply change at a higher granularity level, as the dynamics in the small are not the dynamics in the large.

Curiously enough (or maybe not :-), I'll be talking about refactoring over the next two days. As usual, I'll try to strike a balance, and often get back to good design principles. After all, as we'll see, when the rate of change grows, and/or when the solution space grows, the unselfconscious process must be replaced by the selfconscious process.


Wednesday, October 24, 2007 

More on evolving (or rewriting) existing applications

Just when I wanted to be more present in my blog, I had to survive the loss of about 200 GB of data. Nothing vital, but exactly for that reason, not routinely backed up either. Anyway, sometimes it's a relief to lose a little information :-).

In my previous post I've only scratched the surface of a large problem. If rewriting the application from the core is so dangerous (unless you can break the feedback loop in the diagram, which you can, but not for free), are we left with any viable option?

We all heard about refactoring in the last few years. Refactoring tools are getting better, to the point that you can actually trust them with your production code. However, refactoring can only bring you so far. You won't change a fat client into a web application by incremental, small scale refactoring. But careful refactoring is truly helpful when dealing with small scale problems. I've recently seen some C/C++ code full of comments, and guess what, the backtalk of those comments (see my "Listen to your tools and materials") is actually "please remove me and refactor the code instead". Refactoring is also hard when the code is strongly coupled with an outdated (or ill-structured) relational database, without any kind of insulation layer. So, refactoring is an interesting technique, and can often be applied in parallel with some more aggressive strategy, but is not really helpful when you need large scale changes.

An interesting strategy, that can be applied in more than a few cases, is the modular rewriting of applications. The concept is not new: you exploit (or improve by refactoring and then exploit) the existing architecture, by gradually replacing modules. With a little oversimplification, we could say that the real problem is which modules you want to replace first.

Again, this depends on a number of factors:

- do you have a different target architecture? If so, is there any module that you can replace in the existing system, and which will effectively move, almost unchanged, to the new architecture?

- are you just tired of maintenance problems? Then use quadrants to isolate the best candidates. Troubled modules with small size, or troubled modules with minimal coupling from the rest, should be your first candidates. Get as much value as you can, as soon as you can. Get momentum. We're not in the 80s anymore; flat value delivery curves are a recipe for project cancellation.

- are you "just" changing technology, or are you trying to innovate the application? Honestly, most companies can't innovate. Not by themselves, and quite often, not even with external help. They're prisoners of their own artifacts. They think of their problems in terms of the existing implementation. They will never, for instance, conceive a real web 2.0 application, because they can't think that way.
If you want to innovate the application, start with innovative features in mind. Design new modules, then bridge them with the existing system. While designing the bridge, push the envelope a little: can we erode a little of the existing system, and replace some subsystem with something more appropriate?
Doing so will move you toward an Incremental Funding project, which will gradually erode the existing system until it makes economic sense to scrap the rest.

- Note that this assumes that innovation is happening at the fringes of your system, not at the core. This is often true (see also "Beyond the core" by Chris Zook, Harvard Business School Press, for a business-oriented perspective), but sometimes, the real innovation requires a new core. Say you need a new core. Are you sure? Maybe you "just" need a new product, barely integrated with the existing system. Don't say no so soon. Play with the idea. Don't get into a self-fulfilling expectation by telling yourself that everything must be tightly integrated into a new core.

- Are you moving toward a truly better architecture, or just following a technology trend? Maybe your company was much less experienced when the current system was built. Still, are you sure the new core won't become brittle very soon? Do you really need a large core? Can't you decompose the system toward (e.g.) a loosely-coupled, service-oriented architecture, instead of a tightly-coupled, database-centric architecture (maybe with a web front end, big deal :-).

- can you apply some non-linear (a.k.a. lateral) thinking? Not familiar with lateral thinking? De Bono should be required reading for software developers, and even more so for project managers and designers!

- is your new/target architecture aligned with your company strategy? If not, think twice!

OK, it's getting late again. See you soon, I hope, with some reflections on the concept of form (vs function) in the software realm.


Saturday, August 04, 2007 

Get the ball rolling, part 4 (of 4, told ya :-)

The past two weeks have been hectic to say the least, and I had no time to conclude this quadrilogy. Fear not :-), however, because here I am. Anyway, I hope you used the extra time to read the comments and answers to the previous posts, especially Zibibbo's comments; his contribution has been extremely valuable.
Indeed, I've left quite a few interesting topics hanging. I'll deal with the small stuff first.

Code [and quality]:
I already mentioned that I'm not always so line-conscious when I write code. Among other things, in real life I would have used assertions more liberally.
For instance, there is an implicit contract between Frame and Game. Frame is assuming that Game won't call its Throw method if the frame IsComplete. Of course, Game is currently respecting the contract. Still, it would be good to make the contract explicit in code. An assertion would do just fine.
I remember someone saying that with TDD, you don't need assertions anymore. I find this concept naive. Assertions like the above would help to locate defects (not just detect them) during development, or to highlight maintenance changes that have violated some internal assumption of existing code. Test cases aren't always the most efficient way to document the system. We have several tools, we should use them wisely, not religiously.
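
Something like this would do (I'm using the class and method names from this series, but approximating the signatures):

using System.Diagnostics;

class Frame
{
    public void Throw( int pins )
    {
        // make the implicit contract explicit:
        // Game must not throw on a complete frame
        Debug.Assert( !IsComplete(), "Throw called on a complete Frame" );
        // ... record the throw ...
    }

    public bool IsComplete()
    {
        // ... as in the original code ...
        return false;
    }
}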

When is visual modeling not helpful?
In my experience, there are times when a model just won't cut it.
The most frequent case is when you're dealing with untried (or even unstable) technology. In this context, the wise thing to do is write code first. Code that is intended to familiarize yourself with the technology, even to break it, so that you know what you can safely do, and what you shouldn't do at all.
I consider this learning phase part of the design itself, because the ultimate result (unless you're just hacking your way through it) is just abstract knowledge, not executable knowledge: the code is probably too crappy to be kept, at least by my standards.
Still, the knowledge you acquired will shape the ultimate design. Indeed, once you possess enough knowledge, you can start a more intentional design process. Here modeling may have a role, or not, depending on the scale of what you're doing.
There are certainly many other cases where starting with a UML model is not the right thing to do. For instance, if you're under severe resource constraints (memory, timing, etc), you better do your homework and understand what you need at the code level before you move up to abstract modeling. If you are unsure about how your system will scale under severe loading, you can still use some modeling techniques (see, for instance, my More on Quantitative Design Methods post), but you need some realistic code to start with, and I'm not talking about UML models anyway.
So, again, the process you choose must be a good fit for your problem, your knowledge, your people. Restraining yourself to a small set of code-centric practices doesn't look that smart.

What about a Bowling Framework?
This is not really something that can be discussed briefly, so I'll have to postpone some more elaborate thoughts for another post. Guess I'll merge them with all the "Form Vs. Function" stuff I've been hinting at all this time.
Leaving the Ball and Lane aside for a while, a small but significant improvement would be to use Factory Method on Game, basically making Game an abstract class. That would allow (e.g.) regular bowling to create 9 Frames and 1 FinalFrame, 3-6-9 bowling to change the 3rd, 6th and 9th frame to instances of a (new) StrikeFrame class, and so on.
Note that this is a simple refactoring of the existing code/design (in this sense, the existing design has been consciously underengineered, to the point where making it better is simple). Indeed, there is an important, trivial and at the same time deep reason why this refactoring is easy. I'll cover that while talking about options.
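
In code, the refactoring could look like this (a sketch: I'm approximating the existing design, and names like RegularGame and ThreeSixNineGame are mine):

abstract class Game
{
    protected Frame[] frames;
    protected int currentFrame;

    public void Start()
    {
        frames = CreateFrames();    // Factory Method
    }

    // creational responsibility moves to the derived class
    protected abstract Frame[] CreateFrames();

    // turning MaxBalls into a virtual property would also let the
    // base class instantiate the thrownBalls array (more on this below)
    protected virtual int MaxBalls { get { return 21; } }
}

class RegularGame : Game
{
    protected override Frame[] CreateFrames()
    {
        Frame[] f = new Frame[ 10 ];
        for( int i = 0; i < 9; ++i )
            f[ i ] = new Frame();
        f[ 9 ] = new FinalFrame();
        return f;
    }
}

class ThreeSixNineGame : RegularGame
{
    protected override Frame[] CreateFrames()
    {
        Frame[] f = base.CreateFrames();
        // the 3rd, 6th and 9th frames become automatic strikes
        f[ 2 ] = new StrikeFrame();
        f[ 5 ] = new StrikeFrame();
        f[ 8 ] = new StrikeFrame();
        return f;
    }
}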
If you try to implement 3-6-9 bowling, you might also find it useful to change
if( frames[ currentFrame ].IsComplete() )
   ++currentFrame;
into
while( frames[ currentFrame ].IsComplete() )
   ++currentFrame;

but this is not strictly necessary, depending on your specific implementation.
There are also a few more changes that would make the framework better: for instance, as I mentioned in my previous post, moving MaxBalls from a constant to a virtual function would allow the base class to instantiate the thrownBalls array even if the creational responsibility for frames were moved to the derived class (using Factory Method).
Finally, the good Zibibbo also posted a solution based on a state machine concept. Now, it's pretty obvious that bowling is indeed based on a state machine, but curiously enough, I wouldn't try to base the framework on a generalized state machine. I'll have to save this for another post (again, I suspect, Form Vs. Function), but in short, in this specific case (and in quite a few more) I'd try to deal with variations by providing a structure where variations can be plugged in, instead of playing the functional abstraction card. More on this another time.

Procedural complexity Vs. Structural complexity
Several people have been (and still are) taught to program procedurally. In some disciplines, like scientific and engineering computing, this is often still considered the way to go. You get your matrix code right :-), and above that, it's naked algorithms all around.
When you program this way (been there, done that) you develop the ability to pore through long functions, calling other functions, calling other functions, and overall "see" what is going on. You learn to understand the dynamics of mutually recursive functions. You learn to keep track of side effects on global variables. You learn to pay attention to side effects on the parameters you're passing around. And so on.
In short, you learn to deal with procedural complexity. Procedural complexity is best dealt with at the code level, as you need to master several tiny details that can only be faithfully represented in code.
People who have truly absorbed what objects are about tend to move some of this complexity on the structural side. They use shape to simplify procedures.
While the only shape you can give to procedural code is basically a tree (with the exception of [mutually] recursive functions, and ignoring function pointers), OO software is a different material, which can be shaped in a more complex collaboration graph.
This graph is best dealt with visually, because you're not interested in the tiny details of the small methods, but in getting the collaboration right, the shape right (where "right" is, again, highly contextual), even the holes right (that is, what is not in the diagram can be important as well).
Dealing with structural complexity requires a different set of skills and tools. Note that people can be good at dealing with procedural and with structural complexity; it's just a matter of learning.
As usual, good design is about balance between these two forms of complexity. More on this another time :-).

Balls, Real Options, and some Big Stuff.
Real Options are still cool in project management, and in the past few years software development has taken notice. Unfortunately, most papers on software and real options tend to fall in one of these two categories:
- the mathematically heavy with little practical advice.
- the agile advocacy with the rather narrow view that options are just about waiting.
Now, in a paper I've referenced elsewhere, Avi Kamara put it right in a few words. The problem with the old-fashioned economic theory is the assumption that the investment is an all or nothing, now or never, project. Real options challenge that view, by becoming aware of the costs and benefits of doing or not doing things, building flexibility and taking advantage of opportunities over time (italics are quotes from Kamara).
Now, this seems like a recipe for agile development. And indeed it is, once we break the (wrong) equation agile = code centric, or agile = YAGNI. Let's look a little deeper.
Options don't come out of nowhere. They come either from the outside, or from the inside. Options coming from the outside are new market opportunities, emerging users' requests or feedback, new technologies, and so on. Of course, sticking to a plan (or a Big Upfront Design) and ignoring those options wouldn't be smart (it's not really a matter of being agile, but of being economically competent or not).
Options come also from the inside. If you build your software upon platform-specific technologies, you won't have an option to expand into a different platform. If you don't build an option for growth inside your software, you just won't have the option to grow. Indeed, it is widely acknowledged in the (good) literature that options have a cost (the option premium). You pay that cost because it provides you with an option. You don't want to make the full investment now (the "all" in "all or nothing") but if you just do nothing, you won't get the option either.
The key, of course, is that the option premium should be small. The exercise cost should be reasonable as well (the exercise price, or strike price, is what you pay to actually exercise your option). (Note: for those interested in these ideas, the best book I've found so far on real options is "Real Options Analysis" by Johnathan Mun, Wiley Finance series.)
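To put numbers on it, here is a back-of-the-envelope sketch; every figure is invented for illustration (effort in person-days, the probability a pure guess):

  class OptionMath {
      public static void main(String[] args) {
          double premium      = 1.0;  // adding the under-engineered class now
          double exerciseCost = 2.0;  // fleshing it out later, if ever needed
          double retrofitCost = 15.0; // bolting the concept onto code that lacks it
          double pNeed        = 0.3;  // chance we'll ever need the extension

          double withOption = premium + pNeed * exerciseCost; // 1.0 + 0.3*2.0 = 1.6
          double pureYagni  = pNeed * retrofitCost;           // 0.3*15.0 = 4.5
          System.out.println(withOption + " vs " + pureYagni);
          // The option wins whenever premium < pNeed * (retrofitCost - exerciseCost).
      }
  }

The inequality in the last comment is the whole game: keep the premium small and the exercise cost reasonable, and the option pays off even for moderately unlikely futures.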

Now, we can begin to see the role of careful design in building the right options inside software, and why YAGNI is an oversimplified strategy.
In a previous post I mentioned how a class Lane could be a useful abstraction for a more realistic bowling scorer. The lane would have sensors on the foul line and in the gutters. This could be useful for some variations of bowling, like Low Ball (look under "Special Games"). So, should I invest in a Lane class I don't really need right now, because it might be useful in the future? Wait, there is more :-).
If you consider 5-pin bowling, you'll see that different pins are awarded different scores. Yeap. You can no longer assume "number of pins = score". If you think about it, there is just no natural place in the XP episode code to put this kind of knowledge. They went for the "nothing" side of the investment. Bad luck? Well guys, that's just YAGNI in the real world. Zero premium, but potentially high exercise cost.
Now, consider this: a well-placed, under-engineered class can be seen as an option with a very low premium and a very low exercise cost. Consider my Ball class. As it stands now, it does precious little (therefore, the premium was small). However, the Ball class can easily be turned into a full-fledged calculator of the ball score. It could easily talk to a Lane class if that's useful - the power of modularity allows the rest of my design to blissfully ignore the way Ball gets to know the score. I might even have a hierarchy of Ball classes for different games (well, most likely, I would).
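Since the code from the previous post isn't reproduced here, the following is just a minimal sketch of the idea; except for Ball and hitPins, the names (and the 5-pin details) are mine:

  // Today Ball does precious little, so the premium is tiny...
  class Ball {
      public int hitPins;                     // candidate for renaming, see below
      public int score() { return hitPins; }  // 10-pin rule: one pin, one point
  }

  // ...but it is the natural home for variant rules, so the exercise cost is low too.
  class FivePinBall extends Ball {
      // 5-pin bowling scores pins by position (2, 3, 5, 3, 2), so this class
      // needs to know which pins fell, not just how many.
      private boolean[] pinDown = new boolean[5];
      private static final int[] VALUE = { 2, 3, 5, 3, 2 };
      public void knockDown(int pin) { pinDown[pin] = true; }
      public int score() {
          int s = 0;
          for (int i = 0; i < 5; i++) if (pinDown[i]) s += VALUE[i];
          return s;
      }
  }

The rest of the design keeps calling score() and never learns where the number comes from; that's modularity doing its job.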
Of course, there would be some small changes here and there. I would have to break the Game interface, which takes a score and builds a Ball. I used that trick to keep the interface compatible with the XP episode code and harvest their test cases, so I don't really mind :-). I would also have to turn the public hitPins member into something else, but here is the power of names: they make it easier to replace a concept, especially in a refactoring-aware IDE.
Bottom line: by introducing a seemingly useless Ball class, I paid a tiny premium to secure a very low exercise cost, in case I needed to support different kinds of bowling. I didn't go for the "all" investment (a full-blown bowling framework), but I didn't go for the "nothing" either. That's applied real options theory; no babbling about being agile will get you there. The right amount of under-engineering, just like the right amount of over-engineering, comes only from reasoning, not from blindly applying oversimplified concepts like YAGNI.
One more thing about options: in the Bowling Framework section, I mentioned that refactoring my design by applying Factory Method to Game was trivial. The reason it was trivial, of course, is that Game is a named entity. You find it, refactor it, and it's done. Unnamed entities (like literals) are more troublesome. Here is the scoop: primitive types are almost like unnamed entities. If you keep your score in an integer, and you have integers everywhere (indexes, counters, whatever), you can't easily refactor your code, because not every integer is a score. That was the idea behind Information Hiding. Some people got it, some didn't. Choose your teacher wisely :-).
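A tiny sketch of the difference (invented names again):

  // The anonymous version: any int can masquerade as a score.
  class Frame {
      int score = 9;       // a score...
      int frameIndex = 9;  // ...and a counter, indistinguishable to any tool
  }

  // The named version: Score can be found, constrained, and refactored in one place.
  final class Score {
      private final int points;
      Score(int points) { this.points = points; }
      int points() { return points; }
      Score plus(Score other) { return new Score(points + other.points); }
  }

Renaming Score, or changing its representation, is now a local, tool-assisted operation; try doing that to exactly the right subset of your ints.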

A few conclusions
As I said, there are definitely times when modeling makes little sense. There are also times when it's really useful. Along the same lines, there are times when requirements are relatively unknown, or when nobody can define what "success" really means before they see it. There are also times when requirements are largely well-known and (dare I say it!) largely stable.
If you were asked to implement an automatic scoring system for 10-pin bowling and 3-6-9 bowling and 9-pin bowling, would you really start coding the way they did in the XP episode, and then slowly change it into something better?
Would you go YAGNI, even though you know perfectly well that you ARE gonna need extensibility? Or would you rather spend a few more minutes at the drawing board, and come up with something like I did?
Do you really believe you can release a working 3-6-9 and a working 9-pin faster using the XP episode code than mine? Is it smart to behave as if requirements were unknown when they're, in fact, largely known? Is it smart not to exploit knowledge you already have? Which do you consider more agile, that is, adaptive to the environment?
Now, just to provoke a reaction from a friend of mine (who, most likely, is somewhere having fun and not reading my blog anyway :-). You might remember Jurassic Park, and how Malcolm explained chaos theory to Ellie by dropping some water on her hand (if you don't, here is the script; look for "38" inside; and no, I'm not a fan :-).
The drops took completely different paths because of small-scale changes (although not really small-scale for a drop, but anyway). Some people contend that requirements are just as chaotic, and that minor changes in the environment will have a gigantic impact on your code anyway, so why bother with upfront design?
Well, I could contend that my upfront design led to code better equipped to deal with change than the XP episode's, but I won't. Because, you know, the drops did take completely different paths because of small-scale changes. But a very predictable thing happened: they both went down. Gravity didn't behave chaotically.
Good designers learn to see the underlying order inside chaos or, more exactly, the predominant forces that won't be affected by environmental conditions. It's not a matter of reading a crystal ball. It's a matter of learning, experience, and, well, talent. Of course, I'm not saying that there are always predominant, stable forces; just that, in several cases, there are. Context is the key. Just as ignoring gravity ain't safe, ignoring context ain't agile.

Labels: , , ,

Tuesday, November 14, 2006 

Slip Charts

Software development is a learning process, yet we are often asked for early estimates. In several cases, those estimates turn out to be wrong, because we didn't have enough information. In pathological companies, estimates are considered promises. In more enlightened (should I say realistic :-) companies, estimates are periodically reframed.
It's very common for early estimates to be overoptimistic, so when we review an estimate it's usually to add more effort. Unless we can shrink requirements, or unless we can find a smarter way to satisfy requirements (which is usually what I try to do), we have to postpone delivery.
When the project has high complexity and high uncertainty, it's not uncommon to slip several times, as more and more knowledge is gained. Of course, at some point we have to understand if we are going somewhere, or if we simply don't know how much it's gonna take. A useful, cheap, and simple way to get a better understanding of slippage is to draw a slip chart. You simply add one point every time you review your estimate. On the x axis, you have the current date; on the y, you have the expected delivery date.
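The bookkeeping is trivial; a spreadsheet will do, but here is a minimal sketch in Java (names invented) of the only data a slip chart needs:

  import java.time.LocalDate;
  import java.util.ArrayList;
  import java.util.List;

  // A slip chart is just a list of (review date, expected delivery) pairs.
  class SlipChart {
      record Review(LocalDate reviewDate, LocalDate expectedDelivery) {}
      private final List<Review> reviews = new ArrayList<>();

      void reviewEstimate(LocalDate reviewDate, LocalDate expectedDelivery) {
          reviews.add(new Review(reviewDate, expectedDelivery));
      }

      // Dump as CSV: x = review date, y = expected delivery date.
      void printCsv() {
          for (Review r : reviews)
              System.out.println(r.reviewDate() + "," + r.expectedDelivery());
      }
  }

Feed the CSV to any charting tool; a line that flattens out is growing confidence, a line that keeps climbing is the second chart below.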

Here is a real slip chart (from a real project):

As you can see, after less than one month the estimate was increased by about one month. Then it remained fixed up to delivery. After a few reviews like that, confidence in making the deadline increases.

Here is another real slip chart (same product as above, but different project):

Now, that's a troubled project. The team is learning a lot along the road, but not enough to give better estimates. As you can guess, it wasn't really finished when it was declared finished.

Although learning to read a slip chart (and its counterpart, the slip/lead chart) is easy, if you want to squeeze every ounce of information from the chart, I suggest you read an excellent book by Gerald Weinberg, "Quality Software Management Vol. 2". Well, if you're at all serious about software project management, you probably want to read all four volumes anyway :-).

Note: looking at the slip chart can give you precious information, but information without action is useless. Now, action is always context-dependent. For the real project above, my suggested action (I saw the slip charts too late) would have been at least to exclude the project from the next delivered version of the product, and possibly to schedule a design review to understand what was really going on. Of course, removing a single project from the next version of a product requires some configuration management discipline (which was in place), and in this case it certainly mandates that development be done outside the baseline (so, sorry, no continuous integration).
But then, when you have high technical uncertainty, continuous integration is not necessarily a good practice (quite the opposite, I would say). Continuous integration is good when you have high uncertainty on functional requirements, because you can (ideally) get early feedback from users. It's also good for low-uncertainty projects, because then you don't add further uncertainty (from late integration). But it's not a silver bullet, and it should not be applied blindly, "by the book".

Labels: ,

Wednesday, March 15, 2006 

Organizational Structure

Tomorrow and the day after I'll be teaching my Software Project Management course. From previous discussions, I could foresee that there would be some interest in organizational structure (functional teams, project teams, matrix structures and the like), so I decided to add some custom material. However, the traditional literature on organizational strategy (I've got more than 20 books on the subject) is very theoretical, and definitely not grounded in software, which is a hallmark of my course. Therefore, I took a rather original road, organized around two major tenets.
1) The organizational structure must support the company strategy. Any misalignment will be a cause of friction and inefficiency. An effective, barebone classification of company strategies can be found in Michael Treacy, Fred Wiersema, "The Discipline of Market Leaders". A couple of papers from Stan Rifkin ("Why Software Process Innovations Are Not Adopted", IEEE Software, July/August 2001, and "What Makes Measuring Software So Hard?", IEEE Software, May/June 2001) give some hints on the relevance of this work for software development.
2) The organizational structure must be aligned with the product architecture. This is a consequence of Conway's Law: "Any organization which designs a system will inevitably produce a design whose structure is a copy of the organization's communication structure" (Melvin E. Conway, "How Do Committees Invent?", Datamation, April 1968). Conway's paper is one of those classics often quoted and never read, partially because they used to be hard to find. Fortunately, here is an online version. Luke Hohmann ("Journey of the Software Professional", Prentice-Hall) also had an interesting intuition: the best organizational structure for a product is not necessarily fixed; it may change over time (this is probably an agile management frontier right now :-). Finally, some useful information on how the traditional organizational structures (functional, matrix, etc.) perform on software projects can be found in Jack T. Marchewka, "Information Technology Project Management", John Wiley & Sons.
Quite a mouthful, but being a good manager takes much more than technical skills...
For another interesting point of view on organizational structure, see my post An organization consists of conversations.

Labels: , , , ,

Friday, July 01, 2005 

Build in-house, or buy a customization?

Some questions come up over and over, but they don't get any easier to answer.
Make or buy is a classic, and in practice is often answered in unclear, emotional terms.
One of my customers is facing a tough choice. They could create the product in-house, reusing a few components from other projects and using a few commercial components as well. Alternatively, they could ask a third party to customize an existing, similar product, which would appreciably shorten development time.
We have full data for the in-house effort:
  • requirements are reasonably clear
  • we have an overall architectural model
  • we have two alternative schedules for implementation, both based on the concepts of Incremental Funding and Minimum Marketable Features (see "Software by Numbers" by Mark Denne and Jane Cleland-Huang), so the product should start paying for itself when it is roughly 50% "complete" (see the sketch after this list).
  • we have a reasonable estimate for development cost and time, and to some extent for maintenance costs over the next few years.
  • we know the in-house process, the kind of documentation we will have, the defect density we can expect from the development team, etc.
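To make the "pays for itself at roughly 50%" point concrete, here is a toy sketch in the spirit of Incremental Funding; every number is invented for illustration:

  class IncrementalFunding {
      public static void main(String[] args) {
          // Net cash flow per period: negative = development cost, positive =
          // revenue from already-delivered MMFs. All numbers are invented.
          double[] netCashFlow = { -40, -30, -10, +25, +40, +40 };
          double cumulative = 0;
          for (int period = 0; period < netCashFlow.length; period++) {
              cumulative += netCashFlow[period];
              System.out.println("period " + period + ": cumulative = " + cumulative);
          }
          // Self-funding starts where the per-period flow turns positive
          // (period 3 here, about halfway); full breakeven is where the
          // cumulative total turns positive (the last period, in this toy run).
      }
  }

The exact shape of the curve depends on how you sequence the MMFs, which is precisely what the two alternative schedules above differ on.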

The third party is understandably eager to provide an extremely favorable estimate (in cost, and even more so in time) so as to get a contract signed.
The real question is: how do we make an informed choice? Management shouldn't (and with any luck, won't :-) choose based only on time and money. Will product quality be comparable with that of in-house products (or better)? What is the third party's strategy for customization, e.g. do they have a modular, plug-in based architecture, or will they simply create a branch of their main product? How do they feel about a joint design and code review?
Over time, I've developed a comprehensive approach to dealing with this kind of issue. In fact, while I appreciate the need to trust your guts when making this kind of decision, I find that answering a detailed set of questions provides you with two benefits:
1) you have a clear set of values to consider, where "value" means both something that is valuable to know and a precise answer to ponder.
2) since in most cases the answers must come through extensive talking with the third party, you get to know them much better, well under the sugarcoating of glossy brochures and nice presales. This will help your guts too :-).

Labels: , ,