Friday, March 12, 2010 

Where is your Knowledge?

Software development is a process of knowledge acquisition and knowledge encoding (see Phillip Armour, copiously quoted in this blog). Where, and how, do we store that knowledge? In several places, in several ways:

- In source code: that's executable knowledge
- In models: that's formal knowledge
- In other kinds of documents: that's written knowledge
- In our brain, consciously: that's explicit knowledge
- In our brain, unconsciously: that's tacit knowledge

Knowledge stored in source code has the extremely useful property of being executable, but we can't store the entire development knowledge in executable statements. Design Rationale, for instance, is not present in code (and not even in most UML diagrams, for that matter), and is basically stored at the conscious/unconscious level. My forcefield diagram is much better at formally capturing rationale.

Explicit knowledge is often passed on as oral tradition, while tacit knowledge is often passed on as "a way of doing things", just by working together. Pair programming, reviews, joint design sessions (and so on) help distribute both explicit and tacit knowledge.

Knowledge has value, but that value is not constant over time. In 1962, Fritz Machlup came up with the concept of Half-life of knowledge: the amount of time that has to elapse before half of the knowledge in a particular area is superseded or shown to be untrue.

Moreover, the initial value of a particular piece of knowledge can be very high, like a new algorithm that took you years to get right, or very small, like a trivial validation rule.

Recently, I began to think about the half-life of our knowledge repositories as well. With my usual boldness, I'll go ahead and define the Half-Life of a Knowledge Repository: the amount of time that has to elapse before half of the knowledge in a repository is unrecoverable or just too costly to recover. I could even define "too costly" as "higher than the discounted value of that knowledge when lookup is attempted".
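If we assume an exponential decay (a modeling assumption on my part, of course, not a law of nature), the discounted value of a piece of knowledge at lookup time is easy to sketch; the names and numbers below are purely illustrative:

```python
def remaining_fraction(elapsed, half_life):
    """Fraction of knowledge still valid/recoverable after `elapsed` time,
    assuming exponential decay with the given half-life."""
    return 0.5 ** (elapsed / half_life)

def discounted_value(initial_value, elapsed, half_life):
    """Value of a piece of knowledge when lookup is attempted."""
    return initial_value * remaining_fraction(elapsed, half_life)

# A trivial validation rule vs. a hard-won algorithm, both 4 "years" old:
print(discounted_value(100.0, 4, 2))   # short half-life: 25.0 left
print(discounted_value(100.0, 4, 16))  # long half-life: most value remains
```

Under this sketch, recovery is "too costly" exactly when the lookup cost exceeds `discounted_value` at that moment.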

The concept of recoverable knowledge is slightly deeper than it may seem. Sure, it does cover the obvious problems of losing knowledge for lack of backup procedures, or because it's stored in a proprietary format no longer supported, and so on. But it also covers several interesting cases:

- the knowledge is in the brain of an employee, who leaves the company
- the knowledge is in source code, but it's in an obsolete language
- the knowledge is in source code, but it's extremely hard to read
- etc.

I'll leave it up to you to define the half-life of source code, models, documents, brain (conscious and unconscious). Of course, more details are needed: niche languages, for instance, tend to have a shorter half-life.

Now, here is the real boon :-). We can combine the concept of Knowledge Half-Life, Knowledge Value, and Knowledge Repository Half-Life to map the risk of storing a particular piece of knowledge in a particular repository (only). Here is my first-cut map:

Knowledge Half-Life | Knowledge (initial) Value | Repository Half-Life | Result
------------------- | ------------------------- | -------------------- | ------------
Long                | Long                      | Long                 | OK
Long                | Long                      | Short                | Risk
Long                | Short                     | Long                 | Little Waste
Long                | Short                     | Short                | Little Risk
Short               | Long                      | Long                 | Little Waste
Short               | Long                      | Short                | Little Risk
Short               | Short                     | Long                 | Waste
Short               | Short                     | Short                | OK
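Just to make the map concrete, here it is encoded as a small lookup table (a direct transcription of the rows above, nothing more):

```python
# First-cut map: (knowledge_half_life, knowledge_value, repository_half_life)
RISK_MAP = {
    ("long",  "long",  "long"):  "OK",
    ("long",  "long",  "short"): "Risk",
    ("long",  "short", "long"):  "Little Waste",
    ("long",  "short", "short"): "Little Risk",
    ("short", "long",  "long"):  "Little Waste",
    ("short", "long",  "short"): "Little Risk",
    ("short", "short", "long"):  "Waste",
    ("short", "short", "short"): "OK",
}

def storage_outcome(knowledge_half_life, value, repository_half_life):
    """Risk/waste of storing a piece of knowledge in one repository only."""
    return RISK_MAP[(knowledge_half_life, value, repository_half_life)]

# Long-lived, valuable knowledge kept only in a short-lived repository
# (say, one employee's brain):
print(storage_outcome("long", "long", "short"))  # Risk
```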

It's interesting to review one of the values in the Agile Manifesto (Working software over comprehensive documentation) under this perspective.

Let's say we have a piece of knowledge, and that knowledge can indeed be stored in code (though, as I said, you can't store everything in code).

If the half-life of knowledge is short, storing it in code only is probably the best economical choice. If the half-life of knowledge is long, we have to worry a little more. If we add relevant unit tests to that piece of code, we increase the repository half-life, as they make it easier to recover knowledge from code. If we use a mainstream language, we can also increase the repository half-life.

This may still not be enough. If you had to recover the entire knowledge stored in a non-trivial piece of code (say, an MP4 codec) having only the source code, and no (comprehensive) documentation on what that piece of code is doing, why, and how, it would take you far too long. The half-life of code is shorter than the half-life of code + documents.
Actually, depending on context, given the choice to have just the code and nothing else, or just comprehensive documentation and nothing else, we'd better be careful about what we choose (when knowledge half-life is long, of course).

Of course, the opposite is also true: if you store knowledge with short half-life outside code, you seriously risk wasting your time.

I've often been critical of teaching and applying principles and techniques without the necessary context. I hope that, somehow, the table above and the underlying concepts can move our understanding of when to use what a little further.


Sunday, March 07, 2010 

You can't control what you can't …

… measure, Tom DeMarco used to say ("Controlling Software Projects: Management, Measurement, and Estimation", 1982). Tom recently confessed he no longer subscribes to that point of view. Now, I like Tom and I've learnt a lot from him, but I don't really agree with most of what he says in that paper.

Sure, the overall message is interesting: earth-shaking projects have a ROI so big that you don't really care about spending a little more money. But money isn't the only thing you may need to control (what about time, and your window of opportunity?), and not every project can be an earth-shaking project. If you need to comply with some policy or regulation by a given date, it may well be a dull project, but you'd better control for time :-). More examples (tons, actually) on demand. Relinquishing control is a fascinating concept, and by all means, if you can redefine your projects so that control is no longer necessary, just do it. But frankly, it's not always an option.

Still, can we control what we can't measure? As usual, it depends. It depends on what you want to control, and on your definition of control. We can watch over some things informally, that is, using a rough, imprecise, perhaps intuitive measure ("feeling") and still keep inside reasonable boundaries. This might be enough to be "in control". As others have noted (see for instance Managing What You Can’t Measure) sometimes all we need is a feeling that we're going off track, and a sensible set of tactics to get back on.

All that said, I feel adventurous enough today :-) to offer my own version of Tom's (repudiated) law. I just hope I won't have to take it back in 30 years :-).

You can't control what you can't name.

I would have said "define", but a precise definition is almost like a measure. But if you can't even name the concept (which, yes, requires at least a very informal definition of the term), you're consciously unaware of it. Without awareness, there is no control.

I can say that better: you can't control it intentionally. For a long time, people have controlled forces they didn't fully understand, and perhaps couldn't even name, for instance in building construction. They did that through what Alexander called the unselfconscious process, by relying on tradition (which was largely based on trial and error).

I see this very often in software projects too. People doing things because tradition taught them to do so. They don't really understand why - and would react vehemently if you dare to question their approach or suggest another way. They do so because tradition provides safety, and you're threatening their safety.

The problem with the unselfconscious process is that it doesn't scale well. When the problem is new, when the rate of change in the problem domain increases, whenever the right answer can't be found in tradition, the unselfconscious process doesn't work anymore. We gotta move to the selfconscious process. You gotta learn concepts. Names. Forces. Nonlinear interactions. We gotta think before we do. We gotta ask questions. Question the unquestionable. Move outside our comfort area. Learn, learn, learn.

Speaking of learning, I've got something to say, which is why I wrote this post in the first place, but I'll save that for tomorrow :-).


Sunday, January 10, 2010 

Delaying Decisions

Since microblogging is not my thing, I decided to start 2010 by writing my longest post ever :-). It will start with a light review of a well-known principle and end up with a new design concept. Fasten your seatbelt :-).

The Last Responsible Moment
When we develop a software product, we make decisions. We decide about individual features, we make design decisions, we make coding decisions, we even decide which bugs we really want to fix before going public. Some decisions are taken on the fly; some, at least in the old school, are somewhat planned.

A key principle of Lean Development is to delay decisions, so that:
a) decisions can be based on (yet-to-discover) facts, not on speculation
b) you exercise the wait option (more on this below) and avoid early commitment

The principle is often spelled as "Delay decisions until the last responsible moment", but a quick look at Mary Poppendieck's website (Mary co-created the Lean Development approach) shows a more interesting nuance: "Schedule Irreversible Decisions at the Last Responsible Moment".

Defining "Irreversible" and "Last Responsible" is not trivial. In a sense, there is nothing in software that is truly irreversible, because you can always start over. I haven't found a good definition for "irreversible decision" in literature, but I would define it as follows: if you make an irreversible decision at time T, undoing the decision at a later time will entail a complete (or almost complete) waste of everything that has been created after time T.

There are some documented definitions of the "last responsible moment". A popular one is "the point when failing to decide eliminates an important option", which I find rather unsatisfactory. I've also seen some attempts to quantify it better, as in this funny story, except that in the real world you never have a problem that simple (very few ramifications in the decision graph) and that detailed (you know the schedule beforehand). I would probably define the Last Responsible Moment as follows: time T is the last responsible moment to make a decision D if, by postponing D, the probability of completing on schedule/budget (even when you factor in the hypothetical learning effect of postponing) decreases below an acceptable threshold. That, of course, allows us to scrap everything and restart, if schedule and budget allow for it, and in this sense it's kinda coupled with the definition of irreversible.
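My definition of the last responsible moment can be sketched in a few lines; the probabilities below are hypothetical, and I'm assuming the learning effect of postponing is already factored into them:

```python
def last_responsible_moment(p_success_if_decided_at, threshold=0.8):
    """p_success_if_decided_at[t] is the (hypothetical) probability of
    completing on schedule/budget if decision D is postponed to time t.
    Returns the last t where that probability is still acceptable,
    or None if it never is."""
    last = None
    for t, p in enumerate(p_success_if_decided_at):
        if p >= threshold:
            last = t
    return last

# Postponing helps for a while (learning), then hurts (lost lead time):
print(last_responsible_moment([0.90, 0.92, 0.85, 0.60]))  # 2
```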

Now, irreversibility is bad. We don't want to make irreversible decisions. We certainly don't want to make them too soon. Is there anything we can do? I've got a few important things to say about modularity vs. irreversibility and passive vs. proactive option thinking, but right now, it's useful to recap the major decision areas within a software project, so that we can clearly understand what we can actually delay, and what is usually suggested that we delay.

Major Decision Areas
I'll skip a few very-high-level, strategic decisions here (scope, strategy, business model, etc.). It's not that they can't be postponed, but I need to give some focus to this post :-). So I'll get down to the decisions we take more routinely.

People
Choosing the right people for the project is a well-known ingredient for success.

Process
Are we going XP, Waterfall, something in between? :-).

Feature Set
Are we going to include this feature or not?

Design
What is the internal shape (form) of our product?

Coding
Much like design, at a finer granularity level.

Now, "design" is an overly general concept. Too general to be useful. Therefore, I'll split it into a few major decisions.

Architectural Style
Is this going to be an embedded application, a rich client, a web application? This is a rather irreversible decision.

Platform
Goes somewhat hand in hand with Architectural Style. Are we going with an embedded application burnt into an FPGA? Do you want to target a PIC? Perhaps an embedded PC? Is the client a Windows machine, or do you want to support Mac/Linux? A .NET server side, or maybe Java? It's all rather irreversible, although not completely so.

3rd-Party Libraries/Components/Etc
Are we going to use some existing component (of various scale)? Unless you plan on wrapping everything (which may not even be possible), this often ends up being an irreversible decision. For instance, once you commit yourself to using Hibernate for persistence, it's not trivial to move away.

Programming Language
This is the quintessential irreversible decision, unless you want to play with language converters. Note that this is not a coding decision: coding decisions are made after the language has been chosen.

Structure / Shape / Form
This is what we usually call "design": the shape we want to impose on our material (or, if you live on the "emergent design" side, the shape that our material will take as the final result of several incremental decisions).

So, what are we going to delay? We can't delay all decisions, or we'll be stuck. Sure, we can delay something in each and every area, but truth is, every popular method has been focusing on just a few of them. Of course, different methods tried to delay different choices.

A Little Historical Perspective
Experience brings perspective; at least, true experience does :-). Perspective allows us to look at something and see more than is usually seen. For instance, perspective allows us to look at the old, outdated, obsolete waterfall approach and see that it (too) was meant to delay decisions, just different decisions.

Waterfall was meant to delay people decisions, design decisions (which include platform, library and component decisions) and coding decisions. The people decision was delayed by specialization: you only have to pick the analyst first; everyone else can be chosen later, when you know what you gotta do (it even makes sense :-)). The design decision was delayed because platforms, including languages, OS, etc., were way more balkanized than today. Also, architectural styles and patterns were much less understood, and it made sense to look at a larger picture before committing to an overall architecture.
Although this may seem rather ridiculous from the perspective of a 2010 programmer working on Java corporate web applications, most of this stuff is still relevant for (e.g.) mass-produced embedded systems, where choosing the right platform may radically change the total development and production cost, yet choosing the wrong platform may over-constrain the feature set.

Indeed, open systems (another legacy term from late '80s - early '90s) were born exactly to lighten up that choice. Choose the *nix world, and forget about it. Of course, the decision was still irreversible, but granted you some latitude in choosing the exact hw/sw. The entire multi-platform industry (from multi-OS libraries to Java) is basically built on the same foundations. Well, that's the bright side, of course :-).

Looking beyond platform independence, the entire concept of "standard" allows us to delay some decisions. TCP/IP, for instance, allows me to choose modularly (a concept I'll elaborate later). I can choose TCP/IP as the transport mechanism, and then delay the choice of (e.g.) the client side, and focus on the server side. Of course, a choice is still made (the client must have TCP/IP support), so let's say that widely adopted standards allow for some modularity in the decision process, and therefore to delay some decisions, mostly design decisions, but perhaps others as well (like people).

It's already going to be a long post, so I won't look at each and every method/principle/tool ever conceived, but if you do your homework, you'll find that a lot of what has been proposed in the last 40 years or so (from code generators to MDA, from spiral development to XP, from stepwise refinement to OOP) includes some magic ingredient that allows us to postpone some kind of decision.

It's 2010, guys
So, if you ain't agile, you are clumsy :-)) and c'mon, you don't wanna be clumsy :-). So, seriously, which kind of decisions are usually delayed in (e.g.) XP?

People? I must say I haven't seen much on this. Most literature on XP seems based on the concept that team members are mostly programmers with a wide set of skills, so there should be no particular reason to delay decision about who's gonna work on what. I may have missed some particularly relevant work, however.

Feature Set? Sure. Every incremental approach allows us to delay decisions about features. This can be very advantageous if we can play the learning game, which includes rapid/frequent delivery, or we won't learn enough to actually steer the feature set.
Of course, delaying some decisions on feature set can make some design options viable now, and totally bogus later. Here is where you really have to understand the concept of irreversible and last responsible moment. Of course, if you work on a settled platform, things get simpler, which is one more reason why people get religiously attached to a platform.

Design? Sure, but let's take a deeper look.

Architectural Style: not much. Quoting Booch, "agile projects often start out assuming a given platform and environmental context together with a set of proven design patterns for that domain, all of which represent architectural decisions in a very real sense". See my post Architecture as Tradition in the Unselfconscious Process for more.
Seriously, nobody ever expected to start with a monolithic client and end up with a three-tier web application built around an MVC pattern just by coding and refactoring. The architectural style is pretty much a given in many contemporary projects.

Platform: sorry guys, but if you want to start coding now, you gotta choose your platform now. Another irreversible decision made right at the beginning.

3rd-Party Libraries/Components/Etc: some delay is possible for modularized decisions. If you wanna use Hibernate, you gotta choose pretty soon. If you wanna use Seam, you gotta choose pretty soon. Pervasive libraries are so entangled with architectural styles that it's relatively hard to delay some decisions here. Modularized components (e.g. the choice of a PDF rendering library) are simpler to delay, and can be proactively delayed (see later).

Programming Language: no way guys, you have to choose right here, right now.

Structure / Shape / Form: of course!!! Here we are. This is it :-). You can delay a lot of detailed design choices. Of course, we always postpone some design decision, even when we design before coding. But let's say that this is where I see a lot of suggestions to delay decisions in the agile literature, often using the dreaded Big Upfront Design as a straw man argument. Of course, the emergent design (or accidental architecture) may or may not be good. If I had to compare the design and code coming out of the XP Episode with my own, I would say that a little upfront design can do wonders, but hey, you know me :-).

OK guys, what follows may sound a little odd, but in the end it will prove useful. Have faith :-).
You can get better at everything by doing anything :-), so why not get better at delaying decisions by playing Windows Solitaire? All you have to do is set the options in the hardest possible way:

now, play a little, until you have to make some decision, like here:

I could move the 9 of spades or the 9 of clubs over the 10 of hearts. It's an irreversible decision (well, not if you use the undo, but that's lame :-). There are some ramifications for both choices.
If I move the 9 of clubs, I can later move the king of clubs and uncover a new card. After that, it's all unknown, and no further speculation is possible. Here, learning requires an irreversible decision; this is very common in real-world projects, but seldom discussed in literature.
If I move the 9 of spades, I uncover the 6 of clubs, which I can move over the 7 of diamonds. Then, it's kinda unknown, meaning: if you're a serious player (I'm not) you'll remember the previous cards, which would allow you to speculate a little better. Otherwise, it's just as above: you have to make an irreversible decision to learn the outcome.

But wait: what about the last responsible moment? Maybe we can delay this decision! Now, if you delay the decision by clicking on the deck and moving further, you're not delaying the decision: you're wasting a chance. In order to delay this decision, there must be something else you can do.
Well, indeed, there is something you can do. You can move the 8 of diamonds above the 9 of clubs. This will uncover a new card (learning) without wasting any present opportunity (it could still waste a future opportunity; life is tough). Maybe you'll get a 10 of diamonds under that 8, at which point there won't be any choice to be made about the 9. Or you might get a black 7, at which point you'll have a different way to move the king of clubs, so moving the 9 of spades would be a more attractive option. So, delay the 9 and move the 8 :-). Add some luck, and it works:

and you get some money too (total at decision time vs. total at the end)

Novice solitaire players are also known to make irreversible decisions without necessity. For instance, in similar cases:

I've seen people eagerly moving the 6 of diamonds (actually, whatever they got) over the 7 of spades, because "that will free up a slot". Which is true, but irrelevant. This is a decision you can easily delay. Actually, it's a decision you must delay, because:
- if you happen to uncover a king, you can always move the 6. It's not the last responsible moment yet: if you do nothing now, nothing bad will happen.
- you may uncover a 6 of hearts before you uncover a king. And moving that 6 might be more advantageous than moving the 6 of diamonds. So, don't do it :-). If you want to look good, quote Option Theory, call this a Deferral Option and write a paper about it :-).

Proactive Option Thinking
I've recently read an interesting paper in IEEE TSE ("An Integrative Economic Optimization Approach to Systems Development Risk Management", by Michel Benaroch and James Goldstein). Although the real meat starts in chapter 4, chapters 1-3 are probably more interesting for the casual reader (including myself).
There, the authors recap some literature about Real Options in Software Engineering, including the popular argument that delaying decisions is akin to a deferral option. They also make important distinctions, like the one between passive learning through deferral of decisions and proactive learning, but also between responsiveness to change (a central theme in agility literature) and manipulation of change (relatively less explored), and so on. There is a lot of food for thought in those 3 chapters, so if you can get a copy, I suggest that you spend a little time pondering over it.
Now, I'm a strong supporter of Proactive Option Thinking. Waiting for opportunities (and then reacting quickly) is not enough. I believe that options should be "implanted" in our project, and that can be done by applying the right design techniques. How? Keep reading :-).

The Invariant Decision
If you look back at those pictures of Solitaire, you'll see that I wasn't really delaying irreversible decisions. All decisions in solitaire are irreversible (real men don't use CTRL-Z). Many decisions in software development are irreversible as well, especially when you are on a tight budget/schedule, so starting over is not an option. Therefore, irreversibility can't really be the key here. Indeed, I was trying to delay Invariant Decisions. Decisions that I can take now, or I can take later, with little or no impact on the outcome. The concept itself may seem like a minor change from "irreversible", but it allows me to do some magic:
- I can get rid of the "last responsible moment" part, which is poorly defined anyway. I can just say: delay invariant decisions. Period. You can delay them as much as you want, provided they are still invariant. No ambiguity here. That's much better.
- I can proactively make some decisions invariant. This is so important I'll have to say it again, this time in bold: I can proactively make some decisions invariant.

Invariance, Design, Modularity
If you go back to the Historical Perspective paragraph, you can now read it under a different... perspective :-). Several tools, techniques, methods can be adopted not just to delay some decision, but to create the option to delay the decision. How? Through careful design, of course!

Consider the strong modularity you get from service-oriented architecture, and the platform independence that comes through (well-designed) web services. This is a powerful weapon to delay a lot of decisions on one side or another (client or server).

Consider standard protocols: they are a way to make some decision invariant, and to modularize the impact of some choices.

Consider encapsulation, abstraction and interfaces: they allow you to delay quite a few low-level decisions, and to modularize the impact of change as well. If your choice turns out to be wrong, but it's highly localized (modularized), you may afford undoing your decision, therefore turning irreversible into reversible. A barebone example can be found in my old post (2005!) Builder [pattern] as an option.
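Here is a barebone sketch of the idea, using the PDF rendering library I mentioned earlier (the names are hypothetical; the point is only that client code depends on the interface, so the library decision stays localized, and therefore delayable):

```python
from abc import ABC, abstractmethod

class PdfRenderer(ABC):
    """The decision about which PDF library to use hides behind this."""
    @abstractmethod
    def render(self, document: str) -> bytes: ...

class StubRenderer(PdfRenderer):
    """A placeholder until the real library is chosen; good enough
    to keep the rest of the system moving."""
    def render(self, document: str) -> bytes:
        return document.encode("utf-8")

def export_report(text: str, renderer: PdfRenderer) -> bytes:
    # Client code depends on the interface only; swapping the library
    # later won't ripple through the rest of the system.
    return renderer.render(text)

print(export_report("hello", StubRenderer()))
```

Choosing the real library then becomes an invariant decision, at least as far as `export_report` and its callers are concerned.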

Consider a very old OOA/OOD principle, now somehow resurrected under the "ubiquitous language" umbrella. It states that you should try to reflect the real-world entities that you're dealing with in your design, and then in your code. That includes avoiding primitive types like integer, and creating meaningful classes instead. Of course, you have to understand what you're doing (that is, you gotta be a good designer) to avoid useless overengineering. See part 4 of my digression on the XP Episode for a discussion about adding a seemingly useless Ball class (that is: implanting a low cost - high premium option).
Names alter the forcefield. A named concept stands apart. My next post on the forcefield theme, by the way, will explore this issue in depth :-).
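A minimal illustration (not the Ball class from the XP Episode, just a made-up stand-in): wrapping a primitive in a named concept gives behavior a natural home, and gives the concept a name to stand apart.

```python
class Temperature:
    """A named concept instead of a naked float."""
    def __init__(self, celsius: float):
        self.celsius = celsius

    def is_freezing(self) -> bool:
        # Logic that would otherwise be scattered around as
        # `if t <= 0.0` comparisons on anonymous floats.
        return self.celsius <= 0.0

    def warmed_by(self, delta: float) -> "Temperature":
        return Temperature(self.celsius + delta)

t = Temperature(-3.5)
print(t.is_freezing())                 # True
print(t.warmed_by(5.0).is_freezing())  # False
```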

And so on. I could go on forever, but the point is: you can make many (but not all, of course!) decisions invariant, if you apply the right design techniques. Most of those techniques will also modularize the cost of rework if you make the wrong decision. And sure, you can try to do this on the fly as you code. Or you may want to do some upfront design. You know what I'm thinking.

OK guys, it took quite a while, but now we have a new concept to play with, so more on this will follow, randomly as usual. Stay tuned.


Tuesday, July 22, 2008 

SmartFP™ paper (and tool) online

As promised, I've uploaded a free, simple tool to calculate Function Points using a decision tree. I've also uploaded a (draft) paper describing the overall approach. The paper is still missing a case study, which would help, but I just wanted to put the whole thing online. I'll add the case study, and a few more details, before submitting the paper for publication.

The decision tree approach is quite simple, especially if you have some knowledge of function points. Although it may seem like a small change in perspective from the usual "counting" approach, the result is that we can save a lot of time doing a function point estimate, and in many cases we also get more robust results.
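Just to give a flavor of the idea (this is not the actual SmartFP tree; for that, see the paper): instead of counting DETs and RETs exactly, you answer a few coarse questions that walk a tree down to a complexity class, then apply the standard IFPUG weights. For an Internal Logical File, those weights are 7/10/15; the questions below are made up for illustration.

```python
# Standard IFPUG weights for an Internal Logical File (low/average/high).
ILF_WEIGHT = {"low": 7, "average": 10, "high": 15}

def classify_ilf(few_fields: bool, single_logical_group: bool) -> str:
    """A toy two-question decision tree standing in for exact DET/RET counts."""
    if few_fields:
        return "low" if single_logical_group else "average"
    return "average" if single_logical_group else "high"

def ilf_points(few_fields: bool, single_logical_group: bool) -> int:
    return ILF_WEIGHT[classify_ilf(few_fields, single_logical_group)]

print(ilf_points(True, True))    # 7
print(ilf_points(False, False))  # 15
```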

Experiences and feedback are welcome, as usual. You can find the whole thing on the SmartFP page.

Note: as I plan to make more tools and libraries freely available, I've also created a new "tools" page. So far, there is only a link to BetterEstimate and SmartFP, but more will come...


Wednesday, October 24, 2007 

More on evolving (or rewriting) existing applications

Just when I wanted to be more present in my blog, I had to survive the loss of about 200 GB of data. Nothing vital, but exactly for that reason, not routinely backed up either. Anyway, sometimes it's a relief to lose a little information :-).

In my previous post I've only scratched the surface of a large problem. If rewriting the application from the core is so dangerous (unless you can break the feedback loop in the diagram, which you can, but not for free), are we left with any viable option?

We've all heard about refactoring in the last few years. Refactoring tools are getting better, to the point that you can actually trust them with your production code. However, refactoring can only take you so far. You won't change a fat client into a web application by incremental, small-scale refactoring. But careful refactoring is truly helpful when dealing with small-scale problems. I've recently seen some C/C++ code full of comments, and guess what, the backtalk of those comments (see my "Listen to your tools and materials") is actually "please remove me and refactor the code instead". Refactoring is also hard when the code is strongly coupled with an outdated (or ill-structured) relational database, without any kind of insulation layer. So, refactoring is an interesting technique, and can often be applied in parallel with some more aggressive strategy, but it's not really helpful when you need large-scale changes.

An interesting strategy, that can be applied in more than a few cases, is the modular rewriting of applications. The concept is not new: you exploit (or improve by refactoring and then exploit) the existing architecture, by gradually replacing modules. With a little oversimplification, we could say that the real problem is which modules you want to replace first.

Again, this depends on a number of factors:

- do you have a different target architecture? If so, is there any module that you can replace in the existing system, and which will effectively move, almost unchanged, to the new architecture?

- are you just tired of maintenance problems? Then use quadrants to isolate the best candidates. Troubled modules with small size, or troubled modules with minimal coupling to the rest, should be your first candidates. Get as much value as you can, as soon as you can. Get momentum. We're not in the '80s anymore; flat value delivery curves are a recipe for project cancellation.

- are you "just" changing technology, or are you trying to innovate the application? Honestly, most companies can't innovate. Not by themselves, and quite often, not even with external help. They're prisoners of their own artifacts. They think of their problems in terms of the existing implementation. They will never, for instance, conceive a real web 2.0 application, because they can't think that way.
If you want to innovate the application, start with innovative features in mind. Design new modules, then bridge them with the existing system. While designing the bridge, push the envelope a little: can we erode a little of the existing system, and replace some subsystem with something more appropriate?
Doing so will move you toward an Incremental Funding project, which will gradually erode the existing system until it will make economic sense to scrap the rest.

- Note that this assumes that innovation is happening at the fringes of your system, not at the core. This is often true (see also "Beyond the core" by Chris Zook, Harvard Business School Press, for a business-oriented perspective), but sometimes, the real innovation requires a new core. Say you need a new core. Are you sure? Maybe you "just" need a new product, barely integrated with the existing system. Don't say no so soon. Play with the idea. Don't get into a self-fulfilling expectation by telling yourself that everything must be tightly integrated into a new core.

- Are you moving toward a truly better architecture, or just following a technology trend? Maybe your company was much less experienced when the current system was built. Still, are you sure the new core won't become brittle very soon? Do you really need a large core? Can't you decompose the system toward (e.g.) a loosely-coupled, service-oriented architecture, instead of a tightly-coupled, database-centric architecture (maybe with a web front end, big deal :-)?

- can you apply some non-linear (a.k.a. lateral) thinking? Not familiar with lateral thinking? De Bono should be required reading for software developers, and even more so for project managers and designers!

- is your new/target architecture aligned with your company strategy? If not, think twice!

OK, it's getting late again. See you soon, I hope, with some reflections on the concept of form (vs function) in the software realm.


Sunday, October 14, 2007 

Evolving (or rewriting) existing applications

I've been conspicuously absent from my blog in the last two weeks. Hectic life takes its toll :-), and it's not always possible to talk here about what I'm actually doing.
I'm often involved in very different activities, from weird bugs at the hardware/software interface, to coding tricky parts, to designing [small] application-specific frameworks, to making sense of nonsensical requirements. Recently, however, I've been helping a few customers make some rather fundamental decisions about how to evolve (or rewrite) their applications.

Indeed, a significant subset of existing applications have been developed with now-obsolete technologies ("old" languages, frameworks, components, architectures), or even obsolete business concepts (e.g. a client-side application sold as a product, instead of a web application sold as a service).

Now, when you're planning the next generation of a successful application, you often end up trapped in a very natural, logical, linear way of thinking: we start from the core of the application, build a new/better/modern one, then keep adding features till we have a complete application.
Sometimes, it's not even about starting at the core: you start with a large framework, and expect that you'll be able to build your applications in a snap (when the humongous framework is completed).

Now, this seems very logical, and if you look at it from a technology perspective, it makes a lot of sense. By dealing with the core first, you have the best chance to make a huge impact on the foundations, making them so much better. All those years on the market taught you a lot, so you know how to make the core better.
It's also easier this way: the new application will be entirely based on the new technology, from the bottom up. No need to mix old and new stuff.

As usual, a successful development strategy is completely context dependent! If your application is small, the natural strategy above is also the best overall strategy. In a relatively short time (since the application is small) you'll have a clean, brand-new application.
Unfortunately, the strategy does not scale at all to large applications. Let's see a few (well known, and often ignored) problems:

- The value delivery curve is pretty flat. It looks more like fig.1 in an old post of mine, as you can't really sell the new application till it is finished.
People usually argue that by choosing the "right" core, they can start selling the core before the whole application has been ported. Yeah, sure, maybe in some alternative reality, but truth is, in many cases your existing customers won't downgrade to a less powerful, usually incompatible application (unless they're having major headaches from the old app).
New prospects won't be so exhilarated by a stripped-down core, either. Over the years, for a number of reasons, it's very likely that most innovation happened at the fringes of the old application, not at the core. You're now taking this stuff away for a potentially long time. Your old product will be competing with your new product, and guess what, lack of features tends to be rather visible.

- Although management may seem initially inclined to support your decision to rebuild everything from scratch, they won't stay silent as the market erodes. Soon they will (very reasonably) ask you to add some stuff to the existing application as well.
Now, by doing so, you'll slow down the development of the new application (resource contention) and you'll also create a backlog of features to be ported to the new application, once it is finished.

- All this conspires to produce very long development times. If you're working with intrinsically unstable technologies (like web applications), there is even a significant chance that your new application will be technologically obsolete before it gets to the market!

Let's try to model these (and a few more) issues using a diagram of effects:

You may want to spend a little time looking at the different arrows, and especially at the self-sustaining feedback loops. It's not hard to see that this is a recipe for hard times. Yet companies routinely embark on this, because redoing the core is the most logical, most technologically sound thing to do. Unfortunately, it doesn't always make business sense.
As usual, there are always other choices. As usual, they have their share of problems, and often require better project management practices and skilled software designers. They also happen to be very context-dependent, as you have to find a better balance between your business, your current application, and your new application.
Ok, time and space are up :-), I'll add a few details later.


Wednesday, April 11, 2007 

Design for Outsourcing

Countless pages have been devoted, over the years, to explaining why design is useful. Countless pages have been devoted to explaining how to design for extensibility and reusability. Significantly less on how to design for testability, although the agile camp contributed some, under the Test-Driven Design chapter. Again, countless pages have been spent in debates over the up-front vs. as-you-go design approaches.
What is often missing in most debates is context: quite often, there is some truth on both sides; that is, given the proper context, a given approach might be better suited. It's usually the (faulty) assumption that some approach can always be successfully adopted that makes so many debates futile.

Design should therefore be discussed in context. Context includes the technological issues, the market issues, the organizational issues, the human issues, and so on. For instance, a recurring problem among many companies is:

- they have more work to do than they can possibly handle.

- they are reluctant to hire new developers; even if they do, they believe it will take a significant time to get the new hires up to speed.

- they are reluctant to outsource some development. The usual complaint is that just explaining the problem, following progress, and training external personnel on the business issues is more effort than just doing the damn thing.

We all know how it ends:

- considerable friction between management and (disgruntled) developers.

- delayed or canceled projects, possibly some lost market opportunity.

- little or nothing is learnt from the experience, so next time it's the same game all around.

Now, this is not a technical issue. It's an organizational issue, and as such, it can't be completely solved at the technical level. However, it's a relatively common context, and as designers, we should take this into serious consideration.

In an old post, I discussed how to use quadrants to divide activities, to find some tasks better suited to offshoring (in a particular context, where the offshore team didn't have much domain expertise). The basic idea was that some tasks (technology-oriented, stable requirements) were better suited to offshoring.

Of course, offshoring and outsourcing are quite different matters. However, there are many similarities that might be worth exploring. Indeed, for the sake of brevity, that post didn't mention two important issues:

- technology vs. domain is just a simplified view of tightly vs. loosely coupled.

- tasks are a consequence of design; we can change the design to move some critical mass of tasks into a different quadrant.

Let's review the two concepts:

- In that particular context, what was missing offshore was domain knowledge, not programming knowledge. Transferring domain knowledge would have bogged us down for quite some time. However, this is just a (real-world) example. In other cases, transferring knowledge about a huge database schema would slow you down. Or about a particular middleware you're using. Or about a specific framework (which is why, by the way, I don't like invasive frameworks). And so on. Knowledge must be transferred because it is not isolated (not enough information hiding, not enough separation of concerns, and so on) or because it has not yet been encoded into an executable form (that is, it's just in your head, but not yet in code). The tasks better suited to outsourcing (or to offshoring) are obviously those with the minimum coupling, and therefore the minimum need for knowledge transfer.

- Tasks are a consequence of design. Design is under our control: we decide where the effort must go (extensibility, reusability, or... outsourceability :-). We can change the structure, the approach, even twist some requirements (c'mon: we always do) to increase the number of loosely-coupled components. Of course, this is not going to be free: you'll have to compromise elsewhere (performance, observability, etc.), but as I said, design is contextual; we don't make choices in a void (this, again, is why I don't like frameworks that have made too many choices for me).
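The quadrant classification for outsourcing candidates can be sketched as a toy function (the threshold and inputs are invented; in practice, coupling might come from import/include analysis, and stability from requirements churn in your tracker):

```python
# Rough sketch, not a real metric: loosely-coupled tasks with stable
# requirements need the least knowledge transfer, so they are the
# best outsourcing candidates.
def outsourcing_quadrant(coupling, stable_requirements, threshold=5):
    loose = coupling <= threshold
    if loose and stable_requirements:
        return "good outsourcing candidate"
    if loose:
        return "wait for requirements to settle"
    if stable_requirements:
        return "redesign to reduce coupling first"
    return "keep in-house"

print(outsourcing_quadrant(coupling=3, stable_requirements=True))
```

The interesting lever is the third quadrant: since tasks are a consequence of design, a redesign can move a critical mass of tasks into the first one.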

Note that I'm talking about outsourcing tasks (which most likely, translates into outsourcing components) as opposed to outsourcing applications. There would be a lot to say about this, under the perspective of risk management, but I'll save that for another post. Suffice to say that there are certainly applications that can't be economically outsourced, but which have significant components that can be economically outsourced.

Bottom line: we can gradually break the loop above by designing software in a different way, so as to move more tasks into the outsourcing sector (loosely coupled, stable requirements). This requires, in my opinion, some degree of up-front design. Of course, I'm aware that up-front design has been given a rough time by more than a few agilists. But it's also quite obvious that most of the critics made an implicit assumption: upfront = big, extensibility- and reuse-oriented. Which is quite a narrow view of design.

So, next time you find yourself saying "nobody can help us with this", try a different angle: can we change the structure so that somebody can help us on this? This may be all you need to get out of an otherwise deadly self-fulfilling expectation.

For the paranoids :-) out there: no, I'm not in the outsourcing business :-)). And yes, I've helped quite a few people reshape their design to make outsourcing easier :-).


Tuesday, November 14, 2006 

Slip Charts

Software development is a learning process, yet we are often asked for early estimates. In several cases, those estimates turn out to be wrong, because we didn't have enough information. In pathological companies, estimates are considered promises. In more enlightened (should I say realistic :-) companies, estimates are periodically reframed.
It's very common for early estimates to be overoptimistic, so when we review an estimate it's usually to add more effort. Unless we can shrink requirements, or unless we can find a smarter way to satisfy requirements (which is usually what I try to do), we have to postpone delivery.
When the project has high complexity and high uncertainty, it's not uncommon to slip several times, as more and more knowledge is gained. Of course, at some point we have to understand if we are going somewhere, or if we simply don't know how much it's gonna take. A useful, cheap, and simple way to get a better understanding of slippage is to draw a slip chart. You simply add one point every time you review your estimate. On the x axis, you have the current date; on the y, you have the expected delivery date.
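Drawing one is just as cheap in code as on paper. A minimal sketch with matplotlib (the dates below are invented, one point per re-estimate):

```python
# Minimal slip chart: x = date of each estimate review,
# y = expected delivery date at that review.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
from datetime import date

reviews = [
    (date(2006, 1, 10), date(2006, 4, 1)),
    (date(2006, 2, 5),  date(2006, 5, 1)),
    (date(2006, 3, 12), date(2006, 5, 1)),
    (date(2006, 4, 20), date(2006, 5, 15)),
]

x, y = zip(*reviews)
plt.step(x, y, where="post", marker="o")
plt.xlabel("date of estimate review")
plt.ylabel("expected delivery date")
plt.title("Slip chart")
plt.savefig("slip_chart.png")
```

A flat line means the estimate is holding; a line that keeps climbing in parallel with the x axis means you simply don't know when you'll finish.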

Here is a real slip chart (from a real project)

As you can see, after less than one month the estimate was increased by about one month. Then it remained fixed up to delivery. After a few reviews, confidence in making the deadline increases.

Here is another real slip chart (same product as above, but different project):

Now, that's a troubled project. The team is learning a lot along the road, but not enough to give better estimates. As you can guess, it wasn't really finished when it was declared finished.

Although learning to read a slip chart (and its counterpart, the slip/lead chart) is easy, if you want to squeeze every ounce of information from the chart, I suggest that you read an excellent book from Gerald Weinberg, "Quality Software Management Vol. 2". Well, if you're any serious about software project management, you probably want to read all the 4 volumes anyway :-).

Note: looking at the slip chart can give you precious information, but information without action is useless. Now, action is always context-dependent. For the real project above, my suggested action would have been (I've seen the slip charts too late) at least to exclude the project from the next delivered version of the product, and possibly to schedule a design review to understand what was really going on. Of course, removing a single project from the next version of a product requires some configuration management discipline (which was in place), and in this case it certainly mandates that the development be done outside the baseline (so, sorry, no continuous integration).
But then, when you have high technical uncertainty, continuous integration is not necessarily a good practice (quite the opposite, I would say). Continuous integration is good when you have high uncertainty on functional requirements, because you can (ideally) get early feedback from users. It's also good for low-uncertainty projects, because in this way you don't add additional uncertainty (from late integration). But it's not a silver bullet, and should not be applied blindly "by the book".


Wednesday, July 12, 2006 

Self-fulfilling expectations

A reader reminded me via email that I've been using the expression "self-fulfilling expectation" twice when answering comments from the agile crowd (see the recent post on TDD and an older post on project management) without ever really explaining what I meant.
A self-fulfilling expectation (also, self-fulfilling prophecy) is a belief that, when held [by enough people, in some cases], works to make itself true. A popular example comes from finance: for some unjustified reason, a number of people start to believe that a traded company is in financial trouble; they start to sell stocks, the company value begins to drop, they sell at even lower prices, and so on; in the end, the company may experience real financial troubles. For a variation, see Wikipedia.
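The finance example can even be simulated with a toy model (all coefficients are invented, purely to illustrate the mechanism): the belief alone is enough to push the price down.

```python
# Toy model: believers sell, selling pressure lowers the price, and the
# visible drop converts more holders into believers (logistic-style spread).
def simulate(believers, price=100.0, steps=10):
    for _ in range(steps):
        price *= 1 - 0.002 * believers                   # selling pressure
        spread = int(0.5 * believers * (100 - believers) / 100)
        believers = min(believers + spread, 100)         # panic spreads
    return price

# With no believers nothing happens; with a handful, the price collapses
# even though nothing fundamental about the company has changed.
print(simulate(0), simulate(5))
```

The point of the sketch is the feedback loop, not the numbers: remove the "spread" line and the initial belief stays a harmless opinion.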

What has this to do with software engineering? Well, quite a lot. I'll give an example, using non-code artifacts, say diagrams. Of course, it's just one among many possible examples.

Some people start with an initial belief that diagrams are a useful way to reason about some problems (if they're naive, they may think all problems, but let's skip that). They believe that having an abstract view of the code (without all the details of code) makes some reasoning simpler. They start to use diagrams, and they hone their ability to create useful diagrams. They also keep diagrams alive as the project grows: not using reverse-engineering tools, because that would quickly make the diagram useless (full of details, basically a graphical view of code, not a model). Since they believe that diagrams must be abstract views, they make abstract models, so small changes to the code won't change the diagrams. Since they believe that some reasoning is simpler on diagrams than on code, when a change is big enough to impact the diagrams, they will first think on the diagram, find a good solution by playing with it, then implement the change in code. Anyway, the diagram will be kept alive, so at any time they can switch their reasoning from code to diagram. Diagrams will probably prove to be a good communication device too. Naturally, there will be a cost (sometimes you won't even see an immediate value in keeping the diagrams alive), but in the end the diagrams will prove really useful, and will more than pay back, just as expected.

Some people believe that the only useful artifact (long-term) is code. They may sketch some diagrams in the beginning (or maybe not), but they won't keep them alive too long. As soon as they don't see immediate value on keeping the diagrams alive, they will scrap them. As they don't value diagrams too much, they won't invest much in making useful, long-lasting diagrams; in some cases, they won't even spend time learning something like UML in depth, learning maybe just the basic syntax. Therefore, they won't even be able to model much using diagrams (which immediately makes them useless, already fulfilling the expectation). Given the cursory knowledge of the notation, it will be hard for a group of such people to use diagrams as an effective communication tool (again, making it pretty useless, and fulfilling the expectation). If diagrams are not entirely disposed, they will soon get totally out of synch with code (since even large changes are done directly on code, not thought on diagrams). Therefore, they will give less and less value to the project, finally becoming useless, just as expected.

Note that you can't just say stuff like "use diagrams until they add value", because that implies that as soon as you don't see a short-term value you'll scrap them. In two weeks (or 2 months) they could prove exceptionally useful, if you keep them alive (prophecy 1). Of course, they won't be if you don't (prophecy 2).

The fact is, what we believe tends to come true. We come to points where an option is logical only within a belief system, and acting according to our beliefs makes those beliefs true. More on beliefs and values in Software Engineering another time :-)

This is one more reason why I don't like fanaticism of any kind. Fanatics tend to close themselves in self-fulfilling, self-feeding belief systems, and lose the ability to see beyond them. My simple recipe? Keep your beliefs flexible - it's science and technology, not religion. Understand that there is no single perfect approach to any problem, although we can try to find a specific almost-perfect approach to any specific problem. Finally, if you really have to believe in something, believe in what you would like to come true :-).


Tuesday, May 02, 2006 

Knowing your real capacity

There is a magic step in most companies, when estimated effort is transformed into a schedule, and so in a delivery date.
In some companies the step is trivial - management is mandating the delivery date without even looking at the estimate. That's obviously a recipe for failure.
In most cases, however, the effort is somewhat allocated to multiple resources, a few external factors like holidays :-) are accounted for, and you get a schedule. I mean, an unrealistic schedule.
Indeed, most schedules are unrealistic even when the estimate is realistic. The reason is that most schedules assume that resources will be allocated 100% on the project, although it's pretty obvious to every reasonable soul that they won't be.
Resources are routinely distracted for several reasons. In many cases, those factors are collectively called "emergencies" or "crisis", but they are just the natural consequence of bad planning. Here is a (highly simplified) effect diagram (a la Weinberg) that I've drawn so many times for so many companies :-).

It all starts with some problem on the field: for instance, a customer experiencing some troubles. Since the schedule for the current project assumes 100% availability, resources are distracted. Note that having someone investigate the issue is a (sensible) management choice; it's the fact that no one is available (because the schedule assumes 100% availability) that causes resource distraction. Distraction leads to delay, and therefore to pressure to complete on-time; this is a natural consequence. In many cases, when the deadline approaches quality is compromised; this is a management choice (although in more than a few cases, the decision is taken at the development level). Here is the catch: if you compromise quality, you'll have problems on the field, creating a self-sustaining loop.
Now, we can try to break the loop with different techniques. We may artificially pad estimates; we may try to convey the message that "quality shouldn't be compromised", and so on. We can also face reality and understand that people won't be available 100% of their time. Ideally, we should measure their average availability: it's easier than it seems, if you don't get carried away. We should also create a plan that is both realistic (will deliver with good quality even if resources are distracted as usual) and aggressive (will make good use of resources if they are not distracted). The A/B/C metric from Todd Little would help here.
This will gradually reduce the amount of resource distraction (as you'll get fewer and fewer problems from the field once you don't compromise on quality), and therefore you'll be able to assume a higher availability of resources. Again, don't overshoot, as working on the edge is always dangerous.
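The gap between a 100%-availability schedule and one based on measured availability is easy to quantify, even on the back of an envelope (the effort and availability figures below are invented):

```python
def calendar_weeks(effort_weeks, people, availability):
    """Ideal effort divided by the capacity people actually deliver."""
    return effort_weeks / (people * availability)

effort = 40.0  # estimated person-weeks

naive = calendar_weeks(effort, people=4, availability=1.0)      # assumes no distraction
realistic = calendar_weeks(effort, people=4, availability=0.7)  # measured average

print(f"naive: {naive:.1f} weeks, realistic: {realistic:.1f} weeks")
```

Even a measured availability of 70% - hardly pathological - stretches the calendar by more than 40%, which is exactly the slippage that then gets blamed on "emergencies".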


Wednesday, March 15, 2006 

Organizational Structure

Tomorrow and the day after I'll be teaching my Software Project Management course. From previous discussions, I could foresee that there would be some interest in organizational structure (functional teams, project teams, matrix structures, and the like), so I decided to add some custom material. However, the traditional literature on organizational strategy (I've got more than 20 books on the subject) is very theoretical, and definitely not grounded in software, which is a hallmark of my course. Therefore, I took a rather original road, organized around two major tenets.
1) The organizational structure must support the company strategy. Any misalignment will be a cause of friction and inefficiency. An effective, barebones classification of company strategy can be found in Michael Treacy, Fred Wiersema, "The Discipline of Market Leaders". A couple of papers from Stan Rifkin ("Why Software Process Innovations Are Not Adopted", IEEE Software, July/August 2001, and "What Makes Measuring Software So Hard?", IEEE Software, May/June 2001) give some hints on the relevance of this work for software development.
2) The organizational structure must be aligned with the product architecture. This is a consequence of Conway's Law: "Any organization which designs a system will inevitably produce a design whose structure is a copy of the organization's communication structure" (Melvin E. Conway, "How Do Committees Invent?", Datamation, April 1968). Conway's paper is one of those classics often quoted and never read, partially because they used to be hard to find. Fortunately, here is an online version. Luke Hohmann ("Journey of the Software Professional", Prentice-Hall) also had an interesting intuition: the best organizational structure for a product is not necessarily fixed; it may change over time (this is probably an agile management frontier right now :-). Finally, some useful information on how the traditional organizational structures (functional, matrix, etc.) perform on software projects can be found in Jack T. Marchewka, "Information Technology Project Management", John Wiley & Sons.
Quite a mouthful, but being a good manager takes much more than technical skills...
For another interesting point of view on organizational structure, see my post An organization consists of conversations.


Monday, February 13, 2006 

Software Project Management Course

I've finally put the syllabus of my Software Project Management course online. As the name suggests :-), the course is dedicated to the project management of software projects, which are not projects like any other.
Besides the syllabus, you'll find some notes about the course in a post of mine from October, and some notes (and further references) on related topics in a post from November.


Wednesday, November 02, 2005 

Stop thinking "feature", think "value"

So, your project is going to be late and over budget. Does that make you an incompetent manager? An incompetent architect? An incompetent developer? Well, it depends.
No, it does not depend on whether your original plan was "right", but then requirements changed and new features crept in. That always happens. The critical question is: if we ship now, do we have enough value in our product?
Optimizing delivered value is not easy. It requires fundamental skills at every level:
- Manager: must be able to define the value of all features. Business value, marketing value, user value. Pick one (hopefully the most important) but be systematic throughout a project. Must be able to see the small value-delivering product inside a list of features. This ability is sorely lacking in many projects I see. That makes management's life easier, but this is not what I would call good management.
- Designer: must create a modular architecture, where features with low value can be plugged in later. Must avoid overambitious infrastructures which will postpone delivery of value. Or, must be able to provide real numbers on the risk and ROI of those infrastructures. Must design the system so that high-value features can be implemented sooner than low-value features.
- Developer: should tactically optimize for value. Should advise the designers and the managers on value-optimizing opportunities they may not be able to see. Should write working code, test it, and then move on to the next value-creating feature. Should avoid half-working code like the plague. Half-working features have no value.
At all levels, we must understand that we are embarking on a knowledge discovery journey. Customers will change their minds as they gain new knowledge. So will managers. So will designers. So will developers. We must be able to design some flexibility into our product, and change (even dramatically change) our plans throughout the journey. I still value the time spent planning and designing. But plans are a flexible working tool, not a promise carved in stone.
So, your project is going to be late and over budget. But if we ship today, will we deliver enough value? If so, you did a good job, no matter what. Features can wait. Value can't.
See also the concepts of Incremental Funding and Minimum Marketable Features, that I've already mentioned elsewhere.


Monday, October 31, 2005 

Teaching SOFTWARE Project Management

Over the years, a number of customers asked me to teach project management techniques to their team leaders and project/product managers. The reason is quite simple: traditional project management techniques don't work so well for software projects. Sure, you are better off if you know how to use PERT and GANTT charts, and you may still benefit from some traditional lecturing on risk management, but software is different. The best reason for software to be different comes from Armour: software development is a knowledge acquisition process, not a product manufacturing process. That's a huge difference.
Despite the large number of requests, I've never committed myself to creating a set of PM slides. For more than a few years, I've been firm in saying that I knew enough to run a successful project, and even enough to advise on how to run a specific project, but not enough to teach how to do it in general (which is what I would expect from a PM course).
In the last year, I've spent more and more time thinking about what I could actually teach - valuable, modern, software-specific techniques that I've tried in the real world and that I can trust to work. It turned out that I knew more than I thought, but also that I couldn't teach those techniques without first teaching some (even more fundamental) conceptual tools, like Armour's Orders of Ignorance, Project Portfolios, Option Thinking, and so on.
In these days I'm polishing the slides I've created, and trying to create a natural bridge between those slides and some of my material on Requirements Analysis. This is probably a good chance to review that material as well, along the lines I've envisioned a few months ago. So, very soon I'll have a new, short, hopefully fun course on PM appearing on my course catalogue.


Sunday, October 09, 2005 

Simple ideas on measuring the business value of a feature

In an earlier comment, I mentioned the need to make a clear case for the business value of features and design choices. Since features, and in general user-level choices, are easier to discuss with management than lower-level design choices, they could make for good training :-) in thinking at the edge between software and business.
Here is an easy to read article offering some simple, yet practical suggestions: Identifying the Business Value of What We Do. On the same website, you'll also find some nice papers on usability, and a few more dealing with business issues and software design.


Saturday, July 16, 2005 

"An organization consists of conversations"

The relationship between software engineers and business (management, marketing, etc) has never been idyllic - for many reasons I won't touch yet. Still, at some point, many talented software engineers are asked to outgrow their role and move into management, or in other business roles.
This is never an easy step. It's not just about new concepts and skills to master - software engineers are usually quick to grasp new ideas; it's about learning new values, being able to shift into a different mindset, and then back when needed. It takes time (and some effort), and most people don't take that time. They assume they will succeed solely on the basis of technical excellence. Good luck :-).
Meanwhile, I see a minority of excellent software engineers growing a sincere interest in (and appreciation for) management and organizational issues. They may appreciate this pamphlet: Notes on the Role of Leadership and Language in Regenerating Organizations. I stumbled upon it about a year ago, and found it extremely interesting. Here is my favorite paragraph:
an organization consists of conversations:
who talks to whom, about what.
Each conversation
is recognized, selected, and amplified
(or ignored) by the system.
Decisions, actions, and a sense of valid purpose
grow out of these conversations.

Now, what is the conversational nature of your company? :-)


Friday, July 01, 2005 

Build in-house, or buy a customization?

Some questions come up over and over, but they don't get any easier to answer.
Make or buy is a classic, and in practice is often answered in unclear, emotional terms.
One of my customers is facing a tough choice. They could create a product in-house, reusing a few components from other projects and using a few commercial components as well. Alternatively, they could ask a third party to customize an existing, similar product, which would significantly shorten development time.
We have full data for the in-house effort:
  • requirements are reasonably clear
  • we have an overall architectural model
  • we have two alternative schedules for implementation, both based on the concepts of Incremental Funding and Minimum Marketable Features (see "Software by Numbers", by Mark Denne and Jane Cleland-Huang), so the product should start paying for itself when it is roughly 50% "complete".
  • we have a reasonable estimate for development cost and time, and to some extent for maintenance costs over the next few years.
  • we know the in-house process, the kind of documentation we will have, the defect density we can expect from the development team, etc.
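The self-funding point at the heart of Incremental Funding is simple to compute. A minimal sketch, with per-period cash flows invented purely for illustration:

```python
def self_funding_period(cash_flows):
    """First period (1-based) where cumulative cash flow reaches zero,
    or None if the project never pays for itself within the horizon."""
    total = 0.0
    for period, flow in enumerate(cash_flows, start=1):
        total += flow
        if total >= 0:
            return period
    return None

# Up-front development cost, then revenue as MMFs start shipping.
flows = [-50, -50, -30, 20, 40, 60, 60, 60]
print(self_funding_period(flows))
```

Reordering the MMFs changes the cash-flow sequence, which is exactly why "Software by Numbers" treats feature sequencing as a financial decision.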

The third party is understandably eager to provide an extremely favorable estimate (on cost, and even more so on time) so as to get the contract signed.
The real question is: how do we make an informed choice? Management shouldn't (and with any luck, won't :-) choose based only on time and money. Will product quality be comparable with that of in-house products (or better)? What is the third party's strategy for customization, e.g. do they have a modular, plug-in based architecture, or will they simply create a branch of their main product? How do they feel about a joint design and code review?
Over time, I've developed a comprehensive approach to dealing with this kind of issue. In fact, while I appreciate the need to trust your gut when making this kind of decision, I find that answering a detailed set of questions provides two benefits:
1) you get a clear set of values to consider, where "value" means both something that is valuable to know, and a precise answer to ponder.
2) since in most cases the answers must come through extensive talking with the third party, you get to know them much better, well beneath the sugarcoating of glossy brochures and nice presales people. This will help your gut too :-).


Wednesday, June 29, 2005 

Traveling - Project Management Tools

I've spent a good part of the day traveling, and that means reading and... thinking. I re-read, after a few years, two chapters of a nice book by Gerald Weinberg, one of my favorite authors. He has a unique perspective on software project management, and looking at his concept of the Standard Task Unit got me thinking about using Activity Diagrams (and colors) as a project management tool. It would take some scripting to generate a Gantt chart, but besides that, the STU maps perfectly onto activities, including the entry/exit criteria.
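To make the mapping concrete, here is a rough sketch of task units carrying entry/exit criteria and dependencies, ordered as an activity sequence. The field names and the tiny scheduler are my own invention for illustration, not Weinberg's formulation:

```python
# Hypothetical sketch: task units with entry/exit criteria, ordered so that
# each task appears after its dependencies (a naive Kahn-style sort).
from dataclasses import dataclass, field

@dataclass
class TaskUnit:
    name: str
    entry_criteria: list      # conditions that must hold before work starts
    exit_criteria: list       # conditions that prove the task is done
    depends_on: list = field(default_factory=list)

def activity_order(tasks):
    """Return task names in an order compatible with their dependencies."""
    done, order = set(), []
    while len(order) < len(tasks):
        progressed = False
        for t in tasks:
            if t.name not in done and all(d in done for d in t.depends_on):
                done.add(t.name)
                order.append(t.name)
                progressed = True
        if not progressed:
            raise ValueError("dependency cycle among tasks")
    return order

tasks = [
    TaskUnit("design review", ["design doc drafted"], ["review signed off"]),
    TaskUnit("coding", ["design review passed"], ["unit tests green"],
             depends_on=["design review"]),
]
print(activity_order(tasks))
```

From a structure like this, generating activity nodes (and, with more work, Gantt bars) is mostly a matter of rendering.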
More on PM tools, including Slip/Lead charts, project history, real support for [probabilistic] estimates, and maybe even a little Monte Carlo simulation... later :-).
Well, it's late; no time to talk about Edward de Bono or, on a completely unrelated topic, about the Big Question of the day...


Monday, June 27, 2005 

Estimation models

Today I've been playing again with custom estimation models for software development. Custom models are derived from the customer's historical data, and can accurately reflect the peculiarities of the customer's environment.
The model I'm working with is tailored to estimate debugging time given the coding time and 5 boolean properties of the project (e.g. new development vs. maintenance). The properties were selected for their statistical significance for a specific customer.
Reflecting my approach to estimates and risk management (I'll write more on that), the model provides a "most likely" figure, but also a probability distribution, so that you can ask questions like "what is the maximum debug time, with 90% confidence?".
Of course, given an early estimate of coding time, we can apply the model and get back a rough estimate of debugging time, still quite useful as a sanity check. We can also play what-if scenarios with the boolean properties, see the impact (for those under our control), and steer the project accordingly.
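The customer's actual model isn't mine to show, but the general mechanics of answering "maximum debug time with 90% confidence" can be sketched with a generic Monte Carlo approach. Here I sample a lognormal multiplier on coding time; the distribution choice and every parameter are assumptions made purely for illustration:

```python
# Generic sketch (not the actual customer model): turn a coding-time estimate
# into a debug-time distribution via a sampled lognormal multiplier, then read
# off a central figure and a 90%-confidence upper bound.
import random
import statistics

def debug_time_samples(coding_days, mu=-0.2, sigma=0.5, n=20000, seed=42):
    """Sample debug-time scenarios; mu/sigma are invented for illustration."""
    rng = random.Random(seed)
    return [coding_days * rng.lognormvariate(mu, sigma) for _ in range(n)]

samples = sorted(debug_time_samples(coding_days=30))
median = statistics.median(samples)       # a "most likely"-style central figure
p90 = samples[int(0.9 * len(samples))]    # max debug time at 90% confidence

print(f"median: {median:.1f} days, 90th percentile: {p90:.1f} days")
```

The useful property is exactly the one mentioned above: instead of a single number, you get a whole distribution to interrogate, and the gap between the median and the 90th percentile is itself a risk signal.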
