Sunday, February 22, 2009 

Notes on Software Design, Chapter 4: Gravity and Architecture

In my previous posts, I described gravity and inertia. At first, gravity may seem to have a negative connotation, like a force we constantly have to fight. In a sense, that's true; in a sense, it's also true for its physical counterpart: every day, we spend a lot of energy fighting Earth's gravity. However, without gravity, life as we know it would never exist. There is always a bright side :-).

In the software realm, gravity can be exploited by setting up a favorable force field. Remember that gravity is a rather dumb :-) force, merely attracting things. Therefore, if we come up with the right gravitational centers early on, they will keep attracting the right things. This is the role of architecture: to provide an initial, balanced set of centers.

Consider the little thorny problem I described back in October. Introducing Stage 1, I said: "the critical choice [...] was to choose where to put the display logic: in the existing process, in a new process connected via IPC, in a new process connected to a [RT] database".
We can now review that decision within the framework of gravitational centers.

Adding the display logic into the existing process is the path of least resistance: we have only one process, and gravity is pulling new code into that process. Where is the downside? A bloated process, sure, but also the practical impossibility of sharing the display logic with other processes.
Reuse requires separation. This, however, is just the tip of the iceberg: reuse is just an instance of a much more general force, which I'll cover in the forthcoming posts.

Moving the display logic inside a separate component is a necessary step toward [independent] reusability, and also toward the rarely understood concept of a scaled-down architecture.
A frequently quoted paper by David Parnas (one of the most gifted software designers of all time) is aptly titled "Designing Software for Ease of Extension and Contraction" (IEEE Transactions on Software Engineering, Vol. 5 No. 2, March 1979). Somehow, people often forget the contraction part.
Indeed, I've often seen systems where the only way to provide a scaled-down version to customers is to hide the portion of the user interface that exposes the "optional" functionality, often with questionable aesthetics, and always with more trouble than one could possibly want.

Note how, once we have a separate module for display, new display models are naturally attracted into that module, leaving the acquisition system alone. This is gravity working for us, not against us, because we have provided the right center. That's also the bright side of the thorny problem, exactly because (at that point, that is, stage 2) we [still] have the right centers.
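To make this concrete, here is a minimal C++ sketch (class names are mine, not from the actual system) of the kind of center I have in mind: once display sits behind its own small interface, new display logic has an obvious place to go, and the acquisition code is left alone.

```cpp
#include <memory>
#include <vector>

struct Sample { double value; long long timestamp; };

// The gravitational center for display code: one small, stable interface.
class DisplayView {
public:
    virtual ~DisplayView() = default;
    virtual void render(const Sample& s) = 0;
};

// New display logic is attracted here, into the display module...
class TrendView : public DisplayView {
public:
    void render(const Sample& s) override { (void)s; /* draw a trend chart */ }
};

class GaugeView : public DisplayView {
public:
    void render(const Sample& s) override { (void)s; /* draw a gauge */ }
};

// ...while the acquisition side stays untouched as new views are added.
class AcquisitionLoop {
public:
    void attach(std::shared_ptr<DisplayView> view) { views.push_back(std::move(view)); }
    void onSample(const Sample& s) { for (auto& v : views) v->render(s); }
private:
    std::vector<std::shared_ptr<DisplayView>> views;
};
```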

Is the choice of using an RTDB to further decouple the data acquisition system and the display system any better than having just two layers?
I encourage you to think about it: it is not necessarily trivial to understand what is going on at the force field level. Sure, the RTDB becomes a new gravitational center, but is a 3-pole system any better in this case? Why? I'll get back to this in my next post.

Architecture and Gravity
Within the right architecture, features are naturally attracted to the "best" gravitational center.
The "right" architecture, therefore, must provide the right gravitational centers, so that features are naturally attracted to the right place, where (if necessary) they will be kept apart from other features at a finer granularity level, through careful design and/or careful refactoring.
Therefore, the right architecture is not just helping us cope with gravity: it's helping us exploit gravity to our own advantage.

The wrong architecture, however, will often conspire with gravity to preserve itself.
As part of my consulting activity, I've seen several systems where the initial partitioning of responsibility wasn't right. The development team didn't have enough experience (with software design and/or with the problem domain) to identify the core concepts, the core issues, the core centers.
The system was partitioned along the wrong lines, and as mass increased, gravity kicked in. The system grew with the wrong form, which was not in frictionless contact with the context.
At some point, people considered refactoring, but it was too costly, because mass brings Inertia, and inertia affects any attempt to change direction. Inertia keeps a bad system in a bad state. In a properly partitioned system, instead, we have many options for change: small subsystems won’t put up much of a fight. That’s the dream behind the SOA concept.
I already said this, but it's worth repeating: gravity works at all granularity levels, from distributed computing down to the smallest function. That's why we have to keep both design and code constantly clean. Architecture alone is not enough. Good programmers are always essential for quality development.

What about patterns? Patterns can lower the amount of energy we have to spend to create the right architecture. Of course, they can do so because someone else spent some energy re-discovering good ideas, cleaning them up, going through shepherding and publishing, and because we spent some time learning about them. That said, patterns often provide an initial set of centers, balancing out some forces (not restricted to gravity).
Of course, we can't just throw patterns at a problem: the form must be in effortless contact with the real problem we're facing. I've seen too many well-intentioned (and not so experienced :-) software designers start with patterns. But we have to understand forces first, and adopt the right patterns later.

Enough with mass and gravity. Next time, we're gonna talk about another primordial force, pushing things apart.

See you soon, I hope!


Saturday, December 06, 2008 

Notes on Software Design, Chapter 2: Mass and Gravity

Mass is a simple concept, which is better understood by comparison. For instance, a long function has bigger mass than a short one. A class with several methods and fields has bigger mass than a class with just a few methods and fields. A database with a large number of tables has bigger mass than a database with a few. A database table with many fields has bigger mass than a table with just a few. And so on.

Mass, as discussed above, is a static concept. We don't look at the number of records in a database, or at the number of instances for a class. Those numbers are not irrelevant, of course, but they do not contribute to mass as discussed here.

Although we can probably come up with a precise definition of mass, I'll not try to. I'm fine with informal concepts, at least at this time.

Mass exerts gravitational attraction, which is probably the most primitive force we (as software designers) have to deal with. Gravitational attraction makes large functions or classes attract more LOCs, large components attract more classes and functions, monolithic programs keep growing as monoliths, and 1-tier or 2-tier applications fight back as we try to add one more tier. Along the same lines, a single large database will get more tables; a table with many fields will attract more fields, and so on.

We achieve low mass, and therefore smaller and balanced gravity, through careful partitioning. Partitioning is an essential step in software design, yet separation always entails a cost. It should not surprise you that the cost of [fighting] gravity has the same fractal nature as separation itself.

A first source of cost is performance loss:
- Hardware separation requires serialization/marshaling, network transfer, synchronization, and so on.
- Process separation requires serialization/marshaling, synchronization, context switching, and so on.
- In-process component separation requires indirect function calls or load-time fix-up, and may require some degree of marshaling (depending on the component technology you choose)
- Interface – Implementation separation requires (among other things) data to be hidden (hence more function calls), prevents function inlining (or makes it more difficult), and so on (a small sketch follows this list).
- In-component access protection prevents, in many cases, exploitation of the global application state. This is a complex concept that I need to defer to another time.
- Function separation requires passing parameters, jumping to a different instruction, jumping back.
- Mass storage separation prevents relational algebra and query optimization.
- Different tables require a join, which can be quite costly (here the number of records resurfaces!).
- (the overhead of in-memory separation is basically subsumed by function separation).
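
As a tiny illustration of the interface/implementation item above (a deliberately trivial sketch of mine, not a benchmark): hiding the data behind an abstract interface turns an access the compiler would happily inline into an indirect virtual call.

```cpp
// With separation: the representation is hidden, every read is an indirect call.
class Counter {
public:
    virtual ~Counter() = default;
    virtual int value() const = 0;        // virtual call, hard to inline
};

class CounterImpl : public Counter {
public:
    int value() const override { return v; }
private:
    int v = 0;
};

// Without separation: the accessor is trivially inlined to a single load.
struct BareCounter {
    int v = 0;
    int value() const { return v; }
};
```

Nothing dramatic per call, of course; the point is simply that the cost is structural, paid on every access.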

A second source of cost is scaffolding and plumbing:
- Hardware separation requires network services, more robust error handling, protocol design and implementation, bandwidth estimation and control, more sophisticated debugging tools, and so on.
- Process separation requires most of the same.
- And so on (useful exercise!)

A third source of cost is human understanding:
Unfortunately, many people don’t have the ability to reason at different abstraction levels, yet this is exactly what we need to work effectively with a distributed, component-based, multi-database, fine-grained architecture with polymorphic behavior. The average programmer will find a monolithic architecture built around a single (albeit large) database, with a few large classes, much easier to deal with. This is only partially related to education, experience, and tools.

The ugly side of gravity is that it’s a natural, incremental, attractive, self-sustaining force.
It starts with a single line of code. The next line is attracted to the same function, and so on. It takes some work to create yet another function; yet another class; yet another component (here technology can help or hurt a lot); yet another process.
Without conscious appreciation of other forces, gravity makes sure that the minimum resistance path is followed, and that’s always to keep things together. This is why so much software is just a big ball of mud.

Enough for today. Still, there is more to say about mass, gravity and inertia, and a lot more about other (balancing) forces, so see you guys soon...

Breadcrumb trail: the instance/record count cannot be ignored at design time either; the underlying forces there deserve a discussion of their own.


Saturday, September 13, 2008 

Lost

I’ve been facing some small, tough design problems lately: relatively simple cases where finding a good solution is surprisingly hard. As usual, it’s trivial to come up with something that “works”; it’s also quite simple to come up with a reasonably good solution. It’s hard to come up with a great solution, where all forces are properly balanced and something beautiful takes shape.

I like to think visually, and since standard notations weren’t particularly helpful, I tried to represent the problem using a richer, non-standard notation, somehow resembling Christopher Alexander’s sketches. I wish I could say it made a huge difference, but it didn’t. Still, it was quite helpful in highlighting some forces in the problem domain, like an unbalanced multiplicity between three main concepts, and a precious-yet-fragile information hiding barrier. The same forces are not so visible in (e.g.) a standard UML class diagram.

Alexander, even in his early works, strongly emphasized the role of sketches while documenting a pattern. Sketches should convey the problem, the process to generate or build a solution, and the solution itself. Software patterns are usually represented using a class diagram and/or a sequence diagram, which can’t really convey all that information at once.

Of course, I'm not the first to spend some time pondering the issue of [generative] diagrams. Most notably, in the late '90s Jim Coplien wrote four visionary articles dealing with sketches, the geometrical properties of code, alternative notations for object diagrams, and some (truly) imponderable questions. Those papers appeared in the long-dead C++ Report, but they are now available online:

Space-The final frontier (March 1998)
Worth a thousand words (May 1998)
To Iterate is Human, To Recurse, Divine (July/August 1998)
The Geometry of C++ Objects (October 1998)

Now, good ol’ Cope has always been one of my favorite authors. I’ve learnt a lot from him, and I’m still reading most of his works. Yet, ten years ago, when I read that stuff, I couldn’t help thinking that he lost it. He was on a very difficult quest, trying to define what software is really about, what beauty in software is really about, trying to adapt theories firmly grounded in physical space to something that is not even physical. Zen and the Art of Motorcycle Maintenance all around, some madness included :-).

I re-read those papers recently. That weird feeling is still here. Lights and shadows, nice concepts and half-baked ideas, lot of code-centric reasoning, overall confusion, not a single strong point. Yeah, I still think he lost it, somehow :-), and as far as I know, the quest ended there.
Still, his questions, some of his intuitions, and even some of his most outrageous :-) ideas were too good to go to waste.

The idea of center, that he got from The Nature of Order (Alexander’s latest work) is particularly interesting. Here is a quote from Alexander:
Centers are those particular identified sets, or systems, which appear within the larger whole as distinct and noticeable parts. They appear because they have noticeable distinctness, which makes them separate out from their surroundings and makes them cohere, and it is from the arrangements of these coherent parts that other coherent parts appear.

Can we translate this concept into the software domain? Or, as Jim said, "What kind of x is there that makes it true to say that every successful program is an x of x's?" I'll let you read what Jim had to say about it. And then (am I losing it too? :-) I'll tell you what I think that x is.

Note: guys, I know some of you already think I lost it :-), and would rather read something about (e.g.) using variadic templates in C++ (which are quite cool, actually :-) to implement SCOOP-like concurrency in a snap. Bear with me. There is more to software design than programming languages and new technologies. Sometimes, we gotta stretch our mind a little.

Anyway, once I get past the x of x thing, I’d like to talk about one of those wicked design problems. A bit simplified, down to the essential. After all, as Alexander says in the preface of “Notes on the Synthesis of Form”: I think it’s absurd to separate the study of designing from the practice of design. Practice, practice, practice. Reminds me of another book I read recently, an unconventional translation of the Analects of Confucius. But I’ll save that for another time :-).


Thursday, August 07, 2008 

Do we need a Theory of Wrapping?

I think it was about 10 years ago. I was teaching OOD principles to a .COM company, and a senior developer said we should really develop a theory of wrapping. We talked a little about it, but then we moved to explore other concepts. I guess I totally forgot the idea till a few weeks ago, when 3 different customers came up with thorny issues, and they all had something in common: some kind of wrapper.

Wrappers are routinely created during software development, yet not all forms of wrapping are benign. Some may look convenient in the short term, but will come back to haunt us in the long term.
Would a theory of wrapping help? Indeed, we already have some organized knowledge on wrapping: several patterns are built upon the idea of wrapping an object or a subsystem. Looking at the GoF patterns alone, Adapter is a wrapper, Facade is a wrapper, Proxy is a wrapper. Decorator, although somewhat similar to Proxy in implementation, isn't really a wrapper, as it adds new responsibilities.
Looking at wrapping through patterns, however, doesn't provide much guidance about the long-term consequences of wrappers. So, let's move to anecdotal knowledge for a while.
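
Before that, a toy C++ example of mine, just to fix the terminology (none of this comes from the cases below): Proxy wraps and forwards within the same responsibility, while Decorator, despite the similar shape, adds a responsibility of its own.

```cpp
#include <iostream>
#include <memory>

struct Report {
    virtual ~Report() = default;
    virtual void print() = 0;
};

struct PlainReport : Report {
    void print() override { std::cout << "report body\n"; }
};

// Proxy: a wrapper in the strict sense; same responsibility, it merely
// controls access to (here: lazily creates) the real object.
class LazyReportProxy : public Report {
public:
    void print() override {
        if (!real) real = std::make_unique<PlainReport>();
        real->print();
    }
private:
    std::unique_ptr<Report> real;
};

// Decorator: structurally similar, but it adds a responsibility (a header).
class HeaderedReport : public Report {
public:
    explicit HeaderedReport(std::unique_ptr<Report> inner) : inner(std::move(inner)) {}
    void print() override {
        std::cout << "=== header ===\n";  // the added responsibility
        inner->print();
    }
private:
    std::unique_ptr<Report> inner;
};
```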

My first troubled customer is trying to solve an architectural problem by wrapping a few functions, and performing some magic inside those functions. The magic involves threads, fibers, apartments, and stuff like that. Like any true form of magic, it must not be revealed to the general public :-)), so I won't.
Magic isn't well known for reliability, but there is a chance that the magic might indeed work, which in a sense is even worse. We know the real problem is elsewhere: it has been known for more than 10 years, but the troubled area has never been fixed, just hidden under several layers. The real fix would entail changing 100 to 150 classes. Personally, I would work toward making that change as painless as possible, and then do it. They would rather go the wrapping way.

My second, less troubled customer has a much simpler problem: a legacy database, which in turn is mostly a mirror of the in-memory data structures of a legacy application (still maintained and developed). We need to provide a web front end, and the general agreement is to outsource that development. We basically have three choices:
1) ask the contractor to write the web app against the existing, ugly db. That requires some documentation effort on the company side, as the db can't be used without detailed knowledge of its fields (and of the bit-fields packed inside some of them).
2) clean the database (in-house), clean the legacy app (in-house), let the contractor write the web app against the clean db. Sounds nice, but requires a lot of in-house work, and even worse, it would delay the contractor. Cleaning the legacy app seems daunting, and sooner or later we want to scrap that code anyway.
3) keep the ugly db, but provide a wrapper, in the form of a facade object model. Ask the contractor to write the web app against the facade. Delay cleaning the db for a while (possibly forever) and hope that a quickly developed facade will withstand the test (or ravages) of time. Because yeah, well, we ain't got much time, or we would go with (2). By the way, we could write the facade in-house, or write the same kind of documents as in (1) and ask the contractor to write the facade.

I would love to recommend (2), but being reality-based, I recommended (3), with the facade written by the contractor. So (dare I say it!!) I actually recommended that we spend our valuable in-house time writing documentation. How non-agile is that :-)). But the thing is, there is only one guy who knows the db, and that's bad risk management. Also, if at some point we want to redesign and clean the db, documentation of what is currently hidden inside bit-fields stored as integers would be quite valuable. Oh, by the way: we did a little experiment to estimate how long it's gonna take to write that doc: about 10 man-days, easily broken into short tasks, which can partially overlap the calendar time consumed by the contractor to build the facade object model. Not so bad after all, except we still keep a crappy db and application.
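
For the record, this is roughly the kind of knowledge hidden inside those bit-fields, and the kind of thing the facade object model would expose instead; the field names and bit layout below are invented for illustration:

```cpp
#include <cstdint>

// What the legacy table actually stores: integers packing unrelated facts.
struct LegacyCustomerRow {
    std::uint32_t flags = 0;   // bit 0: active, bits 1-3: category, bits 4-5: ...
    // ... more cryptic fields
};

// What the contractor's web app would see through the facade.
class Customer {
public:
    explicit Customer(const LegacyCustomerRow& row) : row(row) {}
    bool isActive() const { return (row.flags & 0x1u) != 0; }
    int  category() const { return static_cast<int>((row.flags >> 1) & 0x7u); }
private:
    LegacyCustomerRow row;
};
```

Every such decoding rule is exactly the knowledge that currently lives in one person's head, which is why writing it down (as documentation, or as facade code) is worth those 10 man-days.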

My third customer doesn't really feel troubled :-). Well, in fact, he's not really troubled: I am troubled :-). Years ago, a small team ventured into building an application-specific framework. Although I often preached about creating a set of small, loosely coupled mini-frameworks, people on that team weren't so interested in listening. So they went on and created a large, tightly coupled framework (which is much easier to build, and much harder to learn, change, etc).
When the company decided to build the first (web) application based on that framework, the application developers found out (guess what :-) that the ambitious design of the large framework was quite hard to understand, to the point that their main goal was surprisingly hard to reach. I proposed that the framework developers could create a simple facade, closer to our problem and therefore easier to use. They did, the application was completed, and is still in use today. So far so good :-).
A few years went by, and more applications have been written against that framework. The original framework developers moved to application development (or left the company), actually leading application development. Recently, I discovered that all the subsequent applications have been written against "my" facade, which was never designed to be general-purpose. However, it was so much simpler to use than the framework that people opted to use it anyway. They tweaked the facade when necessary, so we now have multiple versions of the facade around.
Again, the "right" thing to do would have been to make the framework easy to use in the first place. The facade was my backup plan because I never managed to make a dent in the framework developers' minds. Sure, it was instrumental in writing the applications, but somehow, it backfired: it became a superstructure of its own, living well beyond my expectations. Unfortunately, it was never designed to be a general-purpose component.

Of course, not every form of wrapping is evil. For instance, I often add a small facade between my code and any third party components and libraries. That shields me from many instabilities in interfaces, and in many cases allows me to change supplier if needed. That's harder with GUI components making heavy use of design-time properties, but there are techniques to deal with that as well (if warranted). Again, some wisdom is required: it makes little sense to wrap a pervasive library or framework; it makes little sense to invest in wrapping when you're developing a short-lived corporate application, and so on. Still, as someone said (exact quote needed :-), every programming problem can be solved by adding one level of indirection, and in some cases a wrapper provides a useful level of indirection.
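
Here is a bare-bones sketch of that kind of shield; the vendor API is entirely hypothetical, hence only hinted at in comments. The application depends on my own small interface, and a single adapter class knows which supplier sits behind it:

```cpp
#include <string>

// My own minimal logging interface: the application depends only on this.
class Logger {
public:
    virtual ~Logger() = default;
    virtual void info(const std::string& msg) = 0;
    virtual void error(const std::string& msg) = 0;
};

// The only translation unit that would include the vendor's header.
// #include <acme/logging.h>   // hypothetical third-party library
class AcmeLogger : public Logger {
public:
    void info(const std::string& msg) override {
        (void)msg; // acme::log(acme::severity::info, msg);  vendor call isolated here
    }
    void error(const std::string& msg) override {
        (void)msg; // acme::log(acme::severity::error, msg);
    }
};
```

Swapping the supplier then means touching one class, not the whole codebase.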

Can we learn a general lesson from experience? From the pattern perspective, my first customer is just trying to add some kind of proxy between subsystems. The second and third are using a plain old facade. Nothing wrong with that. Yet no one is making the "right" choice: the wrapper is used to avoid fixing the real problem. We often say words like "postpone", but in software development, "postpone" usually means "don't do it".

So, again, would a theory of wrapping help? And what should this theory tell us? As is often the case, a good theory of software design should help us make informed decisions, and like it or not, that means informed economic decisions. All too often, sound engineering choices are discarded in favor of cheap solutions, because we have no real economic model, or even a rough one, to calculate the (possibly negative :-) ROI of what we propose.

The real, long-term economic consequences for my first customer are way too complex, and I have little hope that we could ever develop a theory that could even come close to helping with that. However, I believe the biggest long-term cost factor of the proxy-based solution would be the increased debugging effort falling on the shoulders of all the other programmers working on the same project (about 100 people, I guess). This problem, somehow, could be highlighted in a theory of wrapping. A proxy, especially when doing threads-fibers-apartments magic, makes it hard to get a clear picture of what's going on just by looking at the stack (which is a per-thread, per-fiber structure). Unfortunately, the impact of this problem seems very hard to quantify.

My second customer is facing a much simpler scenario. I think we could eventually build an economic theory that could model the maintenance cost of keeping the legacy code as-is, the risk of building a leaky abstraction as we quickly build a facade over the legacy db, thereby increasing the mass of code that we'll have to change later, as we finally get to create a better database, and compare that with the savings of not doing it. We should also be able to factor in the increased maintenance cost of keeping the facade in-synch with any changes to the db required by the evolving application. It's not rocket science. Or maybe it is, but we can actually send a rocket to Mars :-). A good economic theory should recommend, in this specific context, to go with (3). Why? Because that's what I did :-)).

The third problem, I think, is more of a process problem. There was nothing wrong in the original wrapper. However, it wasn't designed for reuse, and it should not have been reused as-is, or by tweaking. This is a direct consequence of a lack of control over design choices; the team is, well, wrapped :-)) on itself. Unfortunately, lack of control over design issues is exactly what the team wants. A good process is meant to keep that kind of dysfunctional behaviour out. Note that it doesn't have to be a heavyweight process. Just a good process :-).

Unfortunately, I don't have a Theory of Wrapping to offer. At least, not yet :-). So, for the time being, I'll keep dealing with wrapping using informal reasoning, mostly based on experience and intuition. Got some good ideas? Leave me a comment here!


Tuesday, December 18, 2007 

Problem Frames

I became aware of problem frames in the late 90s, while reading an excellent book from Michael Jackson (Software Requirements and Specifications; my Italian readers may want to read my review of the book, straight from 1998).

I found the concept interesting, and a few years later I decided to dig deeper by reading another book by Jackson (Problem Frames: Analyzing and structuring software development problems). Quite interesting, although definitely not light reading. I cannot say it changed my life or heavily influenced the way I work when I analyze requirements, but it added a few interesting concepts to my bag of tricks.

Lately, I've found an interesting paper by Rebecca Wirfs-Brock, Paul Taylor and James Noble: Problem Frame Patterns: An Exploration of Patterns in the Problem Space. I encourage you to read the paper, even if you're not familiar with the concept of problem frames: in fact, it's probably the best introduction to the subject you can get anywhere.

In the final section (Assessment and Conclusions) the authors compare Problem Frames and Patterns. I'll quote a few lines here:

A problem frame is a template that arranges and describes phenomena in the problem space, whereas a pattern maps forces to a solution in the solution space.

Patterns are about designing things. The fact that we put problem frames into pattern form demonstrates that when people write specifications, they are designing too—they are designing the overall system, not its internal structure. And while problem frames are firmly rooted in the problem space, to us they also suggest solutions.

If you read that in light of what I've discussed in my latest post on Form, you may recognize that Problem Frames are about structuring and discovering context, while Design Patterns help us structure a fitting form.
When Problem Frames suggest solutions, there is a good chance that they're helping us in the elusive game of (to quote Alexander again) bringing harmony between two intangibles: a form which we have not yet designed, and a context which we cannot properly describe.

Back to the concept of Problem Frames: I certainly hope that restating them in pattern form will foster their adoption. Indeed, the paper above describes what is probably the closest thing to true Analysis Patterns, and may help analysts look more closely at the problem before jumping into use cases and describing the external behaviour of the system.


Tuesday, June 26, 2007 

Got Multicore? Think Asymmetric!

Multicore CPUs are now widely available, yet many applications are not tapping into their true potential. Sure, web applications, and more generally container-based applications, have an inherent degree of coarse parallelism (basically at the request level), and they will scale fairly well on new CPUs. However, most client-side applications don't fall into the same pattern. Also, some server-side applications (like batch processing) are not intrinsically parallel either. Or maybe they are?

A few months ago, I was consulting on the design of the next generation of a (server-side) banking application. One of the modules was a batch processor, basically importing huge files into a database. For several reasons (file format, business policies), the file had to be read sequentially, processed sequentially, and imported into the database. The processing time was usually dominated by a single huge file, so the obvious technique to exploit a multicore CPU (use several instances to import different files in parallel) would not have been effective.
Note that when we think of parallelism in this way, we're looking for symmetric parallelism, where each thread performs basically the same job (process a request, or import a file, or whatever). There is only so much you can do with symmetrical parallelism, especially on a client (more on this later). Sometimes (of course, not all the times), it's better to think asymmetrically, that is, model the processing as a pipeline.

Even for the batch application, we can see at least three stages in the pipeline:
- reading from the file
- doing any relevant processing
- storing into the database
You can have up to three different threads performing these tasks in parallel: while thread 1 is reading record 3, thread 2 will process record 2, and thread 3 will store [the processed] record 1. Of course, you need some buffering in between (more on this in a short while).
Actually, in our case, it was pretty obvious that the processing wasn't taking enough CPU to justify a separate thread: it could be merged with the file reading stage. What was actually funny (almost exhilarating :-) was to discover that, despite the immensely powerful database server, storing into the database was much slower than reading from the file (truth be told, the file was stored on an immensely powerful file server as well). A smart guy in the bank quickly realized that it was our fault: we could have issued several parallel store operations, basically turning stage two of the pipeline into a symmetrical parallel engine. That worked like a charm, and the total time dropped by a factor of about 6 (more than I expected: we were also using the multi-processor, multi-core DB server better, not just the batch server's multicore CPU).
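
The banking code was C#, with a Monitor-protected queue between stages; what follows is a roughly equivalent, stripped-down C++ sketch of the asymmetric pipeline (reading/processing and storing are stubbed out, and the symmetric fan-out of the store stage is only hinted at in a comment):

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>
#include <string>
#include <thread>

// A minimal monitor-style bounded queue: the buffering between pipeline stages.
template <typename T>
class BlockingQueue {
public:
    void push(T item) {
        std::unique_lock<std::mutex> lock(m);
        not_full.wait(lock, [&] { return q.size() < capacity || closed; });
        q.push(std::move(item));
        not_empty.notify_one();
    }
    std::optional<T> pop() {               // empty result means "no more data"
        std::unique_lock<std::mutex> lock(m);
        not_empty.wait(lock, [&] { return !q.empty() || closed; });
        if (q.empty()) return std::nullopt;
        T item = std::move(q.front());
        q.pop();
        not_full.notify_one();
        return item;
    }
    void close() {
        std::lock_guard<std::mutex> lock(m);
        closed = true;
        not_empty.notify_all();
        not_full.notify_all();
    }
private:
    std::queue<T> q;
    std::mutex m;
    std::condition_variable not_empty, not_full;
    std::size_t capacity = 1024;
    bool closed = false;
};

int main() {
    BlockingQueue<std::string> parsed;     // between "read + process" and "store"

    std::thread reader([&] {
        // stage 1 + 2: read the file sequentially and process each record
        for (int i = 0; i < 100000; ++i)
            parsed.push("record " + std::to_string(i));
        parsed.close();
    });

    std::thread writer([&] {
        // stage 3: store into the database; in the real system this stage
        // was itself fanned out into several parallel store operations
        while (auto record = parsed.pop()) {
            // issue the INSERT for *record here
        }
    });

    reader.join();
    writer.join();
}
```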

Just a few weeks later (meaningful coincidence?), I stumbled across a nice paper: Understand packet-processing performance when employing multicore processors by Edwin Verplanke (Embedded Systems Design Europe, April 2007). Guess what, their design is quite similar to ours, an asymmetric pipeline with a symmetric stage.

Indeed, the pipeline model is also extremely useful when dealing with legacy code which was never designed to be thread-safe. I know that many projects aimed at squeezing some degree of parallelism out of that kind of code fail, because the programmers quickly find themselves adding locks and semaphores everywhere, thus slowing down the beast so much that there is either no gain or even a loss.
This is often due to an attempt to exploit symmetrical parallelism, which on legacy, client-side code is a recipe for resource contention. Instead, thinking of pipelined, asymmetrical parallelism often brings some good results.
For instance, I've recently overheard a discussion on how to make a graphical application faster on multicore. One of the guys contended that since the rendering stage is not thread-safe, there is basically nothing they can do (except doing some irrelevant background stuff just to keep a core busy). Of course, that's because he was thinking of symmetrical parallelism. There are actually several logical stages in the pipeline before rendering takes place: we "just" have to model the pipeline explicitly, and allocate stages to different threads.

As I've anticipated, pipelines need some kind of buffering between stages. Those buffers must be thread safe. The banking code was written in C#, and so we simply used a monitor-protected queue, and that was it. However, in high-performance C/C++ applications we may want to go a step further, and look into lock-free data structures.

A nice example comes from Bjarne Stroustrup himself: Lock-free Dynamically Resizable Arrays. The paper also has a great bibliography, and I must say that the concept of descriptor (by Harris) is so simple and effective that I would call it a stroke of genius. I just wish a better name than "descriptor" had been adopted :-).

For more predictable environments, like the packet processing above, we should also keep in mind a simple, interesting pattern that I always teach in my "design patterns" course (actually in a version tailored for embedded/real-time programming, which does not [yet] appear on my website; enquiries welcome :-). You can find it in Pattern Languages of Program Design Vol. 2, under the name Resource Exchanger, and it can easily be made lock-free. I don't know of an online version of that paper, but there is a reference in the online Pattern Almanac.
If you plan to adopt the Resource Exchanger, make sure to properly tweak the published design to suit your needs (most often, you can scale it down quite a bit). Indeed, over the years I've seen quite a few hard-core C programmers slowing themselves down in endless memcpy calls where a resource exchanger would have done the job oh so nicely.

A final note: I want to highlight the fact that symmetric parallelism can still be quite effective in many cases, including some kind of batch processing or client-side applications. For instance, back in the Pentium II times, I've implemented a parallel sort algorithm for a multiprocessor (not multicore) machine. Of course, there were significant challenges, as the threads had to work on the same data structure, without locks, and (that was kinda hard) without having one processor invalidating the cache line of the other (which happens quite naturally in discrete multiprocessing if you do nothing about it). The algorithm was then retrofitted into an existing application. So, yes, of course it's often possible to go symmetrical, we just have to know when to use what, at which cost :-).
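
By the way, the cache-line issue I just mentioned is what we'd now call false sharing; here is a minimal sketch (mine, purely for illustration) of the usual countermeasure: give each thread its own counter, padded and aligned to a full cache line.

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Give each worker its own counter on its own cache line, so one core's
// writes don't keep invalidating the line another core is using.
struct alignas(64) PaddedCounter {     // 64 bytes: a typical cache line size
    long value = 0;
};

int main() {
    const unsigned workers = std::max(2u, std::thread::hardware_concurrency());
    std::vector<PaddedCounter> counters(workers);
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&counters, w] {
            for (int i = 0; i < 1000000; ++i)
                ++counters[w].value;   // no lock, no shared cache line
        });

    for (auto& t : pool) t.join();
}
```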
