can hidden complexity be good?

July 1st, 2008

The intuitive answer would be: no. Complexity is the huge cross we have to bear, the great weight that squashes our systems and makes them unmaintainable. We fight complexity tooth and nail, and hidden complexity is the worst kind, because it breaks things for reasons we don't understand. So the least you can do with a system is to grok its full complexity, even if it's too much of a mess to do anything with.

On the other hand... if you take a step back and think about what programming really *is*, then you might have to rethink that conclusion. It is, plainly, finding ways to solve problems of data processing in one way or another. And to be a coder, that's really all you need to know. It does mean that you risk ending up on thedailywtf, however. So now, how is "good code" different from "just code"? What characterizes code that wins our approval? In a word: simplicity. The smartest way of doing something is the simplest way, without missing any of the requirements. A simple solution is an elegant solution, isn't it? Simplest often means shortest, too. The principle of simplicity also relates directly to the issue of complexity in a technical sense. High performance code is efficient because it's the most lazy way to do the job. Inefficient code, conversely, does too much work -- it's for suckers.

What "good programming" is

This ingrained characteristic of programming is reflected very clearly in just about any discussion of code that is "too slow". People critique the code for being awkward, for doing things in a roundabout manner. Eventually you arrive at a solution that is typically both shorter and clearer. On the other end of the spectrum you have concepts like "beautiful code" and "code that stands the test of time". And when you look at the code people are raving about, the same pattern emerges. It's simple. It's both blindingly obvious (once you get around to thinking about the problem in that particular way) and impressively simple for something *that* hard. It is the ultimate optimization of problem vs. effort.

What we produce smacks somewhat of mathematical proofs. In mathematics you score points for simple solutions, but it's not strictly necessary. All it takes is for someone to read your proof and verify that it all fits together. No one is gonna run the proof on their machine a thousand times per second.

And that is what I'm driving at here. Programming is the activity of solving a problem such that the solution exerts the least amount of effort. It's kind of funny that we of all people are dedicated to this particular discipline. We, with the expensive silicon that can perform more operations than any other machine.

Set in those terms, programming is the art of doing as little as possible. And the great pieces of code are great not in what they *do* but in what they *don't do*. In other words, if you write good code, it's because you've found a way to take an input and do the least amount of work to produce the output. Baked into that is the secret of choosing your input very carefully in the first place. So if you can gain something from the form that the input is in, then you're achieving something without writing any code for it. This is step 1 toward your brilliant piece of code.
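To make that a bit more concrete, here is a toy sketch (a hypothetical example, not from spiderfetch or any real project): if the caller guarantees that the input arrives already sorted, you can lean on that form and get a fast membership test almost for free, instead of writing a search of your own.

    import bisect

    def contains(sorted_values, x):
        # Relies on the assumption that sorted_values is already sorted.
        # Because of that, bisect does the real work in O(log n) and we
        # write almost no code ourselves.
        i = bisect.bisect_left(sorted_values, x)
        return i < len(sorted_values) and sorted_values[i] == x

    print(contains([2, 3, 5, 7, 11], 7))   # True
    print(contains([2, 3, 5, 7, 11], 6))   # False

The sortedness of the input is exactly the kind of assumption that lives outside the function: nothing in the code checks it, yet everything depends on it.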

Hidden complexity

But this is also when you start introducing complexity. Even if it's external to your program, it is still an assumption that must hold. Is this good or bad? From your classical software engineering perspective, you badly want to minimize the amount of text you have to read to look at a piece of code and make sense of it. But then again, you need to know everything, because ignorance will bite you.

For example, in spiderfetch I spider a web page for urls. And it can run in a mode that will just output the urls and stop there. Now, if the urls come out in the same order that they appear on the page, that is a big advantage, because the average web page can easily yield 50 urls and the user won't be able to recognize them easily if they come up in some random order. But this is also too cosmetic an issue to be an explicit requirement. I certainly didn't think of this particular issue when I started working on it. If you really wanted to document this kind of behavior down to the smallest detail, your documentation would be enormous. Pragmatically speaking, this behavior probably would not be documented.

Why is this an issue? Because giving back a unique list of urls *is* a requirement, and it interferes with this particular issue: the obvious list(set(urls)) deduplicates, but a set has no defined order, so it won't do the trick.
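To make the difference concrete, here is a sketch of order-preserving deduplication in Python. It's a generic illustration, not the actual spiderfetch code, and unique_in_order is just a name I've made up for it.

    def unique_in_order(urls):
        # Deduplicate while keeping the order the urls appeared on the page.
        # list(set(urls)) also deduplicates, but a set has no defined order,
        # so the incidental, desirable page order would be lost.
        seen = set()
        result = []
        for url in urls:
            if url not in seen:
                seen.add(url)
                result.append(url)
        return result

    urls = ["a.html", "b.html", "a.html", "c.html", "b.html"]
    print(list(set(urls)))        # unique, but in arbitrary order
    print(unique_in_order(urls))  # ['a.html', 'b.html', 'c.html']

The extra work is trivial, which is exactly why it's tempting never to write down why it's there.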

Suppose you find a way to produce the desired behavior without doing any work (or doing very little). Should this be documented in order to make this bit of complexity explicit? If this added bit of complexity doesn't affect the working of the function much at all, then it's quite peripheral anyway. What are the risks? If you break the function itself, then obviously the peripheral complexity no longer matters. If you refactor, you might break the peripheral bit, because it wasn't written down anywhere. On the other hand, if all such peripheral bits were to be specified, it would take you that much longer to grok the code at all.
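If you did decide to make the peripheral bit explicit, the cheapest option is probably a one-line comment or a tiny test that pins the behavior down. Something like this sketch, which reuses the hypothetical unique_in_order function from above:

    def test_urls_keep_page_order():
        # Pins down the incidental behavior: output order follows page order.
        # If a refactoring loses it, at least the failure names the cost.
        urls = ["a.html", "b.html", "a.html", "c.html"]
        assert unique_in_order(urls) == ["a.html", "b.html", "c.html"]

Whether even that is worth the next reader's time is, of course, the whole question.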

The message we send

The question we like to ask ourselves is: what happens to the person who inherits the code? Will he notice the "hidden" (or more precisely: incidental) desirable behavior? If not, is it really important enough to document it? And if yes, will he understand why it works that way? If you lose this behavior you haven't broken the program. You have degraded it in a cosmetic way, but it still works well enough. So does it really need to be explicit?

The so-called clever coder that every middle manager is blogging his heart out about hiring will, obviously, notice the hidden complexity, and know both how and why it's there. The less clever coder might not notice. Or he might notice, but not understand the thought process behind it. What do we want to say to him? It's okay if you mess this up, it's not that important -or- Pay close attention to the detailed documentation or you might break something?

4 Responses to "can hidden complexity be good?"

  1. Interesting thoughts. In general I agree, but since I do know one interesting counter-example I thought I must share it. Let's stay close to mathematics: use Haskell to calculate the factorial of n. There are many ways to do this, as can be seen on http://www.willamette.edu/~fruehr/haskell/evolution.html

    The interesting solutions are the ones by 1: "Combinatory programmer" and 2: "Tenured professor".

    The Combinatory solution is far more complex, but it is quicker ;) And it is kinda elegant too...

    /Andreas

  2. numerodix says:

    I don't know what you mean by "counter-example". I was saying that if you have a piece of code that does something to specification, and in addition exhibits some behavior that is desirable but undocumented, this is what I called "hidden complexity". I.e. you could rewrite the code and it would still perform its primary function, but the "hidden" extra functionality would go away.

    In the case of the factorial function, not that I've run all of the code samples in ghc, but I imagine they give the same output? Then what are you trying to say?

  3. I was merely commenting on "High performance code is efficient because it's the most lazy way to do the job." and in general on the second half of the second paragraph. So I addressed a single statement, not the general statement about hidden complexity.

  4. numerodix says:

    But that is self-reinforcing :D A log(n) algorithm is faster than an O(n) one because it's lazier: it does less work. Doesn't matter how long or short the code is, all that matters is how many instructions are executed in the course of the program. ;)

    "Beautiful code" is often short code, but it's by no means a certainty that you can get the most laziness out of the shortest code.

    Anyway, fair point I suppose.