What garden weeds can teach us about system resilience

I’ve been wondering how garden weeds can be so hard to get rid of when they only grow at about 1 cm a day. I realised that they apply lots of principles which are useful in making a resilient system of any kind (including computer systems) – here are some cross-links:

1. When you pick the biggest and most obvious ones, the tiny ones you hadn’t noticed suddenly grow big in no time at all. I.e. they have a load-balanced cluster model with auto-scaling built in. When one machine goes down, the others jump in and take up the slack.
2. They stash away resources so that if you only pick the top and don’t uproot them, they can grow again, and again, and again. I.e. they are smartly over-provisioned so that in high-demand conditions, they can recover. This might also tell us something about market economics – they must have a pretty tried-and-tested model of how many resources to stash away (invest) depending on the risk climate.
3. They metastasize – i.e. you can cut them up into little pieces but this will just make more of them grow. I.e. they are built from loosely coupled components which are able to operate and recover when the others stop working.
4. Even if you kill them all off, there are still the seeds – i.e. they are resilient to reboot and relaunch.
5. They dump seeds everywhere: i.e. they are opportunistic – ready to boot wherever there is an opportunity – e.g. in a crack in the concrete.

On SSL – why Crypto can’t fix trust for end-users

Cryptography is widely claimed to be useful in establishing trust in electronic assertions (such as an assertion that the holder of a certain public key is also the registered owner of a certain domain). The argument goes like this: a certain operation E on an assertion A, producing an artefact A′, is mathematically intractable (i.e. practically impossible) without knowledge of a certain secret S. Another bit of information, P, which is not secret but publicly available, can be used to verify whether A′ was produced using S or not, by applying D, the inverse mathematical operation to E. A given P can prove this for only one secret. Therefore if you know P, and you know the binding between P and the entity owning the secret (call it B(P, entity)), then you can be sure that this entity produced A′ from A using S. In order to be sure of this binding, you can rely on a second binding which makes the statement that B(P, entity) is valid. By chaining these bindings, as long as you have one binding which you are sure of (for example because someone you trust gave it to you), then you can be sure of the whole chain and ultimately of the source of A′.
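The E/D round trip above can be sketched with textbook RSA in a few lines of Python. This is a toy, not the actual SSL machinery – the primes are the tiny classroom values (n = 3233), the function names `sign` and `verify` are my own labels for E and D, and none of it is remotely secure – but it shows the shape of the argument: only the holder of the secret can produce an artefact that the public information accepts.

```python
# Toy textbook RSA illustrating the argument: E (signing with the secret S)
# and D (verifying with the public P). The numbers are the classic small
# classroom example -- NOT secure, purely illustrative.

def sign(assertion: int, d: int, n: int) -> int:
    """E: produce the artefact A' from assertion A using the secret S = (d, n)."""
    return pow(assertion, d, n)

def verify(assertion: int, artefact: int, e: int, n: int) -> bool:
    """D: use the public P = (e, n) to check whether A' was produced with S."""
    return pow(artefact, e, n) == assertion

n, e, d = 3233, 17, 2753   # public modulus, public exponent, secret exponent

assertion = 65                       # the assertion A, encoded as a number < n
artefact = sign(assertion, d, n)     # A', impossible to forge without d

print(verify(assertion, artefact, e, n))       # True: A' really came from S
print(verify(assertion, artefact + 1, e, n))   # False: a tampered artefact fails
```

Note that the verifier only ever touches the public pair (e, n) – which is exactly why the whole scheme then hinges on how you came to trust that pair in the first place.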

This is all fine and dandy, but the problem is with the dumb user. Ultimately the way this works is that your computer performs D and tells you whether P corresponds to S or not. No user is capable of checking whether D was performed correctly, or whether the computer just said “yeah, looks good to me” without actually doing D at all; therefore the user has to trust the computer and its software. So users, most of whom wouldn’t know a public key from a public toilet, are tasked with figuring out whether the computer was telling porkies or not. How can this be done? The options are limited to one: you trust the person who gave you the device that they neither cooked the machine deliberately nor exposed it to anyone else who could have cooked it instead. This is almost as hard as figuring out by some means other than crypto whether, for example, a document was really sent by a specific individual – say, by phoning them and listening to their voice telling you that they sent it.