Yesterday, an old favourite post was reposted over at The Oil Drum. The sudden shift from normal banking to the credit crisis and lending freeze in the autumn of 2008 was a classic example of the failure of a networked system, in this case the private-sector investing and financing system.

The interesting question here is this: can the failure of networked systems be predicted or anticipated? Or is it the case that, as the banksters claimed in their defence, the credit crisis was a “1,000-year flood”, i.e. a Black Swan? In his Oil Drum essay titled “The Failure of Networked Systems”, excerpted below, David Clarke argues that we have the methodology to understand the origin of, and thus to anticipate, sudden networked-system failures. The sandcastle analogy that he offers is particularly instructive:

Academics have studied failures of complex systems with interesting results. One of their experiments will be familiar to anyone who has ever played with sandcastles as a child. Build a sand pile by gradually adding grains of sand. After a while, avalanches start to run down your pile. Sometimes they are minor; other times they affect the whole pile. There is seemingly no way to reliably predict the outcome.

However Per Bak, in his book “How Nature Works,” shows that there is an instructive way to look at this question.

“…There is a critical angle for piles of sand–a level of steepness that the slope cannot go beyond without sand starting to roll down the slope. Imagine that, as you add sand, you colour red all of the areas of the pile that achieve this critical angle (and are thus on the verge of an avalanche). You will notice that the red patches appear as tendrils running down the side of the pile. As you add sand to the pile it gets higher and wider – the pile gets steeper and more little tendrils of red appear. Eventually you will see the tendrils of red start to interconnect.

If you drop a grain of sand on a red area then you will precipitate an avalanche. If the red area is interconnected with other red areas then all these areas will be drawn into the avalanche. If the red area is isolated, then the avalanche will be confined to one red tendril running down the side of the pile….”
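Bak’s critical-angle picture is the Bak–Tang–Wiesenfeld sandpile model, and it is simple enough to simulate. The minimal sketch below is my own illustration, not from Clarke’s essay; the grid size, grain count and critical height are arbitrary choices. Grains are dropped onto random cells, any cell that reaches the critical height topples onto its neighbours, and the size of each resulting avalanche is recorded:

```python
import random

def simulate_sandpile(size=20, grains=5000, critical=4, seed=1):
    """Minimal Bak-Tang-Wiesenfeld sandpile: drop grains on random
    cells; a cell that reaches the critical height topples, sending
    one grain to each of its four neighbours (grains that fall off
    the edge are lost). Returns the size (number of topplings) of
    the avalanche triggered by each dropped grain."""
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    avalanche_sizes = []
    for _ in range(grains):
        r, c = random.randrange(size), random.randrange(size)
        grid[r][c] += 1
        topples = 0
        unstable = [(r, c)] if grid[r][c] >= critical else []
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < critical:
                continue  # already relaxed by an earlier topple
            grid[i][j] -= critical
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= critical:
                        unstable.append((ni, nj))
        avalanche_sizes.append(topples)
    return avalanche_sizes

sizes = simulate_sandpile()
print("largest avalanche:", max(sizes))
print("drops causing no avalanche:", sizes.count(0) / len(sizes))
```

Early drops produce no avalanches at all; once the pile has self-organised to its critical state, the very same single grain can trigger anything from a one-cell topple to a pile-wide collapse, which is why the outcome of any individual drop is so hard to predict.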

This basic principle can be applied to my network problem. If one route on the network gets loaded to capacity (i.e. turns red), the system detects that it has reached maximum capacity, and it delays traffic (piles it higher) or switches traffic to other routes (spreads wider).

If the other routes were new, unloaded and redundant parts of the network, this would not be a problem. But they are not: they are simply other parts of the old, heavily loaded network. Pretty soon all routes are red, and they are all interconnected. So when one part of the network fails, it passes its traffic to another part, which fails in turn, and your avalanche starts. With all routes interconnected, all of them are vulnerable and all fail.
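Clarke’s red-route cascade can be sketched in the same spirit. In this toy model (again my own illustration, with made-up load and capacity figures), every route has a fixed capacity; a route pushed past its capacity fails, and its traffic is spread evenly over the surviving routes, which may push them past capacity in turn:

```python
def cascade(loads, capacity):
    """Toy cascading failure: any route whose load exceeds capacity
    fails, and its traffic is redistributed evenly over the surviving
    routes; routes pushed over capacity by the redistribution fail in
    the next round. Returns the set of failed route indices."""
    loads = list(loads)
    failed = set()
    while True:
        newly = [i for i, x in enumerate(loads)
                 if i not in failed and x > capacity]
        if not newly:
            return failed
        failed.update(newly)
        shed = sum(loads[i] for i in newly)  # traffic to reroute
        survivors = [i for i in range(len(loads)) if i not in failed]
        if not survivors:
            return failed  # the whole network is down
        for i in survivors:
            loads[i] += shed / len(survivors)

# headroom everywhere: one overloaded route fails, the rest absorb it
print(sorted(cascade([6, 2, 2, 2, 2], capacity=5)))  # → [0]
# every route already "red": the same overload takes down everything
print(sorted(cascade([6, 5, 5, 5, 5], capacity=5)))  # → [0, 1, 2, 3, 4]
```

The two runs make the point numerically: with spare capacity the failure stays local, but once every route is already loaded to its limit, a single overload anywhere cascades through the entire network.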

The takeaway is that, if the study of interlinked unsustainable practices can anticipate and describe the broad contours of future chaos, then maybe, just maybe, investors and policymakers can have sufficient early warning to take drastic steps to prevent or minimise the damage. Cyprus, with its carefree reliance on debt and fossil fuels, so far remains on exactly the wrong path. By David Clarke and Per Bak’s analogy, we are furiously piling sand onto any green-coloured slopes of our sandcastle that we can find, instead of drastically slowing the rate of sand accumulation and broadening the base of our pile by embracing sustainable practices.