The inevitable instability of systems
Sometimes we tend to believe in the stability of systems. Systems are sometimes designed, think of road or rail systems, and sometimes they are discovered, like stellar systems or the behavior of ant colonies. When we design them, we design for the best, making them as robust as possible. When we discover them, we are amazed by their complexity. In most cases we simply don’t understand them, with the universe as we know it as the best example.
Why is it that we tend to think that systems need to be stable? Can’t we just accept that everything where energy (or another flow) is involved is by definition unstable? Sometimes a system appears to be stable, but in time it will become unstable and ultimately it will collapse. Road systems collapse because too many cars drive on them, because construction and maintenance cannot keep up, or because they become superfluous. Stellar systems are unstable because they collide with others or fade away (no more energy), and ant colonies eventually disappear.
My understanding is that every system that is created at some point will disappear at another point, and that energy is the fuel needed to create and maintain it, but that same energy will also destroy it. Without energy a system is dead (maybe stable?), so it is inevitable that a system is unstable by definition. The only stable system might be a dead system. The characteristics of a system are structure, behavior and interconnectivity; while a system exists, these three influence and change each other. At some point a minor change can start the disruption of the whole system.
By accepting that systems are by definition unstable, can we design better systems? Let go of control, and accept that the end of one system can mean the beginning of another. Or letting two systems collapse into each other in a controlled manner can mean the start of a new (and perhaps better) one. If we bring this philosophy into organizations (or economies), what can we learn from it? Can we develop new design principles that respect the temporal nature of systems? What if we always included an end-of-life scenario for a system while we design it? I think that would be a lot better. Think about the current banking issues. Banks collapse, and we try to ‘save’ them. It is basically a quick fix without thinking things through. We think this system is needed, but we haven’t thought through alternatives, and we certainly did not think about what to do if this system failed at some point, least of all when the system was introduced.
The banking system is not needed for humanity. At some point it seemed a good system for us, and it might remain one for some time despite the huge financial injections. But this system is not there forever, and we have seen its weaknesses. One of the best examples of a temporary system is the democratic system. By definition we accept that it is unstable, and we have built in rules to make sure governments collapse quickly. It is not the most efficient system, but it is a system that renews itself on a regular basis. While the democratic system itself can collapse as well, we do not try to make it efficient and stable. That would bring us to dictatorship, which is efficient but has its disadvantages.
So, perhaps more questions than answers or solutions, and maybe questions that have been asked many times before, but some questions need to be asked again and again. Last but not least: a system is interconnected not only within itself, but also with other systems. Let’s not forget that while designing systems. The instability of one system might be needed (or even crucial, think of day and night, rain and drought) for the stability of another.
Complex Adaptive Systems, my understanding
Some commenters on previous posts on this blog referred to CAS, or Complex Adaptive Systems. This term was somewhat fuzzy to me, as I had never read about CAS before. So now is the time to do so. A first lookup on Wikipedia is always a good start, so that’s what I did. I must say, the C in CAS already becomes apparent when you look at the definitions. One of the definitions mentioned is the following:
A Complex Adaptive System (CAS) is a dynamic network of many agents (which may represent cells, species, individuals, firms, nations) acting in parallel, constantly acting and reacting to what the other agents are doing. The control of a CAS tends to be highly dispersed and decentralized. If there is to be any coherent behavior in the system, it has to arise from competition and cooperation among the agents themselves. The overall behavior of the system is the result of a huge number of decisions made every moment by many individual agents.
So this definition says that a CAS is a network in which many actors act for themselves in response to their (changing) environment. If I interpret this correctly, human society is a CAS as well: almost all humans are connected to each other via a number of other humans. The Internet is a CAS too, where many endpoints are connected to the same network; they determine the network, they are the network. Maybe the universe and evolution are as well.
My interpretation is that we use the term CAS when we do not understand the behaviour of a system or phenomenon, or when it cannot be controlled. Examples that are given are ant colonies, stock markets, ecosystems, and political parties. All are difficult to understand, if they can be understood at all, and even the actors in them probably do not understand the system they are part of, for example the politicians in a political party or the ants in the colony. These systems or phenomena cannot be controlled, and their behaviour can seem unpredictable. And that’s a good thing; the urge to control is very much overrated. Maybe some influence can be desirable sometimes, if possible.
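To make the “many agents acting in parallel” idea a bit more concrete for myself, here is a minimal toy sketch (my own illustration, not a canonical CAS model): agents sit on a ring, each one only sees its two neighbours, and each follows one simple local rule. Nothing coordinates them, yet a coherent global pattern emerges.

```python
import random

# Toy illustration of decentralized agents: each agent on a ring only sees its
# two neighbours and follows one simple local rule. No agent knows the global
# pattern, and nothing coordinates them, yet coherent clusters emerge.

def step(states):
    """One parallel update: every agent reacts to its neighbours at once."""
    n = len(states)
    new_states = []
    for i in range(n):
        left, me, right = states[(i - 1) % n], states[i], states[(i + 1) % n]
        # Simple local rule: adopt the majority state of the three cells I can see.
        new_states.append(1 if (left + me + right) >= 2 else 0)
    return new_states

random.seed(1)
states = [random.randint(0, 1) for _ in range(60)]  # random initial "opinions"

for _ in range(10):
    print("".join("#" if s else "." for s in states))
    states = step(states)
```

Running this prints one line per time step; the initially random noise quickly settles into stable blocks, even though no agent ever looked further than its direct neighbours.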
The Wikipedia article also states that the principles of self-organization and emergence are very important in these systems. The relation between self-organization and CAS became apparent in the discussion on self-organization as well. But then we come to the differences between human beings with a mind of their own, and other actors like ants or cells. Can self-organization occur in an organization where people are involved? Or is it just not possible because we can think for ourselves and act by reason? The latter is a philosophical discussion: do we act by reason or by a drive for power? The philosophers Immanuel Kant and Friedrich Nietzsche thought about that very differently. So maybe this discussion will always be a philosophical one.
If we go back to the definition, the C in CAS is only true when you look at the phenomenon from a bird’s-eye perspective. The actors deep down in the system are probably not aware (even if they could be) that they are part of the system; they just follow simple rules. From their perspective there is not much complexity. They adapt to their environment, like a drop of water that just follows the easiest path. The drop is not aware of the ecosystem it is part of, just as the system is not aware of the single drop. It is possible to influence the flow of the water, because we understand the characteristics of water. But it is not possible to influence the whole system that the water is part of; it is just too complex.
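As a small sketch of that “simple rule, no global awareness” idea (again just my own toy example): a drop that only compares its current spot with its immediate neighbours and always moves to the lower one. It never sees the landscape as a whole, yet it reliably ends up in a basin, and by reshaping the terrain it responds to we can influence where it ends up without controlling the drop itself.

```python
# Toy sketch: a "drop" that only knows its immediate surroundings and always
# moves to the lowest neighbouring spot. It has no picture of the whole
# terrain, yet it reliably settles in a basin.

terrain = [5, 4, 6, 3, 2, 4, 7, 1, 3, 5]  # heights along a 1-D landscape

def settle(terrain, position):
    """Follow the locally easiest path until no neighbour is lower."""
    while True:
        neighbours = [p for p in (position - 1, position + 1) if 0 <= p < len(terrain)]
        lowest = min(neighbours, key=lambda p: terrain[p])
        if terrain[lowest] >= terrain[position]:
            return position  # a local basin: nowhere lower to go
        position = lowest

print(settle(terrain, position=5))  # the drop slides from index 5 into the nearby basin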
Translated to organizations, complexity is there or not depending on your perspective. The higher in the hierarchy, the more complex the organization as a whole seems to be. If you are high in the organization, you are aware of its size, and therefore of the variety of actors; how they all interact is difficult to grasp. The lower in the hierarchy, the less you are aware of all the other players in the organization, and the more focused you are on your own tasks, which are relatively simple. Well, that’s my understanding at this point.