In a complex system, we can’t predict consequences. As Cynefin expresses it: in complexity, we determine causality in retrospect, not ahead of time.
So how do we make decisions? We have to guess, try things, see what happens, react to that. We use reasons and heuristics.
There are some consequences we do predict, and these are reasons. I wake up my children because I predict that getting up before noon will help them sleep at night (not really, but some other parent might). A store puts groceries in bags because it helps me carry them, so I will buy more. I add a feature to my software because I expect users to like it.
Will these predictions come true? We can’t know until we try.
Are these valid reasons? Yes, because there’s an explanation and a theory behind them. They’re good reasons to try the thing.
Yet we can’t predict all the other consequences. Will getting up early make my children hate me? Will the plastic bags clog the ocean and destroy human life as we know it? Will the feature complicate the UI until the software becomes unappealing? Or will the new feature interact with future features in a way that makes everything harder? (Yes. The answer to this last one is yes.)
That’s where heuristics come in. Children should get up at a reasonable hour. For anything we make, we should complete the system with a way to reuse it and eventually destroy it. We should not have more than five buttons on a page. We should keep code decoupled. We should put features behind feature flags.
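That last heuristic is cheap to follow. A minimal sketch of what it means in code (all names here are hypothetical, and real systems would read flags from a config service rather than a dict):

```python
# Minimal feature-flag sketch: the new behavior ships dark,
# and can be turned on (or off again) without a redeploy.
# Names are illustrative, not from any particular library.

FLAGS = {"new_checkout_flow": False}  # in practice, loaded from config

def is_enabled(flag: str) -> bool:
    """Look up a flag, defaulting to off for unknown flags."""
    return FLAGS.get(flag, False)

def old_checkout(cart):
    return f"old checkout of {len(cart)} items"

def new_checkout(cart):
    return f"new checkout of {len(cart)} items"

def checkout(cart):
    # The risky new feature only runs when its flag is on;
    # otherwise we take the proven path.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return old_checkout(cart)
```

If the new feature interacts badly with something we didn’t predict, flipping the flag back off is exactly the kind of cheap “tweak, revert, compensate” move that comes later in this essay.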
Heuristics include morals, ethics, principles, values. All of these account for consequences that are hard to predict, or hard to measure the importance of. They guide our actions toward a larger good, a longer-term good, bigger than our immediate situation. Bateson talked about religions as guidance toward caring for the higher-level system. Values preserve safety, quality, and other system properties that degrade through purposive work.
We put the reasons and heuristics together and come up with a decision, using our magic dealing-with-ambiguity human power. Was it good?
Sure, I’m not gonna judge your decision, as long as you have a story for it. What matters is what happens next.
We take action, and then we look at the consequences. Did our predictions come true? More important, what happened that we did not expect? Now is the time to construct that causality in retrospect.
And then take more action! Tweak, revert, compensate, whatever it takes until we get results we do want.
Action is a process. Our intentions are revealed not by one isolated thing we do or tweet, but by how we respond to the results.