The Essential Arrogance of Children
Dad. Why does the sun go down at night?
Ah! Good question, my sparkling and intelligent boy. You see. The sun is a giant flaming ball of gas in space, and the earth is actually rotating, so what looks to us like the sun going “down” is actually us rotating so we can’t see it anymore!
But why?
Well. Because you see, my mellifluous and unique offspring. Objects in motion, such as the earth, do not stop without a reason. In space, there is no air to slow you down, so it just keeps spinning.
But why does it spin?
My increasingly aggravating child, the earth was formed a long time ago … and, um, things were always in motion since the big bang, so it has just … always been spinning?
Why?
...Ask your mother.
This delightful little exchange is an example of infinite regress. If you assume that all effects have causes, you can do very fancy things, like try to assert the existence of a non-contingent creator god. It’s a fun hobby for a lazy Sunday.
It’s also arrogant. Adults realize that they can’t hold the entire causal chain, all the way from an observed phenomenon back to the creation of the universe (and beyond!), in their heads. Children, tiny fools that they are, have not learned this yet.
Paradoxically, the ability to perform this sort of thinking is the main thing that matters in differentiating good technical thinkers from bad, and that is why I exclusively choose to hire children. End of blog.
Wait, no, that’s not right. Children have all sorts of other problems, and I have been informed there are legal issues. Back to dealing with adults, adults who, through engaging in a culture that would much rather the working class did not question things, have been indoctrinated into thinking that they simply cannot understand the complexities of the world. Financial systems, political systems, computer systems: they’re just beyond you; let the smart rich people deal with them.
Wrong. Obviously wrong. Few things are actually that complex when you get through all the obfuscation and accidental complexity. You can understand the world around you fully. You will be considered arrogant for believing this.
But Elliot, I hear you ask, you are being arrogant! Didn’t you just prove that you can’t actually do this? You’d have to go back to the beginning of the universe every time?
Correct! Congratulations on passing the first test and not blindly believing whatever I say. Luckily for us, we get to apply constraints to our reasoning. What follows are the two constraints that I think apply to most software organizations and that allow us to formally break the infinite regress when doing technical planning.
- Computer hardware can be assumed to exist, and behave according to its published specifications.
- The final goal of the endeavour is to generate revenue for the company.
Annoyingly, these do come with caveats, although if they were bulletproof they wouldn’t be assumptions, now would they? The first is obviously sometimes false. You need to be aware of that whilst simultaneously doing the reasoning that requires it to be an absolute truth.
The second is an oversimplification, and if you know me you know I don’t believe revenue is the actual goal anyway. Nonetheless, it does seem to be a necessary (albeit increasingly inefficient) assumption in order to sustain the bodies of the people that create real value.
Really, any goal will do for the purposes of breaking the infinite regress; it’s a fiction that your senior leadership gets to simply create. The closer this goal is to the overarching goal of the entire organization, the grander in scope your project can be, and the more impact you will be able to have. Even so, if you find yourself on a project where the stated goal seems unconnectable to the final goal of the company, you should probably still poke your head up and start asking questions.
With these two conditions, we have both a start and an end to our previously infinite regress. We can start with our goal and trace the causal chain all the way down through our theoretical systems, terminating in the behavior of the computer hardware that we know exists because we have assumed it.
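To make that concrete, here is a minimal sketch of what writing the chain down might look like. Every name in it (Claim, PRESUPPOSITIONS, the checkout and storage claims) is a made-up illustration, not a real planning tool:

```python
# A minimal sketch, with made-up names throughout: each claim is justified by
# another claim, and the chain has to terminate at an agreed presupposition
# rather than regressing forever.
from dataclasses import dataclass
from typing import Optional

PRESUPPOSITIONS = {
    "computer hardware behaves according to its published specifications",
    "the final goal is to generate revenue for the company",
}

@dataclass
class Claim:
    statement: str
    because: Optional["Claim"] = None  # None means this claim must be a presupposition

    def trace(self) -> list:
        """Walk the 'because' links and insist the chain ends at a presupposition."""
        chain, node = [self.statement], self
        while node.because is not None:
            node = node.because
            chain.append(node.statement)
        if node.statement not in PRESUPPOSITIONS:
            raise ValueError(f"chain bottoms out at an unexamined assumption: {node.statement!r}")
        return chain

# The regress now stops where we said it would, instead of at the big bang.
goal = Claim("the final goal is to generate revenue for the company")
workflow = Claim("customers must be able to check out without abandoning the flow", because=goal)
storage = Claim(
    "acknowledged database writes survive power loss",
    because=Claim("computer hardware behaves according to its published specifications"),
)
print(workflow.trace())
print(storage.trace())
```

The code itself is beside the point; the point is that every “because” either points at another claim you can defend or at something you have explicitly agreed to assume.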
Most companies will also add the following to make things easier:
- Third-party computer software can be assumed to exist and behave according to its published specifications.
This is becoming even less true as LLM rot infects our entire industry, but it’s still true enough to be carrying on with. You may have to perform more software audits than before to sustain this assumption.
Organizations will have loads more of these, normally not just undocumented, but unconceptualized. They exist in a miasma of common wisdom and good practice and constantly change over time in the most frustrating manner. Keeping at least a loose handle on that change is also a necessary skill.
It startles me how difficult I find this to communicate. The behavior I tend to observe isn’t so much that people are bad at this; it’s more that they don’t realize they are allowed to do it. How many of us are doing our technical planning entirely on vibes or blind trust? I’ll grant that even with these assumptions known, you still have to rely on estimates and some educated gambles to close gaps of ambiguity in the causal chain, but that’s different from living in a world completely divorced from cause and effect, my guy.
I’m getting a bit long-winded for a daily blog, but before I go I will gesture briefly at one more concept, which I am going to term the strong and weak forms of causal technical thinking.
The weak form goes something like this:
- We should do X because Y.
- We should do Y because Z.
- We should do Z because it has worked before.
This is pretty great. Yes, it’s a shortcut that relies on a heuristic, but observation of the past is a pretty reliable heuristic. Probably most of what you do should be this. You can also rely on other heuristics like industry best practice, but be warned: best practice can be unreliable, presented as a near-global truth when it is in fact appropriate only in very specific contexts. Apply care.
Even a well-justified weak formulation, aside from being easier to achieve, is strictly worse than the strong form, which traces the causal chain in your specific context all the way from first cause to final effect. When I say this, I mean you can assemble, in detail, the specific function calls and data exchanges that happen throughout your entire stack in order to support the workflow that delivers the value you’re after. You will also be able to tell me why that specific workflow is the one that best delivers that value in human terms, and you will be able to connect those two lines of reasoning together. You will be able to do this without relying on any thought-terminating shortcuts outside of the global presuppositions covered earlier.
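As an illustration only, and assuming a wholly hypothetical one-click checkout (none of these services or functions exist anywhere outside this post), a strong-form trace written down might look something like this:

```python
# A hedged sketch of a strong-form trace for a hypothetical checkout workflow.
# Every class and function here is invented for illustration; the comments are
# the point: each call is tied either to the human-side chain (why this
# workflow delivers value) or to one of the presuppositions from earlier.

class PaymentGateway:
    def charge(self, cart_id: str) -> str:
        # Third-party call: covered by the "third-party software behaves as
        # documented" presupposition, not by anything we trace ourselves.
        return f"tok-{cart_id}"

class OrdersDB:
    def insert(self, cart_id: str, token: str) -> str:
        # Durable write: ultimately grounded in the hardware presupposition,
        # i.e. the disk actually persists what it acknowledges.
        return f"order-{cart_id}-{token}"

class EmailQueue:
    def enqueue(self, order_id: str) -> None:
        # Deliberately asynchronous so the confirmation page isn't blocked on
        # SMTP latency; that choice links back to the human-side claim that
        # slow checkouts get abandoned, which links back to revenue.
        pass

payment_gateway, orders_db, email_queue = PaymentGateway(), OrdersDB(), EmailQueue()

def place_order(cart_id: str) -> str:
    """The workflow the human-side chain says delivers the value we're after."""
    token = payment_gateway.charge(cart_id)
    order_id = orders_db.insert(cart_id, token)
    email_queue.enqueue(order_id)
    return order_id

print(place_order("cart-42"))
```

The stubs are filler; the annotations are the artefact. A strong-form plan is one where you could write that kind of comment truthfully for every call in the stack.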
If you can do this, you’ll learn that you can estimate projects significantly more accurately and with fewer surprises, and have much greater confidence that the work you are doing is not going to go to waste.
This is practically impossible and takes forever. You have to talk to a lot of people, read a lot of manuals, know the trade-offs for common implementation strategies and architectures, and still have to run a bunch of experiments. I certainly wouldn’t say I’m very good at this; it takes a lifetime. Even being able to make the attempt, though, seems to be becoming a bit of a rarity.
Importantly, the closer you can get to the strong form, the better. There will be gaps, and you will have to bring in more assumptions (such as trusting what colleagues tell you) in order to complete the chain. Despite this, every extra link you place in the chain reduces the probability that your organization has missed something, is misaligned, has miscommunicated, or is just plain wrong. It’s great stuff.
If this seems out of reach to you and you still want to be a technical leader, you better get practicing. I promise, despite what the entirety of modern culture is telling you, you can do this.