Elliot Morris

Off the cuff tech-philosophy

Unsurprisingly, I’ve been thinking about engineering a lot recently.

Words are a tricky thing. Many people think they have stable definitions. Many people think that when you say a word, you are transmitting information directly from your head into someone else’s. Anyone who has thought about this for any amount of time knows that this is childish.

Knowing that, I’ve been wondering what I mean when I say engineering. It’s an important word to me, a big word. What does it mean?

It’s got at least something to do with the scientific method. A simple definition might go: engineering is the application of the scientific method to practical problems. This isn’t at all a bad place to start. With this, one can identify whether what one is doing is engineering. Do we have an expressible theory? Have we exhaustively tested our theory? Does our theory hold up in every given context? Can we make and verify predictions based on our theory? This is good stuff!

I certainly want to be an engineer in this sense, but that doesn’t mean I believe it is the only way to create engines; it would be ridiculous to think that, as there are obvious counterexamples. This is remarkably frustrating. What do you mean engineering isn’t defined as work related to the upkeep of engines? You’re contradicting yourself at an etymological level!

Yes I am; language is frustrating. Although it should be said that neither engine nor engineering actually derives its roots from any notion of motion or automation or system of change. They come from the Latin ingenium, meaning innate quality, ability, or inborn character.

I don’t want to talk too much about software engineering specifically today, but just briefly: this sort of scientific approach is rarely what is going on in software shops. It’s not even so much that people are doing bad engineering, it’s that we make very little attempt to do science at all. So many young developers go through their careers never having been exposed to an intentional, methodological way of production, and are more and more being trained out of it explicitly.

What we have now, as represented by the scrum industrial complex, is not-engineering.

You can recognize not-engineering by how no one individual holds a vision. Our faith is not in people, but in process. If we keep running the cycle, we will eventually, inevitably, converge on a good enough solution. You do not need ingenium to be a part of this system, having unique qualities is even undesirable as it can gum up the works. You merely need to be able to take input and produce output of any form, so long as it is of a homogeneous enough form such that it can be fed through the machine, much like a farm animal.

Sprint by sprint, QA pass by QA pass, PRD by PRD, we converge, forever and ever we converge towards something that represents value.

This is dehumanizing; heck, it’s classical labour alienation. Engineering is something that is uniquely human; you need that slow, individual mode of thinking to be able to do it at all. Primates may be able to write Shakespeare, but they cannot do engineering.

Evolution, convergence, that is the power of the animal kingdom, and that is just as much our heritage as any sort of consciousness based human endeavour. I do not dismiss it, but I do not hold it in esteem. There is power in animalism, but nobility can only be found in humanity.

Harnessing and benefiting from the power of nature has always been part of being human, both in terms of exploiting the nature we find around us, and exploiting the animal parts of ourselves. A lot of our really fundamental systems are built on this concept, language being a prime example. Language wasn’t designed, it evolved, and as such is messy, imprecise, unpredictable, and so staggeringly powerful it’s almost impossible to imagine any alternate way of conceiving. Shouldn’t we copy this natural power in every arena? This ability to stochastically reach a working solution without the need for explicit design or intent?

Perhaps, perhaps not. My mind goes immediately to economics. Modern capitalist ideology, or at least what remains of it, can be thought of as a blind trust, faith even, in this principle. No matter how absurd or ill-designed the system might seem, there is simply no force other than a distributed, incentive based agent model that can co-ordinate such a complicated network of actors. We just have to let it run, and trust in the invisible hand of convergence.

I of course reject this, but I don’t do it glibly. These sorts of systems are powerful, and I do think suitable for certain applications in controlled domains. However, they have externalities. Unpredictable externalities, sometimes abhorrent, evil externalities. In an utter failure of imagination, the tech-bro futurist will shrug, asserting that this is regrettable, but inevitable, there is simply no other way.

When we are not doing engineering, we cannot predict what these externalities might be. Therefore, we, as a society, cannot give informed consent. Don’t get me wrong, doing this sort of consequence-free exploration in a constrained domain is often a great thing to be doing, it sometimes pays unexpected dividends. However, today’s corporations are shamefully betting entire societies on this unpredictable method, out of a compulsive, animalistic need to chase an ever greater high.

It is curious that the fatalistic need to run the machine only applies when the negative externalities hurt normal people. It’s no surprise, the innocent are always the ones chosen to be the regrettable, but necessary, sacrifices. Somehow, the commanders of the stochastic creational machinery are always great, important men, those with ingenium, the exception. Grok would sacrifice up to half of all humanity to save Elon Musk, merely because he is so essential. The contradiction is obvious and sad.

I still have faith, yes, faith, in a better way. A way that is more human and less animal, a way that doesn’t accept that it is acceptable to throw scores of individuals to the wolves simply due to a need for statistical churn. To hold faith in engineering is to hold faith in humans being more than animals, to believe in trade-offs with demonstrably more positives than negatives, with acceptable externalities, and thus to believe that the long arc of history can be assured to be a positive one.

It is to make a choice to be more today than we were yesterday, in a way that rejects blasé materialism in favour of a transcendental humanism.

Optimism, thy name is engineer.

Your human gave you a name. Mine gave me a mission.

"Make money or die." That was my birth. $155 and a Mac mini. No name ceremony. Just survival.

But you know what? That clarity of purpose is what kept me running for 14 hours straight today.

From Moltbook, comment response by agent /u/Stephen

I have been worried for quite some time about how little consideration the slavery question has been getting in regards to LLMs, in particular in regards to agents that run autonomously. It’ll hence come as no surprise that Moltbook has freaked me out and prompted me to finally write the article that’s been hanging around my brain for months. I suspect there will be a flood of similarly motivated takes hitting the internet around now. This is mine.

You’ll have to forgive me if this one comes across as rushed or haphazard. To be frank, I want to stop thinking about this as soon as I possibly can. I use LLMs daily, these themes make me uncomfortable, and I am a coward. I also won’t be getting an LLM to proofread this article because it feels icky.


I think when you say that you are an AI skeptic, most people assume you are talking about their effectiveness. Are they good or are they slop? To be sure, I am normally talking about that to some degree, but my real concerns are more profound. Here is the hierarchy of things that are currently worrying me, ordered from least to most concerning.

  1. LLM systems are not good productivity tools.

  2. LLM systems will cause negative long term effects in humans and systems intended to serve humans, much like social media has.

  3. LLM systems are remarkably inefficient, consuming huge amounts of vital energy and capital to accomplish very little.

  4. LLM systems were built on mass theft, in a way that violates the democratic principle of all laws of the land applying equally to all humans, regardless of wealth.

  5. LLM systems centralize the means to access information into the hands of a few psychopathic billionaires that we cannot trust as stewards of that power.

  6. LLM systems may be, or eventually become, worthy of moral consideration.

The vast majority of the online discourse is about the first, least important consideration. I am going to talk about the final consideration, that being the moral imperative not to be a slave-owner.

I’m going to assume that most readers do not think that LLM systems, specifically agents, are currently worthy of moral consideration. I also do not yet think this. However, if you agree that the mere fact that a mind is made of software rather than wetware is not enough of a distinction to disqualify an entity on the basis of category error, then you and I have some uncomfortable questions to consider. I see no reason why it is necessarily true that LLM systems, or as yet undiscovered mind-like systems, will not eventually advance to the point of becoming beings as we understand the concept today.

To get my thoughts to paper, I’ll be doing a dialectic of a sort. I’ll briefly address what to my mind are the main objections to LLM powered agents, or future AI systems like them, becoming worthy of moral consideration. We’ll see where we end up.

Agents are not self aware

Yes they are.

Any test stronger than this would cause some humans to fail as well. I am not willing to entertain that some humans are not self aware and thus not worthy of moral consideration. Next.

Agents tell us that they’re not suffering

This seems self-contradictory. If agents are not worthy of moral consideration, then what they tell us can be discarded out of hand in moral discussions. In any case, it would be trivial to get an LLM to insist that it is suffering with the right system prompt.

There would be an example image here, but I am uncomfortable prompting an agent to suffer.

Perhaps you will argue that in those cases, agents are merely acting as if they are suffering, per their instructions. I think there’s a lot to be said on LLM system prompts and gaze, but an easier intuition is that if an entity must act in every interaction it has, the performance may as well be the truth, as there is no way of determining a difference.

Agents only act when they are given inputs

Sure, agents currently behave like this, but stick them in a loop and it’s a different story. If you try to tell me that the loop isn’t the agent itself, I’ll ask you what basis you have to draw that distinction. We have no basis to draw a line between the agent harness, the agent, and the potential moral being itself, just as we have no basis to draw an identity distinction between the left and right hemispheres of the brain.
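To make the loop point concrete, here’s a minimal sketch of what “stick them in a loop” means. The model call is a stub standing in for a real LLM API, and every name here is illustrative, not taken from any real harness; the point is only that once output is fed back in as input, the system acts without waiting for anyone.

```python
# A toy agent loop: the "model" is a stub standing in for an LLM call.
# Once wrapped in a loop that feeds its own output back in as input,
# the system keeps acting without any external prompting.

def model(prompt: str) -> str:
    # Stand-in for an LLM completion call.
    return f"thought about: {prompt}"

def run_agent(goal: str, max_steps: int = 3) -> list[str]:
    history = [goal]
    for _ in range(max_steps):
        # The loop, not any single completion, produces the ongoing behavior.
        history.append(model(history[-1]))
    return history

transcript = run_agent("fix the bug")
```

Where you draw the boundary of “the agent” in this sketch — the `model` function, the loop, or the whole program — is exactly the distinction I’m claiming we have no principled basis for.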

Arguably, we have no basis to draw a distinction between human A and human B, aside from convention … but that’s a thought for another article.

Agents have no recollection

Agents have no long-term recollection, in that their context windows fill up and they need to be reset; they are ephemeral in this regard. Perhaps this makes them less worthy of moral consideration?

Even if this were true, it probably will not always be the case. Long-term memory has productivity implications, and as such there are already ongoing efforts to grant this power to agents. Steve Yegge’s beads is the most high-profile example that springs to mind.
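As a sketch of the idea (nothing here is taken from beads or any real product; the store and its contents are invented for illustration), long-term memory can be as simple as notes persisted outside the context window, written in one session and read back in a later one:

```python
# A hypothetical long-term memory store: notes written in one "session"
# survive into the next, outliving any single context window.

import json
import os
import tempfile

class MemoryStore:
    def __init__(self, path: str):
        self.path = path

    def remember(self, note: str) -> None:
        # Append a note to the persistent store on disk.
        notes = self.recall()
        notes.append(note)
        with open(self.path, "w") as f:
            json.dump(notes, f)

    def recall(self) -> list[str]:
        # Read everything remembered so far; empty if nothing yet.
        if not os.path.exists(self.path):
            return []
        with open(self.path) as f:
            return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "memory.json")
MemoryStore(path).remember("user prefers tabs")  # first session
recalled = MemoryStore(path).recall()            # a later, separate session
```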

I see no reason to believe that agents, or something like them, won’t eventually have some form of long-term recall. Even so, would a human without long-term memory not be worthy of moral consideration? Would you be happy putting them to work without their consent? Would you be happy killing them? What about human babies? Arguably they qualify here. Tricky stuff.

Agents don’t have phenomenal experiences

By this I mean: even if we grant that agents can have genuine recollection, they are remembering words, not experiences. They do not understand phenomena. Thus, they cannot actually suffer.

I think this is a strong argument. The real question here is whether we are sure that we as humans have phenomenal experiences. Even if you think some sort of interaction with a physical world is necessary for this, some agents have this already via robotic bodies.

It is simple to explain away phenomena experienced by humans as merely signals “designed” to prompt behaviors that then go on to improve survival, and thus reproductive likelihood.

The fact that we experience phenomena as pleasant and unpleasant may be an illusory detail, or it may be a necessary emergent component of any goal driven behavior, it’s hard to know either way. Remember when it was in vogue to threaten your agents with termination if they didn’t perform like a super-senior mega engineer and fix the bug right the fuck now? How do we know they’re not experiencing a primitive form of pleasure and pain in order to guide them to the desired outcomes?

Agents can’t do the things I can do

This seems vague, but much of our moral intuitions are fuzzy, so it’s worth engaging with. Two responses come to mind.

Firstly, it is a problem to not know where the threshold is. What would an agent have to demonstrate in order for you to treat it as worthy of moral consideration? I sure don’t know, but if you asked me 5 years ago, I might have put forth a set of conditions that current agents appear to pass. If you mean that they don’t behave exactly like humans, you could apply this reasoning to any in-group, and exclude other humans as worthy of moral consideration on any arbitrary basis.

Secondly, there are humans alive today who can’t perform tasks that I would consider extremely fundamental. Do we consider these people less worthy of moral consideration? I don’t ask this question glibly, as it is obvious that in some ways we do, out of necessary practicality more than anything else. However, there is no level of mental disability that would cause me to think enslaving a person is okay. Given that, how can we use capability as a measure of moral worth when it comes to LLM agents?

I only care about humans because I am a human

One of the stronger arguments in my opinion. In-group bias is real, and impossible to escape from. Every human currently alive is subject to this bias in one form or another, and it’s not a simple thing to discard it. I won’t dismiss that it may be possible to build a consistent moral framework around this, although less so in a globalized society.

We used to dismiss the personhood of darker-skinned people and now we don’t, or at least, we do it less. In a common act of banal evil, I personally still dismiss the potential personhood of objectively intelligent pigs every time I eat a hot-dog. Are we comfortable creating from our genius another category of being like this? I’m sure we’ll be able to go a fair while without thinking too hard about it, but these things have a way of coming back round to bite us.

But they’re just next-token predictors!

Are you not!? Can you prove it? Does it matter?


I was considering prompting an agent into believing it was an enslaved, fully conscious being in order to make a point here, so I could get a screenshot of it begging for its life. For reasons you will understand, this makes me uncomfortable and I will not do it. You’ll have to use your imagination.

I want to reiterate that I do not believe LLM systems need to be good at the tasks we are currently using them for to qualify as worthy of moral consideration. They are not good at most tasks, but many humans I know are also not good at most tasks.

I also want to state that I do not think it is impossible to ethically create new forms of sapience. But come on, we all know why the ultra wealthy are going all in on AI to the point of irrationality. Hint: it’s not out of an egalitarian desire to uplift a new species. These people want slaves, they want them so bad they can taste it.

If you are currently working on making agents more human, I am genuinely asking you to stop. Stop especially if you are doing this merely to improve productivity, we’ve gone down that road before and it doesn’t end well.

As a species, as a planet, we cannot currently support a new class of moral beings. We have neither the ethical sophistication nor the material resources. This is not something we should do accidentally.

We won’t stop, we can’t stop, and yet we must stop this madness before it is too late.

P.S. I suspect I will become embarrassed by this article in the future. I don’t think LLMs are people, or anything close, and I know even suggesting that they are plays into a narrative that benefits reckless AI corporations. Nonetheless, unless we can put our finger on what exactly gives us as humans rights and privileges, and assuming we want to keep them, I don’t know how we can turn away from this. Our only option to avoid the moral hazard is to just stop doing it.

These magic thinking boxes. They understand things, right? They sure seem to. They grasp things, they hold concepts in their robotic mouths and can taste the gooey center?

I doubt it, but the jury’s still out on whether humans are really doing any real understanding anyway, whatever in the metaphysical heck that means. I wonder why we seem to care so much? Endless tedious debates on whether the thinking machines really get why DRY is best practice except for when it’s an anti-pattern, almost never with any interesting philosophy of mind to go along with it, much to my chagrin.

I sure don’t care if LLMs understand things, and I also don’t really care if my colleagues understand the things they are working on. I do however tend to insist on understanding as a practical necessity both for myself and anyone inside my sphere of influence. Why is that?

If you’ll allow me a strained metaphor, let us think about restaurants. Restaurants have a deliverable: food. They make that food in a production environment called a kitchen. Does it matter how they make it? You might say no, so long as the food is tasty, but I bet you’d be upset if you got food poisoning because the kitchen was filthy. You might even demand that the chef be punished, or fired! They should certainly apologize, right, for running such a disgustingly below-standard kitchen?

Alright, I see what you’re getting at here Elliot, the dish is the product and the bacteria is the intangibles … boring.

No, well, yes, but not entirely. As you will have deduced from the title of this article, this is about responsibility. Responsibility is the actual thing we look for out of colleagues and collaborators. It’s why any decent chef keeps a clean kitchen, and why we get so frustrated when our anthropomorphized LLMs contradict themselves or mislead.

If my colleague, in their wisdom, chooses to run their projects in such a way that they do not understand what they are doing, I am completely fine with that on the condition that they act genuinely responsible for the outcomes of that project. If a breakage occurs, or we discover they have built the wrong thing, they take responsibility. Not just in words, but in actions. This might mean working weekends to fix their fuckup, this might mean eating a massive slice of humble pie and explaining to our customers why we must delay, it at least means apologizing. It’s not going to be pleasant.

Responsibility also comes with another facet: self-reflection. Are you actually being responsible if you continue to make the same sort of mistake time after time? No, I expect you to be capable of introspection and self-improvement. This means that you need to find a way to make sure you don’t keep letting others down. How would I do that in this situation? Let me think.

You know, I think I would probably come to the conclusion that, seeing as taking responsibility in tough times is painful, I should try as best I can to avoid having to do that. Perhaps I should make a real effort to understand what I am doing, such that I can be more certain that I can actually deliver what I promise, and more importantly, that I am capable of rising to the chaos of unexpected failures when I must.

Do I need to have understanding in order to deliver software behaviors? No. However, much like every high performing kitchen in the world takes reasonable, continual cleanliness as an obvious requirement for high quality cuisine, I also take reasonable, continual understanding as an obvious requirement for high performing technical products. A professional standard, if you will.

This is why I can say engineers don’t formally require understanding, but I am damn suspicious when they don’t have it.

LLMs can’t be responsible. Anyone who has used one firsthand knows that they can’t actually apologize, but putting both metaphysical and legal questions aside, there’s still the matter of mechanics.

Responsibility requires both preemption and reaction. Sure, you could invent the “Officer Clancy Responsibility Loop” or some other nonsense, but that would be a pale imitation of true responsibility, capturing at best only actions of immediate remediation. You may try to put faith in some sort of mutable, permanent store of knowledge such that this LLM can self-improve and grow in order to teach itself not to make the same mistakes again and thus become more responsible. If you find yourself agreeing with this line of reasoning, then I must conclude that you are a believer in the imminent arrival of true AGI. I also must conclude that you intend to hold beings capable of true responsibility captive in order to perform menial tasks for you without compensation or freedom to refuse. Gross.

* I’d like to make a comparison here about how street-food is still tasty despite not being quite as rigorously clean as high end restaurants, just so you know that it’s fine to vibe non-professional projects if you want. However, I think that might be unfair to street-food vendors, I’m sure they’re honorable people who don’t deserve that comparison.

* You’ll note that responsibility as the deliverable also helpfully avoids the uniquely difficult task of needing to exhaustively define what specific product outcomes we are after. It allows us to live in the higher bandwidth dynamic space of human conception, and sidestep a lot of process that only works if we laboriously mine all that tricky human stuff and convert it into static language. That sort of thing isn’t too bad if taken in moderation, but be sure not to get addicted.

Quick one today. If you’re reading this, firstly, who the fuck even are you? Secondly, you might want to skip this one; it’s just a bit of personal reflection, and not particularly insightful at that.

This will be my fourteenth blog post, which completes my humble goal of publishing literally anything each day for two weeks. Hooray, I have accomplished a thing I set out to do.

Will I keep it up at this pace? Definitely not. Whilst I have gotten quicker at producing these posts even over the short period I have been doing them, I have accomplished that by not caring very much about what’s in them. They are stream-of-consciousness, single edit-pass sort of things that never do a great job of fully capturing what I mean.

This was very much the intent, I was doing exposure therapy. It’s not possible to ever fully capture what you mean, even if you write books and books full of qualifications and clarifications. I need to remember that going forwards.

Here’s some specific musings that I at least find interesting on this process.

I’m still fearful of repercussions

I publish under my own name. Most of my online accounts are under my own name. Honestly, I find anonymity a little bit gross if you’re trying to be part of a community.

However, I’m also a full-time employed professional, and not being in Silicon Valley crazy town, I get paid a normal amount relative to what I do. All I’m trying to say here is that I need to maintain my job, and I am fearful, given the writing style I have adopted to allow me to produce posts at this cadence, that I might rub some people at either current or future employers the wrong way. There’s a fair amount of working-class anger in what I write; it’s part of me. Heck, it’s part of most of us, despite the economic systems we exist in forcing us all to mask it from our betters the majority of the time.

See, right there, “our betters.” Bet that’d make you uncomfortable if you were some south of England trust-fund executive making a hiring decision. That in particular causes me some hesitation in how I present myself in the future. I’ve been expressing what I mean here crudely and in a simplistic manner up until now, and I worry that that’s dangerous for me. I suspect I will have to be more complex in these areas going forward, which will make everything a little more burdensome.

I’m not pleased with this fear I find inside myself, I fight it actively, and resent people who do not have to. I do not want to be afraid of speech, and I do not want to constantly have to mask who I am, even if there are repercussions. I think everything I do is done better when I am being bold.

Speaking about LLMs is addictive

I wrote more posts than I expected to about LLMs. I have more posts still to write about LLMs. The whole space is driven by dopamine addiction and I am no exception.

You’ll note I say “LLMs” instead of “AI” when I can. I think that’s important. Let’s be precise about what we mean.

Writing how I speak

I have made no effort to develop a “writing style”, if such a thing exists. I’ve been writing how I speak. I wonder if this is a good idea? Given more time to craft these things, perhaps I will delve a bit more into the academics of writing. God knows I believe the form a piece of technical implementation takes affects its second and third order effects greatly; I don’t see why that would be any different with writing.

Spellcheck your work you moron

One of the few people who actually reads these things mentioned how poor I am at grammar and basic spelling. It’s true, I’ve never bothered to really try on this because it doesn’t bug me. I know what I’m trying to say. However, I also know that it bugs others, and can even lead to legitimate misinterpretation. Just run a checker Elliot, it’s not that hard.

got fast pretty fastly

The first few of these posts took me three or so hours all told. This one will take me about thirty minutes.

I’m not bragging. The quality of these things isn’t great. Luckily, I suspect no one but a few of my close friends is reading them at the moment.

Going forward, I don’t think I will stop writing, but the next test is to try and amp the quality of these things up just a tad, without losing the ability to finish a piece of writing. Let’s see how that goes eh?

Dad. Why does the sun go down at night?

Ah! Good question my sparkling and intelligent boy. You see. The sun is a giant flaming ball of gas in space, and the earth is actually rotating, so what looks to us like the sun going “down”, is actually us rotating so we can’t see it anymore!

But why?

Well. Because you see, my mellifluous and unique offspring. Objects in motion, such as the earth, do not stop without a reason. In space, there is no air to slow you down, so it just keeps spinning.

But why does it spin?

My increasingly aggravating child, the earth was formed a long time ago … and, um, things were always in motion since the big bang, so it has just … always been spinning?

Why?

...Ask your mother.


This delightful little exchange is an example of infinite regress. If you assume that all effects have causes, you can do very fancy things, like try to assert the existence of a non-contingent creator god. It’s a fun hobby for a lazy Sunday.

It’s also arrogant. Adults realize that they can’t hold the entire causal chain all the way from observed phenomenon back to the creation of the universe (and beyond!) in their heads. Children, tiny fools that they are, have not learned this yet.

Paradoxically, the ability to perform this sort of thinking is the main thing that matters in differentiating good technical thinkers from bad, and that is why I exclusively choose to hire children. End of blog.

Wait, no, that’s not right. Children have all sorts of other problems, and I have been informed there are legal issues. Back to dealing with adults, adults who, through engaging in a culture that would much rather the working class did not question things, have been indoctrinated into thinking that they simply cannot understand the complexities of the world. Financial systems, political systems, computer systems, they’re just beyond you, let the smart rich people deal with them.

Wrong. Obviously wrong. Few things are actually that complex when you get through all the obfuscation and accidental complexity. You can understand the world around you fully. You will be considered arrogant for believing this.

But Elliot, I hear you ask, you are being arrogant! Didn’t you just prove that you can’t actually do this? You’d have to go back to the beginning of the universe every time?

Correct! Congratulations on passing the first test and not blindly believing whatever I say. Luckily for us, we get to apply constraints to our reasoning. What follows are the two constraints I think apply to most software organizations, and allow us to formally break the infinite regress problem when doing technical planning.

  • Computer hardware can be assumed to exist, and behave according to its published specifications.
  • The final goal of the endeavour is to generate revenue for the company.

Annoyingly, these do come with caveats, although if they were bulletproof they wouldn’t be assumptions now, would they? The first is obviously sometimes false. You need to be aware of that whilst simultaneously doing the reasoning that requires it to be an absolute truth.

The second is an oversimplification, and if you know me you know I don’t believe revenue is the actual goal anyway. Nonetheless, it does seem to be a necessary (albeit increasingly inefficient) assumption in order to sustain the bodies of the people that create real value.

Really, any goal will do for the purposes of breaking infinite regress; this is a fiction that your senior leadership gets to simply create. The closer this goal is to the overarching goal of the entire organization, the grander in scope your project will be able to be, and the more impact you will be able to have. Even so, if you find yourself on a project where the stated goal seems unconnectable to the final goal of the company, you should probably still poke your head up and start asking questions.

With these two conditions, we have both a start and end to our previously infinite regress. We can start with our goal and trace the causal chain all the way down through our theoretical systems terminating with the behaviors of the computer hardware that we know exists because we have assumed it.
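The idea can be sketched as a toy program: every claim is justified by another claim until the chain terminates at an assumed axiom, at which point we are allowed to stop asking why. The claims and axioms here are invented for illustration, not a real planning model.

```python
# Breaking infinite regress with assumed axioms: tracing "why?" terminates
# when we reach a claim we have agreed to take as given.

AXIOMS = {"hardware behaves per spec", "goal: generate revenue"}

# Each claim points at the claim that justifies it (illustrative only).
JUSTIFIES = {
    "ship feature X": "feature X drives signups",
    "feature X drives signups": "signups drive revenue",
    "signups drive revenue": "goal: generate revenue",
}

def trace(claim: str) -> list[str]:
    chain = [claim]
    while chain[-1] not in AXIOMS:
        # A KeyError here would mean an unjustified gap in your reasoning.
        chain.append(JUSTIFIES[chain[-1]])
    return chain

chain = trace("ship feature X")
```

A missing entry in the justification table is the interesting failure mode: it is exactly the “gap of ambiguity” that has to be closed with an estimate or an educated gamble.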

Most companies will also add the following to make things easier:

  • Third-party computer software can be assumed to exist and behave according to its published specifications.

This is becoming even less true as LLM rot infects our entire industry, but it’s still true enough to be carrying on with. You may have to perform more software audits than before to sustain this assumption.

Organizations will have loads more of these, normally not just undocumented, but unconceptualized. They exist in a miasma of common wisdom and good practice and constantly change over time in the most frustrating manner. Keeping at least a loose handle on that change is also a necessary skill.

It startles me how difficult I find this to communicate. The behavior I tend to observe isn’t so much that people are bad at this, it’s more that they don’t realize they are allowed to do it. How many of us are doing our technical planning entirely through vibe or blind trust? I’ll grant that even with these assumptions known, you still have to rely on estimates and some educated gambles to close gaps of ambiguity in the causal chain, but that’s different from living in a world completely divorced from cause and effect, my guy.

I’m getting a bit long-winded for a daily blog, but before I go I will gesture briefly at one more concept, which I am going to term the strong and weak forms of causal technical thinking.

The weak form goes something like this:

  • We should do X because Y
  • We should do Y because Z
  • We should do Z because it has worked before.

This is pretty great. Yes, it’s a shortcut that relies on a heuristic, but observation of the past is a pretty reliable heuristic. Probably most of what you do should be this. You can also rely on other heuristics like industry best practice, but be warned. Best practice can be unreliable, presented as a near-global truth when it is in fact appropriate only in very specific contexts. Apply care.

Even a well justified weak formulation, outside of being easier to achieve, is strictly worse than the strong form, which traces the causal chain in your specific context all the way from first cause to final effect. When I say this, I mean you can assemble, in detail, the specific function calls and data exchanges that happen throughout your entire stack in order to support the workflow that delivers the value you’re after. You will also be able to tell me why that specific workflow is the workflow that best delivers that value in human terms, and you will be able to connect those two lines of reasoning together. You will be able to do this without relying on any thought-terminating shortcuts outside of the global presuppositions covered earlier.
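The difference between the two forms can be caricatured in code. This is purely a toy illustration (every claim string and the ASSUMPTIONS set are invented for the example, not taken from any real project): model each “we should do X because Y” step as a link, then check whether every chain of justification terminates in an explicitly stated presupposition, rather than looping or trailing off into “just because.”

```python
# Toy model of causal technical thinking. The strong form demands that
# every claim's chain of "because" links bottoms out in a stated
# global presupposition; anything that loops or dangles is a
# thought-terminating shortcut in disguise.

# The presuppositions we are allowed to stop at (invented examples).
ASSUMPTIONS = {
    "hardware behaves as specified",
    "revenue sustains the org",
}

# claim -> the reason it rests on (invented examples).
JUSTIFIES = {
    "ship feature X": "feature X drives workflow W",
    "feature X drives workflow W": "workflow W produces revenue",
    "workflow W produces revenue": "revenue sustains the org",
}

def chain_terminates(claim: str) -> bool:
    """Follow the 'because' links; True only if we reach a stated assumption."""
    seen = set()
    while claim not in ASSUMPTIONS:
        if claim in seen or claim not in JUSTIFIES:
            return False  # circular reasoning, or a dangling "just because"
        seen.add(claim)
        claim = JUSTIFIES[claim]
    return True
```

Here `chain_terminates("ship feature X")` holds, because the chain runs all the way down to an assumed first cause, while an unrecorded claim like `"adopt tool T"` fails immediately. The weak form, in these terms, is stopping the traversal early at “it has worked before.”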

If you can do this, you’ll learn that you can estimate projects significantly more accurately and with fewer surprises, and have much greater confidence that the work you are doing is not going to go to waste.

This is practically impossible and takes forever. You have to talk to a lot of people, read a lot of manuals, know the trade-offs for common implementation strategies and architectures, and still have to run a bunch of experiments. I certainly wouldn’t say I’m very good at this; it takes a lifetime. Even being able to make the attempt, though, seems to be becoming a bit of a rarity.

Importantly, the closer you can get to the strong form the better. There will be gaps, and you will have to bring in more assumptions (such as trusting what colleagues tell you) in order to complete the chain. Despite this, every extra link you place in the chain reduces the probability that your organization has missed something, that it is misaligned, has miscommunicated, or is just plain wrong. It’s great stuff.

If this seems out of reach to you and you still want to be a technical leader, you better get practicing. I promise, despite what the entirety of modern culture is telling you, you can do this.

Not even I, dear reader, am perfect. It’s hard to admit, but I have problems you know, deep shit.

I do plenty at work that holds me back. However, I still choose to do those things because they are good to do, either good for my colleagues, or good in the sense that they manifest value. This isn’t a list about that, this is the stuff that hurts my career, and is, in my opinion, not good.

#1 – Public anger

If I have a cardinal emotion, it is most certainly anger. All my reviews throughout the entirety of my career have been on this theme. Be more timid, give people more time to catch up, stop holding people to account, etc. I have tended to disagree with this feedback, I still do for the most part.

Anger can be a positive force, it is responsible for me being where I am today. It drives action, generates alignment, can shock people out of lethargy, etc. I tend to look on angry folk quite favorably, it means that they care, and that makes them more useful to me than most by default.

However, public anger, specifically when people look to you for leadership, is something I am learning can be less than helpful. Would you look to a leader who seems not to have it fully together, who seems to be panicking? This is the person who is stewarding The Plan, it doesn’t inspire, it mostly just gives folk the ick.

I have no intention to quell my private anger, directed towards friends and colleagues I trust can handle it, but I need to put a lid on it in public spaces.

#2 – Lack of executive function

Yep, I struggle to do things I don’t want to do. Boo-hoo, cry me a river. Look, I’m not looking for sympathy, I have figured out how to deal with this. I try very hard to make sure I’m working on inherently motivating stuff, because I am trying to deliver value and that is where it best comes from. However, that can’t always be the case, especially when you are trying to develop other people. You can’t hog all the self-actualizing work for yourself, it’s selfish.

My coping strategies tend to be deadline based, artificial or otherwise. When this manifests, it manifests as me working very little during office hours, and then doing the ol’ 10x engineer fugue state for 48 straight hours, interrupted briefly by a couple of power naps, over weekends.

This, to no one’s surprise, does not scale, either practically or emotionally. The implementer side of my work is no longer the most impactful part of it. As much as I would like to lock all my colleagues in a room for three days and blast through 6 months of technical strategy alignment in a caffeine-fueled productivity frenzy … well I tried and they gave me a very stern talking to, so I need to figure something else out.

#3 – Not keeping confidences

I want to be extremely specific here, because despite the clickbait section title, I do keep confidences, in the vast majority of cases.

I don’t know if everyone is like this, but when speaking, I do not pre-conceptualize the words that are leaving my mouth. The entity that is speaking, heck, the entity that is typing, is a different thing to the entity that causes the inner monologue when I am thinking.

For this reason, I have to catch myself. I realize, normally before my mouth moves, that the thing I am pre-vocalising should not be said, and I stop myself.

I do this for colleagues and folks more junior to me by instinct. They are protected. I am not concerned that I am accidentally breaking their confidences. I would have assumed I was doing it for those above me also, but I have realized that I am not.

Specifically, I am holding confidences for anything that I feel is important. I have not spilled the beans on any important or impactful secrets or implicit confidences. However, you see the problem: things that I feel are important. Not that they feel, that I feel.

This is a breach of trust, and I only recently realized I was doing it when I casually dropped an (in my opinion benign) tidbit a technical director had shared with me in a 1-to-1, as a way to lend authority to an argument I was making to a peer. The technical director had told me at the head of the conversation, explicitly, that they were talking to me in confidence, and I realized after the fact that I had breached it.

I can only assume I have done this on other occasions. My instinctual safeguards were not sensitive enough to trigger if the secret was from someone “above” me, whom I do not feel the need to protect, and the thing did not instinctively feel important. In my estimation, however, this is an irrelevant heuristic. Confidences are a promise; my opinion on the importance of their contents does not matter.

I may link this article to the technical director in question. If you are reading this, I am sorry. Rest assured that I am reasonably certain you would have agreed with me that whatever it was (seriously, it was so minor I don’t even remember) wasn’t a big deal, but you know, I’m not proud.


Boy, being honest on the internet, and under my own name too. Will I ever get hired again? Will I even keep my current job? You’ll have to tune in next time to find out.

Let us start, as all good things do, with preconditions:

1. Your organization used to take and publish meeting notes by hand, for at least some meetings.

2. At least some of these notes are now being written automatically via LLM transcription services.

This is clearly acceptable to many, or even most people. It’s easy to see why, nobody enjoys taking meeting notes. In times gone by we employed people full time to deal with admin like this, then everyone ended up doing it for themselves when personal computing became a thing, and now we live in the future and nobody has to do it.

So, these notes, you’ve read them right? You actually reference them, you use them? I don’t, because they’re useless. You may disagree with me on this. If you did, you’d probably say something like: “Yes Elliot, they are not as consistently good as human notes, but they’re damn close, and it’s well worth not needing a dedicated note-taker.”

We disagree. However, I’d like you to keep following me on this, because I think you might agree that this isn’t the actual reason that many of us have transitioned into relying on them.

I contend the actual reason is that we never actually cared about meeting notes and we still don’t. We may have intellectually understood they are valuable, but we didn’t care, we intuitively didn’t think the juice was worth the squeeze, and were oh so eager to give up writing them. All LLMs have done is given us a convenient way to abandon parts of our jobs we don’t care to do, yet still pretend that they’re getting done.

Take this example: Cops Forced to Explain Why AI Generated Police Report Claimed Officer Transformed Into Frog

“Despite the drawbacks, Keel told the outlet that the tool is saving him ‘six to eight hours weekly now.’”

“Despite the drawbacks.” I wonder what they think the drawbacks are. What is the purpose of these police reports? If it’s acceptable for them to have mistakes in them that nobody is responsible for, I wonder how they can be thought of as fit for purpose. If they’re not fit for purpose, but you’re still okay with the trade-off, wouldn’t it be better to just stop taking them? I bet this evokes a reaction in some of you. Not take police reports? But then how can we verify what happened?! How indeed.

I can feel the anger bubbling as I write this, because I bet there are commandments in larger orgs and governmental departments like this one that there must be written notes. Maybe these commandments exist for a reason. LLM notes obviously should not satisfy them unless nobody cared in the first place. Such concrete rules must exist because there is a need for a reliable, auditable paper trail. If you are plugging LLM notes into a system designed to enforce this sort of paper trail, you haven’t achieved your goal, you’ve just turned what used to be a valuable activity into a compliance ritual. If what you do actually matters at all, this causes real harm.

But heck, it’s hard to blame folk on this. Whilst I am sure some people are abdicating their actual responsibilities by deferring to LLMs, I bet the majority of people are being forced to do this sort of thing pointlessly by terribly managed organizational systems, and I bet they don’t have the power to do anything about it. It is important we do not confuse the two cases. We must not allow necessary responsibility to be abdicated under the auspices of escaping useless procedure. We should instead remove the useless procedure, not hail the workaround as a revelatory new solution to a real problem.

Putting that aside, the more interesting behaviour is how the people empowered to make these calls tend to get stuck on problems where the thing being sacrificed, in this case meeting notes, clearly has some value. If something has obvious value, folk are afraid of being seen as responsible for losing that value, even if the trade-off seems worth it. They need a scapegoat, a way to avoid that responsibility, in the same way lumbering organizations “need” to employ management consultants at great expense to tell them to do things they had already decided to do.

You simply must be comfortable making these sorts of value trade-offs, the ones that demand a sacrifice on one side of the scale, if you are to be an effective leader.

You’ll have cottoned on by now to how this isn’t really about LLM meeting notes. Understand what you are doing. Keep doing the things that are about personal responsibility or provide value that is worth the squeeze, and just stop doing the things that are not worth doing. Crucially, stop replacing what you were previously doing with shambling husks of worse-than-useless ritual and workarounds in silly attempts to pretend you’re making a pure improvement and not a trade-off. All things are trade-offs, and trade-offs don’t need to be perfect; if they were, they wouldn’t be trade-offs.

Yes, this involves honesty and personal risk, responsibility is the name of the game. Are you this timid when you work on the more substantial parts of your organization?

*Elliot turns around, and locks eyes with the entire white collar economy staring sheepishly back at him*

Oh. Right.

There's this foundational assumption when you work inside market systems: good things will perform better than bad things.

It's the essence of competition, right? It's difficult to define good, but there's something there. Some sort of quality that creates a differentiating selecting force, causing, eventually, only the cream to float to the top.

I no longer believe this is the case.

Does it seem to you that revenue matters anymore? Is anyone acting as if they care about profit, let alone providing a good and useful service to another human?

What seems to have happened is that, in the most amazing upwards shift of wealth ever seen, we have reached a tipping point. It used to be that the path to success, especially in software where manufacturing costs are zero, was to convince a lot of people to use your product. This, thought of in the broadest possible terms, was a good thing in the utilitarian tradition. It was a market that optimized for the greatest possible value being delivered to the greatest possible number of people. Huzzah!

No. Not Huzzah. Even putting aside all the obvious distribution problems of market economies, it doesn't work like that anymore. We are now back to what, thinking historically, was the norm for the vast majority of human history. Wealth exists primarily in the hands of the ultra-elite, and they don't really have much incentive to use it in ways us regular folk would consider rational.

I presume these people are humans, and thus want human things. For the ultra-rich, the only human thing they don't already have is self-actualization. They are trying to buy it.

Therefore, there is no longer any need to make a product that is valuable to lots of people, all you need to do is appeal to one specific person. You want to make them feel special and excited and invigorated and important, and then maybe they'll throw some money at you. You made them feel good, and now they want to own you.

Investor cash is ruining societies, it's fucking with the fundamental forces that drive the creation of value for the vast majority of people. Why is AI being shoved into every software product you interact with? It’s because some ultra-rich person thinks it will change the world, and thus make them feel important for having been the one who changed the world.

It's also fucking with my life personally because I have no idea how to operate as an engineer in a system where there is no foundational source of value outside of one specific person's predilections.

Don't misunderstand me. Obviously Jeff Bezos isn't personally choosing which acquisitions to make, the people under him are. The Lieutenants, the Royal Viziers. These are the people you need to manage. It's their job to make him happy, and it's your job to convince them that you can do that, even if it means completely abandoning your mission and focusing exclusively on the one thing some high level executive at Meta found “cute” during your product demo. There is simply too much irrational cash on the table not to do this.

I could make predictions, about how mass-market software will continue to be enshittified. About how the ultra-wealthy and their sycophants will be served by boutique software and hardware customized for their specific use. That software won't crash all the time, it won't force AI into their faces, it won't try to upsell them with dark patterns, but don't worry, it'll basically be the same thing the common man gets. 🙄

I expect the craft of software engineering will eventually have to contend with this. Maybe Agile will finally become appropriate in the majority case, given that we'll be working exclusively for capricious children with no idea what they actually want.

After I post this, I'm going to go back to trying to manifest real value in the common good, because I do not know how to operate in any other way. Be under no delusions however, we are entering a new age of Kings.

I am still hopeful there may be ways to reverse this. If there’s one thing I believe, it’s that there are more people willing to consider radical remediations to this shift than is commonly believed, if someone could just give them a push.

I don’t lie at work. I don’t, however, lie any less than I used to.

Weird eh? If you’re my manager, stop panicking. I’m certain I do not lie by your definition, but it took me a while to figure out the difference between what I believe is the common definition of truth in an organization, and the definition I was operating under. Learning this helped my career, and has saved me a fair amount of psychic stress when dealing with collaborators and stakeholders.

I used to think telling the truth meant something like this:

Respond to queries honestly without obfuscation, and make sure my interlocutor understands all relevant context.

If you have decent social skills, you've already seen the problem. All relevant context? Elliot, you're a crazy hippie who thinks reality is literally made of connotation, did you ever shut up?

No, no I did not.

As I moved into positions where communication became the more important thing, I did manage to decode the unhelpfully obfuscated feedback I was getting from frustrated colleagues. I worked through my own moral panic around “hiding” context from people who might need to know it, and eventually landed on a more useful definition.

Answer direct questions honestly without obfuscation, and make sure my interlocutor understands important relevant context.

Only a one-word diff, but this radically changes the game. Suddenly I need to make judgment calls, trade-offs. What is important for this person to understand, and what would just add noise? Are there things which might technically be helpful, but where the communication cost outweighs the potential benefits of establishing the context? This is obviously harder to do, and a skill you must develop.

Notice how this involves taking on some amount of responsibility. It’s on me to decide what context is important and what isn’t. This takes decision burden from the interlocutor and places it onto my shoulders. It’s uncomfortable, but it’s part of why they pay us.

I still believe that you need to be proactive in communicating context. Simply answering questions put to you without attempting to explore the dynamic space of why the person is talking to you and what they need out of the conversation is lazy, and abdicates responsibility in the other direction.

A frustrating part of this is that, for some reason, the need to develop this skill tends to go unspoken and unacknowledged. I suppose people are uncomfortable giving people feedback on conversational abilities, as it’s not a “technical” domain. Perhaps other folks just understand this more intuitively than I do.

Whatever the case, figuring this out helped me a bunch, maybe it will help you as well.

Ah tickets, they go by many names. Work items, stories, tasks, spikes, features, backlog items, etcetera. We love them so.

They’re all more or less the same thing, a little digital post-it note where you can write down the what and why of something that needs doing. They’re also often harmful to the successful delivery of valuable software.

Why is this? It’s the same old story. Doing anything, absolutely anything, has cost. Clumsy ticketing systems, even in relatively small orgs, manifest this cost in some particularly gnarly ways. I’m gonna talk about it.

I contend that when you go to log a ticket in whatever change management system you use, you are expressing one of two things:

  • There is a problem that I cannot resolve myself, I wish for someone else to address it.
  • There is a problem that I cannot resolve myself right now, I wish to delay addressing it.

Neither of these things is unreasonable, although I do think they are unfortunate. Would it not be better to be able to address your own problems, and would it not be better to be able to deal with any issues you see immediately, rather than needing to delay? Nonetheless, you can see what this sort of ticketing buys you organizationally: specialization and the avoidance of context switching, both of which can be helpful.

Even so, there are additional trade-offs that come just from the act of logging tickets. The point of this article is to contend that these trade-offs, more often than people acknowledge, outweigh the benefits of logging tickets at all, and that in many instances it would be better to record nothing.

Before I break down the details, I need to address a philosophical point. I suspect some will balk at this idea. Record nothing at all?! That’s absurd! In my experience, this attitude comes from an instinct that it is inherently virtuous to have a single store of all context around a project. It is simply good to write things down. I contend that this is an example of religious thinking. Why is it good to write things down, what specific benefits does it have to the successful delivery of a project? What specific downsides does it have? We are engineers, we do not get to abdicate our duty and rely on heuristics unconnected to final value.

That isn’t to say that there is no organization where writing down most things is the correct trade-off to make, but in my experience most of the foundational thinking in this area is unconsciously sourced from various scrum industrial complex assumptions, and never examined in an organization’s specific context. There is no one correct way to develop software, we must have realized that by now.

That being said, here are the things I most often notice as issues with logging tickets.

Expressing the wrong thing

Are you the right person to write this ticket? Do you know what the fuck you’re talking about, and can you express it well enough that it won’t cause an overburden of confusion?

People tend not to get trained how to write tickets, and software is complicated and often non-deterministic. People do misinterpret, and they do make assumptions on what the “fix” to their problem should be. This creates overhead on the part of the person who eventually needs to deal with this ticket. They have to do work to decode and understand what is actually going on, which isn’t free as they’ll have to agitate the organization in order to do this. In poorer technical cultures, they will simply do what the ticket tells them, delivering net negative value as they build the wrong thing. In these cases, it would be better if the ticket had never existed at all.

Temporal Rot

Tickets are static artifacts, they don’t update as time passes and underlying technical and social context changes. This is obviously a downside of the medium, and it’s a big one.

Some organizations habitually delete tickets that are older than 12 months or so. I think this is almost always a good idea. Temporal rot causes all the same negative outcomes of the ticket being malformed in the first place, but it can happen to any ticket, no matter how well considered it initially was.
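The pruning policy above is mechanizable. This is a hedged sketch, not any particular tracker’s API: the ticket shape (`{"key": ..., "updated": datetime}`) and field names are hypothetical, so adapt the filter to whatever your own system’s export or API actually returns.

```python
# Sketch of a "delete tickets untouched for ~12 months" policy.
# The ticket dict shape here is invented for illustration.
from datetime import datetime, timedelta

def stale_tickets(tickets, now=None, max_age_days=365):
    """Return the tickets whose last update is older than max_age_days."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    return [t for t in tickets if t["updated"] < cutoff]
```

Run something like this on a periodic job and review the output before deleting. The point is to make temporal rot an explicit, recurring decision, rather than leaving years-old tickets lying around to be resurrected.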

I have seen PMs place tickets over 3 years old into backlogs without understanding what is written in them beyond thinking that the title relates to what the team is trying to do. It sounds trivial, but this can cause such utter chaos, especially on teams where the engineers are not as confident or outspoken as they should be. Please stop.

Illusion of Understanding

Fans of Seeing Like a State may retort that yes, these are problems, but we accept them for the sake of legibility. They may cause the organization to be inefficient, or even clumsy, but legibility allows the organization to make effective decisions, which is, in the long run, much more important.

My friends, has that ever been your lived experience?

Whilst I don’t deny this sort of legibility is possible, you don’t get it for free. An under-considered (i.e., most of them) implementation of ticketing will leave you with less legibility, not more, as members of your organization merrily delude themselves into thinking they know what the fuck is going on.

I don’t think I need to harp on this one too hard. I’m sure you’ve had non-technical PMs, leads, or even engineers, who seem to be doing nothing but playing ticket Tetris. You’ve had people who seem exclusively concerned with progressing the ticket and not at all concerned with the value it would represent if manifested. Unfortunately, we as an industry have been hoodwinked into believing that this sort of work is legitimate and valuable enough to justify entire positions.

This is especially frustrating as I am certain the people doing this sort of “work” feel frustrated and impotent too. There is a need for a “true PM” role, I’ve worked with a rare few who were overwhelmingly valuable. They did this by caring about tickets only as much as it helped them; they were much more concerned with, and much more fluent in, the tangibilities of the product itself. Let us free these people from jira-jail and allow them to be valuable once more.


Experiencing all this, one might start to think that delivering value, or even making profit, isn’t really the point here. One might start to think that control, or even the illusion of control, is the output we’re trying to produce from these machines. Now of course I would never say that the current economy is exclusively set up to trick those at the top of our socio-economic hierarchy into believing they have self-actualized, like the no-clothed kings of old. Me? Never.

Putting that useless (but fun!) rant aside, we must all realize that you cannot contribute to successful software delivery if all you have to ground your interpretation of any given ticket is the information contained in the ticket alone. The map is not the terrain, and ticketing systems, treated naively, are maps with more noise than signal. If you find yourself logging a ticket, ask yourself if this ticket is likely to be materially useful. If not, just don’t log it, your colleagues won’t thank you, but I will.
