Accountabili-Buddy

November 20, 2025

My Theory of Productivity

Allow me a theory. I probably didn’t invent this, but if I did, bully for me.

Work output, when increased beyond the amount of available accountability without a matching increase in that accountability, will never generate a lasting increase in productivity.

Here’s what I mean by all these terms. Work output is the amount of stuff you make or things you get done. Maybe you make toys, maybe you insure businesses against floods, maybe you send emails. Work output is necessary but not sufficient to generate productivity, which is often why doing actual, productive work is challenging. If all you needed was raw work output, you could just jam out whatever all day as quickly and easily as possible and solve lots of problems. Nothing works this way.

This is why we have accountability. Accountability, in my parlance, is sort of a backstop against work output. It doesn’t necessarily guarantee that all work output is good (that would be “micromanagement”, which is a great way to drive work output to zero), but it ensures that for every kind of bad output, there is time, energy, and brainpower available to figure out why that’s happening and take on the responsibility of preventing it. In other words, mistakes are okay, but just making the same ones over and over again at similar (or increasing) rates is not.

Productivity is when these two things align. If work output goes up, and a person, system, or process is available to make sure that work output is positive (or at least not negative), you will almost always see a commensurate improvement in productivity. I define that as whatever positive outcome your organization is trying to accomplish (usually whatever gets us paid — the insuring, the toys, the emails, etc.).

Accountability as a Limiting Factor

As we are frequently reminded by people trying to get us to invest in insane, exciting things, the world has gone through many productivity revolutions thanks to various technological innovations. While some of these innovations improved the quality of goods, or the safety of the people making them, the most obvious impact of the biggest ones is a large, comparatively inexpensive increase in work output. My new factory can make a thousand pairs of pants in the time it used to make a hundred, resulting in both cheaper pants and higher margins for me. Everyone wins. My new corn can grow on land that used to be barren, using water from far away, brought in by pumps that didn’t exist before. More corn, less useless land, everyone wins, at least until I start growing fake, inedible corn to take advantage of massive agricultural subsidies.

I am a huge fan of these developments in general, and always have been, even though I am not an industrialist but merely a simple 21st century office drone. I am constantly seeking ways for technology to improve my work output, because when I succeed at this, I usually receive praise (and occasionally promotions!) for actually doing less work. It’s fantastic, and if you’ll allow me, just very American in the best way.

(music swells)

But there’s a catch. I mean, it’s not really a catch so much as a limiting factor. My technological innovation can’t increase my work output beyond my ability to make sure my work output isn’t actively harmful. Have you ever sent out an automated email to a large customer base that said “Hi, {FIRST NAME}”? Obviously I haven’t, I am totally asking for a friend. But that’s an example of the risk of increasing work output beyond accountability bandwidth.

Now, in the case of the embarrassing auto-email, we make that tradeoff 10 times out of 10, but there’s still accountability. Just ask… my friend! When you send that stupid email, you’re going to hear about it. Recipients will respond and yell at you, co-workers will forward angry customer replies, and eventually you’re going to hear about it from your boss. If they are a bad boss, they will tell you this is unacceptable and maybe call you an idiot and then walk away and start working on your Performance Improvement Plan so they can fire you. If they are a good boss, they will help you figure out some way to audit the “Name” information in the CRM, or use the “no data fallback” feature in your email designer. But either way, these emails will not just blast out with FIRST NAME on them forever. And that, my friends… is accountability.
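If you want to picture what that fallback actually does, here’s a minimal sketch in Python. To be clear, the function and field names are mine, invented for illustration; real email designers bake this logic into their template engines.

    # A minimal sketch of the "no data fallback" idea. Names here
    # (render_greeting, first_name) are illustrative, not from any
    # particular email tool.
    def render_greeting(record: dict, fallback: str = "Hi there") -> str:
        first_name = (record.get("first_name") or "").strip()
        # Treat empty values and leftover merge tags as "no data".
        if not first_name or first_name.startswith("{"):
            return fallback
        return f"Hi, {first_name}"

    print(render_greeting({"first_name": "Dana"}))            # Hi, Dana
    print(render_greeting({"first_name": ""}))                # Hi there
    print(render_greeting({"first_name": "{FIRST NAME}"}))    # Hi there

The point isn’t the three lines of logic; it’s that somebody had to care enough to notice the failure mode and put the backstop in place.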

Innovations that increase work output must, by the rules of my theory, be paired with innovations in accountability, or else that automation will not generate productivity.

By and large, this is exactly what has happened with most work output-increasing technological innovations, otherwise they would never have stuck around. Various technologies have improved our ability to review things, reduce random errors, and more quickly and easily work with various problem-solving experts in the event that something goes wrong. These are good things, even though they are less sexy than, you know, the cotton gin or the assembly line or whatever. Hell, unions are an accountability innovation in a lot of ways: an organizational check on the staggering increase in work output (and some of the resulting bad effects) resulting from the Industrial Revolution.

First Salads, Then Robots

Okay, so as legally required by Internet Rules of 2025, let’s now apply this to artificial intelligence, or more accurately, large language models, image generators, and maybe (if you squint) “agents”. There is a funny little dance going on when we talk about these things, because we can’t quite decide if they are supposed to replace us, or be used by us. In general, I find that the conversation — whether about society in general or in a sales pitch for some AI tool — tends to start with replacing, only to then inevitably downshift to “it’s just a tool/massive force multiplier/etc.” as details and reality begin to weigh down the conversation. The tech industry has been able to dodge a lot of difficult logical questions by flipping this bit back and forth as it suits them.

“Layoffs in the streets, force multiplier in the sheets”, if you will.

But here’s the thing — it actually doesn’t matter because accountability is the limiting factor in either case. Think of it this way. You’d never hire someone with zero accountability. Maybe their accountability is generous, or forgiving (maybe they’re a cousin, I don’t know, I’m from Rhode Island), but unless you’re talking about a legitimate charity case, there’s gotta be some. Even something as simple as saying “if you screw up badly enough, you’ll get fired and then the paychecks will stop” (again, optional in Rhode Island) is a form of accountability, however crude.

So there’s at least an element of accountability in every function, or it’s simply not a productive job, because if there’s no accountability, you’re effectively saying the output doesn’t matter. And that means the accountability bandwidth thing comes into play with every function as well. Somebody was responsible for every Caesar salad I made in the summer of 2001 at 22 Bowen’s Bar & Grill. Mostly it was me, but it was also my boss. That doesn’t mean I couldn’t make a mistake on any given salad (oh Lord did I make those), just that you could take any one of those mistakes and say “who is dealing with this by either fixing the cause or leaving the organization?” and there’d be an answer every single time. But the system we had in place was predicated on three college kids making every salad to order, by hand. There were only so many we could make, and thus there was a hard cap on both our possible work output and the accountability needed to ensure the value/utility of our output. In short, three college kids who didn’t want to get yelled at or fired in the middle of the summer, plus a full time assistant chef who really didn’t want to get fired, together, could handle the task.

If for some reason the restaurant exploded in popularity and we needed ten guys making salads at once like a Chopt in Midtown, or if the three of us got magic AI salad machines that let us spit out hundreds or thousands of salads a minute, this system would need to evolve for us to use it. Either those machines would need iron-clad safeguards in them (making them effectively error-proof), or you’d need a massive investment in quality control to hold yourself accountable for making good, edible salads. You wouldn’t just 10x the process and say “look how productive we’re being” without anything in place to make sure people weren’t just getting dirty bowls hastily filled with handfuls of chick peas.

Okay, Now Robots

The fundamental business promise of generative AI has a giant, accountability-sized hole in it. So far, many vendors have dodged this by making accountability sound like ethics, which many companies will discard for a big enough ROI, but this is a major misread of how business works. Accountability can be about ethics, but it’s really just about outcomes, and there is no reason to think that businesses are willing to throw away their control over outcomes because generating a lot of arbitrary, uncontrolled outcomes now requires little to no human effort.

Bluntly, if you’re selling a massive increase in work output without some way to also massively increase accountability bandwidth, you’re essentially proposing one (or maybe more) of four possibilities:

1. Workers have plenty of unused accountability bandwidth now, so a huge increase in work output is actually great

This doesn’t sound like 2025 to me. Companies have been finding new and more creative ways to avoid being held accountable for things for decades now, with “innovations” ranging from binding arbitration clauses, to liability shield laws, to shutting down support lines and dumping people into user forums. The accountability that remains is almost entirely related to raw financial performance. Companies have been talking about “doing more with less” for years now; I find it hard to believe people are sitting around checking their work twice for lack of anything else to do.

2. It’s worth it for businesses to pay to increase accountability bandwidth to meet the huge increase in cheap work output

Is it, though? Every marketing department has to figure out things like “the right number of campaigns to run” precisely because it’s often not worth the cost of installing sufficient accountability. It wouldn’t be worth the cost of doing thousands of additional hours in post-mortems and data analysis just because you could now afford to spin up those campaigns for zero dollars. Then again, if we’re not 100x-ing our output here, what’s the value of this innovation? What I could really use is one automated campaign manager who was just as accountable as my human ones…

3. Accountability bandwidth can be increased with the same types of technologies that increase work output

… but that is not a thing, because computers are not accountable. They don’t care. They don’t need to eat. They don’t need to make their parents proud. They don’t need anything, because they’re not alive, and being alive is a non-negotiable part of giving a shit about anything. You can program rules into software, but rules are not the same thing as giving a shit, which you will immediately realize if you ever have to work with someone who doesn’t give a shit about work, but is sufficiently capable of following rules. In some ways that kind of person is just a slow computer, but more importantly, every computer is just a really, really fast version of that annoying employee. There are lots of ways to get different kinds of people to give a shit about different kinds of things (this process is called “middle management”, ask ChatGPT about it), but there are zero ways to make computers give a shit about anything. In fact, one of the reasons software can work/write/execute a decision tree so quickly is that it doesn’t give a shit about literally anything, so nothing slows it down.

4. Accountability is irrelevant if you scale work output enough, or it’s just irrelevant in general

Lastly, despite it being a running joke for — I dunno — a hundred years or so, there’s an increasingly weird, inexplicable interest in the “a thousand monkeys on a thousand typewriters will eventually write the greatest novel ever” approach to work. Maybe it’s because some of the world’s most successful high margin businesses are automated, algorithmic trash dispensers with seemingly no limit to how big they can scale. Maybe it’s because we’ve over-financialized basically everything at this point, and we can’t conceive of any system where “N” has some sort of fundamental limit. Maybe it’s something else! But I’m here to tell you that you can’t scale any work output to the point where nothing matters. Facebook is certainly going to try, but for the rest of us, there are only so many places to show your infinite, dynamic ad variations. So even if the numbers are large, they are still numbers, and they still require some form of boring old accountability for results.

Good Automation is Still Good, Bad Automation is Still Bad

Automation is an incredibly valuable thing, but one of the many downsides of the blind rush to automate anything we can find is that some of the most important skills in making automation decisions seem to be atrophying as we race to lower the marginal cost of arbitrary work output. For instance, my little event management project automates the gathering and tracking of events and their related deadlines. You just give it a website, and it goes and finds all the events, all the deadlines, and organizes everything. I hate doing this, it takes me forever, and I make a lot of mistakes, so in general I like this painless increase in work output.

But there’s obviously an accountability issue here as well that I don’t want to ignore. No one is going to want my application to automatically grab tons of irrelevant events and deadlines and shove them into the system. They won’t even want me to automatically grab events and deadlines that are sort of relevant. They want the right ones and only the right ones, and since events are not labeled on the internet as “ones you should care about”, there is no ironclad, computer-powered way to do this with 100% certainty, which pushes accountability to my user. So I actually throttle the process by (a) separating the process of finding events from finding deadlines, and (b) making you confirm which ones you want in each process. I could increase the number of events and deadlines you find, and reduce the time and effort it takes to do so (wheeee, work output!), but that would be dumb and counter-productive, because it would ignore how much harder it makes the unavoidable challenge of making sure those things are correct and useful.
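For the curious, the shape of that throttled flow looks something like this. This is a simplified Python sketch, not the actual application code; the function names and the stubbed-out finders are stand-ins I made up for illustration.

    # A simplified sketch of the two-stage, confirm-before-commit flow
    # described above. The finders are stubs standing in for the real
    # scraping/extraction steps.
    def find_events(site: str) -> list[str]:
        # Stand-in for the real event-discovery step.
        return [f"{site}: Spring Expo", f"{site}: Vendor Mixer"]

    def find_deadlines(events: list[str]) -> list[str]:
        # Stand-in: only ever looks at human-approved events.
        return [f"{event}: booth registration due" for event in events]

    def confirm(items: list[str], label: str) -> list[str]:
        # The throttle: every candidate needs an explicit yes from a human.
        kept = []
        for item in items:
            answer = input(f"Track this {label}? {item!r} [y/N] ")
            if answer.strip().lower() == "y":
                kept.append(item)
        return kept

    def run_pipeline(site: str) -> dict[str, list[str]]:
        # Stage 1: gather candidate events, then make the user vouch for them.
        events = confirm(find_events(site), "event")
        # Stage 2: only hunt for deadlines on events that survived stage 1.
        deadlines = confirm(find_deadlines(events), "deadline")
        return {"events": events, "deadlines": deadlines}

The two confirm() calls are the whole point: they cap work output at exactly the rate a human can stay accountable for it.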

This is a really simple, small example, and it’s not something I’m doing because I’m some product genius or especially sensitive to user needs (history would indicate that I am quite bad at this, actually). I’m doing it because I actually care about the true productivity of using this application, because when I originally needed it, I was held accountable for effective, intelligent event management and having good, accurate answers to logistical questions. No one cared how long my little tracker spreadsheet was, or how many columns it had, or whether I knew the load-in dates for events we shouldn’t be going to.

The problem with 99% of the generative AI use cases I see is that they don’t really care about true, accountability-supported productivity and outcomes, primarily because they don’t have an answer to the accountability bandwidth problem they themselves are creating, and because their business models and cap tables dictate that they make a 1000% impact, not a 50% impact, so being held back by that bandwidth is simply unacceptable.

But that’s a vendor/product/business model problem, not the market’s. If you’re trying to sell these things as solutions to actual, functioning organizations who care about outcomes, or ideally, thinking through how to build them before you do that, I’d keep that top of mind.