The Unstoppable Bullshit Robot

December 8, 2022

Just like you can spend months rolling out critical back-end improvements to a software application to no reaction whatsoever, only to set off a riot when you change the color of a button, “AI” is having a moment (again) right now because the folks at OpenAI did an awesome job plugging their work into something we can all relate to — an overconfident, well-spoken, semi-competent reply guy.

I don’t need to say it but I will — as an experience for random internet users, this is extraordinarily cool and well implemented. I used to use some ancient Mac version of ELIZA as a kid, and I got a huge kick out of it even with its immediately obvious flaws, so comparatively this is like talking to something from the year 3000. That doesn’t even take into account the whole “write me a song/poem/bible verse” aspect of this, which will be entertaining us all for some time.

Obviously ChatGPT, as it’s known, isn’t perfect. In fact, since it’s hard to describe exactly what ChatGPT is, or what it’s intended to be, I’m not sure it’s possible for it to have a definition of “perfect” to begin with. But it does a couple of things that, in theory, you probably wouldn’t want something like this to do if you could help it.

  1. It’s often wrong about objective facts
  2. It almost always sounds right

The second thing is more of a good thing by itself (“it’s so well spoken!”), but it’s also something that makes the first thing much, much more of a problem. Of course, there are people like this, too, and there have been throughout the history of people. But in my experience, they’re actually pretty uncommon. The majority of folks I’ve met who don’t know what they’re talking about give it away by also sounding like they don’t know what they’re talking about. This is an important, easily taken-for-granted component of human-generated bullshit that allows us to sort through it without even trying that hard.

But if machine-generated bullshit can sound right more frequently than the man-made variety, the fact that anyone in the world can crank it out by hitting a button is an absolutely enormous problem. People’s “this guy doesn’t know what he’s talking about” detectors are pretty good, but they’re trained to detect the kind of specious nonsense that — written or not — comes from people, or at least much more socially awkward robots. We hear/read/see certain cues, and we use those things to filter out a lot of life’s junk. This new stuff either doesn’t have those cues (“look deep into the eyes of this computer”), or emulates them better than the vast majority of crooks, liars, and major-party presidential candidates ever have.

There’s already plenty of discussion about the social impacts of this kind of thing, and for good reason. Unless we specifically tell them to, robots probably aren’t going to rise up and kill us in the immediate future. But it’s very possible they’ll be used by us to generate enough automated nonsense to destroy the necessary levels of trust and social cohesion that society needs to function. Seems bad. 

Still, I’m not here to focus on the fraud and gaslighting part of this. Instead, consider this much more basic issue: in a world that isn’t just dependent on AI-driven insight, but is actually polluted by AI-generated noise posing as either data or analysis, how are you simply supposed to do your job?

Cognitive Automation Cranked to 11

François Chollet wrote a great, high-level explanation of what today’s real-world artificial intelligence is, and is not. I strongly encourage you to give it a read. But for the extreme layperson, basically when regular people (hello!) talk about AI, we talk about a wide variety of tasks and concepts, many of which we’ve collectively made no progress on at all. But we also talk about things that can be done with what Chollet refers to as Cognitive Automation, and we really have made pretty amazing, practical strides in that over the years.

In Chollet’s words, cognitive automation means “encoding human abstractions in a piece of software, then using that software to automate tasks normally performed by humans”. What we’ve seen in the internet age, and more recently with the work of groups like OpenAI, is a huge leap forward in our ability to encode tons and tons of fairly complex abstractions — far more than the experiments and “AI” demos I grew up with ever had to work with. 
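
To make that concrete, here is a toy sketch of cognitive automation in Python. Everything in it (the field names, the thresholds, the invoices) is made up for illustration; the point is that a human judgment, “this invoice is overdue enough to bother chasing,” gets encoded as explicit rules, and the software then applies that judgment at machine speed without ever knowing what an invoice actually is.

from datetime import date

# A human abstraction ("overdue and big enough to chase") encoded as rules.
# The thresholds and field names are invented for this example.
def should_chase(invoice, today):
    days_overdue = (today - invoice["due_date"]).days
    return days_overdue > 30 and invoice["amount"] > 500

invoices = [
    {"id": "A-101", "due_date": date(2022, 10, 1), "amount": 1200},
    {"id": "A-102", "due_date": date(2022, 11, 20), "amount": 90},
]

# The "automation" half: apply the encoded judgment to everything, instantly.
to_chase = [inv["id"] for inv in invoices if should_chase(inv, date(2022, 12, 8))]
print(to_chase)  # ['A-101']

The leap Chollet describes isn’t a different trick; it’s the same one, done with vastly more, and vastly fuzzier, abstractions than two hand-written rules.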

Abstractions ain’t the whole bag, of course. The next level up would enable software to help us make better decisions in the complex real world (not in a closed off system, and not by simply presenting or sorting data for us in a certain way, as many tools do today). Beyond that, you’re talking about independent minds and consciousness and a bunch of stuff that we are absolutely nowhere near making a reality — “science fiction”, in Chollet’s words.

But the thing about mastering abstractions is that if you master enough of them, you sure do start to look and sound like a real mind, at least to regular dummies like us. And if you really master abstractions, you can sound smarter than a real mind, because we often evaluate the quality of an idea, or the accuracy of an objective fact, based on a bunch of abstractions like the human language used to explain it. 

And while computers might still be dumb, they sure are fast. So we taught them how to figure out a secret social handshake, and then they used their inherent computer-ness to go ahead and learn how to handle another bazillion of them or so. The end result is pretty compelling, even if it’s not actually any smarter. 

Bullshit Detection 101

So what do we do? Ben Thompson wrote a fascinating piece on the flaws and implications of ChatGPT and in doing so, introduced me to the awesome concept of “Zero Trust Homework”. The idea, in a nutshell, is that the next generation shouldn’t try to block out garbage content from AI, but instead be challenged and evaluated on their ability to catch the errors and/or lies that come with it. After all, even in this early version, ChatGPT can spit out incredibly authoritative-sounding answers and arguments, complete with references, that are completely and totally wrong. That’s because it doesn’t actually understand the information itself — it only understands language.

This is weird and counterintuitive to a lot of people. Many of us think of numbers as things that computers can work with easily, and language as something qualitative and human. But numbers and language are both simply abstractions of something much more important and complex — the meaning, logic, and context of the world around us. People aren’t always good with those things either, but they can learn, whereas something like ChatGPT (despite its apparent mastery of our abstractions) literally cannot. However, it is so good at using that vast library of well-encoded human language abstractions to generate believable content that it will often appear like it understands the world better than you do.

Remember the Friends episode where Joey tried to make people think he owned a Porsche by wearing a bunch of Porsche merchandise? 

Did he actually need to own a Porsche? In some ways, he didn’t. In some ways, he did. But if you interacted with him from a certain perspective, with a certain set of expectations and information, he effectively did own a Porsche. Maybe even more so than someone who actually, technically, legally owned one and got to drive it around.

Your Pre-Existing Bullshit

I could now spin you a dystopian tale about your information-economy job, where you’re inundated with so much data and analysis, from so many sources that look right, and sometimes are right, but often are not, that you can barely make a decision. A world where people stare at output generated by different algorithms and weakly poke at it, hoping that it will fall apart with basic scrutiny if it says something bad, and if not, ready to push a different button and generate a different result, maybe from a different algorithm. 

Except… this is already what work is like! It’s been like this for years! We use business systems every day that have mastered an even less complex set of abstractions (“pipeline revenue”, “units sold”, “project velocity”) and pose as voices of wisdom and authority, like two kids in a trench coat trying to get into an R-rated movie.

So what do we do at work? Well, for the big stuff — quarterly reports, managing cash, etc. — we tend to do things by hand, or with systems that are reliable, because they’re actually pretty simple. They either just tell us observably true things, or project things with math that at least one person at the company understands, and is capable of challenging and auditing. That’s how we keep the lights on.

But for WAY more things than you’d like to admit, we’re total suckers for context-free conclusions that prey on how easy it is to impress us with a mastery of our most trusted abstractions. Over here, this system will calculate how valuable the marketing team is! This other one will tell you how happy everyone is! This one will tell you what your customers want! Turns out they want lower prices! Who knew? THE DATA KNEW, APPARENTLY.

We either buy into — or reject — these systems and their methodologies for the same reasons we buy or reject a ChatGPT answer. Maybe the system is objectively correct/incorrect, and we have the knowledge to confirm that. Maybe it feels correct, or is at least partially correct in a way that makes us okay trusting the parts that are a mystery to us.

What Thompson is proposing when he talks about teaching kids to challenge AI-generated content makes a lot of sense for kids facing a future of plausible-sounding misinformation. But it also sounds like a great idea for people making business decisions right now.

I did a poll on LinkedIn last year where I asked people about their level of confidence in mapping out how their company makes money.

In the words of many a noted business expert, “C’MON, MAN!” Either your definition of “extremely accurate” is nonsense and you need some perspective, or a whole lot of you are just straight up full of it. Many of you would believe any Salesforce report I waved in front of your face if the numbers sounded familiar-ish and it had normal looking pie charts where most of the slices were of vaguely similar sizes. And for a long time, that’s been basically fine.

But it’s going to get worse, and if kids really do adapt to machine-generated incompetence, whether it’s due to education or simply overexposure and needing to survive, those kids are going to work with you in a couple of years and wonder why you’re such a gullible rube. Because for kind of an embarrassingly long time, holding up something you found on the Internet, or some default report the CRM spit out for you, was the easiest way to look smart.

But pretty soon, it’s going to be the fastest way to look like you need to retire.