The Last Mile

October 24, 2023

I make a lot of generative AI jokes, because they’re obvious, and because they’re hilarious. But, in all seriousness, there is a huge problem with the value proposition of almost every generative AI application that everyone is just whistling past for a variety of reasons.

The Promise is the Problem

Generative AI as a technology is what it is, and I’m not really here to assess it or say it should or shouldn’t exist. Basically, we’re now capable of building programs that, given enough training data, take an input, a prompt, or both, and turn that into something that looks (or sort of looks) like something a human would have made, given the same instructions.

That’s basically it. This isn’t that different from the AI we’ve gotten used to, which is essentially automated decision making (e.g., should we give this guy a mortgage, should we call Nate back about his job application, etc.). The only difference is that the decision we’re automating is “given all of these other things, what word should come next”, and then we’re doing that a bunch of times until we have enough words. The art stuff is a bit more interesting and harder for me to understand, and the music stuff is so bad that I don’t really feel like it’s worth talking about yet.
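
If that sounds too simple to be the whole story, here’s a deliberately tiny Python sketch of the loop. The lookup table is a made-up stand-in for the actual model, which does this scoring with billions of learned parameters, but the pick-a-word, append, repeat structure is the whole game:

    import random

    # A made-up lookup table standing in for the model. A real LLM scores
    # every possible next token with learned parameters; this toy just
    # hard-codes a few plausible continuations.
    NEXT_WORD = {
        "the": {"cat": 0.5, "dog": 0.3, "market": 0.2},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"ran": 0.6, "sat": 0.4},
        "market": {"crashed": 0.8, "rallied": 0.2},
        "sat": {"down": 1.0},
        "ran": {"away": 1.0},
    }

    def generate(prompt, max_words=10):
        words = prompt.split()
        for _ in range(max_words):
            dist = NEXT_WORD.get(words[-1])
            if dist is None:
                break  # the model has nothing plausible left to say
            choices, weights = zip(*dist.items())
            # The entire trick: pick a likely next word, append it, repeat.
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the dog ran away"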

But what is all this actually for? You don’t need an answer for that to justify building it (at least with 2021-era interest rates), but you do need an answer for that if you want someone to pay for it. And Microsoft wants you to pay for it — so does Google, and so does every VC in the valley. In fact, they want you to pay for it over and over again, in every aspect of your business.

So then what is the promise this technology makes? Fundamentally, it promises to do work for you that normally requires people. This is a classic technology promise, and we’ve seen it work and revolutionize business many, many, many times — and it will happen again. So on the surface, okay, seems plausible (just like LLM output!). But in this case, the reason we’re so excited is that we’re promising to do work that ordinarily really requires people. Like, this is it — we really never thought we’d be able to get rid of good copywriters, and analysts, and artists, and all these people with… sigh… talent, and so the idea that maybe we could is amazing.

Except… we obviously can’t. It takes very, very little time working with AI tools to realize that if you don’t know what you’re doing, you are going to get absolute trash from these systems (although fortunately, many early adopters actually know so little about what they’re generating that they’re among the few people who can’t tell how bad the output is). So as these products have gotten further from investor-world and early-adopter LinkedIn hustlers, and closer to people signing contracts to pay for them, the promise has had to shift. Now, the technology doesn’t replace these people — it augments them.

[Image: AI-generated, from the prompt “an anthropomorphic banana looking at himself in the mirror as he ties a tie”]

Have one artist? This is like having 100! Have five copywriters? Maybe you only need two! The software will do the “grunt work”, and allow your precious humans to either babysit the whole operation, or focus on human things.

Who We’re Not Asking

Of all the people I don’t trust, the person I trust the least might just be the one describing to my boss how to make me better at my job, a job that neither of them actually does or is held accountable for.

“See, for Nate, the problem is that he spends a lot of time generating ideas. This stupid robot will generate thousands of ideas — it’s a great starting point for him.”

Actually… no, it’s not! It’s super annoying, and not helpful at all! I guess it could be, if the way I solved problems was to create as many plausible-sounding answers as quickly as possible, and then go through them and edit them to make them good. But that isn’t how I work, and never has been. In fact, almost no one works this way. The “blank page” problem is a real thing, and I have definitely heard of people using generative AI to break through it. I am not one of those people, but I have heard of them. That’s great, and good for them for successfully leveraging the 2023 zeitgeist for their benefit. But even that use case — which, again, I don’t actually have — can be solved with the crappiest version of ChatGPT, or some knockoff. It doesn’t require an $85k enterprise license for bullshitmachine.AI, because the output doesn’t have any real value besides being output.

If you look hard enough, you’ll see “the output here is just a starting point” ALL OVER the generative AI world. It’s a core part of the pitch, at least to actual users. Here’s OpenAI being honest for a second, in between talking about building automated AI tutors and auto-generating lesson plans:

[Screenshot: OpenAI guidance describing model output as a starting point]

A starting point! If I have to read an entire document word for word, manually checking the veracity of each claim it confidently makes, that document is not a great starting point. If I asked my seven-year-old daughter — who is amazing, and incredibly smart — to write this blog post, she would probably do it. It would not be, by any stretch of the imagination, a great starting point for me. Hell, if I asked my Dad — an electrical engineer who is objectively smarter than me and has, to my knowledge, absolutely no interest in generative AI whatsoever — to write this blog post, it would still not be a great starting point. I would almost certainly re-structure and re-write the entire thing from scratch, and the sooner I accepted that and got to work, the sooner I would be done.

The problem with this “productivity” value proposition is that the characteristics of actually useful “starting points” are mostly things that LLMs are fundamentally ill-suited to provide. How about brainstorming? The LLM will create thousands of ideas in seconds! Except, if you’ve actually brainstormed before, or done work where brainstorming is useful, you know that the entire point of brainstorming is to generate entirely new, uniquely appropriate ideas. You don’t generate thousands of plausible, derivative ideas. Or if you do, that’s just noise the process generates and throws away. It doesn’t add up to value, because rote pattern-matching is not a substitute for the good taste or inspiration we (very, very grudgingly) pay professionals to provide.

The New Yorker just absolutely nails it.

The Last Mile

There’s a supply chain concept (I think this is in telecom too, but I’m out of my depth here) known as “the last mile”, which refers to the final step in getting something somewhere, and is often the most challenging part of doing anything logistical at scale. It’s easy enough to route a million packages to 8 different regional sorting centers, or scale that up to a hundred million packages. It’s getting each individual package to each individual house on each individual, weird little street that becomes an existential problem, and usually an existential expense. Whenever someone has some amazing idea to solve an inefficiency problem, they usually struggle with “the last mile” part of it, because the last mile is usually really inefficient, but utterly unavoidable. Unless, that is, you can make it someone else’s problem.

With knowledge and creative work, the same thing is true. “The last mile” of a marketing campaign, or a user interface idea, or a go-to-market strategy is where all the real, expensive work is. Giving me a go-to-market template you found online really will save me time and effort if you only care about generating something that looks like what people expect a go-to-market plan to look like. If you care about the plan being useful and appropriate to your business, you’re probably slowing me down, if anything, by making me think about and answer arbitrary, potentially irrelevant questions that just happen to be in the generic template.

Everything generative AI tools leave out — factual accuracy, fingers, etc. — fails to make that “last mile” any easier, and in many cases makes it harder, and that’s a deal-breaker for the productivity argument. You can’t completely replace people, because the output is bad. But babysitting the output either doesn’t sufficiently improve it, or makes generating it just as inefficient as (or even more inefficient than) generating it by hand. Sure, executives will yell at workers to use these tools and somehow become more efficient. “Most of the work is already done!”, they’ll yell. And sometimes, in sufficiently derivative contexts, this will be true, and the starting points or grunt work removed will be valuable. I’ve heard good things about tools that help with syntax in programming, for example. You can build a legitimately functional for loop by pattern matching other people’s work (something like the sketch below), and that’s great. But that’s not how a lot of information work gets done; LLM-powered tools have arrived with great fanfare to solve slightly-flawed-or-derivative-bulk-content-generation problems that many of your expensive, annoying employees don’t actually have. You’re better off buying them a better coffee machine.
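
For what it’s worth, here’s the kind of thing I mean: a hypothetical completion, but representative of the boilerplate these tools reliably get right, because ten thousand nearly identical loops already exist in the training data.

    # What the assistant sees: a function name and a docstring.
    # What it produces: the most statistically unsurprising loop imaginable.
    def sum_even_numbers(numbers):
        """Sum the even numbers in a list."""
        total = 0
        for n in numbers:
            if n % 2 == 0:
                total += n
        return total

    print(sum_even_numbers([1, 2, 3, 4]))  # 6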

So Where is This Going?

You can watch people flail around wildly, searching for these problems, on ProductHunt every day. What about thousands of email subject lines? Same problem. Product ideas? Same problem. Feedback to employees? Same. Okay, fine — but there has to be an application where being able to generate a huge number of fundamentally derivative, but plausible-sounding, ideas is inherently valuable, right?

There is! It’s called spam, and it’s a humongous, lucrative industry! Spam is perfect for generative AI; it’s where this technology was always destined to go, and I’m sure it’s already being used extensively for it today. Spam audiences are the ideal audience for generative AI content, because the entire point of spam is to present the trappings of thought, not actual thought, and the only thing more important than that is that you spend as little time and money as possible generating those trappings. LLMs are absolutely magical for it, and unlike with more subtle use cases, or fantasies about these things turning into sentient general intelligences, this is one job they’re going to continue to get better and better at, as they not only get better at generating plausible output, but also become more customizable in ways that are compatible with scale (like programmatically generating more or less aggressive “custom” sales pitches based on how people react).
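
To make the “compatible with scale” part concrete, here’s a hypothetical sketch (every name in it is invented) of what that reaction-driven customization looks like. You don’t write pitches anymore; you write a function that writes prompts:

    # Hypothetical: dial the pitch's aggressiveness up or down based on how
    # the target has reacted so far, then let the LLM fill in the words.
    TONE_BY_ENGAGEMENT = {
        "ignored": "a short, friendly check-in",
        "opened": "a confident pitch with a limited-time discount",
        "clicked": "an urgent, act-now close with a hard deadline",
    }

    def build_pitch_prompt(name, product, engagement):
        tone = TONE_BY_ENGAGEMENT.get(engagement, "a short, friendly check-in")
        # This string goes to whatever LLM API is cheapest this week;
        # plausible-sounding filler is the one thing they all do well.
        return (
            f"Write {tone} selling {product} to {name}. "
            "Keep it under 80 words and make it sound personal."
        )

    print(build_pitch_prompt("Nate", "bullshitmachine.AI", "clicked"))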

Because that’s what Amazon shoppers need — more fake things!

Spam has been around forever, so this in itself doesn’t grind my gears as much as you might think. It’s stupid and wasteful, but a lot of things are stupid and wasteful (the metaverse real estate con, the hilariously smug crypto boom and extremely quiet bust, etc.) and we figure out how to muddle on. What really does bother me, though, is that so many people are excited to apply the principles of spam — again, something we all know is stupid and wasteful — to things that have traditionally had at least a slightly higher bar.

Sales outreach. Customer support. Thought leadership. These things have always had more than a whiff of spam to them, but it seemed like we all understood that going too far in that direction was undesirable. Not anymore! Building an entire website with meaningless, industry-appropriate pablum used to get you dirty looks at work; now that same output, if created instantly with technology, isn’t just acceptable — it’s some kind of productivity breakthrough! Make 10,000 of them and just auto A/B test the entire pile of shit forever!

But of course, if you actually have this discussion with a vendor, and take this stance (as most experienced people in these customer-facing fields will do), you’ll quickly hear the last mile argument. No, don’t treat this like spam. You still matter. You should read all of this crap, and make it good, because God knows the machine has no idea what it’s barfing up. It’s a starting point. It’s “90% of the work done for you!”, kind of like 90% of hitting a home run is putting on your uniform, finding a bat, and stepping into the batter’s box, right? Just finish the job, kid!

The last mile always exists. The difference between breakthrough products and quickly mumbled answers to questions no one is asking is that breakthrough products acknowledge that last mile, and either solve it or do everything in their power to set you up for success when you solve it. Outside of spam, filler, or fake-it-until-you-make-it use cases, I haven’t seen an example of that with a business-focused generative AI product, and I think it’s because the fundamentals of the technology are simply not up to it. No amount of venture funding or shoving it into existing products is going to cover that up in the long term. That means the only paths forward are (a) a sheepish retreat as this tech finds its proper home in non-transformative use cases where it makes sense (autocomplete, spam, custom “stock” imagery, elevator music, anything where a chatbot/command-line interface is practical), or (b) the slow, venture-backed adoption of spam techniques in more and more last-mile-dependent work, which is going to make being a customer even more confusing and enraging.

I think we’re headed for (a), but not before some irritating amount of (b) takes place.