Major Scale

February 17, 2026

One thing I think a LOT of smart people misunderstand is that the power of computers is always SCALE. Computers are very fast, don’t get tired, and they have no shame. If you are smart, you can point those characteristics in useful directions — and people have been doing exactly that for decades.

Large language models — especially the really, really large language models that dominate discourse and investment in 2026 — are definitely an example of this. Computers can’t understand things, but they can identify extremely subtle patterns in data sets that are too massive for a person to even comprehend, and then simply respond to inputs as those patterns would. How they use those patterns to formulate output can be tweaked, or adjusted, or even randomized because all of that is just math and computers will never get tired of doing math, no matter what boring or weird or disgusting thing you make them turn that math into.
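To make the "it's all just math you can tweak" point concrete: one common knob is temperature sampling, where the model's raw next-token scores are rescaled before being turned into probabilities. The sketch below is illustrative only; the function name and inputs are hypothetical, not from any particular model's API.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick a token index from raw model scores ("logits").

    Lower temperature -> sharper, more deterministic output;
    higher temperature -> flatter, more random output.
    """
    # Scale scores by temperature, then softmax them into probabilities.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample an index in proportion to those probabilities.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]
```

At a very low temperature this almost always returns the highest-scoring token; at a high one it wanders. That dial is the whole trick: the patterns stay fixed, and only the math around them changes.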

People respond to patterns too, although people are both much better and much worse at pattern recognition. They’re worse because they’ll never operate at the scale of a computer network, but at the same time they are vastly better at finding patterns outside of prescribed data sets, or even working with completely unstructured data they’ve never worked with before.

Basically:

  1. People are slow, but they can be “prompted” by literally anything, including things like feelings, instincts, and other elements of existence that we can’t even fully articulate.
  2. Computers are very fast, but you have to structure things for them (you can try to get a computer to structure things, and sometimes with enough patterns to consult it’ll figure it out, but it’ll miss tons and tons of relationships that are obvious to any person).

There’s a huge mistake being made (and frequently published) by even very smart, often technically competent people. That mistake is believing that human thinking is simply this kind of pattern matching done at almost incomprehensible scale, and that if we’re able to build software that can do it, we’ll have computers that work like people. I’m not a neurologist (or whatever the appropriate discipline is) so I can’t prove this point academically, but I have spent most of my time working with software engineers, data scientists, and venture capitalists, so I can tell you that this is exactly the kind of mistake these people have been making their entire careers.

But I’m not here to judge. Instead, I’m here to warn everyone — again — that the world-shaking impact of generative, model-based software (“AI”) is not going to be its ability to replace what people do, because despite what everyone is extrapolating, we don’t have a ton of evidence that a lot of important work can be done effectively by a pattern matching robot, no matter how much training data it has. I guess anything could happen, but the reason I’m not that interested in it is that “high competence” is not the most likely effect of scale. We didn’t see it in cloud computing, social networking, cryptocurrency, or anything else we threw money at.

Instead, the most likely effect of scale is (1) more supply, and (2) more trash. It will probably become very, very, very cheap to get an image of basically anything you can describe generated by a computer using pattern matching. If you’re willing to accept VC/hyperscaler-subsidized pricing, we’re already there. An entirely separate question is “how good of an image can you get from pattern matching”, which depends on everything from what the image is, to how you define “good”, to whether you believe in the existence of any sort of intellectual property rights. That question is extremely important if you are looking to replace the actual image creation skills that exist today with cheap, high speed pattern matching, but it’s not important at all if the goal involves making tons and tons of pattern-derived middling trash.

And that’s the thing! There are many business models that involve leveraging the ability to make middling trash much more quickly and easily than before. The most obvious one is spam, which is both a huge business and something predicated entirely on scale, low cost, and finding the minimum level of plausibility necessary to work. Sure, spam is also a huge social and economic problem because it exhausts people and erodes trust, but if you make money by making spam, that’s someone else’s problem.

Music & Spam

Music now has a spam problem. More accurately, music’s existing spam problem is now exponentially worse, to the point where it’s challenging the viability of the entire economic enterprise. Now, I happen to think this will get slowed or resolved simply because there’s so much money in music, and it’s an industry that isn’t new to the idea of protecting its intellectual property. Not all of that money is smart, but I think Spotify + Labels will be better at teaming up to protect their shared interests than an army of small-time grifters.

But there’s a lesson here for other industries that might not be as ready to fight spam. This is the challenge that AI will create, and it’s much scarier because unlike a lot of AI doom scenarios, it’s not speculation or based on extrapolating massive capability improvements that don’t exist yet but maybe seem possible if you squint or just raised a Series C. AI tools can generate better spam, more spam, and forms of spam that weren’t viable five years ago, and they can do it right now. The spam is stupid, of course, but almost all spam is stupid, so it doesn’t matter. In fact, while AI spam is stupid, it’s often less stupid than non-AI spam, not because AI is so great but because spam is so bad. It’s a legitimate revolution.

Honestly, I think the term “slop” for the AI detritus we’re all seeing everywhere is pretty good, so I don’t have a lot of pushback on it. But I do think we should remember that in most cases, slop has the same objective as spam: not to replace the work we do, but to overwhelm society with scale so it can collect the economic value that would ordinarily go to the superior, “real” product. When spam succeeds, it’s because it is so overwhelming that the distinctions between “good” and “merely plausible” go away, and whoever gets more at-bats wins by default. Spam always gets more at-bats than real work, and AI will always get more at-bats than real content.

For all of the futurism and noodling about guaranteed income or whatever, there’s no real reason to think this technology won’t plug into actual, current economic behaviors and systems, and simply find the shortest path to money. That’s how capitalism works, especially when unchecked, and the American capitalism of 2026 is more unchecked than any in my lifetime. So maybe AI will iterate for however long it takes to generate music as good as what people make, or to become a useful workflow tool that actually improves the work musicians do (please, please, learn to mix and EQ my tracks for me). But there’s no maybe about whether AI will be used by third parties to extract value from existing markets at the expense of those markets, their buyers, and their sellers, because it’s an incredible tool for that.

It brings me no joy to predict this, but if I’m being honest, I expect this exact type of problem to be the defining legacy of everything we’re calling “AI” today.