“Am I so out of touch?”

February 5, 2026

Sometime around my freshman year of college, my Dad’s company gave him a phone with Windows CE on it. It quickly became a running joke in my house, because the phone was both kind of incredible and the dumbest thing in the world. Dad hated it, surely in part because he didn’t like the idea of work trying to bother him at home, but also because he was an engineer and the phone was a truly bizarre collection of resource allocation choices. It had a bunch of Windows-style bells and whistles that made it sort of feel like Windows, but it was laughably bad at basically any task you’d ever actually want to do on a phone. It took forever to turn on. It was impossibly slow. Even reading and responding to an email was a huge chore with a fiddly little plastic stylus. If you had asked us in 2002 whether smartphones would take over the world in less than a decade, I think we would have had a good laugh.

The two major shifts that changed that trajectory were the Blackberry and the iPhone. I never had a Blackberry, but I remember my wife’s office giving her one temporarily for some reason, and the difference in philosophy was obvious. I have no idea how many different things the Blackberry did or presumed to do, but it was really, really good at reading and sending short emails. I know this because my wife, at the time, had zero experience doing anything productive on a smartphone — I’m not even sure we knew how to text each other yet — and within a couple of hours she looked like a legislative aide trying to push a bill through committee. She was hooked on that thing, to the point that I made her promise not to keep it if they let her. But as a pure messaging product, the Blackberry was a home run because (at least at first) it made vastly better resource prioritization decisions and focused entirely on improving the one experience that was actually useful: triaging emails on the go.

The iPhone was an even more impressive achievement because it made Blackberry-level allocation and optimization decisions in service of Microsoft’s much more ambitious goal, which was mobile, general-purpose computing. And while, without a doubt, there were (and continue to be) amazing hardware innovations that made the iPhone idea work, they were all in service of software that was extraordinarily well designed and built for the task. Multitouch hardware is cool, but multitouch hardware attached to an entire platform built around pinching and pulling and tapping actually solves many of the problems that Windows CE simply stared at blankly and hoped people would accept.

Today, it’s easy enough to think about my Dad’s old phone (or “Pocket PC”, or whatever it was called back then) and conclude that Microsoft basically had this right and just dropped the ball. But they never actually had it, because their “it” was just “small computers”. They could have banged away at that for another twenty years with incrementally better hardware and made the horrendous, Start-menu-clicking Windows CE experience marginally better, but they never would have actually figured out what we have today. They were never going to solve a problem that required transcendently great software, because they make shitty software and they don’t think that’s a problem. Instead, their solution was to push their laughably bad devices (sorry, “partner devices”) to corporate customers and try to pressure the world into adopting them. In fact, it’s a credit to Microsoft’s incredible sales and distribution skills that people like my Dad even had these devices in their hands at all, as they provided almost zero utility, were pretty expensive, and generated absolutely no mainstream, organic demand. They were stupid, fundamentally broken, almost nobody wanted them, and yet, through sheer force of dollars and executive desire, they were a real product category, frequently discussed in business media as “the future”.

Sound familiar?

“No… it’s the children who are wrong.”

Ultimately, I think mobile “happened” because the software that powered it and defined the experience for the people using it made those massive leaps. The Blackberry made people realize the “tiny Windows PC” concept was stupid and insufficiently useful, and the iPhone gave people a fundamentally different way to interact with small devices that made them not just usable, but so usable that people became comfortable using them instead of desktops and laptops in many (most?) circumstances. There was no “adoption” problem because the products made obvious sense. In fact, at the enterprise level, the problem was backwards — companies didn’t know how to deal with the fact that everyone wanted to use their mobile device for work, and they actually dragged their feet and tried to slow the whole thing down.

One of the massive red flags about this generation of “AI” technology is the increasingly frequent (to the point of being almost constant now) reference to those pesky “adoption” problems. I get a little salty about “adoption” because it’s the kind of thing people complain to Product Marketing about and feel like my team should be able to fix, or should have prevented in the first place. And sure, every so often, there’s an awareness or education problem, and that’s something we can tackle. But much more frequently, people are aware of what we want them to do, and they do know how to do it. They just don’t want to do it, dislike the process of doing it, or don’t think it’s worth the effort. As a last-ditch effort, I can always argue with them about it, but the honest truth is that if you’re arguing, you’re losing. We probably just made something no one wants or needs, or maybe even something that is actively bad.

Windows CE had adoption problems, too. I guess we could have written op-eds about how the world of 2002 needed to “transform itself” into a “mobile-first” world — I assume people wrote these, honestly — but none of that actually happened, because the world wasn’t going to reorganize itself to mitigate the problems that shitty, uncreative software had created for new, unproven products. In reality, the world only grudgingly adapted once transcendently great software made these products shockingly, undeniably useful for interacting with the world around them. Adaptation is a reaction, after all. Businesses don’t preemptively adapt to anything unless they really enjoy wasting time and money.

If generative AI actually matters, its utility or ease of use will force the changes today’s investors and consultants insist we make ahead of time — a trail to be blazed, so to speak, versus a red carpet to be rolled out. To the extent those changes exist today, they have very little to do with legitimate business and a lot more to do with society, as generative AI’s most dramatic efficiency improvement so far has been the creation of spam, slop, and disinformation. The world really is scrambling to adapt to massive adoption of the technology by bad actors, because their preferred workflows really are better with software that uses this technology. It’s disgusting and gross, but it’s not illogical, and it doesn’t require think pieces on adoption.

But these use cases, in addition to being socially dubious, are also pretty niche in the grand scheme of things. For returns that justify the investments being made, it’s going to take great, innovative software that turns whatever value exists in probability-based asset generation into something useful and accessible to people. Hilariously, though, the one legitimate area this technology seems to be impacting is… creative software development! That’s right — creating derivative, marginally customized versions of software that has existed forever is now incredibly cheap, and incredibly compelling to companies. So while what the industry desperately needs is a completely new set of original software ideas, that same industry is aggressively discouraging craft and originality in software. In short, I don’t believe generative AI will really matter to the broader economy (outside of unsustainable data center expenditures) by remixing old ideas and conventions. It’s just not that kind of innovation. We got away with — and I personally profited from — twenty years of building cloud-based versions of things that we knew people wanted and needed. And while it didn’t work every time, there were tons and tons AND TONS of cases where that made sense, and simply bolting “internet” onto something useful made it better for a variety of reasons. That same approach is clearly being taken by today’s AI founders (stick AI onto things), but the benefits of AI are way, way more nebulous, the downsides more obvious, and the costs higher than anything in the cloud era. You’re seeing that manifest in the “adoption problem” that simply isn’t going away.

For some amount of time, you can just charge ahead and hope some version of Moore’s Law kicks in and everything gets easier on its own. That certainly seems to be the plan, albeit one crafted and promoted by a bunch of executives my age and younger who have been drafting off of the actual Moore’s Law their entire professional lives. I’m not sure they know what else to try, which is yet another reason why I think this is going to end very, very badly.