In 1985, my dad brought home a portable computer. It was the size of a small suitcase — an Olivetti M21, issued by the utility company where he worked as a senior manager. We sat together and figured out how it worked. That machine gave me my love of computers.
But that's not the point of this story.
Pretty soon, my dad discovered spreadsheets. He put them to work on everything: sales analysis, revenue forecasts, what-if models nobody asked him to build. He even used them to tally my mom's gradebook. His coworkers started calling him the Lotus 1-2-3 guru. One day he told me, with the particular pride of a man who has found his superpower: "I am doing things in an hour that the Fortran and COBOL programmers would do in months."
Does that sound familiar?
There is a term economists use for the fear my dad's coworkers probably felt: the lump of labour fallacy. The mistaken belief that there is a fixed amount of work in the world — and that if a machine does some of it, humans get less. A British economist named David Schloss coined the phrase in 1891 to describe why factory workers opposed being paid by the piece rather than by the hour. They feared that working faster would use up all the available work and leave them unemployed.
The fear is old. And history has disproven it, every single time.
The spreadsheet is the proof.
Since 1980, roughly 400,000 bookkeeping and accounting clerk jobs have disappeared in the United States. VisiCalc, then Lotus 1-2-3, then Excel automated the manual arithmetic that used to fill entire departments. The "human calculator" job largely vanished.
But over that same period, 600,000 accounting and financial analyst jobs were added. Higher-paying. More strategic. More interesting. The tool didn't destroy the profession — it elevated it. It phased out the ledger-balancing job and created the what-if analysis job. The nature of the work changed. The amount of work grew.
Still, even brilliant people get this wrong.
In 2016, Geoffrey Hinton — the Godfather of AI — stood at a conference in Toronto and said: "I think if you work as a radiologist, you're like the coyote that's already over the edge of the cliff but hasn't looked down yet. People should stop training radiologists now. It's just completely obvious that within five years, deep learning is going to do better than radiologists."
His five-year deadline expired in 2021. Demand for radiologists is higher than ever. Medical imaging volume has grown so fast that hospitals now need AI just to keep up — and over 30% of radiologists use it as a triage tool. Hinton himself has since walked back the "stop training" comment.
The coyote found a jetpack.
When a powerful new tool arrives, the instinct is to look at what it replaces. The augmentation story is harder to see. But it's the one that keeps being true.
Here's the thing about knowledge workers. We are bound by three constraints: time, skill, and creativity. For most of history, you could only do as much as your hours and expertise allowed. A designer who couldn't code was stuck. A writer who couldn't build was stuck. A manager with a brilliant idea had to wait weeks for someone to model it out.
AI is hitting the first two constraints hard. Non-programmers can write code. Programmers can design. Tasks that took days take minutes. My dad's boast applies again today, word for word.
But creativity? Creativity has no ceiling. You cannot automate human ingenuity. You can only give it more room to run.
There's a subtler point here too. AI is extraordinary at generating options — 500 slogans, 20 architectural sketches, a dozen versions of a business model. What it cannot do is tell you which one matters. That requires judgment: the ability to look at a field of possibilities and know, based on taste and experience and context, which direction is worth pursuing. When skill and time stop being the bottleneck, judgment becomes the thing that sets people apart. And judgment — real judgment — is the accumulated residue of everything you've lived and learned. It doesn't compress. It doesn't scale. It's yours.
The most exciting part of this moment isn't what AI does for us today. It's what we're going to invent with it — the uses nobody has thought of yet, the problems nobody tried to solve because they seemed too hard, the creative work that was always possible but never feasible.
My dad was not a programmer. He was a manager at a utility company who picked up a tool and started tinkering. He wasn't trying to transform his industry, and nobody handed him a roadmap. His coworkers hadn't imagined what he'd do with that machine. The Fortran programmers certainly hadn't.
I don't think we'll have one either. What comes next with AI is genuinely hard to predict — and I'd be suspicious of anyone who claims otherwise. But I find that reassuring rather than unsettling. The history of powerful new tools isn't a story of neat outcomes. It's a story of people surprising themselves. Of finding uses nobody imagined. Of the world getting bigger, not smaller.
That might happen again. I think it will. And that's enough.