The refrain is familiar by now: "LLMs create nothing new. They just regurgitate training data."
Writers say it. Researchers say it. Developers say it. The argument shows up in lawsuits, professional forums, congressional testimony.
And they're right. LLMs don't create genuinely new knowledge. They surface, synthesize, recombine what already exists.
Here's what makes this critique uncomfortable: Neither did you. Mostly.
Even writers, who do create new stories, build on familiar patterns. The hero's journey. The meet-cute. The unreliable narrator. They use story hooks audiences already recognize to keep them engaged. New combinations of existing elements, not creation from nothing.
The Interpreter's Labor
Think about the senior developer who's invaluable to their team. They don't write revolutionary code. What they do is remember. The pagination bug gets fixed with that specific React pattern. This API call needs this particular header. That Stack Overflow answer from 2019 that everyone scrolls past. Their value? Navigating existing solutions faster than anyone else.
Or consider the literature PhD spending years cataloging smell references in Victorian novels. Dr. Ally Louks's work on olfactory imagery didn't invent new scents or discover hidden texts. It required extraordinary diligence: thousands of pages read, databases built, patterns documented that had always been there. The contribution was real. The knowledge wasn't new.
This is interpretive labor. Finding things, connecting them, surfacing what exists. The journalist who knows which municipal archives hold the smoking gun. The consultant who's seen this problem at three other companies. The lawyer who remembers the precedent.
We built entire credentialing systems around this. PhDs and professional certifications don't just signal competence; they guarantee someone did the boring work. Reading everything. Remembering everything. Connecting everything. We needed those credentials because interpretive labor was hard and valuable.
But somewhere along the way we told ourselves this work required rare genius. That pattern-matching across knowledge was thinking. That synthesis was creativity. That interpretation was expertise.
Turns out it might have been diligence all along.
The Uncomfortable Truth
Most knowledge is repetition. Most wisdom is pattern detection.
This doesn't diminish the work. Reading every Victorian novel takes years. Doing it carefully takes real skill. But the insight that emerges usually isn't a logical leap. It's aggregation at scale, done systematically.
LLMs do something functionally similar: they make the corpus queryable. You don't need to read every novel to find patterns in olfactory imagery. You ask. You don't need to remember every Stack Overflow answer. You query the aggregate.
Critics emphasize what these systems can't do. They hallucinate. They can't truly reason. They lack the nuanced judgment that comes from deep expertise. All true. But the critique misses what they can do: turn static knowledge into responsive oracles. The power isn't creating something new. It's making everything that already exists accessible, connectable, synthesizable at speeds no human can match.
This is where the fear lives. If my value was doing the diligence, and AI does the diligence now, what do I have?
The Path Forward
When execution gets easier, humans move to higher abstraction. Farmers stopped tilling by hand and started planning crop rotations for bigger yields. We don't dig ditches manually. We don't navigate by compass and sextant.
The cognitive capacity doesn't disappear. It redirects.
The developer not spending hours debugging can architect new systems. The researcher not cataloging references manually can ask questions nobody thought to ask. The journalist not filing endless FOIA requests can investigate patterns across decades.
This requires something uncomfortable: admitting the interpretive work wasn't the ceiling. It was the foundation. AI doesn't replace expertise. It commoditizes the diligence that expertise required.
Here's an example. You want to build a trading algorithm. No real coding background, just some PineScript knowledge and a hypothesis about market patterns. Ten years ago that idea dies immediately; the gap between concept and validation is too wide. Today you can test it. AI helps you code, iterate, validate. The barrier isn't grinding through syntax anymore. It's whether your hypothesis works.
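To make that concrete, here's the kind of throwaway prototype a conversation with an AI assistant can produce in an afternoon. It's a minimal sketch, not a real strategy: the random-walk prices, the 10/50-day moving-average windows, and the all-in/all-out rule are placeholders for whatever data and hypothesis you actually bring.

```python
# Minimal sketch: backtest a moving-average crossover hypothesis on synthetic prices.
# Everything here is illustrative -- the fake price series and the crossover rule
# stand in for your own data and your own idea.
import random

random.seed(42)

# Fake a year of daily closes with a slight upward drift, standing in for real data.
prices = [100.0]
for _ in range(251):
    prices.append(prices[-1] * (1 + random.gauss(0.0005, 0.01)))

def sma(series, window, i):
    """Simple moving average of the `window` values ending at index i (inclusive)."""
    return sum(series[i - window + 1 : i + 1]) / window

fast, slow = 10, 50
cash, shares = 10_000.0, 0.0

for i in range(slow, len(prices)):
    price = prices[i]
    if sma(prices, fast, i) > sma(prices, slow, i) and shares == 0:
        shares, cash = cash / price, 0.0    # bullish crossover: go all in
    elif sma(prices, fast, i) < sma(prices, slow, i) and shares > 0:
        cash, shares = shares * price, 0.0  # bearish crossover: exit to cash

final = cash + shares * prices[-1]
buy_and_hold = 10_000.0 * prices[-1] / prices[slow]
print(f"strategy: {final:,.0f}   buy-and-hold: {buy_and_hold:,.0f}")
```

Whether a crossover like this actually holds up is beside the point. The point is that the syntax is no longer where the idea dies.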
Work that used to need credentials and decades can be prototyped by anyone with diligence and curiosity. The question shifts from "who did the reading?" to "who has the interesting question?"
The Choice
You're where coal miners were when told their jobs were disappearing. But there's a difference: this technology can also amplify you.
Use AI for interpretive work. Do it faster, better. Free up capacity for spaces AI can't follow yet. Genuine synthesis. Novel questions. Frameworks not in the training data. Be the developer using Copilot for boilerplate while focusing on architecture. The researcher using LLMs to surface cross-field connections and to ask new questions. The writer treating AI as a research assistant, focusing on the voice and insight only you can provide.
Or fight to preserve manual diligence. Build credentialist walls. Emphasize AI imperfections. Treat this as something to resist instead of master.
Both make sense. The first is harder because it means confronting uncomfortable truths about your work. The second is tempting because it lets you believe your value was always the interpretation, not what comes after.
Only one path goes forward, though.
This is humanity-shaking technology. Baby steps or giant leaps, your choice. The interpretive labor that defined expertise is getting commoditized. What you do with the freed cognitive space? That's where value comes from next.
The question isn't whether LLMs create something new. It's whether you will.
PS: Written with Claude.