
Interpreters of Maladies. LLMs create nothing new. Neither did you. Mostly

The refrain is familiar by now: "LLMs create nothing new. They just regurgitate training data."

Writers say it. Researchers say it. Developers say it. The argument shows up in lawsuits, professional forums, congressional testimony. 

Even writers, who do create new stories, build on familiar patterns. The hero's journey. The meet-cute. The unreliable narrator. They use story hooks audiences already recognize to keep them engaged. New combinations of existing elements, not creation from nothing.

And the critics are right. LLMs don't create genuinely new knowledge. They surface, synthesize, recombine what already exists.

Here's what makes this critique uncomfortable: Neither did you. Mostly.

The Interpreter's Labor

Think about the senior developer who's invaluable to their team. They don't write revolutionary code. What they do is remember. The pagination bug gets fixed with that specific React pattern. This API call needs this particular header. That Stack Overflow answer from 2019 everyone scrolls past. Their value? Navigating existing solutions faster than anyone else.

Or consider the literature PhD spending years cataloging smell references in Victorian novels. Dr. Ally Louks's work on olfactory imagery didn't invent new scents or discover hidden texts. It required extraordinary diligence: thousands of pages read, databases built, patterns documented that had always been there. The contribution was real. The knowledge wasn't new.

This is interpretive labor. Finding things, connecting them, surfacing what exists. The journalist who knows which municipal archives hold the smoking gun. The consultant who's seen this problem at three other companies. The lawyer who remembers the precedent.

We built entire credentialing systems around this. PhDs and professional certifications don't just signal competence, they guarantee someone did the boring work. Reading everything. Remembering everything. Connecting everything. We needed those credentials because interpretive labor was hard and valuable.

But somewhere along the way we told ourselves this work required rare genius. That pattern-matching across knowledge was thinking. Synthesis was creativity. Interpretation was expertise.

Turns out it might have been diligence all along.

The Uncomfortable Truth

Most knowledge is repetition. Most wisdom is pattern detection.

This doesn't diminish the work. Reading every Victorian novel takes years. Doing it carefully takes real skill. But the insight that emerges usually isn't a logical leap. It's aggregation at scale, done systematically.

LLMs do something functionally similar: they make the corpus queryable. You don't need to read every novel to find patterns in olfactory imagery. You ask. You don't need to remember every Stack Overflow answer. You query the aggregate.

Critics emphasize what these systems can't do. They hallucinate. They can't truly reason. They lack the nuanced judgment that comes from deep expertise. All true. But it misses what they can do: turn static knowledge into responsive oracles. The power isn't creating something new. It's making everything that exists accessible, connectable, synthesizable at speeds no human matches.

This is where the fear lives. If my value was doing the diligence, and AI does the diligence now, what do I have?

The Path Forward

When execution gets easier, humans move to higher abstraction. Farmers stopped tilling by hand, started planning crop rotations for bigger yields. We don't dig ditches manually. Don't navigate by compass and sextant.

The cognitive capacity doesn't disappear. It redirects.

The developer not spending hours debugging can architect new systems. The researcher not cataloging references manually can ask questions nobody thought to ask. The journalist not filing endless FOIA requests can investigate patterns across decades.

This requires something uncomfortable: admitting the interpretive work wasn't the ceiling. It was the foundation. AI doesn't replace expertise. It commoditizes the diligence that expertise required.

Here's an example. You want to build a trading algorithm. No coding experience, just some PineScript knowledge and a hypothesis about market patterns. Ten years ago this idea dies immediately. The gap between concept and validation is too wide. Today you can test it. AI helps you code, iterate, validate. The barrier isn't grinding through syntax anymore. It's whether your hypothesis works.
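Here's roughly what that concept-to-validation loop looks like today, sketched in Python. Everything in it is illustrative, not from the post: the moving-average crossover rule stands in for "a hypothesis about market patterns," and the prices are a synthetic random walk, not real market data.

```python
# Minimal sketch: backtest a hypothetical moving-average crossover rule
# against synthetic prices. The rule, parameters, and data are all
# illustrative assumptions -- the point is how cheap the test loop is.
import random

def moving_average(prices, window):
    """Trailing moving average; None until enough history exists."""
    return [
        sum(prices[i - window + 1 : i + 1]) / window if i >= window - 1 else None
        for i in range(len(prices))
    ]

def backtest_crossover(prices, fast=5, slow=20):
    """Hold a long position on days when yesterday's fast MA was above
    the slow MA; return the total profit and loss in price units."""
    fast_ma = moving_average(prices, fast)
    slow_ma = moving_average(prices, slow)
    pnl = 0.0
    for i in range(1, len(prices)):
        if fast_ma[i - 1] is not None and slow_ma[i - 1] is not None:
            if fast_ma[i - 1] > slow_ma[i - 1]:
                pnl += prices[i] - prices[i - 1]
    return pnl

# Synthetic random-walk prices, so the sketch runs standalone.
random.seed(42)
prices = [100.0]
for _ in range(999):
    prices.append(prices[-1] + random.gauss(0, 1))

print(f"Total P&L: {backtest_crossover(prices):.2f}")
```

Whether the hypothesis survives a real backtest is a separate question; the point is that the syntax grind between idea and answer has collapsed to a few dozen lines.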

Work that used to need credentials and decades can be prototyped by anyone with diligence and curiosity. The question shifts from "who did the reading?" to "who has the interesting question?"

The Choice

You're where coal miners were when told their jobs were disappearing. But there's a difference: this technology can also amplify you.

Use AI for interpretive work. Do it faster, better. Free up capacity for spaces AI can't follow yet. Genuine synthesis. Novel questions. Frameworks not in the training data. Be the developer using Copilot for boilerplate while focusing on architecture. The researcher using LLMs to surface cross-field connections and asking new questions. The writer treating AI as research assistant, focusing on voice and insight only you provide.

Or fight to preserve manual diligence. Build credentialist walls. Emphasize AI imperfections. Treat this as something to resist instead of master.

Both make sense. The first is harder because it means confronting uncomfortable truths about your work. The second is tempting because it lets you believe your value was always the interpretation, not what comes after.

Only one path goes forward, though.

This is humanity-shaking technology. Baby steps or giant leaps, your choice. The interpretive labor that defined expertise is getting commoditized. What you do with the freed cognitive space? That's where value comes from next.

The question isn't whether LLMs create something new. It's whether you will.


PS: Written with Claude.
