Wednesday, January 14, 2026

Interpreters of Maladies: LLMs Create Nothing New. Neither Did You. Mostly

The refrain is familiar by now: "LLMs create nothing new. They just regurgitate training data."

Writers say it. Researchers say it. Developers say it. The argument shows up in lawsuits, professional forums, congressional testimony. 

And they're right. LLMs don't create genuinely new knowledge. They surface, synthesize, and recombine what already exists.

Here's what makes this critique uncomfortable: neither did you. Mostly.

Even writers, who do create new stories, build on familiar patterns. The hero's journey. The meet-cute. The unreliable narrator. They use story hooks audiences already recognize to keep them engaged. New combinations of existing elements, not creation from nothing.

The Interpreter's Labor

Think about the senior developer who's invaluable to their team. They don't write revolutionary code. What they do is remember. The pagination bug gets fixed with that specific React pattern. This API call needs this particular header. That Stack Overflow answer from 2019 everyone scrolls past. Their value? Navigating existing solutions faster than anyone else.

Or consider the literature PhD spending years cataloging smell references in Victorian novels. Dr. Ally Louks's work on olfactory imagery didn't invent new scents or discover hidden texts. It required extraordinary diligence: thousands of pages read, databases built, patterns documented that had always been there. The contribution was real. The knowledge wasn't new.

This is interpretive labor. Finding things, connecting them, surfacing what exists. The journalist who knows which municipal archives hold the smoking gun. The consultant who's seen this problem at three other companies. The lawyer who remembers the precedent.

We built entire credentialing systems around this. PhDs and professional certifications don't just signal competence, they guarantee someone did the boring work. Reading everything. Remembering everything. Connecting everything. We needed those credentials because interpretive labor was hard and valuable.

But somewhere along the way we told ourselves this work required rare genius. That pattern-matching across knowledge was thinking. Synthesis was creativity. Interpretation was expertise.

Turns out it might have been diligence all along.

The Uncomfortable Truth

Most knowledge is repetition. Most wisdom is pattern detection.

This doesn't diminish the work. Reading every Victorian novel takes years. Doing it carefully takes real skill. But the insight that emerges usually isn't a logical leap. It's aggregation at scale, done systematically.

LLMs do something functionally similar: they make the corpus queryable. You don't need to read every novel to find patterns in olfactory imagery. You ask. You don't need to remember every Stack Overflow answer. You query the aggregate.

Critics emphasize what these systems can't do. They hallucinate. They can't truly reason. They lack the nuanced judgment that comes from deep expertise. All true. But this misses what they can do: turn static knowledge into responsive oracles. The power isn't creating something new. It's making everything that exists accessible, connectable, synthesizable at speeds no human matches.

This is where the fear lives. If my value was doing the diligence, and AI does the diligence now, what do I have?

The Path Forward

When execution gets easier, humans move to higher abstraction. Farmers stopped tilling by hand, started planning crop rotations for bigger yields. We don't dig ditches manually. Don't navigate by compass and sextant.

The cognitive capacity doesn't disappear. It redirects.

The developer not spending hours debugging can architect new systems. The researcher not cataloging references manually can ask questions nobody thought to ask. The journalist not filing endless FOIA requests can investigate patterns across decades.

This requires something uncomfortable: admitting the interpretive work wasn't the ceiling. It was the foundation. AI doesn't replace expertise. It commoditizes the diligence that expertise required.

Here's an example. You want to build a trading algorithm. No formal coding experience, just some PineScript knowledge and a hypothesis about market patterns. Ten years ago this idea dies immediately. The gap between concept and validation is too wide. Today you can test it. AI helps you code, iterate, validate. The barrier isn't grinding through syntax anymore. It's whether your hypothesis works.

Work that used to need credentials and decades can be prototyped by anyone with diligence and curiosity. The question shifts from "who did the reading?" to "who has the interesting question?"

The Choice

You're where coal miners were when told their jobs were disappearing. But there's a difference: this technology can also amplify you.

Use AI for interpretive work. Do it faster, better. Free up capacity for spaces AI can't follow yet. Genuine synthesis. Novel questions. Frameworks not in the training data. Be the developer using Copilot for boilerplate while focusing on architecture. The researcher using LLMs to surface cross-field connections and asking new questions. The writer treating AI as research assistant, focusing on voice and insight only you provide.

Or fight to preserve manual diligence. Build credentialist walls. Emphasize AI imperfections. Treat this as something to resist instead of master.

Both make sense. The first is harder because it means confronting uncomfortable truths about your work. The second is tempting because it lets you believe your value was always the interpretation, not what comes after.

Only one path goes forward, though.

This is humanity-shaking technology. Baby steps or giant leaps, your choice. The interpretive labor that defined expertise is getting commoditized. What you do with the freed cognitive space? That's where value comes from next.

The question isn't whether LLMs create something new. It's whether you will.


PS: Written with Claude.

Friday, October 17, 2025

Your CLI Agent Is Your New Sysadmin



Why struggle through the nonsense of command-line errors and dependency debugging when you can command the power of a thousand suns and the sum total of human knowledge and say: "Get this working."

The Terminal Has Always Been Powerful (And Intimidating)

The command line is where real work gets done. Installing packages, managing dependencies, compiling code, debugging cryptic errors, setting up development environments—it's all terminal work. And it's always been a gauntlet for newcomers.

Take Python dependency management. We've had venv, then conda, and lately uv has taken the Python world by storm. Each tool is powerful. Each one is a puzzle to use if you're new. Version conflicts, environment mismatches, system-wide installations clashing with project-specific needs—it's a mess that's always required you to understand virtual environments, package registries, version resolution, and system paths.

And when something breaks? You're diving into Stack Overflow for your third search of the afternoon, parsing error messages, trying solutions, hitting new errors, searching again.

Let The Agent Operate Your Terminal

My son was setting up a CUDA development environment recently. Tutorials, code snippets, the usual learning process. At some point, he installed PyTorch—the CPU-only build from pip, not the CUDA-enabled version he needed. When he ran his script, it threw an error about CUDA.

The old approach: Google the error. Read Stack Overflow threads. Figure out you need the CUDA build. Uninstall the wrong version. Find the right installation command with proper CUDA toolkit flags. Reinstall. Hope you got it right. Try again.

What he actually did: pasted the error into his CLI coding agent and said "fix this." The agent uninstalled the CPU version, installed the CUDA build, and executed the script successfully.

He didn't touch his terminal. The agent operated it for him.

This Is The Pattern

CLI agents like Claude Code, Gemini CLI, and Qwen Code don't just write code: they run your terminal. They execute commands, see errors in real time, fix what's broken, and iterate until it works.

Consider a common scenario: a version conflict. Your script needs an older version of a package, but you have the latest installed. You get an AttributeError because a method your script relies on was deprecated and then removed.
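To make that concrete, here's a hypothetical instance of the failure, using pandas (which deprecated and then removed DataFrame.append in 2.0) as the example:

import pandas as pd

df = pd.DataFrame({"price": [101.5]})

# On pandas >= 2.0 this line raises AttributeError: DataFrame.append was removed.
# df = df.append({"price": 102.0}, ignore_index=True)

# Fix 1, which the agent might choose: pin the old version in requirements.txt (pandas<2.0).
# Fix 2, the forward-compatible rewrite:
df = pd.concat([df, pd.DataFrame({"price": [102.0]})], ignore_index=True)
print(df)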

Traditional flow: search the error, identify the version issue, check what you have installed, downgrade the package, update your requirements file, rerun.

With a CLI agent: paste the error. Watch it handle everything automatically.

Or compilation errors. Or missing system dependencies. Or path issues. Or permission problems. Whatever breaks in your terminal, the agent can see it, understand it, and fix it.

The Real Shift

Here's what makes this transformative: the agent isn't advising you on terminal commands. It's executing them.

You're not asking "how do I fix this?" You're saying "make this work" and delegating the entire workflow. The agent becomes your command-line operator—running commands, reading output, fixing problems, trying again.

It handles the execution loop: run → fail → read error → fix → run again. The same loop you'd do manually, but without the frustration, without the fatigue, without needing to understand every underlying system.
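The loop itself is simple enough to sketch. A minimal version in Python, where the hypothetical fix_code callback stands in for the model reading the error and patching your files:

import subprocess

def run_until_success(cmd, fix_code, max_attempts=5):
    """Agent execution loop: run -> fail -> read error -> fix -> run again."""
    for _ in range(max_attempts):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True              # success: the loop ends
        fix_code(result.stderr)      # hand the error to the model to patch files
    return False                     # give up; a human takes over

Everything interesting lives in fix_code; the scaffolding around it is trivial, which is exactly why this pattern generalizes to any terminal task.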

This isn't about replacing your knowledge. It's about removing friction from getting things done.

This isn't magic. The agent needs to see error messages to fix them, which means it needs to execute your code and commands. But that's the entire point. You give it access to operate your terminal, and it handles the tedious housekeeping—the dependency installations, the environment setup, the error-fix-retry cycles.

The agent reads the same error messages you would. It's just doing it at the speed of thought, with pattern-matching across thousands of similar cases, and without getting frustrated on the third failed attempt.

What This Means

The command line was always where real work happened. But it required you to be fluent in bash, understand package managers, know how to debug system errors, and have the patience to iterate through solutions.

Now? You need to know how to tell an agent what you want to accomplish.

The traditional tools and knowledge are still valuable and worth learning. But for everyday work—installing packages, fixing dependencies, setting up environments, handling the endless housekeeping of development—let the agent operate your terminal. It's better at it than most of us, and it never gets tired of circular dependencies.

First-time setup? You still need to install the CLI agent itself. But theoretically, it could handle everything else from there—even installing Python, setting up your entire development environment, whatever your system needs.

Stop fighting with cryptic error messages. Stop Googling for the right terminal commands. Just say: "Get this working."

And let your agent handle the terminal.

Thursday, October 16, 2025

Self-Learning CLI Agents: A Practical Guide


Building Systems That Get Smarter Every Day

Introduction

After months of building and refining self-learning systems with CLI agents, I've discovered something that is quite obvious in retrospect: the secret to continuously improving AI assistance isn't in the model itself—it's in the knowledge capture infrastructure surrounding it.

Recent research quantifies what practitioners have been discovering: systems that accumulate and refine their own contextual knowledge can outperform traditional approaches by 10.6% on agent tasks and 8.6% on domain-specific reasoning, while reducing adaptation costs by up to 87%.

Since I've been using this approach for a while, I thought a practical guide to building your own self-learning development system was in order.

I've experimented with this approach in multiple domains, including AMP for email development and TradingView PineScript code generation; the latter is published at https://github.com/NakliTechie/PineScriptCoder.

Both are relatively obscure domains in the overall scheme of things, yet the system adapted to each. That versatility suggests the approach holds value regardless of the specific technology or domain you're working with.


Why You Need This

The Problem You're Living With

Without systematic knowledge capture, you're experiencing:

  • Repeated Mistakes: Solving the same problems across projects because insights weren't preserved
  • Knowledge Loss: Valuable lessons disappear when team members move on or memories fade
  • No Measurable Progress: Can't tell if your processes are actually improving
  • Inconsistent Quality: Good patterns discovered but never systematically applied

The tragedy isn't making mistakes—it's making the same mistakes repeatedly because you have no institutional memory.

The Solution: Memory as Infrastructure

A self-learning system addresses these through three mechanisms:

  1. Systematic Capture: Every significant learning is documented in a structured format
  2. Organized Storage: Insights filed logically for easy retrieval
  3. Active Utilization: Learnings automatically feed back into development workflows

This creates a compounding effect: better context → better performance → better learnings → better context.

The key insight from research: LLMs don't need brevity—they need comprehensive, detailed contexts. Unlike humans who benefit from concise summaries, language models perform better with long, detailed contexts and can autonomously extract relevant information.


The Research Foundation (In Plain English)

ACE Framework: Contexts as Evolving Playbooks

Stanford and SambaNova research introduced Agentic Context Engineering (ACE), which treats contexts not as static prompts but as "evolving playbooks" that accumulate and organize strategies over time.

Two problems ACE solves:

  1. Brevity Bias: Traditional optimizers create concise, universal instructions while sacrificing domain-specific knowledge—the details that make systems work in production.

  2. Context Collapse: Iterative rewriting degrades contexts into shorter, more ambiguous summaries over time, causing sharp performance declines.

ACE's solution: Structured, incremental updates that preserve detailed knowledge. Think of it like version control for context—you add and refine, you don't rewrite from scratch.
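In code terms, the contrast looks roughly like this (a toy sketch, not the ACE implementation):

# Context collapse: each full rewrite can silently drop hard-won detail.
def rewrite_context(old_context, summarizer):
    return summarizer(old_context)        # lossy; detail degrades over iterations

# ACE-style incremental update: existing entries survive, new ones are appended.
def update_context(playbook, new_entry):
    if new_entry not in playbook:         # dedupe, but never discard detail
        playbook.append(new_entry)
    return playbook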

Training-Free GRPO: Learning Without Training

Tencent's research showed you can improve agent performance without any parameter updates—just by accumulating experiential knowledge as context.

With just a few dozen training samples, their approach outperformed fine-tuned models while avoiding overfitting and data scarcity issues.

The Implication

You don't need to fine-tune models. You don't need massive datasets. You need better memory management.


Implementation: Let Your Agent Build It

Here's the liberating truth: you don't need to manually create this infrastructure. Your CLI coding agent can build the entire learning system from broad guidelines.

This is meta-prompting in action—instead of manually writing templates and structures, you collaborate with your agent to design and implement the system itself.

Step 1: Bootstrap the System

Give your coding agent these requirements:

Create a comprehensive learning capture system for our development projects:

1. A directory structure separating different types of learnings
2. Templates that capture problems, solutions, and prevention strategies  
3. A consolidated quick-reference file for frequently-used knowledge
4. Integration with our development workflow
5. Clear documentation on usage

Make it practical, not bureaucratic. The system should make capturing 
knowledge easier, not harder.

Let the agent propose and implement. It will often produce something more comprehensive than you'd write manually.

Step 2: What the Agent Creates

Your agent will typically build:

/project/learning/
├── README.md                          # System overview
├── LESSONS_LEARNED_TEMPLATE.md        # Capture template
├── common_learnings.md                # Quick reference (KEY FILE)
└── categorized_learnings/
    ├── technical/                     # Technical solutions
    ├── process/                       # Workflow improvements
    ├── project_specific/              # Project insights
    └── framework_updates/             # System-wide changes

Critical component: The common_learnings.md file is your system's working memory. This is what the agent references during active development—not the detailed categorized files.

Step 3: The Capture Template

Your agent will design templates that capture:

For Technical Learnings:

  • Problem description and context
  • Root cause analysis
  • Solution (with before/after code)
  • Prevention strategies
  • Integration recommendations

For Process Improvements:

  • Current process description
  • Issues identified
  • Improved approach
  • Implementation guide
  • Measurable benefits

The template ensures completeness—when you capture knowledge, you capture it actionably.
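A filled-in technical entry might look like this (the specifics are a hypothetical example from AMP email work):

Problem: AMP email renders blank in Gmail's preview.
Root cause: Required amp4email boilerplate missing from the <head>.
Solution: Added the <style amp4email-boilerplate> block to the base template.
Prevention: Run every email through the AMP validator before sending.
Integration: Added the validator to the pre-send checklist.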

Step 4: Agent Integration

Add this to your agent's system prompt:

## Learning Capture Protocol

After every significant development session:

1. IDENTIFY learnings worthy of capture
   - Non-trivial technical solutions
   - Process improvements discovered
   - Patterns benefiting future projects
   - Mistakes with clear prevention strategies

2. CATEGORIZE appropriately
   - Technical / Process / Project-specific / Framework

3. UPDATE common_learnings.md
   - Add broadly applicable insights
   - Ensure immediate accessibility
   - Maintain organized structure

4. FILE detailed learning
   - Use complete template
   - Store in appropriate category
   - Cross-reference with common learnings

5. IDENTIFY documentation updates
   - Mark guides/checklists needing revision
   - Create action items
   - Prioritize by impact

Learning capture is NOT optional—it's core workflow.

Step 5: Active Utilization

Ensure your agent uses what it learns:

## Using the Learning System

During development:
1. CONSULT common_learnings.md at task start
2. APPLY relevant patterns from past experience
3. DOCUMENT new patterns discovered during work
4. UPDATE common learnings with applicable insights
5. FILE detailed learnings in categories

When solving problems:
- Reference past solutions before implementing from scratch
- Apply prevention strategies from similar issues
- Build on established patterns
- Note when patterns don't apply (document edge cases)

The self-reinforcing cycle: Execution → Learning → Documentation → Context → Better Execution
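The consult and capture steps are mechanical enough that a few lines of Python convey the whole cycle. A sketch, assuming the directory structure from Step 2:

from datetime import date
from pathlib import Path

LEARNING_DIR = Path("project/learning")

def consult(task):
    """Task start: prepend the quick-reference learnings to the prompt."""
    common = (LEARNING_DIR / "common_learnings.md").read_text()
    return f"{common}\n\n## Current task\n{task}"

def capture(insight, category="technical"):
    """Session end: update working memory, then file the detailed entry."""
    with (LEARNING_DIR / "common_learnings.md").open("a") as f:
        f.write(f"\n- {insight}")
    detail_dir = LEARNING_DIR / "categorized_learnings" / category
    detail_dir.mkdir(parents=True, exist_ok=True)
    (detail_dir / f"{date.today()}-entry.md").write_text(insight)  # full template in practice

In practice the agent runs both steps itself; the point of the sketch is how little machinery the cycle actually needs.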


Real-World Results: What Actually Happens

After a few iterations of living with this system:

Quantifiable Improvements

  • Reduced problem-solving time: Similar issues reference existing solutions
  • Fewer repeated mistakes: Prevention strategies actually prevent
  • Faster onboarding: New team members leverage accumulated knowledge immediately
  • Measurable consistency: Best practices applied consistently

Qualitative Changes

  • Compound learning: Each project makes future projects easier
  • Knowledge democratization: Expertise becomes accessible to entire team
  • Confidence in complexity: Willing to tackle harder problems
  • Reduced cognitive load: System remembers, you don't have to

The Surprise Benefit: Cross-Project Intelligence

Insights from one project automatically benefit unrelated projects. A performance optimization from web work improves database tasks. Validation techniques from forms apply to APIs.

This happens because common_learnings.md creates unified context. The system sees patterns across domains that individuals might miss.


The Evolution: From Basic to Optimized

Initial State

Start simple:

  • Basic directory structure
  • Simple template (problem/solution/lesson)
  • Manual capture after sessions
  • Agent references common learnings

Intermediate State

Refinements emerge:

  • Template complexity adjusted based on actual use
  • Common learnings reorganized by access frequency
  • Agent begins suggesting capture automatically
  • Cross-references between learnings appear

Mature State

The system optimizes itself:

  • Agent maintains optimal organization
  • Redundant learnings merged automatically
  • Common learnings updated in real-time
  • Prevention strategies actively applied

Key insight: My initial categorization worked, but lacked optimization for access. The common_learnings.md innovation came from observing usage patterns—I needed instant access during development, not perfect categorization.

This evolution happens naturally as you use the system. You don't need to plan for it—just start with the basics and let actual usage drive improvements.


Common Pitfalls and Solutions

Pitfall 1: Template Overload

Problem: Templates too complex, leading to inconsistent usage.

Solution: Start with problem/solution/lesson. Add complexity gradually based on actual needs.

Pitfall 2: Scattered Information

Problem: Perfect categorization that makes nothing findable during work.

Solution: The common_learnings.md file. Accept redundancy—information in both quick reference and detailed files is a feature, not a bug.

Pitfall 3: Inconsistent Application

Problem: Great system used occasionally.

Solution: Make it automatic via system prompt. Have agent remind you. Make skipping require conscious choice.

Pitfall 4: Write-Only Documentation

Problem: Learnings captured but never referenced.

Solution: System prompt must actively consult learnings during work. If the agent isn't referencing past knowledge, you're just documenting, not learning.

Pitfall 5: Brevity Bias

Problem: Keeping learnings concise, losing useful detail.

Solution: Remember the research insight: comprehensiveness beats conciseness for AI systems. Preserve details. Let the AI extract relevance.


Getting Started: Your Implementation Path

Phase 1: Co-Design with Your Agent

Describe your vision

  • Tell your coding agent what you want
  • Discuss your workflow and constraints
  • Review agent's architectural proposal

Let agent implement

  • Agent builds the infrastructure
  • Review generated templates and structure
  • Suggest refinements if needed

Manual testing

  • Capture 2-3 learnings yourself
  • Validate template completeness
  • Ensure organization makes sense

Time investment: ~1 hour for initial setup

Phase 2: Agent-Driven Capture

Add capture instructions

  • Update agent's system prompt
  • Define capture criteria clearly
  • Set expectations for when/what to document

Observe behavior

  • Does agent identify learnings appropriately?
  • Are captured learnings specific and actionable?
  • Is categorization logical?

Refinement

  • Adjust capture criteria based on observations
  • Simplify or expand template as needed
  • Begin building initial knowledge base

Time investment: ~30 minutes for setup, then observe over several sessions

Phase 3: Active Utilization

Add consultation instructions

  • Update system prompt for active reference
  • Ensure common_learnings.md in context
  • Define when agent should consult learnings

Monitor reference behavior

  • Does agent check common learnings?
  • Are past solutions being reused?
  • Is the feedback loop working?

Measure improvement

  • Track time to solve similar problems
  • Note reduction in repeated mistakes
  • Document instances of knowledge reuse

Time investment: ~20 minutes for setup, benefits accrue automatically

Phase 4: Optimization

Analysis

  • Ask agent to review what worked
  • Identify template complexity issues
  • Find access pattern inefficiencies

Reorganization

  • Adjust common learnings organization
  • Merge redundant entries
  • Improve categorization

Metrics and sharing

  • Establish baseline metrics
  • Document the system's value
  • Share approach with team

Time investment: ~1 hour for optimization, then ongoing maintenance is minimal

Phase 5: Autonomous Evolution

Let the agent maintain and optimize:

  • Agent adjusts organization based on usage
  • Automatic quality checks for captured knowledge
  • Cross-project knowledge synthesis
  • Team-level scaling if needed

The system improves itself from this point forward.

Key principle: Start with agent collaboration, not manual work. The agent that uses the system should also build and maintain it. Total setup time is 2-3 hours; the benefits compound indefinitely.


Measuring Success

Track these metrics:

Efficiency Metrics

  • Time to resolve similar issues: Should decrease as solutions accumulate
  • Repeated mistake frequency: Should approach zero for captured lessons
  • Onboarding time: Should decrease as knowledge base grows
  • Reference rate: Should increase as quality improves

Quality Metrics

  • Best practice consistency: Measured through reviews
  • Knowledge application rate: % of captured learnings actually used
  • Cross-project benefit: Insights from one project applied elsewhere
  • Prevention effectiveness: Issues avoided due to strategies

System Health Metrics

  • Learnings per project: Should be consistent
  • Common learnings reference frequency: Should be high during work
  • Categorization accuracy: Learnings filed where findable
  • Documentation freshness: Regular updates indicate active use

Critical metric: Time to first reference. If you can find relevant past knowledge in under 30 seconds, your system works. If it takes minutes, your organization needs work.
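A crude way to run that test against your own store (a sketch; adjust the root path to your layout):

from pathlib import Path

def find_learnings(keyword, root=Path("project/learning")):
    """Keyword scan across all learning files: the 30-second findability test."""
    hits = []
    for path in root.rglob("*.md"):
        for line in path.read_text().splitlines():
            if keyword.lower() in line.lower():
                hits.append(f"{path}: {line.strip()}")
    return hits

If find_learnings("pagination") comes back empty or noisy for a problem you know you've solved, the organization needs work.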


The Future: Where This Goes

Three converging trends:

  1. Context windows are exploding: 4K → 32K → 128K → 1M+ tokens
  2. Models handle long contexts better: Can actually use those million tokens effectively
  3. Context engineering is formalizing: Frameworks like ACE codify best practices

Implication: In 12 months, self-learning systems won't be advanced—they'll be standard. The competitive advantage belongs to teams mastering knowledge capture now.

What's Coming

  • Automated reflection: AI analyzing traces and suggesting learnings automatically
  • Cross-team synthesis: Learning systems sharing insights across organizational boundaries
  • Semantic knowledge graphs: Moving beyond flat files to relationship-based storage
  • Real-time adaptation: Systems updating based on immediate execution feedback
  • Failure prediction: Warning about likely problems based on accumulated experience

Conclusion: Memory as Competitive Advantage

The research validates what practice confirms: comprehensive, evolving contexts outperform traditional approaches while reducing costs.

But the deeper insight: This isn't about AI—it's about building institutional memory that compounds over time.

Your development environment should get smarter every time you use it. Your tools should accumulate wisdom. Your AI assistants should improve not because models update, but because context evolves.

Each problem solved becomes future capability.

This is the future of development: not smarter AI, but AI with better memory.

The question isn't whether you'll build a self-learning system. The question is whether you'll build it before your competitors do.



Most important: Start imperfectly today. A simple system used consistently beats a sophisticated system used sporadically. The entire setup takes 2-3 hours, and the system improves itself from there.



Monday, September 29, 2025

Meta-Prompting: Why AI Should Write Your System Prompts

I was wrestling with creating a series of seven engagement emails for a client—carefully crafted messages that demonstrated a new concept in email marketing. Each email needed specific components: client positioning, footer elements, engagement currency, and brand alignment. After hours of iteration, I finally had a complete seven-email series that worked.

But then I faced the real challenge: how could I replicate this entire sequence for other clients efficiently? More importantly, how could I enable other team members to create these sophisticated email sequences without going through the same lengthy development process?

Instead of trying to write a system prompt myself, I asked Gemini in AI Studio to help me create one. Through our collaborative conversation, we developed an interactive agent that was far more sophisticated than anything I could have written manually. The resulting system prompt was extraordinarily detailed—it included exact conversation flows, technical HTML specifications, state management across multiple phases, and even code commenting conventions. It created a two-phase system: first, a structured interview process that gathered requirements one question at a time, then a generation protocol that produced complete, production-ready HTML emails with AMP compatibility and fallbacks. 


As you can see in the attached image, AI-written prompts are far more detailed and specific.



I tested this AI-generated interactive agent across multiple clients and shared it with colleagues. It worked flawlessly, producing complete seven-email sequences that maintained consistency and quality. More importantly, it democratized the capability—anyone could now create sophisticated email campaigns without mastering email marketing strategy or spending hours on development.

This experience revealed something profound: we've been approaching AI prompt engineering backwards. While countless guides teach humans to craft better prompts, we're missing a fundamental insight—AI is often better at writing prompts for itself than we are. This collaborative approach democratizes capability, turning complex expertise into accessible tools.

Beyond Traditional Meta-Prompting

Meta-prompting isn't new. OpenAI's cookbook demonstrates using higher-intelligence models to optimize prompts, focusing on systematic refinement and evaluation metrics. Commercial tools like PromptPerfect automate prompt improvements for better clarity and performance.

But these approaches typically involve AI refining existing prompts. What we're describing is different: collaborative meta-prompting—where human insight and AI precision work together from problem discovery to system prompt creation. Rather than improving prompts we've already written, this approach starts with AI helping us understand what we actually need.

The Human-AI Partnership

This leverages what each does best. Humans excel at understanding context, articulating goals, and navigating real-world ambiguity. AI excels at precision, systematic thinking, and understanding its own operational patterns—including technical requirements, state management, conversation flows, and the granular details that make systems actually work in production.

The Process

1. Start with the Problem, Not the Prompt Describe what you're trying to accomplish in natural language. Don't worry about prompt structure—just explain the goal, context, and constraints.

2. Explore Through Dialogue Let the AI probe deeper: "What edge cases should we consider?" "How should we handle ambiguous inputs?" This conversation reveals the true complexity and requirements.

3. Define Success Criteria Be specific about what constitutes success. Include examples of good and bad outcomes.

4. Request the System Prompt Once you've thoroughly explored the problem space, ask: "Based on our discussion, can you write a comprehensive system prompt for this task?" The AI will produce something remarkably detailed—my email campaign prompt was over 1,000 words and included precise conversation flows, technical HTML specifications, state management protocols, and even code commenting conventions. No human would naturally think to include such granular operational details.

5. Test and Iterate Try the generated prompt on real examples. Discuss gaps with the AI and refine iteratively.
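If you'd rather script step 4 than run it in a chat window, it's a single API call over the accumulated dialogue. A sketch using the OpenAI Python SDK; any capable model and SDK works, and the dialogue here is abbreviated:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The exploratory dialogue from steps 1-3, abbreviated:
discussion = [
    {"role": "user", "content": "I need to analyze customer feedback emails for patterns."},
    {"role": "assistant", "content": "What patterns matter most: sentiment, feature requests, bugs?"},
    {"role": "user", "content": "All three, plus urgency based on customer tier and issue severity."},
]

# Step 4: ask for the system prompt itself.
discussion.append({
    "role": "user",
    "content": "Based on our discussion, write a comprehensive system prompt for this task.",
})

response = client.chat.completions.create(model="gpt-4o", messages=discussion)
print(response.choices[0].message.content)  # the generated prompt, ready for step 5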

The Compound Effect

Each meta-prompting session builds your library of robust, battle-tested system prompts that democratize capability across your organization. What once required deep domain expertise can now be accessed by anyone on your team.

The AI learns your working style and requirements, making future sessions more efficient and creating an ever-improving cycle of capability democratization.

A Quick Example

Instead of wrestling with a prompt for "analyze customer feedback," you might have this conversation:

You: I need help analyzing customer feedback emails to identify patterns and prioritize responses.

AI: What types of patterns are most valuable? Sentiment, feature requests, bug reports?

You: All of those, plus I need to identify urgent emails versus those that can wait.

AI: What criteria determine urgency? Customer tier, severity of issues, or other factors?

Through this dialogue, you jointly discover the nuanced requirements. Then ask: "Based on our discussion, can you write a comprehensive system prompt for this task?" The AI will produce something more thorough and precise than most humans would write—complete with decision trees, formatting specifications, and error handling.

The Future of Prompt Engineering

This collaborative meta-prompting approach represents a fundamental shift from manual prompt crafting to capability democratization. It's not about replacing human judgment, but optimizing the division of labor between human insight and AI precision—then making that optimized process accessible to everyone.

The most effective AI implementations won't be built by humans wrestling with perfect prompts, but by human-AI teams that leverage each other's strengths throughout the design process, creating tools that democratize complex capabilities across organizations.

The question isn't whether you can write good prompts—it's whether you should be writing them at all, and whether you're sharing that capability with others who need it.


Wednesday, September 24, 2025

The Addiction Economy: When Vibe Coding Becomes a Gateway Drug

We may be on the verge of witnessing a new kind of addiction—one that could be economically productive but psychologically transformative in ways we can barely imagine. AI-assisted coding could exhibit classic addiction patterns, complete with gateway behaviors, tolerance building, and compulsive use. But unlike traditional addictions, this one might create value while potentially rewiring how we think about work, creativity, and human purpose.

Science fiction has long explored similar phenomena: Gibson's cyberspace cowboys in Neuromancer who became so addicted to jacking into the matrix that physical reality felt pale and meaningless. Or consider the Guild navigators from Frank Herbert's Dune, who needed spice to navigate the stars. AI-assisted developers might become similar—able to navigate complex digital possibilities through their AI tools but increasingly dependent on that augmentation to perform at all.

The Gateway Drug Phenomenon

The progression could follow textbook addiction patterns with concerning precision. It might start innocuously enough: someone discovers they can build a working application in thirty minutes using AI assistance. The initial hit could be intoxicating—years of learning compressed into a brief interaction, complex problems dissolving into simple prompts.

But like any powerful drug, tolerance could build quickly. Simple CRUD applications might no longer satisfy. Users could escalate to machine learning systems, then distributed architectures, then complex financial trading platforms with real-time data feeds and sophisticated algorithmic implementations. The bar might keep rising, and the time investment could keep growing.

The loss of control could become evident in familiar patterns: "I'll just quickly prototype this idea" might transform into eight-hour coding marathons. Users could lose track of time, skip meals, and find themselves iterating through increasingly complex variations of projects that started as simple experiments. The feedback loop between human creativity and AI capability might create a dopamine-driven cycle that becomes remarkably difficult to break.

Early signs of this pattern are already emerging on social media. Daily posts on platforms like Twitter reveal users expressing almost manic excitement about getting up to code with AI assistance. Some describe structuring their sleep cycles around Claude's usage timeouts, or feeling genuinely distressed when they hit API rate limits. The language mirrors classic addiction narratives: the anticipation, the scheduling of life around the activity, the emotional dependence on access to the tool.

My own journey illustrates this progression. It started innocuously—copying and pasting from AI chat interfaces like Claude and AI Studio, running experiments on Google Colab, saving snippets to Python files, iterating on PineScript indicators. The workflow was clunky but the results were intoxicating. Then I discovered Qwen-Code, a command line client that operates directly on local files. I was immediately hooked.

The hook wasn't just the functionality—it was liberation from "quota hell." Qwen-Code offered 2,000 requests daily, freeing me from constantly watching usage meters, a familiar anxiety for anyone from the third world. Suddenly I could experiment without the psychological friction of counting tokens or rationing interactions.

The result? I now spend 4-6 hours daily in what Andrej Karpathy termed "vibe coding"—essentially every free waking moment. It's always "just one more feature, one more minor adjustment." The addiction is real and it's powerful. It's the intoxication of creation itself—that god-like feeling of manifesting ideas into working reality through conversation with an AI system.

This echoes the behavior patterns Neal Stephenson described in Snow Crash's Metaverse addicts, where the virtual world became more compelling than physical reality. But coding addiction might prove more insidious because, unlike pure escapism, it creates tangible value—making the addiction easier to justify and harder to recognize.

Redefining Addiction: Narrowing vs. Broadening

Traditional addiction models focus on harm and dysfunction, but AI coding could challenge this framework. Perhaps we'll need to think about addiction differently—not as inherently pathological, but as the direction of pleasure-seeking behavior.

Addiction as Narrowing: In this pattern, people might become obsessed with the specific dopamine hit of watching AI generate working code. They could lose interest in other activities—reading, sports, social interaction—as everything else pales compared to the instant gratification of digital creation. Real-world problem-solving skills might atrophy as every challenge becomes "just prompt it." The world could become smaller, filtered through the lens of what can be coded.

This mirrors the fate of Shadowrun's deckers, who spent so much time in the Matrix that they lost connection to their physical bodies and the material world. The digital realm became not just preferable, but the only reality that felt authentic and engaging.

Addiction as Broadening: But there could be an alternative trajectory. AI coding might enable exploration of multiple domains simultaneously, breaking down barriers between fields. Someone interested in marine biology could build oceanographic simulation tools, then pivot to creating educational games about ecosystems, then develop data analysis platforms for environmental research. The addiction might become about expanding capability and knowledge rather than compulsively repeating the same patterns.

The difference would lie not in the intensity of engagement, but in whether that engagement opens new possibilities or closes them off.

Two Economic Futures, Two Addiction Models

The economics of AI inference will likely fundamentally shape how this addiction manifests, potentially creating two radically different scenarios for society.

Scenario 1: Near-Zero Inference Costs

When AI runs locally and compute becomes essentially free, we could enter uncharted psychological territory. Unlimited experimentation might enable incredibly productive addiction patterns—like having an infinite supply of art materials or an unlimited workshop where every tool is immediately available.

In this world, people might spend entire days in flow states, building and iterating on increasingly sophisticated systems. The creative output could be extraordinary: hyper-personalized applications, artistic coding projects, and experimental technologies that would never justify commercial development. We might see the emergence of "coding artists" who could treat AI-assisted development as a medium for expression rather than merely a tool for utility.

But the risks could be profound. Complete detachment from physical reality might become possible when digital creation offers unlimited dopamine rewards. Society could bifurcate into "builders"—people addicted to creating digital experiences—and "consumers" who exist primarily to use what the builders create. This recalls the OASIS addiction depicted in Ready Player One, where the virtual world became so compelling that the physical world was essentially abandoned.

Social skills, physical health, and real-world problem-solving could atrophy as the virtual world becomes infinitely more rewarding than the physical one.

Scenario 2: Expensive Inference Costs (A Temporary Phenomenon)

If AI inference remains expensive in the near term—whether due to energy costs, compute scarcity, or deliberate pricing—we might temporarily enter a different kind of addiction economy. However, the history of computing suggests this would likely be a transitional phase rather than a permanent state. Computing costs have consistently trended downward, often dramatically, suggesting that expensive inference could be a brief interlude before the inevitable march toward near-zero costs.

During this temporary expensive phase, we could see gambling-like behavior patterns: "just one more expensive query to fix this bug," or "I'll spend $50 to see if I can get this working perfectly." Economic inequality could become the primary determinant of addiction access, with wealthy individuals maintaining their coding habits while others are forced into digital sobriety.

This period might create informal markets for compute resources, or share-economy arrangements where people pool resources to feed collective coding habits. The behavior could resemble World of Warcraft players organizing their entire lives around raid schedules, except instead of defeating digital dragons, they'd be collaborating to afford access to AI systems.

Paradoxically, the expense might force more thoughtful, disciplined use patterns—when each AI interaction costs real money, users could develop better prompt engineering skills and more systematic approaches to problem-solving.

But if computing history is our guide, this expensive phase would likely be brief. The more interesting long-term scenario is what happens when inference costs approach zero, making the psychological and social implications of unlimited AI access the primary concern rather than economic barriers.

The Psychology of Instant Creation

The psychological impact of effortless creation could deserve deeper examination. When building sophisticated software requires only clear communication with an AI system, what might happen to our sense of accomplishment and self-worth?

AI coding addiction differs from other digital dependencies in crucial ways. Unlike pure consumption activities, it involves creation euphoria—the god-like feeling of building functional systems from nothing. There's a problem-solving high that comes from watching complex challenges dissolve into elegant solutions. The infinite possibility space means there are no artificial limits like game rules or platform constraints.

Perhaps most importantly, coding addiction comes with a productivity guilt buffer. Unlike gaming or social media, "I'm being productive" becomes a powerful rationalization that makes the behavior harder to question or limit. The learning addiction component—constantly expanding skills and capabilities—feels virtuous rather than compulsive.

Traditional programming created natural rate limits. Learning syntax, debugging obscure errors, and managing complex architectures took time and created friction that prevented compulsive behavior. These barriers, while frustrating, also provided natural pause points for reflection and diverse activity.

AI-assisted development could remove these friction points, creating the potential for continuous engagement. The traditional markers of progress—learning new languages, mastering frameworks, solving algorithmic challenges—might become less relevant when AI handles the technical implementation. Instead, progress could become measured by the sophistication of ideas realized and problems solved.

This shift might be profoundly positive, redirecting human energy from mechanical tasks toward creative and strategic thinking. But it could also remove the forced learning periods that traditional programming provided, potentially creating developers who can orchestrate complex systems without understanding their underlying mechanisms.

Societal Implications and Intervention Points

As AI coding becomes more prevalent, we could need new frameworks for managing its addictive potential. The traditional approaches—regulation, restriction, treatment—might be inappropriate for an addiction that creates economic value and expands human capability.

Instead, we might need "productive addiction management"—systems that could channel compulsive coding behavior toward beneficial outcomes while preventing complete disconnection from other aspects of life. This could involve:

Rate limiting and cooling-off periods: AI platforms (possibly government-mandated) might implement mandatory breaks or daily limits, similar to responsible gambling measures but calibrated for productivity rather than harm prevention.

Diversification incentives: Systems that could encourage users to apply their AI-assisted building skills to different domains, preventing the narrowing pattern of addiction while maintaining engagement.

Social integration features: Tools that might make AI-assisted creation inherently collaborative, ensuring that productive addiction doesn't lead to social isolation.

Real-world connection requirements: Platforms that could require users to test their creations with actual users or solve real-world problems, maintaining connection to physical reality and human needs.

The Coming Addiction Economy

We could be entering an era where the distinction between productive and destructive addiction becomes crucial for economic and social policy. If AI-assisted coding follows the broadening pattern—enabling people to explore diverse domains and solve varied problems—it might represent the most beneficial addiction in human history.

But if it follows the narrowing pattern—creating compulsive behavior focused on the act of coding itself rather than the problems being solved—we might face a generation of highly capable but socially disconnected individuals, brilliant at orchestrating AI systems but disconnected from human needs and physical reality.

The path we take will likely depend on how we design these systems and what economic models we choose for AI access. The stakes couldn't be higher: we could be shaping the fundamental relationship between human creativity and technological capability for generations to come.

The question isn't whether AI coding will be addictive—early patterns suggest this possibility. The question is whether we can design this potential addiction to expand human potential rather than constrain it, and whether we can create economic and social structures that would support productive rather than destructive engagement with these incredibly powerful tools.

In the end, we might discover that the right kind of addiction—to learning, creating, and solving problems—could be exactly what humanity needs to navigate an increasingly complex world. But only if we're intentional about how we structure the experience and what behaviors we choose to reward.


Written with Claude