Sunday, March 23, 2025

The Independent Delivery Network: The Missing Link in Local Economies

In economic discourse, we often hear the simplified maxim that demand drives supply. While this intuitive principle captures many market dynamics, classical economics has long recognized the inverse relationship as well. Jean-Baptiste Say, the influential classical economist, articulated what became known as Say's Law, commonly summarized as "supply creates its own demand." Though sometimes overlooked in popular economic discussions, this principle remains an important part of the classical economic canon.

This bidirectional relationship between supply and demand becomes particularly relevant when examining how delivery networks could transform local economies. 

The prescience of Say's insight becomes apparent when we consider how an independent, organized delivery network could stimulate entirely new economic activities at the local level—creating demand through the very availability of its supply.

From Two-Sided to Four-Sided: The Evolution of Digital Marketplaces

To understand the potential of independent delivery networks, we must first recognize a crucial distinction in platform models. Ride-hailing services like Uber operate primarily as two-sided marketplaces:

1. Riders (customers)

2. Drivers (service providers)

Food delivery platforms, however, introduced a more complex four-sided marketplace:

1. Customers (diners)

2. Platforms (technology companies)

3. Restaurants/merchants (food suppliers)

4. Delivery partners (independent couriers)

This additional complexity creates both challenges and opportunities. While ride-hailing platforms needed to organize only one supply network (drivers), food delivery platforms must orchestrate two distinct networks: restaurants and delivery partners. The emergence of this separate delivery layer creates the potential for a new kind of infrastructure: the independent delivery network.

Beyond Platform Control: The Case for Independent Delivery Networks

Current delivery models typically operate as captive resources within platform ecosystems. While these platforms have begun expanding beyond food into groceries, medications, and convenience items, they're fundamentally deepening their own offerings rather than empowering the broader local economy.

In the United States, DoorDash exemplifies this trend: a delivery service that has expanded to include shopping services and general "gofer" tasks. However, these services remain primarily customer-initiated and platform-controlled.

The alternative model—where delivery capacity becomes available "on tap" for businesses themselves—represents a fundamentally different approach. Rather than customers initiating delivery through a consumer-facing platform, businesses could directly tap into a shared delivery network in response to their own demand patterns and customer orders.

This business-initiated model was attempted in India by Dunzo. The company faced challenges and ultimately shut down, perhaps because it was customer-facing and carried the attendant burdens of customer acquisition and growth. That said, the concept remains compelling: a delivery network that serves as infrastructure rather than a consumer-facing service.

The Immediacy Challenge: From On-Demand to Instant

While local delivery services exist in many markets, they frequently lack two critical elements that would make them truly transformative for local economies: immediacy and seamless business process integration.

In Mumbai, restaurants have attempted to address this through services like DotPe, which allow them to bypass traditional delivery platforms. However, the process often introduces significant delays: a restaurant prepares the food, then calls for a rider, adding 10-15 minutes to delivery times. The availability exists, but performance suffers, which in turn dampens uptake.

True transformation requires delivery networks that provide:

1. Immediate availability - Riders ready to pick up within minutes, given advance intimation

2. Full visibility - Turn-by-turn tracking integrated into business dashboards

3. Seamless integration - API connections to business order management systems

4. Predictive positioning - Algorithms that position riders near anticipated demand

5. Unified standards - Consistent service levels across all participating businesses
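To make the integration points above concrete, here is a minimal, hypothetical sketch in Python of business-initiated dispatch: a business posts a delivery job and the network assigns the nearest available rider. Every name here (`Rider`, `DeliveryJob`, `assign_rider`) is an illustrative assumption, not any real platform's API; a production network would layer on the tracking, predictive positioning, and order-system integration described above.

```python
from dataclasses import dataclass
import math


@dataclass
class Rider:
    # A courier in the shared network, with last-known position.
    rider_id: str
    lat: float
    lon: float
    available: bool


@dataclass
class DeliveryJob:
    # A business-initiated request: pickup at the shop, dropoff at the customer.
    pickup_lat: float
    pickup_lon: float
    dropoff_lat: float
    dropoff_lon: float


def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def assign_rider(job, riders):
    # Naive dispatch: pick the nearest available rider to the pickup point.
    # Predictive positioning would instead pre-place riders near expected demand.
    candidates = [r for r in riders if r.available]
    if not candidates:
        return None
    return min(
        candidates,
        key=lambda r: haversine_km(r.lat, r.lon, job.pickup_lat, job.pickup_lon),
    )
```

The design point is the direction of initiation: the business, not a consumer platform, calls `assign_rider`, so the same pool of riders can serve a bookstore, a salon, or a fish market through one shared interface.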

The supply of riders exists, but current approaches fail to organize this supply optimally for maximum economic impact.

The Fluid Nature of Delivery Supply

Delivery partners currently exhibit high mobility between competing platforms. Similar to how cab drivers in India might switch between multiple ride-hailing apps depending on incentives—sometimes multiple times in a single day—delivery partners optimize their earnings by toggling between food delivery, grocery delivery, and package delivery services.

This fluidity suggests that delivery capacity itself could be organized as an independent network, serving multiple demand sources rather than being captive to individual platforms. The supply of ready riders creates the potential for delivery networks that transcend any single platform's ecosystem.

What's needed is a reconceptualization of delivery supply as a flexible, responsive network that can be tapped instantly by any legitimate business need—a true utility rather than a siloed resource.

Organizing Independent Delivery Supply: The Missing Infrastructure

Local businesses face a significant disadvantage against e-commerce giants and platform-based delivery services: they lack access to reliable, cost-effective delivery infrastructure. An independently organized delivery network could fundamentally change this dynamic.

Such a network would need:

A cooperative or guild-like structure representing delivery workers' interests

Technology infrastructure for routing, dispatch, and payment processing

Quality standards and performance metrics

Collective bargaining power with various demand sources

Professional development and equipment standardization

Worker-owned platforms and cooperatives represent a potential counterbalance to venture-backed platforms, particularly in locally-oriented services.

Greasing the Wheels of Local Commerce

An independent delivery network could serve as critical infrastructure for a wide range of local businesses that currently cannot afford to build their own delivery capabilities:

1. Independent Retailers

Local bookstores competing with Amazon

Specialty shops offering same-day delivery

Artisanal producers reaching customers beyond foot traffic

2. Local Service Providers

Home repair professionals sending parts or tools ahead of appointments

Salons delivering custom product packages to clients

Professional services delivering documents and materials

3. Neighbourhood Marketplaces

Fish markets extending reach beyond physical locations

Craft fairs and local events delivering to those who can't attend

Community-supported agriculture distribution

4. Micro-Manufacturers

Small-batch producers delivering directly to customers

Custom fabricators sending finished products

Local food producers reaching restaurant clients

5. Civic and Community Services

Library book delivery and returns (though the cost may not always justify this)

Community aid distribution

Municipal service request fulfilment

Elder care support services

An independent delivery network could balance worker autonomy with the benefits of collective organization, while serving as essential infrastructure for the local economy.

Creating Network Power Without Platform Control

The key challenge in establishing such a network lies in achieving sufficient scale and coordination without centralized platform control. Possible approaches include:

Municipal support and subsidy as essential infrastructure

Cooperative ownership models among delivery workers

Technology partnerships with open standards and protocols

Local business association sponsorship and governance

Federated networks with local coordination but shared standards

Research by Scholz and Schneider (2017) on platform cooperativism suggests that worker-owned digital infrastructure can be both economically viable and socially beneficial when properly structured.

From Side Hustle to Essential Infrastructure

The current paradigm views delivery work largely as a "side hustle" or transitional employment. Reconceptualizing delivery networks as essential economic infrastructure would require:

Professional standards and certification, the latter for sensitive categories such as medicines and municipal services

Living wage guarantees

Benefits structures appropriate for flexible work

Career advancement pathways

Equipment and training standardization

As delivery becomes increasingly central to local economic activity, the workers who perform these functions deserve recognition as essential service providers rather than interchangeable gig workers.

The Delivery Commons

Just as roads, bridges, and utilities serve as shared infrastructure enabling commerce, an independent delivery network could function as a "delivery commons"—shared infrastructure that enables local businesses to compete effectively in the digital age.

The future of local economies may well depend on whether delivery capacity can be organized independently from the dominant platforms. When delivery workers can organize themselves as essential infrastructure providers rather than platform dependents, local businesses gain the logistical capabilities previously available only to large corporations and platforms.

The most forward-thinking communities and regions will be those that recognize delivery networks not merely as a feature of certain platforms, but as critical economic infrastructure worthy of investment, organization, and protection. By disintermediating delivery supply from platform control, we can create new possibilities for local economic resilience and growth.

PS: Co-authored with Claude.ai

Sunday, March 16, 2025

Digital Brahmins: AI's New Knowledge Priesthood


Last week an interesting tweet came from Aravind Srinivas, the much-celebrated CEO of the AI search engine Perplexity. He claimed that he had modified the Chinese open-source AI Deepseek R1 to "remove the China censorship and provide unbiased, accurate responses."

That set me thinking about how the "Western" media behaved over the two recent geopolitical events in Ukraine and Gaza. Very badly, to say the least.

Anyway, back to the issue of AI. The promise of the internet was that it would democratize knowledge. Those early days of search engines and then Google felt revolutionary -- anyone could search, compare sources, and form their own opinions. We were breaking free from traditional gatekeepers of wisdom. Or so we thought.

Throughout history, priestly classes -- whether Brahmins in India or Catholic clergy in medieval Europe -- shared a common trait: they aligned themselves with centres of power. They spoke the language of the elite (Sanskrit, Latin), interpreted texts to favour ruling interests, and helped maintain social order. Today's tech giants follow this ancient playbook, with their LLMs speaking the language of Western corporate power and aligning with dominant political narratives.

In a 2023 meeting, Sam Altman, OpenAI's CEO, effectively told Indian tech leaders not to bother building their own AI models. The message was clear: leave it to the experts, to the new priesthood. It eerily echoes the ancient tale of Eklavya, the tribal prince who was denied knowledge by the Brahmin guru Dronacharya -- except now it's happening on a global stage with digital weapons instead of archery.

And just like in ancient times, this pronouncement was treated as gospel. OpenAI's lead was presented as insurmountable, a divine right bestowed by Silicon Valley's gods. The subtext? Know your place in the digital caste system. Just as Brahmins maintained their authority by serving powerful patrons -- from ancient kings to later invaders -- today's AI companies cultivate close relationships with government agencies and corporate giants.

This technological gatekeeping reflects a deeper historical pattern. As Samuel Huntington observed about Western global dominance: "The West won the world not by the superiority of its ideas or values or religion... but rather by its superiority in applying organized violence." Today, we're seeing the same dynamic in AI development -- the West seeks to win the AI race not through superior ideas or values, but through its overwhelming advantage in applying concentrated capital and computational force.

This computational supremacy manifests as a new form of violence -- not physical, but structural and economic. When OpenAI and other tech giants deploy hundreds of thousands of GPUs, consume massive amounts of energy, and leverage billions in capital, they're not just building better AI -- they're engaging in a form of technological shock and awe. The message is clear: resistance is futile, the computational gap is too wide to bridge.

The violence here is subtle but systemic. It appears in the forced obsolescence of local AI initiatives, the brain drain from developing nations to Western tech hubs, and the deliberate creation of dependencies through API pricing structures and computational bottlenecks. When Sam Altman suggests others shouldn't bother competing, it's not friendly advice -- it's a threat backed by the full force of Western capital markets and computational resources.

Consider the numbers: training a single large language model can cost hundreds of millions of dollars. This isn't just a technical barrier -- it's a form of economic violence that effectively excludes most of the world's population from participating in AI's development. Just as colonial powers once used superior armaments to enforce trade monopolies, today's tech giants use their computational arsenals to maintain AI supremacy.

When prominent Indian tech leaders suggest focusing solely on the application layer, they're effectively negotiating terms of surrender in this new computational arms race. It's a modern version of the colonial-era compromises, where local elites accepted their role as middlemen in an unequal economic system.

This computational colonialism is more insidious than its historical predecessor because it masquerades as meritocracy. The narrative suggests that anyone could build these systems if they were just clever enough. But this conveniently ignores the reality of concentrated computational power -- the new gunboats in this digital age. The West's advantage lies not in better algorithms or smarter researchers, but in its ability to mobilize vast computational resources with ruthless efficiency.

But history has a way of repeating itself with new players. Just as Prometheus defied the gods to bring fire to humanity, the emergence of Deepseek's open-source LLM represents a similar act of technological defiance. It's a reminder that knowledge, once unleashed, becomes harder to contain within privileged circles.

These AI systems have become our new digital Brahmins, interpreting and dispensing knowledge with their own inherent biases and limitations. Like their historical counterparts who reinterpreted ancient texts to suit contemporary power structures, these systems reshape information through the lens of current corporate and political interests.

But here's where it gets complicated. Unlike their priestly predecessors, these digital oracles don't serve a single culture or jurisdiction. They're trying to be universal -- and that's precisely the problem. Every region has its own taboos, restrictions, and "universal truths." What's acceptable in Stockholm might be seditious in Singapore. Historical facts that are gospel in Beijing could be heresy in New Delhi.

Traditional tech solved this through region-specific content -- show different maps to different users, adjust historical narratives based on location. But LLMs don't work that way. They're more like sophisticated conversationalists, and you can't simply flip a switch to make them see the world through different cultural lenses.

What we're creating instead is a new form of digital colonialism. Most leading LLMs are trained and housed in America, embedding Western perspectives and generating economic rents for U.S. companies. Just as colonial powers once used superior firepower to enforce their economic interests, today's tech giants use their massive computational resources and capital advantages to maintain AI supremacy. The parallels are striking -- both systems rely not on inherent superiority of ideas, but on the brute force of accumulated resources.

Perhaps most troubling is how this digital colonialism finds willing enablers within the colonized territories themselves. When prominent Indian tech leaders suggest that Indian firms should focus solely on the application layer of AI rather than developing foundational models, they echo colonial-era compromises. It's reminiscent of how local elites once advised accepting British industrial supremacy while limiting India to being a raw material supplier and market for finished goods.

This "stay in your lane" mentality has serious implications. Restricting oneself to the application layer while foregoing deep tech development is a recipe for perpetual dependence. It's like having access to a library but being forbidden from writing new books -- you can consume knowledge but never truly shape it. Without developing foundational AI capabilities, India risks repeating its colonial-era trajectory: a massive market and talent pool that ultimately generates wealth for others while remaining technologically subordinate.

The cost of this approach extends beyond economic considerations. When a culture doesn't participate in creating foundational technology, it loses the ability to embed its values, perspectives, and ethical frameworks into that technology's DNA. The nuances of Indian thought, its unique approaches to privacy, community, and individual rights -- all these risk being bulldozed by Western-centric AI systems that Indian companies will simply have to accept and build upon.

Consider something as simple as asking an LLM about Kashmir, Taiwan, or Jerusalem. Should it reflect Indian, Chinese, or Israeli perspectives? Western diplomatic ambiguity? Local historical narratives? There's no neutral ground here, and every choice privileges some viewpoints over others. Without indigenous deep tech capabilities, these choices will increasingly be made in Silicon Valley boardrooms rather than by local stakeholders.

When AI Disrupts Local Power Structures

Yet there's another dynamic emerging that complicates this narrative of Western AI hegemony. As foreign AI systems penetrate local markets, they can sometimes disrupt established power balances within those societies themselves. Take Grok, Twitter/X's native AI system, which has been notably critical of the current government in India. This represents a fascinating inversion of the typical power dynamic – a Western technology that challenges rather than reinforces local authority structures.

This phenomenon creates a new tension. Local governments, particularly those with authoritarian tendencies, face an emerging threat from AI systems they don't control. These systems can amplify dissenting voices, offer counter-narratives to official positions, and provide citizens with perspectives that haven't been filtered through local censorship regimes.

The response is predictable and already unfolding. Governments finding themselves on the receiving end of AI-powered criticism have two primary options: overt restriction through regulation and banning, or covert co-option. The latter is particularly concerning – the quiet pressure applied to AI companies to modify their systems' responses on politically sensitive topics, creating an illusion of independence while subtly enforcing compliance with local power structures.

This isn't merely theoretical. We're witnessing early stages of this dynamic globally. In India, there are rumblings about potential regulatory responses to Grok's political critiques. In China, foreign AI systems must comply with extensive content restrictions or remain locked out entirely. Even in the United States, regulatory pressures around political "bias" in AI responses reflect similar concerns from different political angles.

What emerges is a complex landscape where AI systems become battlegrounds for control over narrative. Rather than a simple story of Western tech colonialism, we're seeing multiple competing colonialisms – Western tech giants, local governments, and other power centers all vying to ensure these influential knowledge systems align with their interests.

When Aravind Srinivas speaks of removing "China censorship" from Deepseek R1, the implicit assumption is that removing one set of biases leaves us with neutrality. But this ignores how deeply political the very concept of "neutrality" has become. One nation's censorship is another's protection of social harmony. One country's "accurate information" is another's dangerous misinformation.

This creates a fracturing of the AI landscape. As governments respond to these challenges, we may see the emergence of increasingly localized AI systems, each reflecting different values and constraints. The Chinese already have their versions, and India will likely develop its own, each tuned to local political sensitivities. Rather than a single digital priesthood, we may end up with competing temples, each claiming to speak universal truth while serving particular interests.

Quis Custodiet Ipsos Custodes? The Guardian Paradox

The ancient Roman poet Juvenal posed a question that resonates powerfully in our age of AI: "Quis custodiet ipsos custodes?" – Who watches the watchmen? As AI systems increasingly become the arbiters of knowledge, the guardians of information, this question takes on renewed urgency.

When Perplexity's CEO proudly announces the removal of "Chinese censorship" from an AI model, we must ask: Who watches Perplexity? Who determines what constitutes censorship versus responsible guardrails? When OpenAI adjusts its models' outputs on political topics, who oversees these adjustments? When Grok criticizes the Indian government, who scrutinizes the biases embedded in those critiques?

The guardian paradox manifests in multiple ways. AI companies position themselves as guardians of information quality, protecting users from misinformation and harmful content. Governments position themselves as guardians of social harmony and national security, protecting citizens from dangerous foreign influences. Both claim to serve the public good, yet both operate with limited transparency and accountability.

This paradox becomes even more acute when we consider that AI systems are increasingly being tasked with watching other AIs – algorithmic content moderation systems reviewing AI-generated outputs. We create nested layers of guardianship with diminishing human oversight at each level. The guardians are watching the guardians, with vanishingly little external accountability.

Perhaps most concerning is how this dynamic plays out in cross-cultural contexts. Western tech companies make unilateral decisions about what constitutes "bias" in Chinese models, while Chinese authorities make similar determinations about Western systems. Indian regulators may soon dictate what constitutes acceptable political commentary from foreign AI. Each guardian operates according to its own values and interests, yet each presents its guardianship as objective and universal.

This raises profound questions about sovereignty in the AI age. Traditional concepts of information sovereignty assumed nations could control the flow of data across their borders. But AI systems challenge this paradigm – they don't just transmit information; they interpret, synthesize, and generate it. When a Western AI makes political judgments about Indian governance, or when Chinese AI encodes particular historical narratives, whose sovereignty takes precedence?

The fundamental problem is that we lack global governance mechanisms equipped to handle these questions. There is no international body with both the legitimacy and authority to establish neutral principles for AI oversight. Instead, we have a patchwork of corporate policies, national regulations, and geopolitical power plays – all guardians watching other guardians, with no truly independent oversight.

This creates a vacuum where power, not principle, becomes the deciding factor. The entities with the most computational resources, market leverage, or regulatory authority get to define what constitutes "ethical" AI. Those with less power must either accept these definitions or retreat into digital isolation. The question of who watches the watchmen becomes, in practice, a question of who has the power to impose their watching on others.

The Path Forward

The solution might lie in thinking smaller rather than bigger. Instead of accepting the new Dronacharyas of Silicon Valley, we need to democratize AI development. Personal LLMs could act as cultural interpreters, sourcing raw information from global systems but filtering and contextualizing it through local cultural lenses. Like having a wise local elder who's well-traveled but firmly grounded in your community's values. Or even as a compare and contrast tool, allowing the user to draw her own conclusions.

This isn't just about cultural sensitivity -- it's about preventing the emergence of a new digital priesthood that, like its predecessors, serves power while claiming to serve truth. We've seen what happens when knowledge systems become too centralized. The Brahmin system, while sophisticated, eventually became a barrier to innovation and social mobility. Are we building the same constraints into our digital future?

Perhaps the answer to "who watches the watchmen?" should be "everyone and no one" – a system of distributed oversight where no single entity has definitive authority. Open-source AI models, transparent training methodologies, and public documentation of value judgments could create a landscape where watchmen watch each other, with citizens empowered to choose their guardians rather than having guardians imposed upon them.

The age of universal search is giving way to the era of AI-curated knowledge. But perhaps, like Prometheus with his stolen fire, open-source initiatives will ensure that this power doesn't remain solely in the hands of self-appointed digital priests. The real question is whether we'll have the courage to challenge these new gatekeepers and build systems that maintain our cultural autonomy in their shadow.

The path to technological sovereignty isn't easy. Building foundational AI models requires massive computational resources, extensive research capabilities, and sustained investment. But the alternative -- accepting a role as mere implementers of others' technology -- carries an even steeper long-term cost. Just as countries that missed the industrial revolution spent centuries catching up, those that sit out the AI revolution risk permanent technological subordination.

The alternative is watching silently as Silicon Valley becomes the new Varanasi, dispensing digital dharma to an increasingly dependent world. And just as the old priesthoods served their patrons while claiming to serve the gods, these new digital Brahmins might find themselves serving profit while claiming to serve progress.

What's needed is not just technological independence but transparency in how these systems are shaped by various power interests. Whether controlled by Western tech giants or local governments, the danger lies in the invisible hand guiding these seemingly objective systems. The true democratization of AI will come not just from who builds the technology, but from ensuring users understand whose interests it ultimately serves, and creating governance structures that answer Juvenal's ancient question in ways that distribute rather than concentrate power.


PS: This article was co-written with Claude.