Sunday, March 16, 2025

Digital Brahmins: AI's New Knowledge Priesthood


Last week, Aravind Srinivas, the much-celebrated CEO of the AI search engine Perplexity, tweeted that he had modified the Chinese open-source AI Deepseek R1 to "remove the China censorship and provide unbiased, accurate responses."

That set me thinking about how the "Western" media behaved during the two recent geopolitical crises in Ukraine and Gaza. Very badly, to say the least.

Anyway, back to the issue of AI. The promise of the internet was that it would democratize knowledge. Those early days of search engines and then Google felt revolutionary -- anyone could search, compare sources, and form their own opinions. We were breaking free from traditional gatekeepers of wisdom. Or so we thought.

Throughout history, priestly classes -- whether Brahmins in India or Catholic clergy in medieval Europe -- shared a common trait: they aligned themselves with centres of power. They spoke the language of the elite (Sanskrit, Latin), interpreted texts to favour ruling interests, and helped maintain social order. Today's tech giants follow this ancient playbook, with their LLMs speaking the language of Western corporate power and aligning with dominant political narratives.

In a 2023 meeting, Sam Altman, OpenAI's CEO, effectively told Indian tech leaders not to bother building their own AI models. The message was clear: leave it to the experts, to the new priesthood. It eerily echoes the ancient tale of Eklavya, the tribal prince who was denied knowledge by the Brahmin guru Dronacharya -- except now it's happening on a global stage with digital weapons instead of archery.

And just like in ancient times, this pronouncement was treated as gospel. OpenAI's lead was presented as insurmountable, a divine right bestowed by Silicon Valley's gods. The subtext? Know your place in the digital caste system. Just as Brahmins maintained their authority by serving powerful patrons -- ancient kings and invaders alike -- today's AI companies cultivate close relationships with government agencies and corporate giants.

This technological gatekeeping reflects a deeper historical pattern. As Samuel Huntington observed about Western global dominance: "The West won the world not by the superiority of its ideas or values or religion... but rather by its superiority in applying organized violence." Today, we're seeing the same dynamic in AI development -- the West seeks to win the AI race not through superior ideas or values, but through its overwhelming advantage in applying concentrated capital and computational force.

This computational supremacy manifests as a new form of violence -- not physical, but structural and economic. When OpenAI and other tech giants deploy hundreds of thousands of GPUs, consume massive amounts of energy, and leverage billions in capital, they're not just building better AI -- they're engaging in a form of technological shock and awe. The message is clear: resistance is futile, the computational gap is too wide to bridge.

The violence here is subtle but systemic. It appears in the forced obsolescence of local AI initiatives, the brain drain from developing nations to Western tech hubs, and the deliberate creation of dependencies through API pricing structures and computational bottlenecks. When Sam Altman suggests others shouldn't bother competing, it's not friendly advice -- it's a threat backed by the full force of Western capital markets and computational resources.

Consider the numbers: training a single large language model can cost hundreds of millions of dollars. This isn't just a technical barrier -- it's a form of economic violence that effectively excludes most of the world's population from participating in AI's development. Just as colonial powers once used superior armaments to enforce trade monopolies, today's tech giants use their computational arsenals to maintain AI supremacy.

When prominent Indian tech leaders suggest focusing solely on the application layer, they're effectively negotiating terms of surrender in this new computational arms race. It's a modern version of the colonial-era compromises, where local elites accepted their role as middlemen in an unequal economic system.

This computational colonialism is more insidious than its historical predecessor because it masquerades as meritocracy. The narrative suggests that anyone could build these systems if they were just clever enough. But this conveniently ignores the reality of concentrated computational power -- the new gunboats in this digital age. The West's advantage lies not in better algorithms or smarter researchers, but in its ability to mobilize vast computational resources with ruthless efficiency.

But history has a way of repeating itself with new players. Just as Prometheus defied the gods to bring fire to humanity, the emergence of Deepseek's open-source LLM represents a similar act of technological defiance. It's a reminder that knowledge, once unleashed, becomes harder to contain within privileged circles.

These AI systems have become our new digital Brahmins, interpreting and dispensing knowledge with their own inherent biases and limitations. Like their historical counterparts who reinterpreted ancient texts to suit contemporary power structures, these systems reshape information through the lens of current corporate and political interests.

But here's where it gets complicated. Unlike their priestly predecessors, these digital oracles don't serve a single culture or jurisdiction. They're trying to be universal -- and that's precisely the problem. Every region has its own taboos, restrictions, and "universal truths." What's acceptable in Stockholm might be seditious in Singapore. Historical facts that are gospel in Beijing could be heresy in New Delhi.

Traditional tech solved this through region-specific content -- show different maps to different users, adjust historical narratives based on location. But LLMs don't work that way. They're more like sophisticated conversationalists, and you can't simply flip a switch to make them see the world through different cultural lenses.

What we're creating instead is a new form of digital colonialism. Most leading LLMs are trained and housed in America, embedding Western perspectives and generating economic rents for U.S. companies. Just as colonial powers once used superior firepower to enforce their economic interests, today's tech giants use their massive computational resources and capital advantages to maintain AI supremacy. The parallels are striking -- both systems rely not on inherent superiority of ideas, but on the brute force of accumulated resources.

Perhaps most troubling is how this digital colonialism finds willing enablers within the colonized territories themselves. When prominent Indian tech leaders suggest that Indian firms should focus solely on the application layer of AI rather than developing foundational models, they echo colonial-era compromises. It's reminiscent of how local elites once advised accepting British industrial supremacy while limiting India to being a raw material supplier and market for finished goods.

This "stay in your lane" mentality has serious implications. Restricting oneself to the application layer while foregoing deep tech development is a recipe for perpetual dependence. It's like having access to a library but being forbidden from writing new books -- you can consume knowledge but never truly shape it. Without developing foundational AI capabilities, India risks repeating its colonial-era trajectory: a massive market and talent pool that ultimately generates wealth for others while remaining technologically subordinate.

The cost of this approach extends beyond economic considerations. When a culture doesn't participate in creating foundational technology, it loses the ability to embed its values, perspectives, and ethical frameworks into that technology's DNA. The nuances of Indian thought, its unique approaches to privacy, community, and individual rights -- all these risk being bulldozed by Western-centric AI systems that Indian companies will simply have to accept and build upon.

Consider something as simple as asking an LLM about Kashmir, Taiwan, or Jerusalem. Should it reflect Indian, Chinese, or Israeli perspectives? Western diplomatic ambiguity? Local historical narratives? There's no neutral ground here, and every choice privileges some viewpoints over others. Without indigenous deep tech capabilities, these choices will increasingly be made in Silicon Valley boardrooms rather than by local stakeholders.

When AI Disrupts Local Power Structures

Yet there's another dynamic emerging that complicates this narrative of Western AI hegemony. As foreign AI systems penetrate local markets, they can sometimes disrupt established power balances within those societies themselves. Take Grok, Twitter/X's native AI system, which has been notably critical of the current government in India. This represents a fascinating inversion of the typical power dynamic – a Western technology that challenges rather than reinforces local authority structures.

This phenomenon creates a new tension. Local governments, particularly those with authoritarian tendencies, face an emerging threat from AI systems they don't control. These systems can amplify dissenting voices, offer counter-narratives to official positions, and provide citizens with perspectives that haven't been filtered through local censorship regimes.

The response is predictable and already unfolding. Governments finding themselves on the receiving end of AI-powered criticism have two primary options: overt restriction through regulation and banning, or covert co-option. The latter is particularly concerning – the quiet pressure applied to AI companies to modify their systems' responses on politically sensitive topics, creating an illusion of independence while subtly enforcing compliance with local power structures.

This isn't merely theoretical. We're witnessing early stages of this dynamic globally. In India, there are rumblings about potential regulatory responses to Grok's political critiques. In China, foreign AI systems must comply with extensive content restrictions or remain locked out entirely. Even in the United States, regulatory pressures around political "bias" in AI responses reflect similar concerns from different political angles.

What emerges is a complex landscape where AI systems become battlegrounds for control over narrative. Rather than a simple story of Western tech colonialism, we're seeing multiple competing colonialisms – Western tech giants, local governments, and other power centers all vying to ensure these influential knowledge systems align with their interests.

When Aravind Srinivas speaks of removing "China censorship" from Deepseek R1, the implicit assumption is that removing one set of biases leaves us with neutrality. But this ignores how deeply political the very concept of "neutrality" has become. One nation's censorship is another's protection of social harmony. One country's "accurate information" is another's dangerous misinformation.

This creates a fracturing of the AI landscape. As governments respond to these challenges, we may see the emergence of increasingly localized AI systems, each reflecting different values and constraints. The Chinese already have their versions, and India will likely develop its own, each tuned to local political sensitivities. Rather than a single digital priesthood, we may end up with competing temples, each claiming to speak universal truth while serving particular interests.

Quis Custodiet Ipsos Custodes? The Guardian Paradox

The ancient Roman poet Juvenal posed a question that resonates powerfully in our age of AI: "Quis custodiet ipsos custodes?" – Who watches the watchmen? As AI systems increasingly become the arbiters of knowledge, the guardians of information, this question takes on renewed urgency.

When Perplexity's CEO proudly announces the removal of "Chinese censorship" from an AI model, we must ask: Who watches Perplexity? Who determines what constitutes censorship versus responsible guardrails? When OpenAI adjusts its models' outputs on political topics, who oversees these adjustments? When Grok criticizes the Indian government, who scrutinizes the biases embedded in those critiques?

The guardian paradox manifests in multiple ways. AI companies position themselves as guardians of information quality, protecting users from misinformation and harmful content. Governments position themselves as guardians of social harmony and national security, protecting citizens from dangerous foreign influences. Both claim to serve the public good, yet both operate with limited transparency and accountability.

This paradox becomes even more acute when we consider that AI systems are increasingly being tasked with watching other AIs – algorithmic content moderation systems reviewing AI-generated outputs. We create nested layers of guardianship with diminishing human oversight at each level. The guardians are watching the guardians, with vanishingly little external accountability.
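To make the point concrete, here is a toy, purely hypothetical sketch of such nested guardianship: a generating model, a moderating model, and an auditor of the moderator, all reduced to stub functions. Nothing here corresponds to any real provider's API; it only shows how each added layer of automated review pushes the human further from the decision.

```python
# Toy illustration of "guardians watching guardians": one model generates an
# answer, a second reviews it, and a third reviews the reviewer's verdict.
# All three are stand-ins (simple placeholder logic); in practice each layer
# would be another model call, with no human in the loop at any level.

def generate(prompt: str) -> str:
    # Stand-in for the generating model.
    return f"Draft answer to: {prompt}"


def moderate(text: str) -> bool:
    # Stand-in for a moderation model: flags text containing a blocked term.
    blocked_terms = {"forbidden-topic"}
    return not any(term in text.lower() for term in blocked_terms)


def audit_moderator(text: str, verdict: bool) -> bool:
    # Stand-in for a second model auditing the first moderator's verdict.
    # Here it simply rubber-stamps the decision -- which is the point.
    return verdict


def answer(prompt: str) -> str:
    draft = generate(prompt)
    verdict = moderate(draft)
    if verdict and audit_moderator(draft, verdict):
        return draft
    return "[response withheld by automated review]"


if __name__ == "__main__":
    print(answer("Explain the dispute over Kashmir."))
```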

Perhaps most concerning is how this dynamic plays out in cross-cultural contexts. Western tech companies make unilateral decisions about what constitutes "bias" in Chinese models, while Chinese authorities make similar determinations about Western systems. Indian regulators may soon dictate what constitutes acceptable political commentary from foreign AI. Each guardian operates according to its own values and interests, yet each presents its guardianship as objective and universal.

This raises profound questions about sovereignty in the AI age. Traditional concepts of information sovereignty assumed nations could control the flow of data across their borders. But AI systems challenge this paradigm – they don't just transmit information; they interpret, synthesize, and generate it. When a Western AI makes political judgments about Indian governance, or when Chinese AI encodes particular historical narratives, whose sovereignty takes precedence?

The fundamental problem is that we lack global governance mechanisms equipped to handle these questions. There is no international body with both the legitimacy and authority to establish neutral principles for AI oversight. Instead, we have a patchwork of corporate policies, national regulations, and geopolitical power plays – all guardians watching other guardians, with no truly independent oversight.

This creates a vacuum where power, not principle, becomes the deciding factor. The entities with the most computational resources, market leverage, or regulatory authority get to define what constitutes "ethical" AI. Those with less power must either accept these definitions or retreat into digital isolation. The question of who watches the watchmen becomes, in practice, a question of who has the power to impose their watching on others.

The Path Forward

The solution might lie in thinking smaller rather than bigger. Instead of accepting the new Dronacharyas of Silicon Valley, we need to democratize AI development. Personal LLMs could act as cultural interpreters, sourcing raw information from global systems but filtering and contextualizing it through local cultural lenses -- like a wise local elder who is well-traveled but firmly grounded in your community's values. Or they could serve as compare-and-contrast tools, allowing the user to draw her own conclusions; a rough sketch of that idea follows below.
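As a purely illustrative sketch (the model names and the ask_* functions below are placeholders, not real APIs), such a compare-and-contrast tool could put the same question to several differently situated models and lay the answers side by side, leaving the judgment to the reader:

```python
# Minimal sketch of the "compare and contrast" idea: the same question is put
# to several models and the answers are shown side by side, so the reader --
# not any single provider -- draws the conclusions. The ask_* functions are
# placeholders; a real version would call each provider's API instead.

from typing import Callable, Dict


def ask_western_model(question: str) -> str:
    # Placeholder for a call to a US-hosted model.
    return f"[Western-hosted model's answer to: {question}]"


def ask_chinese_model(question: str) -> str:
    # Placeholder for a call to a China-hosted model.
    return f"[China-hosted model's answer to: {question}]"


def ask_local_model(question: str) -> str:
    # Placeholder for a locally run open-source model.
    return f"[Locally run model's answer to: {question}]"


def compare(question: str, models: Dict[str, Callable[[str], str]]) -> None:
    """Print each model's answer under its own heading, leaving judgment to the reader."""
    for name, ask in models.items():
        print(f"=== {name} ===")
        print(ask(question))
        print()


if __name__ == "__main__":
    compare(
        "What is the status of Kashmir?",
        {
            "Western-hosted model": ask_western_model,
            "China-hosted model": ask_chinese_model,
            "Local open-source model": ask_local_model,
        },
    )
```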

This isn't just about cultural sensitivity -- it's about preventing the emergence of a new digital priesthood that, like its predecessors, serves power while claiming to serve truth. We've seen what happens when knowledge systems become too centralized. The Brahmin system, while sophisticated, eventually became a barrier to innovation and social mobility. Are we building the same constraints into our digital future?

Perhaps the answer to "who watches the watchmen?" should be "everyone and no one" – a system of distributed oversight where no single entity has definitive authority. Open-source AI models, transparent training methodologies, and public documentation of value judgments could create a landscape where watchmen watch each other, with citizens empowered to choose their guardians rather than having guardians imposed upon them.

The age of universal search is giving way to the era of AI-curated knowledge. But perhaps, like Prometheus with his stolen fire, open-source initiatives will ensure that this power doesn't remain solely in the hands of self-appointed digital priests. The real question is whether we'll have the courage to challenge these new gatekeepers and build systems that maintain our cultural autonomy in their shadow.

The path to technological sovereignty isn't easy. Building foundational AI models requires massive computational resources, extensive research capabilities, and sustained investment. But the alternative -- accepting a role as mere implementers of others' technology -- carries an even steeper long-term cost. Just as countries that missed the industrial revolution spent centuries catching up, those that sit out the AI revolution risk permanent technological subordination.

The alternative is watching silently as Silicon Valley becomes the new Varanasi, dispensing digital dharma to an increasingly dependent world. And just as the old priesthoods served their patrons while claiming to serve the gods, these new digital Brahmins might find themselves serving profit while claiming to serve progress.

What's needed is not just technological independence but transparency in how these systems are shaped by various power interests. Whether controlled by Western tech giants or local governments, the danger lies in the invisible hand guiding these seemingly objective systems. The true democratization of AI will come not just from who builds the technology, but from ensuring users understand whose interests it ultimately serves, and creating governance structures that answer Juvenal's ancient question in ways that distribute rather than concentrate power.


PS: This article was co-written with Claude.
