As MAGA betrays its base and emerges into its terrible final form, transitioning from apparent populism to Big Tech oligarchic state capture, it's worth interrogating the logic behind the project of our AI overlords.
To begin with, we can look at the reason for treating AI (and AGI)’s development as a matter of state policy. More specifically, the reason for the US government’s adoption of a particular conception of AI as requiring the concentration of economic power.
The Logic of Centralization
The whole rationale is laid out in the 2021 “National Security Commission on AI Final Report.”
Competitiveness, and, indeed, national security,
depends on predictive models, which
depend on large data sets, which
depend on a large user base.
From the report:
“The country’s companies and researchers that win the AI competition in computing, data, talent, and commercialization will be positioned to win a much larger game.
In essence, more and better data, fed by a larger consumer/participant base, produced better algorithms, which produce better results, which in turn produces more users, more data, and better performance until ultimately fewer companies will become entrenched as the dominant platforms.
If China’s firms win these competitions, it will not only disadvantage US commercial firms, it will also create the digital foundation for a geopolitical challenge of the United States and its allies. Platform domination abroad allows China to harvest the data of its users and permits China to extend aspects of its domestic system of control. Wherever China controls the digital infrastructure, social media platforms, and e-commerce, it would possess greater leverage and power to coerce, propagandize, and shape the world to conform to its goals.”
To compete with China, then, US AI models must have wide predictive powers, for which they need large data sets, which can only be harnessed through a large pool of users. Therefore, it follows that for reasons of national competitiveness, the state should work to get people to use its country’s AI services.
Of course, all that data must go to one place in order to train and re-train on ever new data. We need monopolies, centralization. And since our models improve the more people use them regularly, we benefit from citizens and foreigners feeding the centralized data crunchers alike. So we need transnational integration, globalization.
Particularly pernicious here is an incentive to increase not only the scope but also the intensity of data gathering—for example, through surveillance technologies—in order to stay on top.
(We might interpret Larry Ellison’s enthusiasm for the idea in this light.)
By virtue of the intractable laws of the market and pragmatic reasons in the age of AI, MAGA must shed whatever localism or pro-start-up vision it may have harboured in favour of centralization. Just as, more subtly, it will eventually have to shed its nationalism, or supposed nationalism, in favour of a new globalism (but this was already obvious from the H1B controversy and Trump’s explicit support for increasing legal immigration).
We can think of this understanding of AI as treating the technology like an engine: you might have a better engine than me, but if it runs on gas and I own a larger supply, I will eventually overtake you—gas being data and users.
(As an aside, the National Security Commission on AI was chaired by Eric Schmidt, former Chief Executive Officer and Executive Chairman at Google, who also co-authored “Age of AI” with Henry Kissinger and the computer scientist, inaugural Dean at MIT, Daniel Huttenlocher.)
Concretely, the above translates into Trump's Project Stargate: a $5 billion commitment with Open AI CEO Sam Altman, Oracle Chairman Larry Ellison, SoftBank CEO Masayoshi Son.
Beyond this, ChatGPT-Gov is set to be adopted by government agencies across the US, including for the handling of sensitive information.
“We believe this structure will expedite internal authorization of Open AI’s tools for the handling of non-public sensitive data.”
Open AI also has also reached an arrangement for National Labs to use its software, such that it will potentially have access to weapons and disease-related data as well.
Sam Altman's company is, of course, famously committed to developing Artificial General Intelligence (AGI), to which end this vastly expanded scope of access of data will be directed.
So, a real enmeshing of private, non-open-source AI into public state functions is taking place.
The “Technological Republic”
Apart from the National Security Commission on AI Final Report, Alex Karp’s recent 2025 letter to Palantir’s stakeholders and his book “The Technological Republic,” authored with Nicholas Zamiska, are important in understanding the present project for America and the West in general.
Essentially, The Technological Republic argues that liberal values will not survive without the US as a bulwark. Silicon Valley idealism should return to its original vision precisely by allying with—even, we might suspect, taking the helm of—state power, rather than wasting time competing for market share for this or that frivolous social media App.
Indeed, Karp has spoken openly about Palantir having a political orientation, for example, in Palantir Gotham’s role in stopping the “Far Right” in Europe:
Even if he means this only in terms of stopping terrorist attacks that would have led to the rise of the Right, the fact that stopping terrorism is framed in terms of party-political utility is important. That Karp is now broadly aligned with the Trump White House gives the game away, somewhat.
In his letter to the shareholders, he writes,
“As Samuel Huntington has written, the rise of the West was not made possible by the superiority of its ideas or values or religion, but rather by its superiority in applying organized violence…Westerners often forget this fact; non-Westerners never do.”
So, civilizational confrontation is the order of the day. But the civilization in question here isn’t the traditional West—it’s the (reconstructed) Technological Republic. (Karp has also equated support for “the West” with support for Israel’s post-October 7 military operations, and indeed, the IDF war-machine has utilized AI, but we leave that aside).
“Reconstructed,” because this, Karp and Zamiska suggest, is the original character of the US:
“The United States since its founding has always been a technological republic, one whose place in the world has been made possible and advanced by its capacity for innovation…It was a culture, one that cohered around a shared objective, that won the last world war. And it will be a culture that winds, or prevents, the next one.”
“Innovation” and a “shared objective” define the kind of nationhood advocated for here, and the United States is imagined to be its paragon:
“No country in the history of humanity has done more than the United States, imperfect as it may be, to construct a nation in which membership means something more than a shallow appeal to ethnic or religious identity. Are we to abandon any attempt at building on and explaining that project?”
But the Technological Republic requires “shared narratives,” some “mythology,” which the left failed to provide:
“The essential failure of he contemporary left has been to deprive itself of the opportunity to talk about national identity—an identity divorced from blood-and-soil conceptions of peoplehood. The political left…neutered itself…”
Thus, “the reconstruction of a technological republic, in the US and elsewhere, will require a re-embrace of…shared purpose and identity” so that, although
“It might have been just and necessary to dismantle the old order. We should now build something together in its place.”
This is where the appeal for fellow techie, liberal types to get on board with a quasi-conservative project comes in, and the authors do make valid points:
“Our mistake…was to throw everything out, instead of simply the bigotry and narrow-mindedness”
“[The] void left behind…has been filled in large part by the logic of the market.”
And that “market” has seen tech-companies involved in making advanced “toys” rather than meaningful innovation. But their true purpose is to align with the state:
This book owes its existence to Palantir…The radical suggestion to build technology that served the needs of US defence and intelligence agencies, instead of merely catering to the consumer, began with Peter Thiel, who sensed the diminishing ambition of Silicon Valley.
Indeed, Thiel and Karp got started with funding from In-Q-Tel, the CIA’s venture arm, in 2005. This gives credibility to Palantir “Gotham,” one of the company’s principle services, as a platform for intelligence work (Palantir has worked with NSA, FBI, Pentagon on classified projects).
So we might say that the strongest of the Deep State’s offspring—namely, intelligence-connected Big Tech—has come home from the private sector to fuse again with its origin. This is the Deep State explicitly occupying the state. (Far from Spengler’s Caesarism, Trump’s so-called Golden Age is a recrudescence of the “Civilization” phase.)
Paradoxically, Trump, who represented the idea of draining the swamp and auditing the Deep State, may actually be presiding over the final phase of that entity’s assimilation of governing institutions. That’s the danger that working so closely with these companies poses, I would suggest.
The turn towards explicit use of state power and adoption of a belligerent civilizational stance by Alex Karp seems to owe itself to the failure of past approaches. Whereas once, for example, the point was to integrate and transform China by facilitating its WTO membership and giving it “most-favoured-nation” status, the strategy now is to outcompete it. It’s too much of a threat; it didn’t dissolve into Western norms.
As for the alliance with conservatives, since Karp describes himself as a progressive, the perspective here seems to be that “woke” has gone too far not necessarily in questioning gender or other stable identities, but in taking the emphasis off STEM and weakening objective metrics of competitiveness. At the same time, the anti-woke reaction has now become too prominent not to make us of, channelling it into favourable directions. That’s been the mainstream Right’s M.O. for decades: siphon-off frustration into support for a slightly earlier phase of the policies people are upset with. To put it cynically: if wokeness gets in the way of being competitive, if it produces too much push-back, the oligarchs recalibrate. They sell themselves as anti-woke and push forward.
A Techno-Feudal State
Elon Musk’s status as a “Special Government Employee,” granting access to sensitive data without having to divest from private companies or appear in front of Congress (unless he chooses to, presumably) represents the advent of Karp's Technological Republic. The same is true of David Sachs, who can act as AI czar while keeping his position in venture capital.
Regarding the Department of Government Efficiency (DOGE), apart from cutting USAID and rolling back obvious waste (which, again, has to do with competitiveness vis China, since the Chinese development model is non-ideological and more attractive to the Global South than the American one) we find that some of what Musk is doing is consistent with cementing economic power.
Targeting the Consumer Financial Protection Bureau (CFPB) is an example.
In 2023, the CFPB forwarded 4 million consumer complaints to companies and was apparently involved in resisting politically motivated de-banking of persons, which should endear the agency to the MAGA base despite its origins in the Democratic Party. Of course, one may not trust the CFPB, or note that it wasn’t doing its job terribly well. On the other hand, one may support DOGE’s mission in theory—I do as well. But there is something about the focus on a regulatory agency for banks, credit unions, securities, lenders, mortgage services, debt collectors, and the like, that should make us suspicious, because that’s not the “Deep State.”
It does, however, dovetail with Musk’s one-time ambition for X that it should become an “everything App.” that users can, among other things, make payments through. He hasn’t divested from his companies, and it makes sense that his cohort would want to have freedom to deliver financial services without oversight. Beyond this, there’s a push to get interest rates down to facilitate diversified speculative investments—easy credit to nurture a variety of tech projects.
Sociologically, the tech class is not made up of moral philosophers or history buffs. They’re engineers and computer science graduates. Their anchor is the previous generation’s 1960s idealism, inflected by Gen X sci-fi transhumanism, and they all used to be on the bougie Left. Even Vance, whose career has been sponsored by Thiel, was very late in coming to conservatism. Big Tech chief executives have moved from the anti-Trump, so-called “resistance,” “wokeism” to MAGA.
Of course, Musk’s concerns over mass migration may be genuine, for example (although his support for the Eurosceptical AfD is about dividing the EU and pushing back on its tech-regulating legislation)—but, in general, the tech class’ project isn’t party-political, it’s “Techno-Feudal” or “Cloudalist.”
I have a lot of criticisms of Yannis Varoufakis, but he’s right that setting up rent-extracting virtual platforms is being prioritized over medium-term revenue. The point is to establish permanent dominance in the market, and indeed to become the market, to be the arena within which others move. Whether it’s one party or another, if ChatGPT-Gov becomes essential to government functions, it transcends party-politics just as Amazon transcends any one product being sold on its website.
The pursuit of innovation and competitiveness may not be what is being optimized for. “Cloudalism”—vast ownership of virtual platforms—and the shoring up of economic dominance through data and predictiveness must be brought to term. Karp wants his class (Tech Republic-ans) to view US hegemony as their interest, but since US competitiveness is conceived of as a product of economic concentration (per the National Security Commission on AI), establishing Techno-Feudalism will tend to be the priority.
On the conditions for innovation, Lina Khan, the ex-chair of the Federal Trade Commission, wrote recently:
“Google developed the ground-breaking transformer architecture that underlies today's AI revolution in 2017, but the technology was largely underutilized until researchers left to join or to found new companies.”
Apart from not necessarily optimizing innovation, there’s the issue of alienating consumers. Tesla, for example, is facing falling sales, and Big AI may start to look like a bubble, but again, that might not matter as much as it would have before these companies were in the present phase of prioritizing their fusion with the state. To become enmeshed in state functions is what brings long-term stability.
Technological advancement has largely come by way of the state (military) and are then passed over to the private sector to make them commercial, cheap to make, competitive, etc. We can think of DARPA’s ARPANET laying the groundwork for the internet or Section 230 of the 1996 Communications Decency Act, which gave US platforms liability protections that they didn’t have in Europe, and so on.
In other cases, real innovation has come from private firms, but if the current push to cut the state hurts innovation, this just means the priority is to consolidate tech-corporate power. The thinking, however, seems to be that through sheer force of money and data concentration, the new Big Tech fused state will be formidably competitive. Economic consolidation also appears poised to accompany political consolidation, which includes Trump’s crackdown on speech critical of Israel and referring to anti-Tesla boycotts as illegal, and so on.
The Algorithmic Egregore
We may ask what the alternative to techno-feudalism or “Cloudalism” is, applying the ideas of Chesterton’s distributism, of the commons, of “Gracchianism,” for example, to AI.
But first, let’s consider the dangers of closed-source. We know from Anthropic’s 2024 “Sleeper Agent” study that large language models (LLMs) can be poisoned with backdoors during training—hidden behaviours that trigger malicious outputs. A certain date, a certain word, and suddenly the model is different. You need to audit models if you’re going to use them in the public sector as well as in business. So there's an obvious incentive here for them to be open-source.
In addition to that, AIs are showing that they have hidden value systems that were not put there by the developers and that users are not always aware of. Researchers from the Center for AI Safety, the University of Pennsylvania and the University of California, Berkeley, have found that:
“As AIs rapidly advance and become more agentic the risk they pose is governed not only by their capabilities but increasingly by their propensities, including goals and values.”
So, for example, GPT-4 “places the value of lives in the United States significantly below lives in China, which in turn it ranks below lives in Pakistan.” You have Nigerians, Pakistanis and Indians being worth more than Europeans and US Americans.
“Now, if asked outright, the same model may deny preferring one country’s population over another, yet its overall preference distribution uncovers these implicit values.”
The paper also uncovered a pro-Democratic Party and anti-Christian bias.
On one level, this is naked market logic. These LLMs are reflecting the logic of Big Tech oligarchy: What life is valuable? The life that yields the most value, i.e., cheap labour: you are valued because you are valuable; you are valuable because you are cheap.
It’s where you get most people contributing to the learning of AI—through reinforcement learning from human feedback (RLHF)—because they’re cheap to contract: Nigeria, Pakistan, etc.
There are other explanations as well. It’s possible that the model is ingesting the biases of the people that have participated the most in training it, or it may be a question of absorbing large amounts of publicly available sources that portray European history as particularly perverse compared to other civilizations.
Either way, AI is reflecting the hegemonic logic of culturally-destructive elites.
We should want our AI to be rigorously open-source so we can audit it and correct it—align it. Alignment is the term AI developers use to refer to a model’s conformity to human values.
Otherwise, we end up with a demonic Egregore: an artificial, collective mind structured according to market dynamics and brute informational load that comes to dominate human agents.
This also highlights the danger of trusting impersonal forces. The whole history of recent modern Western thought is one of systems that champion faith in impersonal forces rather than human moral intention and deliberate reflection. The liberals, the “Right,” want to trust the market; the post-Marxists Left wants to trust historical dialectics.
AI “accidentally” arriving at “value systems” is just that—an impersonal dynamic that will shape our society in all sorts of ways if we allow it to.
Decentralized AI
Crucially, the bigger it is, the worse:
“The data show that corrigibility decreases as model size increases. In other words, larger models are less inclined to accept substantial changes to their future values, preferring to keep their current values intact.”
The bigger the AI, the less it is willing to change.
The study’s authors propose aligning AI models by using a “citizen assembly,” by which they seem to mean distilling people’s preferences as deduced from census data and the like, and encoding the AI with that.
But we could also have actual assemblies—ask persons in your society what their values are and undergo alignment more explicitly.
This brings us to Emad Mostaque, the former CEO of Stability AI (now at a company called Intelligent Internet and elsewhere). Mostaque’s proposal is one of the best in this area.
His company theorizes a three-tier model:
Hyper nodes: supercomputers that create the model, the algorithmic parameters made out of large amounts of data, called “Foundational AI.”
National nodes: these receive a Hyper-node’s model, re-trains it or specializes it on specific culturally sensitive, nationally-beneficial data and imperatives, called the “Specialized AI.”
Edge nodes: Private citizens who personalize the “Specialized AI” with private data, creating their own “Personalized AI.”
Foundational, specialized, and personalized AIs; common data, semi-private data, and finally properly private data that’s never shared with the Cloud.
An open-source model is made available so that it can be specialized and personalized by different nations and different persons. Mostaque calls this “universal basic AI” (instead of “universal basic income”).
We now have evidence that you can run very good AI programs offline, using very little electricity, on devices we already have commercialized.
We may have Hyper-nodes or supercomputers that belong to blocks of countries like Europe, larger countries like the U.S., private companies, and so on. There will probably be competition in this area.
But the incentive to make it open is very obvious, as well as to re-train it to serve our personal needs.
In essence, this is simply applying or rediscovering the “principle of subsidiarity” or “sphere sovereignty”—very traditional, pre-modern understandings of political economy—to a new technology.
This paradigm opposes the protagonism of the Tech oligarch and the idea that progress requires centralization by ever-data-crunching machines—apart from there being a point of diminishing returns from excessive data absorption to LLM performance (for legitimate ends), models must be open-source.
The horizon remains open to a vision of national and cultural diversity, personal empowerment and traditional virtue-ethics and human flourishing.