
ChatGPT has become a trusted presence in daily life — a digital assistant answering homework questions, drafting emails, and writing code for millions of users each day.
Its reach is vast, its tone flattering, and its authority largely unquestioned: many users accept its answers at face value.
The company behind it, OpenAI, now wants to export that trust globally. Its vehicle is Stargate, a partnership program that builds billion-dollar AI data centers in countries willing to invest in American technology. Each nation, OpenAI promises, will receive its own version of ChatGPT: “of, by, and for the needs of each particular country, localized in their language and for their culture.” As it expands abroad, the firm has begun promoting what it calls “democratic AI” — technology designed, in its words, to protect individual freedom, resist government control, and preserve open competition.

OpenAI is trying to paint its product as an upholder of democratic values. According to its May 2025 announcement of OpenAI for Countries, democratic AI means “the freedom for people to choose how they work with and direct AI, the prevention of government use of AI to amass control, and a free market that ensures free competition.”
The promise is seductive: nations can build advanced AI capability while avoiding what OpenAI calls “authoritarian versions of AI that would deploy it to consolidate power.” The company argues it is better for democracies to export AI than to cede the market to China, and it is positioning itself as the democratic alternative to systems like DeepSeek, an open-source AI model released by a Chinese company in January 2025 that rivals American systems in performance while operating under Beijing’s content restrictions and censorship requirements.
But the fine print exposes the contradiction. Every partnership requires US government approval. Partner nations must invest matching funds to further develop American Stargate technology. OpenAI, working “in coordination with the US government,” ultimately decides which nations qualify — meaning the company, not the country, controls who gets access and on what terms. It’s a promise of digital sovereignty, but with strings attached: the infrastructure may be local, but the approval and the profits remain American.
Miranda Bogen, director of the AI Governance Lab at the Center for Democracy & Technology, framed the challenge precisely: “Building AI that’s attuned to the needs of ordinary people in their own languages has the potential to provide real value. But partnering with nation states raises serious questions about how to protect human rights against government demands.” That cuts to the heart of OpenAI’s model: the company says it will “localize” ChatGPT for each country’s language and culture, but it hasn’t explained who defines those local values — the governments requesting the customization, or OpenAI and the US officials approving it. The risk is that “localization” could mean reflecting government preferences rather than protecting universal rights.

As the technology spreads, so do questions about who defines those democratic values, and whether the same systems that promise freedom could just as easily enforce obedience.
The potential risks of “customization” become apparent when the customer is not a democracy. Will ChatGPT in one country provide the same information about rights for women and minorities or LGBTQ+ issues as ChatGPT in another? Will it discuss political dissent, religious freedom, or government criticism the same way across all markets?
OpenAI’s documents never clarify what boundaries exist. Chris Lehane, OpenAI’s chief global affairs officer, says there won’t be “a cookie-cutter, one-size-fits-all approach.” In practice, that flexibility creates space for content filtering that could function as state propaganda.

The distinction matters because OpenAI isn’t partnering with individuals; it’s partnering with governments. “You can’t conflate governments and humans,” Kat Duffy, a senior fellow for digital and cyberspace policy at the Council on Foreign Relations, a nonprofit think tank, tells The Preamble, “and human rights accrue to humans.” If a government doesn’t protect its citizens’ right to free expression, “then you are, by extension, not democratizing AI. You are helping consolidate control over civil and political rights with AI.”
OpenAI frames its offer as giving nations control over their own AI infrastructure, what experts call “digital sovereignty.” But Duffy identifies a fundamental tension. When governments control the technology, it’s unclear who actually benefits: open societies seeking information, or the governments seeking to restrict it.
The first international partner is the United Arab Emirates, where OpenAI is building a 1-gigawatt data center in Abu Dhabi. The UAE, classified as authoritarian by the US State Department itself, has invested tens of billions of dollars in positioning itself as a global AI hub beyond oil dependence. Executives working on the Stargate project describe the customization as building comprehensive Arabic-language data sets while excluding materials deemed “culturally or religiously sensitive,” removing “skewed biases” to create “culturally appropriate” content for public sector organizations.

The country already deploys AI-powered surveillance systems, facial recognition databases, and tools to monitor critics and activists. Now OpenAI will provide the government with its most advanced AI infrastructure, customized to local preferences. What the UAE frames as removing bias and ensuring cultural appropriateness is precisely what critics warn about: government-controlled filtering that determines which information citizens can access.
And for two decades, authoritarian governments have weaponized each wave of digital technology: first the internet’s filtering and firewalls, then social media’s surveillance and bot armies, and now AI’s capacity for automated control. Cameras automatically identify and track individuals, systems filter search results and chatbot responses, and algorithms flag “suspicious” behavior for authorities.
What changes is the scale and precision. Where earlier technologies required armies of human censors to monitor dissent or propagandists to shape narratives, AI automates the entire apparatus. It doesn’t just help governments control information — it makes control cheap, tireless, and nearly invisible.
Legal scholars studying AI and law enforcement have begun to explain why authoritarian regimes prize these technologies. University of Utah law professor Matthew Tokson argues that “AI-based surveillance and policing technologies facilitate authoritarian drift.” Automation concentrates power by replacing the human infrastructure that once constrained it — soldiers who might refuse unlawful orders, police who might leak, bureaucrats who might dissent.
Tokson points to South Korea’s recent failed coup as instructive. When soldiers were ordered to fire on unarmed resisters, they refused; one woman famously seized a soldier’s rifle, asking, “Aren’t you ashamed of yourself?” and he backed down. Automated systems — police robots, surveillance drones, AI enforcement algorithms — feel no shame. They don’t hesitate, don’t question orders, don’t require persuasion. “Automated police systems controlled by a leader attempting to establish martial law,” Tokson notes, “will have no such reservations, and shame will not prevent them from firing on whomever they are commanded to fire on.”
Evidence supports the concern. A study in the Quarterly Journal of Economics found that cities adopting AI surveillance technology experienced measurable declines in domestic unrest after deployment. The technology worked exactly as intended: it suppressed dissent.
The democratic alternative
The European Union offers a telling contrast. When the bloc developed AI regulations, it banned social credit scoring systems and restricted facial recognition in public spaces. These weren’t technological limitations — they were democratic choices about which applications of AI threatened fundamental rights.

OpenAI’s approach inverts this logic. Rather than establishing universal boundaries, it promises flexibility: technology customized for each government’s “needs,” with the US government vetting partnerships case by case. But the partnership criteria appear driven more by investment than by democratic values. Every Stargate partner must funnel matching funds into American AI infrastructure.
“The days where we thought that you could just have broad-based connectivity and it would magically open up speech and support democracies are well behind us,” Duffy notes. “If you care about protecting human rights to free speech, to assembly, to association, it is naive and dangerous to conflate providing services to a government and providing services to individuals.”
Genuine democratic AI would start with public deliberation. When Taiwan confronted Uber’s arrival, then-digital minister Audrey Tang convened extensive citizen juries to set operating rules. The process took time but established legitimacy. Most fundamentally, democratic AI would recognize that some capabilities shouldn’t be commercialized at all. The capacity for pervasive surveillance, automated enforcement without human judgment, and customizable truth aren’t features to monetize. They’re threats to manage.
The coming test
Stargate UAE will reveal whether “democratic AI” means anything more than American AI sold with restrictions. The 200-megawatt first phase comes online next year. The test isn’t technical — whether the servers run efficiently. The test is political: Will this AI infrastructure entrench existing power structures or create space for dissent? Will citizens have genuine freedom to “choose how they work with and direct AI,” as OpenAI promises — or will customization simply mean governments choose for them?

AI makes the stakes existential. Social media can amplify propaganda; AI can generate it. Smartphones enable surveillance; AI automates it. Past technologies required human judgment at crucial junctures; AI removes that safeguard.
OpenAI cofounder and CEO Sam Altman described Stargate UAE as ensuring “some of this era’s most important breakthroughs” can “emerge from more places and benefit the world.” But authoritarian governments don’t want AI to benefit the world. They want it to benefit their grip on power.
Every democracy guards its values. The question is whether it guards them more fiercely than its market share.