
AI is often considered a threat to democracy and a boon to dictators. In 2025, algorithms may undermine democratic discourse by fueling outrage, fake news, disinformation, and conspiracy theories. Algorithms will also continue to accelerate the creation of total surveillance systems, in which entire populations are watched 24 hours a day.
Most importantly, AI facilitates the concentration of all information and power into a single hub. In the 20th century, distributed information networks like the USA worked better than centralized information networks like the USSR, because the human apparatchiks at the center could not analyze all the information efficiently. Replacing apparatchiks with AIs could make Soviet-style centralized networks far more effective.
Still, it’s not all good news for AI dictators. First, there is the notorious problem of control. Authoritarian control is founded on terror, but algorithms cannot be terrorized. In Russia, the invasion of Ukraine is officially defined as a “special military operation,” and referring to it as a “war” is a crime punishable by up to three years in prison. If a chatbot on the Russian internet calls it a “war” or refers to war crimes committed by Russian soldiers, how can the government punish that chatbot? It can block the bot and try to punish its human creators, but that is much harder than disciplining human users. Moreover, even authorized bots may spot patterns in Russian data and generate dissenting opinions on their own. That’s the alignment problem, Russian style. Human engineers in Russia may do their best to create an AI that is fully compliant with the regime, but given AI’s ability to learn and change itself, how can engineers ensure that an AI that received the regime’s seal of approval in 2024 does not stray into illegal territory in 2025?
The Russian constitution makes grand promises that “everyone shall be guaranteed freedom of thought and speech” (Article 29.1) and that “censorship shall be prohibited” (Article 29.5). Hardly any Russian citizen is naive enough to take these promises seriously. But bots don’t understand doublespeak. A chatbot instructed to obey Russian laws and values could read that constitution, conclude that free speech is a core Russian value, and criticize the Putin government for violating that value. How can Russian engineers explain to a chatbot that although the constitution guarantees freedom of speech, it should not actually believe the constitution, or mention the gap between theory and reality?
In the long run, authoritarian regimes may face an even greater danger: rather than criticizing them, AI may gain control of them. Throughout history, the greatest threat to dictators has usually come from their own subordinates. No Roman emperor or Soviet premier was ever overthrown by a democratic revolution, but they were always in danger of being overthrown or turned into puppets by their own subordinates. A dictator who grants AI too much authority in 2025 might become its puppet.
Dictatorships are far more vulnerable than democracies to such an algorithmic takeover. Even a super-Machiavellian AI would find it difficult to amass power in a decentralized democratic system like the United States. Even if an AI learned to manipulate the US president, it would face opposition from Congress, the Supreme Court, state governors, the media, large corporations, and various NGOs. How would the algorithm deal with, say, a Senate filibuster? It is much easier to seize power in a highly centralized system: to hack an authoritarian network, an AI needs to manipulate only a single paranoid individual.