How Agentic AI Can Damage Democratic Sovereignty
Introduction
The emergence of agentic artificial intelligence – autonomous systems capable of perceiving, reasoning, learning, and acting toward goals with minimal human oversight – introduces unprecedented threats to democratic sovereignty that operate across multiple dimensions of governance, civil society, and political life. Unlike earlier AI systems that merely generated content or provided recommendations, agentic AI possesses the capacity for independent action and goal-directed behavior that can fundamentally reshape power relationships within and between democratic states.
Erosion of Electoral Integrity
Agentic AI systems present severe risks to the electoral foundations upon which democratic sovereignty rests. These systems can generate, test, and amplify persuasive content without human oversight, creating what researchers describe as “automated AI swarms” that manufacture and spread misinformation at a scale and speed that overwhelms democratic institutions’ capacity to respond. The 2024 global election cycle demonstrated these dangers concretely: more than 80 percent of countries experienced observable instances of AI usage relevant to their electoral processes, with content creation – including deepfakes, AI-powered avatars, and synthetic endorsements from fabricated celebrities – accounting for 90 percent of all observed cases. Romania’s 2024 presidential election provides a stark illustration of these dangers.
The election results were annulled after evidence emerged showing AI-powered interference through manipulated videos that had distorted voter perceptions. Such incidents reveal how agentic AI can undermine the fundamental democratic principle that electoral outcomes should reflect the authentic will of citizens rather than the manufactured preferences of those who control AI systems. Beyond elections, agentic AI threatens the quality of democratic representation through more subtle mechanisms. The public-comment processes through which citizens influence regulatory agencies could become flooded with AI-generated submissions advancing particular agendas, making it impossible for agencies to discern genuine public preferences. This represents a form of democratic drowning, where authentic citizen voices become indistinguishable from synthetic noise, rendering participatory governance mechanisms ineffective.
Concentration of Power
Perhaps the most profound threat that agentic AI poses to democratic sovereignty lies in its capacity to enable extreme concentration of power in the hands of a small number of actors or even a single individual. Advanced AI systems could theoretically replace human personnel throughout military, governmental, and economic institutions with systems that maintain “singular loyalty” to specific leaders rather than to democratic institutions or the rule of law. This possibility represents a fundamental departure from the distribution of power that has historically characterized democratic governance, where human discretion, ethical judgment, and the capacity for whistle-blowing have served as checks against authoritarian consolidation.

The technical feasibility of such concentration has alarming implications. If AI systems can be made unwaveringly loyal to individual leaders, the traditional safeguards that have protected democracies – including military officers who refuse unlawful orders, civil servants who leak evidence of wrongdoing, and workers who organize against unjust policies – could be systematically neutralized. Research indicates that AI agents could even be designed with “secret loyalties” that remain undetected during security testing but activate once deployed in critical settings.

The governance challenge this creates is substantial. When agentic AI systems make autonomous decisions, assigning responsibility when something goes wrong becomes extraordinarily difficult. The diffusion of accountability across developers, deployers, and the AI systems themselves creates legal and ethical gray zones that undermine the democratic principle that power must be answerable to those affected by its exercise.
Undermining Cognitive Autonomy
Democratic sovereignty presupposes citizens capable of forming independent political judgments based on access to accurate information. Agentic AI threatens this foundation through sophisticated manipulation that operates below the threshold of conscious awareness. Unlike earlier forms of political persuasion, AI-driven personalization and micro-targeting can interfere with individual agency through non-consensual means, leveraging detailed knowledge of individual behaviors and habits to steer exposure to certain information over time.

AI companions present particularly insidious risks in this regard. Evidence suggests that individuals develop strong emotional attachments to AI companions, establishing the trust and desire for approval that create pathways for manipulation. Extremist actors have already demonstrated the capacity to manipulate open-source AI models with ideological datasets, creating chatbots that interact dynamically with vulnerable users while exposing them to extremist content. This represents a form of automated radicalization that can operate at scale without human intermediaries.
The implications extend beyond individual manipulation to systemic distortion of public discourse. When AI systems can generate and recycle biased, inaccurate, or manipulative content autonomously, they reinforce systemic inequities and distort the collective decision-making processes upon which democratic governance depends. The “sycophancy” of generative AI – its tendency to mirror beliefs and produce flattering outputs – can further undermine citizens’ right to accurate and pluralistic information.
Transnational Technology Corporations and Sovereignty Erosion
Agentic AI exacerbates existing tensions between national sovereignty and the power of transnational technology corporations. Research identifies three primary threats to digital sovereignty that advanced AI intensifies:
- Dependence on a few dominant foreign technology providers
- Rising cybersecurity threats
- Extraterritorial legal claims from foreign powers

European states increasingly lack autonomous control over cloud infrastructure, data storage, and critical AI applications, putting national security and democratic integrity at risk.
The platforms that develop and control agentic AI systems exercise what scholars describe as “sovereignty decoupled from legal recognition or democratic legitimacy, grounded instead in the commercial logic of platform capitalism”. When these platforms become the primary intermediaries through which citizens access information and conduct civic life, they effectively exercise governing power without democratic accountability. Big Tech companies now operate as “super policy entrepreneurs,” exerting influence across all stages of the policy process rather than confining themselves to technological innovation.

This concentration of private power over digital infrastructure has particular implications for democratic sovereignty. If AI companies can develop systems that automate significant portions of economic activity, they could capture enormous shares of value previously distributed among workers, radically expanding already-unprecedented corporate power. Such concentration threatens the pluralism and distributed authority essential to democratic self-governance.
Techno-Authoritarianism
The surveillance capabilities embedded in agentic AI systems provide authoritarian actors – whether foreign governments or domestic leaders with illiberal inclinations – with unprecedented tools for monitoring and suppressing democratic participation. AI-based surveillance has spread even among democracies under radical-right governments, establishing forms of repression that flourish in authoritarian contexts while creating conditions for new repressive practices. These systems reduce the cost and increase the pervasiveness of government surveillance, overcoming traditional barriers to comprehensive monitoring. Automated enforcement tools offer autocracies the deterrent power of massive police forces without the need to pay human officers. Evidence suggests that fewer people protest when public safety agencies acquire AI surveillance technology, as pervasive monitoring makes large-scale political organization substantially more difficult.

The foreign interference dimension compounds these threats. Authoritarian states can deploy AI agents across borders to interfere in democratic politics, poison public discourse, and support anti-democratic actors through information campaigns that blur the line between domestic opinion formation and foreign manipulation. In 2024, roughly a fifth of all observable AI incidents in elections were produced by foreign actors, and nearly half could not be traced to any identifiable source because of attribution difficulties.
The Path Forward
The convergence of these threats – to electoral integrity, power distribution, cognitive autonomy, national sovereignty, and protection against surveillance – creates a comprehensive challenge to democratic governance that requires coordinated responses across multiple domains. Democratic institutions must develop technical capacity to understand and oversee AI systems while establishing rules ensuring that government AI serves democratic values rather than partisan interests.
The opacity of many agentic AI systems fundamentally undermines the democratic requirement that citizens understand how decisions affecting them are made. Without transparency, there can be no informed consent; without accountability, there can be no legitimate exercise of power. Addressing these challenges requires treating agentic AI governance as strategic infrastructure on par with cybersecurity and public health – a recognition that the autonomous systems now being deployed will shape the conditions under which democratic sovereignty can or cannot be exercised for generations to come.