Michael Henry Tessler et al. • 2026 • arXiv
The strength of democracy lies in the free and equal exchange of diverse viewpoints. Living up to this ideal at scale faces inherent tensions: broad participation, meaningful deliberation, and political equality often trade off with one another (Fishkin, 2011). We ask whether and how artificial intelligence (AI) could help navigate this "trilemma" by engaging with a recent example of a large language model (LLM)-based system designed to help people with diverse viewpoints find common ground (Tessler, Bakker, et al., 2024). Here, we explore the implications of the introduction of LLMs into deliberation augmentation tools, examining their potential to enhance participation through scalability, improve political equality via fair mediation, and foster meaningful deliberation by, for example, surfacing trustworthy information. We also point to key challenges that remain. Ultimately, a range of empirical, technical, and theoretical advancements are needed to fully realize the promise of AI-mediated deliberation for enhancing citizen engagement and strengthening democratic deliberation.
Seth Lazar et al. • 2026 • Minds and Machines
LLMs are among the most advanced tools ever devised for understanding and generating natural language. Democratic deliberation and decision-making involve, at several distinct stages, the production and comprehension of language. So it is natural to ask whether our best linguistic tools might prove instrumental to some of our most important linguistic tasks. Researchers and practitioners have recently asked whether LLMs can support democratic deliberation by leveraging abilities to summarise content, to aggregate opinions over summarised content, and to represent voters by predicting their preferences over unseen choices. In this paper, we assess whether using LLMs to perform these and related functions really advances the democratic values behind these experiments. We suggest that the record is mixed. In the presence of background inequality of power and resources, as well as deep moral and political disagreement, we should not use LLMs to automate non-instrumentally valuable components of the democratic process, nor should we be tempted to supplant fair and transparent decision-making procedures that are practically necessary to reconcile competing interests and values. However, while LLMs should be kept well clear of formal democratic decision-making processes, we think they can instead strengthen the informal public sphere—the arena that mediates between democratic governments and the polities that they serve, in which political communities seek information, form civic publics, and hold their leaders to account.
Mark Coeckelbergh • 2025 • Science and Engineering Ethics
Abstract While there are many public concerns about the impact of AI on truth and knowledge, especially when it comes to the widespread use of LLMs, there is not much systematic philosophical analysis of these problems and their political implications. This paper aims to assist this effort by providing an overview of some truth-related risks in which LLMs may play a role, including risks concerning hallucination and misinformation, epistemic agency and epistemic bubbles, bullshit and relativism, and epistemic anachronism and epistemic incest, and by offering arguments for why these problems are not only epistemic issues but also raise problems for democracy, since they undermine its epistemic basis, especially if we assume theories of democracy that go beyond minimalist views. I end with a short reflection on what can be done about these political-epistemic risks, pointing to education as one of the sites for change.
Suyash Fulay et al. • 2025 • arXiv
Deliberation is essential to well-functioning democracies, yet physical, economic, and social barriers often exclude certain groups, reducing representativeness and contributing to issues like group polarization. In this work, we explore the use of large language model (LLM) personas to introduce missing perspectives in policy deliberations. We develop and evaluate a tool that transcribes conversations in real-time and simulates input from relevant but absent stakeholders. We deploy this tool in a 19-person student citizens' assembly on campus sustainability. Participants and facilitators found that the tool sparked new discussions and surfaced valuable perspectives they had not previously considered. However, they also noted that AI-generated responses were sometimes overly general. They raised concerns about overreliance on AI for perspective-taking. Our findings highlight both the promise and potential risks of using LLMs to raise missing points of view in group deliberation settings.
Rudy Alexandro Garrido Veliz et al. • 2025 • arXiv
Social media increasingly fuel extremism, especially right-wing extremism, and enable the rapid spread of antidemocratic narratives. Although AI and data science are often leveraged to manipulate political opinion, there is a critical need for tools that support effective monitoring without infringing on freedom of expression. We present KI4Demokratie, an AI-based platform that assists journalists, researchers, and policymakers in monitoring right-wing discourse that may undermine democratic values. KI4Demokratie applies machine learning models to large-scale German online data gathered daily, providing a comprehensive view of trends in the German digital sphere. Early analysis reveals both the complexity of tracking organized extremist behavior and the promise of our integrated approach, especially during key events.
Andrew Konya et al. • 2025 • Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency
A growing body of work has shown that AI-assisted methods — leveraging large language models, social choice methods, and collective dialogues — can help navigate polarization and surface common ground in controlled lab settings. But what can these approaches contribute in real-world contexts? We present a case study applying these techniques to find common ground between Israeli and Palestinian peacebuilders in the period following October 7th, 2023. From April to July 2024, an iterative deliberative process combining LLMs, bridging-based ranking, and collective dialogues was conducted in partnership with the Alliance for Middle East Peace. Around 138 civil society peacebuilders participated, including Israeli Jews, Palestinian citizens of Israel, and Palestinians from the West Bank and Gaza. The process resulted in a set of collective statements, including demands to world leaders, with at least 84% agreement from participants on each side. In this paper, we document the process, results, challenges, and important open questions.
Nikos I. Karacapilidis et al. • 2024
Aiming to augment the effectiveness and scalability of existing digital deliberation platforms, while also facilitating evidence-based collective decision making and increasing citizen participation and trust, this article (i) reviews state-of-the-art applications of LLMs in diverse public deliberation issues; (ii) proposes a novel digital deliberation framework that meaningfully incorporates Knowledge Graphs and neuro-symbolic reasoning approaches to improve the factual accuracy and reasoning capabilities of LLMs, and (iii) demonstrates the potential of the proposed solution through two key deliberation tasks, namely fact checking and argument building. The article provides insights about how modern AI technology should be used to address the equity perspective, helping citizens to construct robust and informed arguments, refine their prose, and contribute comprehensible feedback; and aiding policy makers in obtaining a deep understanding of the evolution and outcome of a deliberation.
Stavros Vassos et al. • 2024 • ACM
Accurate political information is vital for voters to make informed decisions. However, due to the plethora of data and biased sources, accessing concise, factual information remains a challenge. To tackle this problem, we present an open-access, deployed digital assistant powered by Large Language Models (LLMs), specifically tailored to answer voters’ questions and help them vote for the political party they most align with. The user can select up to 3 parties, input their question, and get short, summarized answers from the parties’ published political agendas, which span hundreds of pages and are thus difficult for the typical citizen to navigate. Our NLP system architecture leverages OpenAI’s GPT-4 and incorporates Retrieval-Augmented Generation with Citations (RAG+C) to integrate custom data into LLMs effectively and build user trust. We also describe our database design, underlining the use of an open-source vector database, optimized for high-dimensional semantic search across multiple documents, and a semantic-rich LLM cache, reducing operational expenses and end-user latency. Our open-access system supports Greek and English and has been deployed live at https://toraksero.gr/ for the Greek 2023 elections, gathering 30K user sessions with 74% user satisfaction.
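The core retrieval-with-citations step the abstract describes can be sketched in miniature. This is a toy, self-contained illustration only: it uses a bag-of-words "embedding" and an in-memory list in place of the deployed system's GPT-4 and high-dimensional vector database, and the corpus snippets are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words stand-in for a semantic embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_with_citations(query, corpus, k=2):
    # corpus: list of (citation, passage) pairs drawn from party agendas.
    # Returns the k passages most similar to the query, with citations
    # attached so the generated answer can cite its sources.
    q = embed(query)
    ranked = sorted(corpus, key=lambda c: cosine(q, embed(c[1])), reverse=True)
    return ranked[:k]

corpus = [
    ("Party A, p. 12", "We will expand renewable energy subsidies."),
    ("Party B, p. 7", "Tax cuts for small businesses are our priority."),
    ("Party A, p. 30", "Public transport investment in all regions."),
]
hits = retrieve_with_citations("renewable energy policy", corpus)
```

In the deployed system, the retrieved passages would then be passed to the LLM as grounded context, with the citations surfaced to the user.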
Goshi Aoki • 2024 • arXiv
The advancement of generative AI, particularly large language models (LLMs), has a significant impact on politics and democracy, offering potential across various domains, including policymaking, political communication, analysis, and governance. This paper surveys recent and potential applications of LLMs in politics, examining both their promise and the associated challenges, including the ways in which LLMs are being employed in legislative processes, political communication, and political analysis. Moreover, we investigate the potential of LLMs in diplomatic and national security contexts, economic and social modeling, and legal applications. While LLMs offer opportunities to enhance efficiency, inclusivity, and decision-making in political processes, they also present challenges related to bias, transparency, and accountability. The paper underscores the necessity for responsible development, ethical considerations, and governance frameworks to ensure that the integration of LLMs into politics aligns with democratic values and promotes a more just and equitable society.
Generative AI for Pro-Democracy Platforms
Lily L. Tsai et al. • 2024 • An MIT Exploration of Generative AI
Jason W. Burton et al. • 2024 • Nature Human Behaviour
Collective intelligence underpins the success of groups, organizations, markets and societies. Through distributed cognition and coordination, collectives can achieve outcomes that exceed the capabilities of individuals-even experts-resulting in improved accuracy and novel capabilities. Often, collective intelligence is supported by information technology, such as online prediction markets that elicit the 'wisdom of crowds', online forums that structure collective deliberation or digital platforms that crowdsource knowledge from the public. Large language models, however, are transforming how information is aggregated, accessed and transmitted online. Here we focus on the unique opportunities and challenges this transformation poses for collective intelligence. We bring together interdisciplinary perspectives from industry and academia to identify potential benefits, risks, policy-relevant considerations and open research questions, culminating in a call for a closer examination of how large language models affect humans' ability to collectively tackle complex problems.
Jairo F. Gudiño et al. • 2024 • Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
We explore an augmented democracy system built on off-the-shelf large language models (LLMs) fine-tuned to augment data on citizens’ preferences elicited over policies extracted from the government programmes of the two main candidates of Brazil’s 2022 presidential election. We use a train-test cross-validation set-up to estimate the accuracy with which the LLMs predict both: a subject’s individual political choices and the aggregate preferences of the full sample of participants. At the individual level, we find that LLMs predict out of sample preferences more accurately than a ‘bundle rule’, which would assume that citizens always vote for the proposals of the candidate aligned with their self-reported political orientation. At the population level, we show that a probabilistic sample augmented by an LLM provides a more accurate estimate of the aggregate preferences of a population than the non-augmented probabilistic sample alone. Together, these results indicate that policy preference data augmented using LLMs can capture nuances that transcend party lines and represents a promising avenue of research for data augmentation. This article is part of the theme issue ‘Co-creating the future: participatory cities and digital governance’.
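The individual-level evaluation described above can be illustrated with a toy version of the baseline comparison. This is a hedged sketch, not the paper's pipeline: the "bundle rule" assumes a citizen endorses every proposal of the candidate matching their self-reported orientation, and the data below are invented stand-ins for the held-out survey responses.

```python
def bundle_rule(citizen, proposal):
    # Baseline: predict approval iff the proposal belongs to the
    # candidate aligned with the citizen's self-reported orientation.
    return proposal["candidate"] == citizen["orientation"]

def accuracy(predict, citizens, proposals, labels):
    # Fraction of (citizen, proposal) pairs predicted correctly
    # against held-out preference labels.
    hits = total = 0
    for i, c in enumerate(citizens):
        for j, p in enumerate(proposals):
            hits += predict(c, p) == labels[i][j]
            total += 1
    return hits / total

citizens = [{"orientation": "A"}, {"orientation": "B"}]
proposals = [{"candidate": "A"}, {"candidate": "B"}]
# Held-out approvals: citizen 0 crosses party lines on proposal 1,
# the kind of nuance the bundle rule cannot capture.
labels = [[True, True], [False, True]]
baseline = accuracy(bundle_rule, citizens, proposals, labels)
```

A fine-tuned LLM predictor would be scored with the same `accuracy` function on the same held-out labels; the paper's finding is that it beats this baseline out of sample.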
Our nonprofit organization, OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.
We’re working to prevent abuse, provide transparency on AI-generated content, and improve access to accurate voting information.
Aviv Ovadya et al. • 2024
This position paper argues that effectively "democratizing AI" requires democratic governance and alignment of AI, and that this is particularly valuable for decisions with systemic societal impacts. Initial steps -- such as Meta's Community Forums and Anthropic's Collective Constitutional AI -- have illustrated a promising direction, where democratic processes could be used to meaningfully improve public involvement and trust in critical decisions. To more concretely explore what increasingly democratic AI might look like, we provide a "Democracy Levels" framework and associated tools that: (i) define milestones toward meaningfully democratic AI, which is also crucial for substantively pluralistic, human-centered, participatory, and public-interest AI, (ii) can help guide organizations seeking to increase the legitimacy of their decisions on difficult AI governance and alignment questions, and (iii) support the evaluation of such efforts.
Miguel Gonzalez-Mohino et al. • 2023 • Journal of New Approaches in Educational Research
Abstract The widespread use of digital technologies and the expansion of social networks have created new communication and meeting spaces where people and social and political actors connect with each other. This opens diverse spaces and possibilities for digital engagement in a more accessible, immediate, continuous, egalitarian, and personalized way. Digital technology facilitates learning, dissemination, and access to information, turning it into a means of communication and fueling the practice of critical thinking. In particular, civic critical thinking practices improve the organization and effectiveness of civic networks and spaces for citizen participation, ultimately helping to produce responsible, conscious citizens. This study proposes a series of hypotheses based on the relationships between digital learning, critical thinking and civic participation, and tests them using the technique of structural equation modeling (SEM) with partial least squares (PLS) applied to a sample of 191 primary and secondary school students. The results indicate that digital tools have a positive impact on the development of critical thinking, and this influences citizen participation, transforming people into more engaged citizens of the world with participatory attitudes and values.
Sara Fish et al. • 2023 • arXiv
The mathematical study of voting, social choice theory, has traditionally only been applicable to choices among a few predetermined alternatives, but not to open-ended decisions such as collectively selecting a textual statement. We introduce generative social choice, a design methodology for open-ended democratic processes that combines the rigor of social choice theory with the capability of large language models to generate text and extrapolate preferences. Our framework divides the design of AI-augmented democratic processes into two components: first, proving that the process satisfies representation guarantees when given access to oracle queries; second, empirically validating that these queries can be approximately implemented using a large language model. We apply this framework to the problem of summarizing free-form opinions into a proportionally representative slate of opinion statements; specifically, we develop a democratic process with representation guarantees and use this process to portray the opinions of participants in a survey about abortion policy. In a trial with 100 representative US residents, we find that 84 out of 100 participants feel "excellently" or "exceptionally" represented by the slate of five statements we extracted.
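The two-part framework the abstract describes can be sketched with a toy greedy process: repeatedly ask an "oracle" for the statement that represents the largest group of not-yet-represented participants. In the paper the oracle queries are approximately implemented with an LLM over free-form text; here, as an assumption for illustration, the oracle is simulated by a majority vote over a list of pre-labeled opinions.

```python
from collections import Counter

def oracle(opinions):
    # Simulated oracle query: return the statement endorsed by the
    # most remaining (unrepresented) participants. The paper replaces
    # this with an LLM that generates and extrapolates over free text.
    return Counter(opinions).most_common(1)[0][0]

def build_slate(opinions, k):
    # Greedily build a k-statement slate: each round, add the oracle's
    # statement and remove the participants it now represents.
    remaining = list(opinions)
    slate = []
    for _ in range(k):
        if not remaining:
            break
        s = oracle(remaining)
        slate.append(s)
        remaining = [o for o in remaining if o != s]
    return slate

# Invented toy opinions standing in for free-form survey responses.
opinions = ["pro-choice"] * 5 + ["pro-life"] * 3 + ["context-dependent"] * 2
slate = build_slate(opinions, k=2)
```

The representation guarantee in the paper is proved against exact oracle queries; the empirical contribution is showing an LLM can approximate them well enough that most participants feel represented by the resulting slate.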
Avoid Diluting Democracy by Algorithms
Henrik Skaug Sætra et al. • 2022 • Nature Machine Intelligence
Raphael Koster et al. • 2022 • Nature Human Behaviour
Abstract Building artificial intelligence (AI) that aligns with human values is an unsolved problem. Here we developed a human-in-the-loop research pipeline called Democratic AI, in which reinforcement learning is used to design a social mechanism that humans prefer by majority. A large group of humans played an online investment game that involved deciding whether to keep a monetary endowment or to share it with others for collective benefit. Shared revenue was returned to players under two different redistribution mechanisms, one designed by the AI and the other by humans. The AI discovered a mechanism that redressed initial wealth imbalance, sanctioned free riders and successfully won the majority vote. By optimizing for human preferences, Democratic AI offers a proof of concept for value-aligned policy innovation.
Platforms can bring challenging and divisive policy issues to a new kind of democratic process, enabling a ‘people’s mandate’ for their policies and helping mitigate corporate and partisan power.
The Oxford handbook of deliberative democracy
André Bächtiger et al. • 2018 • Oxford University Press
Deliberative democracy has been one of the main games in contemporary political theory for two decades, growing enormously in size and importance in political science and many other disciplines. This handbook takes stock of deliberative democracy as a research field, in philosophy, in various research programmes in the social sciences and law, and in political practice around the globe. It provides a concise history of deliberative ideals in political thought and discusses their philosophical origins. The book locates deliberation in political systems with different spaces, publics, and venues, including parliaments, courts, governance networks, protests, mini-publics, old and new media, and everyday talk. It engages with practical applications, mapping deliberation as a reform movement and as a device for conflict resolution, documenting the practice and study of deliberative democracy around the world and in global governance.
As artificial intelligence increasingly permeates our decision-making processes, a crucial question emerges: can large language models (LLMs) truly engage in the nuanced, collaborative process of deliberation that underpins democracy? We present the LLM-Deliberation Quality Index, a novel framework for evaluating the deliberative capabilities of LLMs. Our approach combines aspects of the Deliberation Quality Index from the political science literature with LLM-specific measures to assess both the quality of deliberation and the believability of AI agents in simulated policy discussions. Additionally, we introduce a controlled simulation environment featuring complex public policy scenarios and conduct experiments using various LLMs as deliberative agents. Our findings reveal both promising capabilities and notable limitations in current LLMs’ deliberative abilities. While models like GPT-4o demonstrate high performance in providing justified reasoning (9.41/10), they struggle with more social aspects of deliberation such as storytelling (2.43/10) and active questioning (3.41/10). This contrasts sharply with typical human performance in deliberation: people tend to perform well in storytelling but struggle with justified reasoning. We also observe a strong correlation between an LLM’s ability to respect others’ arguments and its propensity for opinion change, indicating a potential limitation in LLMs’ capacity to acknowledge valid counterarguments without altering their core stance, and raising important questions about LLMs’ current capability for nuanced deliberation. Overall, our work offers a comprehensive framework for evaluating and probing the deliberative abilities of LLM agents across various policy domains, showing not only the current state of LLM deliberation capabilities but also providing a foundation for developing more deliberative AI.
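The per-dimension scores reported in the abstract can be combined into a single index; the following is an illustrative sketch, not the paper's actual formula. The dimension names and scores come from the abstract; the equal weights and the weighted-mean aggregation are assumptions made here for illustration.

```python
# Scores reported in the abstract for GPT-4o, out of 10.
scores = {
    "justified_reasoning": 9.41,
    "storytelling": 2.43,
    "active_questioning": 3.41,
}
# Assumed equal weights; a real index might weight dimensions differently.
weights = {d: 1.0 for d in scores}

def deliberation_index(scores, weights):
    # Weighted mean of per-dimension deliberation scores.
    total_w = sum(weights[d] for d in scores)
    return sum(scores[d] * weights[d] for d in scores) / total_w

index = deliberation_index(scores, weights)
```

Under equal weights, the sharp imbalance between reasoning and social dimensions drags the aggregate well below the justified-reasoning score alone, which is the abstract's central observation.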
This blog provides a snapshot of the work we've done since last summer to test our models for elections-related risks.
A new initiative to support countries around the world that want to build on democratic AI rails.