Can LLMs improve collective decision-making?
What research tells us
Abstract
When a group needs to make a decision (in a team, workshop, or organization), we often expect the collective to perform better than its individual members. However, the arrival of large language models (LLMs) in these contexts raises a simple question: does AI actually improve our decisions, or can it sometimes degrade them? Building on a simple, reproducible collective intelligence experiment, this talk shows how naive use of AI can produce decisions that seem convincing but are less reliable in practice. Drawing on recent research on collective intelligence and AI-assisted deliberation, we analyze cases where AI weakens or strengthens collective decision-making, and why these effects depend primarily on how it is integrated. The talk concludes with a framework for thinking about AI as a facilitation tool rather than an authority, helping teams design more robust and transparent collective systems.
Format
Presentation followed by Q&A · Tech talk (45 min)
Target Audience
Data teams, decision-makers, and tech enthusiasts.
Prerequisites
No specific technical prerequisites