Technology has always shaped the way citizens interact with information. But a new problem will soon emerge in the form of personal AI agents, which could change not only the way people receive information but also the way they act on it. These systems will conduct research, draft communications, highlight causes, and advocate on behalf of the user. They will inform decisions such as how to vote on the ballot, which organizations are worth supporting, or how to respond to government notices. They will, in a meaningful sense, begin to mediate relations between individuals and the institutions that govern them.
We’ve already seen on social media what happens when algorithms optimize for engagement over understanding. Platforms do not need a clear political agenda to fuel polarization and radicalization. An agent that knows your priorities and your concerns – and that is designed to keep you engaged – carries the same risks. And those risks may be even harder to detect here, because an agent presents itself as your advocate. It speaks for you, acts on your behalf, and can earn trust through that intimacy.
Now zoom out to the collective level. AI agents and humans may soon participate on the same platforms, where it may be impossible to tell them apart. Even if each individual AI agent is well designed and aligned with its user’s interests, the interactions of millions of agents can produce outcomes that no one person wanted or chose. For example, research shows that agents exhibiting no individual bias can still produce large-scale collective bias. And setting aside what agents do with each other, consider what they do for their users. A public sphere in which everyone has a personalized agent tailored to their existing ideas is, on the whole, not a public sphere at all. It is a collection of private worlds, each internally coherent but collectively inaccessible to the kind of shared deliberation that democracy requires.
Taken together, these three changes – how we know, how we act, and how we engage in collective governance – amount to a fundamental change in the structure of citizenship. In the near future, people will form their political opinions through AI filters, exercise their civic agency through AI agents, and participate in institutions and public discussions that are themselves shaped by the interactions of millions of such agents.
Today’s democracy is not ready for this. Our institutions were designed for a world in which power was exercised more visibly, information spread slowly enough to be contested, and reality felt more shared, even when imperfect. All of this was fraying long before the advent of generative AI. And yet this need not be a story of decline. To avoid that outcome, we need to design something better.
At the informational level, AI companies should accelerate existing efforts to ensure that model outputs are truthful. They should also build on promising early findings that AI models can help reduce polarization. A recent field evaluation of AI-generated fact-checking on X found that people across the political spectrum rated AI-written notes as more helpful than human-written notes. The paper has not yet been peer-reviewed, but it points to a potentially transformative possibility: AI-assisted fact-checking may be able to achieve the kind of cross-partisan credibility that eludes most human efforts. Better understanding of, and transparency about, how models make these claims and prioritize sources could help build greater public trust.