“AI, good servant, bad master”

© 2026 EPFL AI Center - Nicolas Machado

AI is making its way into every aspect of our lives, but how ready are we to adopt it, and under what conditions? A report published in French by the EPFL AI Center, in collaboration with the University of Geneva, presents both the perceptions of a sample of the French-speaking population of Switzerland and the recommendations of a citizens’ assembly.

Massive use, despite serious concerns about cyberattacks, deepfakes, and privacy: this is the paradox that emerged from a citizens’ assembly asked to reflect on attitudes towards artificial intelligence. Initiated by the EPFL AI Center, in collaboration with the Swiss Research Center on Democratic Innovations at the University of Geneva and the Demoscan association, this unprecedented process has resulted in the publication of its final report in French and German. It contains 20 concrete proposals to regulate and support the deployment of AI. In short: AI is a good servant but a bad master.

“AI is one of the most significant technological transformations of our time. Its rapid development affects work, health, education, privacy and democratic life,” says Marcel Salathé, co-director of the EPFL AI Center. “Given the scale of these issues, the report reminds us that it is essential for citizens to be able to express themselves and help shape the future of this technology, rather than simply suffering its effects.”

A two-step approach

This document is the result of two complementary initiatives. First, a survey supported by the Swiss Federal Statistical Office collected the views of 734 residents from the French-speaking regions of Switzerland. The questionnaire focused mainly on the uses and public perceptions of AI.

Among the respondents who expressed interest, 40 citizens were selected to form a diverse panel (canton, age, education level, political interest). The assembly then met over two weekends in November to discuss, debate and deliberate. The process was designed and conducted by the Demoscan association, which oversaw the methodology and facilitation, guided by a central principle of neutrality to foster an informed and balanced discussion.

“Democracy is not limited to the ballot box. A citizens’ assembly is a mechanism that transforms intuitive opinion into reflective judgment: participants have enough time, receive relevant information, work within a neutral framework for debate, and produce reasoned proposals,” says Nenad Stojanović, Professor at the University of Geneva and co-founder of Demoscan.

Widespread adoption but strong expectations

The survey shows that AI has already become mainstream: 87% of respondents have used at least one AI tool, and ChatGPT is by far the most common (70%). But this adoption comes with strong expectations around transparency and regulation. Nine in ten respondents believe systems should clearly indicate when users are interacting with a machine, while 70% want public authorities to strictly regulate AI development.

The most pressing fears relate to malicious use. More than 80% cite hacking and cyberattacks as a major risk, while 77% worry about deepfakes and misinformation, reflecting growing anxiety about the erosion of trust in digital content. Privacy comes next (65%), followed by the impact on jobs (59%). Overall, nearly 69% see AI as a serious threat to privacy and data security.

Who should govern AI?

When asked who should take the lead in governing AI, about a third of respondents point to the Swiss government (31%). Yet nearly as many say they don’t know (27%), highlighting uncertainty around institutional responsibility.

In terms of political priorities, data protection stands out clearly: 68% rank it as the top issue, ahead of ethical guidelines and transparency (41%), and the prevention of uncontrolled AI (40%).

The report is intended to inform academic, institutional, and political debates by grounding them in public expectations. In the preface, participants convey a central message: AI systems should not be allowed to make decisions autonomously in ways that weaken individual choice or create dependency.

Salathé hopes the process can now be scaled up. “The next step could be to extend this initiative across Switzerland, so that AI governance is informed by citizen recommendations in all language regions,” he says.

20 proposals, structured around five themes

At the end of the deliberations, the citizens’ assembly produced 20 proposals grouped into five broad areas:

  1. The role of the State: including the creation of a Federal Office for AI to secure long-term research funding.
  2. Access and education: strengthening public awareness of the risks of generative AI and encouraging social interaction to avoid an “all-AI” society.
  3. The world of work: preparing for economic and personal impacts of job loss or job transformation, including support for retraining.
  4. Traceability: introducing labeling to identify and promote human-made content, alongside stronger copyright protections.
  5. Responsible practices: establishing dedicated legislation and educational tools to improve data protection and reduce cyber risks.

References

The report, in both French and German, is available on the EPFL AI Center's website.

This project was supported by the EPFL AI Center and Stiftung Mercator Schweiz. 


Author: Mélissa Anchisi

Source: EPFL

This content is distributed under a Creative Commons CC BY-SA 4.0 license. You may freely reproduce the text, videos and images it contains, provided that you indicate the author’s name and place no restrictions on the subsequent use of the content. If you would like to reproduce an illustration that does not contain the CC BY-SA notice, you must obtain approval from the author.