Ipsos Global Survey: The World is Worried About AI
While human beings are embracing the digital age, they are increasingly uneasy about how AI will shape their lives. (Shutterstock)
By Fen Osler Hampson, Paul Samson and Sean Simpson
October 23, 2025
In 2013, Edward Snowden, the intelligence contractor turned whistleblower who exposed the U.S. government’s secret data collection, detonated the digital equivalent of an atomic bomb.
Snowden’s massive leak of top-secret government files suddenly made everyone aware that their most intimate email messages, online searches and even their phone calls were being quietly collected and stored by the National Security Agency, through its top-secret program, PRISM.
The shock was profound: people scrambled for encrypted apps, changed their behaviour, began to censor themselves online—some even went off-line. It was a turning point in humanity’s relationship with the internet. After years of seeing the digitization of life as a net benefit, people began to understand its perils.
Twelve years later, history is repeating itself. This time, though, the worry is artificial intelligence (AI). Whereas Snowden’s revelations undermined trust in the internet, today’s AI revolution is raising fears about the uses and abuses of this new technology itself, beyond its general power to disrupt.
Our just-released Carleton-CIGI-Ipsos Public Attitudes on Digital Governance survey shows that while citizens are embracing the digital age, they are increasingly uneasy about how new AI technologies will shape their lives.
Trust in the internet, once compromised by surveillance scandals and data breaches, is now quite strong. Two-thirds of people in the 16 countries covered by the survey now say they trust the internet — a four-point increase since 2021. Majorities across all countries see it as generally safe and functionally trustworthy, but there are still some important reservations. More than eight in 10, or 84% — an extraordinary number by any standard — are deeply concerned about their online privacy, an increase of 11 points since 2021.
In India, Kenya and South Africa, nearly all those surveyed were worried about how their data is handled. Even in advanced economies that have stronger regulatory regimes for online privacy, a majority admit they no longer feel fully in control.
Strikingly, public attitudes toward AI now mirror those early post-Snowden anxieties. Although half of those surveyed say they trust AI overall, that headline number masks deep regional splits. In the Middle East and Africa, three-quarters trust AI, viewing it as a tool for economic growth and empowerment. In North America and Western Europe, less than one-third share that sentiment.
Why such a difference? In emerging economies, AI is seen as a tool to improve access to health care, public services and financial services. But in advanced industrial societies, AI is associated with job losses (including white-collar jobs), algorithmic bias and disinformation, which fuels higher levels of anxiety.
Although confidence in “narrow” AI—task-specific systems like translation, voice assistants, or spam filters—remains relatively high at 51%, it drops when people hear terms like “artificial general intelligence” or “super intelligence,” which evoke dystopian visions of machines beyond human control.
Almost four in 10 worldwide believe that AI will compromise their personal privacy. In North America, that number climbs to 51%. Europeans (46%) similarly fear that AI’s appetite for their data will make surveillance easier. This erosion of perceived control could prove as corrosive as Snowden’s leaks once were. When people feel they cannot opt out or understand how technology influences their choices, trust is destroyed.
Across every region surveyed, one thing stands out: people want stronger rules. A clear majority agrees that governments should regulate AI development and use. Support is strongest in Africa and the Middle East, where citizens also express the most positive views about AI’s benefits. Why the paradox? People embrace technological innovation not because they blindly trust it, but because they believe that good governance ensures accountability.
As in earlier periods of industrial transformation, progress depends on protecting the public interest. Just as the introduction of traffic laws, including speed limits, made mass adoption of the automobile possible, so too must governance frameworks—ethical audits, algorithmic oversight and international cooperation—make AI safe to use.
Trust must be earned through a reliable, transparent and safe user experience, but it must also be guaranteed by appropriate regulatory guardrails.
Fear is a natural response to technological innovation. The challenge is not to halt progress. It is to steer it. The internet survived its Snowden moment because societies recognized that privacy and openness could coexist through improved forms of accountability and greater consumer awareness and control. The same holds true for AI.
The future will not be decided by how clever our machines become, but by how trustworthy we make them. In the end, it is the ethical and responsible choices we make collectively that determine whether we govern technology, or it governs us.
Policy Contributing Writer Fen Osler Hampson is Chancellor’s Professor at Carleton University. Paul Samson is the President of the Centre for International Governance Innovation. Sean Simpson is the Senior Vice President of Ipsos Public Affairs. They are co-investigators on the Carleton-CIGI-Ipsos survey, co-funded by the Social Sciences & Humanities Research Council of Canada and CIGI.
