Canada’s AI Future in a Whirlwind of Change

By Anil Wasif

January 15, 2026

In the Trusteeship Council Chamber at the United Nations, a room originally designed to oversee the transition of colonies to independence, the world gathered last September to discuss a new kind of sovereignty.

The occasion was the launch of the Global Dialogue on AI Governance, a crowning initiative of the 80th General Assembly. Secretary-General António Guterres, a man who has lately acquired the weary cadence of a prophet ignored, heralded the moment as a triumph of agile, inclusive multilateralism—a move from high-minded principles to the gritty machinery of practice. It was, he suggested, a North Star for a fractured world.

But if one stepped outside the UN and into briefing rooms in Washington or Beijing, the North Star was obscured by storm clouds. Halfway through January, the diplomatic optimism of early autumn feels like an artifact from a different era.

We have entered the year of the governance paradox: just as the UN has finally erected the scaffolding for a global AI architecture, America, big tech's centre of gravity—the money, the compute power, and the enforcement capacity—is retreating aggressively behind national borders.

Nowhere is this dissonance felt more acutely than in Ottawa. For decades, Canadian foreign policy has relied on a comfortable syllogism: what is good for the international rules-based order is good for Canada, and what is good for the United States is usually manageable.

That logic has collapsed. Under Donald Trump, the United States has explicitly rejected centralized international authority over artificial intelligence, favoring instead a muscular, mercantilist AI sovereignty that views multilateral rulebooks as impediments to American AI dominance.

The White House’s recent executive orders, designed to pre-empt state-level regulations and consolidate a unified America First AI market, signal a retreat from the very collaborative safety regimes we advocated for at the G7 Summit in Kananaskis last summer.

Canada, unfortunately, is politically tethered to the UN's inclusive vision, backing the new Independent International Scientific Panel on AI as a neutral evidence engine while negotiating economic integration with a neighbour that is actively defunding the institution tasked with housing that Panel.

The UN, facing a severe liquidity crisis precipitated by American budget cuts, is being forced to do less with less precisely when the governance challenge is most expansive. Meanwhile, a quieter realignment is underway on the other side of the world, one that cares little for Western anxieties about model welfare or alignment.

While Ottawa and Europe debate safety guardrails, nations from Nigeria to Indonesia, along with China, Russia, Iran, Cuba, and Belarus, are rapidly adopting DeepSeek, an open-source model from China, according to Microsoft's 2025 Global AI Adoption report released this week.

By offering high-performance tools without the steep licensing fees or moralizing guardrails of Silicon Valley, Chinese firms are effectively subsidizing the digitization of the developing world. This has birthed a shadow stack—a parallel technological ecosystem that operates largely outside the purview of Western safety institutes.

For Canadian development professionals, this necessitates a humbling strategic pivot: we can no longer assume that the digital infrastructure of our partners will look anything like our own, or that it will be subject to the same norms.

Domestically, the abstraction of AI policy is colliding with the hard physics of the Canadian landscape. The narrative that AI is a weightless, cloud-based asset has dissolved. We are learning that the cloud is actually a heavy industrial sector with a voracious appetite for electricity and water.

In Quebec, a province that has long treated hydroelectricity as a limitless birthright, Hydro-Québec projects that data centers will demand an additional 4.1 terawatt-hours of power by 2032. In drought-prone Nanaimo, British Columbia, the water requirements for cooling these facilities have become a municipal flashpoint.

This physical reality creates a distinct policy incoherence. The federal government’s Budget 2025 push for sovereign compute—an attempt to ensure Canada isn’t merely a client state of U.S. tech giants—risks undermining the country’s climate commitments.

As the Canadian Union of Public Employees (CUPE) has noted, the rush to power these facilities is driving pressure to bring natural gas generation online, potentially doubling emissions in some regions. The digital revolution, it turns out, has a smokestack. More striking still, these concerns are now being cited as tacit pro-labour arguments.

Technologically, the ground is shifting beneath the regulators’ feet. The focus of 2024 was the chatbot; the reality of 2026 is the AI agent. These are systems capable of executing complex workflows—planning, reasoning, and acting without human intervention. AI experts refer to this as shadow autonomy, but just this week, Claude users have started calling it “Cowork”.

While Claude's agent is supposedly restricted to a user's desktop files, the U.S. Secretary of War this week embraced Elon Musk's controversial Grok AI to operate inside the Pentagon's networks, both classified and unclassified.

This emergence of shadow autonomy, in which high-capability agents operate within critical infrastructure without adequate oversight, has rendered retrospective compliance frameworks obsolete.

Policy experts at the U.S. Center for AI Policy are advocating for an Autonomy Passport, a regulatory instrument that would require high-capability agents to be registered and subject to a statutory recall, or kill switch. It is a concession to the fact that we are no longer governing AI tools, but actors.

Amid this technical and geopolitical fracturing, there is a crisis of public consent. This past week, Taylor Owen, member of Canada’s AI Council and host of the Globe and Mail’s Machines Like Us podcast, described the current moment as a mass social experiment conducted on the public without its permission.

The polling bears him out: over 85% of Canadians believe AI threatens their livelihoods, creating a widening delta between the government’s productivity-focused boosterism and the citizenry’s deep-seated anxiety.

As we move through 2026, the dream of a single, harmonized global AI governance regime appears to be fading, replaced by a complex regime of overlapping, sometimes contradictory frameworks.

For Canada, the path forward cannot be mere imitation of the EU's rigid rulebook or the U.S.'s deregulation. It will require what the UN AI Advisory Body calls agile governance, moving from static legislation to continuous, real-time assurance.

It will require mandating transparency on the energy and water usage of the machines we invite into our territory. And it will require a clear-eyed recognition that in the race to build the fastest model, the most valuable asset may be the institutional capacity to govern it.

Policy Columnist Anil Wasif is a public servant in the Ontario government. He serves on the University of Toronto’s Governing Council and the Advisory Board of McGill’s Max Bell School. Internationally, he serves on the OECD’s Infrastructure Delivery Committee. He co-owns and manages the Canada-born global non-profit BacharLorai. The views expressed are his own.