A Government Roadmap for Smart, Safe, Ethical AI

By Dr. Colleen Kaiser

July 2, 2025

The federal government wants to supercharge productivity by launching artificial intelligence (AI) “at scale.”

This includes modernizing the public service with AI tools. In principle, this is welcome. Predictive AI models could anticipate shifts in health care trends, enhance fiscal forecasting and help detect tax fraud, among other applications.

Natural language processing tools could enable larger consultations on government decisions. Yet these opportunities come with a warning: without a thoughtful rollout and AI-capable leadership, we risk spending public funds chasing shiny new tools at the expense of real progress.

AI systems are not “set-it-and-forget-it” tools. These are complex, dynamic systems that raise serious concerns about privacy, ethics, and accountability. They require teams of diverse experts — from algorithm auditors to ethics advisors — to run well.

Innovation in AI is also happening at breakneck speed. The same is true for how we manage and govern it, with new techniques for bias mitigation and privacy protection emerging constantly. As an expert in agile government decision-making, I know that AI adoption launched too quickly, and without enough in-house expertise, poses real risks: governments could fall behind the very systems they hope to control.

While the creation of a centralized AI hub is an important step, building capacity in government departments is still essential. Many departments are strengthening their capabilities, but the pace of AI development and level of oversight needed present a challenge for teams trying to evaluate tools, manage risks or decide where and how AI should be deployed.

This is not about turning departments into tech labs. It is about ensuring AI decisions are grounded in knowledge and operational realities.

We also cannot ignore AI’s enormous carbon footprint. A government-wide AI rollout without regard for emissions could undermine Canada’s climate commitments. The federal government has acknowledged this challenge in its new Sovereign AI Compute Strategy, which commits to building Canadian-controlled computing capacity powered by clean energy. This is a critical step, but it needs follow-through.

Environmental impact must be a design constraint, not an afterthought. This means favouring energy-efficient models, installing infrastructure in areas with abundant clean energy and being selective about when and where AI is used.

The credibility of AI modernization efforts depends on ensuring productivity does not come at the cost of climate goals or digital sovereignty.

Every AI deployment must also be governed by robust and transparent oversight. A growing set of policies, regulations and institutions, such as the Artificial Intelligence and Data Act, the Artificial Intelligence Safety Institute, the Artificial Intelligence Advisory Council, the Pan-Canadian AI Strategy and the Canadian Sovereign AI Compute Strategy, can ensure AI systems used across the economy are transparent, accountable and safe.

As AI tools are set up in government, the same level of oversight must be applied. This includes publicly explaining how systems work, the risks they pose and how they will be monitored.

In Canada, this upfront risk assessment takes the form of an algorithmic impact assessment, a requirement under the Treasury Board Directive on Automated Decision-Making.

However, enforcement under a policy instrument, rather than legislation, needs its own accountability mechanisms. It also needs independent bodies, such as the Office of the Privacy Commissioner, that are empowered to investigate non-compliance as they would under a legislative framework.

At a time when trust in public institutions is low, this level of transparency is not optional. Building on these first steps is essential for strengthening public confidence and improving systems.

These challenges show that without embedded AI leadership, departments risk defaulting to off-the-shelf solutions or relying heavily on consultants. This can be expensive and unsustainable for policy-facing AI tools that require regular updates and adaptation. Each ministry needs internal leadership capable of aligning AI initiatives with departmental goals and assessing where AI is appropriate, helpful and safe.

That is why every major government department should develop the capacity to manage AI adoption, for example, by appointing a chief AI officer. These officers would oversee AI development, implementation and governance and share knowledge to accelerate learning, all in coordination with the centralized AI hub.

This network model of AI leadership would ensure that subject-matter expertise informs technical decisions and enable government to decide more deliberately where AI use is appropriate.

Canada’s proud history in AI research reflects our creativity and academic rigour. However, research excellence alone does not guarantee safe or effective public-sector deployment of AI — warnings from some of our most brilliant AI pioneers attest to that truth.

The path forward is clear: targeted, deliberate modernization that embeds AI knowledge, balances innovation with ethical and democratic principles and treats environmental impacts as core design constraints.

This approach would enable the government to modernize selectively and strategically and improve services without sacrificing equity, accountability, or sustainability.

Anything less and we risk trading taxpayer dollars for a parade of costly experiments whose benefits may never materialize, or worse, whose hazards will become all too obvious.

Colleen Kaiser is Director, Policy and Research and a postdoctoral fellow at the Smart Prosperity Institute at the University of Ottawa. They hold a PhD in Environmental Studies from York University.