From Risk to Readiness: A Secure Path Forward for AI Adoption in Government

June 19, 2025
By Wendi Whitmore and Heather Black
As world leaders wrapped up their meetings in Kananaskis, Alberta, for the 2025 G7 Summit this week, there was a growing sense of urgency — and opportunity — surrounding the transformative power of artificial intelligence (AI). Prime Minister Mark Carney’s G7 priorities heading into the gathering reflected this, calling for coordinated action to accelerate digital adoption while protecting communities, building energy security, and securing future partnerships. The G7 Leaders’ Statement on AI for Prosperity reaffirmed this direction, underscoring a shared commitment to “advance secure, trustworthy, and human-centric AI” that drives economic growth and delivers tangible benefits for people and businesses alike.
Governments everywhere are asking the same question: How do we use AI to become more agile and more productive, and to improve citizen services, without creating new risks that threaten our national or economic security? The good news is that public sector leaders don’t have to choose between innovation and security. With the right tools, frameworks, and partnerships, Canada and other G7 countries can confidently move forward on their AI journey.
The push to adopt and deploy AI is real and growing fast. Palo Alto Networks’ recent State of Generative AI report noted that there was an 890 per cent global surge in GenAI traffic over the past year alone. Across sectors, including government, there’s momentum to use AI apps to improve service delivery, streamline operations and unlock economic growth. But understandably, many public sector leaders are pressing pause. They aren’t sure which applications are safe, how to manage data flows, or how to ensure internal AI use doesn’t outpace security readiness. These are valid concerns — but they are solvable ones.
To defend against sophisticated global cyber threats, we must equip public and private sector leaders with the frameworks, visibility, and confidence they need to adopt AI in a secure and responsible way. A ‘Secure AI by Design’ approach empowers organizations to:
- Discover where and how AI is being used across the enterprise,
- Assess risks across models, datasets, and applications, and
- Protect against vulnerabilities in real time.
When governments have this level of clarity and control, AI adoption can move from hesitation to acceleration. And it needs to, because our digital ecosystems are evolving rapidly, and threat actors are evolving with them.
Adversaries are using AI to launch faster, more frequent, and increasingly complex attacks, and the rise of agentic AI — self-directed, autonomous systems — adds new urgency to the cyber threat landscape. In fact, agentic AI systems can compress what was once a multi-day ransomware campaign into 25 minutes — reconnaissance, compromise, and data exfiltration included. These adversarial uses of AI are escalating rapidly, and they underscore a fundamental truth: the status quo is no longer good enough.
These risks are not reasons to retreat. Rather, they are reasons to lean into AI with purpose, guardrails, and innovation-led cybersecurity infrastructure that can keep pace.
This is already happening. Each day, Palo Alto Networks uses AI to triage billions of data points — cutting through noise and reducing response times from days to minutes. Across the board, we’re seeing fivefold increases in incident resolution and dramatic gains in security effectiveness. These tools are not only helping governments defend against threats, they’re helping them operate better, faster, and smarter.
We’ve also learned that AI doesn’t just improve security operations — it relieves the burden on cyber professionals. By reducing manual triage and automating responses, organizations can combat alert fatigue and burnout while building a more resilient, skilled, and confident workforce.
As adversaries harness AI to challenge our digital resilience, it is essential that we keep pace and move swiftly from deliberation to decisive action, including through the new initiatives agreed to at the G7 Summit this week. The question is no longer whether to embrace AI, but how to do so swiftly and securely. The answer lies in trusted partnerships between government and industry, such as the EU Artificial Intelligence Pact, of which we are a proud signatory. Commitments like this one drive the creation and adoption of sound AI legislation in pursuit of a secure digital future.
Canada has a unique opportunity, as host of the G7 and as a digital economy leader, to champion a global model for responsible and secure AI adoption. A model where innovation is fast, but security is foundational. Where digital sovereignty is preserved even as global collaboration accelerates, supported by worldwide standards and harmonized oversight. And where citizens reap the benefits of AI-enabled services without compromising security, privacy, or trust.
Wendi Whitmore is Chief Security Intelligence Officer, Palo Alto Networks.
Heather Black is Regional Vice-President, Palo Alto Networks.
