Where Policy Meets Technology: Modernizing NATO’s Defence Systems

NATO investment in research from Concordia University "reflects its broader recognition that deterrence and defence now extend into algorithmic domains." Photo: NATO

By Khashayar Khorasani and Mohammadreza Nematollahi

June 12, 2025

The convergence of artificial intelligence/machine learning (AI/ML), autonomous technologies, cyber-physical systems (CPS), and the Internet of Things (IoT) presents both unprecedented opportunities and significant risks to society, national security, and the resilience of critical infrastructure. As a question of national security, this convergence has for years preoccupied governments in general, defence ministries in particular, the Western security establishment, and, above all, NATO.

It is within this context that NATO—and, by extension, Canada’s Department of National Defence (DND)—sought out research teams capable of addressing not only the technological capabilities of AI and CPS but also the systemic governance, ethical oversight, and operational resilience required to deploy them safely and effectively. Our team, through sustained collaboration with national and international partners, was selected for its unique ability to bridge the gap between technical innovation and strategic governance. Our research has consistently advanced both the theoretical foundations and practical applications necessary to secure intelligent systems that operate across military, civil, and dual-use domains.

NATO’s interest in our work reflects its broader recognition that deterrence and defence now extend into algorithmic domains, where decisions are made autonomously and consequences unfold in real time. As allied militaries increasingly rely on autonomous systems—whether in unmanned platforms, decision-support tools, or critical infrastructure control—NATO must ensure that these systems are not only operationally effective but also governable, auditable, and resilient to adversarial exploitation. Our policy frameworks, system-theoretic security models, and human-in-the-loop design strategies speak directly to these imperatives.

Since 2019, our team at Concordia University in Montreal has led and contributed to two major NATO/DND-funded research streams in the field of cybernetic governance and cybersecurity. The first centres on high-level governance, public policy, and strategic oversight frameworks for artificial intelligence, machine learning, and emerging digital technologies broadly. The second focuses on the security of cyber-physical systems and the Internet of Things (CPS/IoT), particularly in the context of large-scale autonomous and cooperative systems designed to operate collaboratively in complex environments. These systems can include autonomous vehicles, drones, robots, and other intelligent agents that communicate and coordinate their actions to enhance efficiency.

From both a strategic and practical standpoint, these two streams of research help define the state of the art in modern warfare and illuminate how emerging technologies are reshaping the future of conflict. At the same time, they provide governments with the scientific grounding needed to translate that knowledge into actionable policy, both for defence-related decision-making and for broader public policy aimed at integrating these technologies responsibly to address complex societal challenges.

While distinct in scope, both bodies of work are grounded in cybernetic principles: systemic feedback, adaptability, resilience, and transparency. Together, they offer a coherent strategy for governing and securing the future of intelligent systems.

In the first stream, our policy-focused research, developed primarily through the Security-Policy Nexus of Emerging Technology (SPNET) initiative funded by DND under the MINDS program, addressed the regulatory, legal, and ethical challenges posed by emerging technologies in a rapidly shifting geopolitical landscape.

The core objective was to align technological development with democratic governance, human dignity, and long-term social welfare. SPNET's research emphasized that institutions, like physical systems, must be responsive to feedback, capable of learning, and designed to adapt. This cybernetic view of governance informed all aspects of our policy design, from accountability and transparency to oversight and cross-sectoral collaboration.

One of our key findings from this research is that policy itself must be designed as a dynamic process rather than a static rulebook. Emerging technologies are evolving faster than existing regulatory structures can adapt, particularly in areas such as AI/ML-driven surveillance, algorithmic decision-making, and the deployment of autonomous systems. In response, we advocated for adaptive governance frameworks built around key pillars: the explainability of AI/ML systems, auditable decision-making pipelines, legally embedded privacy protections, fairness in algorithmic outcomes, and real-time responsiveness to technological and societal feedback. These are not just ethical aspirations; they are system-critical requirements for trust, legitimacy, and operational safety.

We have produced more than 100 policy briefs, briefing notes, and targeted technical advisories addressing critical themes, including the dual-use dilemma in AI/ML development, the implications of real-time facial recognition in public safety applications, the risks of black-box decision tools in national security contexts, and the ethics of AI/ML deployment in pandemic-era surveillance and logistics systems. We also engaged with global developments — offering perspectives on NATO coordination, AI/ML standardization in international law, and emerging norms for AI/ML-enabled autonomous weapons.

A crucial component of SPNET was our focus on cultivating human expertise. We developed training programs for highly qualified personnel to operate at the intersection of AI/ML, defence technology, law, and policy. These capacity-building initiatives aimed to bridge the knowledge gaps between technical system designers and regulatory institutions. Without this human infrastructure, adaptive governance cannot function. Expertise must be distributed across technical, ethical, and strategic domains to anticipate threats, develop countermeasures, and maintain democratic oversight.

Perhaps most urgently, SPNET addressed the growing international arms race in AI/ML and autonomy. Without global governance structures, states are incentivized to accelerate the weaponization of AI/ML, undermining strategic stability. We have advocated for binding international agreements on the use of AI/ML in defence, transparency protocols among allies, shared threat assessment tools, and cooperative research frameworks. These strategies aim to reduce the likelihood of destabilizing escalation while preserving innovation for peaceful applications. Governance, from our standpoint, is not about control for its own sake but about embedding systemic safeguards into the future of technological progress.

The SPNET project developed a comprehensive framework for governing AI/ML and autonomous systems by integrating ethical, legal, technical, and geopolitical concerns. It sought to empower defence and civilian stakeholders with actionable tools to regulate emerging technologies without stifling innovation. The framework incorporated key cybernetic values — feedback, adaptability, and modularity — into policy design.

SPNET’s focus was fourfold: (1) building an interdisciplinary network of academic, governmental, and industrial actors; (2) generating policy insights for DND on AI/ML, cybersecurity, and emerging and disruptive technologies (EDT); (3) developing training pipelines for highly qualified personnel (HQP); and (4) advancing international collaboration with institutions such as the US FAA’s ASSURE consortium. This structure facilitated systemic analysis across AI/ML ethics, cybersecurity, governance, and resilience, informing our briefs across themes including space, cyberspace, AI/ML dual-use challenges, and pandemic resilience.

Recognizing that cybernetic problems do not respect borders, we emphasized the importance of international cooperation and collaboration. Our research highlighted the danger of AI/ML and cybersecurity becoming instruments of geopolitical competition, potentially igniting a new arms race. We warned that the unregulated militarization of AI/ML and autonomous systems could mirror the “super-wicked” nature of climate change, characterized by high stakes, policy inertia, and global interdependence.

We have called for multilateral agreements, NATO-aligned threat-sharing protocols, and interoperability standards to govern the use of AI/ML and cyber tools in defence. A critical insight is that transparency and coordination must be embedded not just in systems but in the institutions governing those systems. This aligns with the cybernetic principle that sustainable regulation requires continuous and reciprocal information exchange among stakeholders.

Parallel to this normative and policy-centered work, our second primary research stream focused on the technical foundations of CPS/IoT cybersecurity. CPS/IoT represents a new class of systems where computation, communication, and physical processes are tightly integrated. In such systems, cyberattacks are not limited to data loss or service denial — they can induce physical harm, sabotage infrastructure, or compromise autonomous behavior with real-world consequences. Our work addressed these vulnerabilities through system-theoretic approaches to attack detection, system resilience, and recovery.

Over a series of fifteen research projects involving multiple universities, we developed and tested new methods for detecting, isolating, and mitigating cyber threats in CPS/IoT across a wide range of models: linear, nonlinear, switched, event-based, and large-scale multi-agent systems. Applications included power grids, unmanned aerial vehicles (UAVs), naval platforms, and cooperative drone swarms. These environments are susceptible to stealthy or covert attacks that exploit system structure to evade traditional monitoring.
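
To make the idea concrete, the sketch below shows the simplest form of a residual-based detector of the kind this line of work builds on: a discrete-time linear plant, a Luenberger observer, and an alarm whenever the gap between measured and predicted outputs exceeds a threshold. It is a minimal illustration only; the matrices, observer gain, and threshold are made up for this example and are not parameters from any of the systems studied.

```python
import numpy as np

# Minimal residual-based attack detector for a discrete-time LTI plant:
#   x[k+1] = A x[k] + B u[k],   y[k] = C x[k] + a[k]   (a[k] = sensor attack)
# A Luenberger observer tracks the state; the detector raises an alarm when
# |y - C x_hat| exceeds a threshold. All numbers here are illustrative.

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.3]])          # observer gain (chosen so A - L C is stable)
THRESHOLD = 0.2                # detection threshold (tuning parameter)

def simulate(steps=60, attack_start=30, attack_bias=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros((2, 1))       # true plant state
    x_hat = np.zeros((2, 1))   # observer estimate
    alarms = []
    for k in range(steps):
        u = np.array([[0.1]])                       # constant nominal input
        noise = 0.01 * rng.standard_normal((1, 1))
        attack = attack_bias if k >= attack_start else 0.0
        y = C @ x + noise + attack                  # measurement, possibly attacked
        residual = y - C @ x_hat                    # innovation / residual signal
        alarms.append(abs(residual.item()) > THRESHOLD)
        x_hat = A @ x_hat + B @ u + L @ residual    # observer update
        x = A @ x + B @ u                           # plant update
    return alarms

if __name__ == "__main__":
    alarms = simulate()
    first = next((k for k, a in enumerate(alarms) if a), None)
    print("first alarm at step:", first)   # expect an alarm at the attack onset (step 30)
```

Stealthy and covert attacks are precisely those crafted so that this residual stays below the threshold, which is why the projects moved beyond single observers toward structured, system-wide detection and isolation schemes.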

Our work focused on building more intelligent, more secure systems for critical technologies, such as drones, ships, and industrial equipment—systems that increasingly rely on connected sensors and automation. We developed innovative tools that combine traditional engineering models with cutting-edge AI, including machine learning techniques that help detect unusual behavior even when data is incomplete or delayed.
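
As a simplified illustration of that hybrid approach, and not a description of the actual tools we built, the sketch below couples a toy first-order process model with scikit-learn's IsolationForest: the model supplies expected values, gaps in the data stream are filled with the model's prediction, and the learner is trained on residuals from normal operation so that unusual deviations stand out. Every signal, parameter, and scenario here is synthetic.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def model_predict(prev):
    """Toy physics-based model of the monitored quantity."""
    return 0.95 * prev

def residual_features(measurements, initial=1.0):
    """Turn a measurement stream into [residual, |residual|] features."""
    feats, estimate = [], initial
    for y in measurements:
        estimate = model_predict(estimate)
        y_eff = estimate if y is None else y     # gap in data: trust the model
        r = y_eff - estimate
        feats.append([r, abs(r)])
        estimate = y_eff                         # roll the estimate forward
    return np.array(feats)

def generate(steps, rng, bias_from=None, bias=0.0):
    """Simulate the true process (consistent with the model) plus noise."""
    values, v = [], 1.0
    for k in range(steps):
        v = 0.95 * v + 0.01 * rng.standard_normal()
        attacked = bias if (bias_from is not None and k >= bias_from) else 0.0
        values.append(v + attacked)
    return values

rng = np.random.default_rng(1)

normal = generate(200, rng)                 # healthy data, with two dropouts
normal[50] = None
normal[120] = None

test = generate(100, rng, bias_from=50, bias=0.3)    # sensor bias appears at k=50

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(residual_features(normal))

labels = detector.predict(residual_features(test))   # -1 marks anomalies
print("anomalous steps:", [k for k, flag in enumerate(labels) if flag == -1])
```

The design point this toy example captures is that the learned detector sees model residuals, not raw data, so missing or delayed samples degrade gracefully instead of triggering false alarms.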

Another core area of the project focused on how cyberattacks might unfold and how systems can counter them. Using game theory, a method for modeling strategic behavior between attackers and defenders, we explored how threats evolve and how systems can adapt in real time to continue functioning. We tested our ideas on realistic digital replicas of complex systems, simulating everything from coordinated drone missions to control systems aboard naval vessels. The result is a new generation of resilient technologies that can detect attacks, adapt their responses, and recover autonomously, pushing us closer to intelligent systems that remain safe and secure even under pressure from intelligent, adaptive disruptions.
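
A purely illustrative example of this game-theoretic framing, with made-up payoffs rather than the models used in these projects, is a small zero-sum game in which a defender chooses which subsystem to harden and an attacker chooses where to strike; fictitious play, where each side repeatedly best-responds to the other's observed behavior, then approximates the mixed strategies both settle into.

```python
import numpy as np

# Toy attacker-defender game solved approximately by fictitious play.
# Rows are defender actions (which subsystem to harden), columns are attacker
# actions (which subsystem to strike); entries are the defender's loss.
# The payoff values are illustrative only.
LOSS = np.array([
    [0.0, 0.8, 0.6],   # defender hardens the sensor bus
    [0.7, 0.0, 0.5],   # defender hardens the comms link
    [0.6, 0.9, 0.0],   # defender hardens actuation
])

def fictitious_play(loss, rounds=5000):
    n_def, n_atk = loss.shape
    def_counts = np.zeros(n_def)
    atk_counts = np.zeros(n_atk)
    def_counts[0] = 1.0                      # arbitrary opening moves
    atk_counts[0] = 1.0
    for _ in range(rounds):
        atk_mix = atk_counts / atk_counts.sum()
        def_mix = def_counts / def_counts.sum()
        # Each side best-responds to the opponent's empirical strategy.
        def_counts[np.argmin(loss @ atk_mix)] += 1    # defender minimizes expected loss
        atk_counts[np.argmax(def_mix @ loss)] += 1    # attacker maximizes expected loss
    return def_counts / def_counts.sum(), atk_counts / atk_counts.sum()

defender_mix, attacker_mix = fictitious_play(LOSS)
print("defender mixed strategy:", np.round(defender_mix, 2))
print("attacker mixed strategy:", np.round(attacker_mix, 2))
print("expected loss:", round(float(defender_mix @ LOSS @ attacker_mix), 3))
```

The mixed strategies that emerge capture the intuition behind the approach: a defender who spreads hardening effort unpredictably denies the attacker a single weak point to exploit.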

While this work is deeply technical, its governance implications are substantial. As CPS/IoT become foundational to national infrastructure and military operations, their security becomes inseparable from public policy. The technical assurance we provide through formal methods, simulations, and field testing is essential for informed regulation and risk-based decision-making. Without verifiable system resilience, governance lacks credibility.

The central lesson of our combined efforts is that resilient, governable technology requires both high-level normative vision and robust technical underpinnings. Policy frameworks must be built with an understanding of how systems behave in the real world, especially under stress. Technical systems must be built with embedded ethics, oversight capabilities, and human-in-the-loop resilience. Cybernetics, understood as the science of feedback, control, and adaptive behavior, provides the conceptual and operational bridge between these domains.

Governing the future of AI/ML and CPS/IoT is no longer a question of whether but of how well. With accelerating complexity, the margin for error becomes increasingly narrow. Through our integrated body of work, we offer both the intellectual tools and practical frameworks to meet this challenge with foresight, rigor, and responsibility.

We recommend the following for future policy development:

  1. Adopt cybernetic governance models across ministries and agencies that deal with AI/ML and cybersecurity, prioritizing feedback-rich, learning-oriented regulatory frameworks.
  2. Institutionalize AI/ML accountability mechanisms, including independent audits, algorithmic impact assessments, and real-time error monitoring.
  3. Embed human rights standards into all AI/ML and autonomous systems used in public or defence contexts, particularly for surveillance and decision-making technologies.
  4. Establish multilateral cyber-defence alliances that share not only data but also frameworks for the ethical integration of AI/ML and EDT into strategic infrastructure.
  5. Train the next generation of policy professionals in interdisciplinary approaches that integrate cybernetics, engineering ethics, international law, and political systems theory.

Ultimately, cybersecurity and AI/ML governance must move beyond narrow compliance or siloed technical fixes. They require a holistic systems approach, informed by cybernetics, that is capable of navigating complexity, uncertainty, and systemic risk while keeping human dignity, rights, and global cooperation at its core.

Khashayar Khorasani is a professor and Concordia University Tier I Research Chair in the Department of Electrical and Computer Engineering and the Concordia Institute for Aerospace Design and Innovation (CIADI).

Mohammadreza Nematollahi is a Ph.D. candidate in Electrical and Computer Engineering at Concordia University, specializing in the security and resilience of cyber-physical systems and autonomous multi-agent platforms.