Bangladesh’s AI Moment: Testing the Implementation Gap

By Anil Wasif
January 28, 2026
The People’s Republic of Bangladesh just drafted something unusual: an AI policy that explicitly takes governance seriously.
The National AI Policy 2026-2030, currently being finalized by the ICT Division (which is accepting feedback), includes a risk-based regulatory framework, explicit prohibitions on mass surveillance and social scoring, mandatory algorithmic impact assessments for high-risk systems, and a commitment to ratify the Council of Europe’s Framework Convention on AI.
Given the country’s historically weak data legislation, this represents a genuine attempt to build governance infrastructure before deployment outpaces oversight.
The policy arrives at a precarious moment, with Bangladesh’s interim government, led by Nobel laureate Muhammad Yunus, preparing to hand power to an elected successor. But the real question is not whether the policy survives political transition. It is whether any government will build the institutions required to implement it.
The draft’s risk-based classification system establishes four tiers, ranging from prohibited practices through high-risk applications to limited-risk and minimal-risk uses.
The prohibited category includes real-time biometric surveillance in public spaces, social scoring systems, and AI-enabled weapons, while high-risk classifications trigger mandatory algorithmic impact assessments, human oversight requirements, and transparency obligations.
These prohibitions represent a meaningful break from recent history, and a test case for post-authoritarian digital governance.
The previous government’s Digital Security Act was widely used to arrest journalists, silence opposition figures, and criminalize dissent, with human rights organizations documenting hundreds of cases where digital policy became a tool for political repression.
Now, a country that just escaped digital authoritarianism is trying to build guardrails against it. Whether those guardrails hold depends less on the policy’s language than on the institutions enforcing it.
This framework draws from the EU AI Act but adapts it for Bangladesh’s institutional context, designating the National Data Governance and Innovation Agency (NDGIA) as the coordinating body while distributing sectoral oversight across existing ministries.
The World Bank’s AI strategy handbook recommends exactly this approach for countries with limited regulatory capacity: leverage existing institutions rather than building new ones. But citing international frameworks is easier than implementing them. The EU AI Act assumes regulatory infrastructure Bangladesh does not have.
The rights framework aligns with the UNESCO Recommendation on the Ethics of AI, requiring explainability for automated decisions, establishing contestability mechanisms, mandating human review pathways, and integrating with the draft Personal Data Protection Ordinance 2025.
The commitment to ratify the Council of Europe’s AI Convention would potentially make Bangladesh the first South Asian signatory, a signal that the draft takes international accountability seriously.
The OECD AI Principles and the UN Secretary-General’s report Governing AI for Humanity both emphasize this kind of international alignment as essential for effective AI governance. The monitoring provisions also stand out: annual reporting, a mandatory mid-term review in 2028, and a sunset clause requiring renewal by 2030 create accountability cycles many national AI strategies lack.
The policy’s central weakness is its implementation architecture.
The NDGIA does not yet exist as an operational institution, and the policy references it throughout without providing an establishment timeline or capacity benchmarks. The proposed Independent Oversight Committee requires an Act of Parliament that has not been drafted.
This matters because Bangladesh has been here before. The National Strategy for Artificial Intelligence drafted in 2019-2020 included detailed roadmaps, almost none of which materialized.
Political disruptions explain part of the failure, but the absence of binding institutional commitments explains more.
In November 2025, UNESCO, UNDP, and the ICT Division released Bangladesh’s AI Readiness Assessment Report. The UNESCO framework, deployed in over 60 countries, evaluates legal, social, economic, educational, and technological dimensions. Bangladesh’s report identified 15 priority actions and documented gaps: fragmented data systems, GPU scarcity, outdated curricula, AI ethics instruction that is “nearly absent,” and severe gender disparities in the AI workforce.
The new draft policy acknowledges many of these challenges, but the relationship between the UNESCO Readiness Report’s recommendations and the policy’s provisions is unclear. The report emphasized Bangla-language AI as essential for inclusive adoption; the policy mentions it without establishing a pathway.
One structural issue transcends political transitions: Bangladesh’s AI future depends on language infrastructure that does not yet exist. Only Hishab currently produces Bengali large language models, a critical gap for a language spoken by over 170 million people. AI systems trained on English-language data embed assumptions that may not serve Bangladeshi users.
Public service delivery, education, agricultural extension, healthcare: the sectors where AI could most benefit Bangladesh require systems that work in Bangla, reflecting Bangladeshi contexts.
The ASEAN Guide on AI Governance and Ethics emphasizes linguistic adaptation as central to inclusive deployment. As recent research argues, the competitive advantage for countries like Bangladesh is context: “this revolution will be local.” The policy nods to this principle but offers no strategy for achieving it.
Political transition compounds the implementation challenge.
On February 12, Bangladesh holds its first election since the July Uprising that removed Sheikh Hasina, with voters also deciding on the July Charter’s constitutional reforms. Days later, India hosts the AI Impact Summit 2026 in New Delhi, positioning itself as the voice of developing economies on AI governance.
Whether Bangladesh’s next government continues this policy work, and with what institutional commitment, remains unclear. Political transitions in South Asia frequently reset technology governance agendas, and Bangladesh ranks 75th globally in the Oxford Insights Government AI Readiness Index.
Bangladesh’s draft AI policy represents genuine governance thinking, with risk-based frameworks, rights protections, and international alignment reflecting sophisticated engagement with the difficult questions AI poses for developing democracies.
But good policy documents do not automatically become good governance. The 2019-2020 strategy demonstrated that, and the gap between the UNESCO Readiness Report’s priorities and the draft policy’s vague commitments suggests the pattern may repeat.
Bangladesh has produced the framework. The harder work of building institutions, training regulators, funding enforcement, and developing Bangla-language AI infrastructure remains undone.
That work requires sustained political commitment across electoral cycles, which is precisely what Bangladesh’s recent history suggests is hardest to secure.
Policy Columnist Anil Wasif is a public servant in the Ontario government. He serves on the University of Toronto’s Governing Council and the Advisory Board of McGill’s Max Bell School. Internationally, he serves on the OECD’s Infrastructure Delivery Committee. He co-owns and manages the Canada-born global non-profit BacharLorai. The views expressed are his own.
