Technology · 2026-03-09

Over 80% of government AI projects fail. A new blueprint explains why and what to do differently.

A global policy brief by IVADO and the University of Ottawa identifies four priorities for AI implementation in public administration, starting with solving real problems before deploying technology.

Seventy percent of countries now report using artificial intelligence to improve internal government processes. A third use it to support policy design. Some are experimenting with AI as a substitute for core government functions. And the failure rate for these projects? Over 80%.

That statistic frames a new policy brief from an international group of experts led by Prof. Catherine Régis of IVADO and the Université de Montréal, and Prof. Florian Martin-Bariteau of the University of Ottawa. The brief, developed during a week-long policy retreat in December 2025 with AI experts from North America, South America, Africa, Europe, and Asia, argues that the success or failure of government AI has less to do with technology and more to do with institutional readiness.

Four recommended actions

The policy brief organizes its guidance into four priorities, each addressing a different dimension of the implementation challenge.

First: solve the problem, then deploy the tool. The authors argue that governments should redesign public services around real, identified problems before introducing AI. This means involving public servants as co-designers, building on proven successes, and scaling what works rather than adopting AI for its own sake.

Second: build institutional capacity. AI cannot be deployed effectively by an institution that lacks the skills to evaluate, manage, and oversee it. The brief recommends investment in training programs and cross-functional teams that bring together technical expertise with domain knowledge and ethical judgment.

Third: rebalance power with the private sector. Governments frequently depend on private vendors for AI tools, creating asymmetries in knowledge, access, and leverage. The authors recommend collective procurement strategies and collaborative development of shared tools that meet public-sector requirements without locking governments into vendor dependency.

Fourth: build a public trust stack. Transparency, accountability, and oversight form the foundation. Without them, AI in government risks amplifying existing dysfunction and eroding public trust in both the technology and the institutions that deploy it. The authors also emphasize resilience planning: governments need contingency strategies for when AI systems fail or produce harmful outcomes.

Canada as a case study

The brief arrives at a relevant moment for Canadian policy. The Carney government recently used an AI platform to translate and summarize the 11,000 submissions collected during a public consultation on updating Canada's AI strategy, and has proposed ambitious deployment of AI across the federal public service.

Martin-Bariteau offers a pointed assessment: without planning, transparency, accountability, and oversight, AI in the public sector will only amplify current dysfunctions and feed distrust from both public servants and the public.

Why most projects fail

The 80% failure rate for AI projects, drawn from broader industry statistics, reflects a consistent pattern. Organizations deploy sophisticated technology without first clarifying the problem it is supposed to solve, without training the people who will use it, without establishing governance structures to manage its outputs, and without planning for failure modes.

In the public sector, these risks are compounded by the nature of government services. Errors in automated benefit determinations, biased hiring algorithms, or opaque decision-making in immigration or criminal justice carry consequences that private-sector failures typically do not. The stakes of getting it wrong are higher, which means the bar for getting it right should be higher too.

Limitations of the brief

The policy brief is a set of recommendations, not an empirical study. Its conclusions are drawn from the collective expertise of the retreat participants rather than from a controlled analysis of government AI implementations. The specific failure rate cited (80%) comes from broader AI industry data and may not perfectly reflect public-sector outcomes, which are less well documented.

The recommendations are also necessarily general. Different governments face different constraints, from budget limitations to workforce composition to legal frameworks. Translating the brief's four priorities into actionable plans will require adaptation to each jurisdiction's specific circumstances.

Still, the core argument holds up against the available evidence: the technology is not the hard part. Institutions, incentives, and human capacity are. Until governments address those foundations, AI deployments will continue to fail at rates that would be unacceptable in any other domain of public investment.

Source: "Governing with AI: Four Actions to Build a Transformative and Resilient Public Administration in the Age of AI." Global Policy Brief by IVADO and the AI + Society Initiative at the University of Ottawa. Lead authors: Prof. Catherine Régis (IVADO, Université de Montréal) and Prof. Florian Martin-Bariteau (University of Ottawa). Supported by CEIMIA, the Canada-CIFAR Chair in AI and Human Rights at Mila, and the University of Ottawa Research Chair in Technology and Society.