The Need for Government Reform and The Quixotic Quest for AI Solutions
I am very much someone who believes that we need to give proper attention to issues of state capacity. Given the array of challenges we face, we need governments at all levels that can respond smartly, informed by data and evidence, and play a constructive, interventionist, and market-shaping role. Getting there from where we are now will require substantial reform.
I am also someone who is cautiously optimistic about AI’s potential. It is already playing a growing role in pushing the scientific frontier. Last year’s Nobel Prize for Chemistry, which went partly to Demis Hassabis and John Jumper for their work using AI to predict protein structures, shows that clearly. AI is also making consistent advances in areas such as medical diagnosis, as in recent work by academics at UCL who used AI to help counteract the underdiagnosis of women with a deadly heart condition. These are real advances that have the potential to bring significant benefits to people the world over.
While these two things are both true, I am also very skeptical about calls to deploy AI as part of public sector reform. There are some major red flags.
Kathryn May’s recent article for Policy Options on “a big Trump policy shift” for the Canadian public service has set off alarm bells for me. May sets out the broad need for reform. As former clerk of the Privy Council Michael Wernick puts it, “You can’t be resilient, agile, and effective in the 2020s with a public service built for the 2000s”. This is certainly true, but as May notes, we are approaching an election where “the only focus is on cutting the size of the public service. No one is talking about reforming it.”
Yet May’s article then turns from issues of serious reform of the public service to look south of the border and Elon Musk’s “sledgehammer approach” to the US government. As part of that, it is concerning to see the “AI First” doctrine of Musk looked at as a positive example and possible roadmap.
May notes approvingly that “the idea of a tech-driven shakeup — led by someone unafraid to break things and rethink government — might be the jolt Canada needs.” Former federal CIO and current ADM Alex Benay is quoted arguing in a personal capacity that “We should be striving for a zero-bureaucracy government in Canada by putting our national AI capabilities to the test in our public sectors first.” Ian Lee, an associate professor at Carleton, calls for a “super-charged Glassco Commission” to overhaul the public service: “People will be screaming bloody murder. But we’re in this crisis now, having to respond to Trump, the demands he’s making, as well as AI changing everything in government. Nothing can stop that train.”
I find this all incredibly concerning.
First, we should be clear that what is happening in the US is no boon to state capacity. While the US needs reform, what is happening there is something very different. As Don Moynihan, a leading scholar on state capacity, has argued, “Musk and DOGE are providing a real-time management case study. Unfortunately, all of the lessons are about what not to do. The quickest way to improve your management skills is to look at what DOGE is doing, and do the complete opposite.”
For Moynihan:
It is a fundamental error to believe that DOGE is a government efficiency project. Cutting 1 in 4 federal employees would cut federal government spending by 1%. Cost savings are incidental. DOGE is a political control project. Firing and terrorizing public employees is a means to weakening state regulation of private interests and strengthening a personalist presidency.
No one should be looking to the US and Musk’s actions right now with anything other than horror.
Secondly, we should also be clear that the mass adoption of AI is currently not compatible with Canada’s climate goals. AI’s huge environmental impact should not be swept under the rug. This is why I’ve argued that we should prioritize sustainable AI as the central pillar of our AI strategy.
But even putting both of those aside for a moment, there is little to suggest that AI in government is an unstoppable train or that it is at all an appropriate arena to be putting AI capabilities to the test at any kind of scale.
As the recent International AI Safety Report set out, there is a wide range of risks and concerns related to AI. For government uses, reliability issues are particularly important, especially in the context of limited AI literacy from decision-makers.
So, too, are issues around bias. As the Report summarizes, systems “frequently display biases with respect to race, gender, culture, age, disability, political opinion, or other aspects of human identity. This can lead to discriminatory outcomes including unequal resource allocation, reinforcement of stereotypes, and systematic neglect of underrepresented groups or viewpoints.”
It is worth noting, too, the risk of market concentration where if organizations rely on a small number of AI systems, “a bug or vulnerability in such a system could cause simultaneous failures and disruptions on a broad scale.” Given the dominance of US firms in this space and the challenging relationship with the current US administration and its Silicon Valley backers, there doesn’t need to be a bug to have national security implications for Canada.
The idea that we can reach a “zero bureaucracy government” by using AI is also both ludicrous in its fundamentals and worrying in its application. On the first front, let’s go all the way back to Max Weber’s definition of bureaucracy: “a form of general organization characterized by the preponderance of rules and procedures that are applied impersonally by specialized agents.” By using AI, we aren’t removing bureaucracy; we are just making those specialized agents AI agents rather than people. The necessary bureaucratic functions of government that enable a functioning modern society remain regardless of AI or human agents applying them.
Furthermore, there are plenty of reasons to believe that AI-driven bureaucracy could actually lead to more red tape and worse outcomes than what we have now. We only need to look at the example of algorithmic management in the workplace to see how that might play out. As one OECD study found, there are concerns with trustworthiness, unclear accountability, the inability of managers to easily follow the tools’ logic, and inadequate protection of workers’ physical and mental health. Another study examined algorithmic management’s impact “on prosocial motivation, which is an important driver of creativity, productivity, social interaction, and overall well-being in the workplace.” It found that employees “who are algorithmically managed turn out to be less inclined to help or support colleagues than employees managed by people.” Now imagine the impact of algorithmic decision-making at scale by governments.
The untrustworthiness of AI has also been underlined by recent research from the BBC. Its researchers gave several leading models access to the BBC news website and asked them questions about the news. Yet even with up-to-date and accurate information at the models’ disposal:
51% of all AI answers to questions about the news were judged to have significant issues of some form.
19% of AI answers which cited BBC content introduced factual errors – incorrect factual statements, numbers and dates.
13% of the quotes sourced from BBC articles were either altered from the original source or not present in the article cited.
Those are concerning numbers, especially if a vision of AI-driven government reform would have these models briefing decision-makers.
We are very far from facing an unstoppable AI train that will sweep away bureaucracy. Yes, AI has some potentially powerful applications. And yes, I’m sure certain AI tools will have beneficial impacts in government settings.
However, there are enough major sources of concern that we absolutely should not be approaching much-needed public sector reform through the lens of “AI First.”
AI is a tool, much like any other technology. Abraham Maslow once said “it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” So it is that many today look at government thinking everything can or should be solved by AI.
But this is backward. We need to start by understanding the problems before deciding on the tool to fix them, especially when the tool is as unreliable as AI is right now. The issues of state capacity are too important, given the challenges we face, to embark on a misguided and quixotic quest to make AI the solution to all our problems.