Canada's Broken Immigration System and Leveraging AI for Science
Today, I have another feast of reading recommendations after not including any in last week’s essays. I examine some recent pieces about Canada’s immigration system, with some lessons about policy coherence and how we think about innovation policy through a provincial lens. I also write about AI and science and some of the opportunities there. I hope you enjoy it!
Canada’s Broken Immigration System
First off, I take a look at two pieces on Canada’s immigration system and some of its failings. The first piece is a great long-read examination of the entire system by Doug Saunders in the Globe and Mail. It covers a hell of a lot of ground, but I think it is worth taking the time to read. Saunders covers how the 100+ immigration and asylum programs that Canada has, “while often very successful on their own, together have created perverse incentives that often channel the wrong people into the wrong pathways, and confound the policy desires of governments – whether those desires are greater population growth or lower immigration levels.” In the process, he connects these policy failings with their very real and often tragic human impacts.
Rare for a newspaper piece, it also dedicates a lot of space to how to fix the system, going beyond the usual one-off quote from an expert that lacks context. Interestingly, it includes the need to simplify the system with provinces at the centre to allow them to better assess their own labour-market needs, housing supplies and population-growth goals. This aligns with a lot of my thinking around innovation policy that I touched on in Friday’s essay and how, by giving insufficient attention to “the diversity of the local economies and local conditions that make up Canada, we have erected a major barrier to addressing our innovation paradox”.
The recommendations cover other ground, including making better use of the temporary residents currently in Canada to fill holes in our labour market. As Saunders argues, “Given that most of them are skilled and educated people who have already been screened and assessed, it’s far better to view them as an asset and turn them, and their families, into regular permanent immigrants.” This brings me nicely to the second immigration piece by Ako Ufodike in the Conversation. Ufodike looks at Canada’s productivity crisis and makes the case that the policies we’re using to tackle it “largely continue to follow the traditional approach which focuses on incentivizing businesses to increase output, rather than focusing on workers — the factor most relevant to productivity.”
For Ufodike, we need to be looking far more at how we are utilizing immigrants in the workforce as a way to address that productivity challenge. He takes on how some critics blame immigration for our productivity struggles, calling out how that “narrative risks fostering anti-immigrant sentiment” and arguing the reality is that “many highly qualified immigrants end up underemployed or unemployed through no fault of their own.”
Instead of blaming immigration and immigrants for productivity declines, we should recognize them as an essential part of the solution and take more policy action. Measures could include proper credential recognition, expanding workforce integration programs, funding reskilling, upskilling, and mentorship programs for immigrants and youth, and ensuring immigrants aren’t exploited or underpaid compared to their Canadian-born peers.
Unfortunately, instead of following Ufodike's advice, the federal government is doing the exact opposite, including major cuts to newcomer English language programs, with programs focused on building employment skills getting some of the deepest cuts. This seems a classic example of short-term thinking that will save a small amount of money now but at the cost of increasing the skill underutilization of immigrants and adding to our productivity challenges. Great work, IRCC.
If you enjoy reading Deep Dives, please consider sharing this piece with your network.
Leveraging AI for Science
Back in January, I reviewed the UK’s new AI Opportunities Action Plan. While the plan has many shortcomings, one positive aspect is its relative realism. It doesn’t set out a vision to win in every area of AI but identifies some subsectors where the UK has strengths or could derive particular advantages.
One major one is AI for science, and this new report from the Tony Blair Institute for Global Change takes that idea and looks at it in a lot more detail. The authors explore the opportunities and barriers of AI-enabled science and what embracing it would mean for the UK's research system.
It covers areas such as ensuring access to high-quality scientific data, facilitating access to AI talent, addressing the lack of incentives for academic researchers to build and share high-quality reusable tools, and building research environments designed to support interdisciplinary AI-driven research.
As I mentioned last week, while I’m skeptical about much of the AI boosterism and its promises to reform the public service and so much else, AI is already powering scientific breakthroughs. Given that our ever-increasing knowledge contributes to research productivity falling by half every 13 years, as a landmark 2020 research paper explored, there is a real need and opportunity for AI to help with research.
But, as ever, if we are going to really succeed with that, then we need to understand the system and the incentives and which policy levers need to be pulled, which this report helpfully looks at for the UK.
We also need to understand AI’s limitations though. A purpose-built AI tool, such as AlphaFold, the model that broke ground in predicting over 200 million protein structures, is different from applying a more generic LLM to scientific challenges, such as summarising relevant data or literature in a certain field.
This piece from Benedict Evans looking at OpenAI’s new Deep Research product is worth reading on that front. As Evans sets out, these tools are useful but still flawed:
If someone asks you to produce a 20 page report in a topic where you have deep domain expertise, but you don’t already have 20 pages sitting in a folder somewhere, then this would turn a couple of days’ work into a couple of hours, and you can fix all the mistakes. I always call AI ‘infinite interns’, and there are a lot of teachable moments in what I’ve just written for any intern, but there’s also Steve Jobs’ line that a computer is ‘a bicycle for the mind’ - it lets you go further and faster for much less effort, but it can’t go anywhere by itself.
Having infinite interns is helpful, but that alone will not address issues of research productivity. There is still a significant error rate, and we don’t know if it will go away.
Nonetheless, taking a broader view of AI beyond LLMs, there is a clear need to be proactive. We should consider how AI can be leveraged effectively for science and how to remove barriers while also not glossing over issues around environmental impact, IP ownership, and so on. Doing that is going to require some deep policy thinking and some leadership and coherence from policymakers and institutions.