Energy Abundance, AI in Government, and AI and Workers

After a couple of longer essays, today's newsletter is a round-up of reading recommendations spanning energy policy and several AI-related pieces, including the politics of AI adoption in government and the impacts of AI on workers. Hopefully, you will find something here that is helpful and interesting.

Thanks for reading Orbit Policy's Deep Dives! Subscribe for free to receive new posts and support my work.

Energy Abundance

I've not said much about energy policy in this newsletter, but the scale of the shift underway there is worth paying attention to. As a post by David Roberts argues, the energy revolution underway has the potential to be as transformative as artificial general intelligence. It is also far more likely to arrive in the very near future.

As much as there is a resurgence of arguments in favour of oil, gas, and cross-Canada pipelines in the name of energy security, renewable energy is approaching the point where abundance is the norm. While home heating is a complicating factor for Canada right now, heat pump technology exists to rapidly electrify that, too.

If we are able to reach a point of energy abundance, that turns our economics on its head. Since the energy crisis of the 1970s, and then the growing crisis of climate change, an overriding imperative of our economic development has been the move toward greater energy efficiency. The result has been the decoupling of economic growth from emissions for the first time since the Industrial Revolution.

One consequence of that was the shift from innovation focused on “atoms” to one focused on “bits.” Abundant, clean, zero-emissions energy opens whole new avenues for innovation and for energy-intensive economic development (including energy-intensive AI, even if I think it is still worth emphasizing efficiency there, at least over the near term).

All of that is very exciting, and we shouldn't risk throwing that future away in a quest for short-term energy security built on pipelines and new gas plants. If anything, threats to our security should make it all the more imperative to speed up this revolution.

AI in Government

If you are a regular reader, you’ll know that I’m very skeptical about the rapid adoption of AI in government. It is somewhat concerning then to see that Mark Carney, the new Liberal leader and Prime Minister-Designate, proposed as part of his platform to “harness AI to increase productivity across government services.”

Interestingly, David Coletto of Abacus Data recently polled on this topic and found that the public mood is also highly skeptical: 34% of Canadians are either mostly or very negative about the impact of AI, more than the share who are positive. Coletto also polled on various Carney platform arguments, and the statement "Economic growth demands bold leadership – investing in artificial intelligence, education, and innovation to ensure our workers and industries lead the future" performed the worst.

That should be a wake-up call for those arguing in favour of the rapid adoption of AI and all those who are strongly pro-innovation. Coletto argues that “The data suggest that more Canadians are worried that AI will threaten their job security, compromise their privacy, and deeply disrupt their lives. Many are asking a fairly rational question: why would we speed up its adoption if I’m worried about it’s impact?” He also makes the case that “there is a collective anxiety about whether the risks outweigh the rewards.”

It is worth noting what the best-performing statement was:

We need an economy that works for everyone, not just for the wealthy. This means fair wages, good public services, and the rich and large corporations paying their fair share in taxes.

If you are pro-innovation, then you really need to work towards an inclusive vision of what Canada’s innovation economy can be and advocate for policies that can align innovation with those ends. As I frequently argue, innovation is a tool. And tools can be used for good or bad. We need to direct innovation so we unleash the good and mitigate the bad, to build an economy that really does work for everyone.

AI and Workers

I have a few pieces now related to my post last week on AI and workers. Some of these were shared by readers, which I really appreciate. If you have anything you think would be of interest to me or other readers, please do reach out, either in the comments, by email, or through LinkedIn.

Artificial Intelligence at Work: The Shifting Landscape of Future Skills and the Future of Work - The first is a report from the Future Skills Centre on AI in workplaces, published in the fall. There is a lot of interesting material here, but what stands out most for me is the lack of training and support being provided by employers. The report found that 68% of those who use AI at work are doing so on their own, either by learning to use the tools without any training (44%) or through self-guided training (24%). Canadian firms chronically under-train and under-develop their staff, as other FSC research has highlighted. We are not going to reap the rewards of AI until we build a more supportive and collaborative environment in which to use it productively.

Microsoft Study Finds AI Makes Human Cognition “Atrophied and Unprepared” - The second piece gets at the costs of using AI. Research from Microsoft and Carnegie Mellon University finds that increasing reliance on generative AI at work can “result in the deterioration of cognitive faculties that ought to be preserved.” The study outlines a “key irony of automation”: by automating the routine tasks, “you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.” Combined with research showing that AI makes your workday longer and reduces employee satisfaction, that is not a positive picture.

The research reminds me of an excellent essay from Bianca Wylie: Automating Summation — On AI and Holding Responsibility in Relationships. In it, Wylie discusses the time-consuming process of summarizing the outcomes of public meetings and the responsibility that comes with it:

By being the people running these meetings, we became responsible and in relationship with everyone that participated. It was our responsibility — in that relationship — to take the participant’s time and energy, their knowledge and beliefs, their advice and ideas, and their delivery of these things, and summarize them in a report.

Part of this summarization includes not only what was said but also how it was said and what was left unsaid in the room. Wylie argues that it is important to sit with this material and that there is real value in spending time with it all.

For Wylie:

When I see AI being suggested as a summarizing agent, I’m not only concerned about the accuracy of what is created through the use of automation, but moreso the absence or loss of what does not get done — what is inefficient and what is dull. I’m concerned because in the time-pressured world we live in, where efficiency is a constant measure of our professional capacity, there is every incentive to rid ourselves of this type of work if and where we can.

The historian Cate Denial makes a similar case in this piece: Why I’m Saying No to Generative AI. Denial walks through the process of exploring generative AI with students: setting out how the tools work, their limitations, and the ethical dimensions of AI, including the ecological costs of data centres and the labour violations involved in cleaning data. Denial then expands on the value, or more accurately the lack of value, of AI in her own work and research:

And when it comes to writing history, I feel very strongly that the value of searching for and using words to brainstorm, draft, redraft, polish writing skills, and discover our own thinking is paramount. Not everyone finds working with writing easy, and I have a responsibility to meet every student where they are and help them in as many creative ways as I can become better at it. But if employers want “good” writing (for some value of good), and if my students can only produce it by asking generative AI to do the work, then I’ve done them a disservice. Why wouldn’t an employer turn to a machine to write something in that case? (At least while that machine does the job for free.) Tracking down primary sources, collecting objects, listening to stories, deciphering handwriting, analyzing ideas, and thinking through what we think about those ideas . . . that is beautifully human work. I will no longer apologize for wanting humans to do it.

This resonates with me. I use AI in my own work. Grammarly’s AI-assisted editing helps clean up my writing and is, I hope, what makes these newsletters reasonably readable, even if they would no doubt still benefit from a human editor. Readwise's new AI feature to “chat” with highlights I have made in books and articles is incredibly helpful for surfacing quotes that I remember but can’t place.

But those kinds of uses are very different from asking ChatGPT to do the writing, or asking Gemini to summarize the endless stream of newsletters and articles that comes through my email and across my desk. For me, the hard friction of actually reading and writing brings new ideas to the fore and refines my thinking. That is time well spent. We need to exercise these abilities, not outsource them to AI and let them atrophy.
