Beyond the Tool: What Banff Made Clear About AI and Culture

By Ana Serrano, President & Vice-Chancellor, OCAD University

Photograph of Ana Serrano talking at the Banff National AI Summit in front of a crowd

I came back from Banff thinking that Canada’s first National Summit on Artificial Intelligence and Culture clarified something important. The conversation about AI and culture is often treated as a debate between opposing camps: those who want stronger protections and those who want faster adoption. But in Banff, it became clear that this framing is too simple. The real challenge is that Canada must solve two problems at once.

The first is lack of protection. Many rights holders believe creative work has been used to train models without consent, compensation, or meaningful accountability. They want stronger safeguards, as they should. There is also a broader concern that if licensing becomes part of the new normal, creators and cultural producers will once again be asked to accept weak terms in the name of innovation. That is not resistance to technology. It is a demand for fairer market conditions. The Dais’s new report, The Art in Artificial Intelligence, captures this well, particularly in its attention to unresolved questions around compensation, intellectual property, and training data.

The second problem is lack of capacity. Canada’s AI adoption rate remains relatively low. Statistics Canada reported that in the second quarter of 2025, only 12.2 per cent of businesses had used AI to produce goods or deliver services over the previous 12 months, even though that figure had doubled from the year before. And among those that are using AI, the most common applications remain fairly basic: text analytics, data analytics, and chatbots.

The cultural sector sits at the intersection of these two problems. It is economically significant, contributing an estimated $65 billion to Canada’s GDP and employing roughly 690,000 workers. But it is also unevenly positioned to adopt AI. The Dais notes that while information and cultural industries have been relatively strong adopters, arts, entertainment, and recreation remain among the lowest. That gap matters. It suggests that the issue is not simply whether culture is engaging with AI, but whether the people and institutions producing cultural value can do so under conditions that are fair, trusted, and workable.

This is why a sequential approach makes little sense. We cannot say that first we will solve copyright and compensation, and only later worry about adoption. But we also can’t push for faster uptake while leaving the underlying extraction model untouched. One path delays the future. The other reproduces the problem at scale.

What Canada needs instead are models that address protection and capacity together. That means moving beyond the idea of AI as just a tool and thinking more seriously about AI as infrastructure.

This is where Public AI becomes useful as a framework. By Public AI (or AI Commons as I have written in OCAD U’s Cultural Policy Hub blog) I mean shared, publicly governed, rights-cleared infrastructure that organizations can use, shape, and trust. That matters because most museums, publishers, arts organizations, and small creative firms cannot build their own AI stack. They do not have the capital, the technical teams, or the leverage to negotiate from strength. But they should not be left to choose between opting out entirely or adopting systems they had no role in designing. Public AI offers a third option: it creates the conditions for experimentation and adoption while also improving the terms around provenance, consent, compensation, transparency, and data sovereignty.

Importantly, this is not entirely theoretical. Indigenous communities are already advancing important parts of this work. As Jackson 2Bears noted on the panel I moderated, research initiatives such as Abundant Intelligences are helping show what sovereign, community-governed AI infrastructure can look like when cultural data, governance, and responsibility are built in from the start.

Arts and culture are not a side case. They are among the first sectors to feel the absence of guardrails and regulations around how AI is developed, owned, and put to use. The creative sector also includes exactly the kinds of institutions most exposed to the current imbalance in capacity: too small to build their own responsible systems, but too important to be left to depend on private platforms designed elsewhere. That is why OCAD University's Cultural Policy Hub is advancing, with public-interest partners, a proposal to test Public AI through arts and culture. The point is not to create a special exception for culture. It is to show that Canada can build shared AI infrastructure on fairer terms, with governance, access, and public value built in from the start. If we can make that work here, the model can extend well beyond the cultural sector.