
Social and cultural impacts of AI: A seat at the policy table

By Kelly Wilhelm, Head, Cultural Policy Hub at OCADU 

Policy debates on AI have been ramping up across the world. In this blog, we’ll look at where these debates stand in Canada following a recent roundtable discussion with senior government officials facilitated by the Cultural Policy Hub. We’ll also explore why AI policy matters to the arts, culture and creative industries, and how to get involved.

AI policy is not a new field. But efforts by governments to put policies and frameworks in place have become more urgent as AI systems become both more powerful and more ubiquitous. Recent Generative AI tools like ChatGPT have allowed anyone with a computer to use AI for everything from creating images to producing deepfakes and spreading misinformation, all with a few simple prompts.

At the same time, AI systems bring new creative pathways and potential. As we explored in a recent blog post on the implications of Generative AI for the arts, many artists and experts in the arts and cultural sector are using AI as part of their work and are active in discussions around AI policy. Recently, a number of artists and culture sector leaders responded to the federal government’s Copyright and Generative AI consultation, which closed on January 15, 2024. (For more information on how people were invited to participate, see our AI consultation toolkit.)

The Hub is working with those already in this field to connect these conversations and to strengthen the sector’s capacity to contribute to AI policy-making.  

AI is a challenging public policy area: the technology evolves quickly, has a global impact, crosses several policy issues and poses challenges to economic, social and cultural well-being. Governments are under pressure to strike a balance in policy and regulation that supports the public good. Experts around the world, including in Canada, are calling for safe, ethical and equitable approaches to the development and use of AI, to ensure that it is of net benefit to the people who either use it or are affected by it. The Organisation for Economic Co-operation and Development (OECD) calls this a “human-centric” approach to AI development.  

Put another way, policy makers seek to balance the opportunities of AI with the risks it presents to the economy, safety and social and cultural well-being. 

This was the starting point for a roundtable discussion that the Cultural Policy Hub was invited to co-create and facilitate with the Department of Canadian Heritage.  

In 2023, the Government of Canada’s top bureaucrat, Clerk of the Privy Council John Hannaford, launched a policy process to address issues that will have a significant impact on Canada in the next 10 years, including AI. Hannaford called on the Deputy Ministers, the senior officials who run government departments, to “shake conventional thinking on policymaking and develop fresh perspectives on key issues the country will face over the next decade” (https://policyoptions.irpp.org/magazines/december-2023/dms-future-public-service/).

The objective of this roundtable was to provide an opportunity for senior government officials from many departments to meet with a group of Canadian experts to explore the social and cultural impacts of AI, including: social cohesion and trust in democratic institutions; inclusion, diversity, equity and accessibility; creativity and cultural identity; and education and learning. 

Chaired by Canadian Heritage Deputy Minister Isabelle Mondou and facilitated by me, the discussion covered a wide range of topics, from the impact of AI on creators, creativity and cultural identity—including IP and copyright—to what constitutes ethical and equitable AI, including harm caused by biased systems, particularly for racialized people.  

The experts at the table were clear that the impact of AI on society cannot be a policy afterthought: as in all policy areas, it is the government’s responsibility to put people first. One expert stressed that the measure of success needs to change from economic growth, innovation and productivity to “citizen return on investment.”  

The conversation then turned to how Canada could develop policy to achieve this proposed shift:

1) Policymakers need meaningful engagement with people about what outcomes we want for our society as a result of the development of AI. One expert suggested flipping the priorities for AI development on their head: instead of building AI that performs a specific function, developers would be encouraged to ask what people’s biggest concerns are and then how AI can help address them, so that the development of AI is led by citizen and societal needs.

2) Urgently, and at a minimum, policy must reduce the risk of harm to individuals, recognizing that AI systems and tools don’t affect all people equally. For example, marginalized people are particularly affected by bias built into AI systems (see, for example, Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Joy Buolamwini). In these discussions we could be asking: if machine learning and AI-generated content are cheaper and faster, for whom and for what purposes do we want to use them? Companies may save time and money by using AI systems. But when those systems produce results that are incorrect or harmful, the burden falls on individuals, often marginalized people, to defend themselves and seek remedies. Few have the means to do so.

3) Guardrails and stronger prior conditions are needed before AI applications go to market, to prevent harm and support benefits. Introducing AI too quickly will exacerbate already eroded trust in public institutions. Some private companies themselves recognize the need for collaboration with government on this issue to build and maintain public trust in their AI products.

4) As AI tools become more ubiquitous and our interactions with them become an increasing part of everyday experience, people have the right to know when they are interacting with an AI or engaging with AI-generated content. There are two sides to this coin: the need for standards by which AI tools and AI-generated content are identified, and the need to build AI literacy through awareness, education and training.

5) Specific to the arts and cultural sector, artists and writers have the right to compensation when their protected intellectual property is used to train AI systems whose output can be packaged and re-sold, often in a “style” that mimics the original work. We should be asking: How can we enable artists and creators to produce work in this space and to benefit from the use of their intellectual property in AI systems? If AI systems such as Large Language Models (LLMs) depend on published works for their learning, how do we ensure a healthy pipeline of creators making this work, and how do we protect it from exploitation?

6) Consultation and governance of AI must proactively engage civil society alongside industry and others already at the table. Investment in research into social and cultural impacts is urgently needed: these are not well-understood, and governments lack data to inform policy decisions.  

On this last point, the group expressed the urgent and critical need for civil society to not only participate in AI policymaking but to be part of the governance model for decision-making as AI evolves. Capacity is a huge issue; as in all policy areas, who can and who is enabled to participate in this discussion will determine how policy is made and who benefits. 

The Cultural Policy Hub will continue to work on this policy issue. We are building a community of practitioners—artists, arts organizations and membership organizations, researchers, and policy-makers—to work with us. Please reach out if this is of interest to you.  

Want to get involved? Take a look at our toolkit on the ISED Consultation on Generative AI, sign the AI Impact Alliance’s petition, or consult our reading list below! 


  1. Beyai, J. (2022). “Policy Reflections: AI Generated Art Implications.” Cultural Policy Hub.
    https://culturalpolicyhub.ocadu.ca/news/ai-generated-art-implications
  2. Petition: Join the Art Impact AI Coalition & Support Artists Voices on the Future of AI. (June 2, 2023).
    https://www.change.org/p/join-the-art-impact-ai-coalition-support-artists-voices-on-the-future-of-ai
  3. Toolkit: Consultation on Copyright in the Age of Generative Artificial Intelligence. (December 2023). Cultural Policy Hub.
    https://culturalpolicyhub.ocadu.ca/sites/default/files/pdfs/Hub_AI%20Toolkit_EN_FINAL_.pdf
  4. “Intersection of AI & Copyright.” Copyright Clearance Center.
    https://www.copyright.com/resource-library/insights/intersection-ai-copyright/
  5. “Can GenAI companies train their systems on images, text, code, or other things I’ve made without getting my permission?” Knowing Machines.
    https://knowingmachines.org/knowing-legal-machines/legal-explainer/questions/can-gen-ai-companies-train-their-systems-on-things-i-made
  6. “EU AI Act: first regulation on artificial intelligence.” (December 6, 2023). European Parliament.
    https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
  7. “Canadian Guardrails for Generative AI – Code of Practice.” (August 16, 2023). Innovation, Science and Economic Development Canada.
    https://ised-isde.canada.ca/site/ised/en/consultation-development-canadian-code-practice-generative-artificial-intelligence-systems/canadian-guardrails-generative-ai-code-practice