Special Issue on ‘Artificial Intelligence Safety and Public Policy in Canada’ in Canadian Public Policy / Analyse de politiques
Sponsors: Canadian Institute for Advanced Research (CIFAR) and the University of Waterloo Cybersecurity and Privacy Institute (CPI)
Managing Guest Editor and contact email: Anindya Sen, Professor of Economics (University of Waterloo) and Acting Executive Director, Waterloo Cybersecurity and Privacy Institute (CPI), asen@uwaterloo.ca
Guest Editorial Committee
- Amanda Clarke, Associate Professor and Public Affairs Research Excellence Chair, School of Public Policy & Administration, Carleton University
- Jocelyn Maclure, Stephen A. Jarislowsky Chair in Human Nature and Technology and Professor of Philosophy, McGill University
- Juan Morales, Associate Professor, Economics, Wilfrid Laurier University
- Jonathon Penney, Associate Professor, Osgoode Hall Law School, York University
- Teresa Scassa, Canada Research Chair in Information Law and Policy and Professor of Law, University of Ottawa
REVISED Deadline for final paper submissions: 2 April 2025
The deadline for paper submission has been extended to April 2nd to ensure high-quality submissions.
We are also pleased to announce that authors of selected submissions will be invited to present their papers at a conference organized by CIFAR in Toronto on 28 May. CIFAR will cover the travel expenses of one presenter for each invited paper. Conference invitations to presenters will be extended before the end of April.
Artificial intelligence (AI) is increasingly shaping the world around us, changing how industries operate and deliver goods and services, creating new jobs, revolutionizing public services such as healthcare and education, and influencing public opinion. The past decade has seen a rapid rise in the sophistication of AI methods, most visibly the widespread adoption of Large Language Models (LLMs) by businesses, governments, and individuals.
Given AI’s sweeping impacts, it is critical to have policy and governance frameworks for evaluating societal safety in AI development and deployment. However, considerable knowledge gaps remain on how best to regulate AI, and the pace of AI regulation in Canada has lagged behind international efforts such as the European Union AI Act. Calls to accelerate AI regulation have recently come from across civil society, academia, government, and computer science. Interdisciplinary research is needed to move these conversations forward and to address the technical and policy challenges related to AI, including both its short-term and long-term risks.
In response to the need for more research on the governance and regulation of AI safety, CIFAR and the University of Waterloo CPI have sponsored a special issue of Canadian Public Policy. All papers are expected to discuss gaps in public policies, regulatory frameworks, and governance mechanisms on AI safety in Canada, lessons from other jurisdictions (where relevant), and possible solutions and paths forward. We take inspiration from the International Scientific Report on the Safety of Advanced AI, chaired by Canadian computer scientist Yoshua Bengio, which identified systemic risks including AI’s impacts on the labour market, the global AI divide, market concentration, environmental concerns, privacy, and copyright.
We invite submissions from researchers across a range of disciplines in the social sciences, humanities, law/legal studies, and STEM fields, with a preference for multidisciplinary collaborations. The editorial committee is especially interested in papers that address the following questions, grouped under three main pillars.
Governance and regulation
- What government policies could help Canada exploit the benefits of AI with respect to increasing innovation, productivity, and economic growth?
- How should policymakers weigh the long-term and short-term risks of AI systems?
- What are the necessary conditions to ensure that AI development and deployment are safe from a societal perspective? How might regulators arrive at a societal welfare model that helps policymakers identify appropriate levels of AI safety while balancing the economic benefits of AI innovation?
- What regulatory framework could government and public agencies use to evaluate the costs, benefits, and net societal effects of AI development and deployment?
- What lessons can Canada take from global AI regulation, such as the EU AI Act?
- Ontario has introduced Bill 194 to address cybersecurity in the public sector. What federal or provincial policies need to be implemented to ensure enhanced cyber safety for Canadians, given the threats posed by advanced AI systems?
Market competition and economics of innovation
- AI development has been dominated by a few firms with vast market power. What government policies should be considered to encourage more Canadian AI startups, which would give businesses and consumers more choice?
- Does Canada's Competition Act need to be amended to account for market power arising from data collection and possession and from AI use by a few firms?
- What regulatory initiatives are needed to ensure that data collection for AI purposes is conducted ethically and does not infringe on the intellectual property of other firms?
Trust, privacy, and misinformation
- How do we combat the threats of misinformation and disinformation, given the potential of AI-generated content to amplify them?
- Synthetic data offers exciting possibilities for AI innovation. What specific policies should be implemented to encourage its safe use?
- Do existing and proposed federal and provincial initiatives adequately regulate the use of machine learning algorithms from the perspective of public safety? What new legal or policy frameworks need to be developed or adapted to account for privacy in advanced AI systems?
- From a regulatory perspective, how can we integrate the concepts of fairness, explainability, and trustworthiness with AI safety? How can we develop and enforce standards consistent with these principles?
Details on collection of papers
Full papers should be a maximum of 7000 words, in English or French, and conform to the guidelines of Canadian Public Policy.
Submissions must be made under the option "Special Issue: Artificial Intelligence and Public Policy in Canada".
All submitted papers will go through the journal’s standard editorial peer review process.