Blog Post:

STATEMENT TO HOUSE OF COMMONS COMMITTEE ON INDUSTRY AND TECHNOLOGY ON ARTIFICIAL INTELLIGENCE (March 23, 2026)

I am Professor Emeritus of Political Science at the University of Victoria, and a fellow at the Center for Global Studies. I have researched and written about national and international privacy protection policy for over 40 years. I am also an advisor to several civil liberties and digital rights associations – including the Center for Digital Rights.

AI is a misleading shorthand for a range of different forms of machine learning and computation. I regard the current hype as the latest chapter in the long-standing drive for greater technical sophistication, in the hope that our lives will be happier and more prosperous if we just process more and more data with ever more advanced computing tools. While I concede that AI has some remarkable potential, a healthy dose of scepticism is necessary. The initial enthusiasm has now given way to a recognition of the enormous risks to our economy, our environment, our social fabric and our civil liberties.

Therefore, you should remain very sceptical about the business narrative that prescriptive regulation of artificial intelligence burdens companies, suppresses investment, and surrenders competitive ground to other countries. Regulatory certainty can reduce legal and reputational risk for companies building at scale, and safety- and privacy-by-design requirements can drive better engineering. The real liability lies in building AI systems and products that harm users (including children) without any measures to hold those systems, and the companies that build and deploy them, accountable. A sovereign AI strategy is not just about physical infrastructure; it is also about Canadian law and policy. There is no point building Canadian digital infrastructure if the information that flows over those networks is governed by the laws of other countries (such as the US CLOUD Act) and the opaque corporate practices of foreign big tech companies.

There is a plethora of laws, guidelines, standards, codes and other soft-law instruments that already apply in this space. The regulatory landscape is complex, often latent, and incomplete. I therefore find it very difficult to envisage a future for Canada without an overarching statutory framework for AI – an AIDA 3.0, but one that is more comprehensive, supported by credible oversight, and rooted in widespread consultation with all stakeholders. It is unfortunate that policy development to this point has suffered from a lack of genuine and widespread consultation. The trust gap is real, and it needs to be closed.

We now have sufficient experience to know what effective AI governance looks like: a complete prohibition on the most egregious and manipulative systems; mandatory risk assessments for high-risk systems (especially those that profile individuals); consistent data governance regimes; transparency of algorithms and of technical and policy documentation; and effective redress mechanisms for individuals whose rights and interests have been denied because of automated decisions made without effective human oversight. You will probably hear a lot of criticism of the EU AI Act as overly prescriptive. For all the flaws in its implementation, that legislation has got the categories and the regulatory framework about right, in my judgement.

Finally, be very aware of the intersection of AI governance and privacy protection policy. The Office of the Privacy Commissioner (OPC) has already been investigating: ChatGPT, for the non-consensual use of Canadians’ personal data to train its large language models; Grok, for the display and sharing of sexualized images; and Clearview AI, for the scraping of images from the Internet to fuel facial-recognition systems shared with law enforcement. Also bear in mind that the hallucinations to which generative AI is regularly prone can severely damage reputations. Privacy law mandates the accuracy of personal information, yet chatbots like ChatGPT regularly generate false information about people without offering a way to correct it.

As AI becomes embedded in our digital experiences, it is difficult to envisage a privacy case coming before the Commissioner that does not, in some measure, concern AI. I hope therefore that we will see a new Canadian Privacy Protection Act (CPPA) soon, one that gives the Privacy Commissioner the tools and the budget he needs to take on the gargantuan companies driving AI technology. There was vigorous debate about Bill C-27 on this committee in the last Parliament, and I think there is an emerging cross-party consensus that an effective and modernized law is urgently required.

A new CPPA should be based on the core principle that privacy is a fundamental human right, and provide the OPC with the full range of investigative and enforcement tools (unencumbered by a Data Protection Tribunal). It should also impose heightened requirements for personal data transferred outside Canada for processing. Modernizing and strengthening Canadian privacy law (including the Privacy Act) will not address all the risks associated with AI development and deployment, but it is an urgent first step to advance Canadian digital sovereignty.