
Blog Post:

Response to ISED Consultation on Canadian Code of Practice for Generative Artificial Intelligence Systems

I am grateful for this opportunity to comment on the consultation on a code of practice to regulate generative AI systems. As a scholar who has researched and written a great deal about governance arrangements for privacy protection (including the use of codes of practice), I believe I have something useful to contribute to the debate on this vitally important issue.

In particular, I would like to comment on the idea of a “code of practice” and on the conditions under which such codes might be made effective. ISED, and its predecessor Industry Canada, has considerable experience promoting the use of industry codes of practice, going back over thirty years. The fact that the government is now grappling with complex new technological innovations does not mean that that experience is irrelevant. On the contrary, the questions about the conditions under which instruments of self-regulation might be made to work are enduring. They are as relevant today, in the context of AI regulation, as they were thirty years ago, when privacy protection first arrived on the government’s agenda and earlier generations of IT were being regulated.[1]

I wish to address four aspects of the consultation on the code of practice. These issues were all raised during the excellent interactive workshop organized by AccessPrivacy on August 16, 2023: the timing and process; the definition of a code of practice; the conditions and process for industry buy-in; and the essential elements. I leave your questions about the content of the code until last, because I believe that the substance cannot effectively be understood and promoted unless crucial issues about the nature and function of this instrument are first addressed and resolved.

  • The Timing and Process

The timing of this consultation exercise is curious, given that legislation is already before Parliament in the form of the Artificial Intelligence and Data Act (AIDA). I could have envisaged a process whereby ISED convened the relevant stakeholders (including consumer groups), who would negotiate a real “code of practice” that would then form the basis for legislation. Such a “bottom-up” process is similar to that through which the Personal Information Protection and Electronic Documents Act (PIPEDA) was developed and promulgated in the late 1990s. Industry and consumer stakeholders negotiated the Canadian Standards Association’s Model Code for the Protection of Personal Information, which formed the essential principles within PIPEDA. The process had the advantage of securing buy-in from key industry and consumer groups at the outset, and of considerably reducing opposition to the legislation when it was later introduced.

Here the process is reversed. Framework legislation has been introduced, and stakeholders are then invited to comment on the substantive issues through the development of a code of practice, ahead of the promulgation of AIDA and the development of regulations. As a result, it is unclear how the code, the legislation and the regulations are expected to interact and operate together within the governance framework.

As many have remarked, it is also clear that the consultation so far has been heavily weighted in favor of industry stakeholders. Companies, obviously, have to work with whatever rules are promulgated. But unless those rules are seen to have been openly and widely developed, they will have less credibility. It is therefore critical that Canadian consumer groups be more closely involved in the process. They are the ones who receive complaints from individuals and who have practical experience of how automated decision-making and machine learning can affect Canadians’ rights and interests. I also note that none of those groups is represented on the current Advisory Council on AI.

  • The Definition and Purpose

That leads me to a central question. What is this code of practice? What kind of policy instrument is being envisaged here? Is it just a set of high-level commitments and promises? Or is it going to be a more substantial code of conduct that gives detailed guidance on organizational practices and implementation? There has been considerable analysis and debate in the literature on these questions, and on the conditions under which such instruments can be most effective.

Codes of practice can operate with various levels of compulsion, with command-and-control sanctions at one end of the continuum and pure voluntarism at the other. In the middle are a number of mixed or “co-regulatory” approaches, which allow for variable degrees of flexibility and different levels of governmental involvement and compulsion.[2] Despite a huge volume of academic literature, discussion about codes of practice tends to proceed with a lack of clarity on basic terminology, and without appropriate reflection on the long experience of how codes of conduct have operated in various policy sectors in different countries.[3]

At various times, codes have been adopted to anticipate legislation, to supplement legislation, to pre-empt legislation and to implement legislation. As legislation becomes more imminent, as is the case with AIDA, codes of practice can be used politically to shape the content of legislation and, eventually, to inform regulatory and court interpretation on compliance. By preparing member organizations for their future legislative commitments, the better codes of conduct anticipate legislation. If the content of a code is close to that of the law, if the code has been negotiated through a true multi-stakeholder process, and if it has been accompanied by some genuine effort at implementation, then it tills the ground, reducing the possible disruption caused by legislation. I assume that is ISED’s intention here, though it is not at all clear.

However, codes of practice or conduct must be codified. They should be more than a set of public relations pledges or statements of good intention. They should entail guidance about practice and conduct. They should, therefore, be inward- as well as outward-looking. Codes of conduct are not only statements of “what we do” but also of “how we do it.” That entails careful consideration of internal data governance, management and accountability.

In previous work, we have made a distinction between privacy commitments, privacy codes, privacy standards and privacy seals. These instruments should be applied cumulatively: an organization should be encouraged to make a public commitment to adhere to a set of over-arching principles; codify that commitment in more detail in a code of conduct or practice; if necessary, have that code certified by an external body (a standards body or an auditor, for example); and then receive a privacy seal as a “good housekeeping seal of approval.” The overall goal should be to get organizations to “say what they do, and do what they say.”[4] I believe this typology is also relevant to the AI context.

However, that process is considerably confused in ISED’s consultation document, with the result that industry is unclear about what it is being asked to consult on. ISED has remarked that the code “would be implemented on a voluntary basis by Canadian firms ahead of the coming into force of AIDA” (my emphasis). But what does that mean? I would argue that if a company publicly states that it is adopting a code of practice, then that commitment entails far more than “voluntary” implementation. A public statement of adoption would mean that the company is implementing all the elements of the code (not cherry-picking among them) and that it will suffer consequences, legal and reputational, if it is discovered not to have been implementing the code in good faith.[5] The act of adoption would thus entail an obligation to “do what you say,” with consequences for non-compliance. Is this what is meant when we are told that the “code will be sufficiently robust”? None of this is clear. These issues need to be clarified, and urgently.

  • Industry Buy-In: Firms or Sectors?

I note that the consultation document states that the code would be “implemented by Canadian firms.” So, are companies going to be asked to “sign on” to the code (as several participants in the AccessPrivacy workshop seemed to suggest)? And what does signing on entail? Would it be a public process? Would it involve the government maintaining and managing a register of compliant companies, as happens in other policy areas?

The emphasis on Canadian firms also suggests that the various industry associations have no role to play in this process. Yet sectoral codes of practice have played a very important role in Canada in different policy areas. The impact of generative AI is going to vary, as will the social and individual risks, according to its application in different sectors of the economy. There is no reason to assume that an AI code should be the same for the application of generative AI in different economic and organizational settings – from insurance, to banking, to marketing, to telecommunications, to health, and so on. If this code of practice is to operate as a high-level document that different sectors might adapt, then it could serve an important purpose. But that too needs clarification.

And what will be the role for third-party assessment? Is it envisaged, for instance, that companies would seek formal certification of their compliance? On this question, the role of certifiable standards and standards-certification bodies enters the picture. We already have a number of existing and emerging standards coordinated through the Standards Council of Canada and the AI and Data Governance Standardization Collaborative.[6] How does this code of practice support that larger effort at the standardization of AI and data governance, on which an enormous amount of valuable work has already been completed?

  • The Essential Elements

The questions addressed above about the nature of this policy instrument, its scope, the level of compulsion and its relation to legal and other standards need to be answered clearly before the substance can be fully understood and debated. That said, and turning to the question asked: are the core elements right? I have no objection to the elements presented. I was surprised, however, that privacy is not listed as one of them.

Generative AI systems, and the large language models on which they are based, can utilize vast quantities of personally identifiable information (PII), including information about children. The G7 Ministerial Declaration affirmed the commitment that “AI policies and regulations should be human centric and based on democratic values, including protection of human rights and fundamental freedoms and the protection of privacy and personal data.”[7] Privacy also makes an appearance, albeit a fleeting one, in the Biden Administration’s statement on voluntary commitments on the risks posed by AI.[8]

The G7 privacy authorities have also released a joint statement on the use of generative artificial intelligence, emphasizing the major risks that AI may present from a privacy perspective. Among other recommendations, they urged that privacy by design be embedded in the design, conception, operation and management of generative artificial intelligence systems.[9] Individual data protection and privacy authorities in several countries, on their own initiatives or in response to consumer complaints, are also beginning to address the emerging privacy risks that will accompany the more extensive deployment of AI systems. In Canada, the OPC, jointly with provincial privacy authorities, recently announced an investigation into the prominent AI firm OpenAI, developer of ChatGPT.[10]

The absence of any reference to privacy and data protection in the essential elements of this code of practice is, therefore, puzzling and troubling, especially as AIDA has been introduced in a legislative package (Bill C-27) that also includes a new and updated privacy law; one assumes that this was done to ensure consistency between the AI and privacy legislative frameworks. And, like it or not, if the language models are based on the processing of PII, then firms using AI will have to comply with the various provisions of federal and provincial privacy law, and will want guidance on how that should be done responsibly.

It might be argued that the risks to privacy are already captured by the proposed elements of the code of practice. I do not see that. AI systems can be safe, fair, equitable, transparent, subject to human oversight, valid and robust, and accountable, and yet still be highly intrusive. If the overall goal is to reinforce trust in these systems, then privacy and data protection must be one of the elements of any future code of practice on generative AI.

_______________________________________________

[1] There is an extensive literature on industry codes of practice, both generally and in the privacy context. One example is by former Industry Canada official Kernaghan Webb: Voluntary Codes, Private Governance, the Public Interest and Innovation (Carleton Research Unit for Innovation, 2004). I also wrote on the issue when the CSA privacy code was being developed: Colin J. Bennett, Implementing Privacy Codes of Practice (Canadian Standards Association, 1995). For more current work, see Dennis D. Hirsch, “In Search of the Holy Grail: Achieving Global Privacy Rules through Sector-Based Codes of Conduct,” Ohio State Law Journal 74:6 (2013).

[2] Margot Priest, “The Privatization of Regulation: Five Models of Self-Regulation,” Ottawa Law Review 29 (1998): 233-302.

[3] See Colin J. Bennett and Charles D. Raab, The Governance of Privacy: Policy Instruments in Global Perspective (Cambridge, MA: MIT Press, 2006), ch. 6.

[4] Bennett and Raab, The Governance of Privacy, ch. 6.

[5] This co-regulatory approach is the model on which the new EU-US Data Privacy Framework (and its predecessors) is based.

[6]  Standards Council of Canada, AI and Data Governance Standardization Collaborative at: https://www.scc.ca/en/flagships/data-governance

[7] Ministerial Declaration:  The G7 Digital and Tech Ministers’ Meeting (April 30, 2023) (para. 45) at:   http://www.g7.utoronto.ca/ict/2023-declaration.html

[8] The White House, “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI” (July 21, 2023) at: https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/

[9] Statement on Generative AI (June 21, 2023) at: https://www.priv.gc.ca/en/opc-news/speeches/2023/s-d_20230621_g7/

[10] Office of the Privacy Commissioner of Canada, news release on the investigation into OpenAI, developer of ChatGPT, at: https://www.priv.gc.ca/en/opc-news/news-and-announcements/2023/an_230404/