Introduction
The Government of Canada conducted a Consultation on Copyright in the Age of Generative Artificial Intelligence between October 12, 2023, and January 15, 2024, to better understand the effects of generative artificial intelligence (AI) on copyright and the marketplace.
Given the growing adoption of generative AI tools, an increasing number of stakeholders in the cultural industries have expressed concerns about the impact of this technology. Creators see current AI practices as undermining their copyright protection, including their ability to consent to and be credited and compensated for the use of their works.Footnote 1 For their part, stakeholders from technology industries have expressed concerns about the uncertainty surrounding the application of the copyright framework to AI systems, with some expressing fears that this uncertainty may chill investment in, and reduce opportunities for, AI development in Canada.
In 2021, the government sought feedback on similar copyright policy issues in the Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things. At that time, stakeholders felt that it was too early to comment on the policy implications of AI given that this technology was still at an early stage. The purpose of re-engaging Canadians with these policy issues in another round of consultations was to continue the important fact-finding work that began in 2021, especially in the context of new generative AI systems and the evolution of the marketplace.
The government is pleased to present this report summarizing what we heard in the Consultation on Copyright in the Age of Generative Artificial Intelligence (the consultation). The views of Canadians expressed during this consultation are an important contribution to domestic policy considerations and to the broader global discussion. The aim of this report is to distill and faithfully reproduce the perspectives of the participants, allowing these perspectives to help the government reach a deeper understanding of the issues and support forward-looking policy discussions.
Who We Heard From
The consultation was conducted through an online questionnaire as well as virtual roundtable discussions on the basis of a published consultation paper. In total, about 1,000 interested Canadians submitted responses to the questionnaire. The government received 103 responses from organizations or expert stakeholders across different industries and those submissions have been made available online. Notably, a majority of respondents were individual creators.
Additionally, 62 stakeholders participated in seven roundtables held during the consultation. Roundtable invitees included a diverse group of stakeholders, including those from the cultural industries, the technology industries, public interest groups, legal practitioners and scholars, and Indigenous communities. Two roundtables were conducted in French, with the others being conducted in English. All participants were encouraged to speak freely, knowing their opinions would not be attributed to them or to their organizations.
Engagement from organizational stakeholders within the cultural industries was particularly high. Conversely, the consultation received less engagement, and fewer responses, from individuals and organizations in the technology and AI sectors. Additionally, there was limited input from Indigenous people.
What We Heard
The consultation sought feedback on three key copyright policy areas related to generative AI technology:
- the use of copyright-protected works in the training of AI systems, notably for text and data mining (TDM) activities;
- authorship and ownership rights related to AI-generated content; and
- questions of liability, notably when AI-generated content infringes copyright.
While these issues were outlined in the paper, participants in the consultation were also able to raise any other issues they thought important. A number of additional concerns about AI were raised frequently in the consultation. Some of these concerns related to copyright, while others may require consideration outside the copyright framework. These issues are noted below.
This section discusses and synthesizes feedback into 11 observations.
Text and data mining (TDM)
The first policy area addressed in the consultation paper is the use of copyright-protected works in AI development, notably in TDM activities. TDM is an essential step in the training of AI systems and consists of the reproduction and analysis of large quantities of data and information, including those extracted from copyright-protected works, to identify patterns and make predictions. The consultation paper considered whether the Copyright Act should clarify when the use of copyright-protected works for AI training and TDM requires authorization from rights holders or falls under an existing exception to copyright infringement.Footnote 2 The question of whether Canada should amend the Copyright Act, alongside some of its trading partners, to add a new exception to copyright infringement addressing TDM was also discussed in the paper.
Stakeholder views on TDM were divided. Many stakeholders emphasized the critical importance of rights holders being able to consent to the use of their copyright-protected works in TDM activities and to be remunerated for such use, citing significant risks to creators' rights without such protections. Conversely, a smaller group of stakeholders argued that TDM activities simply involve machines learning facts, statistical patterns, or other data from works, rather than reproducing the works or consuming their expressive content. These stakeholders submitted that the use of copyright-protected works for this purpose is non-expressive and does not actually engage copyright law. These stakeholders tended to be more supportive of a copyright infringement exception or of legal clarifications facilitating TDM activities. Further fact-finding may be helpful to better understand how or if copies are made, stored, and used when training AI.
Observation 1: Creators oppose the use of their content in AI without consent and compensation
Many submissions from individual creators and the cultural industries took the position that the unauthorized, unlicensed, and uncompensated use of copyright-protected works in TDM activities and in the training of AI is a violation of their economic and moral rights under current copyright law. The Copyright Act reserves several exclusive rights for rights holders, including the right to produce or reproduce their works or substantial parts of them; the right to perform works in public; and the right to publish them. The Copyright Act also grants moral rights to authors, including the right to the integrity of their copyright-protected content and the right to be associated with it. Individual creators and the cultural industries argued that, as the generative AI market develops, rights holders must be able to consent to the use of their content in the training of AI and should receive credit and compensation for these uses. They also expressed the view that there is no existing exception, nor should there be an exception, covering the use of copyright-protected works for TDM purposes. Some asked for clarifications to this effect in the copyright framework.
Most individual creators and cultural industry stakeholders considered licensing to be a viable option to enable the conduct of TDM activities on copyright-protected works. They noted that licensing markets for TDM uses are already being developed and emphasized the necessity of robust licensing frameworks to ensure fair compensation and enforcement mechanisms for creators. There were a variety of views regarding the type of licensing model that may be most appropriate. In addition to licensing fees to use copyright-protected works as inputs into TDM processes, some stakeholders also argued for potential downstream remuneration for rights holders for AI-generated content that used their works as inputs in training. Other stakeholders suggested that collective management may facilitate licensing.
Some stakeholders also addressed the prospect of compulsory licensing for TDM purposes. Compulsory licensing involves the establishment of a royalty that users must pay to use copyright-protected works. If the royalty is paid, rights holders cannot deny the use of their works. While this model limits rights holders' ability to authorize the uses of their works, it nevertheless ensures remuneration is paid. Critics of the compulsory licensing model were of the view that rights holder consent for the use of their works in TDM activities should be encouraged. They argued there is no evidence that rights holders and AI developers cannot develop voluntary licensing agreements and said those efforts should be allowed to continue before compulsory licensing options are considered.
Many stakeholders expressed a preference for a voluntary or opt-in licensing model, as opposed to an opt-out one. Under opt-out regimes, such as those adopted in European countries, rights holders must proactively indicate that they do not want their works used for TDM purposes; if they do not, their works can be freely used for such purposes. This contrasts with typical licensing processes, in which users must proactively seek rights holder authorization to reproduce, publish, or perform their works. Critics of opt-out regimes cited excess burdens on rights holders, who may have to monitor and object to many uses of their works, and possible conflicts with Canada's international obligations under copyright-related treaties.
However, some stakeholders, such as those in the technology industries, cited challenges that they held to be inherent to licensing. These concerns included the expense of licensing, which may notably disadvantage new or small businesses; the importance of facilitating AI training on larger, more diverse datasets; and the prospect of low individual payouts spread across a large number of rights holders.
Observation 2: User groups support clarifications that TDM does not infringe copyright
User groups, including the technology industries, were more likely to support clarifications to the law to facilitate TDM; the proposal for a copyright exception was the primary counterpoint to licensing. Some stakeholders expressed support for an infringement exception for the use of copyright-protected works in TDM and AI training, as discussed in the consultation paper.
Technology industry stakeholders generally supported a TDM exception, be it a new standalone exception or a broadening of an existing general exception, such as fair dealing, to address relevant uses. A few public interest stakeholders and some legal practitioners and scholars echoed these suggestions. Some stakeholders also used this discussion to reiterate longstanding concerns about contracts and technological protection measures. Stakeholders in some sectors, such as education, libraries, archives and museums, felt that contractual terms and technological protection measures may be used to impede users from benefiting from certain copyright exceptions. In the context of generative AI, stakeholders wanting to facilitate TDM highlighted that any new exception addressing this issue specifically should not be capable of being frustrated in the same way.
Stakeholders representing the technology industries and user groups made important points about a possible TDM exception. Many stakeholders suggested that Canada's competitiveness in the AI sector may benefit from legal changes clarifying that conducting TDM or AI training on copyright-protected works does not infringe copyright. Some argued that licensing is an unnecessary burden because it may not be clear that copyright is engaged or that works used in TDM are being reproduced in the first place. Some stakeholders wanted to see clarifications stating that such "uses", which in their view do not reproduce or consume the expressive content of works, do not infringe copyright.
On the other hand, a few stakeholders from various backgrounds expressed openness to more limited exceptions for TDM purposes, such as facilitating TDM for research or public interest purposes without infringing copyright laws. However, individual creators and cultural industries generally argued against any expansion of exceptions in the copyright framework for the use of copyright-protected works for TDM purposes. Some of these stakeholders expressed the view that any new TDM exception introduced may not align with Canada's international obligations.
Observation 3: Support for greater transparency regarding TDM inputs
In the consultation, there was significant interest in developing transparency requirements (i.e. recordkeeping and disclosure requirements) surrounding the use of copyright-protected works in the training of AI. While this issue was briefly raised in the consultation paper, it received further engagement in the roundtables and questionnaire responses. Support for transparency requirements was common among the cultural industries and individual creators; it was also common among some public interest stakeholders, legal practitioners and scholars, and stakeholders from fields such as libraries, archives and museums, and educational institutions.
Many stakeholders who supported transparency requirements argued that rights holders' inability to know what inputs were used to train AI inhibited their ability to evaluate if TDM engaged their works and to take action or seek compensation where appropriate. Because copyright is a private right that must be enforced by rights holders, cultural industry stakeholders and individual creators were concerned that transparency barriers may prevent rights holders from assessing their legal options. Some participants also suggested that transparency requirements could serve a more general consumer and public interest purpose by allowing the inputs and outputs of generative AI to be better understood and allowing consumers to choose the content with which they want to engage.
Conversely, some stakeholders, mainly from the technology industries, expressed concerns about transparency requirements that would mandate the disclosure of all inputs, including copyright-protected inputs, into TDM processes and training datasets. These stakeholders were of the view that such requirements could force them to disclose potentially sensitive data, such as personal health data, that may be used in specialized datasets. It was also suggested that, where companies train AI on their own proprietary data or their own intellectual property, disclosure requirements may be inappropriate. Finally, some stakeholders raised concerns that imposing additional transparency requirements may harm the competitiveness of the Canadian AI industry relative to other jurisdictions that do not impose these requirements.
Some related reflections on transparency are also addressed below (see Observations 7 and 9).
Authorship and ownership of works generated by AI
The second policy area addressed in the consultation paper relates to authorship and copyright ownership issues of AI-generated content. This issue attracted less attention than the policy questions around TDM, in both the roundtables and in responses to the questionnaire. The consultation paper asked whether the Copyright Act should clarify that it protects only works with sufficient human originality, or whether it should extend protection more broadly to AI-generated content.Footnote 3 Existing copyright jurisprudence in Canada suggests that authorship must be attributed to a human who exercises skill and judgment to create a work.
Observation 4: Support for the centrality of human authorship
Generally, participants in the consultation expressed support for keeping human authorship central to copyright protection. Many submitted that only AI-generated content with sufficient human contributions should be protected. This was true of most individual creators and stakeholders in the cultural industries, as well as stakeholders in other sectors, such as the technology industries, public interest groups, and legal practitioners and scholars. Overall, there was opposition to protecting AI-generated content without sufficient human contributions.
However, it was also noted that human authors must be allowed flexibility to use AI as a tool in their creation. Certain stakeholders, often from the technology industries, took a broad view of human originality, which could see protection for sufficiently complex prompts inputted by human users into AI systems. Moreover, many stakeholders submitted that current copyright laws are flexible enough to address authorship and ownership questions related to AI as they arise, and that changes to the law would be premature. Nonetheless, some disagreed, stating that clarifications may be appropriate. For instance, some cultural industry stakeholders advocated clarifying that performers have rights in their performances of AI-generated content (e.g. a human performance of a song generated by AI).
Among all stakeholders, there was limited openness to protecting AI-generated content without sufficient human input. In fact, some stakeholders suggested that in managing the registration system the Canadian Intellectual Property Office should adopt a disclosure requirement for AI-generated elements of works, similar to those of the United States Copyright Office. They proposed that AI-generated elements should not be copyright-protected even if other human-authored elements of the work were. However, a few participants took a different approach to the question of protection for AI-generated content. They noted that, because copyright aims to encourage innovation and creativity, it may be appropriate to offer legal protection to machine-generated innovation and creativity, with authorship attributed to the person who arranges for the creation of the content. Along with this view, some stakeholders commented that it may be appropriate to create a new legal regime specific to AI-generated content; however, details about what such a regime may look like were not specified.
Infringement and liability regarding AI
The third policy area addressed in the consultation paper involves a series of issues around liability and infringement. The main question was whether the Copyright Act should be amended to provide more clarity in the marketplace on copyright liability regarding AI-generated content. At present, it may be unclear who, between developers, owners, or users of AI systems, could be liable when AI-generated content is found to infringe copyright. Moreover, it may be difficult for rights holders to establish evidence or proof of infringement, given current uncertainties surrounding the content an AI system has been trained on.
Overall, the consultation yielded limited consensus on liability questions raised by infringing AI-generated content. The following observations focus on key issues that stakeholders discussed. First, stakeholders addressed whether existing legal tests for infringement and existing remedies are sufficient to determine whether a given piece of AI-generated content infringes copyright. Second, stakeholders proposed different parties who may be liable if AI-generated content infringes copyright. Finally, stakeholders discussed barriers, notably a lack of transparency, to determining whether an AI system accessed or copied a specific copyright-protected work prior to generating an infringing output.
Observation 5: No consensus about whether existing legal tests and remedies are adequate
As of the publication of this report, no Canadian court decision has examined the prospect of infringing AI-generated content. Existing legal tests and remedies for copyright infringement have thus not been applied by courts in the context of AI. While not all participants in this consultation addressed this issue, of those who did comment on it, approximately half argued that current legal tests for copyright infringement were sufficient. There were no unified calls for specific changes or submissions about specific gaps in the current legislation. This was particularly true of stakeholders in the technology industries, who generally suggested that it is premature to amend the Copyright Act to account for the potential impacts of generative AI.
Stakeholders from the cultural industries were more divided on these questions. About half of those who commented on this point agreed that legal changes were premature or stated that current law adequately addresses infringement and liability issues regarding AI-generated content. Others were more nuanced, submitting that existing laws and remedies for infringement as they pertain to AI-generated content are generally sufficient, but that transparency measures regarding inputs into AI systems may be required to ensure access to evidence. Further concerns about transparency in this context are addressed below (see Observation 7). Conversely, some stakeholders argued that clarifications should be made to existing laws to address infringement and liability issues regarding AI-generated content. One proposal put forward by stakeholders was a new legal presumption providing that, when AI-generated content is similar to copyright-protected works, those works are presumed to have been accessed in AI training. This presumption would facilitate infringement proceedings.
Stakeholders from the education sector, libraries, archives and museums, public interest groups, and legal practitioners and scholars were likewise divided on this issue. Some felt that current laws are sufficient while others suggested there should be greater legal guidance or changes to the law, such as greater penalties or a presumption that users should not be liable for the infringing outputs of AI systems.
Observation 6: No consensus about who may be liable for infringing AI-generated content
Stakeholders also discussed several proposals regarding who may be liable when AI-generated content infringes copyright. They mentioned a range of parties who may be liable, including AI owners, developers, deployers, and users; some suggested joint liability for different actors along the AI value chain. Certain stakeholders also felt that courts are in the best position to determine who should be liable in which circumstances, whereas others urged the government to provide more clarity.
While there was no agreement across stakeholder groups as to where liability should rest, stakeholders within related industries tended to express similar views. Many stakeholders from the cultural industries, education, libraries, archives and museums, and public interest groups submitted that liability should fall on the developers and deployers of AI systems. These views were also shared by many individuals responding to the online questionnaire. Some legal practitioners and scholars proposed liability for developers, deployers, or users, depending on the facts. Other stakeholders, often from sectors such as education or public interest groups, emphasized the importance of limited liability for users of AI systems, which would ensure they are not held responsible for damages or consequences arising from the infringing AI-generated content, provided they have followed relevant laws and regulations.
By contrast, stakeholders in the technology industries raised concerns about imposing liability on developers. They submitted that actors within their industries take precautions to avoid infringing AI-generated content; that such infringing content is a bug rather than a feature of generative AI systems; and that, in the rare cases where copyright infringement does occur, users of AI systems should be liable for prompting systems to infringe while developers should have no or limited liability.
Observation 7: Support for greater transparency to facilitate determining liability
Stakeholders from different industries identified what they viewed as a major barrier to determining infringement and liability in the context of AI, namely, a lack of evidence regarding AI inputs. Many expressed concerns that there is a lack of information available on whether an AI system accessed or copied a specific copyright-protected work during the AI training process. Some stakeholders also raised concerns that the companies training AI systems may not keep records of what material is used for training or TDM purposes.
Given this view, and given the fact that copyright must be enforced by rights holders themselves, most stakeholders expressed a desire to see some type of transparency requirements regarding inputs used to train AI. They submitted that greater transparency would allow rights holders to exercise existing copyright remedies in Canada, where appropriate. Transparency requirements were popular even where stakeholders thought that existing legal tests were likely sufficient to identify infringement. Supporters of this view submitted that the lack of publicly available evidence might make it difficult for rights holders to take advantage of the existing system and available remedies.
New transparency requirements were strongly supported by stakeholders from the cultural industries; they also received some support from stakeholders in education, libraries, archives and museums, public interest groups, and some legal practitioners and scholars. The only sector that expressed some reservations about transparency requirements was the technology industries, for reasons noted above (see Observation 3).
Engagement with Indigenous people
The government engaged with Indigenous people from various industries in the roundtables. Additionally, three percent of respondents to the online questionnaire identified as Indigenous. The roundtable discussions were particularly helpful in understanding some of the unique concerns that Indigenous people had about the effects of generative AI on their rights and cultural expression.
Observation 8: Concerns raised on the use of Indigenous cultural expressions in AI
Some Indigenous participants highlighted that generative AI raises several unique issues related to the rights of Indigenous Peoples while also exacerbating other existing challenges. Some underscored underlying policy issues, such as Indigenous data sovereignty, which refers to the ability of Indigenous Peoples to govern the collection, ownership, and application of data that pertains to them. More specifically, some Indigenous participants raised urgent concerns about the potential misuse of their cultural expressions by AI technologies and requested immediate action in this regard.
Indigenous participants noted concerns about the prospect of TDM being conducted on works of Indigenous cultural expressions. Some of these concerns overlapped with those of non-Indigenous stakeholders and individual creators, including concerns about the perceived need for permission and compensation for the use of their works in TDM activities. However, Indigenous participants raised additional challenges and reflected on some of the ways TDM highlighted broader discrepancies between Indigenous traditions and copyright law. The communal nature of many Indigenous cultures was one such discrepancy. Some Indigenous participants submitted that these alternative perspectives may make TDM-related challenges harder to address than simply identifying and paying individual rights holders for the use of their works in TDM activities; some noted that in certain circumstances, remuneration may be more appropriately paid to communities than individuals.
Indigenous participants also raised concerns about how AI systems may train on Indigenous cultural expressions from different communities, then amalgamate that expression and use the training to generate misleading, stereotypical outputs. Indigenous participants suggested some ways this challenge and other broader ones could be mitigated. Some preliminary suggestions included making efforts to include perspectives from diverse Indigenous communities and non-biased works in AI training datasets; penalizing the inappropriate use of knowledge-related resources; and funding Indigenous artists, as well as arts advocacy organizations, that would support Indigenous rights holders in protecting their rights against infringement. Indigenous representatives working in the technology industries emphasized that it is a general best practice, with all content, to limit the size of TDM input data to minimize the likelihood of generating infringing outputs. Other solutions proposed also echoed those of non-Indigenous stakeholders who suggested ways to offset the negative impacts of TDM on rights holders. These proposals included seeking permission and offering compensation for the conduct of TDM on works.
Other broader concerns raised by Indigenous participants included the use of generative AI to create Indigenous representations in art, music, film, and writing, and to create artworks in the style of Indigenous arts and artists. There was also interest in the potential opportunities AI presents for Indigenous Peoples, including prospective economic reconciliation opportunities through ownership and participation in the AI industry, and a possible role for AI in efforts to revitalize Indigenous cultures and languages. However, some Indigenous participants noted that some of these opportunities may present challenges of their own. For example, non-Indigenous people may own the AI companies and tools that become vital to preserving Indigenous languages.
Efforts to further reconciliation in the context of Indigenous cultural expressions are aligned with measure 101 in the government's Action Plan to achieve the objectives of the United Nations Declaration on the Rights of Indigenous Peoples Act. Engagement with Indigenous people will continue and will provide a forum to understand and explore these issues further.
Further commentary
In addition to the three main policy areas of interest and to the concerns raised by Indigenous participants, some participants in the consultation also addressed other AI-related topics. Some of these issues related directly to copyright while others did not. This section provides an overview of some recurring concerns.
Observation 9: Some support for labelling of AI-generated content
In addition to the feedback regarding transparency of data used in the training of AI noted above (see Observations 3 and 7), many stakeholders expressed interest in a legal requirement for transparency regarding AI-generated content. This idea received notable support from stakeholders in the cultural industries, from some legal practitioners and scholars, and from several individuals responding to the questionnaire. These stakeholders advocated for the labelling of mostly or fully AI-generated content to protect rights holders and consumers.
Supporters of this idea submitted that labelling AI-generated content could allow consumers to know the source of the content they are viewing or reading, thus allowing them to choose whether to engage with it. They further argued that labelling could also protect rights holders by flagging potentially misleading deepfakes (i.e., digitally manipulated images or recordings that may misrepresent subjects) of their likenesses and by allowing consumers to choose to support human-authored content instead.
Observation 10: Some concern over the use of performers' likenesses in deepfakes
Cultural stakeholders were particularly concerned with deepfakes that could compete in the market with original works or performances by performers, or that could give the impression that a person said or did something that could harm their reputation. In that regard, several cultural industry stakeholders advocated for new "personality rights" that would protect performers' name, image, or likeness (e.g., voice, animated images). A handful of stakeholders, mostly organizations representing performing artists, also advocated for Canada to extend the protection of audiovisual performances in copyright law, notably by providing moral rights in such performances. Proponents suggested that additional protection for audiovisual performances would enable performers to exercise more control over the use of their performances in AI and could address some of their concerns about deepfakes.
Observation 11: Concerns about negative impacts of AI on job security and unfair competition
A number of participants in the consultation expressed general concerns about the negative impacts of AI on the labour market. Concerns about job security and unfair competition were prominent, particularly among individual creators responding to the questionnaire. Their responses revealed considerable concern that the economic effects of AI could trigger job losses through automation. They advocated prioritizing urgent policy initiatives to protect and modernize the workforce.
In particular, submissions from cultural industry stakeholders highlighted urgent concerns about the impact of AI on rights holders' ability to monetize their content. These stakeholders expressed concerns about ensuring that human creators and their works will continue to be valued despite potential competition from AI-generated content. Some also suggested that legal frameworks should reinforce the bargaining power of labour unions representing sectors of the cultural industries in securing fair compensation for the use of content in text and data mining (TDM) activities.
More generally, many individuals responding to the online questionnaire also expressed negative sentiments towards AI companies and the technology industries. They expressed skepticism regarding the fairness and ethical standards of businesses operating within the AI sector. A few of them also called for stronger penalties and greater corporate accountability to prevent potential harms and unethical uses of AI. Some potential misuses of AI that concerned participants in the consultation, but that fall outside the copyright framework, include privacy violations, misinformation, propaganda, terrorism, and various types of criminal offences.
Conclusion
The government thanks all participants who engaged in the consultation for their feedback. The issues raised in this consultation are challenging, and the comments received are valuable in informing the government's policy. The government has heard from an unprecedented number of Canadians on copyright and AI, which speaks to the importance of these issues and the striking level of interest in them.
The government takes note of the variety of participants and diversity of views shared during the consultation. A majority of those who engaged in this discussion were part of the cultural industries, with a key concern being the need to ensure that creative works are used in AI only with consent, credit, and compensation. For their part, user groups, including the technology industries, voiced concerns that the current copyright framework could hinder Canada's competitiveness in the global AI economy.
These opposing concerns were notably reflected in the feedback received on text and data mining policy questions. Participants from the cultural industries emphasized the importance of consenting to, and being compensated for, the use of their copyright-protected works in TDM activities, while participants from user groups asked for clarification in the law that such activities should neither require the authorization of copyright holders nor give rise to compensation. In light of this feedback, the government will consider options to bring more clarity to the marketplace and examine how a balanced copyright approach to TDM activities could support the rights of creators while fostering Canadian innovation in an evolving global context.
Despite the lack of consensus on many issues among participants from different sectors, one notable area of agreement was the need for a substantial human contribution to authorship in order to qualify for copyright protection. Many participants were of the view that changes to the law in this regard would be unnecessary or premature, although others stated that clarifications and guidance may be appropriate. The government will examine whether any legislative or policy intervention could support this proposition and will study its implications.
It is also worth noting the significant interest in transparency, both regarding AI-generated content and copyright-protected content used as inputs in the development of AI. The government recognizes that additional transparency could assuage many concerns about AI. Greater transparency would be an important step forward to support copyright remedies in the context of AI and rebuild trust between stakeholders in this ecosystem.
The government has already committed to fostering greater AI transparency and sought to address it in the Artificial Intelligence and Data Act (AIDA) in Bill C-27. AIDA would have created a risk-based framework to ensure the safe and responsible development and deployment of AI systems in Canada. In November 2023, in response to feedback received from stakeholders, the Minister of Innovation, Science and Industry proposed amendments to AIDA in a letter that, while not directly addressing copyright law, would have addressed some of the concerns raised in this report. The proposed amendments included enabling the government to make regulations concerning the content used to train and build AI systems, the labelling of AI-generated content, and a requirement that AI systems identify themselves as such where they might be mistaken for real people. While parliamentary work on AIDA ceased with prorogation, the deliberations and witness testimony during the Committee study of AIDA will inform the government's future policies in this area.
The government continues to consider how Canadian concerns posed by generative AI, including those raised by the cultural and technology industries, might be addressed. In considering the policy issues at the intersection of copyright and AI, the government is particularly mindful of preserving the incentives to create and distribute works provided in the copyright framework, while supporting Canada's innovation strategy. The government recognizes that these policy issues have the potential to challenge the foundations of the copyright framework, but believes that copyright remains relevant in the age of generative AI. Copyright law has proven resilient in adapting to technological disruptions, from the printing press to the advent of the Internet, and to changing business models in the marketplace.
While the consultation period has ended, the government encourages stakeholders and all Canadians to continue sharing their views on these emerging issues as they evolve.