Canada: Artificial Intelligence
This country-specific Q&A provides an overview of Artificial Intelligence laws and regulations applicable in Canada.
-
What are your country's legal definitions of “artificial intelligence”?
As of July 2024, Canada does not have a statutory definition of artificial intelligence (“AI”). However, the latest proposed amendments to Bill C-27, the Digital Charter Implementation Act (“DCIA”), which is pending in Canada’s House of Commons, define an “artificial intelligence system” as: “a technological system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions.”
-
Has your country developed a national strategy for artificial intelligence?
In 2019, the federal government unveiled Canada’s “Digital Charter”, which includes a “Pan-Canadian AI Strategy” comprising three pillars: (1) commercialization through the financial support of three national AI institutes and five innovation clusters; (2) financial support for the Standards Council of Canada to develop standards related to AI; and (3) attracting AI development talent by supporting three centres of academic training and research as well as organizations providing dedicated computing capacity for AI researchers.
The proposed DCIA, referenced above, includes the Artificial Intelligence and Data Act (“AIDA”), which if enacted will apply to the use of AI by the private sector.
In July 2021, Canada launched the Consultation on Modern Copyright Framework for Artificial Intelligence and the Internet of Things (the “2021 Consultation”). The 2021 Consultation sought comments and information to help the government consider copyright policy in view of the challenges posed by AI following a 2018–2019 parliamentary review of the Copyright Act. The government received 38 submissions relating to three key copyright policy areas (text and data mining and the training of AI systems, authorship and ownership of AI-generated and AI-assisted works, and infringement and liability issues relating to AI). All submissions have been published online.
In October 2023, Canada launched the Consultation on Copyright in the Age of Generative Artificial Intelligence (the “2023 Consultation”) following the rapid development of generative AI systems.1 The 2023 Consultation sought input regarding the same three topics as the 2021 Consultation, with an increased focus on generative AI. The government received 98 submissions, all of which have been published online. A report summarizing the written submissions and subsequent roundtable discussions is expected to be released in the coming months.
Footnote(s):
1 For a more detailed discussion of the 2023 Consultation, see: <https://cassels.com/insights/canada-launches-consultation-on-copyright-in-the-age-of-generative-artificial-intelligence/>.
-
Has your country implemented rules or guidelines (including voluntary standards and ethical principles) on artificial intelligence? If so, please provide a brief overview of said rules or guidelines. If no rules on artificial intelligence are in force in your jurisdiction, please (i) provide a short overview of the existing laws that potentially could be applied to artificial intelligence and the use of artificial intelligence, (ii) briefly outline the main difficulties in interpreting such existing laws to suit the peculiarities of artificial intelligence, and (iii) summarize any draft laws, or legislative initiatives, on artificial intelligence.
Guidelines: The federal government has issued guiding principles on the use of AI by government ministries/agencies. These guidelines focus on issues such as system impacts, transparency, “explainability”, sharing of source code and training data, user training, ethical and legal use, risk assessment, safety, and governance. Similar guidelines have also been issued in various provinces. The Office of the Superintendent of Financial Institutions has also published a framework in support of safe AI development.
Several courts have also issued guidelines and practice directions regarding the use of AI. In December 2023, the Federal Court of Canada issued a Notice requiring that litigants provide a declaration, and consider certain principles, if using AI to prepare court documents, as well as a set of interim principles regarding the Court’s use of AI. The Court published an additional Notice in May 2024, which provides further guidance regarding the declaration requirement. Guidelines have also been published by the courts of several provinces and territories.
Legislation:
AIDA: While AIDA has not yet been passed into law, the latest proposed amendments to AIDA seek to regulate “high-impact” AI systems based on whether the intended use of the AI system falls within one of the following seven classes:
- the use of an AI system relating to determinations of employment, including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer, or termination;
- the use of an AI system relating to determinations of whether to provide services to an individual, the type or cost of services to be provided to an individual, or prioritization of services to be provided to individuals;
- the use of an AI system to process biometric information relating to either the identification of an individual (other than in cases in which the biometric information is processed with the individual’s consent to authenticate their identity) or the assessment of an individual’s behaviour or state of mind;
- the use of an AI system relating to the moderation of content found on an online communications platform, including search engines and social media services, or the prioritization of presentation of such content;
- the use of an AI system relating to health care or emergency services, but excluding a use referred to in any of paragraphs (a) to (e) of the definition “device” in section 2 of the Food and Drugs Act that is in relation to humans;
- the use of an AI system by a court or administrative body in making a determination in respect of an individual who is a party to proceedings before the court or administrative body; and
- the use of an AI system to assist a “peace officer”, as defined in section 2 of the Criminal Code, in the exercise and performance of their law enforcement powers, duties, and functions.
The proposed AIDA amendments also impose obligations on different parties depending on their role in the AI value chain. For example, an organization engaged in the development of an AI system would have distinct obligations and responsibilities from an organization involved in the deployment of the same AI system.
In addition, the proposed AIDA amendments introduce obligations for “general-purpose systems”, defined as “an artificial intelligence system that is designed for use, or that is designed to be adapted for use, in many fields and for many purposes and activities, including fields, purposes, and activities not contemplated during the system’s development.” The proposed requirements for general-purpose systems follow the same principles as those imposed on high-impact systems. For general-purpose generative AI systems, there are additional requirements, such as identifying the use of an AI system in interactions with human users and using “best efforts” to ensure that AI-generated outputs can be detected by humans.
Copyright Act: As noted above in response to question 2, Canada launched a consultation to seek comments and technical evidence from stakeholders on copyright policy issues relating to generative AI. The consultation noted the potential uncertainty surrounding the application of Canada’s existing copyright framework in the context of generative AI, particularly in relation to three topics: (1) the application of the copyright framework to text and data mining (TDM) and the training of machine learning models; (2) questions of authorship and ownership of AI-generated or AI-assisted works; and (3) whether and how to clarify Canada’s copyright framework to address infringement and liability issues relating to generative AI.
-
Which rules apply to defective artificial intelligence systems, i.e. artificial intelligence systems that do not provide the safety that the public at large is entitled to expect?
At present, no AI-specific rules apply to defective AI systems; claims would have to proceed under the existing legal frameworks described in response to question 5, below. If enacted, AIDA would provide for administrative monetary penalties as well as regulatory and criminal prosecution in respect of defective AI systems.
-
Please describe any civil and criminal liability rules that may apply in case of damages caused by artificial intelligence systems.
At present, there are no novel or specific rules or causes of action in civil litigation that would apply in the case of damages caused by AI systems. Broadly speaking, claims would have to be made under existing legal frameworks, such as tort law (including negligence and intentional or strict liability torts), intellectual property law, and human rights and privacy laws.2
For example, in February 2024, a civil resolution tribunal in the province of British Columbia found an airline liable for the tort of negligent misrepresentation, due to its AI chatbot having provided misinformation to a consumer.3 While the tribunal’s decision does not have precedential effect, it is an example of a potential framework for liability for misinformation provided by chatbots or other AI systems.
In order to establish negligence, which holds a person liable for damages caused by their failure to exercise reasonable care, a plaintiff must prove that: (i) the defendant owed a duty of care to the plaintiff to avoid the kind of loss alleged; (ii) the defendant breached that duty by failing to observe the applicable standard of care; (iii) the plaintiff has suffered damages; and (iv) the damages were caused, in fact and in law, by the defendant’s breach. In the case of negligent misrepresentation, additional factors are considered.
AI will likely pose challenges to the negligence analysis. For example, if an AI system operates autonomously, such as in the case of an automated vehicle, it might raise questions as to whether a duty of care is owed by any of the parties involved in the manufacture, distribution, or use of the system. Due to the “black box” nature of AI systems, it might also be technically difficult to prove that an AI system “malfunctioned” or that it did so because of a lack of reasonable care by any one or more of the parties involved, including the victim.
Certain uses of AI could potentially constitute one or more intentional torts. For example, creating a “deepfake” image or video of another person could give rise to liability for various intentional torts, including: (i) portraying the person in a false light; (ii) publication of embarrassing facts; (iii) appropriation of that person’s likeness; (iv) non-consensual sharing of intimate images; (v) intentional infliction of emotional distress; and (vi) intrusion upon seclusion.
In Canada, strict liability in tort is limited. Although future plaintiffs might seek to hold the operator of an AI system strictly liable for damages caused by the system,4 it remains to be seen how that issue will develop in Canada.
In addition, the training or use of an AI system may give rise to a claim for copyright or moral rights infringement under the Copyright Act, or a discrimination claim under the Canadian Human Rights Act or provincial and territorial human rights statutes.
With respect to criminal liability, Canada’s Criminal Code does not explicitly address AI, but it does contain several provisions related to the unlawful use of computers, computer systems, and data. For example, section 342.1 pertains to the unauthorized use of a computer and contains language related to obtaining “directly or indirectly any computer service”. A “computer service” is defined as including “data processing and the storage or retrieval of computer data.” While this provision appears to be aimed at “hacking”, it could potentially capture illegal activities involving AI and, by extension, its operators or inventors. Section 342.1 is a hybrid offence, punishable by up to ten years in prison. Various sections of the Criminal Code could also apply where AI is used to create a “deepfake” of another person, depending on the nature of the deepfake.
In March 2024, Canada tabled Bill C-65, proposing amendments to the Canada Elections Act. Although not explicit under the current draft, Bill C-65 may be interpreted as clarifying that the use of AI or deepfakes to impersonate certain election officials constitutes an offence under the legislation.
Footnote(s):
2 Common law torts would not be available in Quebec, which is a civil law jurisdiction. Principles of liability set out in the Civil Code of Québec would apply. See Civil Code of Québec, CQLR c CCQ-1991.
3 Moffatt v Air Canada, 2024 BCCRT 149. For a more detailed discussion, see <https://cassels.com/insights/talk-is-not-always-cheap-ai-chatbots-misinformation-leads-to-liability/>.
4 By way of analogy, see: Del Giudice v Thompson, 2021 ONSC 5379 (dismissing a claim of strict liability for a data breach).
-
Who is responsible for any harm caused by an AI system? And how is the liability allocated between the developer, the user and the victim?
Assessing responsibility for harm caused by an AI system will be one of the challenges of litigating AI system claims, particularly in negligence cases, where a plaintiff must prove the elements described in response to question 5, above. In the case of AI, where damages have been caused by an AI system acting autonomously, establishing the duty of care may be problematic unless the court is prepared to hold any of the parties involved in the commercialization of the system liable or to recognize a novel duty of care or cause of action. As discussed in response to question 5, although a civil resolution tribunal in British Columbia found that the defendant in that case owed the plaintiff a duty of care, the tribunal’s decision does not have precedential effect on other tribunals or courts. How liability will be allocated between such parties and the victim will not be straightforward, as it may be difficult to assess which, if any, of the parties involved failed to exercise reasonable care and to establish factual and legal causation between their actions and the ultimate damages.
-
What burden of proof will have to be satisfied for the victim of the damage to obtain compensation?
In a negligence action, the plaintiff must prove the elements of negligence on the civil standard of proof, namely on a balance of probabilities. Among other things, the plaintiff must prove that the defendant’s failure to exercise reasonable care caused the plaintiff’s damages. However, it remains to be seen whether Canadian courts might adopt a strict liability approach to damages caused by AI systems.
-
Is the use of artificial intelligence insured and/or insurable in your jurisdiction?
At present, losses related to the use of AI are not routinely listed as exclusions in commercial insurance policies. As AI advances and its uses become more routine and widespread, we can expect that policy exclusions for certain types of AI-related losses may become more common.
-
Can artificial intelligence be named an inventor in a patent application filed in your jurisdiction?
The Manual of Patent Office Practice is silent on the issue of AI inventorship, which has also not yet been specifically addressed by Canadian case law. Canada’s Patent Act does not have a provision that expressly provides that an inventor must be an “individual”. While the Supreme Court of Canada has stated that “the inventor is the person or persons who conceived of” the invention and who is “responsible for the inventive concept”,5 the Court did not explicitly consider whether inventorship is limited to natural persons.
On the issue of patentability, the Federal Court of Appeal made a passing reference to the determination of patentability in the age of artificial intelligence, stating that the issue is a “highly fact specific exercise, and it is impossible to attempt to define the full spectrum of particular circumstances that may exist depending on the nature of a particular invention implemented by computer,” particularly in light of “the advent of artificial intelligence.”6
Footnote(s):
5 Apotex Inc v Wellcome Foundation Ltd, 2002 SCC 77.
6 Canada (Attorney General) v Benjamin Moore & Co, 2023 FCA 168.
-
Do images generated by and/or with artificial intelligence benefit from copyright protection in your jurisdiction? If so, who is the authorship attributed to?
Canadian courts have not yet considered whether copyright subsists in AI-generated or AI-assisted works, or who would be the author(s) of such works.
To receive copyright protection, a work must be “original”. An original work is one that originates from an author, is not copied from another work, and is the product of the author’s exercise of skill and judgment, which must not be so trivial that it could be characterized as a purely mechanical exercise.7 There is also a geographic requirement, which may be satisfied based on, among other criteria, the citizenship or residency of the author of the work at the time the work was made.
The Copyright Act does not define the term “author”. However, courts have held that an author must be a natural person,8 since the term of copyright protection is tied to the author’s life and death. The Copyright Act grants certain moral rights to the author of a work, which, due to their personal nature, might also suggest that an author must be a natural person.
In both the 2021 Consultation and 2023 Consultation, the federal government identified three possible approaches to clarifying these issues in the Copyright Act, namely: (1) attributing authorship to the person who arranged for the work to be created; (2) clarifying that copyright and authorship apply only to works generated by humans or involving some form of human participation; or (3) creating a new and unique set of rights for AI-generated works.
In December 2021, the Canadian Intellectual Property Office (“CIPO”) issued a copyright registration for an image, which lists a human and an AI program as co-authors. However, because CIPO does not conduct substantive examinations of copyright registration applications, this is not necessarily indicative of CIPO’s position. In addition, although registration creates a statutory presumption as to the subsistence and ownership of copyright, the presumption is rebuttable. In July 2024, the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC) commenced an application in the Federal Court, seeking: (1) a declaration that copyright does not subsist in the image because it lacks sufficient originality, or in the alternative, that the human author is the sole author of the image because an AI program is not capable of being an author under Canadian copyright law; and (2) corresponding orders for the rectification of the copyright register. As such, these issues remain to be determined in court or clarified through legislative reform.
Footnote(s):
7 CCH Canadian Ltd v Law Society of Upper Canada, 2004 SCC 13, at paras 16, 25.
8 S. Knight Co Ltd v Canadian Standards Association, 2018 FCA 222, at para 147; Setana Sport Limited v 2049630 Ontario Inc (Verde Minho Tapas & Lounge), 2007 FC 899, at para 4.
-
What are the main issues to consider when using artificial intelligence systems in the workplace?
Non-unionized workplaces: The main concerns relate to human rights risks and severance obligations when AI adoption results in job eliminations or constructive dismissals.
From a human rights perspective, federal, provincial and territorial human rights statutes protect against discrimination in employment on the basis of various protected grounds, including race, gender identity, and age. The data used to develop AI systems and algorithms can, in some cases, reflect unconscious bias and unintentional discrimination, which can, in turn, create biased outputs contrary to human rights legislation.
AI adoption can also trigger severance obligations when job duties are either eliminated or significantly changed without the employee’s consent, which may give rise to constructive dismissal. Severance obligations can be a significant liability for employers in Canada.
Ontario has also passed legislation that requires employers to disclose in public job advertisements whether they use AI tools in the recruitment and hiring process. We expect other provinces and territories to adopt similar legislation.
Unionized workplaces: The main concern is whether the adoption of AI tools would trigger any obligations under the collective agreement to notify or engage in a consultation process with the union. Collective agreements generally require a consultation process where an employer proposes to adopt technology that may have an impact on terms and conditions of employment of unionized employees.
We generally recommend that employers adopt AI policies to govern the use of AI tools by employees. Such policies should address which employees are permitted to use AI tools in the performance of their work duties, whether there are restrictions on how AI tools may be used, issues relating to intellectual property, confidentiality and human rights, and the potential for discipline if AI tools are used contrary to the policy or any applicable legal requirements.
-
What privacy issues arise from the use of artificial intelligence?
The Personal Information Protection and Electronic Documents Act (“PIPEDA”) defines personal information as “information about an identifiable individual”. If AI systems use anonymized or de-identified personal information without obtaining consent for that use, individuals’ privacy rights may nonetheless be affected, particularly if the information can subsequently be re-identified.
Canadian privacy laws are generally based on the concept of informed and meaningful consent to the collection, use, and disclosure of personal information, unless an exception applies. In an AI world, obtaining a valid consent may not be feasible, particularly since the consent request must explain the consequences of granting consent. AI can analyze, infer, and predict individuals’ behaviour in ways that could affect a person’s ability to obtain credit, employment, insurance, or other benefits. AI could make unfair, biased, incorrect, or discriminatory decisions about individuals. The individual may not have provided an informed consent to this use of their personal information, particularly if the information that AI generates about the individual is considered to be untrue by that individual.
AI systems often use training data collected from public sources. However, under PIPEDA, only certain “publicly available” personal information may be used without consent, and the permitted uses are strictly limited. Permitting AI systems to go beyond those limits to train on publicly available personal information and generate new information from it without consent could result in individuals effectively losing control over their personal information and possibly their identities.
PIPEDA is based on the principle of minimizing the collection of personal information to that which is necessary to achieve the purposes of collection and to limit retention. AI models harvesting vast quantities of data to learn and draw inferences fly in the face of those principles. Additionally, as an overarching principle, PIPEDA requires that collection, use, and disclosure of personal information must be for purposes that a reasonable person would consider appropriate in the circumstances. If AI systems are making unfair, biased, incorrect, or discriminatory decisions about individuals, then the use of the personal information by such AI systems would not meet that threshold test.
Canadian privacy laws are also based on the concept of access rights and transparency. Under PIPEDA, individuals have a right to access their personal information, to receive an accounting of its existence, uses, and disclosures, and to correct any information they can demonstrate is inaccurate. How will individuals be able to exercise this right of correction with respect to information that AI generates about them? Individuals also have the right to withdraw consent, subject to legal/contractual limits, and reasonable notice. In an AI world where personal information has been used to generate further insights and information, how far will that right of withdrawal extend? At present, these are unanswered questions.
In addition, the province of Quebec has introduced disclosure requirements for enterprises that make decisions based exclusively on automated processing of personal information, which could conceivably capture the use of AI.
-
How is data scraping regulated in your jurisdiction from an IP, privacy and competition point of view?
IP: There is no specific IP legislation regarding data scraping. Canadian courts have found website scraping activities to constitute an infringement of copyright in certain cases,9 but have not yet considered text and data mining (TDM) activities performed in connection with the training of AI models. As noted above, Canada has launched a consultation process to consider copyright policy issues relating to TDM and generative AI.
Privacy: PIPEDA contains provisions that prohibit: (1) the collection of electronic addresses through the use of a computer program designed or marketed primarily for use in generating or searching for, and collecting, electronic addresses; and (2) the collection of personal information through any means of telecommunication, if made by accessing a computer system or causing a computer system to be accessed “in contravention of an Act of Parliament”.
In August 2023, the Office of the Privacy Commissioner of Canada (“OPC”) issued a joint statement on data scraping and the protection of privacy with 11 other members of the Global Privacy Assembly’s International Enforcement Cooperation Working Group. The statement called for social media companies and operators of websites that host publicly accessible personal data to “implement multi-layered technical and procedural controls to mitigate the risks” of unlawful data scraping. The OPC’s position is that operators that host publicly accessible personal data have obligations under data protection and privacy laws to protect personal information on their platforms from unlawful data scraping, and that mass data scraping incidents that harvest personal information can constitute reportable data breaches.
Competition: There are no specific provisions, case law, or enforcement guidelines that address data scraping for the purpose of training AI systems. While, in March 2024, the Canadian Competition Bureau (the “Bureau”) released a discussion paper outlining high-level considerations for artificial intelligence and competition, it does not specifically address data scraping. However, some guidance can be derived from a 2008 position statement by the Bureau concerning an investigation into a provincial Crown corporation that provided mandatory and optional insurance coverage. In that investigation, the Bureau found that the Crown corporation’s policies prohibiting insurance brokers from ‘screen scraping’ customer data from its online database, for use in preparing quotes and processing transactions for optional insurance offered by other insurers, were potentially “exclusionary”. This suggests that a dominant firm that restricts data scraping activities might attract Bureau scrutiny.
Footnote(s):
9 Century 21 Canada Limited Partnership v Rogers Communications Inc, 2011 BCSC 1196; Trader v CarGurus, 2017 ONSC 1841; Rallysport Direct LLC v 2424508 Ontario Ltd, 2020 FC 794 (on issue of damages), aff’d 2022 FCA 24.
-
To what extent is the prohibition of data scraping in the terms of use of a website enforceable?
At least one Canadian court decision has examined this issue in depth, finding that a prohibition on data scraping in a website’s terms of use was enforceable on the facts of that case.10 Where a website’s terms of use take the form of a “browse wrap” agreement, a binding contract may be established if the user is made aware of the terms of use and continues to use the website. A user may accordingly be found liable for breaching provisions of the terms of use that prohibit data scraping.
Footnote(s):
10 Century 21 Canada Limited Partnership v Rogers Communications Inc, 2011 BCSC 1196.
-
Have the privacy authorities of your jurisdiction issued guidelines on artificial intelligence?
Yes. In December 2023, the OPC jointly issued “Principles for Responsible, Trustworthy, and Privacy-Protective Generative AI Technologies” (“Principles”) with the privacy authorities of each Canadian province and territory. These Principles discuss the fundamental principles of PIPEDA (consent, appropriate purposes, necessity, proportionality, openness, accountability, individual access, limitation on collection, use and disclosure, accuracy, and safeguards) in the context of the use of AI systems. These Principles also advise companies to give special consideration to the identification and prevention of risks to vulnerable groups, such as children, and groups that have historically experienced discrimination or bias.
-
Have the privacy authorities of your jurisdiction discussed cases involving artificial intelligence?
On May 25, 2023, the OPC and its provincial counterparts in Alberta, British Columbia, and Quebec, announced a joint investigation into OpenAI’s ChatGPT in response to a complaint that OpenAI collected, used, and disclosed personal information without consent. The investigation will also consider whether OpenAI has met its obligations of openness and transparency, access, accuracy, and accountability and whether it has met the overarching principle under PIPEDA that it has “collected, used and/or disclosed personal information for purposes that a reasonable person would consider appropriate, reasonable or legitimate in the circumstances, and whether this collection is limited to information that is necessary for these purposes”.
In 2021, the same regulators conducted an investigation into the use of Clearview AI, Inc.’s (“Clearview”) facial recognition technology in Canada, considering the same issues regarding consent and appropriate purposes. The regulators concluded that Clearview did not collect valid consents and that its collection, use, and disclosure of personal information was “neither appropriate nor legitimate”. Clearview disagreed with the findings and ultimately withdrew from the Canadian market.
-
Have your national courts already managed cases involving artificial intelligence?
As stated above in response to question 5, a civil resolution tribunal in British Columbia has issued a decision regarding artificial intelligence.11 While a civil resolution tribunal is not a court and its decisions do not have precedential effect, the decision nonetheless provides a potential framework for liability for misinformation provided by chatbots or other AI systems.
In Haghshenas v Canada (Citizenship and Immigration), the Federal Court considered the use of AI in the administrative decision-making process.12 The case involved an application for judicial review of a decision by a Canadian immigration officer, which had denied a work permit application. The officer’s decision involved input assembled by an AI system, known as Chinook. On judicial review, the Federal Court held that the decision was procedurally fair because it had been made by the immigration officer, not the software. The court also rejected the argument that the officer’s use of the software rendered the decision substantively unreasonable.
In Orpheus Medica v Deep Biologics Inc., the plaintiff sought an interlocutory injunction against former employees who it alleged had misappropriated confidential information.13 The plaintiff claimed that its confidential information included its approach of using artificial intelligence to analyze a database of certain types of antibodies. The Ontario Superior Court of Justice dismissed the motion, finding that the concept of using AI for that purpose was not “unique” to the plaintiff or confidential. In addition, the AI system was not proprietary to the plaintiff. Rather, the plaintiff had been using open source, publicly available computer programs.
In James v Amazon.com.ca, Inc., the Federal Court denied the applicant’s request for a declaration that the respondent’s AI-based automated data request decision-making process did not comply with PIPEDA.14 The Court dismissed the application on the basis that: (i) the relief sought was beyond the scope of the applicant’s original complaint to the OPC; (ii) the allegation was not considered by the OPC in its investigation; and (iii) there was no basis in the record upon which to consider the issue.
Other decisions have involved parties that develop or use AI systems, but the claims in those decisions were not directly related to the AI systems. In one case, an online market research firm that used AI for web analytics and related purposes infringed copyright by posting the plaintiff’s photographs on its website. However, the decision does not indicate that the photographs were generated by AI, and AI was not directly relevant to the decision.15 Another decision involved a commercial dispute between an AI developer and one of its customers, but the AI system was not the subject of the claims.16
Footnote(s):
11 Moffatt v Air Canada, 2024 BCCRT 149. For a more detailed discussion, see <https://cassels.com/insights/talk-is-not-always-cheap-ai-chatbots-misinformation-leads-to-liability/>.
12 Haghshenas v Canada (Citizenship and Immigration), 2023 FC 464.
13 Orpheus Medica v Deep Biologics Inc, 2020 ONSC 4974.
14 James v Amazon.com.ca, Inc, 2023 FC 166.
15 Stross v Trend Hunter Inc, 2020 FC 201, aff’d 2021 FC 955.
16 Core Insight Strategies Inc v Advanced Symbolics (2015) Inc, 2021 ONSC 1717.
-
Does your country have a regulator or authority responsible for supervising the use and development of artificial intelligence?
Not at present. If AIDA in its present form becomes law, the legislation would be enforced by the Minister of Innovation, Science and Economic Development (“ISED”) and the newly created Artificial Intelligence and Data Commissioner (“AIDC”), which would sit within ISED.
-
How would you define the use of artificial intelligence by businesses in your jurisdiction? Is it widespread or limited?
According to Statistics Canada’s June 2024 report on the use of AI, 6.1% of Canadian businesses used AI in producing goods or delivering services over the past 12 months. The adoption of AI varied based on the industry – 20.9% of businesses in the information and cultural industries used AI to produce goods or deliver services, while only 0.7% of businesses in the agriculture, forestry, fishing, and hunting industries used AI.
Among the applications of AI, the most common was natural language processing, with 28.9% of businesses reporting such use. The next most common applications of AI include text analytics using AI (27.0%), virtual agents or chatbots (26.5%), data analytics using AI (25.0%), large language models (21.9%), and image and pattern recognition (21.8%).
-
Is artificial intelligence being used in the legal sector, by lawyers and/or in-house counsels? If so, how?
AI is being used in the legal sector. Several Canadian legal technology companies offer machine learning–based legal analytics products, which can be used to, among other things, analyze content in contracts and documents, and predict outcomes in future legal proceedings based on past judicial decisions. AI is also used by legal resource databases, including to classify, summarize, and analyze case law. In litigation, e-discovery platforms offer technology–assisted review to streamline the review of large sets of documents.
As mentioned in response to question 3, the Federal Court of Canada issued a “Notice to the Parties and the Profession” regarding the use of AI in court proceedings requiring parties to declare when documentation filed with the court contained content created or generated by AI. Provincial courts have issued similar practice directions requiring declaration of AI-generated materials.
-
What are the 5 key challenges and the 5 key opportunities raised by artificial intelligence for lawyers in your jurisdiction?
AI will raise many challenges and opportunities for lawyers and the practice of law. Some of the more notable challenges include the following:
- Advising clients about the use of AI or assessing the risks and merits of disputes involving the use of AI while the law is evolving.
- The selection of training datasets for AI systems used in law firms may pose challenges, which include the risk of confidential client information being inappropriately used and potentially disclosed, as well as the risk of performing analyses based on outdated datasets. This could also pose potential issues for ethical walls established within firms.
- Over-reliance on AI without appropriate review of its outputs risks reducing opportunities for the legal training and development of lawyers and staff, with a corresponding decline in critical thinking and practical legal skills. It may also result in a failure to meet the expected standard of care for legal professionals.
- The use of AI tools in law firms creates uncertainties relating to professional liability insurance coverage.
- Powerful AI systems designed specifically for the legal field may be beyond the financial reach of some law firms.
Some of the more notable opportunities include:
- The appropriate use of AI tools in a law firm may enhance efficiency and productivity without jeopardizing the quality of legal services. Examples include automating legal research, due diligence review, contract legal review, and document assembly.
- Depending on the dataset used to train an AI tool, a user will have easier access to a breadth of information that exceeds what might be possible to access via manual search methods.
- The possible use of an AI tool to evaluate the costs and benefits of litigation.
- The use of low- or no-cost open-source AI tools may allow smaller law firms to effectively compete.
- The cost savings offered by the use of AI in law firms may make legal services more widely available and accessible, including for small- and medium-sized businesses.
-
Where do you see the most significant legal developments in artificial intelligence in your jurisdiction in the next 12 months?
Key developments are likely to include: (i) the possible enactment of AIDA; (ii) the reform of privacy law, including amendments to PIPEDA, to address AI and strengthen regulators’ oversight; (iii) further steps in the copyright policy consultation process and AI-related copyright litigation; (iv) government monitoring of the approaches to AI taken by Canada’s major trading partners, including the implementation and enforcement of the European Union’s new AI Act; (v) further consideration of the use of AI in judicial and administrative decision-making; (vi) regulation of autonomous vehicles and other products incorporating AI systems; and (vii) further developments under Bill C-65.