AI Regulatory Roundup: What You Missed in AI News While You Were Gone for the Summer
Introduction
The relentless march of Artificial Intelligence (“AI”) continues, pushing nations worldwide to adopt diverse strategies to keep pace.
This article is a follow-up to our previous AI regulatory roundup bulletin, published earlier this year, which you can find here.
Canada
- AIDA Still in Progress. Canada aims to regulate AI through the proposed Artificial Intelligence and Data Act (“AIDA”). As discussed in our previous bulletin, AIDA seeks to mitigate the harms of AI systems and encourage their responsible use by imposing strict oversight and transparency obligations on organizations looking to leverage the technology. Bill C-27, which contains AIDA, will be considered by the Standing Committee on Industry and Technology in the fall.
- Voluntary Code of Practice for Generative AI. Following the announcement of a set of voluntary commitments entered into by AI developers in the United States, Innovation, Science and Economic Development Canada (“ISED”) has announced its development of a similar code of practice. In its press release, ISED seeks comments on a set of potential elements for the code, which include safety; fairness and equity; transparency; human oversight and monitoring; validity and robustness; and accountability. The code is intended as an interim measure while AIDA makes its way through the parliamentary process.
- CCCS Issues Generative AI Warning. The Canadian Centre for Cyber Security (“CCCS”) has issued a guidance document to organizations using generative AI technologies, noting potential risks of misinformation, data privacy concerns, and biased content. To mitigate such risks, the CCCS recommends verifying the accuracy of content, staying on top of security updates for software, and implementing strong multi-factor authentication methods to protect personal data.
- Federal Privacy Regulator’s Joint Statement with G7 Counterparts on Generative AI. Canada’s Privacy Commissioner Philippe Dufresne, along with his G7 counterparts, released a joint statement recognizing the privacy harms that may arise from the unregulated use of generative AI. The statement urges developers to embed privacy considerations into the design and implementation of generative AI technologies.
- Joint Statement on Data Scraping. On August 24, the federal privacy regulator also joined forces with data protection authorities around the world to issue a joint statement on data scraping. The statement indicates that indiscriminate online data scraping violates privacy laws, and that social media companies must take steps to prevent data scraping on their platforms. This guidance will be particularly relevant to AI developers hoping to train AI systems on purportedly “public” data.
- Canadian Courts Issue Practice Directions on the Use of AI in Legal Filings. The Supreme Court of Yukon (see directive) and the Court of King’s Bench of Manitoba (see directive) now require counsel to disclose their use of generative AI technology in their legal research or submissions. This follows multiple incidents of lawyers being fined by courts for citing fictitious case law generated by generative AI tools.
European Union
- Artificial Intelligence Act Adopted. The European Union (“EU”) is at the forefront of AI regulation, notably with the Artificial Intelligence Act (“Act”). On June 14, 2023, Members of the European Parliament adopted the Parliament’s negotiating position on the Act by a majority vote, with the final wording of the law to be negotiated with EU member states in the Council. The Act takes a risk-based approach, categorizing AI systems according to the risks they pose to human privacy and safety: the higher the risk, the stricter the regulatory requirements.
USA
- Voluntary Commitments from AI Titans. On July 21, the Biden-Harris administration obtained voluntary commitments from seven leading AI companies to develop safe, secure, and transparent AI technology (see announcement here). The companies specifically promised to conduct security testing prior to the public release of their technology, to disclose their technology’s capabilities and limitations, and to invest in robust cybersecurity safeguards.
- NIST AI Framework and Working Group. While the federal government has not passed legislation targeting the use of AI, certain governmental agencies have provided guidance. One example is the release of the AI Risk Management Framework by the National Institute of Standards and Technology (“NIST”) earlier this year. The framework is a voluntary guide for companies using AI technology. The US Secretary of Commerce also recently announced a new NIST public working group on AI, which will, among other things, provide guidance to organizations that are developing, deploying and using generative AI.
- National Agencies Collaborate with Joint Statement on AI. Several federal agencies in the USA have published a joint statement regarding enforcement against discrimination and bias in the use of AI technology. The statement explains that the use of AI may perpetuate discrimination and violate federal law, and the agencies pledge to continue monitoring AI use and protecting individual rights in order to uphold America’s core principles of fairness and equality.
- State-Specific AI Laws. While the federal landscape remains free of AI-specific legislation for the time being, certain states and cities have adopted laws impacting AI. For example, New York City’s Local Law 144 recently came into force, imposing requirements on employers that use automated employment decision tools (see our bulletin discussing these laws here). Various newly in-force state privacy laws, such as Connecticut’s Data Privacy Act, Colorado’s Privacy Act, and Virginia’s Consumer Data Protection Act, provide the ability to opt out of profiling in furtherance of automated decision-making.
China
- Measures for Generative AI Come into Force. China has published its own regulations targeting generative AI: the Interim Measures for the Management of Generative Artificial Intelligence Services (“Measures”). The Measures came into force on August 15, 2023 and apply to generative AI technologies offered to the general public. The Measures aim to prevent AI-based discrimination, respect the privacy rights of citizens, and ensure AI transparency.
Brazil
- Bill 2338 Proposed. Brazil is following the EU approach with proposed Bill 2338, which categorizes AI systems by risk and regulates them accordingly. High-risk AI systems will require a mandatory impact assessment (see English summary here). Bill 2338 also includes general transparency requirements, among other privacy protections.
The United Kingdom (“UK”)
- A Pro-Innovation Approach. The UK endorses a “light touch” and “pro-innovation” approach to the regulation of AI, as outlined in its recently published policy paper, which was developed after consultation with stakeholders. The UK government’s current plan is to develop a set of principles in further consultation with industry, which would be enforced through existing regulatory bodies. The policy paper identifies five key principles: (i) safety, security and robustness, (ii) appropriate transparency and explainability, (iii) fairness, (iv) accountability and governance, and (v) contestability and redress.
Australia
- Voluntary Principles. Currently, Australia has also chosen not to take a legislative approach to AI regulation, instead publishing a set of voluntary AI Ethics Principles focusing on human values and well-being, transparency, AI reliability and safety, and fairness.
- Two Papers in Advance of Anticipated Regulatory Scheme. Nevertheless, Australia’s Minister for Industry and Science has stated his intention to implement AI regulations in the future. The anticipated framework is expected to follow the European Union’s risk-based approach, classifying AI systems by their degree of risk to society. In the meantime, the Australian government has released two papers assessing the potential risks and opportunities of using AI technology in the country.
New Zealand
- Increased Interest in Future AI Regulation. New Zealand’s current policy landscape lacks a sweeping regulatory scheme. Certain governmental agencies, however, have taken an increasing interest in AI regulation. For example, New Zealand’s privacy commissioner recently published an update to its Generative AI guidance document, outlining expectations of transparency and safety for organizations deploying artificial intelligence.
Singapore
- AI Verify Toolkit. Singapore has similarly chosen not to implement a comprehensive legislative regime for AI, instead taking a more flexible approach. Two initiatives are of note: the first is Singapore’s “AI Verify” software toolkit, which allows companies using AI to demonstrate that their technology is consistent with eleven governance principles commonly recognized in regulatory frameworks around the world.[1]
- Model AI Governance Framework. The second initiative is the publication of the Model AI Governance Framework, which aims to provide readily accessible guidance for all private sector organizations looking to use AI technology. The aim is to promote public transparency, safety, and trust in companies’ use of artificial intelligence.
Notable AI Lawsuits
- FTC Investigates OpenAI for Potential Privacy Violations. The Federal Trade Commission (“FTC”) has begun investigating OpenAI for potential privacy and cybersecurity violations relating to the data used to train ChatGPT. The FTC is requesting documentation to determine whether OpenAI’s training practices resulted in a breach of privacy or personal information security.
- AI Factual Inaccuracy Leads to Lawsuit. Mark Walters, a Georgia-based radio host, is claiming damages after alleging that ChatGPT falsely concocted a story accusing him of fraud and embezzlement. According to Walters, ChatGPT produced the factually inaccurate story about him through a phenomenon in generative AI known as a “hallucination.” This is one of the first civil cases in the United States litigating the factual accuracy of ChatGPT’s output.
- AI and Copyright Infringement. Beyond reputational harms, OpenAI is also being sued for copyright infringement. US comedian and actor Sarah Silverman is suing OpenAI and Meta after numerous copyrighted materials were allegedly found in the datasets used to train their AI software. This sits among the numerous other lawsuits commenced by content creators against OpenAI.
- AI Cannot Hold Copyright. Also on the subject of copyright, the U.S. District Court for the District of Columbia recently held in Thaler v. Perlmutter, No. 1:22-cv-01564 (D.D.C.), that AI is not capable of holding copyright in a work, ruling that human authorship is an essential prerequisite for a valid copyright to issue. This is in keeping with guidance issued by the U.S. Copyright Office in March of this year.
- Class Action Lawsuit for Non-Consensual Information Use. On a broader level, a recent class action accuses Google of gathering mass amounts of online information to train its AI models without consent or notice. The class action demands at least US$5 billion in restitution. In its defence, Google’s general counsel has asserted that its AI models have always used information from public sources and that using publicly available information for new beneficial uses is not against US law.
Conclusion and Takeaways
Countries around the world have taken different approaches in response to the rapid rise of AI. From Canada’s AIDA to the European Union’s comprehensive AI Act, diverse strategies are emerging to tackle the ethical, privacy, and safety dimensions of artificial intelligence. At the same time, the legal arena is witnessing a surge in AI-related lawsuits, revealing the privacy and intellectual property concerns that arise in this swiftly evolving landscape. Despite the uncertainties that remain, one thing is clear: artificial intelligence will continue to play an ever-increasing role in shaping our everyday lives, for better or worse.
If you have any questions about AI regulation, and how your company can prepare for upcoming changes, please contact a member of TRC-Sadovod’s Technology group.
[1] The 11 governance principles are: transparency; explainability; repeatability/reproducibility; safety; security; robustness; fairness; data governance; accountability; human agency and oversight; and inclusive growth, societal and environmental well-being.
by Robbie Grant, Robert Piasentin and Clifford Chuang (Summer Law Student)
A Cautionary Note
The foregoing provides only an overview and does not constitute legal advice. Readers are cautioned against making any decisions based on this material alone. Rather, specific legal advice should be obtained.
© TRC-Sadovod LLP 2023