AI in Financial Services: Joint OSFI and GRI Report Highlights Need for Safeguards and Risk Management as a Prelude to Enhanced OSFI Guidance
While rapid advancements in artificial intelligence (“AI”) have created opportunities for financial institutions in Canada, they have also underscored the need to mitigate the risks accompanying AI technology. To promote discussion on the responsible use of AI in the Canadian financial services industry, the Office of the Superintendent of Financial Institutions (“OSFI”) and the Global Risk Institute (“GRI”) partnered to create the Financial Industry Forum on Artificial Intelligence (“FIFAI”). FIFAI brought together a group of financial services experts from industry, government and academia to advance the conversation on best practices for AI risk management.
Drawing on FIFAI's discussions, OSFI and GRI released a joint report[1] on April 17, 2023 addressing the ethical, legal and financial implications of AI for the Canadian financial services industry (the "OSFI-GRI Report").
The EDGE Principles
The OSFI-GRI Report is organized into four areas identified by FIFAI as having the greatest importance to AI models: Explainability, Data, Governance and Ethics – the “EDGE” principles.
- Explainability enables customers of financial institutions to understand the reasons for an AI model’s decisions.
- Data leveraged by AI allows financial institutions to offer products and services that are targeted and tailored to their customers. It also improves fraud detection, risk analysis and management, operational efficiency, and decision-making.
- Governance ensures that financial institutions have the correct culture, tools and frameworks to support the realization of AI’s potential.
- Ethics encourages financial institutions to consider the larger societal effects of their AI systems.
1. Explainability
According to the OSFI-GRI Report, explainability should be considered at the outset of an AI model's selection and design. The appropriate level of explainability will be shaped by several factors, including what needs to be explained, the complexity of the model and who needs the explanation. For example, an explanation that is sufficient for a customer may be insufficient for a regulator or a data scientist. Explainability will also depend on the materiality of the particular use case. For instance, higher levels of explainability will be required for AI models used to make credit decisions than for AI models used in chatbots.
The OSFI-GRI Report further highlights the importance of disclosing adequate and relevant information on AI models to financial institution customers. At the same time, financial institutions should ensure that AI-related disclosure does not undermine their cyber security or competitive advantage.
2. Data
Although financial institutions have been working with data for a long time, the integration of AI into their operations has presented challenges for managing and utilizing data. AI models have the ability to process massive amounts of data, which makes maintaining high data quality difficult. Financial institutions must also ensure that there are adequate measures in place to protect sensitive personal and financial information. The OSFI-GRI Report emphasizes that these challenges can be alleviated through sound data governance.
3. Governance
Model risk management came into focus in 2017 with OSFI’s introduction of Guideline E-23: Enterprise-Wide Model Risk Management for Deposit-Taking Institutions. The increasing use of AI models, which pose many of the same risks as traditional models, has prompted financial institutions to consider how to factor AI into their governance frameworks. The OSFI-GRI Report identifies the following elements as essential for good governance of AI in financial institutions:
- it should be holistic and encompass the entire organization;
- roles and responsibilities should be clearly articulated;
- it should include a well-defined risk appetite; and
- it should have the flexibility to pivot where required as new systems and risks emerge.
4. Ethics
Ethics is a concept that is both subjective and nuanced. Addressing AI ethics is difficult because ethical standards evolve over time. In addition, societal expectations that financial institutions maintain high ethical standards are only increasing. Given these challenges, the OSFI-GRI Report stresses the importance of financial institutions maintaining transparency by disclosing how their AI models meet high ethical standards.
Conclusion
The insights and discussion from FIFAI have highlighted the need to set strong regulations surrounding AI while ensuring that financial institutions continue to evolve and remain competitive. FIFAI has also demonstrated the importance of collaboration and a desire for ongoing dialogue about the safe integration and use of AI in the Canadian financial services sector. Regulatory guidance on AI is expected soon from OSFI, which is set to release an enhanced draft of its Guideline E-23 for public consultation later in 2023 to, among other things, address the emerging risks of models that use advanced analytics (including AI and machine learning). The enhanced Guideline is expected to apply equally to federally regulated deposit-taking institutions, federally regulated insurance companies and federally regulated pension plans.
[1] Office of the Superintendent of Financial Institutions, “Financial Industry Forum on Artificial Intelligence: A Canadian Perspective on Responsible AI” (April 2023).
by Darcy Ammerman and Srinidhi Akkur (Articling Student)
A Cautionary Note
The foregoing provides only an overview and does not constitute legal advice. Readers are cautioned against making any decisions based on this material alone. Rather, specific legal advice should be obtained.
© TRC-Sadovod LLP 2023