Lee & Ko
Below, we introduce the key points and implications of the newly issued Financial Sector Artificial Intelligence Guidelines.
A study on artificial intelligence (AI) estimates that AI will increase global GDP by 14% by 2030. The high expectations for AI come as no surprise considering its future potential. For example, in 2016, AlphaGo, Google’s AI-based computer program, shocked the world by defeating 18-time world champion Lee Sedol in a historic match of Go. Research and investment in AI technologies with such great potential have continued, and the use of AI has permeated the financial industry as well. In finance, AI technologies are used for algorithmic trading, robo-advisor asset management services, customer support through chatbots and virtual assistants, and risk analysis/management and fraud detection, among other applications. AI also contributes significantly to reducing the costs of decision-making and data analysis.
However, AI is also undoubtedly flawed, limited by issues such as biases and lack of accuracy in the underlying data. Accordingly, countries around the world are taking steps to manage the issues and risks associated with the use of AI and are issuing non-binding guidelines to foster “Responsible AI” governance frameworks. The European Union, among others, is also seeking to introduce new regulations on AI.
In step with this trend, the Financial Services Commission (“FSC”) of Korea formed an AI Working Group for the financial sector in 2020 to discuss and research the current use of AI in the financial industry and the related risks and regulatory considerations, with the ultimate goal of producing guidelines on the use of AI in the financial sector. Accordingly, on July 8, 2021, the FSC issued the Financial Sector Artificial Intelligence Guidelines (the “Guidelines”). Below, we explain the key contents of the Guidelines.
Financial Sector Artificial Intelligence Guidelines
Core Values
The Guidelines adopt the following four core values, aimed at solidifying societal trust in AI-based financial services and promoting the creation of a sustainable environment for financial innovation.
Responsibility in the financial sector
The Guidelines promote company-wide efforts to recognize the risks that may arise from service development and operations and to control those risks in a manner suited to the purpose of the relevant financial service.
Accuracy and safety of AI training data
The Guidelines call for the strict management of the quality of AI training data and the strengthening of data protection systems for safe and efficient data management.
Transparency and fairness of AI financial services
The Guidelines advocate for the operation of AI services that align with common sense and social norms and meet reasonable and fair standards, with the goal of gaining the trust of financial consumers in the provision and management of AI-based financial services.
Protection of the rights of financial consumers
The Guidelines call for companies to support financial consumers in understanding services and exercising their rights without inconvenience, so that AI-based financial services are at least on par with non-AI-based services in terms of consumer understanding and the exercise of rights.
Key Content
Preparation of internal control mechanisms
The Guidelines call for the establishment of AI ethical principles for the responsible operation of AI services. Such ethical principles should take into account the service provider’s values and AI use cases and be observed in the development and operation of AI services. The Guidelines also advise that the role, responsibility, and authority of the AI organization that will evaluate and manage the potential risks of AI technologies be defined in detail across the entire service lifecycle (planning, design, operation, and monitoring). In addition, they suggest the establishment of an AI service self-evaluation and management policy and the application of strengthened risk management measures to services that may pose material risks to the rights of individuals.
Strengthened data protection
The Guidelines call for service providers to investigate and verify the source and quality of the data used for AI training, to screen for biases, to ensure that the data is current, and to take continual improvement measures. The Guidelines also require service providers to develop systems that prevent the misuse and abuse of personal credit information and to minimize the unnecessary processing of such information during AI development.
System risk management and enhancement of fairness
The Guidelines require that the various risk factors that may arise during the operation of an AI system be controlled in a manner fit for the purpose and characteristics of the relevant service, and that AI services be evaluated against fairness standards developed for each type of service, in order to prevent violations of fundamental rights such as discrimination against certain groups.
Sufficient explanation to consumers and support for the exercise of consumer rights
Financial consumers are to be notified in advance of the use of AI, and information on the rights and remedies (such as objections and complaints) of consumers is to be explained in a way that is easy for consumers to understand.
Applicability and plan of implementation
The Guidelines will take effect across the financial industry as a whole within 2021, with more specific practice guidelines to be prepared separately for each sector (i.e., banks, insurers, financial investment firms and other specific businesses). Where the use of AI in a particular type of service is determined to pose a high risk to the relevant consumers (e.g., a credit bureau’s use of AI to calculate credit scores or a digital finance company’s use of AI to recommend financial products), the Guidelines may be applied.
Furthermore, the FSC plans to announce an “AI Infrastructure Maintenance Plan” for the promotion and support of AI in the third quarter of 2021, after consulting with the AI Working Group. The plan includes the creation of a “Data Library” that provides access to AI development data and an “AI Test Bed” to support the development of AI services.
Implications
AI technologies have already transformed the face of our daily lives, and the financial sector is no exception. While nations and companies can tap into the power of AI to significantly enhance their competitiveness, AI is also rife with ethical issues, various risks arising from data use, and liability issues stemming from remote transactions, which cannot be overlooked. The Korean government’s introduction of self-regulatory standards concerning AI suggests a direction for the management of AI-related risks. The “AI Personal Information Protection Self-Checklist (For Developers and Operators)” issued on May 31, 2021 by the Personal Information Protection Commission and the “Basic Principles for Protection of Users of AI-Based Media Recommendation Services” issued on June 30, 2021 by the Korea Communications Commission also reflect this trend toward self-regulation.
Moving forward, those in the financial sector will have to develop substantive and workable practice guidelines based on the Guidelines, and financial firms utilizing AI will need to prepare in advance for potential liability in cases of consumer losses caused by the adoption of AI. Such changes will require the reform of various infrastructures and systems, and it will be important for stakeholders in the financial industry to get ahead of legal reforms and ensure that their voices are reflected in such reforms. This, of course, will require extensive legal review of relevant laws and regulations.
Made up of experts with a deep understanding of artificial intelligence, machine learning and data science in the context of finance, our Digital Finance Team has extensive expertise in AI-related legal and regulatory reform. Not only do we provide legal advice to our clients regarding AI matters, but we are also highly active in hosting seminars and conducting research on novel AI issues. Please feel free to contact the members of the Digital Finance Team if you need any legal assistance in connection with the Guidelines and other AI-related issues in the financial sector.
If you have any questions regarding this article, please contact us below:
Hwan Kyoung KO ([email protected])
Hyunkoo KANG ([email protected])
Chloe Jung-Myung LEE ([email protected])
For more information, please visit our website: www.leeko.com