Korea FSC Introduces Guideline on the Use of Artificial Intelligence in Financial Services




The following “Financial AI Operation Guidelines” are an automatic translation of the original Korean text.

Purpose and Scope of Application

The guidelines promote AI in the financial field by enhancing reliability across the entire process of planning, design, evaluation, verification, introduction, operation, and monitoring involved in developing, commercializing, and using artificial intelligence (hereinafter 'AI') systems. They aim to strengthen customer trust in financial services.

The guidelines apply to financial companies, and to non-financial companies (hereinafter 'financial companies, etc.'), that directly or indirectly use, or intend to use, AI systems in providing financial services and financial products, or in providing financial-related services such as product recommendation and credit evaluation. They do not apply where the use of an AI system does not affect customers, such as internal employee management or simple work-efficiency tasks.

An AI system is one that, given a specific goal, acquires data to perceive its environment, interprets the acquired data, infers knowledge or processes information, and determines the best action, physical or digital, to achieve that goal.

Financial companies may adjust the scope of application of the guidelines within reason, comprehensively considering the characteristics of the services that use the AI system, the characteristics of their customers, and the number of customers those services reach.

Governance

Financial companies establish ethical principles and standards for the use of AI, taking into account the values the organization pursues and the main contexts in which AI is applied. To evaluate and manage the potential risks of AI systems, they define members' roles, responsibilities, and authorities in detail across the entire life cycle of the AI system. Financial companies may also establish a separate AI ethics committee to manage the organization's compliance with its AI ethical principles and standards.

Financial companies implement risk management policies to manage the potential risks of using AI: recognizing and evaluating the risks that may arise across the entire life cycle of the AI system, and reviewing ways to manage and minimize them.

The risk management policy covers operating the system to ensure consumer rights, managing AI models and training data, communicating with supervisory authorities when problems related to AI systems occur, and promoting a culture of AI responsibility within the company.

Financial companies shall carry out appropriate internal control activities and obtain approval when using AI systems for services that may pose serious risks to individual rights, interests, safety, and freedom, such as unfair discrimination against individuals (hereinafter 'high-risk services').

The person in charge of approval should hold a position of responsibility, and may concurrently hold a post with similar duties, such as chief risk management officer, credit information protection manager, or chief information security officer.

Planning and Design Stage of AI Systems

Financial companies should review whether the purpose of using the AI system conforms to their ethical principles, and evaluate the social, economic, and cultural impacts and potential harms that may arise from the use of AI in its context of use.

Financial companies can use the AI system flexibly in light of its purpose and characteristics and the characteristics of their customers. However, if the AI system fully replaces human decision-making, or replaces important decision-making, financial companies design the system so that humans can effectively supervise and control it and accountability is maintained.

Development Stage of AI Systems

Financial companies investigate and verify the source, quality, and bias of data used for AI training, and review methods for improving data quality, such as periodic data updates.

Financial companies shall take sufficient measures for the safe use of information, such as obtaining prior consent or applying de-identification measures, when the AI system uses sensitive information under Article 23 (1) of the Personal Information Protection Act and Article 18 of its Enforcement Decree, or similar information about a person's private life.

During data processing, the need to use the information is evaluated, and it is confirmed that there is no possibility of re-identification, leakage, or misuse of the information.
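As an illustration of such de-identification measures, a minimal sketch in Python follows; the record layout, salt handling, and generalization rules are illustrative assumptions, not part of the guideline:

```python
import hashlib

# Minimal de-identification sketch (hypothetical record layout):
# drop direct identifiers, pseudonymize the key with a salted hash,
# and generalize quasi-identifiers to reduce re-identification risk.

SALT = "replace-with-a-secret-salt"  # assumption: kept in a secrets store

def de_identify(record: dict) -> dict:
    out = dict(record)
    out.pop("name", None)                      # remove direct identifier
    out["customer_id"] = hashlib.sha256(
        (SALT + record["customer_id"]).encode()
    ).hexdigest()[:16]                         # pseudonymize the key
    out["age"] = (record["age"] // 10) * 10    # generalize to a 10-year band
    out["zip"] = record["zip"][:3] + "**"      # truncate the postal code
    return out

row = {"customer_id": "C1024", "name": "Kim", "age": 37, "zip": "06236"}
print(de_identify(row))
```

In practice the salt would be rotated and stored separately from the data, so that the pseudonymized key cannot be reversed by anyone holding only the dataset.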

When using an AI system for financial services that carry a duty of explanation to customers under related laws, or for high-risk services, financial companies consider explainability from the development stage and strive to evaluate and adopt suitable techniques.

Evaluation and Verification Stages of AI Systems

Financial companies determine an appropriate performance target level and performance metrics for the AI system by comprehensively considering their AI ethics principles, the purpose of the AI system, the degree of customer impact and potential harm from error cases, and the trade-offs between AI performance metrics.

Likewise, financial companies determine an appropriate fairness target level and fairness judgment indicators for the AI system by comprehensively considering their AI ethics principles, the purpose of the AI system, the degree of customer impact and potential harm as measured by fairness evaluation indicators, and the trade-offs between those indicators. If an imbalance is found according to the selected fairness judgment indicator, technical and managerial efforts are made to improve fairness.
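One common fairness judgment indicator of the kind described above is demographic parity: the difference in approval rates between customer groups. The sketch below is illustrative; the group labels and the 0.2 review threshold are assumptions, not values from the guideline:

```python
# Demographic parity gap: absolute difference in approval rates
# between two customer groups, one simple fairness indicator.

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in approval rates between groups A and B."""
    return abs(approval_rate(decisions_a) - approval_rate(decisions_b))

# 1 = approved, 0 = rejected
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # approval rate 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # approval rate 0.375

gap = demographic_parity_gap(group_a, group_b)
print(f"demographic parity gap: {gap:.3f}")  # 0.250

# A company-chosen target level, e.g. flag gaps above 0.2 for review.
if gap > 0.2:
    print("imbalance found: trigger fairness-improvement review")
```

Which indicator and target level are appropriate is exactly the trade-off decision the guideline leaves to each company.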

When using an AI system for financial services that carry a duty of explanation to customers under relevant laws and regulations, or for high-risk services, financial companies apply appropriate techniques, such as explainable AI, to check whether explanations suited to the context are produced, and strive to improve explainability to a reasonable level within a range that does not impair the stability and reliability of the AI system.
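One model-agnostic explanation technique of the kind the guideline gestures at is permutation importance: features whose shuffling degrades accuracy most matter most to the model. The toy scoring model and feature names below are illustrative assumptions:

```python
import random

# Permutation importance sketch: shuffle one feature column and
# measure how much the model's accuracy drops.

def model_score(income, debt_ratio, zip_digit):
    # toy credit model: income helps, debt hurts, zip_digit is ignored
    return 1 if (income * 0.5 - debt_ratio * 2.0) > 0 else 0

def accuracy(rows, labels):
    return sum(model_score(*r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature_idx, seed=0):
    rng = random.Random(seed)
    base = accuracy(rows, labels)
    column = [r[feature_idx] for r in rows]
    rng.shuffle(column)
    shuffled = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                for r, v in zip(rows, column)]
    return base - accuracy(shuffled, labels)  # accuracy drop = importance

rows = [(10, 1, 3), (2, 4, 7), (8, 1, 1), (1, 5, 9), (9, 2, 2), (2, 3, 8)]
labels = [model_score(*r) for r in rows]

for i, name in enumerate(["income", "debt_ratio", "zip_digit"]):
    print(name, round(permutation_importance(rows, labels, i), 3))
```

A feature the model never uses (here `zip_digit`) always gets importance zero, which is one sanity check that the explanation matches the model's actual behavior.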

AI system Introduction, Operation and Monitoring Stage

When operating a customer-facing AI system, financial companies shall notify customers of the remedies appropriate to the nature of the system: whether AI is used, the consumer rights granted by relevant laws, such as the right to an explanation and the right to object, and the methods for filing objections and complaints.

Financial companies periodically monitor the performance of the introduced AI system and check for opportunities to improve it, such as reviewing the need to re-train on new data.
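A common monitoring indicator for this kind of review is the Population Stability Index (PSI), which compares the score distribution the model was trained on with the distribution seen in production. The bins and the 0.2 alert threshold below are widely used conventions, not values from the guideline:

```python
import math

# Population Stability Index between a training-time and a live
# score distribution; larger values indicate more drift.

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """PSI over matching score bins (fractions should each sum to 1)."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

train_dist = [0.25, 0.25, 0.25, 0.25]   # score-bin shares at training time
live_dist  = [0.10, 0.20, 0.30, 0.40]   # score-bin shares in production

value = psi(train_dist, live_dist)
print(f"PSI = {value:.4f}")
if value > 0.2:  # a widely used rule of thumb for significant drift
    print("significant drift: review need for data re-learning")
```

A PSI near zero means the live population still resembles the training population; sustained high values are the usual trigger for the re-learning review the guideline calls for.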

Financial companies check whether the AI system is vulnerable to misuse or abuse, such as data contamination (poisoning) attacks and adversarial attacks by customers or third parties, and introduce measures to minimize such misuse and abuse within the range of available technology.
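One simple guard against data contamination is to screen incoming training records whose values deviate far from a trusted reference distribution before adding them to a re-learning dataset. The 3-sigma cutoff and the income example are illustrative assumptions; real poisoning defenses go well beyond this:

```python
import statistics

# Flag incoming records that deviate more than `cutoff` standard
# deviations from the reference distribution's mean.

def flag_outliers(reference, incoming, cutoff=3.0):
    mean = statistics.fmean(reference)
    stdev = statistics.stdev(reference)
    return [x for x in incoming if abs(x - mean) > cutoff * stdev]

reference_incomes = [40, 42, 38, 45, 41, 39, 43, 44, 40, 42]
incoming_batch = [41, 39, 900, 43]   # 900 looks like a poisoned record

print(flag_outliers(reference_incomes, incoming_batch))
```

Flagged records would then go to manual review rather than being silently dropped, since legitimate but unusual customers must not be discarded.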

Financial companies put procedures in place to receive regular notifications about security vulnerabilities in their AI development environments, such as open-source AI development frameworks, and strive to build the best possible security system.

Special Cases concerning AI System Consignment

When financial companies entrust the development and operation of AI systems to external organizations, the entrusted institutions shall develop and operate the AI systems in compliance with the guidelines and with the AI ethics principles and risk management policies prepared on the basis of the guidelines.

Financial companies establish and operate a periodic reporting and inspection system to ensure that the development and operation of AI systems by external institutions is carried out in accordance with their risk management policies. For high-risk services, strict pre-inspection should be performed, including preparing remedial measures and reporting procedures for cases of consumer damage.

Financial companies and the external institutions prepare clear liability clauses and damage-handling procedures in advance, to prevent delays in compensation when consumer damage arises from AI system development and operation.


Trending AI/ML Article Identified & Digested via Granola by Ramsey Elbasheer; a Machine-Driven RSS Bot
