
Lack of transparency in reasoning behind AI-driven decisions

Posted December 16, 2023 09:06

Updated December 16, 2023 09:06


U.S. financial authorities on Thursday (local time) identified AI technology as a potential threat to the financial sector, warning that AI could inflict substantial damage on the financial system and consumers, much as cyberattacks and climate change do.

U.S. financial regulators have recently stepped up efforts to regulate AI technologies. For example, the U.S. Securities and Exchange Commission (SEC) launched a sweeping inspection of how large Wall Street firms adopt AI-driven software. This trend carries significant implications for the South Korean financial sector, which has yet to come up with a blueprint for monitoring AI technologies.

U.S. Treasury Secretary Janet Yellen presided over a Financial Stability Oversight Council (FSOC) meeting on Thursday, at which the council issued its annual report identifying 14 potential risks to the financial community, including AI programs. She noted that the council labeled AI applied to financial services "an emerging vulnerability" for the first time.

The report described AI as a technology lacking explainability. Computing systems that predate AI-driven software follow a clear, transparent path from input to output. By contrast, self-taught AI models offer no visibility into how their results are derived, much like a black box whose internal mechanism is too complicated to decode. Critics worry that in an AI-centered framework, bias and accuracy issues can arise and be swept under the rug.

There is also ongoing controversy over the reliability of the databases AI programs draw on. The FSOC pointed out that data bias may follow when AI produces outcomes based on a massive pool of unsourced or unfiltered data. If an AI program recommends a particular financial product based on defective datasets, it is consumers who will bear the consequences of such a poorly informed decision.

A racially biased AI system could likewise make unfavorable decisions against particular groups of consumers in the loan approval process. World-famous futurist Jason Schenker, chairman of the Futurist Institute, who joined the 2023 Dong-A New Centennial Forum, commented that AI programs trained on large pools of data can produce monotonous and distorted outputs, adding that they can also leak sensitive or classified information.

There are growing concerns that the South Korean financial sector lacks momentum in setting up a regulatory framework. Although most financial firms have AI-driven services in place, they do not appear keenly interested in preventing AI risks to financial activities. Oh Soon-young, Managing Director of KB Kookmin Bank's Financial AI Center, said, "Generative AI models are so sophisticated that we cannot distinguish them from humans. Ethical and legal issues automatically follow."


Hyoun-Soo Kim kimhs@donga.com