Source: AI Insight
Federal Reserve leadership appears to view generative AI as a “super-analyst” capable of supercharging the institution’s workflow.
Leaders at the Federal Reserve appear to believe that generative artificial intelligence (AI) tools can serve as “super-analysts” for banks and governments, capable of handling customer service and taking over coding work traditionally done by human programmers.
Fed Chief Innovation Officer Sunayna Tuteja recently participated in a fireside chat with Margaret Riley, senior vice president for payments in the Federal Reserve’s Financial Services division, at an AI Week event in Chicago.
The topic of the discussion was “Advancing Responsible AI Innovation at the Federal Reserve System.” According to financial news and analytics outlet Risk.net, Tuteja and Riley discussed five use cases for generative AI that the Fed is exploring: data cleansing, customer engagement, content generation, legacy code translation and operational efficiency improvements.
AI 'super-analyst'
Riley described generative AI’s overall potential as that of a “super-analyst” that could make life easier for Fed staff and could also act as a customer support expert, personalizing and enhancing the bank’s interactions with customers.
On the topic of “translating legacy code,” Tuteja seemed to lean toward the idea that large language models (LLMs) such as ChatGPT or similar AI products could take over some of the work traditionally done by humans:
“It’s hard to justify [hiring] coding developers to update all the old code to the new code, but now you can leverage LLMs and then the developers become reviewers or editors rather than the primary enforcers.”
Dangers and Drawbacks
Both were careful to emphasize that generative AI and LLMs have their limitations and that the use cases currently discussed are only exploratory.
While the risks of applying generative AI systems to accuracy-demanding technical fields like finance are well documented, Tuteja issued a stark warning about the downsides of not adopting these systems:
"We should consider all the risks of doing new things, but we should also ask ourselves: What are the risks of not doing something? Because sometimes the risks of inaction are greater than the risks of action, but the way forward must be responsible."