When AI Takes the Helm in Central Banking

TL;DR:

  • Central banks are increasingly adopting artificial intelligence (AI) for efficiency and cost reduction.
  • AI excels in tasks with clear objectives and well-defined rules, making it well suited to routine operations, monitoring, and rule-based decisions.
  • AI has the potential to outperform humans in areas like risk management and regulatory compliance.
  • However, AI’s effectiveness diminishes as objectives become less defined and events become infrequent.
  • Complex decisions, such as responding to financial crises, still require human expertise.
  • AI lacks the ability to reason, explain itself, or replicate the judgment of experienced human decision-makers.
  • Central banks must balance the roles of AI and human decision-makers to optimize efficiency while ensuring accountability and upholding ethical and political standards.

Main AI News:

The rapid deployment of artificial intelligence (AI) in central banks promises greater efficiency and lower costs. As AI engines begin to assume central banking roles, questions arise about how much can be outsourced to AI while preserving the authority of human decision-makers. To navigate this terrain, senior decision-makers must recognize the differences between AI-generated advice and that produced by human specialists, and adapt their human resource policies and organizational structures to get the most out of AI without jeopardizing the institution’s mission.

Central banks, known for their conservative nature, are embracing AI only slowly, and adoption remains modest compared with private-sector financial institutions. Nevertheless, it seems inevitable that AI will gradually assume more significant roles in central banking, which raises the question of how responsibilities should be divided between AI and human agents.

At first glance, the domain of central banks, namely the economy and the financial system, appears ideal for AI. These realms generate vast amounts of data for AI to train on: every financial decision is recorded to the microsecond, interactions between traders and key decision-makers are extensively documented, and central banks have access to granular economic data. However, data do not automatically equate to information. The signals that matter most for future crises or inflationary episodes may lie outside the data that have been observed.

To grasp the implications of AI in central banking, it is helpful to consider the capabilities and limitations of AI along a continuum. AI excels when confronted with problems that possess well-defined objectives, immutable rules, and a finite and known action space—much like the game of chess. In such cases, AI often outperforms humans, even generating its own training datasets without the need for external data.

For central banks, this means that routine operations, monitoring, and decisions (such as enforcing microprudential rules, overseeing payment systems, and monitoring economic activity) can be handled effectively by AI. With abundant data, clear rules and objectives, and frequently repeated events, AI is an ideal fit. The private sector has already shown the way: BlackRock’s AI-powered Aladdin is one of the world’s leading risk management engines, and AI-driven “RegTech” regulators are gaining prominence. Initially, central banks may value AI as a collaborator that helps human staff handle more tasks without changing staffing levels. Over time, however, they may fully embrace the cost savings and superior decision-making that AI provides, potentially reducing the number of human employees. This is already feasible with today’s AI technology.

However, as objectives become less defined, events grow infrequent, and the action space turns fuzzy, AI gradually loses its advantage. There is less information to train on, and the analysis depends on knowledge from outside the AI’s training dataset. This is the situation in higher-level economic analysis, such as forecasting risk, inflation, and other economic variables. These tasks require comprehensive knowledge of data, statistics, programming, and economics, skills typically possessed by economists at the PhD level. While AI may eventually outperform human staff in these activities, the current state of the technology still requires significant human input.

In extreme cases, such as responding to financial crises or rapidly escalating inflation, human decision-makers hold the advantage. These events are infrequent, so information is scarce, expert advice is contradictory, and the action space is unknown. Humans’ capacity for abstract reasoning allows them to set objectives under such circumstances, whereas AI struggles to formulate appropriate responses to unprecedented situations and is therefore more likely to be outperformed by its human counterparts.

Errors made in these critical situations can have catastrophic consequences. In the 1980s, an AI system named EURISKO outmaneuvered human competitors in a naval wargame in part by sinking its own slowest ships, a tactic few human commanders would contemplate. The episode illustrates the inherent challenge of AI: it pursues the objective it is given, and ensuring that it consistently makes the right decision is hard when the rules do not anticipate every situation. Human decision-makers also make mistakes, but a lifetime of experience and multidisciplinary knowledge lets them react to unforeseen circumstances and adhere to political and ethical standards without explicit guidelines. Unlike AI, human decision-makers possess individual worldviews shaped by their unique experiences, and group decisions made by people with diverse perspectives are often more robust than the output of a single AI engine. Current AI technology cannot replicate such group consensus decision-making.

Moreover, when humans are in charge of vital domains, it is possible to explore hypothetical scenarios with them and demand justifications for their decisions. Human decision-makers can be held accountable: they testify before Senate committees and face consequences such as termination, punishment, incarceration, and reputational damage. AI, by contrast, cannot reason about its choices, explain itself, or understand the consequences of its actions. An AI engine can nominally be held accountable, but it remains indifferent to such pressures.

Conclusion:

The increasing integration of AI in central banking offers significant opportunities for efficiency gains and cost reduction, but a careful balance must be struck between AI and human decision-makers. While AI can excel at routine tasks and data analysis, complex and unprecedented situations still demand human expertise. Central banks should adapt their organizational structures and human resource policies to maximize the benefits of AI while preserving the vital role of human decision-makers. Combining AI with human decision-making will improve operational efficiency and risk management, but human judgment and accountability will remain essential for the most critical decisions.

Source