Swiss FINMA Issues Guidance on Governance and Risk Management in Artificial Intelligence Applications

On 18 December 2024, the Swiss Financial Market Supervisory Authority (FINMA) issued FINMA Guidance 08/2024, ‘Governance and risk management when using artificial intelligence’, addressed to supervised financial institutions that use artificial intelligence (AI). The guidance reflects FINMA’s focus on the operational, legal, and reputational risks associated with AI and on promoting robust oversight practices in Switzerland’s financial sector.

The rapid adoption of AI has transformed financial markets, presenting significant opportunities but also risks that are often difficult to evaluate. FINMA’s guidance aims to ensure that supervised institutions adopt a proactive approach to identifying, assessing, and managing risks associated with AI. This effort reflects FINMA’s commitment to maintaining the reputation of Switzerland’s financial centre while helping institutions safeguard their business models in an evolving technological landscape.

FINMA has observed that most institutions remain in the early stages of AI adoption, with governance and risk management frameworks still under development. The risks identified include model risks, such as a lack of robustness or explainability and the presence of bias, as well as data-related risks concerning security, quality, and availability. Additional concerns relate to IT and cyber risks, legal exposure, reputational harm, and growing dependencies on third-party AI providers. The guidance sits within Switzerland’s principle-based regulatory framework, which emphasises technology neutrality and proportionality so that Swiss institutions remain competitive.

Governance and Responsibility

FINMA observed that many supervised institutions focus heavily on data protection but often neglect critical model risks such as robustness, bias, and explainability. Additionally, decentralised development of AI applications frequently results in inconsistent standards and unclear accountability. FINMA’s guidance emphasises the importance of centralised governance structures, requiring supervised institutions to maintain comprehensive inventories of AI applications, define roles and responsibilities clearly, and implement training measures. For outsourced AI solutions, institutions must ensure rigorous due diligence and include contractual clauses addressing liability and responsibility.

Inventory and Risk Classification

Institutions often narrowly define AI applications, failing to account for the breadth of potential risks. FINMA stresses the need for a broad and inclusive definition of AI, encompassing traditional and generative models alike. Institutions are encouraged to create complete inventories of AI systems and classify risks based on the materiality and specific challenges posed by each application. This classification helps in identifying applications that require heightened risk management.
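By way of illustration, the sketch below shows one way an institution could represent such an inventory and a materiality-based classification in code. The fields, risk tiers, and classification rules are hypothetical assumptions chosen for the example; FINMA does not prescribe a particular schema.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers; FINMA does not prescribe specific categories."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIApplication:
    """One inventory entry for an AI application (traditional or generative)."""
    name: str
    owner: str                        # accountable unit, per clearly defined roles
    model_type: str                   # e.g. "credit scoring", "generative LLM"
    third_party_provider: str | None  # None if developed in-house
    affects_customers: bool
    risk_tier: RiskTier = RiskTier.LOW

def classify(app: AIApplication) -> RiskTier:
    """Toy materiality-based rule: escalate when outputs affect customers,
    or when the institution depends on an external provider."""
    if app.affects_customers:
        return RiskTier.HIGH
    if app.third_party_provider is not None:
        return RiskTier.MEDIUM
    return RiskTier.LOW

inventory = [
    AIApplication("transaction-monitoring", "Compliance", "anomaly detection",
                  None, affects_customers=False),
    AIApplication("client-chatbot", "Retail Banking", "generative LLM",
                  "ExampleVendor AG", affects_customers=True),
]
for app in inventory:
    app.risk_tier = classify(app)
    print(f"{app.name}: {app.risk_tier.value}")
```

A rule-based classification like this makes the escalation logic auditable, which supports the accountability that the guidance calls for.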

Data Quality

AI applications depend heavily on data integrity, but FINMA noted that some institutions lack controls to ensure the quality of their datasets. Poor-quality data, including biased or outdated information, can lead to systemic errors in AI outcomes. FINMA’s guidance calls for institutions to establish internal rules and directives to guarantee the completeness, accuracy, and relevance of data. Additional emphasis is placed on securing access to data and evaluating the suitability of datasets provided by third-party vendors.
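As an illustration of what such internal controls might look like in practice, the following sketch runs basic completeness and plausibility checks over a dataset. The checks and thresholds are illustrative assumptions, not requirements drawn from the guidance.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, required_columns: list[str],
                        max_missing_ratio: float = 0.05) -> dict:
    """Toy checks for completeness and accuracy-related red flags.
    The 5% missing-value threshold is an assumption, not FINMA-mandated."""
    report = {}
    # Completeness: required columns present, missing values under threshold
    report["missing_columns"] = [c for c in required_columns if c not in df.columns]
    report["columns_over_missing_threshold"] = [
        c for c in df.columns if df[c].isna().mean() > max_missing_ratio
    ]
    # Basic accuracy signals: duplicate records, degenerate (constant) columns
    report["duplicate_rows"] = int(df.duplicated().sum())
    report["constant_columns"] = [c for c in df.columns
                                  if df[c].nunique(dropna=True) <= 1]
    return report

# Example on a small synthetic frame
df = pd.DataFrame({"income": [50_000, None, 72_000], "age": [34, 41, 29]})
print(data_quality_report(df, required_columns=["income", "age", "country"]))
```

The same checks can be rerun on datasets supplied by third-party vendors before they are accepted into production.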

Tests and Ongoing Monitoring

Weaknesses in performance indicators, testing methodologies, and monitoring systems were observed across several institutions. FINMA advises scheduling rigorous tests to validate AI functionality, including stress tests and sensitivity analyses to assess robustness and stability. Institutions are also urged to define performance thresholds and monitor changes in input data to detect and address potential “data drift” over time. Regular manual reviews of system outputs are recommended to identify discrepancies and improve the reliability of AI applications.
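One widely used way to quantify such data drift, though not one mandated by FINMA, is the Population Stability Index (PSI), which compares the distribution of live inputs against a reference sample such as the training data. The sketch below shows the idea; the thresholds quoted in the comment are conventional industry rules of thumb, included here as assumptions.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               n_bins: int = 10) -> float:
    """PSI between a reference sample (e.g. training data) and live inputs.
    Common heuristic (not from FINMA): <0.1 stable, 0.1-0.25 monitor,
    >0.25 investigate."""
    # Bin edges taken from the reference distribution's quantiles
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the fractions to avoid division by zero and log(0)
    exp_frac = np.clip(exp_frac, 1e-6, None)
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)  # e.g. a feature at training time
live = rng.normal(0.4, 1.0, 10_000)       # the same feature, shifted in production
psi = population_stability_index(reference, live)
print(f"PSI = {psi:.3f}", "-> investigate" if psi > 0.25 else "")
```

Tracking a metric like this per input feature, alongside the performance thresholds and manual output reviews FINMA recommends, gives early warning before drift degrades model outcomes.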

Documentation

FINMA’s guidance mandates the preparation of documentation covering data selection, model performance, assumptions, limitations, and fallback mechanisms. Proper documentation enables institutions to ensure transparency, support internal reviews, and demonstrate compliance with regulatory requirements.
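As a hypothetical illustration, such documentation could be kept as a machine-readable record whose fields mirror the topics FINMA names; the schema and values below are assumptions for the example, not a regulatory template.

```python
import json
from datetime import date

# Illustrative documentation record covering the topics FINMA names:
# data selection, performance, assumptions, limitations, fallback mechanisms.
model_doc = {
    "application": "client-chatbot",
    "version": "1.3.0",
    "documented_on": date.today().isoformat(),
    "data_selection": "Anonymised support transcripts, 2022-2024; PII removed.",
    "performance": {"metric": "answer accuracy (human-rated)",
                    "value": 0.91, "threshold": 0.85},
    "assumptions": ["Queries are in German, French, Italian, or English."],
    "limitations": ["Not validated for investment advice."],
    "fallback": "Route to a human agent when confidence falls below threshold.",
}
with open("client-chatbot-v1.3.0.json", "w", encoding="utf-8") as f:
    json.dump(model_doc, f, indent=2, ensure_ascii=False)
```

Versioned records of this kind can feed directly into internal reviews and regulatory evidence trails.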

Explainability

FINMA discusses the importance of explainability in AI applications, particularly for decisions affecting investors, customers, or regulatory compliance. Institutions must ensure that AI-driven decisions can be understood, reproduced, and critically assessed. Explainability involves identifying the factors influencing AI outputs and evaluating their behaviour under varying conditions.
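The guidance does not prescribe any particular explainability method. One common, model-agnostic technique for identifying the factors that influence a model's outputs is permutation importance, sketched below on a synthetic model; the feature names and data are invented assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a scoring model; feature names are illustrative.
X, y = make_classification(n_samples=2_000, n_features=5, random_state=0)
feature_names = ["income", "tenure", "utilisation", "age", "num_products"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the
# drop in held-out score, revealing which inputs drive the model's output.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, mean_imp in sorted(zip(feature_names, result.importances_mean),
                             key=lambda t: -t[1]):
    print(f"{name}: {mean_imp:.3f}")
```

Because the ranking is computed on held-out data with a fixed random seed, the analysis is reproducible and can be critically assessed, in line with the expectations set out above.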

The guidance took effect immediately upon publication. Supervised institutions are expected to integrate the outlined measures into their risk management frameworks promptly. FINMA will assess compliance through ongoing reviews and may refine the guidance further in light of supervisory findings and evolving international standards.

(Sources: https://www.finma.ch/en/~/media/finma/dokumente/dokumentencenter/myfinma/4dokumentation/finma-aufsichtsmitteilungen/20241218-finma-aufsichtsmitteilung-08-2024.pdf?sc_lang=en&hash=AA85AC0A19240FFFA14E4692BF385651, https://www.finma.ch/en/news/2024/12/20241218-mm-finma-am-08-24/)