11 March 2025

FINMA Issues New Guidance on AI Governance and Risk Management in Finance

The Swiss Financial Market Supervisory Authority (FINMA) has released new guidance on the governance and risk management of artificial intelligence (AI) in the financial sector. FINMA Guidance 08/2024, published in December 2024, underscores the accelerating adoption of AI in finance and the need for robust oversight as financial institutions apply AI to finance and accounting tasks. The guidance highlights key AI-related risks and sets out FINMA’s supervisory expectations, marking an important development in Switzerland’s regulatory framework for AI. Below we provide an overview of FINMA’s guidance, discuss the implications for financial institutions operating in Switzerland, and examine the broader trend of AI regulation in the Swiss financial sector.

Overview of FINMA’s AI Guidance and Key Risks

FINMA’s guidance draws attention to a range of risks associated with the use of AI in financial services. The rapid adoption of AI in banking, insurance, and other finance areas brings risks that are often difficult to assess.

FINMA identifies several key AI-related risks that financial institutions must manage:

Operational Risks (Model Risks):

AI models can introduce operational risk, particularly model risk. This includes concerns about the robustness and correctness of AI systems, their reliability, and issues like lack of explainability or bias in algorithms. For example, if an AI-driven credit scoring system lacks robustness or is biased, it could produce incorrect results, leading to faulty decisions and compliance problems.

Data-Related Risks:

AI systems depend on large volumes of data. FINMA warns of risks related to data security, data quality, and data availability. Poor data quality or insecure data handling can undermine AI outputs and create privacy or accuracy issues. Ensuring high-quality, representative data and protecting that data (in compliance with data protection laws) is crucial for AI governance.

IT and Cyber Risks:

As AI is typically software-driven and often integrated into IT systems, it heightens traditional IT and cyber risks. AI applications might introduce new vulnerabilities or be targets for cyber attacks. Robust cybersecurity measures and IT controls are necessary to safeguard AI systems, especially when AI is used for critical finance functions (e.g. algorithmic trading or automated financial reporting).

Third-Party Dependency Risks:

Many AI solutions involve third-party providers (such as cloud services, AI platforms, or external data sources), and FINMA notes this increasing dependency on third parties as a risk factor. Over-reliance on external AI vendors or cloud providers can concentrate risk and reduce an institution’s direct control. If a vendor fails to perform or suffers a security breach, the financial institution could face operational disruptions or compliance breaches. FINMA’s guidance implies that institutions should conduct due diligence and oversight of AI vendors and consider contingency plans.

Legal and Reputational Risks:

The use of AI can give rise to legal liabilities and reputational damage if not properly governed. For instance, if an AI-driven decision process inadvertently discriminates against certain customers or violates privacy rules, the institution could face regulatory enforcement and public backlash. FINMA explicitly flags legal risks (e.g. compliance with laws, liability for AI decisions) and reputational risks as important considerations alongside technical risks. Financial institutions must anticipate how AI outcomes could lead to customer complaints, regulatory scrutiny, or litigation, and implement controls to mitigate these risks.


FINMA’s overview makes clear that AI risks are multi-faceted – spanning operational integrity, data governance, cybersecurity, outsourcing, compliance, and reputation. These risks are interrelated; for example, a lack of AI transparency (an operational/model risk) could lead to regulatory issues (legal risk) or erosion of customer trust (reputational risk). By identifying these core risk categories, FINMA sets the expectation that any use of AI for finance must be accompanied by corresponding risk management measures.

FINMA’s Observations: Early Stages of AI Governance

A notable insight from FINMA’s guidance is that many Swiss financial institutions are still in the early stages of AI governance and risk management. Based on its supervisory reviews, FINMA observed that most institutions have only begun developing AI use cases and that formal governance structures for AI are still being established. In other words, while interest in AI applications (like AI for accounting automation, customer service chatbots, or fraud detection) is growing, the maturity of internal controls and oversight lags behind.

FINMA’s guidance serves as a caution that institutions cannot delay implementing AI governance. FINMA is drawing attention to the need for proper identification, assessment, management, and monitoring of AI-related risks as part of every firm’s risk management framework. Boards and senior management are expected to understand the AI systems in use and ensure that risk controls are in place. This includes clear internal policies on AI deployment, defined roles and responsibilities for overseeing AI projects, and integration of AI risks into existing risk control systems (such as operational risk registers and internal control systems).

The supervisory notice also shares some observed good practices and measures from FINMA’s ongoing oversight. Although FINMA’s guidance does not impose new binding rules, it outlines practical steps firms should consider in strengthening AI governance. For example, financial institutions should maintain an inventory of all AI models and tools in use, including details about their purpose, data inputs, and outputs. Creating such an inventory helps in understanding the scope of AI usage and ensures nothing escapes risk oversight. Additionally, FINMA emphasizes that responsibilities cannot be abdicated to AI or third parties – humans at the institution remain accountable for decisions facilitated by AI. Therefore, clear lines of responsibility and sufficient expertise among staff are necessary so that employees can question and validate AI outcomes. Regular training on AI ethics and risk for staff, robust documentation of AI models, and ongoing model validation and monitoring are other measures aligned with FINMA’s expectations.
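
To make the inventory point concrete, the following is a minimal sketch in Python of the kind of record an institution might keep for each AI system. The field names and the example entry are purely illustrative assumptions on our part; FINMA prescribes no particular format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AIModelRecord:
    """One entry in an institution-wide inventory of AI systems.

    Fields mirror the details FINMA's guidance suggests tracking
    (purpose, data inputs, outputs); the names and extra fields are
    our own illustration, not a regulatory template.
    """
    name: str                          # e.g. "retail-credit-scoring-v3"
    purpose: str                       # business function the model supports
    owner: str                         # accountable person or role, never "the AI"
    data_inputs: list[str] = field(default_factory=list)  # data sources feeding the model
    outputs: str = ""                  # what the model produces and who consumes it
    third_party_provider: Optional[str] = None  # vendor or cloud dependency, if any
    last_validated: Optional[date] = None       # date of most recent model validation
    risk_rating: str = "unassessed"    # e.g. low / medium / high

# The inventory itself can be as simple as a list of records that risk
# and compliance functions can query, e.g. for models never validated.
inventory: list[AIModelRecord] = [
    AIModelRecord(
        name="retail-credit-scoring-v3",
        purpose="Credit decisioning for retail loan applications",
        owner="Head of Retail Credit Risk",
        data_inputs=["core banking system", "credit bureau feed"],
        outputs="Score 0-1000, consumed by the loan origination workflow",
        third_party_provider="External ML platform (cloud)",
        last_validated=date(2024, 11, 1),
        risk_rating="high",
    )
]

never_validated = [m.name for m in inventory if m.last_validated is None]
```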

In summary, FINMA’s observation of the early stage of AI governance is a call to action: financial institutions should proactively build their AI risk management capabilities now. Those who fail to do so may find themselves exposed – both to the intrinsic risks of AI failures and to regulatory intervention if FINMA deems their governance inadequate. Given that FINMA’s mandate includes ensuring that supervised institutions have proper organization and risk controls, this guidance foreshadows more scrutiny on AI use in future supervisory reviews.

Implications for Financial Institutions Leveraging AI in Finance and Accounting

For banks, insurers, asset managers, and other financial institutions operating in Switzerland, FINMA’s AI guidance has significant implications. Any institution leveraging AI for finance and accounting processes should evaluate its current governance framework against FINMA’s recommendations and supervisory expectations:

Risk Assessment and Internal Controls:

Firms should immediately assess how AI applications impact their risk profile. This means identifying every AI system in use (from automated portfolio management tools to AI-assisted accounting software) and evaluating the risks each brings. Existing risk management and internal control frameworks (ICT controls, model risk management policies, etc.) should be updated to incorporate AI-specific risk factors. For example, if a bank uses AI for credit scoring, the model risk policy should address AI model validation, bias testing, and outcome monitoring on an ongoing basis.
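
By way of illustration, ongoing outcome monitoring for such a credit-scoring model could include a simple disparity check over recent decisions. The sketch below is hypothetical: the grouping, the metric, and the 0.8 threshold are policy choices we have assumed for the example, not FINMA requirements.

```python
from collections import defaultdict

def approval_rate_disparity(decisions: list[tuple[str, bool]]) -> float:
    """Ratio of the lowest to the highest approval rate across groups,
    a crude 'disparate impact' style metric. `decisions` holds
    (group_label, approved) pairs; how applicants are grouped is an
    assumed policy choice for illustration.
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values())

# Hypothetical monitoring rule: escalate if the lowest-approved group's
# rate falls below 80% of the highest group's rate.
recent = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
if approval_rate_disparity(recent) < 0.8:
    print("Disparity threshold breached - escalate to model risk committee")
```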

Governance and Oversight:

Robust governance structures must be put in place for AI oversight. FINMA expects clear assignment of responsibility – senior management and possibly the board should oversee an “AI governance” program or working group. This oversight body should establish policies on AI development and use (covering data governance, model development standards, and ethical guidelines). It should also ensure compliance with applicable laws (such as data protection or anti-discrimination laws) when deploying AI. Importantly, decisions made by AI should remain subject to human review and accountability. Financial institutions cannot simply blame “the algorithm” if things go wrong; they need documented processes for human intervention and review of AI-driven decisions.

Operational Resilience and Third-Party Management:

Given the IT and third-party risks highlighted by FINMA, institutions using AI must also shore up their operational resilience. This includes conducting robust due diligence on AI vendors and service providers and monitoring their performance. Contractual arrangements with AI providers should include provisions for data security, service levels, and clear liability in case of failures. Additionally, contingency plans are needed in case an AI service fails or a model produces erroneous results – the institution should be able to revert to manual processes or backups to continue operations. FINMA’s existing outsourcing and operational risk guidelines (e.g., FINMA Circular on operational risks) apply here: the AI guidance reinforces that firms should approach AI-related outsourcing with the same rigor as any critical outsourcing.
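
As a schematic of what such a contingency plan can look like in practice, the sketch below wraps a call to a hypothetical external scoring service with a sanity check and a rule-based fallback. The vendor client, the plausibility bounds, and the fallback rule are all assumptions for illustration, not a real API.

```python
import logging

logger = logging.getLogger("ai_contingency")

def score_with_fallback(application: dict, ai_client) -> dict:
    """Try the external AI service first; on any failure or an
    implausible result, degrade gracefully to a simple in-house rule
    so the business process keeps running and the case is flagged for
    manual review. `ai_client` stands in for a hypothetical vendor SDK.
    """
    try:
        score = ai_client.score(application)  # hypothetical vendor call
        if not 0 <= score <= 1000:            # sanity-check the output range
            raise ValueError(f"implausible score {score}")
        return {"score": score, "source": "ai_vendor"}
    except Exception as exc:
        # Log for incident review, then fall back rather than halt operations.
        logger.warning("AI scoring failed (%s); using rule-based fallback", exc)
        income = application.get("annual_income", 0)
        debt = application.get("existing_debt", 0)
        score = 700 if income > 3 * debt else 300  # crude placeholder rule
        return {"score": score, "source": "rule_fallback", "needs_review": True}
```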

Legal Compliance and Ethical Use:

Financial institutions must consider the legal and ethical implications of AI in finance. This means ensuring that AI-driven decisions comply with anti-discrimination laws, consumer protection rules, and data privacy requirements. For instance, if AI is used in loan approvals or insurance underwriting, firms should guard against unjustified biases that could lead to discriminatory outcomes – a risk FINMA explicitly flags. Institutions should implement testing for bias and fairness in AI models and be prepared to explain AI decisions to customers or regulators (the expectation of explainability). Documentation of how AI models reach their decisions and why those outputs are relied upon in business processes will be critical, especially if FINMA or an external auditor asks for evidence of how an AI system functions and is controlled.
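
Explainability is easiest to achieve where the model itself is simple. For a linear scorecard, for instance, the score decomposes exactly into per-feature contributions that can be logged with each decision and produced for a customer or regulator on request. The sketch below assumes such a linear model; the weights and feature names are illustrative only.

```python
def explain_linear_score(weights: dict[str, float],
                         features: dict[str, float],
                         bias: float = 0.0) -> dict:
    """For a linear scorecard, score = bias + sum(weight * value), so
    each feature's contribution is exact and auditable. Weights and
    feature names here are illustrative only.
    """
    contributions = {k: weights[k] * features.get(k, 0.0) for k in weights}
    score = bias + sum(contributions.values())
    # Rank so the audit record leads with the most influential features.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return {"score": score, "contributions": ranked}

record = explain_linear_score(
    weights={"income": 0.002, "missed_payments": -50.0},
    features={"income": 80_000, "missed_payments": 2},
    bias=400,
)
# -> {'score': 460.0, 'contributions': [('income', 160.0), ('missed_payments', -100.0)]}
```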

Reputation Management:

The reputational stakes are high when deploying AI. A well-publicized AI failure (for example, an AI chatbot that gives inappropriate financial advice or a trading algorithm that malfunctions) can damage customer trust. FINMA’s mention of reputational risk signals that firms should include public perception and customer communication in their AI risk strategies. This might involve transparent disclosures to clients when AI is used (where appropriate) and having a communications plan to address any AI-related incidents promptly. Building a strong risk culture around AI – where employees at all levels understand the importance of safe and ethical AI use – will help protect the institution’s reputation in the long run.


In practical terms, FINMA’s guidance means that Swiss financial institutions should treat AI initiatives with the same level of rigor as any other significant risk-bearing activity. Whether using AI for financial analytics, robo-advisory services, automated transaction monitoring, or accounting automation, firms are expected to embed AI risk management into their day-to-day operations. Institutions that are ahead of the curve in establishing AI governance frameworks will not only satisfy regulators but also likely gain a competitive advantage by safely harnessing AI innovations. Conversely, institutions that delay implementing the guidance may face regulatory scrutiny or enforcement if AI-related incidents occur. FINMA has the authority to take action under existing financial market laws requiring proper organization and risk control, so non-compliance with the spirit of this guidance could be viewed as a breach of those general obligations.

Broader Trends: AI Regulation in Switzerland’s Financial Sector

FINMA’s new guidance is part of a broader trend towards more formal AI regulation in Switzerland. As of now, Switzerland has no specific AI law or regulation in force. AI deployments are currently governed by general laws – for example, the revised Federal Act on Data Protection (FADP) covers personal data used in AI, and sectoral regulations (financial, medical, etc.) apply to AI as they do to any other technology, on a principles basis. Swiss authorities have historically favored a technology-neutral approach, meaning existing laws are interpreted to address new technologies such as AI.

However, the Swiss government recognizes that AI’s rapid evolution may require targeted rules. The Federal Council (the Swiss government’s executive body) has made AI a focus of its Digital Switzerland Strategy and has already commissioned expert studies on how to regulate AI. In fact, the Federal Council tasked the relevant agencies (led by the Federal Department of the Environment, Transport, Energy and Communications (DETEC)) with identifying possible approaches for AI regulation by the end of 2024. The aim is to develop a Swiss regulatory framework for AI by 2025 that is aligned with international developments, particularly the European Union’s AI Act and the Council of Europe’s AI Convention. The government’s goal is to uphold fundamental rights and ethical standards in AI use while also promoting innovation and growth in the digital economy. This initiative indicates that dedicated AI regulation is likely in Switzerland’s future and will influence all industries, including finance.

In the financial sector specifically, regulators are increasingly active in setting expectations for AI. FINMA has been monitoring AI-related risks for some time – for example, in its annual Risk Monitor 2023, FINMA highlighted AI as a growing concern under operational risks. The newly issued guidance 08/2024 builds on those risk findings and provides interim governance principles without waiting for a new law to be enacted. FINMA’s approach demonstrates a preference for using existing regulatory powers (such as the requirement for proper business organization and risk management under financial market laws) to address AI challenges now, rather than leaving a regulatory void.

This Swiss development parallels trends in other jurisdictions. Around the world, financial regulators and lawmakers are grappling with how to ensure AI is used responsibly in finance. The European Union’s AI Act, which entered into force in August 2024 and applies in stages, is expected to classify many financial AI systems (like credit scoring or AML monitoring tools) as “high-risk,” imposing strict requirements on transparency, risk management, and human oversight. While Switzerland is not an EU member, its regulators intend to keep Swiss standards compatible with international best practices. We can anticipate that FINMA will continue refining its supervisory approach to AI as global standards take shape. In the coming years, what is now guidance could evolve into more formal rules or binding guidelines, especially if incidents or the complexity of AI systems call for it.

For financial institutions in Switzerland, this broader regulatory trend means that the current FINMA guidance is likely just the beginning. Firms should not treat it as a one-off advisory, but rather as a sign of regulatory direction. By investing in AI governance today – building robust compliance and risk frameworks around AI – institutions will be better prepared for tomorrow’s regulations, whether those emerge as Swiss-specific AI laws, updated FINMA regulations, or alignment with EU norms. In short, AI regulation in the financial sector is poised to become more rigorous, and stakeholders should stay engaged with policy developments. Keeping an eye on FINMA communications, Federal Council reports, and international regulatory initiatives will be critical for compliance officers and strategists in the financial industry.

Conclusion and Next Steps: Navigating AI Governance with Confidence

FINMA’s newly issued guidance on AI governance and risk management is a clear message that AI in finance must be handled with care, diligence, and accountability. Financial institutions operating in Switzerland should act now to review their AI strategies: identify gaps in governance, bolster their risk management processes, and ensure compliance with FINMA’s expectations. Embracing these measures will not only satisfy regulatory scrutiny but also help institutions leverage AI in a safe and sustainable manner, protecting both their clients and their own business models.

At Goldblum and Partners, we are closely monitoring these regulatory developments and their impact on the financial sector. Our team can assist clients in understanding and implementing FINMA’s AI guidance, from conducting AI risk assessments to developing compliant governance frameworks tailored to AI for finance applications. We combine deep knowledge of Swiss financial regulations with insights into technology law, enabling us to help clients craft proactive strategies for AI compliance and innovation. In light of the evolving regulatory landscape, seeking legal counsel early can ensure your institution stays ahead of the curve and turns regulatory compliance into a competitive advantage.

Source: FINMA

Disclaimer: This article is provided for general informational purposes and does not constitute legal advice or create an attorney-client relationship. For advice on specific situations regarding AI regulations and compliance, please contact Goldblum and Partners.

Contact Information

For more information about our services, the current legislation, or to discuss a particular case, please contact:

Stockerstrasse 45,
8002 Zurich

Baarerstrasse 25,
6300 Zug