Broader Trends: AI Regulation in Switzerland’s Financial Sector
FINMA’s new guidance is part of a broader trend towards more formal AI regulation in Switzerland. At present, Switzerland has no specific AI law or regulation in force. AI deployments are governed by general laws: for example, the revised Federal Act on Data Protection (FADP) covers personal data processed by AI systems, and sectoral regulations (financial, medical, and so on) apply to AI as they would to any other technology, on a principle-based basis. Swiss authorities have historically favored a technology-neutral approach, meaning existing laws are interpreted to address new technologies such as AI.
However, the Swiss government recognizes that AI’s rapid evolution may require targeted rules. The Federal Council (the Swiss government’s executive body) has made AI a focus of its Digital Switzerland Strategy and has commissioned expert studies on how to regulate AI. In fact, the Federal Council tasked the relevant agencies, led by the Federal Department of the Environment, Transport, Energy and Communications (DETEC), with identifying possible approaches to AI regulation by the end of 2024. The aim is to develop a Swiss regulatory framework for AI by 2025 that is aligned with international developments, particularly the European Union’s AI Act and the Council of Europe’s AI Convention. The government’s goal is to uphold fundamental rights and ethical standards in the use of AI while also promoting innovation and growth in the digital economy. This initiative indicates that dedicated AI regulation is likely in Switzerland’s future and will influence all industries, including finance.
In the financial sector specifically, regulators are increasingly active in setting expectations for AI. FINMA has been monitoring AI-related risks for some time: in its Risk Monitor 2023, for example, it highlighted AI as a growing concern under operational risks. The newly issued FINMA Guidance 08/2024 builds on those risk findings and provides interim governance principles without waiting for a new law to be enacted. FINMA’s approach demonstrates a preference for using existing regulatory powers (such as the requirements for proper business organization and risk management under financial market laws) to address AI challenges now, rather than leaving a regulatory void.
This Swiss development parallels trends in other jurisdictions. Around the world, financial regulators and lawmakers are grappling with how to ensure AI is used responsibly in finance. The European Union’s AI Act, which entered into force in August 2024 and applies in phases over the following years, classifies many financial AI systems (such as credit-scoring tools and potentially AML monitoring systems) as “high-risk,” imposing strict requirements on transparency, risk management, and human oversight. While Switzerland is not an EU member, its regulators intend to keep Swiss standards compatible with international best practices. We can anticipate that FINMA will continue refining its supervisory approach to AI as global standards take shape. In the coming years, what is now guidance could evolve into more formal rules or binding requirements, especially if incidents or the growing complexity of AI systems call for it.
For financial institutions in Switzerland, this broader regulatory trend means that the current FINMA guidance is likely just the beginning. Firms should not treat it as a one-off advisory, but rather as a sign of regulatory direction. By investing in AI governance today – building robust compliance and risk frameworks around AI – institutions will be better prepared for tomorrow’s regulations, whether those emerge as Swiss-specific AI laws, updated FINMA regulations, or alignment with EU norms. In short, AI regulation in the financial sector is poised to become more rigorous, and stakeholders should stay engaged with policy developments. Keeping an eye on FINMA communications, Federal Council reports, and international regulatory initiatives will be critical for compliance officers and strategists in the financial industry.