A notable insight from FINMA’s guidance is that many Swiss financial institutions are still in the
early stages of AI governance and risk management. Based on its supervisory reviews, FINMA observed that most institutions have only begun developing AI use cases and that formal governance structures for AI are still being established. In other words, while interest in AI applications (such as accounting automation, customer-service chatbots, and fraud detection) is growing, the maturity of internal controls and oversight lags behind.
FINMA’s guidance is a caution that institutions cannot afford to delay implementing AI governance. FINMA is drawing attention to the need for proper identification, assessment, management, and monitoring of AI-related risks as part of every firm’s risk management framework. Boards and senior management are expected to understand the AI systems in use and to ensure that risk controls are in place. This includes clear internal policies on AI deployment, defined roles and responsibilities for overseeing AI projects, and the integration of AI risks into existing control frameworks (such as operational risk registers and internal control systems).
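To make this concrete, the sketch below shows how an AI-related risk might be recorded in an operational risk register. This is a minimal illustration under assumed conventions: the field names, risk taxonomy, and example entry are hypothetical and are not prescribed by FINMA; each institution would adapt the schema to its own framework.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class AIRiskEntry:
    """One entry in an operational risk register covering an AI system.

    Illustrative schema only; field names are assumptions, not a FINMA format.
    """
    risk_id: str
    ai_system: str        # which AI model or tool the risk relates to
    description: str      # what could go wrong
    severity: Severity
    owner: str            # accountable human owner: never the AI or a vendor
    controls: list[str] = field(default_factory=list)  # mitigating controls
    last_reviewed: date = field(default_factory=date.today)


# Hypothetical example: a chatbot risk with a named owner and defined controls.
entry = AIRiskEntry(
    risk_id="OR-AI-001",
    ai_system="customer-service-chatbot",
    description="Chatbot gives clients incorrect product or fee information",
    severity=Severity.HIGH,
    owner="Head of Retail Operations",
    controls=["human review of flagged conversations",
              "scripted fallback answers"],
)
print(entry)
```

Recording each AI risk with an explicit human owner and a list of controls is one way to satisfy the expectation that AI risks sit inside the same register and review cycle as other operational risks.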
The supervisory notice also shares good practices and measures observed through FINMA’s ongoing oversight. Although the guidance does not impose new binding rules, it outlines practical steps firms should consider to strengthen AI governance. For example, financial institutions should maintain an inventory of all AI models and tools in use, including details about their purpose, data inputs, and outputs. Such an inventory clarifies the scope of AI usage and ensures that no application falls outside risk oversight. FINMA also emphasizes that responsibility cannot be abdicated to AI systems or to third parties; humans at the institution remain accountable for decisions facilitated by AI. Clear lines of responsibility and sufficient in-house expertise are therefore necessary so that staff can question and validate AI outcomes. Regular staff training on AI ethics and risk, robust documentation of AI models, and ongoing model validation and monitoring are further measures aligned with FINMA’s expectations. As with the practices applied on Switzerland’s regulated DLT platforms, institutional readiness depends on strong internal controls and transparency in technology deployment.
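As a concrete illustration of the inventory idea, the following is a minimal sketch of what a single inventory record might capture, covering the purpose, data inputs, and outputs FINMA mentions. The schema, field names, and sample record are assumptions made for illustration, not a required format.

```python
from dataclasses import dataclass


@dataclass
class AIInventoryRecord:
    """One entry in an institution-wide AI inventory (illustrative schema)."""
    name: str               # internal identifier of the model or tool
    purpose: str            # business purpose, e.g. fraud detection
    data_inputs: list[str]  # data sources feeding the system
    outputs: list[str]      # what the system produces and who consumes it
    owner: str              # accountable human owner
    third_party: bool       # whether an external provider is involved
    validated: bool         # whether the model passed internal validation


# Hypothetical inventory with one record.
inventory = [
    AIInventoryRecord(
        name="fraud-scoring-v2",
        purpose="Transaction fraud detection",
        data_inputs=["payment transactions", "device fingerprints"],
        outputs=["fraud score per transaction"],
        owner="Head of Financial Crime",
        third_party=False,
        validated=True,
    ),
]

# A simple oversight check: flag any record lacking an owner or validation,
# so that no AI application falls outside risk oversight.
for rec in inventory:
    if not rec.owner or not rec.validated:
        print(f"Review needed: {rec.name}")
```

Even a basic check like the loop above supports the accountability point: every system must map to a named human owner and a documented validation status before it enters production.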
In summary, FINMA’s observation that AI governance is still at an early stage is a call to action: financial institutions should proactively build their AI risk management capabilities now. Those that fail to do so may find themselves exposed, both to the intrinsic risks of AI failures and to regulatory intervention if FINMA deems their governance inadequate. Given that FINMA’s mandate includes ensuring that supervised institutions have proper organization and risk controls, this guidance foreshadows closer scrutiny of AI use in future supervisory reviews.