The rise of Artificial Intelligence (AI) has reached the financial services industry. Know-Your-Client (KYC) verification and transaction monitoring for regulatory compliance are already supported by AI algorithms. Lending, financial advice and customer-centric product development increasingly take advantage of this technology as well. With it, customer expectations such as low cost, increased speed to market and the minimisation of human errors or misconduct can be met.
However, this widespread utilisation of AIs brings its own risks. Algorithms may learn a bias based on skin colour, or classify customers unjustly. Machine predictions may trigger real-world penalties for events that never actually occur. Likewise, machine-generated thresholds may be accepted across the organisation while disregarding human dynamics such as organisational complexity or soft factors.
In summary, AIs bring many benefits, but the associated risks need to be identified and managed not only individually but holistically across the entire organisation. At a minimum, a management strategy must include:
- a registry containing ongoing and future AI initiatives from all parts of the organisation
- a separate risk taxonomy to identify, assess, control and manage individual as well as composite risks associated with AIs
- compliance with emerging regulations and principles relating to AIs (see Australia’s AI Ethics Framework, Singapore’s FEAT Principles and the EU’s GDPR privacy law)
- a governance structure and reporting framework designed to manage AI risks across the entire organisation on all hierarchy levels
- risk managers with in-depth knowledge of data science and AI algorithms
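To make the first point concrete, the AI registry can start as a simple, queryable inventory. The sketch below is purely illustrative: the field names, lifecycle stages and risk categories are assumptions chosen for the example, not terms prescribed by any regulation or framework mentioned above.

```python
from dataclasses import dataclass, field
from enum import Enum


class Stage(Enum):
    """Hypothetical lifecycle stages for an AI initiative."""
    PROPOSED = "proposed"
    DEVELOPMENT = "development"
    PRODUCTION = "production"
    RETIRED = "retired"


@dataclass
class AIInitiative:
    name: str
    business_unit: str
    stage: Stage
    # Free-form risk tags, e.g. "bias", "model error" (illustrative only)
    risk_categories: list = field(default_factory=list)


class AIRegistry:
    """Central registry of ongoing and future AI initiatives."""

    def __init__(self):
        self._initiatives = []

    def register(self, initiative: AIInitiative) -> None:
        self._initiatives.append(initiative)

    def by_risk(self, category: str) -> list:
        """Return all initiatives exposed to a given risk category."""
        return [i for i in self._initiatives if category in i.risk_categories]


registry = AIRegistry()
registry.register(AIInitiative("KYC screening", "Compliance",
                               Stage.PRODUCTION, ["bias"]))
registry.register(AIInitiative("Credit scoring", "Lending",
                               Stage.DEVELOPMENT, ["bias", "model error"]))

# Which initiatives, across all business units, carry bias risk?
print([i.name for i in registry.by_risk("bias")])
```

Even a minimal registry like this lets risk managers ask organisation-wide questions (for example, which business units share exposure to a given risk category), which is the precondition for managing composite rather than only individual risks.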
With the aforementioned rise of AIs in financial institutions, the associated individual and composite risks must be “Managed by Design”. Existing infrastructure, governance and management frameworks must be adapted, and enhanced where appropriate, to capture this new type of risk. The time to act is now.