This is the second part of my blog and webinar series on AI governance and risk management. In my previous blog post and webinar, I discussed how AI is driving the need for data and technology governance to evolve and broaden its scope to include ethics, accountability, and risk management. In this blog accompanying the second webinar, I'll outline how the Singapore Model AI Governance Framework helps organizations deploy AI responsibly.
Making Singapore a trusted, AI-enabled digital economy
If data is the new oil, then what are its Exxon Valdez and Deepwater Horizon moments? As with environmental disasters, any major blunder involving the unethical use of data and AI will put the brands involved under intense pressure from consumers and governments. While Singapore has so far escaped major data and AI disasters, the proliferation of AI means that it is only a matter of time. In 2018, an AI and ethics council initiated by the Singapore government set out to tackle three major risk categories for the AI-enabled digital economy envisioned for Singapore:
- Technology risk – Countering data misuse and rogue AI
- Social risk – Building trust between agencies, companies, employees, and customers
- Economic and political risk – Securing Singapore’s future in a digital economy
Ethics and social responsibility as core principles of Singapore’s AI Governance Framework
The Model Framework follows two guiding principles. The first is to ensure that AI decision-making is explainable, transparent, and fair. Explainability, transparency, and fairness, the “generally accepted AI principles,” are the foundation of ethical AI use. Absent from the framework, however, is the notion of accountability. The framework’s second principle is that AI solutions should be human-centric and operate for the benefit of human beings. This ties AI ethics to the larger dimension of corporate values and CSR, as well as to the corporate risk management framework.
A risk management approach for tackling the risks associated with deploying AI at scale
In alignment with other global frameworks, the Singapore Model Framework recommends a risk management approach to address the technology risk associated with AI. Ideally, this should be a dimension added to corporate risk management frameworks, elevating the risk beyond IT and individual business units to the corporate level (following in the footsteps of cybersecurity risk).
Specifically, the Model Framework recommends that organizations:

- Set up AI governance structures and measures and link them to corporate structures
- Determine the level of human involvement with a severity-probability matrix (see the sketch after this list)
- Use data and model governance for responsible AI operations
- Set up clear, aligned communication channels and interaction policies
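The severity-probability matrix is, at its core, a simple lookup: the likelihood and severity of harm from an AI-driven decision map to a level of human oversight. Purely as an illustration, and not taken from the Model Framework's own text, here is a minimal Python sketch of how such a matrix could be encoded; the thresholds, labels, and function name are my own assumptions.

```python
# Minimal sketch of a severity-probability matrix for deciding how much
# human oversight an AI-driven decision should have. The thresholds and
# oversight labels below are illustrative assumptions, not prescribed
# values from the Singapore Model AI Governance Framework.

def suggest_oversight(probability_of_harm: str, severity_of_harm: str) -> str:
    """Map a (probability, severity) pair ("low"/"high") to an oversight level."""
    if severity_of_harm == "high":
        # Decisions that could cause serious harm keep a human decision-maker.
        return "human-in-the-loop"
    if probability_of_harm == "high":
        # Frequent but lower-severity harm: humans monitor and can intervene.
        return "human-over-the-loop"
    # Low probability and low severity: the system may act autonomously.
    return "human-out-of-the-loop"


if __name__ == "__main__":
    print(suggest_oversight("low", "high"))   # human-in-the-loop
    print(suggest_oversight("high", "low"))   # human-over-the-loop
    print(suggest_oversight("low", "low"))    # human-out-of-the-loop
```

In practice, each organization would define its own severity and probability scales and review the resulting oversight assignments as part of its corporate risk management process.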
Organizations must start now to look into risk management and establish accountability chains for AI
The key task for organizations is to start early and build internal awareness of AI risk. Deploying AI-enabled decision processes at scale must be accompanied by investments in governance and risk management. Guidelines such as the Model Framework set non-binding recommendations, but organizations must start to develop their capabilities internally. As it has evolved, the Model Framework has added use case libraries as well as assessment tools; still, for all but the largest organizations, adoption might remain challenging.
Forrester recommends that organizations start on the following actions:
- Turn customer trust into a competitive advantage through fair, ethical, and responsible use of data and AI
- Align AI ethics with your corporate values and risk management frameworks
- Define your organization’s AI accountability chain, including external partners and suppliers
- Leverage the expertise of AI consultancies with strong capabilities in AI ethics and governance
Further reading
The second edition of the Singapore Model AI Governance Framework can be accessed here, and the Implementation and Self-Assessment Guide (ISAGO) is available here. In addition, the Use Case Library Vol. 1 and Use Case Compendium Vol. 2 are available.
Forrester clients can access my report Case Study: Singapore’s Journey To Deploying Responsible AI.
Please connect with me on LinkedIn!
If you’d like to discuss how this affects you and your organization, please don’t hesitate to schedule an inquiry call with me.