Should you worry about AI ethics?

Article contributed by:

Mr Raju Chellam
Executive Education Fellow
Advanced Computing for Executives, NUS

Are you or your company developing or using AI (artificial intelligence) in your products, solutions, or services? Is your team up to date on the latest regulations and guidelines that govern the ethical use of AI? How would you mitigate the risk of reputational damage if your AI arrived at a decision deemed unfair?

If you answered yes to any of the above questions, you're not alone. Companies and government organisations the world over are grappling with the new paradigms that AI has opened up. Put simply, AI is about getting computers to perform tasks or processes that would be considered intelligent if done by humans. An autonomous car, for example, is not just making suggestions to the human driver; it is doing the driving.

Governments and businesses are on track to invest US$110 billion in AI-related solutions and services by 2024, up 193 per cent from the US$37.5 billion they spent in 2019, according to IDC estimates.

Singapore has taken the lead in addressing issues related to the ethical development and deployment of AI. On May 25, 2022, Singapore launched "AI Verify", the world's first AI Governance Testing Framework & Toolkit. It's aimed at companies that wish to demonstrate responsible AI in an objective and verifiable manner. AI Verify was announced to a global audience at the World Economic Forum (WEF) Annual Meeting in Davos by Mrs Josephine Teo, Singapore's Minister for Communications and Information. The goal is to promote transparency between companies and their stakeholders through a combination of technical tests and process checks.

Singapore was also the first country to launch a Model AI Governance Framework, with both editions unveiled at Davos: the First Edition in January 2019 and the Second Edition in January 2020. The Model Framework translates ethical considerations into practical measures to guide organisations in four key areas: internal governance structures and measures, determining the level of human involvement in AI-augmented decision-making, operations management, and stakeholder communications.

In October 2020, the Infocomm Media Development Authority (IMDA) and the Singapore Computer Society (SCS) collaborated to launch the AI Ethics & Governance Body of Knowledge (AI E&G BoK). Singapore was one of the first countries in the world to develop a BoK focused on the ethics of AI. It addresses practical issues related to human safety and fairness, as well as prevailing approaches to privacy, data governance and general ethical values. The BoK aims to serve as a reference handbook for three key stakeholders: AI solution providers, businesses and end-user organisations, and individuals or consumers. The need arises from the rapid advances in AI tools and technologies, and the increasing deployment and embedding of such tools in apps and solutions.

“Globally, testing for the trustworthiness in AI systems is an emergent space,” says IMDA. “As more companies use AI in their products and services, fostering the public’s trust in AI technologies remains key in unlocking the transformative opportunities of AI.”

On the global stage, the WEF launched the Global AI Action Alliance (GAIA) in January 2021 to speed up the adoption of inclusive, transparent and trusted AI. "AI holds the promise of making organisations 40 per cent more efficient by 2035, unlocking an estimated US$14 trillion in new economic value," the WEF notes. "But as AI's transformative potential has become clear, so, too, have the risks posed by unsafe or unethical AI systems."

Recent controversies over facial recognition, automated decision-making and Covid-19 tracking have shown that realising AI's potential requires substantial support from citizens and governments, based on their trust that AI is being built and used ethically.

How do you, as a decision maker, know where the opportunities and boundaries lie? How can you ensure that the AI algorithms your team designs, develops or deploys meet the criteria for fairness, transparency, explainability, auditability and accountability? Where can you learn the essentials of AI ethics, using case studies in peer-led discussions, to decide what's best for your organisation?
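To make the fairness question concrete: such criteria can be expressed as verifiable tests rather than aspirations. The sketch below is a hypothetical illustration, not the AI Verify toolkit itself; the function name, data and groups are invented for this example. It measures one common reading of fairness, demographic parity, by checking whether a model's positive-prediction rate differs across groups.

```python
# Hypothetical illustration: expressing one fairness criterion
# (demographic parity) as a verifiable test. All names and data
# here are invented for this sketch.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any
    two groups (0.0 means all groups receive positives at equal rates)."""
    counts = {}  # group -> (positive predictions, total predictions)
    for pred, group in zip(predictions, groups):
        pos, total = counts.get(group, (0, 0))
        counts[group] = (pos + (1 if pred == 1 else 0), total + 1)
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

# Example: loan-approval predictions for two applicant groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps  = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, grps)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25, so 0.50
```

A real audit would of course go further: statistical significance, multiple (sometimes conflicting) definitions of fairness, and the process checks that frameworks such as AI Verify combine with technical tests.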