Leading digital technology company Kainos has partnered with award-winning slow news organisation Tortoise Media to investigate how business leaders should respond now to challenges around trust in AI.
Through interviews with twenty international AI experts – from organisations including Accenture, The Alan Turing Institute, and UNESCO – Kainos and Tortoise Media identified three guiding hypotheses for improving trust in AI. These are: harnessing the role of the AI ethicist; embedding pluralistic governing principles to adapt to emerging regulation; and delivering holistic explainability.
“The AI ecosystem has the potential to deliver huge societal benefits in everything from health to education to sustainability,” commented Tom Gray, Group CTO and Director of Innovation at Kainos. “However, unless we take action, those benefits are likely to flow disproportionately to the already well off. As these technologies become increasingly embedded in society, there are big questions to be answered to assure us that these systems are fair.
“We know what can happen when AI is deployed irresponsibly or inexpertly, from perpetuating systemic biases in justice systems to making an incorrect medical diagnosis. Organisations are now learning the importance of ethical and responsible AI implementation, just as the corporate world has been on a similar journey with other urgent issues – like improving sustainability. To help organisations, a practical, nuanced, and sophisticated approach to AI governance is needed now.”
“Trust is a much-discussed but often misunderstood subject in the field of artificial intelligence. This research has given us a clear view from the leading edge, and access to the perspectives of some of the most prominent experts in the world. At Tortoise, we see AI as a potent enabler, but also as a real threat to democracy, safety and society when it is misused.
“If there is a real focus on ethics, standards, and social transparency, we think the next five years will be transformational for AI’s power to do good, and well-placed trust is a crucial ingredient,” said Alexandra Mousavizadeh, Director of Tortoise Intelligence.
Kainos and Tortoise Media identified the following three hypotheses for establishing trust in AI:
Responsibility is not only a role: The ethicist is necessary but not sufficient to achieve trust throughout the artificial intelligence lifecycle. Many companies are hiring an “AI ethicist”, but this is just one step in the process – the responsibility for trust is best diffused across an organisation rather than placed on a single individual. Ray Eitel-Porter, Global Lead for Responsible AI, Accenture, commented, “We very much take the view that Responsible AI is a responsibility and a business imperative that has to be embedded across the whole of the organisation and not just within the technology people.”
Standardisation from diversity: Standards throughout ethical AI development can help to cultivate trust through the sharing of best practice. Algorithm Watch found that 173 AI “guidelines” were published between 2018 and 2020, yet it concluded there was “a lot of virtue signalling going on and few efforts of enforcement.” Standardised ethical AI practices are needed, but they must be developed with a diversity of perspectives. This includes addressing what Emma Ruttkamp-Bloem, Chairperson for UNESCO’s Recommendation on AI Ethics, calls an “epistemic injustice” in the Global South, given that the global AI ecosystem is today dominated by the Global North.
From explainability to understandability and prospect to procedure: Technical explainability hasn’t enabled trust, but a number of overlapping procedures are emerging as helpful alternatives. The value of “opening the black box” is limited, and complex AI systems can be understood by only a few stakeholders. Explainability that is both holistic – covering project impact, data provenance, fairness, and responsibility – and context-specific to each user delivers greater transparency. Organisations should embrace multiple explainability procedures, including auditing algorithms. Dr Gemma Galdón-Clavell, Founder, Eticas Consulting, commented, “Algorithmic audits are one of the most practical things we can do in terms of increasing trust in AI… it’s about taking back control.”
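To make the idea of an algorithmic audit concrete, the sketch below shows one common check such audits can include: comparing a model’s positive-outcome rates across demographic groups and computing a disparate-impact ratio. This is an illustrative sketch only; the function name, the sample data, and the 0.8 threshold (a widely used rule of thumb) are assumptions for illustration and are not drawn from the Kainos and Tortoise report.

```python
# Illustrative sketch of one check an algorithmic audit might include.
# Hypothetical names and data; not taken from the Kainos/Tortoise report.
from collections import defaultdict

def audit_selection_rates(predictions, groups):
    """Compare positive-outcome rates across demographic groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    # Disparate-impact ratio: lowest group rate divided by highest.
    # A ratio below ~0.8 is a common rule-of-thumb flag for review.
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Example: a model whose outputs favour group "A" over group "B".
rates, ratio = audit_selection_rates(
    predictions=[1, 1, 0, 1, 0, 0, 0, 1],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(ratio)  # 0.333... -> well below the 0.8 rule of thumb
```

In practice, audits of this kind cover many more metrics and pair the numbers with documentation of data provenance and intended use, but even a single rate comparison illustrates how an audit turns an opaque model into something stakeholders can interrogate.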
“The harms that emerge from reinforcing racial, gender or other socio-economic biases are down to poor AI design, governance and implementation,” commented Peter Campbell, Data & AI Practice Director at Kainos. “Mitigating this risk and building trust between practitioners, business leaders, regulators, and consumers will require collaboration, upskilling and investment.
“Kainos is at the forefront of this drive, and we’ve partnered with Tortoise Media on this report in order to advance the conversation around AI ethics. We are also committed to taking real action in this area and are currently refining the Kainos Code of Ethics to guide our own work, hiring data ethicists, and integrating understandability into our processes. Kainos has retained the IEEE as an ethics advisory body and joined the TechUK Data Analytics & AI Leadership Committee.”