Trust and Trustworthiness Are Essential for AI Systems – Technology Org

A Lancaster University Professor has co-authored a ‘TechBrief’ highlighting the complexities of trust and trustworthiness around AI systems.

Artificial intelligence, machine learning – artistic interpretation. Image credit: Steve Johnson via Unsplash, free license

The document, entitled ‘TechBrief: Trusted AI’ and co-authored by Bran Knowles, Professor of Sociotechnical Systems at Lancaster University’s School of Computing and Communications, highlights a key challenge: the trustworthiness mechanisms and measures being advanced in AI regulations and standards may not actually increase trust.

Released by the Association for Computing Machinery (ACM), the world’s largest educational and scientific computing society, TechBriefs are short technical bulletins that present scientifically grounded perspectives on the impact and policy implications of specific technological developments in computing.

Key conclusions of the TechBrief include:

  • The effectiveness of mechanisms and metrics implemented to promote trust of AI must be empirically evaluated to determine if they actually do so.
  • Distrust of AI implicates trustworthiness and calls for a deeper understanding of stakeholder perceptions, concerns, and fears associated with AI and its specific applications.
  • Fostering public trust of AI will require that policymakers demonstrate how they are making industry accountable to the public and their legitimate concerns.

“As AI is becoming pervasive, more and more institutions are using it,” said Professor Knowles.

“These institutions include government agencies, major corporations, healthcare providers, and even schools. The danger is that a lack of public trust of AI may impact the acceptance of these new technologies and erode trust in the institutions that use them. For these reasons, there is an urgent need to examine how public trust is developed around AI technologies.”

“So much of the public conversation about regulating AI systems has focused on issues including accuracy or transparency,” explained John Richards, a Distinguished Research Scientist at IBM in the US and co-lead author of the new TechBrief.

“But there is much less discussion about how the public views an AI system as ‘trustworthy.’ In preparing this TechBrief, we found that the public’s perspective on what makes AI trustworthy will often diverge from the perspective of technologists and policymakers. We hope this TechBrief begins a conversation that will encourage industry leaders and policymakers to put the issue of trustworthiness front and centre.”

Source: Lancaster University