Bridging the AI Trust Gap

AI adoption is reaching a critical inflection point. Businesses are enthusiastically embracing AI, driven by its promise to achieve order-of-magnitude improvements in operational efficiencies.

A recent Slack survey found that AI adoption continues to accelerate, with workplace use of AI recently increasing by 24% and 96% of surveyed executives believing that “it’s urgent to integrate AI across their business operations.”

However, there is a widening divide between the utility of AI and growing anxiety about its potential adverse impacts. Only 7% of desk workers believe that outputs from AI are trustworthy enough to assist them in work-related tasks.

This gap is evident in the stark contrast between executives’ enthusiasm for AI integration and employees’ skepticism about whether AI’s outputs can be trusted.

The Role of Legislation in Building Trust

To address these multifaceted trust issues, legislative measures are increasingly being seen as a necessary step. Legislation can play a pivotal role in regulating AI development and deployment, thus enhancing trust. Key legislative approaches include:

  • Data Protection and Privacy Laws: Implementing stringent data protection laws ensures that AI systems handle personal data responsibly. Regulations like the General Data Protection Regulation (GDPR) in the European Union set a precedent by mandating transparency, data minimization, and user consent. In particular, Article 22 of the GDPR protects data subjects from the potential adverse impacts of automated decision-making. Recent Court of Justice of the European Union (CJEU) decisions affirm a person’s right not to be subjected to automated decision-making. In the SCHUFA Holding AG case, in which a German resident was denied a bank loan on the basis of an automated credit-scoring system, the court held that Article 22 requires organizations to implement measures to safeguard privacy rights relating to the use of AI technologies.
  • AI Regulations: The European Union has adopted the EU AI Act (EU AIA), which regulates the use of AI systems according to their risk levels. The Act includes mandatory requirements for high-risk AI systems, encompassing areas like data quality, documentation, transparency, and human oversight. One of the primary benefits of AI regulations is the promotion of transparency and explainability of AI systems. Furthermore, the EU AIA establishes clear accountability frameworks, ensuring that developers, operators, and even users of AI systems are responsible for their actions and the outcomes of AI deployment, including mechanisms for redress if an AI system causes harm. When individuals and organizations are held accountable, it builds confidence that AI systems are managed responsibly. (A simple illustration of the risk-tiering idea follows this list.)
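
The tiered approach can be made concrete with a small internal triage sketch. The tier names below reflect the commonly cited EU AIA categories, but the checklist items and the mapping of obligations to tiers are hypothetical illustrations, not legal guidance:

```python
from enum import Enum

class AiaRiskTier(Enum):
    """Commonly cited EU AI Act risk tiers (illustrative labels only)."""
    UNACCEPTABLE = "prohibited practices"
    HIGH = "high-risk systems with mandatory requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical internal checklist per tier, reviewed before deployment.
# These items are examples for illustration, not regulatory text.
TIER_CHECKLISTS = {
    AiaRiskTier.HIGH: ["data quality review", "technical documentation",
                       "transparency notice", "human oversight plan"],
    AiaRiskTier.LIMITED: ["transparency notice"],
    AiaRiskTier.MINIMAL: [],
}

def obligations_for(tier: AiaRiskTier) -> list[str]:
    """Return the internal checklist a project must complete for a tier."""
    if tier is AiaRiskTier.UNACCEPTABLE:
        raise ValueError("prohibited use case: do not build or deploy")
    return TIER_CHECKLISTS[tier]

print(obligations_for(AiaRiskTier.HIGH))
```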

Standards Initiatives to Foster a Culture of Trustworthy AI

Companies don’t need to wait for new laws to take effect before establishing whether their processes fall within ethical and trustworthy guidelines. AI regulations work in tandem with emerging AI standards initiatives that empower organizations to implement responsible AI governance and best practices during the entire life cycle of AI systems, encompassing design, implementation, deployment, and eventual decommissioning.

The National Institute of Standards and Technology (NIST) in the United States has developed an AI Risk Management Framework to guide organizations in managing AI-related risks. The framework is structured around four core functions (Map, Measure, Manage, and Govern):

  • Map: Understanding the AI system and the context in which it operates. This includes defining the purpose, stakeholders, and potential impacts of the AI system.
  • Measure: Quantifying the risks associated with the AI system, including technical and non-technical aspects. This involves evaluating the system’s performance, reliability, and potential biases.
  • Manage: Implementing strategies to mitigate identified risks. This includes developing policies, procedures, and controls to ensure the AI system operates within acceptable risk levels.
  • Govern: Establishing governance structures and accountability mechanisms to oversee the AI system and its risk management processes. This involves regular reviews and updates to the risk management strategy. (A minimal sketch of how these functions might be tracked appears after this list.)
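
One lightweight way to operationalize the four functions is to keep a per-system register of activities under each one. The sketch below is a minimal illustration in Python, assuming a hypothetical internal register; the field names and example entries are our own, not prescribed by NIST:

```python
from dataclasses import dataclass, field

# The four core functions of the NIST AI RMF, in the order listed above.
CORE_FUNCTIONS = ("Map", "Measure", "Manage", "Govern")

@dataclass
class RiskRegisterEntry:
    """Tracks risk-management activities recorded for one AI system."""
    system_name: str
    activities: dict[str, list[str]] = field(
        default_factory=lambda: {f: [] for f in CORE_FUNCTIONS}
    )

    def record(self, function: str, note: str) -> None:
        if function not in self.activities:
            raise ValueError(f"unknown function: {function}")
        self.activities[function].append(note)

    def gaps(self) -> list[str]:
        """Return core functions with no recorded activity yet."""
        return [f for f, notes in self.activities.items() if not notes]

entry = RiskRegisterEntry("loan-approval-assistant")
entry.record("Map", "documented purpose, stakeholders, and impact scope")
entry.record("Measure", "evaluated bias metrics on held-out applications")
print(entry.gaps())  # -> ['Manage', 'Govern']
```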

In response to advances in generative AI technologies, NIST also published the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, which provides guidance for mitigating risks specific to foundation models. Such measures span guarding against nefarious uses (e.g., disinformation, degrading content, hate speech) and promoting ethical applications of AI that focus on human values such as fairness, privacy, information security, intellectual property, and sustainability.
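
As a deliberately simplified illustration of one such measure, the sketch below screens prompts against a hypothetical blocklist before generation. Real deployments rely on trained safety classifiers rather than keyword matching; the categories and phrases here are invented examples:

```python
# Naive prompt screen: a stand-in for the policy checks a generative AI
# deployment might run before producing output. Real systems use trained
# classifiers, not keyword lists; this is for illustration only.
BLOCKED_TOPICS = {
    "disinformation": ["fabricate a news story", "fake evidence"],
    "hate speech": ["demeaning slur", "incite hatred"],
}

def screen_prompt(prompt: str) -> tuple[bool, str | None]:
    """Return (allowed, reason); reason names the violated category."""
    lowered = prompt.lower()
    for category, phrases in BLOCKED_TOPICS.items():
        if any(phrase in lowered for phrase in phrases):
            return False, category
    return True, None

allowed, reason = screen_prompt("Please fabricate a news story about the vote.")
print(allowed, reason)  # -> False disinformation
```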

Furthermore, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have jointly developed ISO/IEC 23894, a comprehensive standard for AI risk management. This standard provides a systematic approach to identifying and managing risks throughout the AI lifecycle, including risk identification, assessment of risk severity, treatment to mitigate or avoid the risk, and continuous monitoring and review.

The Future of AI and Public Trust

Looking ahead, the future of AI and public trust will likely hinge on several practices that are essential for organizations to adopt:

  • Performing a comprehensive risk assessment to identify potential compliance issues. Evaluate the ethical implications and potential biases in your AI systems.
  • Establishing a cross-functional team including legal, compliance, IT, and data science professionals. This team should be responsible for monitoring regulatory changes and ensuring that your AI systems adhere to new regulations.
  • Implementing a governance structure that includes policies, procedures, and roles for managing AI initiatives. Ensure transparency in AI operations and decision-making processes.
  • Conducting regular internal audits to ensure compliance with AI regulations. Use monitoring tools to keep track of AI system performance and adherence to regulatory standards.
  • Educating employees about AI ethics, regulatory requirements, and best practices. Provide ongoing training sessions to keep staff informed about changes in AI regulations and compliance strategies.
  • Maintaining detailed records of AI development processes, data usage, and decision-making criteria. Prepare to generate reports that can be submitted to regulators if required (a simple record-keeping sketch follows this list).
  • Building relationships with regulatory bodies and participating in public consultations. Provide feedback on proposed regulations and seek clarifications when necessary.
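
A lightweight starting point for the record-keeping and audit items above is an append-only decision log per AI system. The layout below is a hypothetical sketch, not a format mandated by any regulation:

```python
import json
from datetime import datetime, timezone

def log_ai_decision(path: str, system: str, inputs_summary: str,
                    output_summary: str, model_version: str) -> None:
    """Append one decision record to a JSON-lines audit log.

    Hypothetical record layout: the fields cover the documentation points
    above (data usage, decision criteria, model version); they are not a
    format required by any specific regulation.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,
        "inputs_summary": inputs_summary,
        "output_summary": output_summary,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_decision("ai_audit.jsonl", "loan-approval-assistant",
                inputs_summary="applicant features, no raw PII stored",
                output_summary="recommended manual review",
                model_version="2024-06-01")
```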

Contextualize AI to Achieve Trustworthy AI

Ultimately, trustworthy AI hinges on the integrity of data. Generative AI’s dependence on large data sets does not equate to accuracy or reliability of outputs; if anything, indiscriminate reliance on massive data sets works against both. Retrieval Augmented Generation (RAG) is an innovative technique that “combines static LLMs with context-specific data. And it can be thought of as a highly knowledgeable aide. One that matches query context with specific data from a comprehensive knowledge base.” RAG enables organizations to deliver context-specific applications that adhere to privacy, security, accuracy, and reliability expectations. RAG improves the accuracy of generated responses by retrieving relevant information from a knowledge base or document repository, allowing the model to ground its output in accurate and up-to-date information.
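
A minimal sketch of the pattern, assuming a toy in-memory knowledge base and word-overlap retrieval (a real system would use embeddings, a vector store, and an actual LLM call; everything named here is a hypothetical illustration):

```python
# Minimal retrieval-augmented generation sketch: score documents by word
# overlap with the query, then ground the prompt in the top matches.
KNOWLEDGE_BASE = [
    "Refunds are processed within 14 days of receiving the returned item.",
    "Premium support is available on weekdays from 8am to 8pm CET.",
    "Data retention for closed accounts is limited to 90 days.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt: retrieved context plus the user question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

print(build_prompt("How long do refunds take to process?"))
```

The assembled prompt would then be passed to the organization’s chosen model; grounding generation in retrieved, proprietary context is what lets responses reflect current, domain-specific information rather than only the model’s training data.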

RAG empowers organizations to build purpose-built AI applications that are highly accurate, context-aware, and adaptable in order to improve decision-making, enhance customer experiences, streamline operations, and achieve significant competitive advantages.

Bridging the AI trust gap involves ensuring transparency, accountability, and ethical usage of AI. While there’s no single answer to maintaining these standards, businesses do have strategies and tools at their disposal. Implementing robust data privacy measures and adhering to regulatory standards builds user confidence. Regularly auditing AI systems for bias and inaccuracies ensures fairness. Augmenting Large Language Models (LLMs) with purpose-built AI builds trust by incorporating proprietary knowledge bases and data sources. Engaging stakeholders about the capabilities and limitations of AI also fosters confidence and acceptance.

Trustworthy AI is not easily achieved, but it is a vital commitment to our future.