Trust is the Linchpin to Enterprise AI Success and ROI


According to a 2023 study by KPMG, only about a third of survey respondents indicated that they trust AI. No doubt some part of this lack of trust is due to the novel nature of newer enterprise AI models such as Generative AI and LLMs, but a large number of well-publicized "AI gone bad" mishaps have also contributed.

The lack of trust in AI, and by extension in the products and services that AI drives, is bad for business. Translating the business impact of trust directly into dollars and cents is difficult, but a 2022 book authored by members of Deloitte Consulting LLP found that trusted companies outperform their peers by up to 400%.

Addressing Obstacles to Trust 

The novelty of newer AI model types will wear off with repeated use and the passage of time. Likewise, the technical issues associated with the rollout of new enterprise AI models, typical of any new technology, will be addressed as model builders revise and improve their models.

But will this alone be enough for end users to begin to put their trust in AI? Probably not. There is a third issue, peculiar to AI, that will need to be addressed before AI achieves a high level of trust with end users: the "black box" nature of newer, more powerful, and more complicated AI models.

The Explainability Dilemma

Newer model types based on deep learning neural networks lack visibility into their decision-making processes. Addressing this transparency gap by providing simple explanations in human-centric terms will be critical to achieving high levels of trust in AI.

The key concepts that make AI explainable are trustworthiness, causality, informativeness, confidence, fairness, and privacy. Each is summarized in the table below. Importantly, effective AI governance helps organizations take practical steps toward building trust with GenAI through traceability, documentation, and monitoring techniques that ultimately activate these concepts.

[Table: Key concepts of enterprise AI explainability — trustworthiness, causality, informativeness, confidence, fairness, and privacy]

Explainable AI – A Global Concern

Beyond building trust with consumers, regulatory compliance is another powerful reason to start addressing the explainability of AI models today. Guidance published by national and supranational bodies such as the European Union has specifically called out explainability as a core requirement for AI deployed within their respective jurisdictions.

Worldwide, the language used to talk about explainability is remarkably consistent. And with the EU AI Act, the EU has gone a step further than other jurisdictions. The Act is the world's first comprehensive regulatory framework focused on AI, and within it explainability is now a legal obligation, especially as it relates to "high-risk" systems.

Model Complexity and AI Explainability

The biggest challenge facing deployers who need to explain their AI models is interpretability: the ability to understand, in human terms, why a model made a prediction, recommendation, or decision.

The problem is that the level of interpretability, and the process of interpreting a model, differs depending on the type of AI model used. Traditional models such as linear and log-linear models are considered inherently interpretable: their features and weights can be read directly in terms of how they drive a prediction.
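
As a simple illustration (the housing features and prices below are made up for demonstration, not drawn from any study cited here), a linear regression's learned coefficients can be read off directly as per-feature contributions:

```python
# Illustrative sketch: reading feature weights directly from a linear model.
# The feature names and values are invented; the point is that each coefficient
# maps one-to-one to a feature's contribution to the prediction.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[3, 1200], [4, 1500], [2, 800], [5, 2100]])  # bedrooms, square feet
y = np.array([250_000, 320_000, 180_000, 450_000])          # sale price

model = LinearRegression().fit(X, y)

for name, weight in zip(["bedrooms", "square_feet"], model.coef_):
    print(f"{name}: each additional unit adds ~${weight:,.0f} to the prediction")
print(f"baseline (intercept): ${model.intercept_:,.0f}")
```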

More complex machine learning models such as deep neural networks are not considered inherently explainable; they are black boxes. These kinds of models require post hoc processing methods to explain which features and weights are driving decisions. Currently, the gold standard for post hoc explanation of regressor and classifier models that are not inherently explainable is Shapley Additive Explanations (SHAP).
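
In practice, the post hoc pattern typically looks something like the following sketch, which assumes the shap and xgboost packages and uses a standard scikit-learn dataset as a stand-in for a deployer's own model and data:

```python
# Hedged sketch of post hoc explanation with SHAP on a black-box classifier.
# The dataset and model are placeholders; the pattern is:
# fit model -> build explainer -> compute Shapley values -> inspect feature impact.
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target

model = xgboost.XGBClassifier().fit(X, y)

explainer = shap.Explainer(model, X)   # model-agnostic entry point
shap_values = explainer(X)             # Shapley values per row and feature

shap.plots.bar(shap_values)            # global view: which features matter most
shap.plots.waterfall(shap_values[0])   # local view: why this one prediction
```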

Generative AI Models – Not Explainable Today

LLMs and other NLP models can be considerably larger and more complex than other types of AI models. Today, they pose a challenge to interpretability requirements because no existing method adequately addresses the need.

While LLMs and NLP models cannot yet be adequately explained through existing methodologies, a strong research movement has developed around solving the explainability dilemma for these models. Two of the most promising research areas to emerge are "Chain of Thought Reasoning" and "Attention Mechanisms with Visualization".
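
To give a flavor of the second approach, the sketch below plots a toy attention matrix. The weights are randomly generated placeholders rather than output from a real LLM, but the visualization pattern is the same once attention weights are extracted from an actual model:

```python
# Illustrative sketch of attention visualization.
# The attention weights are invented toy values (each row sums to 1), standing
# in for weights extracted from a transformer's attention layers.
import numpy as np
import matplotlib.pyplot as plt

tokens = ["The", "loan", "was", "denied", "because", "income"]
attention = np.random.dirichlet(np.ones(len(tokens)), size=len(tokens))

fig, ax = plt.subplots()
im = ax.imshow(attention, cmap="Blues")
ax.set_xticks(range(len(tokens)))
ax.set_xticklabels(tokens, rotation=45, ha="right")
ax.set_yticks(range(len(tokens)))
ax.set_yticklabels(tokens)
ax.set_xlabel("attended-to token")
ax.set_ylabel("query token")
fig.colorbar(im, label="attention weight")
plt.tight_layout()
plt.show()
```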

Third-party Models – Another Unique Challenge

According to a report by MIT Sloan and Boston Consulting Group, nearly 80% of organizations surveyed in 2023 reported accessing, buying, licensing, or otherwise using third-party AI tools. In fact, more than half of the organizations surveyed rely exclusively on third-party AI tools and have no internally designed or developed AI technologies of their own.

The use of third-party models creates yet another challenge to the requirement that AI deployers ensure the models they use are explainable: today, most organizations lack the direct model access needed to explain these models.

In the future, as more AI usage policies move from guidance to statute, it is reasonable to expect the third-party explainability problem to be solved. AI deployers, facing stiff regulatory fines, will demand that their model suppliers develop and support collaborative processes for sharing explainability data.

Focus On Minimum Viable Governance

While many organizations understand that they need to address explainability as part of a larger AI governance initiative, they are unsure how to get started. This is where the concept of Minimum Viable Governance (MVG) comes in. The MVG approach focuses on right-sizing the effort involved in establishing an AI governance program: not too much, not too little, but just enough to protect the organization while maintaining AI innovation cycles.

MVG involves three core facets: 

  1. Establishing a governance inventory to ensure visibility into all AI usage and streamline AI use case intake (a minimal sketch of an inventory record follows this list). 
  2. Applying lightweight controls to manage verification, evidence, and approvals without overwhelming innovation. 
  3. Implementing streamlined reporting to achieve transparency and understand how AI is being used. 
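
One way to picture the inventory facet is sketched below; the field names and risk tiers are illustrative assumptions for a homegrown registry, not a prescribed schema:

```python
# Minimal sketch of a governance inventory record and a streamlined report.
# Field names, tiers, and the example entry are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AIUseCase:
    name: str
    owner: str
    model_type: str                 # e.g. "LLM (third-party)", "gradient boosting"
    vendor: Optional[str]           # None for internally built models
    risk_tier: str                  # e.g. "high", "limited", "minimal"
    approved: bool = False
    evidence: list[str] = field(default_factory=list)  # links to docs, evals, attestations
    last_reviewed: Optional[date] = None

inventory = [
    AIUseCase(name="Support chatbot", owner="CX team",
              model_type="LLM (third-party)", vendor="Example Vendor",
              risk_tier="limited"),
]

# Streamlined reporting: who is using what, at what risk level, and is it approved?
for uc in inventory:
    print(f"{uc.name}: {uc.model_type} / risk={uc.risk_tier} / approved={uc.approved}")
```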

Journey to AI Explainability – First Steps 

Summing up, here are three key takeaways to get your enterprise started on addressing the challenges associated with explainability and the full spectrum of AI-related governance requirements.

  1. Get visibility into all your AI initiatives, including Generative AI and third-party vendors, across your enterprise today. 
  2. Enable your teams with explainability, interpretability, and traceability capabilities. 
  3. Establish trust in your models with process, documentation, baselines, and attestations. 
