
The accuracy-explainability tradeoff in AI: Black Box vs. White Box.

Alessio De Filippis • 1 June 2023


Historically, tech leaders have assumed that the better a human can understand an algorithm, the less accurate it will be. But is there always a tradeoff between accuracy and explainability? The authors tested a wide array of AI models on nearly 100 representative datasets, and they found that 70% of the time, a more-explainable model could be used without sacrificing accuracy. Moreover, in many applications, opaque models come with substantial downsides related to bias, equity, and user trust. As such, the authors argue that organizations should think carefully before integrating unexplainable, “black box” AI tools into their operations, and take steps to help determine whether these models are really worth the risk before moving forward.


In 2019, Apple’s credit card business came under fire for offering a woman one twentieth the credit limit offered to her husband. When she complained, Apple representatives reportedly told her, “I don’t know why, but I swear we’re not discriminating. It’s just the algorithm.”

Today, more and more decisions are made by opaque, unexplainable algorithms like this — often with similarly problematic results. From credit approvals to customized product or promotion recommendations to resume readers to fault detection for infrastructure maintenance, organizations across a wide range of industries are investing in automated tools whose decisions are often acted upon with little to no insight into how they are made.

This approach creates real risk. Research has shown that a lack of explainability is one of executives’ most common concerns about AI, and that it substantially undermines users’ trust in and willingness to use AI products — not to mention their safety.

And yet, despite the downsides, many organizations continue to invest in these systems, because decision-makers assume that unexplainable algorithms are intrinsically superior to simpler, explainable ones. This perception is known as the accuracy-explainability tradeoff: Tech leaders have historically assumed that the better a human can understand an algorithm, the less accurate it will be.


White Box vs. Black Box


Specifically, data scientists draw a distinction between so-called black-box and white-box AI models: White-box models typically include just a few simple rules, presented for example as a decision tree or a simple linear model with limited parameters. Because of the small number of rules or parameters, the processes behind these algorithms can typically be understood by humans.
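To make this concrete, here is a minimal sketch of what a white-box model can look like in practice. It is not taken from our research: it assumes Python with scikit-learn and uses the library’s built-in breast-cancer dataset as a stand-in for any tabular classification problem. A depth-limited decision tree produces a handful of rules that can be printed and read end to end.

```python
# A minimal white-box sketch: a shallow decision tree whose few rules
# a human can read in full. Assumes scikit-learn is installed; the
# breast-cancer dataset stands in for any tabular classification task.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Limiting depth keeps the rule count within what people can follow.
white_box = DecisionTreeClassifier(max_depth=3, random_state=0)
white_box.fit(X_train, y_train)

print(f"Accuracy: {white_box.score(X_test, y_test):.3f}")
# Print the entire decision logic as human-readable if/else rules.
print(export_text(white_box, feature_names=list(X.columns)))
```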

In contrast, black-box models use hundreds or even thousands of decision trees (known as “random forests”), or billions of parameters (as deep learning models do), to inform their outputs. Cognitive load theory has shown that humans can only comprehend models with up to about seven rules or nodes, making it functionally impossible for observers to explain the decisions made by black-box systems. But does their complexity necessarily make black-box models more accurate?


Debunking the Accuracy-Explainability Tradeoff


To explore this question, we conducted a rigorous, large-scale analysis of how black-box and white-box models performed on a broad array of nearly 100 representative datasets (known as benchmark classification datasets), spanning domains such as pricing, medical diagnosis, bankruptcy prediction, and purchasing behavior. We found that for almost 70% of the datasets, the black-box and white-box models produced similarly accurate results. In other words, more often than not, there was no tradeoff between accuracy and explainability: a more-explainable model could be used without sacrificing accuracy.

This is consistent with other emerging research exploring the potential of explainable AI models, as well as our own experience working on case studies and projects with companies across diverse industries, geographies, and use cases. For example, it has been repeatedly demonstrated that COMPAS, the complicated black-box tool widely used in the U.S. justice system to predict the likelihood of future arrests, is no more accurate than a simple predictive model that looks only at age and criminal history. Similarly, a research team built a loan-default prediction model simple enough for average banking customers to understand, and found that it was less than 1% less accurate than an equivalent black-box model (a difference within the margin of error).

Of course, there are some cases in which black-box models are still beneficial. But in light of the downsides, our research suggests several steps companies should take before adopting a black-box approach:


1. Default to white box.


As a rule of thumb, white-box models should be used as benchmarks to assess whether black-box models are necessary. Before choosing a type of model, organizations should test both — and if the difference in performance is insignificant, the white-box option should be selected.
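A minimal sketch of that benchmarking step, again assuming Python with scikit-learn and a stand-in dataset: cross-validate an interpretable model and a black-box alternative side by side, and keep the white box unless the gap is material. The 1% threshold below is an illustrative choice, not a figure from our study.

```python
# Sketch of the "default to white box" check: benchmark both model types
# and keep the interpretable one unless the black box wins by a clear margin.
# scikit-learn and its breast-cancer dataset are assumed stand-ins here.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# White box: a scaled logistic regression with a handful of readable coefficients.
white_box = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# Black box: a random forest made of hundreds of trees.
black_box = RandomForestClassifier(n_estimators=500, random_state=0)

white_scores = cross_val_score(white_box, X, y, cv=5)
black_scores = cross_val_score(black_box, X, y, cv=5)

gap = black_scores.mean() - white_scores.mean()
print(f"White box: {white_scores.mean():.3f} +/- {white_scores.std():.3f}")
print(f"Black box: {black_scores.mean():.3f} +/- {black_scores.std():.3f}")

# A 1% threshold is an illustrative cut-off, not a rule from the article.
if gap < 0.01:
    print("Difference is negligible -- default to the white-box model.")
else:
    print(f"Black box leads by {gap:.3f}; weigh that gain against the loss of explainability.")
```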


2. Know your data.


One of the main factors that will determine whether a black-box model is necessary is the data involved. First, the decision depends on the quality of the data. When data is noisy (i.e., when it includes a lot of erroneous or meaningless information), relatively simple white-box methods tend to be effective. For example, we spoke with analysts at Morgan Stanley who found that for their highly noisy financial datasets, simple trading rules such as “buy stock if company is undervalued, underperformed recently, and is not too large” worked well.
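As an illustration only: the rule below paraphrases the quote above, and every field name and threshold in it is a hypothetical placeholder rather than Morgan Stanley’s actual logic. It simply shows how compact and auditable a white-box model for noisy data can be.

```python
# Illustrative only: a hand-written trading rule in the spirit of the quote above.
# The field names and thresholds are hypothetical placeholders, not real criteria.
from dataclasses import dataclass

@dataclass
class Stock:
    price_to_book: float        # valuation proxy
    trailing_12m_return: float  # recent performance
    market_cap_bn: float        # size in billions

def buy_signal(stock: Stock) -> bool:
    """Buy if the company looks undervalued, has underperformed recently,
    and is not too large -- three rules anyone can audit."""
    undervalued = stock.price_to_book < 1.0
    underperformed = stock.trailing_12m_return < 0.0
    not_too_large = stock.market_cap_bn < 10.0
    return undervalued and underperformed and not_too_large

print(buy_signal(Stock(price_to_book=0.8, trailing_12m_return=-0.05, market_cap_bn=4.2)))  # True
```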

Second, the type of data also affects the decision. For applications that involve multimedia data such as images, audio, and video, black-box models may offer superior performance. For instance, we worked with a company that was developing AI models to help airport staff predict security risk based on images of air cargo. They found that black-box models had a higher chance of detecting high-risk cargo items that could pose a security threat than equivalent white-box models did. These black-box tools enabled inspection teams to save thousands of hours by focusing more on high-risk cargo, substantially boosting the organization’s performance on security metrics. In similarly complex applications such as face-detection for cameras, vision systems in autonomous vehicles, facial recognition, image-based medical diagnostic devices, illegal/toxic content detection, and most recently, generative AI tools like ChatGPT and DALL-E, a black box approach may be advantageous or even the only feasible option.


3. Know your users.


Transparency is always important to build and maintain trust — but it’s especially critical for particularly sensitive use cases. In situations where a fair decision-making process is of utmost importance to your users, or in which some form of procedural justice is a requirement, it may make sense to prioritize explainability even if your data might otherwise lend itself to a black box approach, or if you’ve found that less-explainable models are slightly more accurate.

For instance, in domains such as hiring, allocation of organs for transplant, and legal decisions, opting for a simple, rule-based, white-box AI system will reduce risk to both the organization and its users. Many leaders have discovered these risks the hard way: In 2015, Amazon found that its automated candidate screening system was biased against female software developers, while a Dutch AI welfare fraud detection tool was shut down in 2018 after critics decried it as a “large and non-transparent black hole.”


4. Know your organization.


An organization’s choice between white-box and black-box AI also depends on its own level of AI readiness. For organizations that are less digitally developed, where employees tend to have less trust in or understanding of AI, it may be best to start with simpler models before progressing to more complex solutions. That typically means implementing a white-box model that everyone can easily understand, and only exploring black-box options once teams have become more accustomed to using these tools.

For example, we worked with a global beverage company that launched a simple white-box AI system to help employees optimize their daily workflows. The system offered limited recommendations, such as which products should be promoted and how much of different products should be restocked. Then, as the organization matured in its use of and trust in AI, managers began to test out whether more complex, black-box alternatives might offer advantages in any of these applications.


5. Know your regulations.


In certain domains, explainability might be a legal requirement, not a nice-to-have. For instance, in the U.S., the Equal Credit Opportunity Act requires financial institutions to be able to explain the reasons why credit has been denied to a loan applicant. Similarly, Europe’s General Data Protection Regulation (GDPR) suggests that employers should be able to explain how candidates’ data has been used to inform hiring decisions. When organizations are required by law to be able to explain the decisions made by their AI models, white-box models are the only option.


6. Explain the unexplainable.


Finally, there are of course contexts in which black-box models are both undeniably more accurate (as was the case in 30% of the datasets we tested in our study) and acceptable with respect to regulatory, organizational, or user-specific concerns. For example, applications such as computer vision for medical diagnoses, fraud detection, and cargo management all benefit greatly from black-box models, and the legal or logistical hurdles they pose tend to be more manageable. In cases like these, if an organization does decide to implement an opaque AI model, it should take steps to address the trust and safety risks associated with a lack of explainability.

In some cases, it is possible to develop an explainable white-box proxy to clarify, in approximate terms, how a black-box model has reached a decision. Even if this explanation isn’t fully accurate or complete, it can go a long way to build trust, reduce biases, and increase adoption. In addition, a greater (if imperfect) understanding of the model can help developers further refine it, adding more value to these businesses and their end users.
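One common way to build such a proxy is a surrogate model: fit a shallow, interpretable model to the black box’s own predictions and read off the approximate rules. The sketch below assumes Python with scikit-learn and a stand-in dataset; it is one possible technique, not one prescribed by our research.

```python
# Sketch of a white-box proxy: train a shallow "surrogate" decision tree
# on the black box's predictions to approximate how it decides.
# scikit-learn and its breast-cancer dataset are assumed stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# The opaque model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# The surrogate learns to mimic the black box's outputs, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the simple proxy agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate agrees with the black box on {fidelity:.1%} of cases")
print(export_text(surrogate, feature_names=list(X.columns)))
```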

In other cases, organizations may truly have very limited insight into why a model makes the decisions it does. If an approximate explanation isn’t possible, leaders can still prioritize transparency in how they talk about the model both internally and externally, openly acknowledging the risks and working to address them.


***

Ultimately, there is no one-size-fits-all solution to AI implementation. All new technology comes with risks, and the choice of how to balance those risks with the potential rewards will depend on the specific business context and data. But our research demonstrates that in many cases, simple, interpretable AI models perform just as well as black box alternatives — without sacrificing the trust of users or allowing hidden biases to drive decisions.


***


Alessio De Filippis, Founder and Chief Executive Officer @ Libentium.


Founder and Partner of Libentium, developing projects mainly focused on Marketing and Sales innovations for different types of organizations (Multinationals, SMEs, startups).


Cross-industry experience: Media, Telecommunications, Oil & Gas, Leisure & Travel, Biotech, ICT.

