Explainable AI: Addressing Challenges for Machine Learning Implementation

Our May discussion group brought together a team to discuss the role of explainable AI in the pharmaceutical industry. Vaishak Belle kindly volunteered to lead the session. Vaishak Belle is a Chancellor’s Fellow and Faculty at the School of Informatics, University of Edinburgh, an Alan Turing Institute Faculty Fellow, a Royal Society University Research Fellow, and a member of the RSE Young Academy of Scotland. At the University of Edinburgh, he directs a research lab on artificial intelligence, specialising in the unification of logic and machine learning, with a recent emphasis on explainability and ethics.

What is Explainable AI?

Belle opened the group by describing explainable AI, clarifying that “explainable AI is a very broad field; I want to start this conversation with the notion of interpretability in mind. By interpretability, I mean a mechanism used to inspect the decision boundaries of a machine learning model.”

Explainable AI is a set of processes and methods that allows human users to comprehend and trust the results from AI and ML models. As AI models have become more complex, even the developers may not understand how the results are generated.  

There are many reasons why it is crucial to understand how an AI-enabled system has led to a specific output. Explainability can help developers ensure that the system is working as expected, meet regulatory requirements, or reveal the importance and effects of data previously deemed unimportant by humans.

AI can be investigated in various ways. One example is examining the influence of specific data points on the decision boundary: if removing certain points changes the boundary, this can cast doubt on the results. Belle explains that “quite often in biomedicine, you have a certain data point classified as a marker or not a marker; positive or negative. And often, you want to know what kind of changes you need to make to this data point so that the decision boundary is flipped – for example, from positive to negative. This helps you understand the delta between closest neighbours.”
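The idea can be illustrated with a small counterfactual-style check: take a point the model classifies one way, find the nearest training example the model classifies the other way, and inspect the per-feature delta that would flip the outcome. The sketch below is illustrative only; the synthetic data, random-forest model, and distance metric are assumptions rather than anything prescribed in the discussion.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: four numeric features, binary "marker / not a marker" label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

query = X[0]                                    # a point of interest
pred = model.predict(query.reshape(1, -1))[0]   # its current classification

# Candidate counterfactuals: training points the model assigns to the other class.
opposite = X[model.predict(X) != pred]
nearest = opposite[np.argmin(np.linalg.norm(opposite - query, axis=1))]

# The per-feature change needed to reach the closest point on the other side
# of the decision boundary ("the delta between closest neighbours").
delta = nearest - query
print("Prediction:", pred)
print("Delta to nearest opposite-class neighbour:", delta)
```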

The techniques and methods used to interpret AI depend on the questions and goals of a project. Belle explains that “the questions you care about ultimately decide which technique is appropriate. For example, if you’re interested in feature relevance, techniques like SHAP might become important when looking at the delta between closest neighbours. So, the essential point is that the stakeholder we have in mind and the questions we care about decide what we want to explain and which techniques we would use.”
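For the feature-relevance case Belle mentions, a typical workflow uses the SHAP library to attribute a model’s predictions to its input features. The sketch below is a minimal, hedged example on synthetic data; the model choice and dataset are assumptions made purely for illustration, and it assumes the shap and scikit-learn packages are installed.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical dataset and model standing in for a real project.
X, y = make_classification(n_samples=300, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)   # picks a model-appropriate explainer
shap_values = explainer(X[:50])        # per-feature contributions for 50 samples

# Global feature relevance: mean absolute SHAP value per feature.
print(abs(shap_values.values).mean(axis=0))
```

Which attributions matter, and how they are summarised, then comes back to the stakeholder question Belle raises: the same output may need to be presented very differently to a developer, a clinician, or a regulator.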

Explainable AI & Machine Learning Challenges

In many cases, explainable AI is used to diagnose and address problems with machine learning models. Unfortunately, there is no ‘one-size-fits-all’ solution: the discussion revealed that the challenges faced by people working in different areas vary greatly.

Healthcare & Discovery – Lack of Data and Bias

One attendee, a consultant neurologist and clinical information officer, shared his thoughts on the challenges faced in healthcare: “I think from a healthcare perspective, you view it through the lens of an organisation that is looking to buy an AI solution for a particular problem. For instance, we have huge lags on cytology reporting for diagnostic work, and similarly for radiology, so our options will be to go to the market and look for an AI solution.”

He continued: “How do you compare systems that are effectively black boxes in terms of reliability? It depends on the dataset you use, but the datasets we have in healthcare are very flawed and quite biased. This makes it difficult to guide our organisation on which system would be better.”

Manufacturing – Distribution Drift

Fausto Artico explained that from a manufacturing point of view, the issue is not a lack of data. Instead, it’s “that, from a statistical point of view, models can overfit in a way that is not reasonable at all. For example, our model could correctly recognise a dented chassis, as long as the light coming from the window is constant. The problem is that the training set was generated using images from November, and under current conditions the model is not working anymore. This is an example of the very subtle things that are easily overlooked, causing models to stop working.”
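A simple guard against this kind of drift is to monitor summary statistics of the inputs the model sees in production against those it was trained on. The snippet below sketches that idea with a two-sample Kolmogorov–Smirnov test on a single hypothetical feature (average image brightness); the feature, threshold, and test are illustrative assumptions, not a description of what Artico’s team uses.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical average-brightness values per image.
train_brightness = rng.normal(loc=0.55, scale=0.05, size=1000)    # e.g. November training images
current_brightness = rng.normal(loc=0.70, scale=0.05, size=1000)  # e.g. current conditions

# Two-sample KS test: do the training and production distributions differ?
stat, p_value = ks_2samp(train_brightness, current_brightness)
if p_value < 0.01:
    print(f"Possible distribution drift (KS statistic = {stat:.2f}); consider retraining.")
```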

Decision Making – Knowledge Gaps and Time Frames

Another attendee described the challenge of securing organisational buy-in: “One challenge for us is getting people to understand enough about AI to value it and give it the time it actually needs. Things happen on different timeframes than in statistics or conventional software; the number of loops and iterations is quite different from other approaches. And the people who decide whether or not we choose an AI solution are not the people who know anything about AI; there are people at the top who are looking for the latest technology, but they still need to be convinced that AI is a viable solution.”

Final Thoughts & Conclusion

At Oxford Global, we could not be more pleased with the turnout and feedback from this discussion. This meeting provided the perfect setting for exchanging ideas, sharing innovations, and discussing the ever-evolving artificial intelligence landscape. If you would like to join one of our future groups, please take a look at our upcoming events page.