Why do we need Explainable AI (video)

· Thomas Wood

What is explainable AI?

Explainable AI, or XAI, is a set of methods and techniques that allow us to understand how a machine learning model works and why it makes the decisions it does. Without XAI, a machine learning model might be a “black box”, where even its developers cannot understand how it arrived at a certain decision.

Examples of how explainable AI can work


Need an explainable model?

We can ensure your model is accountable, understandable, and explainable. At Fast Data Science, we like Occam’s Razor! That means we won’t use black boxes such as neural networks unless they are necessary.

Explainable AI techniques can vary. In the case of a simple machine learning model like linear regression (y = mx + c), it’s easy to understand why the model has made a certain decision, because there are only two parameters: the gradient m and the intercept c.
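
As a concrete illustration, here is a minimal sketch (using made-up toy data) of fitting a linear regression with scikit-learn and reading the two parameters straight off the fitted model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])  # toy inputs
y = np.array([3.1, 5.0, 6.9, 9.1])          # roughly y = 2x + 1

model = LinearRegression().fit(X, y)
m, c = model.coef_[0], model.intercept_
print(f"gradient m = {m:.2f}, intercept c = {c:.2f}")

# Every prediction is simply m*x + c, so a human can check each one by hand.
```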

However, more complex machine learning models, such as deep learning models and convolutional neural networks, can have many millions of parameters, and it becomes increasingly hard to understand the decisions they make.

Explainable AI for very complex models

Explainable AI techniques for extremely complex models normally consist of introducing small variations, or perturbations, into the model’s input and observing the changes in its output. For example, if a computer vision model is 87% confident that an image is a cat, and changing one pixel reduces that confidence to 85%, we can conclude that, from the model’s point of view, the pixel contributed an element of ‘cattiness’. By repeating this across the image, we can build a detailed map of which parts of the image look most cat-like to the model.
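
As a rough sketch of how such a perturbation analysis might be coded, the function below assumes a Keras-style image classifier whose predict method returns class probabilities; the model, the patch size, and the choice of blanking patches to zero are all assumptions for illustration:

```python
import numpy as np

def sensitivity_map(model, image, patch=8):
    """Occlude square patches of the image one at a time and record
    how much the model's confidence in its top class drops."""
    base = model.predict(image[np.newaxis])[0]  # class probabilities
    target = base.argmax()                      # the predicted class, e.g. 'cat'
    rows, cols = image.shape[0] // patch, image.shape[1] // patch
    heatmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            perturbed = image.copy()
            perturbed[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0
            conf = model.predict(perturbed[np.newaxis])[0][target]
            heatmap[i, j] = base[target] - conf  # big drop = important region
    return heatmap
```

The regions with the largest confidence drop are the ones the model relies on most: the most ‘cat-like’ parts of the image.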

The beauty of this approach is that it is model-agnostic: we don’t need any understanding of the model’s architecture to perform the analysis.

There are several well-known frameworks for XAI, the most widely used in Python currently being LIME (Local Interpretable Model-agnostic Explanations).
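
As a minimal sketch of how LIME is typically applied to tabular data, assume a fitted scikit-learn-style classifier model, a training matrix X_train, a single row x to explain, and a list of feature_names (all of these names are placeholders):

```python
from lime.lime_tabular import LimeTabularExplainer

# X_train, feature_names, model and x are assumed to exist already.
explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, mode="classification"
)
explanation = explainer.explain_instance(x, model.predict_proba, num_features=5)

# Each pair is a feature condition and its weight in the local explanation.
print(explanation.as_list())
```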

Read more about explainable AI in our earlier blog post on the topic.

Why is Explainable AI important?

There are several reasons why explainable AI is important. First, it can help us to trust and validate machine learning models. If we can understand how a model works, we are more likely to trust its decisions. Second, XAI can help us to identify and correct biases in machine learning models. Third, XAI can help us to explain the decisions of machine learning models to users. This can be important in AI applications such as healthcare, where users need to understand why a model has made a certain decision about their treatment.

In certain fields, such as business or healthcare, there is an advantage in using very simple models, such as the Apgar score for assessing a newborn baby’s risk level, which can be worked out with pen and paper. You can find out more in our post on formulas vs intuition in machine learning.
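
To see just how simple such a model can be, here is the Apgar score sketched as code: five clinical signs, each scored 0, 1 or 2 by a clinician, simply added together.

```python
def apgar_score(appearance, pulse, grimace, activity, respiration):
    """Sum of five signs, each scored 0, 1 or 2, giving a total out of 10."""
    components = (appearance, pulse, grimace, activity, respiration)
    assert all(score in (0, 1, 2) for score in components)
    return sum(components)

print(apgar_score(2, 2, 1, 2, 2))  # 9: a healthy newborn
```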

How does Explainable AI work?

There are many different techniques for explainable AI. Some of the most common techniques include:

  • Feature importance: This technique identifies the features that are most important for a machine learning model’s predictions (see the sketch after this list).
  • Local interpretability methods: These methods explain the predictions of a machine learning model for individual data points.
  • Model introspection: This technique allows us to see how a machine learning model makes decisions by examining its internal workings.
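
To illustrate the first of these, here is a sketch of permutation feature importance with scikit-learn, using one of its built-in datasets: each feature is shuffled in turn, and the drop in the model’s score shows how much it relies on that feature.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Shuffle each feature in turn; a bigger drop in score means the
# model leans more heavily on that feature.
result = permutation_importance(
    model, data.data, data.target, n_repeats=5, random_state=0
)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```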

How can Fast Data Science help with Explainable AI?

At Fast Data Science, we are experts in explainable AI. We can help you to understand how your machine learning models work and why they make the decisions they do. We can also help you to identify and correct biases in your models, and to explain the decisions of your models to users.

To learn more about explainable AI, or to get help with your own machine learning projects, please contact us today.
