Why do we need Explainable AI (video)

· Thomas Wood

What is explainable AI?

Explainable AI, or XAI, is a set of methods and techniques that allow us to understand how a machine learning model works and why it makes the decisions it does. Without XAI, a machine learning model might be a “black box”, where even the developers cannot understand how it arrived at a certain decision.

Examples of how explainable AI can work


Need an explainable model?

We can ensure your model is accountable, understandable, and explainable. At Fast Data Science, we like Occam’s Razor! That means we won’t use black boxes such as neural networks unless they are necessary.

Explainable AI techniques vary with the complexity of the model. For a simple machine learning model such as linear regression (y = mx + c), it is easy to understand why the model has made a certain decision, because there are only two parameters: the gradient m and the intercept c.
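
As a minimal sketch of this (using scikit-learn and an invented dataset purely for illustration), the two parameters of a one-variable linear regression can simply be read off the fitted model:

```python
# Minimal sketch: a one-variable linear regression is fully explainable,
# because its behaviour is captured by just two fitted parameters.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))          # illustrative inputs
y = 3 * X[:, 0] + 2 + rng.normal(0, 0.5, 100)  # roughly y = 3x + 2 plus noise

model = LinearRegression().fit(X, y)
print("gradient m:", model.coef_[0])     # close to 3
print("intercept c:", model.intercept_)  # close to 2
# Every prediction is just m * x + c, so each decision is easy to explain.
```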

However, more complex machine learning models, such as deep learning models and convolutional neural networks, can have many millions of parameters, and it becomes much harder to understand the decisions they make.

Explainable AI for very complex models

For extremely complex models, explainable AI techniques normally consist of introducing small variations, or perturbations, into the model’s input and observing the changes in its output. For example, if a computer vision model is 87% confident that an image is a cat, and changing one pixel reduces that confidence to 85%, we can conclude that the pixel contributed an element of ‘cattiness’ from the model’s point of view. Repeating this across the whole image gives us a map of which parts of the image the model considers most cat-like.
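
As a rough sketch of the idea (predict_cat below is a hypothetical stand-in for any model that returns a ‘cat’ confidence for an image; patches rather than single pixels are perturbed here to keep it fast):

```python
# Sketch of perturbation-based explanation: blank out one patch of the image
# at a time and record how far the model's confidence drops.
import numpy as np

def perturbation_map(image, predict_cat, patch=8):
    """image: 2D greyscale array; predict_cat: hypothetical confidence function."""
    baseline = predict_cat(image)                     # e.g. 0.87
    heatmap = np.zeros(image.shape, dtype=float)
    height, width = image.shape
    for i in range(0, height, patch):
        for j in range(0, width, patch):
            perturbed = image.copy()
            perturbed[i:i + patch, j:j + patch] = 0   # blank out one patch
            drop = baseline - predict_cat(perturbed)  # e.g. 0.87 - 0.85 = 0.02
            heatmap[i:i + patch, j:j + patch] = drop  # large drop = cat-like region
    return heatmap
```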

The beauty of this approach is that we don’t need any knowledge of the model’s internal architecture to perform the analysis.

There are several well-known frameworks for XAI; one of the most widely used in Python is currently LIME.
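
As a hedged sketch of what this looks like in practice (assuming the lime and scikit-learn packages are installed; the iris dataset and random forest here are purely illustrative):

```python
# Sketch: explaining a single prediction of a tabular classifier with LIME.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain the model's prediction for one individual flower
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # each feature's contribution to this prediction
```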

Read more about explainable AI in our earlier blog post on the topic.

Why is Explainable AI important?

There are several reasons why explainable AI is important. First, it can help us to trust and validate machine learning models. If we can understand how a model works, we are more likely to trust its decisions. Second, XAI can help us to identify and correct biases in machine learning models. Third, XAI can help us to explain the decisions of machine learning models to users. This can be important in AI applications such as healthcare, where users need to understand why a model has made a certain decision about their treatment.

In certain fields, such as business or healthcare, there is an advantage in using very simple models such as the APGAR score for assessing a newborn baby’s risk level, which can be worked out with pen and paper. You can find out more in our post on formulas vs intuition in machine learning.
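
To illustrate just how simple such a model can be, here is a sketch of the standard APGAR calculation, where a clinician scores each of five criteria as 0, 1 or 2:

```python
# Sketch: the APGAR score is a fully explainable pen-and-paper model.
def apgar_score(appearance, pulse, grimace, activity, respiration):
    """Each criterion is scored 0, 1 or 2; the total ranges from 0 to 10."""
    return appearance + pulse + grimace + activity + respiration

print(apgar_score(2, 2, 1, 2, 2))  # 9 - every point traces back to one criterion
```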

How does Explainable AI work?

There are many different techniques for explainable AI. Some of the most common techniques include:

  • Feature importance: This technique identifies the features that contribute most to a machine learning model’s predictions (see the sketch after this list).
  • Local interpretability methods: These methods explain the predictions of a machine learning model for individual data points.
  • Model introspection: This technique allows us to see how a machine learning model makes decisions by examining its internal workings.
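
As a minimal sketch of the first of these, feature importance, here is scikit-learn’s permutation importance applied to an illustrative dataset and model:

```python
# Sketch: permutation feature importance - shuffle each feature in turn and
# measure how much the model's test score drops.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")  # the features the model relies on most
```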

How can Fast Data Science help with Explainable AI?

At Fast Data Science, we are experts in explainable AI. We can help you to understand how your machine learning models work and why they make the decisions they do. We can also help you to identify and correct biases in your models, and to explain the decisions of your models to users.

To learn more about explainable AI, or to get help with your own machine learning projects, please contact us today.
