Some ways that we can model causal effects using machine learning, statistics and econometrics, from a sixth-century religious text to the causal machine learning of 2021, including causal natural language processing.
What is unsupervised learning? When we think about acquiring a skill or learning a new subject, most of us see that process as involving a teacher passing their knowledge on to us. If you’re teaching a child how to distinguish between different fruits, for example, you might show them various images, identifying one as an apple, another as a pear and so on, so that when the child sees these fruits in real life, they can recognise which is which for themselves, guided initially by the labels you provided. This is known as supervised learning, and it is one way in which Artificial Intelligence uses Machine Learning to predict particular outputs, having learned from data points with known outcomes. However, this is not the only way we, or computers for that matter, learn. Let us introduce you to Unsupervised Learning.
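To make the contrast concrete, here is a minimal sketch assuming scikit-learn is installed; the fruit measurements and labels are invented purely for illustration.

```python
# A minimal sketch contrasting supervised and unsupervised learning.
# Assumes scikit-learn is installed; the fruit data below is hypothetical
# (weight in grams, diameter in cm).
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

fruits = [[150, 7.5], [160, 7.8], [170, 8.0],   # apples
          [180, 6.0], [190, 6.2], [200, 6.5]]   # pears
labels = ["apple", "apple", "apple", "pear", "pear", "pear"]

# Supervised learning: the model is given the labels, like a child being
# told "this is an apple, this is a pear".
classifier = KNeighborsClassifier(n_neighbors=3).fit(fruits, labels)
print(classifier.predict([[165, 7.9]]))  # -> ['apple']

# Unsupervised learning: no labels at all; the model has to discover the
# grouping structure in the data by itself.
clusterer = KMeans(n_clusters=2, n_init=10, random_state=0).fit(fruits)
print(clusterer.labels_)  # e.g. [0 0 0 1 1 1] -- two clusters, unnamed
```

In the supervised case the answer comes back as a named category; in the unsupervised case the model only finds groups, and it is up to us to interpret what those groups mean.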
Explainable AI, or XAI, refers to a collection of ways we can analyse machine learning models. It is the opposite of the so-called ‘black box’, a machine learning model whose decisions can’t be understood or explained. Here’s a short video we have made about explainable AI.
Technical Due Diligence on companies with AI products and technologies Are you thinking about making an investment in a startup that claims to use AI or machine learning, and would you like a completely impartial assessment of its actual AI technology or products?
Key data science concepts from A to Z I’ve put together a short selection of intermediate-level data science concepts that will give you a good grounding in the field. Many of these are based on a series of articles I wrote for the excellent data science resource deepai.org. I’ve biased the list a little towards natural language processing, because that’s the area I mainly work in.
What is explainable AI? Explainable AI, or XAI, is a set of methods and techniques that allow us to understand how a machine learning model works and why it makes the decisions it does. Without XAI, a machine learning model might be a “black box”, where even the developers cannot understand how it arrived at a certain decision.
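One widely used XAI technique is permutation feature importance: shuffle each input feature in turn and see how much the model’s accuracy suffers. Below is a minimal sketch assuming scikit-learn is installed; the built-in breast cancer dataset is used purely for illustration.

```python
# A minimal sketch of permutation feature importance, one common XAI
# technique. Assumes scikit-learn is installed; the dataset is the
# built-in breast cancer set, used only as an example.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much accuracy drops: the features
# whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for name, importance in sorted(zip(X.columns, result.importances_mean),
                               key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {importance:.3f}")
```

The output is a ranked list of the features the model depends on most, which is often the first question a stakeholder asks about a “black box”.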
What is Natural Language Processing? Natural Language Processing (NLP) is the area of artificial intelligence dealing with human language and speech. It sits at the crossroads of several disciplines, from linguistics to computer science and engineering, and of course AI.
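A typical first step in NLP is turning raw text into numbers that a machine learning model can work with. Here is a minimal sketch of a bag-of-words representation, assuming scikit-learn is installed; the sentences are invented examples.

```python
# A minimal sketch of a basic NLP step: converting raw text into a
# bag-of-words matrix. Assumes scikit-learn is installed; the documents
# are hypothetical examples.
from sklearn.feature_extraction.text import CountVectorizer

documents = [
    "The patient was prescribed a new medication.",
    "The ship was held in detention at the port.",
]

vectorizer = CountVectorizer()
matrix = vectorizer.fit_transform(documents)

# Each document becomes a vector of word counts over a shared vocabulary.
print(vectorizer.get_feature_names_out())
print(matrix.toarray())
```

Once text is represented numerically like this, the usual machine learning toolbox (classification, clustering, and so on) can be applied to language.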
We built a machine learning model to predict which shipping vessels are likely to be held in detention. The model made it into the shortlist of the Singapore Ocean of Opportunities AI Track, an internationally renowned event where competing companies aim to build AI solutions for the shipping industry. So what can machine learning deliver for the shipping industry?
How can we explain how a neural network recognises an image? Sometimes as data scientists we will encounter cases where we need to build a machine learning model that is not a black box, but which makes transparent decisions that humans and businesses can understand. This can go against our instincts as scientists and engineers, as we would like to build the most accurate model possible.
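One simple way to peek inside an image classifier is an input-gradient saliency map, which highlights the pixels that most influence the prediction. The sketch below assumes PyTorch and torchvision are available; the pretrained weights are downloaded on first use and "ship.jpg" is a hypothetical example image.

```python
# A minimal sketch of input-gradient saliency: which pixels most affect
# the network's prediction? Assumes PyTorch and torchvision are installed;
# "ship.jpg" is a hypothetical example image.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights="IMAGENET1K_V1")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("ship.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

scores = model(img)
top_class = scores.argmax().item()

# Backpropagate the winning class score down to the input pixels.
scores[0, top_class].backward()

# The saliency map is the largest absolute gradient across colour channels:
# bright regions are the pixels that most influenced the decision.
saliency = img.grad.abs().max(dim=1).values.squeeze()
print(saliency.shape)  # torch.Size([224, 224])
```

Saliency maps are only one of several approaches; the point is that even a deep network’s decision can be traced back to evidence a human can inspect.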