AI for healthcare

What is AI for healthcare?

We can define AI for healthcare as any use of algorithms to enable the diagnosis, prognosis, and treatment of patients. Healthcare AI falls under the umbrella of medtech or health tech, a field which also includes non-AI technology used for healthcare. Many healthcare AI applications use natural language processing, computer vision, and deep learning to emulate and augment the skills of human physicians. In recent years, the AI revolution has begun to affect all branches of healthcare. These technologies are normally a form of narrow AI: highly specialised algorithms designed to solve one particular problem better or faster than a human.

What is driving the popularity of healthcare AI?

The main factors driving the sudden uptake of AI for healthcare are the widespread availability of health data, improvements in computing software and hardware, and the intense pressure on healthcare systems around the world to use that data to deliver better care to patients. Adoption of AI in healthcare has been somewhat slower than in some other industries, due to heightened regulation as well as privacy and ethical concerns, but it is nonetheless becoming an essential part of modern healthcare.

[Figure: the start of an Electronic Medical Record (EMR), an example of the unstructured health data used to train healthcare AI models.]

Examples of AI for healthcare

At Fast Data Science, we have worked on many AI projects in healthcare and pharmaceuticals, mainly using natural language processing. Common applications of AI in healthcare include:

  • Electronic health records in many healthcare systems are a disparate array of databases with unstructured text fields. They often contain a wealth of data that can be used for descriptive analyses and predictive models, but it is impossible for a single human to read through the number of documents required. Natural language processing models such as transformers can be trained to predict the likelihood of an adverse event such as a heart attack or stroke from the documentation of a patient’s regular visits, allowing clinicians to intervene before problems occur (see the sketch after this list).
  • Pathologists traditionally spend large amounts of time analysing specimens on glass slides. Given enough training data, a computer vision model such as a convolutional neural network can learn to interpret a specimen faster and more accurately than a human pathologist. This does not render the pathologist obsolete; rather, it frees them to dedicate their time to higher-level analysis and synthesis of results, improving the accuracy of decisions and the resulting quality of care.

  • The National Health Service has struggled with turnover of junior doctors for some years. The organisation must invest up to £200,000 in each trainee before that individual reaches the rank of consultant or GP, and attrition on the training pathway, caused by dropouts, illness, emigration or other factors, costs the NHS millions of pounds each year. At Fast Data Science we have been working on a bespoke predictive model for Health Education England, predicting the likelihood of junior doctor attrition at an individual level and allowing decision-makers within the NHS to develop a data-driven employee retention strategy.
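
To make the transformer idea in the first bullet above more concrete, here is a minimal sketch of fine-tuning a small encoder model to flag clinical notes associated with a later adverse event. The model name, the tiny synthetic notes and the label scheme are illustrative assumptions, not a description of any real project or dataset; real work would use de-identified records under appropriate governance.

```python
# Minimal sketch: fine-tune a transformer to flag notes linked to an adverse event.
# All data below is synthetic and purely illustrative.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "distilbert-base-uncased"  # assumption: any encoder model could be used

notes = [
    "Patient reports chest tightness on exertion; history of hypertension.",
    "Routine follow-up, bloods within normal range, no new symptoms.",
]
labels = [1, 0]  # 1 = adverse event (e.g. heart attack or stroke) during follow-up

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

class NotesDataset(Dataset):
    """Wraps tokenised notes and outcome labels for the Trainer."""
    def __init__(self, texts, outcome_labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True)
        self.outcome_labels = outcome_labels

    def __len__(self):
        return len(self.outcome_labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.outcome_labels[idx])
        return item

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adverse_event_model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=NotesDataset(notes, labels),
)
trainer.train()

# Estimate the probability of an adverse event for a new note
model.eval()
new_note = "New onset of left-sided weakness reported by carer."
inputs = {k: v.to(model.device)
          for k, v in tokenizer(new_note, return_tensors="pt").items()}
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(f"P(adverse event) = {probs[0, 1]:.2f}")
```

In practice, a model like this would be used to prioritise patients for clinical review rather than to make decisions autonomously.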

If you are in the healthcare space, have large amounts of data, and would like to discuss your options with a machine learning expert, please get in touch and we would be glad to arrange a free consultation.

AI in healthcare

Artificial intelligence in healthcare is the use of complex algorithms and software to emulate human cognition in the analysis, interpretation, and comprehension of complicated medical and healthcare data.

Concerns about AI in healthcare

The rapidly changing landscape of AI in healthcare has naturally raised concerns about its impact on medicine and on society as a whole. One concern is the potential to increase inequality in healthcare, creating a division between groups with access to cutting-edge healthcare AI and those without.

Healthcare data protection

In 2019 it was reported that the UK’s National Health Service had signed a number of data sharing agreements with Google Health, allowing the company access to the sensitive records of more than a million NHS patients. Google’s plans included a project to detect eye disease from retinal scans with high accuracy. Many in the UK expressed their unease at their data potentially being shared with a large multinational without their explicit consent.

In Europe patients’ data is protected by the GDPR, and in the USA by HIPAA. However, nearly all healthcare machine learning models require access to extremely sensitive data for their development, and it has been difficult to strike a balance between permitting the development of these models and preventing abuse of patient data by nefarious parties. Imagine, for example, if a health insurance company were able to obtain highly sensitive patient data such as genomic data, allowing it to raise premiums for patients based on this information before those patients are even aware of their risk status.

Explainability of AI for healthcare

Other concerns frequently raised about AI in healthcare include the danger of black-box models, which deliver a decision without any accountability or explanation of the reasoning behind it. This can be mitigated either by simplifying the machine learning model used or by resorting to explainable machine learning techniques.
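
For instance, one way to keep the reasoning visible is to use a transparent model whose parameters can be read directly. The sketch below trains a logistic regression on a handful of hypothetical, named clinical features and prints each coefficient as a human-readable account of what pushes a prediction up or down; the feature names and values are invented for illustration.

```python
# Minimal sketch of an interpretable alternative to a black-box model.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["age", "systolic_bp", "smoker", "bmi"]  # hypothetical features
X = np.array([
    [54, 130, 1, 27.0],
    [61, 145, 0, 31.5],
    [39, 118, 0, 22.1],
    [70, 160, 1, 29.8],
])
y = np.array([0, 1, 0, 1])  # hypothetical outcome labels

# Standardise features so the coefficients are comparable in size
X_scaled = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(X_scaled, y)

# Each coefficient shows how strongly a feature pushes the predicted risk
# up (+) or down (-), giving a readable account of the model's reasoning.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:12s} {coef:+.3f}")
```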

Algorithmic bias in healthcare AI

Finally, many have raised the issue of bias in machine learning having an adverse effect on the healthcare received by minorities and exacerbating the inequalities already present in society. Examples include a diagnostic model trained predominantly on data from white males failing to achieve the same accuracy for other groups. Fortunately, with rigorous quality control it is possible to address the issue of algorithmic bias and allay these concerns.
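
One practical quality-control step is to measure a model’s performance separately for each demographic group before deployment and investigate any gap. The predictions, labels and group names in the sketch below are invented for illustration.

```python
# Minimal sketch: check whether a model's accuracy drops for particular groups.
# Predictions, labels and group names are invented for illustration only.
import pandas as pd
from sklearn.metrics import accuracy_score

results = pd.DataFrame({
    "group":      ["group A", "group A", "group B", "group B", "group C", "group C"],
    "true_label": [1, 0, 1, 0, 1, 0],
    "predicted":  [1, 0, 0, 0, 1, 1],
})

# A large accuracy gap between groups is a red flag, prompting retraining on
# more representative data, reweighting, or a review of the features used.
for group, subset in results.groupby("group"):
    accuracy = accuracy_score(subset["true_label"], subset["predicted"])
    print(f"{group}: accuracy = {accuracy:.2f}")
```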
