AI ethics

· Thomas Wood


How do we apply ethics to artificial intelligence?

What is AI ethics, and why do we need it now?

The ever-expanding availability of big data and cloud computing, improved computing power, and recent developments in deep learning algorithms have paved the way for machine learning algorithms to transform nearly every industry.

The data revolution is allowing AI to improve society and quality of life in myriad ways. Essential services such as healthcare are beginning to be transformed, but as with all new technologies, we have much to learn about harmful impacts of AI. A need has arisen to study and define AI ethics.

AI ethics, bias and discrimination

Machine learning algorithms gain their insights from the societies that they analyse. This means that they can also learn implicit bias and perpetuate inequalities such as sexism and racism.

For example, a machine learning algorithm used to predict the likelihood of recidivism (a repeat criminal offence) may unwittingly become biased, if the past data used to train it contains a bias. If the justice system is more likely to convict a person of one race than another, then a machine learning algorithm trained on the decisions of that justice system will be subject to the same bias and prejudice. If such a biased AI is used, for example, on parole or sentencing decisions, this can result in a feedback loop where the biased AI further entrenches the disadvantages of marginalised groups in society.

A case of biased AI reached the Wisconsin Supreme Court in 2016 in State v. Loomis, illustrating how biased AI in justice systems is no longer the preserve of science fiction.


Fortunately, there are AI ethics methods to detect, prevent, and remedy algorithmic bias. One simple option is to play ‘devil’s advocate’ with an AI in development and subject it to a kind of bias penetration testing. This is one of the most important areas of AI ethics.
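The 'devil's advocate' approach above can be sketched in code. The snippet below is a minimal, illustrative example of one such bias test, demographic parity: probe a trained black-box model and compare favourable-outcome rates across groups. The model, field names, and data are hypothetical, not a real recidivism system.

```python
# A minimal sketch of 'bias penetration testing': probe a black-box model
# and compare favourable-outcome rates across groups (demographic parity).
# The model, field names, and records below are hypothetical.

def demographic_parity_gap(model, applicants, protected_attr):
    """Return the gap between the highest and lowest favourable-outcome
    rates across groups, plus the per-group rates."""
    rates = {}
    for group in {a[protected_attr] for a in applicants}:
        members = [a for a in applicants if a[protected_attr] == group]
        favourable = sum(model(a) for a in members)
        rates[group] = favourable / len(members)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model that (wrongly) relies on postcode, which in this
# toy data correlates with the protected attribute.
def biased_model(applicant):
    return 1 if applicant["postcode"] in {"A", "B"} else 0

applicants = [
    {"group": "x", "postcode": "A"},
    {"group": "x", "postcode": "B"},
    {"group": "y", "postcode": "C"},
    {"group": "y", "postcode": "A"},
]

gap, rates = demographic_parity_gap(biased_model, applicants, "group")
print(rates, gap)  # a large gap flags the model for human review
```

In practice this kind of audit would be run over many protected attributes and many synthetic 'matched pairs' of inputs, and a gap above an agreed threshold would block deployment.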

If you would like to ensure that an AI that you are developing is free of bias, please get in contact and we will be pleased to assist you with AI ethics consulting.

AI explainability/transparency, GDPR and HIPAA

Many AI systems, such as deep neural networks, operate in a high-dimensional space with millions of parameters and generate decisions with minimal accountability. Imagine being denied a car loan, or being quoted a high insurance premium, without a comprehensible explanation.

In some cases, the lack of explainability may be acceptable. However, there is often a business requirement, a regulatory requirement, or simply an ethical obligation to explain decisions, especially when they affect private individuals. In Europe, the GDPR guarantees a right to explanation, which can be interpreted as an obligation for companies to explain AI decisions to customers on request.

The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention.

EU GDPR, Recital 71

In the United States, the Health Insurance Portability and Accountability Act (HIPAA) has imposed some similar but less stringent regulations.

This has generated a need for algorithmic explanations. Companies face a difficult balancing act between developing a complex model with optimal predictive power and providing explainability; sometimes model performance must be sacrificed in order to comply with explainability requirements.

Fortunately, improvements in computing power have allowed for nearly any machine learning decision to be ‘explained’. In cases where a model is too complex for its inner parameters to be interpreted directly, it is possible to apply a perturbation to the model input and measure any change in the model decision. A number of machine learning explainability packages and algorithms have appeared in recent years to meet this need.
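The perturbation idea above can be sketched in a few lines: treat the model as a black box, nudge one input feature at a time, and record how much the output moves. The linear 'insurance premium' model and its coefficients below are purely illustrative assumptions, not a real pricing model; production tools such as LIME or SHAP refine the same idea with sampling and local surrogate models.

```python
# A minimal sketch of perturbation-based explainability: nudge each input
# feature of a black-box model and measure the change in its output.
# The premium model, feature names and coefficients are hypothetical.

def perturbation_importance(model, x, delta=1.0):
    """Rank features by how much perturbing each one changes the output."""
    base = model(x)
    importance = {}
    for name in x:
        perturbed = dict(x)
        perturbed[name] += delta
        importance[name] = abs(model(perturbed) - base)
    return importance

# Hypothetical insurance-premium model: age risk matters far more
# than annual mileage.
def premium_model(x):
    return 300 + 12 * x["driver_age_risk"] + 0.5 * x["annual_mileage_k"]

x = {"driver_age_risk": 4.0, "annual_mileage_k": 10.0}
print(perturbation_importance(premium_model, x))
# driver_age_risk moves the premium by ~12 per unit, annual_mileage_k by ~0.5
```

An explanation of this form ("your premium was driven mainly by the age-risk factor") can be generated without any access to the model's internal parameters, which is why the technique works even for very complex models.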

If you are struggling with balancing model explainability against performance, would simply like to comply with your obligations regarding GDPR Recital 71 or similar, or would like to discuss AI ethics in more detail, please get in contact with Fast Data Science and we will arrange a consultation.

Data protection in machine learning

Many big data AI projects involve storing and processing large amounts of personal data. In many countries, tech giants have come under fire from the media and governments for flagrant abuse of user consent, data misuse, unlicensed sharing with third parties, and other dubious practices. AI systems have been known to target or profile subjects in unethical ways, such as serving targeted advertising without consent.

Obtaining user consent at the point that data is gathered is an essential component of AI ethics and of legal compliance, especially as regulation in this area is likely to change in the future.
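Alongside consent, one common data-protection measure is to pseudonymise direct identifiers before analysis. The sketch below shows the idea with salted hashes; the field names and records are hypothetical, and in a real system the salt (and any re-identification key) would be stored under separate access control.

```python
# A minimal sketch of pseudonymisation: replace direct identifiers with
# salted hashes before analysis. Field names and records are hypothetical.
import hashlib

SALT = b"keep-this-secret-and-separate"  # in practice, stored securely

def pseudonymise(record, identifier_fields):
    """Return a copy of the record with identifier fields replaced by
    stable tokens that carry no obvious link to the person."""
    safe = dict(record)
    for field in identifier_fields:
        digest = hashlib.sha256(SALT + str(record[field]).encode()).hexdigest()
        safe[field] = digest[:16]
    return safe

record = {"name": "Jane Doe", "email": "jane@example.com", "age_band": "30-39"}
print(pseudonymise(record, ["name", "email"]))
```

Because the same input always maps to the same token, analysts can still join and aggregate records without ever seeing the raw identifiers. Note that pseudonymised data still counts as personal data under the GDPR, so the other obligations discussed above continue to apply.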

Further reading on AI ethics

- Look up company data from names (video)
- Unstructured data in healthcare with NLP
- How to train your own AI: fine tune an LLM for mental health data
