NLP on under-resourced languages

· Thomas Wood


Image from: harmonydata.ac.uk

“Thinking too much”

I have been working on the development of Harmony, a tool that helps psychology researchers harmonise plain-text questionnaire items across languages so that they can combine datasets from disparate sources. One of the challenges put to us by Wellcome, the funder of Harmony's Mental Health Data Prize research grant, was to find out how well Harmony handles culture-specific concepts. Psychology has a notion of "cultural concepts of distress": the idea that some mental health disorders manifest in culture-specific ways.

Shona, or chiShona, is spoken mainly in Zimbabwe and belongs to the Bantu language family, along with Swahili, Zulu and Xhosa. An example of a “cultural concept of distress” is the Shona word “kufungisisa”, which can be translated as “thinking too much”.

Kufungisisa is derived from the verb stem -funga, to think, as follows:

Shona          English
-funga         think
kufunga        to think
ndofunga       I think
-isa           (causative suffix: "to cause to do")
-isisa         (intensive suffix: "to do intensely or thoroughly")
kufungisisa    think deeply, think too much; a Shona idiom for non-psychotic mental illness

Other examples of cultural concepts of distress include hikikomori (Japanese: ひきこもり or 引きこもり), a form of severe social withdrawal in which a person refuses to leave their parents' house, does not work or go to school, and isolates themselves from society and family in a single room.

To see whether we could match this kind of item using semantic similarity and document vector embeddings, I had to look for a trained language model that could handle text in Shona. Luckily, there has been a project to train large language models in a number of African languages, and I was able to pass my Shona text through the model xlm-roberta-base-finetuned-shona, trained by David Adelani at Google DeepMind and UCL. I found that the model was reasonably good at matching monolingual Shona text, but it could not match mixed English and Shona text.
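
As a rough illustration, here is a minimal sketch of that kind of matching step, assuming the model is available on the Hugging Face hub as Davlan/xlm-roberta-base-finetuned-shona and using simple mean pooling to build a document vector (the pooling strategy and the example words are my assumptions, not necessarily what my notebook does):

```python
# A minimal sketch: embed two Shona words with a hub model and compare them.
# The hub ID and mean-pooling strategy are assumptions for illustration.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "Davlan/xlm-roberta-base-finetuned-shona"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool the model's last hidden states into one document vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    mask = inputs["attention_mask"].unsqueeze(-1)  # zero out padding positions
    summed = (outputs.last_hidden_state * mask).sum(dim=1)
    return summed / mask.sum(dim=1)

# Compare two words from the derivation table above
vec_a = embed("kufunga")      # "to think"
vec_b = embed("kufungisisa")  # "to think too much"

similarity = torch.nn.functional.cosine_similarity(vec_a, vec_b)
print(f"Cosine similarity: {similarity.item():.3f}")
```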


The Shona model that I found was developed as part of a paper by Alabi et al., in which they trained language models for Amharic, Hausa, Igbo, Malagasy, Chichewa, Oromo, Naija (Nigerian Pidgin English), Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa (Xhosa), Yoruba, and isiZulu (Zulu), as well as afro-xlmr-large, which covers 17 languages.

In particular, to address the scarcity of training data for certain languages, the researchers used language adaptive fine-tuning (LAFT), which involves taking an existing multilingual language model and fine-tuning it on monolingual text in the target language, as sketched below.
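
As a sketch of what LAFT looks like in practice, the code below continues masked-language-model pretraining of xlm-roberta-base on a monolingual corpus. The corpus file shona.txt and the hyperparameters are hypothetical placeholders, not the settings from the paper:

```python
# A minimal LAFT sketch: continue MLM pretraining of a multilingual model
# on monolingual target-language text. "shona.txt" and the hyperparameters
# are placeholders, not the paper's actual settings.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "xlm-roberta-base"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForMaskedLM.from_pretrained(BASE_MODEL)

# One sentence of target-language text per line (hypothetical corpus file)
dataset = load_dataset("text", data_files={"train": "shona.txt"})

tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True,
    remove_columns=["text"],
)

# The standard MLM objective: randomly mask 15% of tokens
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="xlmr-laft-shona", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
trainer.save_model("xlmr-laft-shona")
```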

You can read a write-up of my experiments with the Shona model here, and you can download my code in a Jupyter notebook here.

I would be curious to find out how well culture-specific concepts can be represented by embeddings, but I do not have a definitive answer yet, as multilingual LLMs are still in their early stages.

References
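
Alabi, J. O., Adelani, D. I., Mosbach, M., & Klakow, D. (2022). Adapting pre-trained language models to African languages via multilingual adaptive fine-tuning. In Proceedings of the 29th International Conference on Computational Linguistics (COLING 2022). arXiv:2204.06487.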
