NLP on under-resourced languages

Thomas Wood

“Thinking too much”

I have been working on the development of Harmony, a tool that helps psychology researchers harmonise plain-text questionnaire items across languages so that they can combine datasets from disparate sources. One of the challenges put to us by Wellcome, the funders of the mental health data prize research grant for Harmony, was: how well does Harmony handle culture-specific concepts? Psychology has the notion of “cultural concepts of distress”: the idea that some mental health disorders manifest themselves differently in different cultures.

Shona, or chiShona, is spoken mainly in Zimbabwe and belongs to the Bantu language family, along with Swahili, Zulu and Xhosa. An example of a “cultural concept of distress” is the Shona word “kufungisisa”, which can be translated as “thinking too much”.

Kufungisisa is derived from the verb stem -funga, to think, as follows:

Shona          English
-funga         think
kufunga        to think
ndofunga       I think
-isa           (causative suffix: “to cause to do”)
-isisa         (intensive suffix: “to do intensely or excessively”)
kufungisisa    to think deeply, to think too much; a Shona idiom for non-psychotic mental illness

Other examples of cultural concepts of distress include hikikomori (Japanese: ひきこもり or 引きこもり), a form of severe social withdrawal in which a person refuses to leave their parents’ house, does not work or go to school, and isolates themselves from society and family, often in a single room.

In order to see if we could match this kind of item using semantics and document vector embeddings, I had to look for a trained language model which could handle text in Shona. Luckily, there has been a project to train large language models in a number of African languages, and I was able to pass my Shona text through the model xlm-roberta-base-finetuned-shona trained by David Adelani at Google DeepMind and UCL. I found that the model was reasonably good at matching monolingual Shona text, but could not match mixed English and Shona text.
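The matching step described above can be sketched as mean-pooled token embeddings compared by cosine similarity. The toy vectors below stand in for the hidden states that the Shona model would return (in practice they would come from Hugging Face Transformers, e.g. loading the model id Davlan/xlm-roberta-base-finetuned-shona); both the pooling choice and that model id are assumptions for illustration, not necessarily Harmony’s exact pipeline.

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average a sentence's token vectors into one document vector,
    skipping padding positions (where the attention mask is 0)."""
    mask = attention_mask[:, None].astype(float)
    return (token_embeddings * mask).sum(axis=0) / mask.sum()

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two document vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins: real runs would embed two questionnaire items with the
# Shona model and compare the pooled vectors.
item_a = mean_pool(np.array([[1.0, 0.0], [3.0, 0.0], [9.0, 9.0]]),
                   np.array([1, 1, 0]))   # third position is padding, ignored
item_b = np.array([4.0, 0.0])
score = cosine_similarity(item_a, item_b)  # 1.0: vectors point the same way
```

Items whose pooled vectors score close to 1 are treated as candidate matches; the mixed English/Shona failure I observed suggests the model’s embedding space does not align the two languages well.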


The Shona model that I found was developed as part of a paper by Alabi et al., in which they developed LLMs for Amharic, Hausa, Igbo, Malagasy, Chichewa, Oromo, Naija (Nigerian Pidgin English), Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa (Xhosa), Yoruba, and isiZulu (Zulu), as well as afro-xlmr-large, which covers 17 languages.

In particular, to address the lack of resources for certain languages, the researchers used language adaptive fine-tuning (LAFT), which involves taking an existing multilingual language model and continuing to fine-tune it on monolingual text in the target language.

You can read a write-up of my experiments with the Shona model here, and you can download my code as a Jupyter notebook here.

I would be curious to find out how well culture-specific concepts can be represented by embeddings, but I do not have a definitive answer yet, as multilingual LLMs are still in their early stages.
