Building a face recogniser: traditional methods vs deep learning

By Thomas Wood

Why is face recognition everywhere?

Face recognition technology has existed for quite some time, but until recently face analysis was not accurate enough for most purposes.
Now it seems that face recognition is everywhere:

  • you upload a photo to Facebook and it suggests who is in the picture
  • your smartphone can probably recognise faces
  • lots of celebrity look-a-like apps have suddenly appeared on the app stores
  • police and antiterrorism units all over the world use the latest in face recognition technology

Facial recognition software has recently become much better and much faster thanks to the advent of deep learning: more powerful and parallelised computers, and better software design.
I’m going to talk about what’s changed.

Traditional face recognition: Eigenfaces

The first serious attempts to build a face recogniser date back to the 1980s and 90s and used something called Eigenfaces. An Eigenface is a blurry face-like image, and a face recogniser assumes that every face is made up of lots of these images overlaid on top of each other, pixel by pixel.

Eigenfaces, an early method of face recognition technology

If we want to recognise an unknown face we just work out which Eigenfaces it’s likely to be composed of.
Not surprisingly, the Eigenface method didn't work very well. If you shift a face image a few pixels to the left or right, you can easily see why it fails: the parts of the face no longer line up with the Eigenfaces.
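
To make the recipe concrete, here is a rough sketch of an Eigenface recogniser in Python, assuming you already have aligned grayscale face images flattened into rows of NumPy arrays; scikit-learn's PCA stands in for the original eigendecomposition, and the function names are purely illustrative:

```python
# A minimal Eigenfaces sketch using PCA from scikit-learn.
# `train_faces`, `known_faces` and `query` are assumed to be aligned grayscale
# images flattened into rows of NumPy arrays of shape (n_faces, height * width).
import numpy as np
from sklearn.decomposition import PCA


def fit_eigenfaces(train_faces: np.ndarray, n_components: int = 50) -> PCA:
    """Learn the eigenfaces: the principal components of the training faces."""
    pca = PCA(n_components=n_components, whiten=True)
    pca.fit(train_faces)
    return pca  # each row of pca.components_ is one blurry "eigenface"


def identify(pca: PCA, known_faces: np.ndarray, known_names: list,
             query: np.ndarray) -> str:
    """Express every face as a mix of eigenfaces and return the closest known name."""
    known_coords = pca.transform(known_faces)
    query_coords = pca.transform(query.reshape(1, -1))
    distances = np.linalg.norm(known_coords - query_coords, axis=1)
    return known_names[int(np.argmin(distances))]
```

Because everything happens pixel by pixel, a query image that is shifted by a few pixels projects to quite different coordinates, which is exactly the weakness described above.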


Next step up in complexity: facial feature points

The next generation of face recognisers took each face image and found important points such as the corners of the mouth or the eyebrows. The coordinates of these points are called facial feature points. One well-known commercial program converts every face into 66 feature points.

Facial feature points, a hand-coded method of face recognition technology. Image source

To compare two faces you simply compare the coordinates (after adjusting in case one image is slightly off alignment).
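
As a rough illustration (and not any particular commercial program's method), a feature-point comparison might look like the sketch below, assuming each face arrives as a NumPy array of 66 (x, y) coordinates. The crude alignment here only removes translation and scale; a full Procrustes alignment would also correct rotation.

```python
# Sketch of comparing two faces via facial feature points.
# Each face is assumed to be a NumPy array of shape (66, 2): 66 (x, y) landmarks.
import numpy as np


def align(points: np.ndarray) -> np.ndarray:
    """Crudely normalise landmarks: centre them and scale them to unit size."""
    centred = points - points.mean(axis=0)    # remove translation
    return centred / np.linalg.norm(centred)  # remove overall scale


def landmark_distance(face_a: np.ndarray, face_b: np.ndarray) -> float:
    """Smaller distance means more similar faces under this simple model."""
    return float(np.linalg.norm(align(face_a) - align(face_b)))
```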

Not surprisingly the facial feature coordinates method is better than the Eigenfaces method but is still suboptimal. We are throwing lots of useful information away: hair colour, eye colour, any facial structure that isn’t captured by a feature point, etc.

Deep learning approach to face recognition

The facial feature point method, in particular, relied on a human programming the definition of an “eyebrow” and so on into the computer. The current generation of machine learning face recognition models throws all of this out of the window.

This approach uses convolutional neural networks (CNNs): the network repeatedly walks a kind of stencil over the image, working out where subsections of the image match particular patterns.
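
To make the “stencil” idea concrete, here is a bare-bones NumPy sketch of a single pass with one hand-picked 3×3 kernel. In a real CNN nobody picks the kernel by hand; the network learns thousands of them.

```python
# Walking a "stencil" (a 3x3 kernel) over a grayscale image, one position at a time.
import numpy as np


def convolve(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide `kernel` across `image`; a high output means the patch matches the pattern."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    output = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            patch = image[y:y + kh, x:x + kw]
            output[y, x] = np.sum(patch * kernel)  # how well this patch matches the stencil
    return output


# A vertical-edge "stencil": responds strongly where brightness changes from left to right.
vertical_edge_kernel = np.array([[-1.0, 0.0, 1.0],
                                 [-1.0, 0.0, 1.0],
                                 [-1.0, 0.0, 1.0]])
```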

The first pass picks up corners and edges. Stack five of these passes, each operating on the output of the previous one, and you start to pick up parts of an eye or an ear. After 30 passes, you have recognised a whole face!

The neat trick is that nobody has defined the patterns we are looking for: they emerge from training the network on millions of face images.
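
As a sketch of what such a stack of layers looks like in code, here is a minimal face embedder in PyTorch. Every choice below (96×96 grayscale input, three convolutional layers, a 128-dimensional embedding) is an illustrative assumption rather than the architecture of any real commercial system, and the learned filters only become useful after training on a very large set of faces.

```python
# Minimal sketch of a stacked convolutional face embedder in PyTorch.
# All sizes here (96x96 grayscale input, three conv layers, 128-d embedding) are
# illustrative assumptions; a production recogniser is far deeper and is trained
# on millions of face images.
import torch
import torch.nn as nn


class FaceEmbedder(nn.Module):
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # edges, corners
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # parts of eyes, ears
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # larger face parts
        )
        self.head = nn.Linear(64 * 12 * 12, embedding_dim)  # 96x96 input -> 12x12 feature maps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)            # learned filters, not hand-coded rules
        return self.head(x.flatten(1))  # one embedding vector per face


# Two images are judged to show the same person if their embeddings are close,
# e.g. torch.dist(model(face_a), model(face_b)) falls below some threshold.
```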

Of course this can be an Achilles’ heel of the CNN approach since you may have no idea exactly why a face recogniser gave a particular answer.

The obstacle you encounter if you want to develop your own CNN face recogniser is: where can you get millions of images to train the model? Lots of people scrape celebrity images from the internet for exactly this purpose.

However, you can get many more images if you can persuade people to give you their personal photos for free!

This is why Facebook, Microsoft and Google have some of the most accurate face recognisers: they have access to the resources needed to train machine learning models for facial recognition.

Where is face recognition going now?

The CNN approach is far from perfect, and many companies layer adjustments on top of what I have described to compensate for its limitations, such as correcting for pose and lighting, often using a 3D mesh model of the face.

Machine learning facial recognition models are advancing rapidly and every year the state of the art in face recognition and analysis brings a noticeable improvement.

If you’d like to know more about this field or similar projects, or you want to implement business applications of machine learning facial recognition models, please get in touch.
