Generative Adversarial Networks Made Easy

A fake face generated by the generative adversarial network StyleGAN
Would you hire or date this person? There’s a catch: she doesn’t exist! I generated the image in a few seconds using the software StyleGAN. That’s why you can see some small artefacts in the image if you look carefully.

Human or AI?

Imagine this scenario: you have encountered a profile online of a good-looking person. They might have contacted you about a job, or on a social media site. You might even have swiped right on their face on Tinder.

There is just one little problem. This person may not even exist. The image could have been generated using a machine learning technique called Generative Adversarial Networks, or GANs. GANs were developed in 2014 and have recently experienced a surge in popularity. They have been touted as one of the most groundbreaking ideas in machine learning in the past two decades. GANs are used in art, astronomy, and even video gaming, and are also taking the legal and media world by storm. 

Generative Adversarial Networks are able to learn from a set of training data, and generate new synthetic data with the same characteristics as the training set. The best-known and most striking application is for image style transfer, where GANs can be used to change the gender or age of a face photo, or re-imagine a painting in the style of Picasso. GANs are not limited to just images: they can also generate synthetic audio and video.

Can we also use generative adversarial networks for natural language processing – to write a novel, for example? Read on and find out.

I’ve included links at the end of the article so you can try all the GANs featured yourself.

A generative adversarial network allows you to change parameters and adjust and control the face that you are generating. I generated this series of faces with StyleGAN.

Invention of Generative Adversarial Networks

The American Ian Goodfellow and his colleagues invented Generative Adversarial Networks in 2014 following some ideas he had during his PhD at the University of Montréal. They entered the public eye around 2016 following a number of high profile stories around AI art and the impact on the art world.

A Game of Truth or Lie?

How does a generative adversarial network work? In fact, the concept is quite similar to playing a game of ‘truth-or-lie’ with a friend: you must make up stories, and your friend must guess if you’re telling the truth or not. You can win the game by making up very plausible lies, and your friend can win if they can sniff out the lies correctly.

A Generative Adversarial Network consists of two separate neural networks:

  • The generator: this is a neural network which takes some random numbers as input, and tries to generate realistic fake data, such as fake images.
  • The discriminator: this is a neural network with a simple task: it must spot the generator’s fakes and distinguish them from real images.

The two networks are trained together but must work against each other, hence the name ‘adversarial’. If the discriminator doesn’t recognise a fake as such, it loses a point. Likewise, the generator loses a point if the discriminator can correctly distinguish the real images from the fake ones.

A clip from the British panel show Would I Lie To You, where a contestant must either tell the truth or invent a plausible lie, and the opposing team must guess which it is. Over time the contestants get better at lying convincingly and at distinguishing lies from truth. The initial contestant is like the ‘generator’ in a Generative Adversarial Network, and the opponent is the ‘discriminator’.

How Generative Adversarial Networks Learn

So how does a generative adversarial network learn to generate such realistic fake content?

As with all neural networks, we initialise the generator and discriminator with completely random values. So the generator produces only noise, and the discriminator has no clue how to distinguish anything.

Let us imagine that we want a generative adversarial network to generate handwritten digits, looking like this:

Some examples of handwritten digits from the famous MNIST dataset.

When we start training a generative adversarial network, the generator only outputs pure noise:

The output image of a GAN before training starts: pure noise.

At this stage, it is very easy for the discriminator to distinguish noise from handwritten numbers, because they look nothing alike. So at the start of the “game”, the discriminator is winning.

After a few minutes of training, the generator begins to output images that look slightly more like digits:

After a few epochs, a generative adversarial network starts to output more realistic digits.

After a bit longer, the generator’s output becomes indistinguishable from the real thing. The discriminator can’t tell real examples apart from fakes any more.
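If you’d like to see what this ‘game’ looks like in code, below is a minimal sketch of the training loop in PyTorch, using the MNIST digits. The network sizes, learning rates and number of epochs are illustrative choices of mine, not the settings of any published GAN.

```python
# A minimal sketch of adversarial training on MNIST digits (illustrative settings).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

latent_dim = 64  # size of the random noise vector fed to the generator

# Generator: random noise in, 28x28 "handwritten digit" out
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

# Discriminator: 28x28 image in, probability "this is real" out
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

data = DataLoader(
    datasets.MNIST("data", download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.5,), (0.5,)),  # scale to [-1, 1] to match Tanh
                   ])),
    batch_size=128, shuffle=True,
)

for epoch in range(5):
    for real_images, _ in data:
        real = real_images.view(real_images.size(0), -1)
        batch = real.size(0)
        ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

        # 1. Train the discriminator: real digits should score 1, fakes should score 0
        fake = generator(torch.randn(batch, latent_dim)).detach()
        d_loss = loss_fn(discriminator(real), ones) + loss_fn(discriminator(fake), zeros)
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # 2. Train the generator: it "wins" when the discriminator scores its fakes as real
        g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), ones)
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()
```

The key point is the pair of opposing objectives: the discriminator is rewarded for telling real digits from fakes, while the generator is rewarded for fooling it.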

Applications

Generating Face Images

Generative Adversarial Networks are best known for their ability to generate fake images, such as human faces. The principle is the same as for the handwritten digits in the example above: the network learns from a set of training images, usually celebrity faces, and the generator produces new faces similar to the ones it has learnt from.

A set of faces generated by the generative adversarial network StyleGAN, developed by NVidia.

Interestingly, the generated faces tend to be quite attractive. This is partly due to the use of celebrities as a training set, but also because the GAN performs a kind of averaging effect on the faces that it’s learnt from, which removes asymmetries and irregularities.

Image Style Transfer

As well as generating random images, generative adversarial networks can be used to morph a face from one gender to another, change someone’s hairstyle, or transform various elements of a photograph.

For example, I tried running the code to train the generative adversarial network CycleGAN, which is able to convert horses to zebras in photographs and vice versa. After about four hours of training, the network begins to be able to turn a horse into a zebra (the quality isn’t that great here as I didn’t run the training for very long, but if you run CycleGAN for several days you can get a very convincing zebra).

A horse photograph before and after being transformed into a zebra by the generative adversarial network CycleGAN.

Music

It’s possible to convert an audio file into an image by representing it as a spectrogram, where time is on one axis and frequency is on the other.

The spectrogram of Beethoven’s Military March
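For readers who want to try this themselves, here is a minimal sketch of turning an audio file into a spectrogram with SciPy and Matplotlib; the filename is a placeholder for whichever recording you use.

```python
# A minimal sketch: audio file in, spectrogram image out.
import matplotlib.pyplot as plt
import numpy as np
from scipy import signal
from scipy.io import wavfile

sample_rate, samples = wavfile.read("military_march.wav")  # hypothetical file
if samples.ndim > 1:
    samples = samples.mean(axis=1)  # mix stereo down to mono
samples = samples.astype(float)

# Short-time Fourier transform: how much energy is at each frequency, over time
frequencies, times, spectrogram = signal.spectrogram(samples, fs=sample_rate)

plt.pcolormesh(times, frequencies, 10 * np.log10(spectrogram + 1e-10))  # dB scale
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```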

An alternative method is to treat the music as a MIDI file (the output you would get from playing it on an electronic keyboard), and then transform that to a format that the GAN can handle. Using simple transformations like this, it’s possible to use GANs to generate entirely new pieces of music in the style of a given composer, or to morph speech from one speaker’s voice to another.

The generative adversarial network GANSynth allows us to adjust properties such as the timbre of a piece of music.

Here’s Bach’s Prelude Suite No. 1 in G major:

Bach’s Prelude Suite No. 1 in G major.

And here is the same piece of music with the timbre transformed by GANSynth:

Bach’s Prelude Suite No. 1 in G major with an interpolated timbre, generated by GANSynth.

Generative Adversarial Networks for Natural Language Processing?

After seeing the amazing things that generative adversarial networks can achieve for images, video and audio, I started wondering whether a GAN could write a novel, a news article, or any other piece of text.

I did some digging and found that Ian Goodfellow, the inventor of Generative Adversarial Networks, wrote in a post on Reddit back in 2016 that GANs can’t be used for natural language processing, because GANs require real-valued data.

An image, for example, is made up of continuous values. You can make a single pixel a touch lighter or darker. A GAN can learn to improve its images by making small adjustments. However there is no analogous continuous value in text. According to Goodfellow,

If you output the word “penguin”, you can’t change that to “penguin + .001” on the next step, because there is no such word as “penguin + .001”. You have to go all the way from “penguin” to “ostrich”.

Since all NLP is based on discrete values like words, characters, or bytes, no one really knows how to apply GANs to NLP yet.

Ian Goodfellow, posting on Reddit in 2016

However, since Ian Goodfellow wrote that post, a number of researchers have succeeded in adapting generative adversarial networks for text.

A Chinese team (Yu et al) has developed a generative adversarial network which they used to generate classical Chinese poems, which are made up of lines of four characters each. They found that independent judges were unable to tell the generated poems from real ones.

They then tried it out on Barack Obama’s speeches and were able to generate some very plausible-sounding texts, such as:

Thank you so much. Please, everybody, be seated. Thank you very much. You’re very kind. Thank you.

I´m pleased in regional activities to speak to your own leadership. I have a preexisting conditions. It is the same thing that will end the right to live on a high-traction of our economy. They faced that hard work that they can do is a source of collapse. This is the reason that their country can explain construction of their own country to advance the crisis with possibility for opportunity and our cooperation and governments that are doing. That’s the fact that we will not be the strength of the American people. And as they won’t support the vast of the consequences of your children and the last year. And that’s why I want to thank Macaria. America can now distract the need to pass the State of China and have had enough to pay their dreams, the next generation of Americans that they did the security of our promise. And as we cannot realize that we can take them.

And if they can can’t ensure our prospects to continue to take a status quo of the international community, we will start investing in a lot of combat brigades. And that’s why a good jobs and people won’t always continue to stand with the nation that allows us to the massive steps to draw strength for the next generation of Americans to the taxpayers. That’s what the future is really man, but so we’re just make sure that there are that all the pressure of the spirit that they lost for all the men and women who settled that our people were seeing new opportunity. And we have an interest in the world.

Now we welcome the campaign as a fundamental training to destroy the principles of the bottom line, and they were seeing their own customers. And that’s why we will not be able to get a claim of their own jobs. It will be a state of the United States of America. The President will help the party to work across our times, and here in the United States of uniform. But their relationship with the United States of America will include faith.

Thank you. God bless you. And May God loss man. Thank you very much. Thank you very much, everybody. Thank you. God bless the United States of America. God bless you. Here’s President.

A generated Barack Obama-esque speech, by Yu et al (2017)

Generative Adversarial Networks in Society

Deepfakes

GANs have received substantial attention in the mainstream media because of their part in the controversial ‘deepfakes’ phenomenon. Deepfakes are realistic-looking synthetic images or videos of politicians and other public figures in compromising situations. Malicious actors have created highly convincing footage of people doing or saying things they have never actually done or said.

It has always been possible to Photoshop celebrities or politicians into fake backdrops, or to show them hugging or shaking hands with people they have never actually met. The Soviet apparatus was notorious for airbrushing out-of-favour figures out of photographs in a futile attempt to rewrite history. Generative adversarial networks have taken this one step further by making it possible to create apparently real video footage.

A digitally retouched photograph from the Soviet era. Who knows what the authoritarian state could have achieved with generative adversarial networks? Image is in the public domain.

This is an existential threat to the news media, where the credibility of content is key. How can we know whether a whistle-blower’s hidden-camera clip is real, or whether it is an elaborate fake created with a GAN to destroy an opponent’s reputation? Deepfakes can also be used to add credibility to fake news articles.

The technology poses dark problems. GAN-enabled pornography has appeared on the Internet, created using the faces of real celebrities. Celebrities are currently an easy target because there are already many photos of them online, making it easy to train a GAN to generate their faces, and because public interest in their personal lives is high enough to make posting fake videos or photos lucrative. However, as the technology advances and the size of the required training set shrinks, it will become possible to create fake clips featuring nearly anybody, opening the door to blackmail.

AI Art

Even bona fide uses of generative adversarial networks raise some complicated legal questions. For example, who owns the rights to an image created by a generative adversarial network?

United States copyright law requires a copyrighted work to have a human author. So who owns the rights to an image generated by a GAN: the software engineer who wrote it, the person who used the GAN, or the owner of the training data?

The concept of ‘who is the creator’ was famously put to the test in 2018, when the Parisian arts collective Obvious used a generative adversarial network to create a painting called Edmond de Belamy, which was later printed onto canvas. The artwork sold at Christie’s New York for $432,500. However, it soon emerged that the code to generate the painting had been written by another AI artist, Robbie Barratt, who was not affiliated with Obvious. Public opinion was divided as to whether the three artists in Obvious could rightfully claim to have created the artwork.

The GAN-generated painting Edmond de Belamy, printed on canvas but created using a generative adversarial network by the Parisian collective Obvious. Image is in the public domain.

Future of Generative Adversarial Networks

Generative Adversarial Networks are a young technology but in a short time they have had a large impact on the world of deep learning and also on society’s relationship with AI. So far, the various exotic applications of GANs are only beginning to be explored.

Currently, generative adversarial networks do not yet have widespread use in data science in industry, but we can expect GANs to spread out from academia in the near future. I expect GANs to become widely used in computer gaming, animation, and the fashion industry. A Hong Kong-based biotechnology company called Insilico Medicine is beginning to explore GANs for drug discovery. Companies such as NVidia are investing heavily both in GAN research and in more powerful hardware, so the field looks promising. And of course, we can expect to hear a lot more about GANs and AI art following the impact of Edmond de Belamy.

Links to get started with Generative Adversarial Networks

If you want to run any of the generative adversarial networks that I’ve shown in the article, I’ve included some links here. Only the first one (handwritten digits) will run on a regular laptop; the others require an account with a cloud provider such as AWS or Google Colab, as they need more computing power.

Further Reading about Generative Adversarial Networks

References

Building explainable machine learning models

How can we explain how a neural network recognises an image?

Sometimes as data scientists we will encounter cases where we need to build a machine learning model that should not be a black box, but which should make transparent decisions that humans can understand. This can go against our instincts as scientists and engineers, as we would like to build the most accurate model possible.

In my previous post about face recognition technology I compared some older hand-designed techniques which are easily understandable for humans, such as facial feature points, to state-of-the-art face recognisers which are harder to understand. This is an example of the trade-off between performance and interpretability, or explainability.

The need for explainability

Imagine that you have applied for a loan and the bank’s algorithm rejects you without explanation. Or an insurance company gives you an unusually high quote when the time comes to renew. A medical algorithm may recommend a further invasive test, against the best instincts of the doctor using the program.

Or maybe the manager of the company you are building the model for doesn’t trust anything he or she doesn’t understand, and has demanded an explanation of why you predicted certain values for certain customers.

All of the above are real examples where a data scientist may have to trade some performance for explainability. In some cases the choice comes from legislation. For example some interpretations of GDPR give an individual a ‘right to explanation’ of any algorithmic decision that affects them.

How can we make machine learning models explainable?

One approach is to avoid highly opaque models such as random forests or deep neural networks in favour of more linear models. By simplifying the architecture you may end up with a less powerful model; however, the loss in accuracy may be negligible, and sometimes a model with fewer parameters is more robust and less prone to overfitting. You may also be able to train a complex model and use it to identify feature importances, or to suggest clever preprocessing steps that let you keep your final model linear.

For example, suppose you have a model that predicts sales volume based on product price, day, time, season and other factors. If your manager or customer wants an explainable model, you might convert the weekdays, hours and months into a one-hot encoding and use these as inputs to a linear regression model.
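As a rough illustration of this idea, here is a short sketch using pandas and scikit-learn. The column names and numbers are made up; the point is the pattern of one-hot encoding the calendar features and fitting a plain linear regression.

```python
# A minimal sketch: one-hot encode calendar features, fit an interpretable linear model.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "price":   [1.99, 2.49, 1.99, 2.99],
    "weekday": ["Mon", "Sat", "Sat", "Sun"],
    "hour":    [9, 14, 18, 11],
    "month":   [1, 1, 2, 2],
    "sales":   [120, 340, 410, 290],
})

# One-hot encode the categorical calendar features
X = pd.get_dummies(df[["price", "weekday", "hour", "month"]],
                   columns=["weekday", "hour", "month"])
model = LinearRegression().fit(X, df["sales"])

# Each coefficient is directly interpretable, e.g. "being a Saturday adds N units of sales"
for name, coef in zip(X.columns, model.coef_):
    print(f"{name}: {coef:.1f}")
```

Each coefficient can then be read off directly, which is exactly the kind of explanation a manager can work with.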

Explainable computer vision models

The best models for image recognition and classification are currently Convolutional Neural Networks (CNNs). But they present a problem from a human comprehension point of view: if you want to make the 10 million numbers inside a CNN understandable for a human, how would you proceed? If you’d like a brief introduction to CNNs please check out my previous post on face recognition.

You can make a start by breaking the problem up and looking at what the different layers are doing. We already know that the first layers in a CNN typically recognise edges, that later layers are activated by corners, and that deeper layers respond to gradually more complex shapes.

You can take a series of images of different classes and look at the activations at different points. For example, if you pass a series of dog images through a CNN:

Dog image to be passed through an explainable CNN. Image credit: Zeiler & Fergus (2014) [1]

…by the 4th layer you can see patterns like this, where the neural network is clearly starting to pick up on some kind of ‘dogginess’.

The activations of a neural network by the 4th layer, explaining how the neural network has detected some ‘dogginess’. Image credit: Zeiler & Fergus (2014) [1]

Taking this one step further, we can tamper with different parts of the image and see how this affects the activation of the neural network at different stages. By greying out different parts of this Pomeranian we can see the effect on Layer 5 of the neural network, and then work out which parts of the original image scream ‘Pomeranian’ most loudly to the neural network.

If you grey out different segments of an input image, you can see which parts most affect layer 5 of the neural network. This starts to make the model more explainable. Image credit: Zeiler & Fergus (2014) [1]
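A rough sketch of this kind of occlusion experiment is shown below, using a stock pretrained classifier from torchvision. The model, image file and patch size are my own illustrative choices, not the exact setup of Zeiler & Fergus.

```python
# A rough sketch of occlusion sensitivity: slide a grey square over the image and
# record how much the model's confidence in its original prediction drops.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(pretrained=True).eval()
to_tensor = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])

image = to_tensor(Image.open("pomeranian.jpg").convert("RGB"))  # hypothetical file

with torch.no_grad():
    target = model(normalize(image).unsqueeze(0)).softmax(dim=1).argmax().item()

patch, stride = 32, 32
scores = torch.zeros(224 // stride, 224 // stride)
for i in range(0, 224, stride):
    for j in range(0, 224, stride):
        occluded = image.clone()
        occluded[:, i:i + patch, j:j + patch] = 0.5  # grey out one square
        with torch.no_grad():
            prob = model(normalize(occluded).unsqueeze(0)).softmax(dim=1)[0, target].item()
        scores[i // stride, j // stride] = prob

# Squares with the lowest scores are the regions the network relies on most
print(scores)
```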

Using these techniques, if your neural-network face recogniser backfires and lets an intruder into your house, and you still have the input images, it would be possible to unpick the CNN and work out where it went wrong. Unfortunately, going this deep into a neural network takes a lot of time, so a lot of work remains to be done on making neural networks more explainable.

Convolutional neural network explainability by masking parts of a dog image

Moving towards linear models for explainability

Imagine you have trained a price elasticity model that uses third-order polynomial regression, but your client requires something easier to understand. They want to know: for each additional penny knocked off the price of the product, how much do sales increase? Or for each additional year of a vehicle’s age, how much does the price depreciate?

You can try a few tricks to make this more understandable. For example, you can convert your polynomial model into a series of joined linear regression models, as in the figures and sketch below. This should give almost the same predictive power but could be more interpretable.

Traditional polynomial regression fitting a curve, showing car price depreciation by age of vehicle
Splitting up the data into segments and applying a linear regression to each segment. This is helpful because it gives a ballpark rate of depreciation at each stage, which salespeople might find useful for quick calculations.
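A minimal sketch of this segmented approach, with made-up car price data, might look like the following; the segment boundaries are chosen by eye.

```python
# A minimal sketch of a segmented fit: one straight line per age range, so each
# segment has a quotable "depreciation per year". Data is invented for illustration.
import numpy as np

ages = np.array([0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12])
prices = np.array([28, 25, 21, 18, 16, 14.5, 13, 12, 11, 10.5, 10, 9])  # in £1000s

for lo, hi in [(0, 3), (3, 7), (7, 13)]:  # age ranges chosen by eye
    mask = (ages >= lo) & (ages < hi)
    slope, intercept = np.polyfit(ages[mask], prices[mask], deg=1)
    print(f"Ages {lo}-{hi}: roughly £{abs(slope) * 1000:.0f} lost per year")
```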

Explaining recommendation algorithms

Recommendation systems such as Netflix’s movie recommendations are notoriously hard to get right, and users are often mystified by what they see as strange recommendations. Recommendations are usually calculated, directly or indirectly, from shows that the user has previously watched. So the simplest way of explaining a recommendation system is to display a message such as ‘we’re recommending you The Wire because you watched Breaking Bad’ – which is Netflix’s approach.

General method applicable to all models

There have been some efforts to arrive at a technique that can demystify and explain a machine learning model of any type, no matter how complex.

The technique that I described for investigating a convolutional neural network can be broadly extended to any kind of model: you can perturb the input to a machine learning model and monitor how its output responds. For example, if you have a text classification model, you can change or remove different words in the document and watch what happens.

Explainability library LIME

One implementation of this technique is called LIME, or Local Interpretable Model-Agnostic Explanations [2]. LIME works by taking an input, creating thousands of slightly perturbed copies of it, passing these copies to the ML model, and comparing the output probabilities. This way it’s possible to investigate a model that would otherwise be a black box.
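To give a flavour of what calling LIME looks like in practice, here is a self-contained sketch using LIME’s text explainer, with a toy TF-IDF and logistic regression classifier standing in for a real model; the texts and labels are invented for illustration.

```python
# A hedged sketch of LIME's text explainer on a toy stand-in classifier.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "and I don't think we should go back there",
    "the police examined the scene of the crime",
    "and I don't know what you want me to say",
    "the detective filed a report with the station",
]
labels = [1, 0, 1, 0]  # 1 = "author A", 0 = "author B" (toy data)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["author B", "author A"])
explanation = explainer.explain_instance(
    "and I don't think the police were right",  # the text we want explained
    model.predict_proba,  # must accept a list of strings and return class probabilities
    num_features=5,
)
print(explanation.as_list())  # [(word, weight), ...] showing which words drove the prediction
```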

Trying out LIME on a CNN text classifier

I tried out LIME on my author identification model. I gave the model an excerpt of one of JK Rowling’s non-Harry Potter novels, which it correctly attributed to her, and asked LIME to explain the decision. LIME then tried changing words in the text and checked which changes increased or decreased the probability that JK Rowling wrote it.

LIME explanation for an extract of The Cuckoo’s Calling by JK Rowling, for predictions made by a stylometry model trained on some of her earlier Harry Potter novels

LIME’s explanation of the stylometry model is interesting as it shows how the model has recognised the author by subsequences of function words such as ‘and I don’t…’ (highlighted in green) rather than strong content words such as ‘police’.

However the insight provided by LIME is limited because under the hood, LIME is perturbing words individually, whereas a neural network based text classifier looks at patterns in the document on a larger scale.

I think that for more sophisticated text classification models there is still some work to be done on LIME so that it can explain more succinctly what subsequences of words are the most informative, rather than individual words.

Can LIME explain image classifiers?

With images, LIME gives some more exciting results. You can get it to highlight the pixels in an image which led to a certain decision.

LIME highlighting in pink the parts of face images that “look like” certain people. Image credit: Ribeiro, Singh, Guestrin (2016) [2]
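If you want to produce this kind of picture yourself, here is a hedged sketch using LIME’s image explainer, with a stock ImageNet classifier standing in for a face recogniser; the image file and model choice are assumptions of mine, not the setup used by Ribeiro et al.

```python
# A hedged sketch of LIME's image explainer with a stock torchvision classifier.
import numpy as np
import torch
from lime import lime_image
from PIL import Image
from skimage.segmentation import mark_boundaries
from torchvision import models, transforms

model = models.resnet18(pretrained=True).eval()
normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])

def predict(images):
    # LIME passes a batch of perturbed images as an (N, H, W, 3) array in [0, 1]
    batch = torch.from_numpy(np.stack(images)).permute(0, 3, 1, 2).float()
    with torch.no_grad():
        return model(normalize(batch)).softmax(dim=1).numpy()

img = np.array(Image.open("face.jpg").convert("RGB").resize((224, 224))) / 255.0  # hypothetical file

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(img, predict, top_labels=1,
                                          hide_color=0, num_samples=1000)
picture, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                               positive_only=True, num_features=5)
highlighted = mark_boundaries(picture, mask)  # regions that pushed the prediction up
```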

Conclusion

There is a huge variety of machine learning models being used and deployed for diverse purposes, and their complexity is increasing. Unfortunately many of them are still used as black boxes, which can pose a problem for accountability, industry regulation, and users’ confidence in entrusting important decisions to algorithms.

The simplest solution is sometimes to make compromises, such as trading performance for interpretability. Simplifying machine learning models for the sake of human understanding can have the advantage of making models more robust.

Thankfully there have been some efforts to build explainability platforms to make black-box machine learning more transparent. In this article I experimented with LIME, which aims to be model-agnostic, but there are other alternatives available.

Hopefully in time regulation will catch up with the pace of technology, and we will see better ways of producing interpretable models which do not reduce performance.

References

  1. Zeiler M.D., Fergus R. (2014) Visualizing and Understanding Convolutional Networks. In: Fleet D., Pajdla T., Schiele B., Tuytelaars T. (eds) Computer Vision – ECCV 2014. ECCV 2014. Lecture Notes in Computer Science, vol 8689. Springer, Cham
  2. Ribeiro, M.T., Singh, S., Guestrin, C. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. Proceedings of NAACL-HLT 2016 (Demonstrations), 97-101. 10.18653/v1/N16-3020.