🎥 Celebrity GAN: Generating Faces Out of Stars
Reformed Opera Singer Takes on GANs:
A year ago, I was an opera singer, a voice teacher, and a songwriter; in other words, many facets of who and what I am would have me labeled the creative type. And I do not think many people realize how much creativity there is in data science, especially in machine learning and deep learning. A month ago, I did not even know what a GAN was. I had heard the term thrown around ever since I started the data science life, and from the sound of it, I knew it was going to be something I would really like. Honestly, though, there is little in machine learning that I do not thoroughly love. GANs are a whole new level of awesome.
I get to build models, feed them things, and see what comes out?! This is literally just a grown-up version of everything I loved as a child. I was the kid who always took anything mechanical apart so I could figure out how it worked. I DESTROYED my grandfather's tools just experimenting with building things, dismantling things, and basically demolishing things in the name of curiosity during the summers when I had way too much time on my hands, endless energy, and investigation always brewing from within. So it is no surprise to me that I gladly throw myself full-force into machine learning, deep learning, and GANs!
It is not as if every GAN-generated picture is even any good. A lot of them look completely ridiculous, which is pretty fun too. The interesting part for me is seeing the progression over the course of training, sitting on the literal edge of my seat waiting to see what it generates next, seeing what the generator and discriminator came up with this time, and getting REALLY sad when it all goes wrong and I have to start over. But then I get to start over! I get to do more and see more! I get to build something and then get to know it and learn more about my construction. For the kid who was always looking to feed an ever-starving curiosity, this is heaven. And I am still that kid.
The Project: Celebrity GAN
Where to begin?! Firstly, the GAN architecture is so incredibly intriguing. Now I get to play with not one but TWO models! And I get to make them fight! Well, ok, they do not FIGHT fight, but their little machine-learning poker-face dance can get pretty wild: the generator forges images out of random noise, the discriminator tries to tell those forgeries apart from real photos, and each gets better by trying to outdo the other. And just considering the structure leads my mind to jump immediately to how this style of architecture could be stacked, model upon model working off of each other, each a tiny gear in a giant machine. And that is when I realize I am getting a little ahead of myself.
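To make that poker-face dance concrete, here is a toy version of the adversarial loop in plain Python: a two-parameter generator tries to produce numbers near the real data's mean, while a logistic-regression discriminator tries to tell real from fake. The 1-D setup, the hyperparameters, and all the names here are illustrative choices of mine, not anything from the actual project:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Numerically stable logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    ex = math.exp(x)
    return ex / (1.0 + ex)

# "Real" data lives around 3.0. The generator maps noise z ~ N(0, 1)
# through an affine function G(z) = a*z + b; the discriminator is a
# logistic regression D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters (starts generating around 0)
w, c = 0.0, 0.0   # discriminator parameters (starts at chance, D = 0.5)
lr = 0.02

for step in range(5000):
    z = random.gauss(0, 1)
    x_real = random.gauss(3.0, 0.5)
    x_fake = a * z + b

    # --- Discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = -(1 - d_real) * x_real + d_fake * x_fake
    grad_c = -(1 - d_real) + d_fake
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator step: push D(fake) -> 1 (non-saturating loss) ---
    d_fake = sigmoid(w * x_fake + c)
    grad_x = -(1 - d_fake) * w        # dL/dx_fake for L = -log D(x_fake)
    a -= lr * grad_x * z
    b -= lr * grad_x

# The generator's samples should have drifted from 0 toward the real mean.
fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(round(fake_mean, 2))
```

The same tug-of-war happens in the image version, just with convolutional networks instead of two-parameter functions, and with pixels instead of a single number.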
Let's start small – why not the faces of celebrities then? It may seem unfortunate that I honestly do not care much for celebrities. Most of them have a keen way of letting you down. Actually, though, I find it a benefit that I have no interest in my data other than the workings of the model. It keeps things clean. I did have a fun time looking at the generated images and recognizing features whose exact facial origins I could name. You would not BELIEVE how many images had Britney Spears' nose and eyes. I did not even think I knew her face that well. But I sure can pick it out in a GAN-generated crowd! She must have taken up more than her fair share of the dataset.
The CelebA Dataset:
This collection contains over 200,000 images of more than 10,000 different celebrities, all obtained from sources on the internet. The set has a great deal of variation in color, pose, composition, facial expression, style, facial accessories, hair styles, etc. My goal in choosing this dataset was a project challenging enough to be interesting but still likely to yield a successful result. Before this dataset, I tried using 13,000+ impressionist paintings with GANs, which was very disappointing due to the vastness of the subject matter as well as the vagueness of the style. Compared to that challenge, which I had drastically underestimated, this set seemed like it would offer a lot to work with while still producing recognizable images within a feasible number of epochs.
I did not stray from the GAN handbook with this one. Again, this is still new, and I want success at every step. My failure at impressionism was a hard hit. Can you tell? So I did not experiment broadly with the structure of the models. I wanted to make sure that if I broke anything, I could fix it.
The generator and discriminator:
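For a sense of what that pair looks like, here is a minimal DCGAN-style generator and discriminator in PyTorch for 64x64 RGB faces. The latent size and layer widths below are common defaults and my own assumptions, not necessarily the exact values in my notebook:

```python
import torch
import torch.nn as nn

nz = 100            # assumed latent vector size
ngf, ndf = 64, 64   # assumed feature-map widths

class Generator(nn.Module):
    # Upsamples a (nz, 1, 1) noise vector to a (3, 64, 64) image.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),               # 4x4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),               # 8x8
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),               # 16x16
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf), nn.ReLU(True),                   # 32x32
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                                            # 64x64
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    # Downsamples a (3, 64, 64) image to a single real/fake score.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),                      # 32x32
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid(),                                         # 1x1 score
        )

    def forward(self, x):
        return self.net(x).view(-1)

G, D = Generator(), Discriminator()
z = torch.randn(2, nz, 1, 1)
imgs = G(z)
scores = D(imgs)
```

Each layer in the generator doubles the spatial size while the discriminator mirrors it in reverse, which is why the two models end up looking like each other flipped upside down.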
In this project, I trained the model three different times. (I trained many more times aside from these just to practice working with the data.) The first training ran for 44 epochs and, once I found a good learning rate and had the training set up well, yielded quite good results. The second training, planned for 200 epochs, ran into a serious problem around epoch 45. Since I was training overnight (this model takes an extremely long time per epoch), I did not catch the issue, and I woke up to the disappointing reality that over three quarters of the training was useless. So I adjusted the hyperparameters yet again and tried a 100-epoch training, which turned out very well.
Here, you can access the interactive Jupyter notebook for the entire project, or view the PDF version here. You can also go section by section through the scrollable notebook excerpts below:
⇓ Intro and Data-Wrangling:
⇓ Dataloaders, Generator, and Discriminator:
⇓ Prepping for Training:
⇓ Training Part One: A Really Good Start
⇓ Results of 44 epochs: Input vs Generated
⇓ Training Part Two: Epic Fail
⇓ Training Part Three: Acceptable Results!
⇓ Generated Results: Epochs 80, 90, and 100
I absolutely loved working on this project. I built up so much anticipation waiting for each epoch's image result, eager to see how the models were performing and to get a real sense of what they were learning epoch after epoch. I would like to continue working on this project and attempt much higher numbers of epochs to see what the model produces. But it is clear that achieving much better results will require much more tuning, which will in turn require much more practice with GANs on my part. And I intend to do just that. Given the length of training for these projects, it would also be nice to have another machine to train on. I did a few other practice projects, and human faces were by far the most time-consuming to train.
I am honestly amazed at how well the model performed considering the variations in the images: the head poses, the different facial features that are highlighted, the differences in hair and accessories, etc. Overall, I am very happy with the result of this project, and I can see a bright future for me and GANs.