I first came across the concept of a Generative Adversarial Network (GAN) in early 2019. An extremely oversimplified explanation: a GAN trains one neural network (we'll call it "X", usually called the generator) to make forgeries of training data, for example pictures of trees, and trains another network ("Y", the discriminator) to spot forgeries. X makes its best attempt at a picture of a tree to fool Y, while Y tries its hardest to separate X's fakes from the real training data. Both networks are adjusted based on the results, so each should get better at its job with every iteration. If all goes well, X is eventually making pictures of trees that are indistinguishable from the real thing, and new ones can be generated at will. I decided I wanted to try my hand at training a GAN on classical portraits, using the freely available StyleGAN2 library (https://github.com/NVlabs/stylegan2) combined with some free resources from Google. In 2023 AI-painted pictures are commonplace, and my results are modest compared to something from Midjourney. However, I still learned a lot about Python, Ubuntu, and running a virtual machine from a command line, and got some unique-looking "faces" in the process.
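The adversarial loop described above can be sketched in a few dozen lines. This is a toy illustration, not StyleGAN2: here "X" is a two-parameter generator g(z) = gw·z + gb on Gaussian noise, "Y" is a logistic-regression discriminator, and the "training data" is just samples from a normal distribution — every name and number below is an illustrative assumption.

```python
import numpy as np

# Toy 1-D GAN: "X" (generator) forges scalars, "Y" (discriminator)
# scores them. Real networks replace these linear models, but the
# alternating update pattern is the same.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

gw, gb = 1.0, 0.0   # generator ("X") parameters
dw, db = 0.1, 0.0   # discriminator ("Y") parameters
lr = 0.01

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)   # stand-in "training data"
    z = rng.normal(size=32)
    fake = gw * z + gb                     # X's forgeries

    # Y's turn: binary cross-entropy, real labeled 1, fake labeled 0.
    p_real = sigmoid(dw * real + db)
    p_fake = sigmoid(dw * fake + db)
    grad_dw = np.mean((p_real - 1.0) * real) + np.mean(p_fake * fake)
    grad_db = np.mean(p_real - 1.0) + np.mean(p_fake)
    dw -= lr * grad_dw
    db -= lr * grad_db

    # X's turn: adjust its parameters so Y scores the fakes as real.
    p_fake = sigmoid(dw * (gw * z + gb) + db)
    upstream = (p_fake - 1.0) * dw         # d(-log p_fake)/d(fake)
    gw -= lr * np.mean(upstream * z)
    gb -= lr * np.mean(upstream)

# After training, the generator's offset gb should have drifted from 0
# toward the data mean of 4 as X learns to fool Y.
print(f"generator offset gb = {gb:.2f}")
```

Each iteration alternates one gradient step for Y and one for X, which is exactly the "both are adjusted based on the results" dynamic: as Y gets harder to fool, X's forgeries are pushed toward the real data distribution.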
• Learned to set up virtual Ubuntu machines and install dependencies from the command line
• Ran the library on a Google Cloud virtual machine with an Nvidia Tesla V100 GPU
• Modified and created Python scripts to scrape thousands (7,000+) of classical art images from the web to train the networks
• Achieved moderate success in outputs: humanoid shapes and rough attempts by the network to imitate defined art styles
• Found the network needs a larger sample for more accurate outputs, but scraping more than 10,000 images created stability issues, a problem I never overcame



