How to control GANgsters. Controlled image synthesis workshop
Oles Petriv, CTO, NeoCortext/ML Lead, Videogorillas
Oles is actively engaged in research and development in computer vision and natural language processing. He is the author of a machine learning course on Prometheus and an in-depth training course at ARVI Lab. Oles has extensive experience in video processing with deep learning methods: object detection and action recognition, image caption generation, and work on videos for Hollywood movie studios.
Nazar Shmatko, VP of engineering, NeoCortext
A specialist in machine learning, Nazar has experience in computer vision and natural language processing, as well as applied work in healthcare. He is currently working on generative models for a face transfer project.
The workshop is divided into 2 parts:
Talk – 1 hour.
In this talk we will cover the basic theory of deep generative models and the principles behind generative adversarial networks (GANs). We will discuss the main problems researchers face when training such models and how to solve them, and review the existing achievements of generative models and their practical applications.
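To make the adversarial setup concrete, here is a minimal sketch of the two GAN objectives using a toy logistic-regression discriminator in numpy. All names (the linear discriminator, the sample distributions standing in for real data and for G(z)) are illustrative assumptions, not code from the workshop.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w, b):
    # Toy discriminator: logistic regression, outputs a score in (0, 1).
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def d_loss(d_real, d_fake):
    # Discriminator maximizes log D(x) + log(1 - D(G(z))),
    # i.e. minimizes the negative of that sum.
    return -(np.log(d_real) + np.log(1.0 - d_fake)).mean()

def g_loss(d_fake):
    # Non-saturating generator loss: maximize log D(G(z)).
    return -np.log(d_fake).mean()

# Hypothetical data in R^2: "real" samples and samples standing in for G(z).
w, b = rng.normal(size=2), 0.0
real = rng.normal(loc=2.0, size=(64, 2))
fake = rng.normal(loc=-2.0, size=(64, 2))

print(d_loss(discriminator(real, w, b), discriminator(fake, w, b)))
print(g_loss(discriminator(fake, w, b)))
```

In practice both networks are deep models trained by alternating gradient steps on these two losses; the instability of that alternation is the source of most of the training problems discussed in the talk.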
Workshop – 1.5 hours.
We will look at a general method that lets you use a pretrained generative model to synthesize data with predefined attributes. For example, we will take a trained GAN model that generates random faces from noise and train it so that changing the latent vector at the input adjusts the gender and age of the generated face. To give the network a notion of attributes such as gender and age, we will additionally train a helper model that predicts these attributes. Finally, we will discuss the advantages and disadvantages of the methods considered and the limits of their applicability.
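The core idea above can be sketched in a few lines: given a (frozen) generator and an attribute predictor, the attribute defines a direction in latent space, and moving a latent vector along that direction changes the predicted attribute of the generated sample. The sketch below uses deliberately tiny linear stand-ins for the generator and the age predictor; all names and shapes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT_DIM = 8

# Hypothetical stand-ins: a linear "generator" mapping latent z to a
# fake image vector, and a linear "age predictor" on those images.
G = rng.normal(size=(LATENT_DIM, 16))
age_head = rng.normal(size=16)

def predicted_age(z):
    # Predicted attribute of the image generated from latent z.
    return (z @ G) @ age_head

# The attribute direction in latent space is the gradient of the
# predictor with respect to z; for this linear sketch that gradient
# is exactly G @ age_head (for a deep model it would be computed by
# backpropagation through the generator and predictor).
direction = G @ age_head
direction /= np.linalg.norm(direction)

z = rng.normal(size=LATENT_DIM)
older = z + 3.0 * direction  # move along the "age" direction
print(predicted_age(z), predicted_age(older))
```

With real networks the same recipe applies: freeze the GAN, backpropagate the attribute predictor's output to the latent input, and step the latent code along that gradient (or along a direction learned from many such gradients) to dial the attribute up or down.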