CycleGAN is an unsupervised architecture that learns from unpaired data. I experimented to see whether it would actually work on an AMD Radeon GPU.
CycleGAN is a domain-translation technique similar to the well-known style transfer in pix2pix, but with one major difference: it transforms an image completely into another domain.
Of course, CycleGAN can also handle style transfer easily, so you can do the same thing by preparing a content image and a style image.
The most interesting application of CycleGAN is the face-swap paper by Xiaohan Jin, Ye Qi, and Shangxuan Wu, which swaps faces between two people. With it, you can swap the faces of President Obama and President Trump.
However, if the source image is unrelated to the target style, the output may break down; the further the source image is from the style, the harder the conversion becomes to control.
The architecture replaces the generator of a conventional DCGAN with a pix2pix-style generator (an image auto-encoder); the discriminator remains unchanged.
In addition, two such networks are built: one converts domain A to domain B, and the other converts domain B back to domain A. Each converted image is judged real or fake by a discriminator, and the round trip (the cycle) forces the pair of networks to learn a consistent mapping.
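The round-trip idea above is CycleGAN's cycle-consistency loss: an image translated A→B and back B→A should match the original. Here is a minimal NumPy sketch of that loss; the "generators" `G_ab` and `G_ba` are hypothetical toy linear functions standing in for the real convolutional networks, so the loss comes out near zero because they happen to be exact inverses.

```python
# Sketch of CycleGAN's cycle-consistency loss (L1 round-trip error).
# G_ab / G_ba are toy stand-ins, NOT real generator networks.
import numpy as np

def G_ab(x):
    # Toy "generator" A -> B: scale and shift.
    return 2.0 * x + 1.0

def G_ba(y):
    # Toy "generator" B -> A: exact inverse of G_ab.
    return (y - 1.0) / 2.0

def cycle_consistency_loss(img_a, img_b):
    # L1 distance between each image and its round-trip reconstruction:
    # A -> B -> A and B -> A -> B.
    loss_a = np.abs(G_ba(G_ab(img_a)) - img_a).mean()
    loss_b = np.abs(G_ab(G_ba(img_b)) - img_b).mean()
    return loss_a + loss_b

img_a = np.random.rand(4, 4)  # stand-in for a domain-A image
img_b = np.random.rand(4, 4)  # stand-in for a domain-B image
print(cycle_consistency_loss(img_a, img_b))  # ≈ 0 for this invertible toy pair
```

In real training this term is added (with a weight, commonly called lambda) to the usual adversarial losses of the two discriminators, which is what keeps the translated images tied to their source content.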
The training set contains about 10,000 images for domain A and 10,000 for domain B.
From left to right: the original image, then the results after 10, 50, and 200 epochs; the conversion becomes stronger toward the right. There was no visible change after 200 epochs.
Training for 200 epochs takes around 10 hours on a GeForce RTX 2080 Ti or a Radeon VII.