Fixed-Point Generative Adversarial Networks

Description

Generative adversarial networks (GANs) are revolutionizing image-to-image translation and have attracted considerable interest in the medical imaging community. Using a GAN to reveal diseased regions in a medical image is appealing, but it requires the GAN to identify and modify only a minimal subset of target pixels while leaving the rest of the image unchanged, a capability termed fixed-point translation that current GANs lack.

Researchers at Arizona State University have proposed a new GAN, called Fixed-Point GAN, which introduces fixed-point translation and, building on it, a new method for disease detection and localization. The new GAN is trained by (1) supervising same-domain translation through a conditional identity loss, and (2) regularizing cross-domain translation through revised adversarial, domain-classification, and cycle-consistency losses. Qualitative and quantitative evaluations demonstrate that the proposed method outperforms the state of the art in multi-domain image-to-image translation and surpasses predominant weakly-supervised localization methods in both disease detection and localization.

Fixed-Point GAN dramatically reduces artifacts in image-to-image translation and introduces a novel method for disease detection and localization that outperforms the state of the art.
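
As a rough illustration of how these training terms fit together, the following minimal sketch combines the conditional identity loss with the cross-domain adversarial, domain-classification, and cycle-consistency losses. It is a sketch under stated assumptions, not the inventors' implementation: the StarGAN-style generator interface G(image, target_domain), the discriminator's two outputs, and the lambda_* loss weights are illustrative.

import torch
import torch.nn.functional as F

def generator_losses(G, D, x, src_domain, tgt_domain,
                     lambda_cls=1.0, lambda_cyc=10.0, lambda_id=10.0):
    """Combine the Fixed-Point GAN generator loss terms for one batch (illustrative)."""
    # (2) Cross-domain translation: adversarial, domain-classification,
    #     and cycle-consistency losses.
    fake = G(x, tgt_domain)
    adv_out, cls_out = D(fake)
    loss_adv = -adv_out.mean()                       # fool the discriminator (WGAN-style)
    loss_cls = F.cross_entropy(cls_out, tgt_domain)  # land in the target domain
    loss_cyc = F.l1_loss(G(fake, src_domain), x)     # translating back should recover the input

    # (1) Same-domain translation: conditional identity loss, the "fixed point".
    #     Asking for the source domain should leave the image unchanged.
    loss_id = F.l1_loss(G(x, src_domain), x)

    return loss_adv + lambda_cls * loss_cls + lambda_cyc * loss_cyc + lambda_id * loss_id

In this reading, the identity term is what distinguishes fixed-point translation from ordinary multi-domain translation: it explicitly rewards leaving the image untouched when no domain change is requested.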

Potential Applications

• Computer-aided diagnosis – e.g., pulmonary embolism and brain lesion localization

• Non-medical applications – photo editing/aging/blending, game development and animation production

Benefits and Advantages

• Works with unpaired images – does not require two images with and without the attribute

• Requires only image-level annotation for training

• Same-domain translation without adding or removing attributes

• Cross-domain translation without affecting unrelated attributes

o E.g. removes eyeglasses from an image without affecting hair color

• Source-domain-independent translation using only image-level annotation

• Outperforms the state of the art in multi-domain image-to-image translation for both natural and medical images

• Surpasses predominant weakly-supervised localization methods in both disease detection and localization (see the inference-time sketch after this list)

• Dramatically reduces artifacts in image-to-image translation
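
The localization capability follows from fixed-point translation itself: translating a diseased image to the healthy domain should change only the diseased pixels, so the difference between the input and its translation marks the lesion, and an image-level score can support detection. The snippet below is a minimal inference-time sketch of that idea; the generator interface G(image, target_domain), the healthy_domain label, and the max-based detection score are illustrative assumptions rather than the inventors' exact procedure.

import torch

@torch.no_grad()
def localize_disease(G, x, healthy_domain):
    """Translate to the 'healthy' domain and read off what changed (illustrative)."""
    healthy = G(x, healthy_domain)                 # fixed-point translation to "healthy"
    diff_map = (x - healthy).abs().sum(dim=1)      # per-pixel change; near zero where already healthy
    score = diff_map.flatten(1).max(dim=1).values  # simple image-level detection score
    return diff_map, score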

For more information about this opportunity, please see

Rahman Siddiquee et al. - ICCV - 2019

GitHub - 2019

For more information about the inventor(s) and their research, please see

Dr. Liang's departmental webpage

Case ID:
M19-117L
Published:
02-26-2020
Last Updated:
05-19-2020
