Case ID: M20-225L

Published: 2020-04-20 12:20:15

Last Updated: 2023-02-23 07:09:59


Inventor(s)

Zongwei Zhou
Vatsal Sodha
Jiaxuan Pang
Jianming Liang

Technology categories

Computing & Information Technology
Imaging
Life Science (All LS Techs)
Medical Diagnostics/Sensors
Medical Imaging

Licensing Contacts

Jovan Heusser
Director of Licensing and Business Development
[email protected]

Models Genesis: Autodidactic Models for 3D Medical Image Analysis

Image analysis techniques are becoming invaluable in medicine: they help physicians diagnose and treat disease more accurately and expand the utility of medical imaging. Transfer learning, in particular, is one of the most practical paradigms in deep learning for medical image analysis. In conventional transfer learning, a source model serves as the starting point for training a target model for a specific application. However, source models are typically trained on 2D images, while application-specific medical tasks usually involve 3D imaging modalities. To use such models, 3D imaging tasks must be reformulated and solved in 2D, which discards rich 3D anatomical information and compromises performance. Moreover, building a robust source model requires a large set of annotated images, which is difficult to assemble.


Researchers at Arizona State University have developed a set of pre-trained models that can serve as a primary source of transfer learning for 3D medical imaging applications. These models were created ex nihilo (without manual labeling), are self-taught (trained by self-supervision), and are generic, so they can serve as source models for generating application-specific target models. Because they learn from scratch on unlabeled images, they yield a common visual representation that is generalizable and transferable across diseases, organs, and imaging modalities, while preserving the rich 3D anatomical information found in medical images.
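As a rough illustration of this self-supervised scheme, the sketch below trains a deliberately tiny 3D encoder-decoder to restore an original sub-volume from a distorted copy, so the restoration loss itself supplies the supervision and no manual labels are needed. PyTorch, the network size, and the stand-in distortion are all illustrative assumptions, not the released Models Genesis architecture:

import torch
import torch.nn as nn

# Tiny stand-in for a 3D encoder-decoder (not the actual Models Genesis network)
class TinyEncoderDecoder3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv3d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyEncoderDecoder3D()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# A batch of unlabeled sub-volumes and distorted copies of them; the simple
# additive noise here stands in for the four transformations described below.
original = torch.rand(4, 1, 32, 32, 32)
transformed = original + 0.1 * torch.randn_like(original)

optimizer.zero_grad()
restored = model(transformed)
loss = loss_fn(restored, original)  # restoring the original is the only supervision
loss.backward()
optimizer.step()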

These models consistently outperform 2D/2.5D approaches and 3D models trained from scratch in all five target 3D applications evaluated, making them ideal source models for transfer learning in 3D medical imaging.


Potential Applications

•       Medical image analysis for any disease (e.g., nodule, embolism, tumor) in any organ, using any imaging modality (e.g., CT, X-ray, MRI)

o       Classification tasks – e.g., distinguishing healthy from diseased tissue

o       Segmentation tasks – e.g., separating regions of interest from background (both task types are sketched in code below)
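As a sketch of how one pre-trained source model could serve both task types, the snippet below (reusing the hypothetical TinyEncoderDecoder3D from the earlier sketch) attaches a new classification head to the pre-trained encoder and keeps the full encoder-decoder for segmentation. The head shapes and the commented weight-file path are assumptions for illustration:

import torch
import torch.nn as nn

pretrained = TinyEncoderDecoder3D()
# pretrained.load_state_dict(torch.load("genesis_weights.pt"))  # hypothetical checkpoint

# Classification: pre-trained encoder + pooled two-class head (e.g., healthy vs. diseased)
classifier = nn.Sequential(
    pretrained.encoder,
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(32, 2),
)

# Segmentation: keep both encoder and decoder; the 1-channel output becomes a
# per-voxel foreground probability (region of interest vs. background)
segmenter = nn.Sequential(
    pretrained.encoder,
    pretrained.decoder,
    nn.Sigmoid(),
)

volume = torch.rand(1, 1, 32, 32, 32)
print(classifier(volume).shape)  # torch.Size([1, 2])
print(segmenter(volume).shape)   # torch.Size([1, 1, 32, 32, 32])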


Benefits and Advantages

•       Self-supervised – can learn from scratch on unlabeled images without manual labeling

•       Robust – can learn from multiple perspectives (appearance, texture, context, etc.)

•       Scalable – can accommodate many training schemes, while sharing the same encoder and decoder

•       Generic – can yield diverse target applications across diseases, organs and modalities

•       Learning organ appearance via non-linear transformation

•       Learning organ texture and local boundaries via local pixel shuffling

•       Learning organ spatial layout and global geometry via outer-cutout

•       Learning local continuities of organs via inner-cutout (these four transformations are sketched in code at the end of this list)

•       Surpasses 3D models trained from scratch as well as other existing 3D pre-trained models

•       Cuts annotation costs by at least 30% while maintaining high performance

•       Consistently outperforms 2D/2.5D approaches in solving 3D imaging problems
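The four transformations named above could be approximated as follows; this is a simplified numpy sketch with illustrative parameters (for instance, a random gamma curve stands in for the paper's monotonic intensity transformation), and the released implementations are in the ModelsGenesis GitHub repository:

import numpy as np

rng = np.random.default_rng(0)

def nonlinear_transformation(x):
    # Appearance: apply a random monotonic intensity curve (gamma, for simplicity)
    gamma = rng.uniform(0.5, 2.0)
    return np.clip(x, 0.0, 1.0) ** gamma

def local_pixel_shuffling(x, block=4, n_blocks=50):
    # Texture and local boundaries: shuffle voxels inside small random blocks
    out = x.copy()
    for _ in range(n_blocks):
        i, j, k = (rng.integers(0, s - block) for s in x.shape)
        patch = out[i:i+block, j:j+block, k:k+block]
        out[i:i+block, j:j+block, k:k+block] = \
            rng.permutation(patch.ravel()).reshape(patch.shape)
    return out

def outer_cutout(x, size=16):
    # Spatial layout and global geometry: keep one window, replace the rest with noise
    out = rng.random(x.shape)
    i, j, k = (rng.integers(0, s - size) for s in x.shape)
    out[i:i+size, j:j+size, k:k+size] = x[i:i+size, j:j+size, k:k+size]
    return out

def inner_cutout(x, size=8):
    # Local continuity: blank out a window inside the volume
    out = x.copy()
    i, j, k = (rng.integers(0, s - size) for s in x.shape)
    out[i:i+size, j:j+size, k:k+size] = rng.random((size, size, size))
    return out

volume = rng.random((32, 32, 32))
distorted = inner_cutout(local_pixel_shuffling(nonlinear_transformation(volume)))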


For more information about this opportunity, please see

Zhou et al. – arXiv – 2019

ModelsGenesis – GitHub


For more information about the inventor(s) and their research, please see

Dr. Liang’s departmental webpage