Case ID: M21-048L

Published: 2022-03-21 13:26:57

Last Updated: 2023-02-23


Inventor(s)

Ruibin Feng
Zongwei Zhou
Jianming Liang

Technology categories

Computing & Information Technology
Imaging
Life Science (All LS Techs)
Medical Imaging

Licensing Contacts

Jovan Heusser
Director of Licensing and Business Development
[email protected]

Self-supervised Learning: From Parts to Whole

As machine learning grows and advances, contrastive representation learning continues to emerge as the state-of-the-art technique in computer vision. Contrastive representation learning, however, has major limitations that make it problematic for 3D medical imaging, such as requiring large mini-batch sizes, special network designs, or memory banks. While reconstruction-based self-supervised learning shows promise, it lacks a mechanism for learning contrastive representations, making it likewise unsuitable for 3D medical imaging.
 
Researchers at Arizona State University developed a novel algorithm to learn contrastive representations in 3D medical imaging. This framework for self-supervised contrastive learning via reconstruction is called Parts2Whole because it exploits the universal and intrinsic part-whole relationship to learn contrastive representations without using a contrastive loss. This self-supervised learning framework makes processing 3D medical images more efficient and computationally feasible than previously achievable.
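
The underlying pretext task can be pictured as cropping a random part from a 3D volume and training an encoder-decoder to reconstruct the (resized) whole volume from that part, so the learned embedding must capture what distinguishes one whole from another. The sketch below is a minimal, hypothetical PyTorch illustration of this idea; the toy networks, crop and target sizes, and function names are assumptions for illustration, not the inventors' released code.

```python
# Minimal, illustrative sketch of a part-to-whole reconstruction pretext task.
# All names and sizes here are placeholders, not the Parts2Whole implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

PART = (32, 32, 32)    # size of the randomly cropped "part"
WHOLE = (64, 64, 64)   # fixed size the "whole" target is resampled to

def random_part(volume: torch.Tensor) -> torch.Tensor:
    """Crop a random sub-volume (the part) from a 3D image (the whole)."""
    _, d, h, w = volume.shape                              # (C, D, H, W)
    z = torch.randint(0, d - PART[0] + 1, (1,)).item()
    y = torch.randint(0, h - PART[1] + 1, (1,)).item()
    x = torch.randint(0, w - PART[2] + 1, (1,)).item()
    return volume[:, z:z+PART[0], y:y+PART[1], x:x+PART[2]]

class Parts2WholeSketch(nn.Module):
    """Toy encoder-decoder: embed the part, then reconstruct the whole volume."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, part):
        recon = self.decoder(self.encoder(part))           # same size as the part
        return F.interpolate(recon, size=WHOLE, mode="trilinear", align_corners=False)

def pretrain_step(model, volumes, optimizer):
    """One self-supervised step: reconstruct each whole volume from one of its parts."""
    parts = torch.stack([random_part(v) for v in volumes])
    wholes = F.interpolate(torch.stack(volumes), size=WHOLE,
                           mode="trilinear", align_corners=False)
    loss = F.mse_loss(model(parts), wholes)                # reconstruction loss only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, only the reconstruction loss drives pretraining; no contrastive loss, large mini-batches, or memory banks are involved, and the pretrained encoder could then be fine-tuned for downstream classification or segmentation tasks.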
 
This algorithm was evaluated on five distinct imaging tasks covering both classification and segmentation. It was compared with four competing, publicly available 3D pretrained models, outperforming them on two of the five tasks and achieving competitive performance on the remaining three.
 
Potential Applications
  • 3D medical imaging
  • Self-driving vehicles
  • Educational assistance
  • Commercial image-based search
  • Facial recognition
 
Benefits and Advantages
  • Able to learn contrastive representation
  • Can utilize smaller batch sizes
  • No need for special network design
  • Does not require memory banks
  • Outperformed competing 3D pretrained models on multiple tasks in testing
  • Greater efficiency and computational feasibility
  • Achieves contrastive representation learning within a reconstruction-based self-supervised framework