Self-Supervised Learning with Medical Imaging

Dr. Shadi Albarqouni (TU Munich, Helmholtz Zentrum)
Hossam Abdelhamid (TU Munich)
Tariq Bdair (TU Munich)

Working Title: Self-Supervised Learning Technique Based on Mixed-up Augmentation for Image Data

Self-supervised learning (SSL) has recently gained much attention because training supervised models is expensive and limited by the availability of labeled data. SSL proceeds in two stages. The first stage, the pretext task, forces the model to map different augmentations of the same data point to similar representations. The second stage, the downstream task, fine-tunes the model obtained in the first stage on the target problem. SSL is widely applied across fields (e.g., computer vision, natural language processing, and reinforcement learning). However, most SSL approaches still fall short of the accuracy of supervised approaches, and medical applications of these algorithms remain limited. Our contribution is to investigate a new learning concept based on mixed-up augmentation: two images are blended, and the model is forced to regress the mixing ratio of the composed image. We will validate the proposed method on several datasets, namely CIFAR-10 [1], STL-10 [2], and Tiny ImageNet [3], as well as a few medical datasets.
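To make the pretext objective concrete, here is a minimal PyTorch sketch of one possible realization (an illustration, not the project's actual implementation): two images from a batch are blended with a random coefficient lam, and the network is trained to regress lam. The ResNet-18 backbone, the regression head, and the per-sample mixing ratio are all assumptions made for this example.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MixupRegressor(nn.Module):
    """Encoder plus a small head that predicts the mixing coefficient."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # assumed backbone choice
        backbone.fc = nn.Identity()               # expose the 512-d features
        self.encoder = backbone
        self.head = nn.Sequential(
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid()       # mixing ratio lies in [0, 1]
        )

    def forward(self, x):
        return self.head(self.encoder(x)).squeeze(1)

def pretext_step(model, batch, optimizer):
    """One pretext update: build mixed-up images and regress the ratio lam."""
    lam = torch.rand(batch.size(0), device=batch.device)       # per-sample ratio
    perm = torch.randperm(batch.size(0), device=batch.device)  # partner images
    mixed = (lam.view(-1, 1, 1, 1) * batch
             + (1 - lam).view(-1, 1, 1, 1) * batch[perm])
    loss = nn.functional.mse_loss(model(mixed), lam)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After pretraining with this objective, the encoder weights would be carried over to the downstream stage and fine-tuned on the labeled target task.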

References

[1] Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images. Technical report, University of Toronto.
[2] Coates, A., Ng, A., & Lee, H. (2011). An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (pp. 215-223). JMLR Workshop and Conference Proceedings.
[3] Li, F.-F., Karpathy, A., & Johnson, J. Tiny ImageNet, part of the CS231n course at Stanford University. http://cs231n.stanford.edu/.
