Publication date: Oct 09, 2024
Self-supervised pre-training of deep learning models with contrastive learning is a widely used technique in image analysis. Current findings indicate a strong potential for contrastive pre-training on medical images. However, further research is necessary to incorporate the particular characteristics of these images. We hypothesize that the high similarity between medical images hinders the success of contrastive learning in the medical imaging domain. To this end, we investigate different strategies based on deep embedding, information theory, and hashing to identify and reduce redundancy in medical pre-training datasets. The effect of these reduction strategies on contrastive learning is evaluated on two pre-training datasets and several downstream classification tasks. In all of our experiments, dataset reduction leads to a considerable performance gain in downstream tasks, e.g., an AUC score improvement from 0.78 to 0.83 for the COVID CT Classification Grand Challenge, from 0.97 to 0.98 for the OrganSMNIST Classification Challenge, and from 0.73 to 0.83 for a brain hemorrhage classification task. Furthermore, pre-training is up to nine times faster due to the dataset reduction. In conclusion, the proposed approach highlights the importance of dataset quality and provides a transferable way to improve contrastive pre-training for classification downstream tasks on medical images.
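The abstract names deep embedding, information theory, and hashing as routes to redundancy reduction; the hashing route is the simplest to illustrate. The following is a minimal Python sketch, not the authors' implementation, of filtering near-duplicate images with an average hash before contrastive pre-training. The directory path, hash size, and Hamming-distance threshold are illustrative assumptions.

```python
# Hedged sketch of hashing-based redundancy reduction: images whose perceptual
# hashes are nearly identical are treated as redundant, and only one
# representative is kept for the contrastive pre-training set.
from pathlib import Path

import numpy as np
from PIL import Image


def average_hash(path: Path, hash_size: int = 8) -> np.ndarray:
    """Simple average hash: downscale, grayscale, threshold by the mean intensity."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()


def reduce_redundancy(paths, max_hamming: int = 4):
    """Keep only images whose hash differs from every kept hash by more than max_hamming bits."""
    kept, kept_hashes = [], []
    for path in paths:
        h = average_hash(path)
        if all(np.count_nonzero(h != k) > max_hamming for k in kept_hashes):
            kept.append(path)
            kept_hashes.append(h)
    return kept


if __name__ == "__main__":
    # Hypothetical directory of CT slices; the reduced list would then feed a
    # standard contrastive pre-training pipeline (e.g., SimCLR-style).
    slices = sorted(Path("data/ct_slices").glob("*.png"))
    reduced = reduce_redundancy(slices)
    print(f"Kept {len(reduced)} of {len(slices)} images after redundancy reduction")
```

The pairwise comparison is quadratic in the number of kept images; embedding-based variants typically use approximate nearest-neighbor search instead, but the filtering principle is the same.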
| Concepts | Keywords |
|---|---|
| Downstream | Computed Tomography (CT) |
| Hashing | Contrastive learning |
| Hemorrhage | Deep learning |
| Redundancy | Medical imaging |
| Training | Self-supervised pre-training |
| | Transfer learning |
Semantics
| Type | Source | Name |
|---|---|---|
| drug | DRUGBANK | Spinosad |
| disease | MESH | brain hemorrhage |
| disease | IDO | quality |