TY - JOUR
T1 - Exploring simple triplet representation learning
AU - Ren, Zeyu
AU - Lan, Quan
AU - Zhang, Yudong
AU - Wang, Shuihua
N1 - Publisher Copyright:
© 2024 The Author(s)
PY - 2024/12
Y1 - 2024/12
N2 - Fully supervised learning methods necessitate a substantial volume of labelled training instances, and annotating them is typically both labour-intensive and costly. In the realm of medical image analysis, this issue is further amplified, as annotated medical images are considerably scarcer than their unlabelled counterparts. Consequently, leveraging unlabelled images to extract meaningful underlying knowledge presents a formidable challenge in medical image analysis. This paper introduces a simple triple-view unsupervised representation learning model (SimTrip), combining a triple-view architecture with a dedicated loss function, which aims to learn meaningful inherent knowledge efficiently from unlabelled data with a small batch size. With the meaningful representations extracted from unlabelled data, our model demonstrates exemplary performance across two medical image datasets, achieving this with only partial labels while outperforming other state-of-the-art methods. The method we present herein offers a novel paradigm for unsupervised representation learning, establishing a baseline that is poised to inspire the development of more intricate SimTrip-based methods across a spectrum of computer vision applications. Code and a user guide are released at https://github.com/JerryRollingUp/SimTripSystem; the system also runs at http://43.131.9.159:5000/.
AB - Fully supervised learning methods necessitate a substantial volume of labelled training instances, and annotating them is typically both labour-intensive and costly. In the realm of medical image analysis, this issue is further amplified, as annotated medical images are considerably scarcer than their unlabelled counterparts. Consequently, leveraging unlabelled images to extract meaningful underlying knowledge presents a formidable challenge in medical image analysis. This paper introduces a simple triple-view unsupervised representation learning model (SimTrip), combining a triple-view architecture with a dedicated loss function, which aims to learn meaningful inherent knowledge efficiently from unlabelled data with a small batch size. With the meaningful representations extracted from unlabelled data, our model demonstrates exemplary performance across two medical image datasets, achieving this with only partial labels while outperforming other state-of-the-art methods. The method we present herein offers a novel paradigm for unsupervised representation learning, establishing a baseline that is poised to inspire the development of more intricate SimTrip-based methods across a spectrum of computer vision applications. Code and a user guide are released at https://github.com/JerryRollingUp/SimTripSystem; the system also runs at http://43.131.9.159:5000/.
KW - Contrastive learning
KW - Deep learning
KW - Machine learning
KW - Medical image analysis
KW - Self-supervised learning
KW - Semi-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85190125904&partnerID=8YFLogxK
U2 - 10.1016/j.csbj.2024.04.004
DO - 10.1016/j.csbj.2024.04.004
M3 - Article
AN - SCOPUS:85190125904
SN - 2001-0370
VL - 23
SP - 1510
EP - 1521
JO - Computational and Structural Biotechnology Journal
JF - Computational and Structural Biotechnology Journal
ER -