Exploring simple triplet representation learning

Zeyu Ren, Quan Lan*, Yudong Zhang*, Shuihua Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Fully supervised learning methods require a substantial volume of labelled training instances, and producing these labels is typically both labour-intensive and costly. In medical image analysis this problem is amplified, as annotated medical images are considerably scarcer than their unlabelled counterparts. Consequently, leveraging unlabelled images to extract meaningful underlying knowledge is a formidable challenge in medical image analysis. This paper introduces a simple triple-view unsupervised representation learning model (SimTrip), comprising a triple-view architecture and an accompanying loss function, designed to learn meaningful inherent knowledge efficiently from unlabelled data with small batch sizes. With the representations extracted from unlabelled data, our model demonstrates strong performance on two medical image datasets, achieving this with only partial labels and outperforming other state-of-the-art methods. The method presented here offers a novel paradigm for unsupervised representation learning, establishing a baseline poised to inspire the development of more intricate SimTrip-based methods across a spectrum of computer vision applications. Code and a user guide are released at https://github.com/JerryRollingUp/SimTripSystem; the system also runs at
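The SimTrip loss itself is defined in the paper and repository above; as a rough, illustrative sketch of the triplet-style objective that this family of representation learners builds on, a standard triplet margin loss can be written as follows (function and variable names are hypothetical, and this is not the SimTrip loss):

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss (illustrative only, not the SimTrip loss).

    Pulls anchor and positive embeddings together while pushing the
    negative at least `margin` further away, averaged over the batch.
    """
    d_pos = np.linalg.norm(anchor - positive, axis=1)  # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative, axis=1)  # anchor-negative distance
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

# Small batch of 3-D embeddings as a toy example
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 3))
easy = triplet_margin_loss(emb, emb, emb + 5.0)  # far negatives -> zero loss
hard = triplet_margin_loss(emb, emb, emb)        # identical negatives -> margin
```

Here `easy` is 0.0 because the negatives already sit well beyond the margin, while `hard` equals the margin itself; unsupervised triplet-style methods construct such anchor/positive/negative views from unlabelled images, e.g. via augmentation.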

Original language: English
Pages (from-to): 1510-1521
Number of pages: 12
Journal: Computational and Structural Biotechnology Journal
Publication status: Published - Dec 2024


  • Contrastive learning
  • Deep learning
  • Machine learning
  • Medical image analysis
  • Self-supervised learning
  • Semi-supervised learning


