TY - JOUR
T1 - Cross-Domain Random Pretraining With Prototypes for Reinforcement Learning
AU - Liu, Xin
AU - Chen, Yaran
AU - Li, Haoran
AU - Li, Boyu
AU - Zhao, Dongbin
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - Unsupervised cross-domain reinforcement learning (RL) pretraining shows great potential for challenging continuous visual control but remains a difficult open problem. In this article, we propose cross-domain random pretraining with prototypes (CRPTpro), a novel, efficient, and effective self-supervised cross-domain RL pretraining framework. CRPTpro decouples data sampling from encoder pretraining, introducing decoupled random collection to generate a high-quality cross-domain pretraining dataset easily and quickly. Moreover, a novel prototypical self-supervised algorithm is proposed to pretrain an effective visual encoder that is generic across different domains. Without finetuning, the cross-domain encoder can be applied to challenging downstream tasks defined in different domains, whether seen or unseen. Compared with recent advanced methods, CRPTpro achieves better downstream policy learning without the extra training of exploration agents for data collection, greatly reducing the pretraining burden. We conduct extensive experiments across multiple challenging continuous visual-control domains, including balance control, robot locomotion, and manipulation. CRPTpro significantly outperforms the next-best method, Proto-RL(C), on 11 of 12 cross-domain downstream tasks with only 54.5% of the wall-clock pretraining time, exhibiting state-of-the-art pretraining performance with greatly improved pretraining efficiency.
KW - Cross-domain representation
KW - deep reinforcement learning (DRL)
KW - random policy
KW - RL visual pretraining
KW - self-supervised learning (SSL)
KW - unsupervised exploration
UR - http://www.scopus.com/inward/record.url?scp=85219080616&partnerID=8YFLogxK
DO - 10.1109/TSMC.2025.3541926
M3 - Article
AN - SCOPUS:85219080616
SN - 2168-2216
JO - IEEE Transactions on Systems, Man, and Cybernetics: Systems
JF - IEEE Transactions on Systems, Man, and Cybernetics: Systems
ER -