A novel training and collaboration integrated framework for human–agent teleoperation

Zebin Huang, Ziwei Wang, Weibang Bai, Yanpei Huang*, Lichao Sun, Bo Xiao, Eric M. Yeatman

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

14 Citations (Scopus)

Abstract

Human operators tend to face increasing physical and mental workloads when performing teleoperation tasks in uncertain and dynamic environments. In addition, their performance is influenced by subjective factors, potentially leading to operational errors or task failure. Although agent-based methods offer a promising solution to these problems, human experience and intelligence remain necessary in teleoperation scenarios. In this paper, an integrated framework based on truncated quantile critics reinforcement learning is proposed for human–agent teleoperation, encompassing training, assessment and agent-based arbitration. The proposed framework provides an expert training agent and a bilateral training and cooperation process to realize the co-optimization of agent and human, and it delivers efficient and quantifiable training feedback. Experiments were conducted to train subjects with the developed algorithm, and the human–human and human–agent cooperation modes were compared. The results show that, with the assistance of an agent, subjects complete reaching and pick-and-place tasks in a shorter operational time, with a higher success rate and a lower workload than under human–human cooperation.
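The abstract's learning component builds on truncated quantile critics (TQC), whose core idea is to pool quantile value estimates from an ensemble of critics and discard the largest ones, yielding a deliberately conservative target that curbs overestimation bias. The sketch below illustrates only that truncation step in NumPy; the function name, array shapes, and `drop_per_net` parameter are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def truncated_quantile_target(quantiles, drop_per_net):
    """Illustrative TQC-style truncation (an assumption, not the
    paper's code): pool the quantile estimates of all critic
    networks, drop the top `drop_per_net` quantiles per network,
    and average the rest into a conservative value estimate."""
    n_nets, n_quantiles = quantiles.shape
    pooled = np.sort(quantiles.reshape(-1))       # pool and sort all quantile atoms
    keep = (n_quantiles - drop_per_net) * n_nets  # total atoms kept after truncation
    return pooled[:keep].mean()                   # mean of the smallest kept atoms

# Two critics, three quantiles each; dropping one per net keeps the
# four smallest pooled atoms: (1.0 + 1.5 + 2.0 + 2.5) / 4 = 1.75
q = np.array([[1.0, 2.0, 3.0],
              [1.5, 2.5, 3.5]])
print(truncated_quantile_target(q, drop_per_net=1))  # → 1.75
```

In the full algorithm this truncated mean would replace the usual max/min-over-critics target in the Bellman update; here it is shown in isolation for clarity.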

Original language: English
Article number: 8341
Journal: Sensors
Volume: 21
Issue number: 24
DOIs
Publication status: Published - 1 Dec 2021
Externally published: Yes

Keywords

  • Human–agent interaction
  • Reinforcement learning
  • Teleoperation
