Visual-Tactile Robot Grasping Based on Human Skill Learning From Demonstrations Using a Wearable Parallel Hand Exoskeleton

Zhenyu Lu, Lu Chen, Hengtai Dai, Haoran Li, Zhou Zhao, Bofang Zheng, Nathan F. Lepora, Chenguang Yang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)

Abstract

Soft fingers and strategic grasping skills enable human hands to grasp objects stably. This letter models human grasping skills and transfers the learned skills to robots to improve grasping quality and success rate. First, we designed a wearable, tool-like parallel hand exoskeleton equipped with optical tactile sensors to acquire multimodal information, including hand positions and postures, the relative distance between the exoskeleton claws, and tactile images. From the demonstration data, we summarized three characteristics observed in human demonstrations: varying-speed actions, the grasping effect read from tactile images, and grasping strategies for different positions. These characteristics were then used in the robot skill modelling to achieve a more human-like grasp. Since no force sensors are fixed to the claws, we introduce a new variable, called 'grasp depth', to represent the grasping effect on the object. The robot grasping strategy is constructed as follows. First, grasp quality is predicted using a linear array network (LAN) with global visual images as inputs; the grasp conditions, such as grasp width, depth, position, and angle, are also predicted. Second, with the grasp width and depth determined, dynamic movement primitives (DMPs) are employed to mimic human grasp actions with varying velocities. To further enhance grasp quality, a final action adjustment based on tactile detection is performed near the grasp time. The proposed strategy was validated in experiments with a Franka robot equipped with a self-designed gripper. The results demonstrate that the proposed method increases the grasping success rate from 82% to 96% compared with the results obtained by the pure LAN and constant-grasp-depth baseline.
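The varying-velocity reproduction mentioned in the abstract relies on dynamic movement primitives. Below is a minimal sketch of the standard Ijspeert-style one-dimensional discrete DMP, included only to illustrate that mechanism; the DMP1D class, its gains, basis count, and the example closing-motion demonstration are illustrative assumptions, not the authors' implementation or values from the paper.

```python
import numpy as np

# Minimal sketch of a one-dimensional discrete dynamic movement primitive (DMP),
# following the standard Ijspeert-style formulation; gains, basis count, and the
# demonstration below are illustrative assumptions, not values from the paper.
class DMP1D:
    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=4.0):
        self.n_basis, self.alpha_z, self.beta_z, self.alpha_x = n_basis, alpha_z, beta_z, alpha_x
        self.centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))  # basis centers in phase space
        diffs = np.diff(self.centers)
        self.widths = 1.0 / np.concatenate([diffs, diffs[-1:]]) ** 2
        self.weights = np.zeros(n_basis)
        self.y0, self.g = 0.0, 1.0

    def _forcing(self, x):
        # Weighted sum of Gaussian basis functions, gated by the phase variable x.
        psi = np.exp(-self.widths * (x - self.centers) ** 2)
        return x * (psi @ self.weights) / (psi.sum() + 1e-10)

    def fit(self, y_demo, dt):
        # Learn forcing-term weights from one demonstrated trajectory (locally weighted regression).
        T = len(y_demo)
        tau = T * dt
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        self.y0, self.g = y_demo[0], y_demo[-1]
        x = np.exp(-self.alpha_x * np.arange(T) * dt / tau)               # canonical phase
        f_target = tau ** 2 * ydd - self.alpha_z * (self.beta_z * (self.g - y_demo) - tau * yd)
        psi = np.exp(-self.widths * (x[:, None] - self.centers) ** 2)
        s = x * (self.g - self.y0)
        for i in range(self.n_basis):
            self.weights[i] = ((s * psi[:, i]) @ f_target) / ((s ** 2) @ psi[:, i] + 1e-10)

    def rollout(self, dt, tau, g=None):
        # Reproduce the motion: a larger tau slows the action, a smaller tau speeds it up,
        # and a new goal g (e.g. a predicted grasp width) reshapes the trajectory.
        g = self.g if g is None else g
        y, z, x, traj = self.y0, 0.0, 1.0, []
        while x > 1e-3:
            f = self._forcing(x) * (g - self.y0)
            zd = (self.alpha_z * (self.beta_z * (g - y) - z) + f) / tau
            y, z = y + (z / tau) * dt, z + zd * dt
            x += (-self.alpha_x * x / tau) * dt
            traj.append(y)
        return np.array(traj)

# Example: learn a smooth closing motion and replay it more slowly toward a new goal width.
if __name__ == "__main__":
    dt = 0.01
    demo = 0.08 * (1 - np.cos(np.pi * np.linspace(0.0, 1.0, 200))) / 2   # 0 m -> 0.08 m over 2 s
    dmp = DMP1D()
    dmp.fit(demo, dt)
    slow_close = dmp.rollout(dt, tau=4.0, g=0.06)                        # slower, to a 0.06 m grasp width
    print(len(slow_close), slow_close[-1])
```

The temporal scaling term tau is what allows a single primitive learned from a human demonstration to be replayed at varying velocities, while the goal term lets the same primitive be retargeted to conditions such as a predicted grasp width or depth.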

Original language: English
Pages (from-to): 5384-5391
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 8
Issue number: 9
DOIs
Publication status: Published - 1 Sept 2023
Externally published: Yes

Keywords

  • data-driven human modeling
  • exoskeleton
  • force and tactile sensing
  • learning from demonstration
  • robot grasping

