TY - JOUR
T1 - MagicGripper
T2 - A Mini-MagicTac Integrated Gripper Enabling Multimodal Perception in Contact-Rich Manipulation
AU - Fan, Wen
AU - Li, Haoran
AU - Cong, Qingzheng
AU - Zhang, Dandan
N1 - Publisher Copyright:
© 2004-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Contact-rich robotic manipulation in unstructured environments demands reliable multimodal perception. Here, we present MagicGripper, a multimodal robotic gripper built around mini-MagicTac, a compact variant of the MagicTac sensor. Mini-MagicTac embeds multi-layer grid structures in a 3D-printed elastomer, enabling visual, proximity, and tactile sensing in a gripper-compatible form factor. In this paper, we introduce the design and multimodal perception capabilities of mini-MagicTac, as well as two algorithmic frameworks for proximity and contact detection. Experimental evaluations show that mini-MagicTac achieves high spatial resolution, accurate contact localisation, and robust force estimation under mechanical and manufacturing variations. Autonomous grasping trials further validate MagicGripper's reliable multimodal perception and adaptability to complex manipulation scenarios. These results demonstrate MagicGripper as a compact and versatile platform for embodied intelligence in contact-rich environments. Note to Practitioners - Robotic end-effectors often break down when a task calls for both 'eyes' and 'skin': adding multiple sensors usually makes the gripper bulky, fragile, and expensive to build. MagicGripper shows one practical way around that trade-off. Each finger is 3D-printed, with no casting or post-assembly required; inside the soft skin, a multi-layer grid acts as a sensing feature, letting an embedded camera read visual, proximity, and tactile cues simultaneously.
AB - Contact-rich robotic manipulation in unstructured environments demands reliable multimodal perception. Here, we present MagicGripper, a multimodal robotic gripper built around mini-MagicTac, a compact variant of the MagicTac sensor. Mini-MagicTac embeds multi-layer grid structures in a 3D-printed elastomer, enabling visual, proximity, and tactile sensing in a gripper-compatible form factor. In this paper, we introduce the design and multimodal perception capabilities of mini-MagicTac, as well as two algorithmic frameworks for proximity and contact detection. Experimental evaluations show that mini-MagicTac achieves high spatial resolution, accurate contact localisation, and robust force estimation under mechanical and manufacturing variations. Autonomous grasping trials further validate MagicGripper's reliable multimodal perception and adaptability to complex manipulation scenarios. These results demonstrate MagicGripper as a compact and versatile platform for embodied intelligence in contact-rich environments. Note to Practitioners - Robotic end-effectors often break down when a task calls for both 'eyes' and 'skin': adding multiple sensors usually makes the gripper bulky, fragile, and expensive to build. MagicGripper shows one practical way around that trade-off. Each finger is 3D-printed, with no casting or post-assembly required; inside the soft skin, a multi-layer grid acts as a sensing feature, letting an embedded camera read visual, proximity, and tactile cues simultaneously.
KW - multi-modality sensing
KW - robotic manipulation
KW - vision-based tactile sensor
UR - https://www.scopus.com/pages/publications/105021545594
U2 - 10.1109/TASE.2025.3631485
DO - 10.1109/TASE.2025.3631485
M3 - Article
AN - SCOPUS:105021545594
SN - 1545-5955
VL - 22
SP - 24311
EP - 24332
JO - IEEE Transactions on Automation Science and Engineering
JF - IEEE Transactions on Automation Science and Engineering
ER -