Simultaneous Hand-Eye and Target Estimation By 2D-3D Generative Point Alignment

Gumin Jin, Xingkai Yu, Yuqing Chen, Lantao Zhang, Jianxun Li

Research output: Contribution to journal › Article › peer-review


Hand-eye calibration aims to relate what the camera sees to where the robot moves, which is crucial for vision-guided robot systems and has received much attention in the robotics community. The classical hand-eye calibration flow first estimates the camera's extrinsic poses and then solves the hand-eye calibration problem by homogeneous pose alignment. Since this two-step procedure is cumbersome and prone to error propagation, point-alignment-based hand-eye calibration is currently gaining popularity. The point-alignment approach is promising but still immature, as evidenced by the lack of a unified formulation for different calibration targets, the requirement of a pose for initialization, and the reliance on an optimization solver and its numerical differentiation. This article addresses these issues one by one. We first formulate hand-eye calibration for multi-point, single-point, and patterned targets via 2D-3D generative point alignment. We then propose a generic initialization on a single-point sequence that covers all of the above target cases. Following that, we derive the analytical Jacobian matrix in detail and exploit its sparsity in the pose-perturbation refinement. Finally, both numerical simulations and real-world experiments verify that our approach is more accurate, efficient, and robust than state-of-the-art methods. The codes and datasets are open-source.

Note to Practitioners—Vision-guided robots have been widely deployed in flexible automation, and only through hand-eye information can visual perception be used to guide robotic movement. However, the classical hand-eye calibration methods estimate the hand-eye parameters from beforehand camera poses, which is time-consuming and prone to error propagation. This work investigates one-step hand-eye calibration based on direct point alignment.
We provide simultaneous hand-eye and target estimation methods for multi-point, single-point, and patterned calibration targets by solving 2D-3D generative point-alignment problems. Broadly, the single-point method is the fastest, the patterned method achieves the highest accuracy when a precise pattern is available, and the multi-point method offers the strongest robustness and adaptability. Given reliable robot information, the proposed methods are more accurate than the classical two-step methods, even under pattern deformation. Thanks to the analytical Jacobian refinement, our methods are solver-free, lightweight, and suitable for flexible deployment. Furthermore, because they impose no range requirement on pose acquisition, our methods make it possible to integrate path planning into autonomous hand-eye calibration.
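For context, the "classical two-step" baseline that the abstract contrasts against typically reduces to an AX = XB solve: relative gripper motions A_i and relative camera motions B_i (obtained from beforehand extrinsic estimation) are aligned to recover the hand-eye transform X. The sketch below shows one standard variant of that baseline, a quaternion null-space solve for the rotation followed by linear least squares for the translation. It is illustrative only, is not the point-alignment method proposed in the article, and all function names are ours.

```python
import numpy as np

def quat_to_rot(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def rot_to_quat(R):
    """Rotation matrix -> quaternion; simple branch, assumes angle < 180 deg."""
    w = np.sqrt(1.0 + R[0, 0] + R[1, 1] + R[2, 2]) / 2.0
    return np.array([w, (R[2, 1] - R[1, 2]) / (4*w),
                        (R[0, 2] - R[2, 0]) / (4*w),
                        (R[1, 0] - R[0, 1]) / (4*w)])

def left_mat(q):
    """Matrix L(q) such that the quaternion product q * p equals L(q) @ p."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def right_mat(q):
    """Matrix R(q) such that the quaternion product p * q equals R(q) @ p."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def solve_ax_xb(As, Bs):
    """Two-step solve of A_i X = X B_i for 4x4 homogeneous transforms.

    Rotation: R_A R_X = R_X R_B  <=>  (L(q_A) - R(q_B)) q_X = 0; stack all
    pairs and take the null vector via SVD.  Translation:
    (R_A - I) t_X = R_X t_B - t_A, solved by linear least squares.
    """
    M = np.vstack([left_mat(rot_to_quat(A[:3, :3])) - right_mat(rot_to_quat(B[:3, :3]))
                   for A, B in zip(As, Bs)])
    q_x = np.linalg.svd(M)[2][-1]        # right singular vector of smallest sigma
    R_x = quat_to_rot(q_x / np.linalg.norm(q_x))
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([R_x @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t_x = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_x, t_x
    return X

# Synthetic check: build a ground-truth X, generate noise-free motion pairs
# B_i = X^-1 A_i X, and recover X from them.
rng = np.random.default_rng(0)

def _rand_rot():
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    ang = rng.uniform(0.3, 1.5)          # keep rotation angles well below 180 deg
    return quat_to_rot(np.concatenate([[np.cos(ang/2)], np.sin(ang/2) * axis]))

X_true = np.eye(4)
X_true[:3, :3], X_true[:3, 3] = _rand_rot(), [0.10, -0.05, 0.20]
As, Bs = [], []
for _ in range(5):
    A = np.eye(4)
    A[:3, :3], A[:3, 3] = _rand_rot(), rng.normal(scale=0.2, size=3)
    As.append(A)
    Bs.append(np.linalg.inv(X_true) @ A @ X_true)
X_est = solve_ax_xb(As, Bs)
```

Note that this baseline needs at least two motion pairs with non-parallel rotation axes to determine X, and any error in the beforehand camera poses propagates directly into the solve, which is exactly the weakness the one-step point-alignment formulation avoids.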

Original language: English
Pages (from-to): 1-14
Number of pages: 14
Journal: IEEE Transactions on Automation Science and Engineering
Publication status: Published - 18 Dec 2023


  • Calibration
  • Cameras
  • Costs
  • End effectors
  • Estimation
  • Generative point alignment
  • hand-eye calibration
  • Robot vision systems
  • Robots
  • simultaneous parameter estimation
  • vision-guided robot

