A mixed reality framework for microsurgery simulation with visual-tactile perception

Nan Xiang*, Hai Ning Liang, Lingyun Yu, Xiaosong Yang*, Jian J. Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)


Microsurgery is a general term for surgery performed with a surgical microscope and specialized precision instruments. Training in microsurgery requires considerable time and training resources. With the rapid development of computer technologies, virtual surgery simulation has gained extensive attention over the past decades. In this work, we take advantage of mixed reality (MR), which creates an interactive environment where physical and digital objects coexist, and present an MR framework for microsurgery simulation. It enables users to practice anastomosis skills with real microsurgical instruments rather than the additional haptic feedback devices typically used in virtual reality-based systems, while viewing a realistically rendered intra-operative scene, thus creating an immersive training experience within a visual-tactile interactive environment. A vision-based tracking system is proposed to simultaneously track microsurgical instruments and artificial blood vessels, and a learning-based anatomical modeling approach is introduced to facilitate the development of simulations in different microsurgical specialities by rapidly creating virtual assets. Moreover, we build a prototype system for simulation of microvascular hepatic artery reconstruction to demonstrate the feasibility and applicability of our framework.
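The abstract does not describe the tracking pipeline in detail; as a rough, NumPy-only illustration of what vision-based tracking of a color-keyed marker (e.g. an instrument tip or artificial vessel) might involve at its simplest, the sketch below thresholds a frame by color and returns the centroid of the matching pixels. The function name, thresholds, and synthetic frame are illustrative assumptions, not the authors' implementation, which would involve calibration, 3D pose estimation, and temporal filtering.

```python
import numpy as np

def track_marker_centroid(frame, lower, upper):
    """Estimate the 2D centroid of pixels whose RGB values fall
    inside [lower, upper] -- a crude stand-in for color-keyed
    marker tracking (illustrative only, not the paper's method).

    frame : (H, W, 3) uint8 RGB image
    lower, upper : length-3 RGB bounds for the marker color
    Returns (row, col) centroid, or None if no pixel matches.
    """
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    # Boolean mask: True where all three channels are within bounds.
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic test frame: a green "instrument tip" patch on black.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40:50, 70:80] = (30, 200, 40)  # 10x10 green patch

print(track_marker_centroid(frame, (0, 150, 0), (80, 255, 80)))
# → (44.5, 74.5)
```

A real system would feed such per-frame 2D detections from one or more calibrated cameras into a pose estimator and a temporal filter to recover smooth 3D instrument trajectories.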

Original language: English
Pages (from-to): 3661-3673
Number of pages: 13
Journal: Visual Computer
Issue number: 8
Publication status: Published - Aug 2023


  • 3D modeling
  • 3D tracking
  • Mixed reality
  • Virtual surgery


