The Medical Autonomy and Precision Surgery (MAPS) Laboratory was established in 2015 to bring advanced technologies into surgical interventions. Since then, MAPS has demonstrated clinical achievements in robotics, AI, computer vision, and machine intelligence. The lab's current focus is advancing clinical automation for high-precision applications such as ophthalmology.
The MAPS multidisciplinary team consists of researchers and students from various disciplines, including medicine, computer science, electrical engineering, and mechanical engineering.
The lab offers education and training from the undergraduate to the graduate level in all relevant disciplines. MAPS collaborates with several national and international research groups and offers exchange programs.
The Medical Autonomy and Precision Surgery (MAPS) Laboratory is an integral part of the ophthalmology department of the Technical University of Munich. Led by Prof. Dr.-Ing. M. Ali Nasseri, the laboratory thrives under the expert guidance and professional assistance of distinguished ophthalmic specialists, including Prof. Peter Charbel Issa, Prof. Mathias Maier, and Dr. Daniel Zapp, who are renowned for their extraordinary contributions to the field of ophthalmology. Currently, the lab comprises seven PhD students, two external PhD students, and several graduate students, all working collaboratively to advance interdisciplinary research in three key areas: AI-assisted medical image analysis and diagnosis, robot-assisted ophthalmic automation, and surgical workflow analysis. This close collaboration with the Eye Clinic ensures that the MAPS Laboratory's research is both academically rigorous and clinically relevant, setting a high standard for innovation in the field. In 2023 and 2024, we published 16 high-quality papers and earned a strong reputation in both academia and industry. The following sections introduce the lab's key research areas and achievements in detail.
The application of AI in medical image analysis and diagnosis is crucial for advancing precision medicine and enhancing the accuracy and efficiency of ophthalmic procedures. The MAPS Laboratory has focused on two critical directions within this domain: iOCT volume reconstruction and fundus image processing and diagnosis. Our work in iOCT applications has led to significant advancements, including improved intraoperative 3D visualization [1], and innovative iOCT-dominant multi-modality fusion methods [2]. These innovations have contributed significantly to the accuracy and safety of ophthalmic surgeries. In fundus image processing and diagnosis, we have made strides in utilizing fundus images for patient identity anonymization [3], retina feature segmentation, and medical dataset augmentation [4], thereby improving both the privacy and diagnostic capabilities of ophthalmic data. Collectively, our research has advanced the technical capabilities of medical imaging and contributed to the broader field of ophthalmology by providing robust, AI-driven solutions. We aim to refine these technologies, enhancing real-time intraoperative imaging and expanding the application of AI-driven diagnostic tools in clinical practice.
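As a toy illustration of the latent-space augmentation idea behind [4]: new training samples can be generated by perturbing only the latent dimensions assumed to encode appearance (e.g. illumination or contrast) while leaving the dimensions carrying the diagnostic label untouched. The dimension split and noise scale below are purely hypothetical; the actual method learns this disentanglement with a generative model.

```python
import random

def augment_latent(z, label_dims, noise_scale=0.1, rng=None):
    """Return a perturbed copy of latent vector `z`, keeping `label_dims` fixed.

    `label_dims` is the (assumed) set of indices that encode the diagnostic
    label; only the remaining dimensions receive Gaussian noise.
    """
    rng = rng or random.Random(0)
    z_aug = list(z)
    for i in range(len(z_aug)):
        if i not in label_dims:
            # appearance-only perturbation: label-bearing dimensions stay fixed
            z_aug[i] += rng.gauss(0.0, noise_scale)
    return z_aug

# Hypothetical 4-dimensional latent code; dimension 1 assumed label-bearing
z = [0.5, -1.2, 0.3, 0.8]
z_aug = augment_latent(z, label_dims={1})
```

In the real pipeline, `z_aug` would be decoded back to a fundus image that keeps the same diabetic retinopathy grade as the original.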
Robot-assisted ophthalmic automation represents a transformative approach to eye surgery, where precision and control are paramount. As a featured research topic of the MAPS Laboratory, we have made significant progress by collaborating with leading industrial companies to develop one of the few ophthalmic robotic platforms worldwide that achieves micron-level precision for subretinal interventions. Our surgical robotic platform enables end-effector manipulation with exceptional accuracy through meticulous kinematic calibration, setting a new standard for ophthalmic surgery [5][6][7][8]. Additionally, we are pioneering the development of intelligent control algorithms that integrate multiple modalities, such as fundus [9][10] and iOCT [11][12], to enhance surgical automation and significantly accelerate the surgical process with precision that surpasses human capabilities [13][14]. We also emphasize the safety of our platform deployment by analyzing environmental interference such as sporadic vibration [15]. Furthermore, our efforts extend to creating simulation and control devices that seamlessly integrate the robotic platform with surgeons [16], ensuring that the human touch remains vital to surgical performance. These achievements have positioned our lab at the forefront of robotic ophthalmic surgery, contributing to safer and more effective interventions. In the future, we aim to further refine our robotic systems, focusing on enhancing automation capabilities and deploying our platform in more operating rooms to benefit a broader range of patients.
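One geometric quantity at the heart of the RCM-focused calibration work ([5][6][7]) is how far the instrument shaft strays from the fixed pivot point at the scleral incision. A minimal sketch of that misalignment metric, assuming the shaft is modeled as a 3D line (point names and the example offset are illustrative, not the lab's actual code):

```python
def rcm_misalignment(pivot, axis_point, axis_dir):
    """Distance from `pivot` to the line through `axis_point` along `axis_dir`.

    Uses |v x d| / |d|, where v points from the axis to the pivot; the
    direction vector need not be unit length. Units follow the inputs.
    """
    # vector from a point on the shaft axis to the pivot (trocar) point
    v = [p - a for p, a in zip(pivot, axis_point)]
    # cross product v x axis_dir
    cx = v[1] * axis_dir[2] - v[2] * axis_dir[1]
    cy = v[2] * axis_dir[0] - v[0] * axis_dir[2]
    cz = v[0] * axis_dir[1] - v[1] * axis_dir[0]
    norm_cross = (cx * cx + cy * cy + cz * cz) ** 0.5
    norm_dir = (axis_dir[0] ** 2 + axis_dir[1] ** 2 + axis_dir[2] ** 2) ** 0.5
    return norm_cross / norm_dir

# Example: shaft along the z-axis through the origin, pivot offset 0.05 mm in x
error_mm = rcm_misalignment((0.05, 0.0, 10.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

Keeping this distance within a sub-millimeter tolerance is what "micron-level precision" demands of the calibration loop.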
Surgical workflow analysis is an emerging research focus in the MAPS Laboratory, aimed at optimizing the efficiency and precision of ophthalmic surgeries. By equipping the operating room with advanced sensors, we can classify objects and reconstruct the real-time environment through semantic segmentation. This detailed understanding of the surgical environment enables us to monitor and analyze current surgical tasks and object distributions within the operating room. Our goal is to use this information to flexibly deploy our surgical platform at optimal temporal and spatial positions, generating precise instrument manipulation commands for the automatic surgical robot. This research has the potential to significantly enhance the coordination between robotic systems and the surgical environment, leading to more efficient and precise surgical procedures. Moving forward, we aim to refine these capabilities, ensuring that our surgical platforms can adapt seamlessly to dynamic surgical environments and contribute to improved patient outcomes.
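The "object distribution" signal described above can be illustrated by summarizing a semantic segmentation mask of the operating room into per-class area fractions. This is only a sketch of the idea, not the lab's implementation; the class IDs and names are hypothetical.

```python
from collections import Counter

# Hypothetical class IDs for an operating-room segmentation model
CLASS_NAMES = {0: "background", 1: "surgeon", 2: "robot_arm", 3: "instrument_table"}

def class_distribution(mask):
    """`mask` is a 2D list of integer class IDs; returns the fraction of the
    frame covered by each named class."""
    counts = Counter(pixel for row in mask for pixel in row)
    total = sum(counts.values())
    return {CLASS_NAMES.get(c, str(c)): n / total for c, n in counts.items()}

# Toy 3x4 mask standing in for a full-resolution segmentation output
mask = [
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 2, 3],
]
dist = class_distribution(mask)
```

Tracked over time, such per-class occupancy maps are one input for deciding where and when the surgical platform can be deployed.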
Team
Prof. Dr.-Ing. M. Ali Nasseri
Founding Director of MAPS
Alireza Alikhani
Doctoral Candidate
Lead of the Robotic Team
Korab Hoxha
Doctoral Candidate
Angelo Henriques
Doctoral Candidate
Satoshi Inagaki
Doctoral Candidate
Yinzheng Zhao
Doctoral Candidate
Junjie Yang
Doctoral Candidate
Zhihao Zhao
Doctoral Candidate
News
BIBM 2024
Our research group will attend the IEEE BIBM conference in Lisbon in 2024. Two papers have been accepted and will be presented.
IEEE BIBM 2024
IROS 2024
MAPS will be present at IROS 2024 with five papers and one workshop.
IROS-ARVOS 2024, co-organized by MAPS, will take place on 14 October.
ICRA 2024
Our research group will attend the IEEE ICRA conference in Yokohama in 2024. Five papers have been accepted and will be presented. Additionally, our research group has received one best paper award nomination. Videos of these five papers can be seen below.
ForNeRo
The introduction of robotic assistance systems into the clinical workflow leads to a significant increase in technical, social, and organizational complexity in the operating theatre. With ForNeRo, we aim to improve the integration of these systems, taking into account the needs and capacities of the OR staff. This is to be realized with the support of machine learning, simulation, augmented reality, and UI technologies. The investigations focus on the optimal placement and use of the robotic systems.
Publications
2023 - 2024
[1] Sommersperger, M. et al. (2023). Semantic Virtual Shadows (SVS) for Improved Perception in 4D OCT Guided Surgery. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14228. Springer, Cham. https://doi.org/10.1007/978-3-031-43996-4_39
[2] S. Pannek et al., "Exploring the Needle Tip Interaction Force with Retinal Tissue Deformation in Vitreoretinal Surgery," 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 16999-17005, doi: 10.1109/ICRA57147.2024.10610807.
[3] Zhihao Zhao, Shahrooz Faghihroohi, Junjie Yang, Kai Huang, Nassir Navab, Mathias Maier, and M. Ali Nasseri, "Unobtrusive biometric data de-identification of fundus images using latent space disentanglement," Biomed. Opt. Express 14, 5466-5483 (2023)
[4] Zhao, Z. et al. (2023). Label-Preserving Data Augmentation in Latent Space for Diabetic Retinopathy Recognition. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14222. Springer, Cham. https://doi.org/10.1007/978-3-031-43898-1_28
[5] S. Inagaki, A. Alikhani, N. Navab, M. Maier and M. A. Nasseri, "Analyzing Accessibility in Robot-Assisted Vitreoretinal Surgery: Integrating Eye Posture and Robot Position," 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 9894-9900, doi: 10.1109/ICRA57147.2024.10611482.
[6] A. Alikhani, N. Fareghzadeh, S. Inagaki, M. Maier, N. Navab and M. A. Nasseri, "IC-RCM: Intraoperative Real-Time Detection of 5D RCM-Misalignment in Robot-Assisted Ophthalmic Surgeries with a Single-Camera System," 2024 10th International Conference on Electrical Engineering, Control and Robotics (EECR), Guangzhou, China, 2024, pp. 111-117, doi: 10.1109/EECR60807.2024.10607316.
[7] A. Alikhani et al., "PKC-RCM: Preoperative Kinematic Calibration for Enhancing RCM Accuracy in Automatic Vitreoretinal Robotic Surgery," in IEEE Access, vol. 11, pp. 103616-103627, 2023, doi: 10.1109/ACCESS.2023.3316708.
[8] A. Alikhani et al., "RCIT: A Robust Catadioptric-based Instrument 3D Tracking Method For Microsurgical Instruments In a Single-Camera System," 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Sydney, Australia, 2023, pp. 1-5, doi: 10.1109/EMBC40787.2023.10340955.
[9] J. Yang et al., "EyeLS: Shadow-Guided Instrument Landing System for Target Approaching in Robotic Eye Surgery," in IEEE Robotics and Automation Letters, vol. 9, no. 4, pp. 3664-3671, April 2024, doi: 10.1109/LRA.2024.3370000.
[10] J. Yang, Z. Zhao, M. Maier, K. Huang, N. Navab and M. Ali Nasseri, "Shadow-Based 3D Pose Estimation of Intraocular Instrument Using Only 2D Images," 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 1323-1329, doi: 10.1109/ICRA57147.2024.10611011.
[11] S. Dehghani et al., "Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing," 2023 IEEE International Conference on Robotics and Automation (ICRA), London, United Kingdom, 2023, pp. 4724-4731, doi: 10.1109/ICRA48891.2023.10160372.
[12] Zhou, M., Guo, X., Grimm, M., Lochner, E., Jiang, Z., Eslami, A., Ye, J., Navab, N., Knoll, A., & Nasseri, M. A. (2023). Needle detection and localisation for robot‐assisted subretinal injection using deep learning. In CAAI Transactions on Intelligence Technology. Institution of Engineering and Technology (IET). https://doi.org/10.1049/cit2.12242
[13] S. Dehghani et al., "Colibri5: Real-Time Monocular 5-DoF Trocar Pose Tracking for Robot-Assisted Vitreoretinal Surgery," 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 4547-4554, doi: 10.1109/ICRA57147.2024.10610576.
[14] Zhao, Yinzheng, Anne-Marie Jablonka, Niklas A. Maierhofer, Hessam Roodaki, Abouzar Eslami, Mathias Maier, Mohammad Ali Nasseri, and Daniel Zapp. 2023. "Comparison of Robot-Assisted and Manual Cannula Insertion in Simulated Big-Bubble Deep Anterior Lamellar Keratoplasty" Micromachines 14, no. 6: 1261. https://doi.org/10.3390/mi14061261
[15] A. Alikhani, S. Inagaki, S. Dehghani, M. Maier, N. Navab and M. A. Nasseri, "Envibroscope: Real-Time Monitoring and Prediction of Environmental Motion for Enhancing Safety in Robot-Assisted Microsurgery," 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 8202-8208, doi: 10.1109/ICRA57147.2024.10611207.
[16] K. Hoxha, A. Alikhani, S. Inagaki, M. Ferle, M. Maier and M. A. Nasseri, "Modelling and Development of a Mechanical Eye for the Evaluation of Robotic Systems for Surgery," 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Sydney, Australia, 2023, pp. 1-4, doi: 10.1109/EMBC40787.2023.10340226.