B1 – Intraoperative Navigation of Multimodal Sensors

Research Area B: Modeling & Classification

In order to use the information content of multiple sensor channels, the raw sensor signal must be provided with consistent positional information. Position and geometry are computed from the camera image data. For this purpose, a model-based SLAM (Simultaneous Localization and Mapping) algorithm for deformable systems will be developed.

Modelling of Tissue & Sensor Properties and Intraoperative Navigation of Multimodal Sensors

The interdisciplinary modelling of tissue and sensor properties is a central task and forms the interface between the focal areas "sensor technology" and "classification", in which the sensor information is combined. As a precondition for tissue differentiation, it is assumed that different tissue stages (e.g. benign and malignant) have different properties, which in turn can be quantified by domain-specific tissue parameters. To exploit the information content of multiple sensor channels, each sensor signal must be provided with consistent positional information; the aggregation of these data is therefore of central importance for all subsequent steps. To achieve tissue differentiation at a resolution of a few millimeters, intraoperative knowledge of the local position and orientation of the multimodal sensors with corresponding accuracy is indispensable. For this purpose, inertial measurement units (IMUs) are mounted on the instruments, and their data are fused with the intraoperative camera images.
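As a minimal illustration of how IMU data can yield instrument orientation, the sketch below implements a standard complementary filter for a single pitch axis: the drifting but smooth integrated gyroscope signal is blended with the noisy but absolute gravity reference from the accelerometer. This is a generic textbook technique, not the project's actual fusion algorithm, and all function and variable names here are illustrative.

```python
import numpy as np

def complementary_filter(gyro, accel, dt, alpha=0.98):
    """Fuse gyro and accelerometer readings into a pitch estimate.

    gyro  : angular rate about the pitch axis (rad/s), shape (N,)
    accel : (ax, az) accelerometer samples, shape (N, 2)
    dt    : sample period in seconds
    alpha : weight on the integrated gyro estimate
    """
    pitch = 0.0
    estimates = []
    for w, (ax, az) in zip(gyro, accel):
        # Gravity direction gives an absolute but noisy pitch reference.
        pitch_acc = np.arctan2(ax, az)
        # Integrated gyro is smooth but drifts; blend the two sources.
        pitch = alpha * (pitch + w * dt) + (1.0 - alpha) * pitch_acc
        estimates.append(pitch)
    return np.array(estimates)

# Static instrument tilted 10 degrees: gyro reads zero, accelerometer
# sees the corresponding gravity components.
theta = np.deg2rad(10.0)
gyro = np.zeros(200)
accel = np.tile([np.sin(theta), np.cos(theta)], (200, 1))
est = complementary_filter(gyro, accel, dt=0.01)
```

With a static instrument, the estimate converges toward the true tilt angle; in the project setting the absolute reference would additionally come from the camera images rather than gravity alone.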

Network of Modeling and Navigation
Deformations of the organ are considered using a model-based prediction, and the model is dynamically adjusted utilizing navigation data.

The reconstruction of the instrument position and orientation, and their integration into a global map for intraoperative navigation, is a research question in its own right. For this purpose, a parameterizable, three-dimensional geometric model of the organs under study (bladder or ovaries, in cooperation with C1 & C2) is created. This model not only serves as an intraoperative patient-specific map, but also allows pre- and postoperative data to be included. The orientation of the sensors as well as the tissue classification is displayed to the surgeon via augmented reality. In addition, the interpretability of the results can be improved by reconstructing locally distributed tissue parameters from the sensor signals, supporting the data-driven methods of project B3 with existing local model information.
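To make the idea of a parameterizable geometric organ model concrete, the following sketch samples the surface of an ellipsoid whose three semi-axes act as patient-specific shape parameters. An ellipsoid is only a stand-in here (a real bladder or ovary model would be considerably richer), and the function name and axis values are assumptions for illustration.

```python
import numpy as np

def ellipsoid_surface(a, b, c, n_theta=16, n_phi=32):
    """Sample vertices on a parameterized ellipsoid (semi-axes a, b, c).

    A simple stand-in for a patient-specific organ map: the three
    semi-axes are the shape parameters, and the returned vertex grid
    can anchor intraoperatively acquired sensor measurements.
    """
    theta = np.linspace(0.0, np.pi, n_theta)       # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)     # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    x = a * np.sin(T) * np.cos(P)
    y = b * np.sin(T) * np.sin(P)
    z = c * np.cos(T)
    return np.stack([x, y, z], axis=-1)            # (n_theta, n_phi, 3)

# Hypothetical semi-axes in centimeters.
verts = ellipsoid_surface(a=4.0, b=3.5, c=5.0)
```

Because the model is analytic in its parameters, fitting it to pre- or intraoperative image data reduces to estimating a small parameter vector rather than a free-form surface.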

Main tasks

  • Intraoperative Navigation: Estimation of the position and orientation of the instruments involved, relative to each other and to the tissue of interest.
  • Local, Multiphysical Tissue Modelling: Model-based processing of preoperative, intraoperative, and postoperative raw signals.

Methods and Approaches

  • Modelling of the locally distributed tissue model via continuum mechanical balance relations
  • Discretization by finite element method, analysis of the coupling structure, and order reduction
  • Camera-based online adaptation through processed, intraoperative camera image data
  • Fusion of camera data with accelerometer, gyroscope, and magnetometer data (IMU): Solution of an iterative optimization problem with dynamic constraints
  • Derivation of optimal sensor combinations that allow maximizing the quality of the reconstruction



Johannes Schüle


PhD Student, B1


Oliver Sawodny

Prof. Dr.-Ing. habil. Dr. h.c.

Spokesperson of GRK 2543
