B3 – Multimodal Data Driven Sensor Fusion

Research Area B: Modeling & Classification

Multivariate statistical methods will be used to analyze the extensive data sets resulting from the different measurement procedures and to optimize the classification quality of the multimodal tissue differentiation procedures by taking local tissue parameters into account.


Figure: Classification of tissue using sensor fusion

Overview

Data from real applications comprise several modalities that describe content with the same semantics from complementary perspectives. Intraoperative tissue differentiation likewise relies on the fusion of considerable amounts of data carrying different information in order to estimate the malignant potential of the respective target structure. The challenge is to enable the processing of multimodal data flows, i.e. the sensor data of projects A1-A5 acquired in real time, for interventional diagnostics.

A novel feature fusion procedure based on a multi-level feature extraction structure is to be developed in order to fuse the different semantic feature groups and to use the newly generated feature vectors for classification. Furthermore, previously acquired image data (sonography, CT, MRI) can be used indirectly, since local tissue parameters derived from them can be considered as additional features. Different tissue states (benign/malignant) correspond to a multitude of domain-spanning symptom manifestations, which are recorded by the newly developed sensors. The data flows recorded by the sensors, synchronized with the local position and orientation, have to be pre-processed in real time.
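
The following is a minimal sketch, not the subproject's actual pipeline, of how per-modality feature groups could be extracted and concatenated into a fused feature vector; all function names, feature choices, and dimensions are hypothetical.

```python
import numpy as np

def extract_spectral_features(spectrum: np.ndarray) -> np.ndarray:
    """Hypothetical low-level features of a single spectroscopic measurement."""
    return np.array([spectrum.mean(), spectrum.std(), spectrum.max()])

def extract_image_features(patch: np.ndarray) -> np.ndarray:
    """Hypothetical local tissue parameters derived from pre-acquired image data."""
    return np.array([patch.mean(), np.median(patch)])

def fuse_features(spectrum: np.ndarray, patch: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Concatenate the semantic feature groups into one fused feature vector.
    `pose` stands for the synchronized position/orientation of the measurement."""
    groups = [
        extract_spectral_features(spectrum),
        extract_image_features(patch),
        pose,  # position and orientation as additional context features
    ]
    return np.concatenate(groups)

# Example: one fused sample built from dummy inputs
fused = fuse_features(np.random.rand(1024), np.random.rand(16, 16), np.zeros(6))
print(fused.shape)
```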

Figure: Graphical representation of the classification overlaying the model of B1

Methods and Approaches

The stationary locally distributed tissue model from B1 is used to generate synthetic data that represent one dimension of the input for tissue classification. Input data also includes the synthetic sensor data from the strictly local measurement techniques (Raman and IR spectroscopy, projects A2 & A3) as well as from the spatially extended novel sensor systems of projects A1, A4, and A5.
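
A hedged sketch of how such synthetic training data might be assembled, assuming a simple stand-in for the B1 tissue model and crude label-dependent sensor simulators; all names, ranges, and noise levels are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tissue_model(n: int):
    """Stand-in for the stationary, locally distributed tissue model from B1:
    returns 3D positions and a ground-truth label (0 = benign, 1 = malignant)."""
    positions = rng.uniform(0.0, 50.0, size=(n, 3))  # illustrative coordinates
    labels = (np.linalg.norm(positions - 25.0, axis=1) < 10.0).astype(int)
    return positions, labels

def simulate_sensor_readings(labels: np.ndarray) -> np.ndarray:
    """Very rough stand-in for synthetic sensor data: strictly local channels
    (spectroscopy, A2/A3) and spatially extended channels (A1/A4/A5),
    each modelled as a label-dependent signal plus noise."""
    local = labels[:, None] * 2.0 + rng.normal(0.0, 0.5, size=(len(labels), 4))
    extended = labels[:, None] * 1.0 + rng.normal(0.0, 1.0, size=(len(labels), 3))
    return np.hstack([local, extended])

positions, labels = sample_tissue_model(500)
features = simulate_sensor_readings(labels)
print(positions.shape, features.shape, labels.mean())
```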

Classification methods to be investigated include support vector machines (SVMs), k-nearest neighbour classification, decision trees, and artificial neural networks. An approach that additionally provides a confidence level for the prediction, beyond the class membership itself, is modelling with Gaussian processes.
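
As an illustration only, the sketch below compares the listed classifier families on purely synthetic data and shows how a Gaussian process classifier yields a class probability that can serve as a confidence level; it assumes scikit-learn and is not the subproject's implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.gaussian_process import GaussianProcessClassifier

# Synthetic two-class data standing in for fused feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 7))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "Decision tree": DecisionTreeClassifier(),
    "Neural network": MLPClassifier(max_iter=2000),
    "Gaussian process": GaussianProcessClassifier(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.2f}")

# The Gaussian process additionally provides a class probability per point,
# which can be interpreted as a confidence level of the prediction.
print(models["Gaussian process"].predict_proba(X_test[:5]))
```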

These classification algorithms yield a 3D point cloud in which each point carries, as features, its class affiliation (malignant vs. benign) and a confidence level.
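
To illustrate this result format, here is one possible, purely hypothetical representation of such a point cloud as a NumPy structured array holding coordinates, class label, and confidence.

```python
import numpy as np

# Hypothetical layout of the classified 3D point cloud: each point stores
# its coordinates, the predicted class, and a confidence value.
point_dtype = np.dtype([
    ("x", np.float64), ("y", np.float64), ("z", np.float64),
    ("label", np.uint8),        # 0 = benign, 1 = malignant
    ("confidence", np.float64)  # e.g. the predicted class probability
])

cloud = np.zeros(3, dtype=point_dtype)
cloud["x"] = [1.0, 2.0, 3.0]
cloud["y"] = [0.5, 0.5, 0.5]
cloud["z"] = [10.0, 10.5, 11.0]
cloud["label"] = [0, 1, 1]
cloud["confidence"] = [0.92, 0.81, 0.67]

# Points above a confidence threshold could then be visualized or passed on.
reliable = cloud[cloud["confidence"] > 0.8]
print(reliable)
```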


Matthias Ege

M.Sc.

PhD Student B1


Cristina Tarín Sauer

Prof. Dr.-Ing.

Principal Investigator of Subproject B3, Equal Opportunities Officer
