[Image: Artificial neural network]

B2 – Tissue Classification Based on Machine Learning and Linkage of Multimodal Information with Databases

Research Area B: Modeling & Classification

The aim of this research project is to take into account all the information collected per patient during the therapy and to make it effectively usable through pre-trained classification methods. For this purpose, existing labelled databases will be included in addition to acquired data from the partner projects. By means of transfer learning, the knowledge gained from one modality will be used to explore another modality.
The focus is on the development of new machine learning methods for the simultaneous observation of heterogeneous data modalities.

Tissue Classification Based on Machine Learning

Linear classification

Machine learning methods based on deep neural networks (DNNs) have already proven their strength in the robust semantic segmentation of medical image data and are successfully used, for example, to classify malignant tissue in mammograms. In contrast to classical pattern recognition methods, no explicit features are specified; instead, features are learned from large data sets with known segmentation/labelling and stored as the weights of the DNN.

Patch-based tissue classification in histopathology.

In this project, one challenge is to merge different types of established pre- and intraoperative data, such as MRI scans and endoscopic images, with data that is only sparsely available at certain points, such as the signals generated by the sensors from the A projects or the labels defined by intraoperative histopathological frozen sections.

Patch extraction from histopathological whole-slide images.
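The patch-extraction step pictured above can be sketched in a few lines. The patch size, stride, and background threshold below are illustrative assumptions, not the project's actual parameters:

```python
import numpy as np

def extract_patches(slide, patch_size=256, stride=256, bg_threshold=220):
    """Cut a whole-slide image (H x W x 3, uint8) into tiles and
    discard mostly-white background patches."""
    patches = []
    h, w = slide.shape[:2]
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patch = slide[y:y + patch_size, x:x + patch_size]
            # Keep the patch only if it contains enough tissue,
            # i.e. its mean intensity lies below the background threshold.
            if patch.mean() < bg_threshold:
                patches.append(((y, x), patch))
    return patches

# Toy example: a 512x512 "slide" that is white except for one dark tissue region.
slide = np.full((512, 512, 3), 255, dtype=np.uint8)
slide[0:256, 0:256] = 80  # simulated tissue
tissue_patches = extract_patches(slide)
print(len(tissue_patches))  # only the one tile containing tissue survives
```

In practice, whole-slide images are far too large to load at once, so libraries such as OpenSlide read individual regions on demand; the filtering idea stays the same.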

With the help of transfer learning between the modalities, the intraoperatively acquired camera image is to be processed in real time in such a way that spatial registration with the other multimodal models becomes possible.

Multimodal database of urothelial carcinoma. Various clinically relevant diagnostic methods are combined, especially endoscopic video sequences and histopathological sections stained with different immunohistochemical methods.

Linkage of Multimodal Information with Databases

In tumor diagnostics, several publicly available image data sets exist (BACH 2018, CAMELYON16 2016, CAMELYON17 2018) that are already labelled and can be used for training DNNs. However, in most cases only a single modality is given, e.g. histological sections stained with hematoxylin-eosin (HE). In this subproject, DNN-based methods will be investigated that support both the transfer of classifiers between modalities and collaborative training on multiple modalities.
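The transfer idea can be made concrete with a minimal sketch: layers pretrained on the source modality are frozen (here a fixed random projection stands in for pretrained DNN layers), and only a small linear head is retrained on the target modality. All names and data below are illustrative, not the project's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for DNN layers pretrained on the source modality (e.g. HE sections):
# a frozen projection mapping raw inputs to feature vectors.
W_frozen = rng.normal(size=(16, 8))

def features(x):
    return np.tanh(x @ W_frozen)  # frozen, never updated

# Target-modality data (e.g. endoscopic patches): two separable toy classes.
x0 = rng.normal(loc=-1.0, size=(50, 16))
x1 = rng.normal(loc=+1.0, size=(50, 16))
X = np.vstack([x0, x1])
y = np.array([0] * 50 + [1] * 50)

# Retrain only a logistic-regression head on top of the frozen features.
F = features(X)
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid
    grad = p - y                              # gradient of the cross-entropy loss
    w -= 0.1 * F.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = ((F @ w + b > 0) == (y == 1)).mean()
print(acc)
```

Because only the small head is optimized, far less labelled target-modality data is needed than for training a full network from scratch.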

The uneven distribution of labeled data poses a further challenge. Segmentation maps, i.e. per-pixel classifications of histological sections or radiological images, exist only in some cases. Image-level labels derived from diagnostic findings, on the other hand, are available more frequently: they are much easier to produce and also easier to collect for the newly developed modalities from subprojects A1-A5.
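The difference between the two label types can be made concrete: a segmentation map assigns a class to every pixel, while a finding-level label only states whether a class occurs anywhere in the image. The toy example below is purely illustrative:

```python
import numpy as np

# Per-pixel segmentation map: 0 = healthy, 1 = tumor.
seg_map = np.zeros((4, 4), dtype=int)
seg_map[1, 2] = 1  # a single annotated tumor pixel

# Image-level "finding" label: does the image contain tumor at all?
finding = int(seg_map.any())

print(finding)        # 1: the whole image is flagged as containing tumor
print(seg_map.sum())  # yet only 1 of 16 pixels is actually tumor
```

Going from a segmentation map to a finding label is trivial, as shown; the hard direction, recovering spatial information from image-level labels, is exactly what weakly supervised methods attempt.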

The aim is therefore not only to bring the modalities together, but also to unify the heterogeneity of the available label data by using semi-supervised or unsupervised learning methods.
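One common semi-supervised strategy that fits this setting is pseudo-labelling: a model trained on the few labelled samples assigns provisional labels to the unlabelled pool, which is then added to the training set. A minimal sketch with a nearest-centroid classifier on synthetic 2-D features (all data and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Few labelled samples per class, many unlabelled ones.
labelled_x = np.array([[-2.0, 0.0], [2.0, 0.0]])
labelled_y = np.array([0, 1])
unlabelled = np.vstack([rng.normal(loc=(-2, 0), scale=0.3, size=(30, 2)),
                        rng.normal(loc=(+2, 0), scale=0.3, size=(30, 2))])

def centroids(x, y):
    return np.stack([x[y == c].mean(axis=0) for c in (0, 1)])

# Step 1: fit class centroids on the labelled data only.
c = centroids(labelled_x, labelled_y)

# Step 2: pseudo-label the unlabelled pool with the current model.
dists = np.linalg.norm(unlabelled[:, None] - c[None], axis=2)
pseudo_y = dists.argmin(axis=1)

# Step 3: refit on labelled + pseudo-labelled data together.
all_x = np.vstack([labelled_x, unlabelled])
all_y = np.concatenate([labelled_y, pseudo_y])
c_refined = centroids(all_x, all_y)
```

Real implementations typically add only the most confident pseudo-labels per round and iterate; the sketch shows a single round for clarity.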

Project Leaders


Simon Holdenried-Krafft

M.Sc.

PhD Student B2

[Photo: Simon Krafft]


Hendrik Lensch

Prof. Dr.-Ing.

Principal Investigator of Subproject B2

[Photo: Hendrik Lensch, University of Tübingen]
