I am a PhD candidate in the Medical Image Computing group of Prof. Klaus Maier-Hein at the German Cancer Research Center (DKFZ) as part of the European Laboratory for Learning and Intelligent Systems (ELLIS).
My research focuses on bringing machine learning algorithms into clinical practice. We are establishing a nationwide network, called RACOON, that connects all university hospitals in Germany and enables them to perform privacy-preserving federated learning on real patient data. One of the major problems here is the amount of unstructured image data in the clinics, which currently requires many hours of manual work before it can be used by machine learning models. I therefore research the automatic (federated) annotation and curation of medical image data to unlock the potential of using this data straight from the clinics.
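To illustrate the federated idea in one small sketch: each clinic trains on its own data and only model parameters leave the site, aggregated by a central server weighted by local dataset size (FedAvg-style). This is an illustrative toy with NumPy; the function names and the plain weighted average are my assumptions, not the RACOON implementation.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate per-client model parameters without sharing raw data.

    client_weights: list of parameter lists (one list of arrays per client)
    client_sizes:   number of local training samples per client
    Returns the size-weighted average of each parameter (FedAvg-style).
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

# Two toy clients, each holding one parameter vector; client 2 has 3x the data.
avg = federated_average(
    [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]],
    [1, 3],
)
```

In a real round, each client would first run a few local gradient steps before its parameters are averaged; only these parameters, never patient images, are communicated.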
Download my CV.
PhD Candidate, since 2022
German Cancer Research Center (DKFZ)
M.Sc. in Informatics, 2021
Technical University of Munich (TUM)
B.Sc. in Computer Science, 2018
University of Applied Sciences Würzburg-Schweinfurt
Segmentation of Multiple Sclerosis (MS) lesions in longitudinal brain MR scans is performed to monitor the progression of MS lesions. We hypothesize that the spatio-temporal cues in longitudinal data can aid the segmentation algorithm. We therefore propose a multi-task learning approach that defines an auxiliary self-supervised task of deformable registration between two time points, guiding the neural network toward learning from spatio-temporal changes. We show the efficacy of our method on a clinical dataset comprising 70 patients, each with one follow-up study. Our results show that spatio-temporal information in longitudinal data is a beneficial cue for improving segmentation. We improve on the current state of the art by 2.6% in terms of overall score (p<0.05).
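The multi-task objective described above can be sketched as a weighted sum of a segmentation loss and a registration-similarity loss between the two time points. This is a hedged NumPy illustration only: the specific choices of Dice and negative normalized cross-correlation, the function names, and the weight `alpha` are my assumptions, not necessarily the paper's exact losses.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary segmentation map (0 = perfect overlap)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def ncc_loss(warped, fixed, eps=1e-6):
    """Negative normalized cross-correlation: -1 for a perfect registration."""
    a = warped - warped.mean()
    b = fixed - fixed.mean()
    return -np.sum(a * b) / (np.sqrt(np.sum(a**2) * np.sum(b**2)) + eps)

def multitask_loss(seg_pred, seg_target, warped, fixed, alpha=0.5):
    """Joint objective: segmentation task plus auxiliary registration task."""
    return dice_loss(seg_pred, seg_target) + alpha * ncc_loss(warped, fixed)
```

In the multi-task setup, both terms are backpropagated through a shared encoder, so the registration branch acts as a self-supervised signal that needs no extra annotations.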
Deep unsupervised representation learning has recently led to new approaches in the field of Unsupervised Anomaly Detection (UAD) in brain MRI. The main principle behind these works is to learn a model of normal anatomy by learning to compress and recover healthy data. This allows abnormal structures to be spotted from the erroneous recoveries of compressed, potentially anomalous samples. The concept is of great interest to the medical image analysis community as it i) removes the need for vast amounts of manually segmented training data (a necessity for, and pitfall of, current supervised deep learning) and ii) in theory allows the detection of arbitrary, even rare, pathologies which supervised approaches might fail to find. To date, the experimental design of most works hinders a valid comparison, because they i) are evaluated on different datasets and different pathologies, ii) use different image resolutions, and iii) use different model architectures with varying complexity. The intent of this work is to establish comparability among recent methods by utilizing a single architecture, a single resolution, and the same dataset(s). Besides providing a ranking of the methods, we also try to answer questions such as i) how many healthy training subjects are needed to model normality and ii) whether the reviewed approaches are also sensitive to domain shift. Further, we identify open challenges and provide suggestions for future community efforts and research directions.
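The compress-and-recover principle can be shown with a deliberately simple stand-in for a deep model, here a low-rank PCA fitted only on healthy samples: healthy data is reconstructed well, so a large reconstruction error flags a potential anomaly. The PCA substitute and all names are my assumptions for illustration; the surveyed works use neural autoencoder variants.

```python
import numpy as np

def fit_normal_model(healthy, k=1):
    """Learn a rank-k 'compression' of healthy data via PCA (SVD)."""
    mean = healthy.mean(axis=0)
    _, _, vt = np.linalg.svd(healthy - mean, full_matrices=False)
    return mean, vt[:k]  # top-k principal directions

def anomaly_score(x, mean, components):
    """Reconstruction error of a sample: large error -> likely anomalous."""
    z = (x - mean) @ components.T      # compress into the healthy subspace
    recon = z @ components + mean      # recover from the compressed code
    return np.linalg.norm(x - recon, axis=-1)

# Healthy samples lie on a 1-D subspace; an off-subspace point scores higher.
healthy = np.array([[1.0, 0, 0], [2, 0, 0], [3, 0, 0], [4, 0, 0]])
mean, comps = fit_normal_model(healthy, k=1)
```

Thresholding this score per voxel (or per sample) yields the anomaly map; the surveyed methods differ mainly in how the compression model is parameterized and regularized.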