
Record

GENTEL: GENerating Training data Efficiently for Learning to segment medical images

Thakur, R., Rocamora, S., Goel, L., Pohmann, R., Machann, J., & Black, M. (2020). GENTEL: GENerating Training data Efficiently for Learning to segment medical images. In Joint Conferences CAp and RFIAP 2020 (pp. 1-7).


Basic data

Genre: Conference paper

External references

Description: -
OA status:

Creators

Creators:
Thakur, RP, Author
Rocamora, SP, Author
Goel, L, Author
Pohmann, R 1, 2, Author
Machann, J, Author
Black, MJ, Author
Affiliations:
1 Department High-Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497796
2 Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Keywords: -
Abstract: Accurately segmenting MRI images is crucial for many clinical applications. However, manually segmenting images with accurate pixel precision is a tedious and time-consuming task. In this paper we present a simple, yet effective method to improve the efficiency of the image segmentation process. We propose to transform the image annotation task into a binary choice task. We start by using classical image processing algorithms with different parameter values to generate multiple, different segmentation masks for each input MRI image. Then, instead of segmenting the pixels of the images, the user only needs to decide whether a segmentation is acceptable or not. This method allows us to efficiently obtain high quality segmentations with minor human intervention. With the selected segmentations, we train a state-of-the-art neural network model. For the evaluation, we use a second MRI dataset (1.5T Dataset), acquired with a different protocol and containing annotations. We show that the trained network i) is able to automatically segment cases where none of the classical methods obtain a high quality result; ii) generalizes to the second MRI dataset, which was acquired with a different protocol and was never seen at training time; and iii) enables detection of mis-annotations in this second dataset. Quantitatively, the trained network obtains very good results: DICE score (mean 0.98, median 0.99) and Hausdorff distance in pixels (mean 4.7, median 2.0).
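The sketch below is not from the record or the paper; it only illustrates, under assumed names and parameter values, the workflow the abstract describes: classical thresholding at several parameter values produces candidate masks, a human accepts or rejects each candidate, accepted masks become training data, and segmentations are scored with the DICE and Hausdorff metrics reported above. A segmentation network would then be trained on the selected pairs and evaluated against the annotated 1.5T dataset.

import numpy as np
from scipy.ndimage import distance_transform_edt

def candidate_masks(image, thresholds=(0.3, 0.4, 0.5, 0.6)):
    # Classical segmentation step (illustrative): intensity thresholding at
    # several parameter values yields multiple candidate masks per image.
    norm = (image - image.min()) / (np.ptp(image) + 1e-8)
    return [(norm > t).astype(np.uint8) for t in thresholds]

def dice(pred, ref):
    # DICE overlap between two binary masks (1.0 = perfect agreement).
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + 1e-8)

def hausdorff(pred, ref):
    # Symmetric Hausdorff distance in pixels, via distance transforms:
    # for every foreground pixel of one mask, the distance to the nearest
    # foreground pixel of the other; return the largest such distance.
    d_to_ref = distance_transform_edt(1 - ref)
    d_to_pred = distance_transform_edt(1 - pred)
    return max(d_to_ref[pred.astype(bool)].max(initial=0.0),
               d_to_pred[ref.astype(bool)].max(initial=0.0))

def select_training_masks(images, is_acceptable):
    # Binary-choice annotation: the user does not draw pixels, only answers
    # yes/no per candidate; accepted masks become training data.
    training_set = []
    for img in images:
        for mask in candidate_masks(img):
            if is_acceptable(img, mask):
                training_set.append((img, mask))
                break
    return training_set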

Details

Language(s):
Date: 2020-06
Publication status: Published online
Pages: -
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: -
Degree: -

Event

Title: Congrès Reconnaissance des Formes, Image, Apprentissage et Perception (RFIAP 2020)
Venue: Vannes, France
Start/end date: 2020-06-23 - 2020-06-26

Decision


Project information


Source 1

Title: Joint Conferences CAp and RFIAP 2020
Source genre: Conference proceedings
Creators:
Affiliations:
Place, publisher, edition: -
Pages: -
Volume / Issue: -
Article number: -
Start / end page: 1 - 7
Identifier: -