Thursday, March 4 • 15:42 - 15:48
G11: Semi-automated annotation tool outperforms trained medical students and is comparable to clinical expert performance for frame level detection of colorectal polyps


Authors
T. EELBODE (1), O. AHMAD (2), P. SINONQUEL (3), T. KOCADAG (2), N. NARAYAN (2), N. RANA (2), F. MAES (1), L. LOVAT (2), R. BISSCHOPS (3) / [1] KU Leuven, Leuven, Belgium, Medical Imaging Research Center, ESAT/PSI, [2] University College London Hospitals, London, United Kingdom, Wellcome/EPSRC Centre for Interventional & Surgical Sciences (WEISS), [3] UZ Leuven, Leuven, Belgium, Gastroenterology and Hepatology

Introduction
Training of deep learning systems requires an enormous amount of labelled data. Ideally, these data cover not only the full range of polyp appearances encountered in real life, but also the full range of image qualities and polyp locations. Expert labelling of every frame in a polyp video is therefore the most robust way to construct a training set, but it is very time-consuming and currently represents a major barrier to widespread implementation of AI in endoscopy.

Aim
This study aims to evaluate two alternative approaches for frame-level annotation: an innovative semi-automated labelling tool and manual annotation by trained medical students.

Methods
Twenty unique white-light polyp videos comprising 6282 frames (14 adenomas and 6 sessile serrated lesions confirmed by histopathology; mean size 7 mm; Olympus) were annotated with bounding boxes by a clinical expert, and these annotations served as the gold standard for comparison. Two cheaper annotation methods were then evaluated for validity and relative performance: (1) a semi-automated labelling technique, which requires only 3 manually annotated video frames, from which a representation of the polyp is learned and propagated automatically to all other frames in the video (a generic illustration of such propagation is sketched below); and (2) independent manual labelling of each video by three medical students who had completed a training module with polyp images and videos.
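
The abstract does not describe how the learned polyp representation is transferred to the remaining frames. As a rough illustration only, the sketch below propagates a bounding box from a few manually annotated seed frames to all later frames with an off-the-shelf object tracker (OpenCV CSRT). The tracker choice, the function propagate_annotations and its interface are assumptions for illustration, not the authors' tool.

```python
# Illustrative sketch only: propagate bounding boxes from a few manually
# annotated seed frames to the remaining frames of a video using an
# off-the-shelf tracker (OpenCV CSRT, from opencv-contrib-python).
# This is NOT the authors' method, which is not detailed in the abstract.
import cv2


def propagate_annotations(frames, seed_boxes):
    """frames: list of BGR images; seed_boxes: {frame_index: (x, y, w, h)}.

    Returns a dict mapping every covered frame index to a propagated box,
    or None where tracking fails and manual review would be needed.
    """
    boxes = dict(seed_boxes)
    seeds = sorted(seed_boxes)
    for start in seeds:
        tracker = cv2.TrackerCSRT_create()
        tracker.init(frames[start], seed_boxes[start])
        # Track forward until the next seed frame (or the end of the video).
        nxt = min([s for s in seeds if s > start], default=len(frames))
        for i in range(start + 1, nxt):
            ok, box = tracker.update(frames[i])
            boxes[i] = tuple(int(v) for v in box) if ok else None
    return boxes
```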

Results
The semi-automated method significantly outperformed all three students on frame-level sensitivity (paired t-test, p < 0.05), with 74%, 63%, 67% and 94% (SD 27%, 20%, 27% and 6%) for students 1, 2 and 3 and the semi-automated method, respectively. It also achieved the highest positive predictive value (PPV), with 89%, 95%, 65% and 97% (SD 31%, 22%, 22% and 6%), and the highest adjudicated PPV (accounting for borderline low-quality frames), with 90%, 95%, 95% and 99% (SD 15%, 7%, 12% and 14%). The total annotation time was also significantly shorter with the semi-automated method: 264, 1208, 234 and 25 minutes, respectively.
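
For context, frame-level sensitivity in this kind of evaluation is typically obtained by matching each proposed bounding box against the expert box (for example via an intersection-over-union threshold) and then comparing per-video scores across annotators with a paired t-test. The snippet below is a minimal sketch of that evaluation under an assumed IoU threshold of 0.5; it is not the authors' evaluation code, and the helper names are hypothetical.

```python
# Minimal sketch of a per-video frame-level evaluation and the paired
# t-test used to compare annotators; the IoU threshold of 0.5 is an assumption.
from scipy.stats import ttest_rel


def iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0


def video_sensitivity(pred_boxes, gold_boxes, thr=0.5):
    """Fraction of expert-annotated frames with a matching proposed box."""
    hits = sum(
        1
        for i, g in gold_boxes.items()
        if pred_boxes.get(i) is not None and iou(pred_boxes[i], g) >= thr
    )
    return hits / len(gold_boxes)


# Paired comparison across the 20 videos: sens_student and sens_tool would
# each hold 20 per-video sensitivities for one student and the tool.
# t_stat, p_value = ttest_rel(sens_student, sens_tool)
```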

Conclusions
A semi-automated labelling tool is a faster, more efficient and valid approach for frame-level annotation of colorectal polyps. It outperforms three medical students specifically trained for polyp recognition and is comparable to clinical expert performance.

