Endoscopy Artefact Detection and Segmentation (EAD2020)
Training dataset (frames only) has now been released! 2200 annotated frames across 8 classes.
Semantic segmentation Phase-I data (frames only) has now been released, with 5 classes.
(15-01-2020): Both semantic segmentation and detection data (frames only) released! 99 annotated frames: 8 classes for detection and 6 classes for semantic segmentation.
--> (Note: due to class imbalance, we request that participants use only the first 5 classes for segmentation.)
(20-01-2020): All training data released!
--> The leaderboard has now been set up and tested! Only 50% of the test data is released. (Starts online from 13th February.)
--> Please note that the full 100% test data will be released only 2 days before the closing date of the competition. Only results on this data will be taken into account to decide the winner!
--> A limit of 2 submissions per day is now in place.
--> Your intention to submit should include an abstract, a brief description of your method, and your results on the current subset of the dataset. (Please do not compare your method with those of fellow participants in the leaderboard.*)
--> The leaderboard has been temporarily made unavailable for the final round of the challenge. Please note that the final round will be on 100% of the test data, which will be sent to participants who have submitted their 2-4 page intention to submit for the EndoCV2020 proceedings at CMT.
--> Call for travel grant applications*: please send your application to firstname.lastname@example.org by 10am, 12th March 2020.
Google groups: https://groups.google.com/d/forum/endocv2020
THE FINAL ROUND RUNS FROM 1st March 23:59 TILL 3rd March 23:59. GOOD LUCK TO ALL PARTICIPANTS.
--> Final full paper submission: 7th March (extended deadline)
Accurate detection of artefacts is a core challenge in a wide range of endoscopic applications addressing multiple disease areas. Precise detection of these artefacts is essential for high-quality endoscopic frame restoration and crucial for realising reliable computer-assisted endoscopy tools for improved patient care. Existing endoscopy workflows detect only a single artefact class, which is insufficient for high-quality frame restoration. In general, the same video frame can be corrupted by multiple artefacts; for example, motion blur, specular reflections, and low contrast can all be present in the same frame. Furthermore, not all artefact types contaminate the frame equally. So, unless the multiple artefacts present in a frame are known together with their precise spatial locations, clinically relevant frame restoration quality cannot be guaranteed. Another advantage of such detection is that frame quality assessment can be guided to minimise the number of frames that get discarded during automated video analysis.

The aim of this task is to localise bounding boxes, predict class labels, and produce pixel-wise segmentations of 8 different artefact classes for given frames and clinical endoscopy video clips.
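As a point of reference for the bounding-box part of the task, the sketch below computes intersection-over-union (IoU), the standard overlap measure behind detection metrics such as mAP. The `[x1, y1, x2, y2]` box format and the `iou` helper name are illustrative assumptions, not the official EAD2020 annotation format or evaluation code.

```python
# Minimal IoU sketch for axis-aligned bounding boxes.
# Box format [x1, y1, x2, y2] is an assumption for illustration only.

def iou(box_a, box_b):
    """Return intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 25 / 175 ≈ 0.143
```

A predicted box is typically counted as a true positive when its IoU with a ground-truth box of the same class exceeds a threshold (commonly 0.5).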
The 8 classes in this challenge are specularity, bubbles, saturation, contrast, blood, instrument, blur, and imaging artefacts. Algorithms are also evaluated on the generalisation of the detection methods used in this category.
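For the pixel-wise segmentation part, a common overlap measure is the per-class Dice coefficient, sketched below under stated assumptions: integer label masks and the metric choice are illustrative, and the challenge's official evaluation script may differ.

```python
import numpy as np

# Hypothetical sketch: per-class Dice overlap between a predicted and a
# ground-truth segmentation mask, each given as an integer label array.

def dice(pred, truth, class_id):
    """Dice coefficient for one class between two integer label masks."""
    p = (pred == class_id)
    t = (truth == class_id)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # class absent in both masks: treat as perfect agreement
    return 2.0 * np.logical_and(p, t).sum() / denom

pred = np.array([[1, 1], [0, 2]])
truth = np.array([[1, 0], [0, 2]])
print(dice(pred, truth, 1))  # 2*1 / (2+1) ≈ 0.667
```

Averaging this score over the evaluated classes gives one simple per-frame segmentation summary.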
- EAD2019 (in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI'19))
Number of users: 777