Endoscopy Artefact Detection and Segmentation (EAD2020)

Training dataset (frames only) has now been released: 2200 annotated frames across 8 classes!

Semantic segmentation Phase-I data (frames only) has now been released: 5 classes.

(15-01-2020): Both semantic segmentation and detection data (frames only) have been released: 99 annotated frames, with 8 classes for detection and 6 classes for semantic segmentation.

--> (Note: due to class imbalance, we request that participants use only the first 5 classes for segmentation.)

(20-01-2020): All training data has now been released!

--> The leaderboard has now been set up and tested! Only 50% of the test data has been released. (Goes online from 13th Feb.)

--> Please note that the full (100%) test data will be released only 2 days before the closing date of the competition. Only the results on this data will be taken into account to decide the winner.

--> A limit of 2 submissions per day is now in place.

--> Now accepting papers: https://cmt3.research.microsoft.com/EndoCV2020 (intention-to-submit deadline extended from 25th Feb to 28th Feb; please note that the full test data will be made available only to these participants).

--> The intention to submit should include an abstract, a brief description of the method, and your results on the current subset of the dataset. (Please do not compare your method with those of fellow participants on the leaderboard.)

LaTeX sample

--> The leaderboard has been temporarily made unavailable for the final round of the challenge. Please note that the final round will use the full (100%) test data, which will be sent to participants who have submitted their 2-4 page intention to submit for the EndoCV2020 proceedings at CMT.

--> Call for travel grant applications: please send your application to sharib.ali@eng.ox.ac.uk by 10am, 12th March 2020.

Google groups: https://groups.google.com/d/forum/endocv2020

THE FINALS RUN FROM 1st March 23:59 UNTIL 3rd March 23:59. GOOD LUCK TO ALL PARTICIPANTS.

--> Final full paper submission: 7th March (extended deadline)

ABOUT

Endoscopy is a widely used clinical procedure for the early detection of numerous cancers (e.g., nasopharyngeal, oesophageal adenocarcinoma, gastric, colorectal and bladder cancers), for therapeutic procedures, and for minimally invasive surgery (e.g., laparoscopy). During this procedure an endoscope is used: a long, thin, rigid or flexible tube with a light source and a camera at its tip, which allows the inside of the affected organs to be visualised on a screen. A major drawback of these video frames is that they are heavily corrupted with multiple artefacts (e.g., pixel saturation, motion blur, defocus, specular reflections, bubbles, fluid, debris). These artefacts not only make it difficult to visualise the underlying tissue during diagnosis but also affect any post-analysis methods required for follow-ups (e.g., video mosaicking done for follow-up and archival purposes, and video-frame retrieval needed for reporting).

Accurate detection of artefacts is a core challenge in a wide range of endoscopic applications addressing many different disease areas. Precise detection of these artefacts is essential for high-quality endoscopic frame restoration and crucial for building reliable computer-assisted endoscopy tools for improved patient care. Existing endoscopy workflows detect only one artefact class, which is insufficient for high-quality frame restoration. In general, the same video frame can be corrupted by multiple artefacts; for example, motion blur, specular reflections, and low contrast can all be present in a single frame. Further, not all artefact types contaminate the frame equally. So, unless the multiple artefacts present in a frame are known along with their precise spatial locations, clinically relevant frame-restoration quality cannot be guaranteed. A further advantage of such detection is that frame-quality assessment can be guided to minimise the number of frames discarded during automated video analysis. The aim of this task is to localise bounding boxes, predict class labels, and produce pixel-wise segmentations for 8 different artefact classes in given frames and clinical endoscopy video clips.

The 8 classes in this challenge are specularity, bubbles, saturation, contrast, blood, instrument, blur and imaging artefacts. Algorithms are also evaluated on how well the detection methods generalise.
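To make the detection task concrete, below is a minimal sketch of how a predicted bounding box is typically matched against a ground-truth annotation using intersection-over-union (IoU). The class list follows the challenge description, but the (x1, y1, x2, y2) box format, the dictionary layout, and the 0.5 IoU threshold are illustrative assumptions, not the official evaluation protocol.

```python
# Illustrative sketch only: box format and threshold are assumptions,
# not the official EAD2020 evaluation protocol.

# The 8 artefact classes named in the challenge description.
CLASSES = ["specularity", "bubbles", "saturation", "contrast",
           "blood", "instrument", "blur", "imaging artefact"]

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred, gt, thresh=0.5):
    """A prediction counts as correct if the class label matches
    and the boxes overlap with IoU >= thresh."""
    return pred["label"] == gt["label"] and iou(pred["box"], gt["box"]) >= thresh
```

For example, a "specularity" prediction at (0, 0, 10, 10) against a ground-truth "specularity" box at (5, 0, 15, 10) has IoU 1/3 and would not count as a true positive at a 0.5 threshold.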

Relevant information:

Previous challenges: