Hi all, I have a few questions regarding the evaluation code.

At first I thought the evaluation code was available at https://github.com/schnobi1990/robust_mis_2019, but this version seems incomplete compared to https://phabricator.mitk.org/source/rmis2019/browse/master/. Is that expected?

Also, in the function "compute_statistics" (in mean_average_precision_calculations.py), the parameter min_iou_for_match has a default value of 0.03, while the challenge description says that "assigned pairs of references and predictions were defined as TP if their IoU>0.3". Since create_algorithm_performances.py only reports performance for the MI_DSC metric and not the other metrics, I don't see any explicit call to "compute_statistics", so I can't tell whether you changed this default from 0.03 to 0.3, or whether I'm just mixing things up. Could you clarify this? Thanks in advance.
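To make sure I'm reading it right, here is a minimal sketch of what I assume the matching is supposed to do. The two helpers are my own illustration, not code from the repository; only the parameter name min_iou_for_match is taken from compute_statistics:

```python
import numpy as np

def iou(prediction: np.ndarray, reference: np.ndarray) -> float:
    """IoU of two boolean instance masks (True = instrument pixel)."""
    overlap = np.logical_and(prediction, reference)
    union = np.logical_or(prediction, reference)
    return overlap.sum() / float(union.sum())

def is_true_positive(prediction, reference, min_iou_for_match=0.3):
    """TP check for an assigned reference/prediction pair.

    With min_iou_for_match=0.3 this matches the challenge description
    ("TP if their IoU>0.3"); the 0.03 default currently in the code
    would let almost any overlap count as a match.
    """
    return iou(prediction, reference) > min_iou_for_match
```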

Hi @schnobi1990, do you have any updates? I don't think I've received any email.
@schnobi1990 By the way, I don't really get how the mAP was computed. Usually, to compute mAP, the detection scores are used to rank detections in descending order. However, only segmentations without scores are used for the evaluation, so I don't see how it's possible to get a PR curve (and hence mAP).
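For reference, this is the textbook procedure I have in mind; the sketch below is my own illustration of score-based AP, not the challenge implementation:

```python
import numpy as np

def average_precision(scores, is_tp, num_references):
    """AP from scored detections: rank by confidence, then integrate
    the precision-recall curve (assumes num_references > 0). Without
    per-detection scores, the ranking step is undefined -- which is
    exactly the point of my question."""
    order = np.argsort(scores)[::-1]            # rank detections by score
    tp = np.cumsum(np.asarray(is_tp)[order])    # cumulative true positives
    fp = np.cumsum(~np.asarray(is_tp)[order])   # cumulative false positives
    precision = tp / (tp + fp)
    recall = tp / num_references
    # Area under the PR curve (simple rectangle rule).
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))
```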
Hi @schnobi1990, thanks a lot for answering so quickly :) OK, so the paper I was reading was https://arxiv.org/pdf/2003.10299.pdf, which must not be up to date, because you were still using mAP at that time. Is your latest publication freely available somewhere? Thanks a lot for updating your code :)

By the way, just for your information, I noticed that there is only one frame in the whole dataset that has no instrument instances but still has a file instrument_instances.png (Testing/Stage_3/Sigmoid/9/104845/raw.png); that's a bit weird, but nothing important :)

My private email is mordokkai@gmail.com. Thanks! Sylvain
Hi @Mordokkai, thank you for testing the code!!! :)

1. The correct repository is https://phabricator.mitk.org/source/rmis2019/browse/master. The other one was a preliminary solution, which I will remove now; thank you for the reminder.
2. The IoU of 0.03 should be 0.3, which I forgot to push, thank you :). I will update the repository soon. Luckily, my local copy of the repository has the correct value ;).
3. During the rebuttal of our manuscript we noticed that the mean average precision (mAP) was not correctly implemented. For this reason, in our latest publication we changed the ranking and validation for the detection task, and we no longer use the mAP. I will put the new code into the new repository within the next few days. If you send me your private e-mail address (by PM), I will keep you up to date. This will also include the fix for the failing test.

Cheers, Tobias
@schnobi1990 Can you please reply?
@AnnikaReinke I'm sorry to tag you like this; I just don't know whether anybody from the challenge team was notified of my last two messages. Again, I'm sorry for the inconvenience.
I also ran the unit tests on my Ubuntu 20.04 machine using https://phabricator.mitk.org/source/rmis2019/browse/master/. test_distances.py and test_instance_dice.py succeeded, while test_detection_metric.py failed:

```
rmis2019/statistics/mean_average_precision_calculations.py:27: RuntimeWarning: invalid value encountered in true_divide
  iou = overlap.sum() / float(union.sum())  # Treats "True" as 1,
.F
======================================================================
FAIL: test_mean_average_precision (__main__.TestMAPCalculation)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_detection_metric.py", line 330, in test_mean_average_precision
    self.assertAlmostEqual(compute_mean_average_precision(statistics_list_1), (0.4*1.0+0.4*0.57+0.2*0.5), delta=delta)
AssertionError: 0.714 != 0.728 within 0.0005 delta (0.014000000000000012 difference)
----------------------------------------------------------------------
Ran 3 tests in 0.009s

FAILED (failures=1)
```
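The RuntimeWarning comes from the IoU dividing by union.sum() when both masks are empty. A guard like the one below would avoid the NaN; returning 1.0 for two empty masks is only my assumption, and the intended convention may differ:

```python
import numpy as np

def iou(prediction: np.ndarray, reference: np.ndarray) -> float:
    overlap = np.logical_and(prediction, reference)
    union = np.logical_or(prediction, reference)
    if union.sum() == 0:
        # Both masks are empty, so 0/0 would yield NaN and the
        # RuntimeWarning above. Treating "both empty" as perfect
        # agreement (IoU = 1.0) is my assumption here, not the
        # repository's documented behaviour.
        return 1.0
    return overlap.sum() / float(union.sum())
```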
