Dear organizers of the KPIs challenge, I have a question regarding the exact implementation of the evaluation metric used in your challenge. A close look at the provided segmentation masks shows a slight inconsistency in the pixel values along the edges of the annotated regions. I unfortunately cannot upload screenshots here, but as soon as you zoom in very deep, it becomes obvious what I am referring to. I suspect this is unwanted and an artifact that arises during the export of the annotations. In other words: there are annotated pixels outside the boundary with values != 0, and inside the structure there are values != 255. During the calculation of the Dice score this could potentially lead to misleading results. Can you please share how you are going to address this issue? For short-term evaluations we are currently mitigating it by thresholding, i.e. `array[array < 150] = 0` and `array[array >= 150] = 1`, where we found 150 to be a sufficient separator. Can we assume for further trainings that something similar is also applied during your evaluations? Best wishes
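For reference, a minimal sketch of the thresholding workaround described above, assuming the mask is loaded as an 8-bit NumPy array; the file name is hypothetical:

```python
import numpy as np
import imageio.v3 as iio

# Hypothetical mask path; any of the provided *_mask.jpg files would do
mask = iio.imread('normal_M2_26_4096_1024_mask.jpg')

# Binarize with 150 as the separator, as described above:
# everything below 150 becomes background (0), everything else foreground (1)
binary = (mask >= 150).astype(np.uint8)

print(np.unique(binary))  # expected: [0 1]
```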

Created by mafi95
@huoy1 Thanks so much for the quick response! It's all clear now.
Thanks for the detailed specification. We have updated the instructions based on your concern at https://sites.google.com/view/kpis2024/evaluation: "For the segmentation, Task 1 segmentation must be JPG or PNG and use the .jpg or .png file extension; Task 2 segmentation must be a TIFF image and use the .tiff file extension. The output mask should be saved in binary format, such as {0,1} or {0,255}." For your second question: yes, please save the mask files explicitly, and we will run the testing code on the masks to compute the final metrics. Please let us know if you have any questions or concerns. Best regards, KPIs Challenge Organizing Committee
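For concreteness, a minimal sketch (using Pillow; not the organizers' code) of saving a binary prediction in the formats named above — the prediction array and file names are hypothetical:

```python
import numpy as np
from PIL import Image

# Hypothetical prediction: a binary mask with values in {0, 1}
pred = (np.random.rand(512, 512) > 0.5).astype(np.uint8)

# Task 1 (patch level): save in {0, 255} as a lossless PNG
Image.fromarray(pred * 255).save('pred_mask.png')

# Task 2 (slide level): save as TIFF
Image.fromarray(pred * 255).save('pred_mask.tiff')
```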
@huoy1 The two validation Dockers (`hrlblab333/kpis:validation_patch` and `hrlblab333/kpis:validation_slide`) save output in `.png` format, but the Submission Guideline ([here](https://sites.google.com/view/kpis2024/evaluation?authuser=0)) asks us to save the mask results for Task 1 in `.jpg` format. For segmentation tasks, the output mask should NOT be saved as JPEG. The submission guideline and the validation example Docker are inconsistent, so please check that! By the way, please answer my previous second question if you can. Thanks,
You can run the Docker on the released data and see the format. Thanks
@huoy1 Thanks for the reply!

> .py, .jpg or .png is just a format, you can choose whatever format for your research, MONAI or any other tools.

Then why does the Submission Guideline (https://sites.google.com/view/kpis2024/evaluation?authuser=0) state: `For the segmentation, Task 1 segmentation must be JPG and use the .jpg file extension, Task 2 segmentation must be TIFF image and use the .tiff file extension.`?

> You will submit a Docker for final evaluation rather than files.

In my understanding, we submit a Docker that is supposed to write the prediction results to an `output_dir`, and the organizers then take these outputs and evaluate them with their own evaluation code. Is that correct?
.py, .jpg or .png is just a format, you can choose whatever format for your research, MONAI or any other tools. However, for our evaluation purposes, please follow the Docker to get the Dice score. You will submit a Docker for final evaluation rather than files. Please run our example Docker to get the details. The organization team has limited bandwidth to incorporate different formats. Best regards, KPIs Challenge Organizing Committee
@agaldran

> In my experience, no matter what you do, if you end up saving a segmentation mask as JPG, artifacts (non-binary values) will be created due to compression at the borders. It would be a much better idea to ask participants to submit masks in PNG, I believe.

I found that the provided mask data with the `.jpg` extension was actually saved in PNG format, but the organizers then changed the filenames to `.jpg`. This is really weird! If you check any mask file (`.jpg`) with the code below, you'll always see that the file is in PNG format. For example:

```python
mask_path = 'Task1_patch_level/validation/normal/normal_M2/mask/normal_M2_26_4096_1024_mask.jpg'

# The first bytes of a file identify its format; PNG files start with b'\x89PNG\r\n\x1a\n'
with open(mask_path, 'rb') as file:
    signature = file.read(8)
print(signature)  # output: b'\x89PNG\r\n\x1a\n'
```

> Also, one question, are you expecting the results to have values in {0,1} or in {0,255}?

In the Docker evaluation code, they use MONAI's `ScaleIntensity` to scale the data to the range [0, 1], so you can use either {0,1} or {0,255} (though {0,255} is preferable since the provided mask data uses these values).

@huoy1 Could you explain why the provided mask data with the `.jpg` extension was saved as PNG? Also, we should submit the prediction results in PNG; there is no way to save them in JPG format without losing data, since JPEG is a lossy compression.
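To illustrate the point about `ScaleIntensity`, a small sketch (not the organizers' actual evaluation code) showing that masks in {0,255} and {0,1} become identical after scaling:

```python
import numpy as np
from monai.transforms import ScaleIntensity

scaler = ScaleIntensity(minv=0.0, maxv=1.0)

mask_255 = np.array([[0, 255], [255, 0]], dtype=np.float32)
mask_01 = np.array([[0, 1], [1, 0]], dtype=np.float32)

# Both inputs are linearly rescaled so min -> 0 and max -> 1,
# hence the {0,255} and {0,1} masks end up the same
print(scaler(mask_255))
print(scaler(mask_01))
```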
Please refer to our released evaluation Docker. We will use the same docker to evaluate the performance. Best regards, KPIs Challenge Organizing Committee
Hey hello, In my experience, no matter what you do, if you end up saving a segmentation mask as JPG, artifacts (non-binary values) will be created due to compression at the borders. It would be a much better idea to ask participants to submit masks in PNG, I believe. Also, one question, are you expecting the results to have values in {0,1} or in {0,255}? Cheers, Adrian
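To illustrate Adrian's point, a small self-contained sketch showing how a JPEG round-trip introduces non-binary values at region borders while PNG does not:

```python
import io
import numpy as np
from PIL import Image

# Toy binary mask: a white square on a black background
mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 16:48] = 255

# Round-trip the mask through JPEG (lossy) and PNG (lossless) in memory
for fmt in ('JPEG', 'PNG'):
    buf = io.BytesIO()
    Image.fromarray(mask).save(buf, format=fmt)
    restored = np.asarray(Image.open(io.BytesIO(buf.getvalue())))
    # JPEG typically shows many intermediate values near the border; PNG stays [0 255]
    print(fmt, np.unique(restored))
```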
We have fixed the masks to a binary version. Please download the new masks and this problem should be addressed. We performed the following updates on the masks (the images are identical).

Update notes for training masks:
- Misaligned mask issue for case 11-358: the alignment issue for case 11-358 has been fixed for both Task 1 and Task 2. 188 patches have replaced the previous 71 patches in v1.1.
- Misaligned mask issue for case normal_f1576: the alignment issue for case normal_f1576 has been fixed for Task 2. The normal_f1576 WSI has been replaced in v1.1.
- Non-binary mask issue: the issue with non-binary masks has been fixed. The non-binary masks have been replaced in v1.1.

Update notes for validation masks:
- Non-binary mask issue: the issue with non-binary masks in the validation data has been fixed. The non-binary masks have been replaced in v1.1.

We strongly encourage all participants to download the updated masks from the challenge website to incorporate these fixes into your workflows. The previous masks (v1.0) have been moved to the "Old_data_backup" folder. Should you have any questions or require further assistance, please do not hesitate to contact us. Best regards, KPIs Challenge Organizing Committee
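A quick way to verify the downloaded v1.1 masks is a sketch along the following lines; the directory name is hypothetical and should point at your local copy of the data:

```python
import numpy as np
import imageio.v3 as iio
from pathlib import Path

# Hypothetical local directory containing the updated v1.1 masks
for path in sorted(Path('Task1_patch_level').rglob('*_mask.jpg')):
    values = np.unique(iio.imread(path))
    # After the v1.1 fix, every mask should contain only the values 0 and 255
    if not set(values.tolist()) <= {0, 255}:
        print(f'{path} still has non-binary values: {values}')
```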
@huoy1 Has this problem been fixed? Also, I think the correct format for segmentation masks should be PNG.
Great! We will fix this issue ASAP. Thanks for sharing this.
checking_Task1_label_code:

```python
import imageio.v3
import numpy as np
from pathlib import Path

def get_npy_file_paths(directory):
    # Collect all mask files under the given directory, sorted for reproducibility
    directory_path = Path(directory)
    npy_file_paths = sorted([str(file_path) for file_path in directory_path.rglob('*_mask.jpg')])
    return npy_file_paths

in_dir = 'D:/data/'
list11 = get_npy_file_paths(in_dir)
for input_path in list11:
    label = imageio.v3.imread(input_path)
    print(input_path)
    print(np.unique(label))
    # The original post was truncated at this line; presumably it counted
    # the pixels that are neither 0 nor 255, e.g.:
    print(np.sum((0 < label) & (label < 255)))
```

Can you please provide the case number? We will check.

Implementation of validation metric