Task 2 & 3 output

Can you please clarify whether we are supposed to predict the progression status (either 0 or 1) or the progression-free survival (the number of days)? If it is progression status, should the submission be the probability of progression? Thanks.


I have the same question; I thought that for Task 2 we need to submit a CSV file with two columns: the first column is the patient ID, and the second column is the predicted risk of PFS.

Could someone from the organizers kindly clarify this point?

Thanks

For tasks 2 and 3, you need to submit a CSV file. The first column is the patient ID; the second column can be either concordant with the PFS in days (e.g., the predicted PFS in days) or anti-concordant (i.e., a predicted risk score). A checkbox will be available during submission to specify one or the other. So a risk score and PFS days are both fine, but the progression status (0 or 1) is not.
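To illustrate the two-column layout, here is a minimal sketch of how such a submission file could be written. The patient IDs, the risk_scores values, and the file name are hypothetical; only the two-column format (patient ID, predicted score) comes from the description above.

```python
import csv

# Hypothetical predicted risk scores keyed by patient ID
# (higher = higher risk, i.e. anti-concordant with PFS in days).
risk_scores = {
    "Patient001": 1.37,
    "Patient002": 0.42,
    "Patient003": 2.05,
}

# Write the two-column CSV: patient ID, predicted score.
with open("task2_submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for patient_id, score in risk_scores.items():
        writer.writerow([patient_id, score])
```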


Thank you very much for the detailed response.


So just to clarify, there will be no need to submit a Docker container for tasks 2 and 3?

Just to answer myself: the official docs now say that for task 3, we are expected to provide a Docker container.

Task 3: For this task, the developed methods will be evaluated on the testing set by the organizers by running them within a Docker container provided by the participants. Practically, your method should process one patient at a time. It should take 3 NIfTI files as inputs (file 1: the PET image; file 2: the CT image; file 3: the provided ground-truth segmentation mask; all 3 files have the same dimensions, and the ground-truth mask contains only 2 values: 0 for the background, 1 for the tumor), and it should output to a CSV file the score produced by your model.

Input and output names must be explicit on the command line:

predict.sh [PatientID]PET.nii.gz [PatientID]CT.nii.gz [PatientID]SegMask.nii.gz output_score.csv

where predict.sh is a Bash script taking as its first 3 arguments the input PET image, the input CT image, and the ground-truth mask image, and as its 4th argument the output CSV file.
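As an illustration only, here is a minimal sketch of what such an entry point could look like in Python; predict.sh would simply forward its four arguments to this script. The nibabel loading, the extract_features helper, and the placeholder scoring are all assumptions for the sketch, not the organizers' reference implementation.

```python
import csv
import sys

import nibabel as nib  # assumes nibabel is available inside the Docker image
import numpy as np


def extract_features(pet, ct, mask):
    """Hypothetical feature extraction: mean PET uptake and tumor volume inside the mask."""
    tumor = mask > 0
    return np.array([pet[tumor].mean(), tumor.sum()])


def main():
    pet_path, ct_path, mask_path, out_csv = sys.argv[1:5]

    # Load the three co-registered NIfTI volumes (same dimensions, per the task description).
    pet = nib.load(pet_path).get_fdata()
    ct = nib.load(ct_path).get_fdata()
    mask = nib.load(mask_path).get_fdata()

    features = extract_features(pet, ct, mask)

    # Placeholder "model": a fixed linear combination standing in for a trained predictor.
    score = float(features @ np.array([1.0, 1e-4]))

    # Write the single score expected by the evaluation (exact CSV layout to be
    # confirmed by the organizers).
    with open(out_csv, "w", newline="") as f:
        csv.writer(f).writerow([score])


if __name__ == "__main__":
    main()
```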

You must provide a built image containing your method. Please refer to the Docker documentation to build your image.

During the evaluation, your Docker image will be executed by the organizers on the test dataset, and the output scores will be processed in the same way as in task 2 to compute the C-index, using the following command line:

docker run -v /datalocation:/data -v /outputlocation:/output mydockerimage /bin/bash -c "sh predict.sh /data/[PatientID]PET.nii.gz /data/[PatientID]CT.nii.gz /data/[PatientID]SegMask.nii.gz /output/output_score.csv"

Thanks for the detailed reply. Just curious, would you be able to release the evaluation code for tasks 2 & 3? On GitHub, there is only evaluation code for tumor segmentation (task 1). Thanks.

We will release it in the coming days.
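Until the official evaluation code is out, here is a rough sketch of what a C-index computation could look like, using the concordance_index utility from the lifelines package. The file names, column names, and CSV layout are assumptions based on the submission format discussed above, not the organizers' actual script.

```python
import pandas as pd
from lifelines.utils import concordance_index  # pip install lifelines

# Assumed layouts: the submission has (PatientID, prediction) columns and the
# ground truth has (PatientID, PFS_days, Progression) columns; these names are guesses.
pred = pd.read_csv("task2_submission.csv", names=["PatientID", "prediction"])
truth = pd.read_csv("ground_truth.csv")

merged = truth.merge(pred, on="PatientID")

# concordance_index expects predictions concordant with survival time, so a risk
# score (anti-concordant) is negated first; predicted PFS days would be used as-is.
c_index = concordance_index(
    merged["PFS_days"],
    -merged["prediction"],
    event_observed=merged["Progression"],
)
print(f"C-index: {c_index:.3f}")
```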

For info, we will release (today) a Dockerfile that you can use to encapsulate your method with minimal effort.

You can download an archive containing the Docker example here:

Corrected docker