I ran the base model (https://github.com/nboley/DREAM_invivo_tf_binding_prediction_challenge_baseline) on EGR1 and got some fairly low numbers:

GM12878: Area under precision-recall curve = 0.19935381984393966, Area under ROC = 0.8907711963368433
H1-hESC: Area under precision-recall curve = 0.14948195787885138, Area under ROC = 0.9095074803802505
HCT116: Area under precision-recall curve = 0.33473546384364555, Area under ROC = 0.9149479473368525
MCF-7: Area under precision-recall curve = 0.10089654674265679, Area under ROC = 0.9024965542452085

These results don't look correct to me, and I was wondering if anyone else was successful in running the base model and scoring it.
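For reference, the two metrics reported above can be computed without external dependencies. This is a minimal sketch of both, not the challenge's official scorer, which may differ in tie handling and curve interpolation:

```python
def auroc(labels, scores):
    # Rank-based (Mann-Whitney U) area under the ROC curve.
    # Ties receive the average rank of their group.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    pos_ranks = [ranks[i] for i, y in enumerate(labels) if y == 1]
    n_pos = len(pos_ranks)
    n_neg = len(labels) - n_pos
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def average_precision(labels, scores):
    # Average precision, a common estimate of the area under
    # the precision-recall curve: mean of precision at each positive.
    pairs = sorted(zip(scores, labels), reverse=True)
    tp, ap = 0, 0.0
    n_pos = sum(labels)
    for rank, (_, y) in enumerate(pairs, start=1):
        if y == 1:
            tp += 1
            ap += tp / rank
    return ap / n_pos
```

Feeding it a perfectly ranked toy example (positives scored above all negatives) should give 1.0 for both metrics, which is a quick sanity check before trusting the numbers on real predictions.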

Created by devin.petersohn
Note that this is a weak baseline, so it is not supposed to perform especially well. It is meant to be a reference point that stronger methods should be able to beat quite easily.
After looking at the leaderboard, it seems these numbers are in line with the expected results.
