We have a few questions:
1.) What is the size of the images that will be used when testing the model state? If testing images are available in different sizes, will they be scaled up/down to some default size?
2.) Is it possible to provide a script (e.g. python) which does some preprocessing during the test phase as well? For example, is it allowed to pre-process testing images to feed the model with several sub-windows of the full resolution image?
3.) What should be part of the ModelState to be exported after training? Assuming we use Caffe, is only the *.caffemodel required? What about the prototxt-files and custom layers we may have implemented for Caffe?
Thanks
Created by Matthias Stumpp mstumpp

> What is the size of the images that will be used when testing the model state?
The sizes of the training images are representative of the sizes you can expect for the scoring images. The most common .dcm image size in the data set is about 27 MB.
> If testing images are available in different sizes, will they be scaled up/down to some default size?
We do not plan to modify the files before letting your model access them.
> is it allowed to pre-process testing images to feed the model with several sub-windows of the full resolution image?
Yes, you are free to modify testing images before 'feeding' them to your model.
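As a minimal sketch of that kind of test-time preprocessing, the function below slides a square window over an image and collects the sub-windows. It assumes the DICOM pixel data has already been loaded into a NumPy array (e.g. via pydicom's `pixel_array`); the window size and stride are hypothetical choices, not values prescribed by the challenge:

```python
import numpy as np

def extract_windows(image, window=512, stride=256):
    """Slide a square window over a 2-D image array and return the
    list of sub-windows (partial windows at the edges are skipped)."""
    windows = []
    h, w = image.shape
    for top in range(0, h - window + 1, stride):
        for left in range(0, w - window + 1, stride):
            windows.append(image[top:top + window, left:left + window])
    return windows

# Example: a 1024x1024 image with window=512, stride=256 gives a
# 3x3 grid of sub-windows, each 512x512.
img = np.zeros((1024, 1024), dtype=np.uint16)
subs = extract_windows(img)
```

Each sub-window can then be fed to the model independently, with the per-window scores aggregated however your pipeline requires.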
> What should be part of the ModelState to be exported after training?
That depends on your modeling framework.
> Assuming we use Caffe ...
You are welcome to use the Caffe-based examples provided on our site as a reference. Each Docker repository includes a link to the source code on GitHub: https://www.synapse.org/#!Synapse:syn4224222/docker/
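As a rough illustration of what a Caffe-based model state typically bundles, a quick pre-submission check could verify that the trained weights, the deploy prototxt, and any custom layer code are all present before building the image. The file names below are hypothetical placeholders, not names required by the challenge:

```python
import os

# Hypothetical artifact names -- substitute your own file names.
REQUIRED = [
    "model.caffemodel",   # trained weights
    "deploy.prototxt",    # network definition used at test time
    "custom_layers.py",   # any custom Python layers the net references
]

def missing_artifacts(model_dir, required=REQUIRED):
    """Return the required files that are not found in model_dir."""
    return [name for name in required
            if not os.path.isfile(os.path.join(model_dir, name))]
```

Running such a check before pushing the Docker image is a cheap way to catch an export that included the weights but forgot the prototxt or custom layer sources.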