Test issue in semantic segmentation using DL4J

I have trained a U-Net based model in DL4J on medical images. The code linked below is used for testing the model. My objective is to test the model on 34 2D medical images separately and get the segmentation metrics (accuracy, recall, …) for each image, so I tried two approaches:
1- when I put all 34 images in the test folder ("testI") and run the test code, which loads all the images and evaluates them one by one, I get good segmentation performance for all of the images (even without the reset() method),
2- but when I put just one of the 34 images in the test folder and run the same code, I get bad results in terms of accuracy, recall, etc. In other words, I get different segmentation results for the same images depending on which approach I use.
This is my label generator code: https://gist.github.com/AbdelmajidB/fccb871836746dd4f0d6e32828eadcdf The following code is used to test the model image by image: https://gist.github.com/AbdelmajidB/92f5fc3013c81b28c898b64118b6adb5
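One way to sanity-check a discrepancy like this, independently of DL4J's evaluation and iterator machinery, is to compute the per-image metrics directly from the predicted and ground-truth masks. A minimal sketch in Python (the flat 0/1 pixel encoding is an assumption for illustration, not taken from the gists):

```python
def mask_metrics(pred, truth):
    """Pixel-wise accuracy and recall for one binary segmentation mask.

    pred and truth are flat lists of 0/1 pixel labels (an assumed
    encoding; adapt to however the masks are actually stored).
    """
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    accuracy = (tp + tn) / len(truth)
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, recall

# Toy example: 3 of 4 pixels correct, 2 of 3 foreground pixels recovered.
acc, rec = mask_metrics([1, 0, 1, 0], [1, 1, 1, 0])
```

If metrics computed this way also differ between the one-image and 34-image runs, the model inputs themselves differ between the two setups (for example, a normalizer fitted on the whole test set); if they agree, the discrepancy lies in how the evaluation is being accumulated.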

Related

Am I using too many training data in GEE?

I am running a classification script in GEE and I have about 2100 training points, since my AOI is a region in Italy and I have many classes. I get the following error when I try to save my script:
Script error File too large (larger than 512KB).
I tried deleting some of the training data and then it saves. I thought there was no limit in GEE on the number of training points. How can I find out what the limit is so I can adjust my training points, or is there a way to save the script without deleting any points?
Here is the link to my code
The Earth Engine Code Editor “drawing tools” are a convenient, but not very scalable, way to create geometry. The error you're getting is because “under the covers” they actually create additional code that is part of your script file. Not only is this fairly verbose (hence the error you received), it's not very efficient to run, either.
In order to use large training data sets, you will need to create your point data in another tool and upload it (using CSV or SHP files) to become one or more Earth Engine “table” assets, and use those from your script.

How to evaluate multiple test runs using one ROC curve

I ran out of memory (11 GB VRAM) while testing my CNN on 10 test images. I'm using the U-Net architecture with 20 training images (1600x1200x1 each), 48x48 patches (190,000 sub-images) and a batch size of 32 (recommended).
So right now I'm testing my network in 5 runs of 2 images each. After that I want to evaluate my network using one ROC curve.
So here are my questions: can I evaluate my network if I split the testing? If yes, how can I manage it?
If not, what do I have to change in my config so that the memory doesn't run out?
By the way, I'm a beginner with neural networks, and I'm sorry for my bad English!
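Splitting the testing is compatible with a single ROC curve as long as the scores and ground-truth labels from every partial run are pooled before the curve is computed: each ROC point is just the TPR/FPR over the whole pooled set at one threshold, so it doesn't matter in how many passes the scores were produced. A minimal pure-Python sketch (the toy scores and the two-run split are made up for illustration):

```python
def roc_points(scores, labels):
    """(FPR, TPR) points swept over all thresholds, highest score first."""
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)             # number of positive ground-truth labels
    neg = len(labels) - pos       # number of negatives
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

# Scores/labels collected from two separate test runs (toy values);
# in practice these would come from each 2-image evaluation pass.
run1 = ([0.9, 0.4, 0.7], [1, 0, 1])
run2 = ([0.2, 0.8], [0, 1])

pooled_scores = run1[0] + run2[0]
pooled_labels = run1[1] + run2[1]
curve = roc_points(pooled_scores, pooled_labels)  # one curve over all runs
```

Note that averaging five separately computed curves is not equivalent; pool the predictions first, then compute the curve once.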

MobileNet Confusion Matrix

I'm facing an issue while following Mark Daoust's GitHub repository: https://github.com/googlecodelabs/tensorflow-for-poets-2
What I've done is create a MobileNet model which classifies different classes, producing retrained_graph.pb and retrained_labels.txt.
The problem is that I'm not able to create a confusion matrix for all the classes. I've tried to use the script evaluate.py, but it's no use, and even tf.confusion_matrix doesn't help because I don't know which images are in the test set (I only know how many images are picked from each class folder).
I hope there is a function or something that, given retrained_graph.pb, can give me the confusion matrix; otherwise I would have to do it all manually. The retrain script can print all the misclassified images from the test set, but processing that by hand would be a huge amount of work.
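Since the retrain script's internal train/validation/test split is what hides the image list, one workaround is to hold out a labeled test directory yourself, classify each image with the retrained graph (for example via the label_image script from the same repository), record (true, predicted) pairs, and build the matrix from those. The matrix construction itself needs nothing from TensorFlow; a minimal sketch, where the label lists are placeholders for your recorded results:

```python
from collections import defaultdict

def confusion_matrix(true_labels, pred_labels, classes):
    """Rows = true class, columns = predicted class, entries = counts."""
    counts = defaultdict(int)
    for t, p in zip(true_labels, pred_labels):
        counts[(t, p)] += 1
    return [[counts[(t, p)] for p in classes] for t in classes]

# Placeholder results: true labels vs. what the graph predicted.
classes = ["cat", "dog"]
matrix = confusion_matrix(["cat", "cat", "dog"],
                          ["cat", "dog", "dog"], classes)
# matrix[0] counts the true "cat" images: one classified correctly,
# one misclassified as "dog".
```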

How to test the CNTK object detection example on a custom image?

I am trying to run the CNTK object detection example with the PascalVOC pretrained dataset. I ran all the required scripts in fastrcnn and got the visual output for the test data defined in the dataset. Now I want to test the network on my own image. How can I do that?
For Fast R-CNN you need a library that generates candidate ROIs (regions of interest) for your test images, e.g. Selective Search.
If you want to evaluate a batch of images, you can follow the description in the tutorial to generate the test mapping file and the ROI coordinates (see test.txt and test.rois.txt in the corresponding proc subfolder). If you want to evaluate a single image, you would need to pass the image and the candidate ROI coordinates as inputs to cntk eval, similar to this example:
# compute model output
arguments = {loaded_model.arguments[0]: [hwc_format]}
output = loaded_model.eval(arguments)
For Fast R-CNN you need to first run your custom image through the Selective Search algorithm to generate ROIs (regions of interest) and then feed them to your model with something like this:
output = frcn_eval.eval({image_input: image_file, roi_proposals: roi_proposals})
You can find more details here: https://github.com/Microsoft/CNTK/tree/release/latest/Examples/Image/Detection/FastRCNN
Anyway, Fast R-CNN is not the most efficient approach because of its use of Selective Search (which is the real bottleneck here). If you want to improve performance, you can try Faster R-CNN, as it gets rid of the Selective Search algorithm and replaces it with a Region Proposal Network, which performs much, much better.
If you're interested, you can check my repo on GitHub: https://github.com/karolzak/CNTK-Hotel-pictures-classificator

Error using caffe Invalid input size

I tried to train my own neural net using my own image database as described in
http://caffe.berkeleyvision.org/gathered/examples/imagenet.html
However, when I want to check the neural net after training on some standard images using the MATLAB wrapper, I get the following output / error:
Done with init
Using GPU Mode
Done with set_mode
Elapsed time is 3.215971 seconds.
Error using caffe
Invalid input size
I used the MATLAB wrapper before to extract CNN features based on a pretrained model, and it worked. So I don't think the input size of my images is the problem (they are converted to the correct size internally by the function "prepare_image").
Does anyone have an idea what the error could be?
Found the solution: I was referencing the wrong ".prototxt" file (it's a little bit confusing because the files are quite similar).
So for computing features using the MATLAB wrapper, one needs to reference the following two files in "matcaffe_demo.m":
models/bvlc_reference_caffenet/deploy.prototxt
models/bvlc_reference_caffenet/MyModel_caffenet_train_iter_450000.caffemodel
where "MyModel_caffenet_train_iter_450000.caffemodel" is the only file created during training that is needed here.
In the beginning I was accidentally referencing
models/bvlc_reference_caffenet/MyModel_train_val.prototxt
which is the ".prototxt" file used for training.