I'm running a 3-class segmentation problem; my label images contain three values, "0 (many voxels), 1 (some), 2 (some)". But I realized that when NiftyNet reads the images, it only sees the two values 0/1. With label_normalisation on, the ref.txt file says "label from 0,1 to 0,1"; if I turn it off, the prediction likewise contains only the two values 0/1.
Any idea what has gone wrong? Is it a label image format problem?
Many thanks!
Can I ask what num_classes is set to in the config file you are using? When NiftyNet generates the label normalisation, it goes through all the files that it finds. How are you determining the list of files? If it's by searching a path, are you sure that path contains all the relevant files?
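For reference, a minimal sketch of the parts of a NiftyNet config that matter here (the path and the filename filter are illustrative assumptions; check them against your own .ini file). If num_classes is left at 2, or path_to_search only matches a subset of your label volumes, the computed normalisation can collapse the label set to 0/1:

```ini
[label]
# hypothetical path; make sure it matches ALL label volumes
path_to_search = ./data/labels
filename_contains = _seg
# nearest-neighbour interpolation so label values 1 and 2 are not blended
interp_order = 0

[SEGMENTATION]
num_classes = 3
label_normalisation = True
```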
I am generating a MATLAB figure with the following features:
Separate background regions generated using the patch command
A couple of arrows using arrow.m (https://www.mathworks.com/matlabcentral/fileexchange/278-arrow)
Annotations, labels, etc.
I have tried MATLAB's built-in export to .eps format, as well as print2eps.m from https://www.mathworks.com/matlabcentral/fileexchange/23629-export_fig. In both cases I get the following warning:
Warning: Loading EPS file failed, so unable to perform post-processing. This is usually because the figure contains a large number of patch objects. Consider exporting to a bitmap format in this case.
Below is a screenshot of the two images: the MATLAB figure on the left, the LaTeX output on the right.
As one can observe, the output file is nowhere near the original MATLAB figure. Any thoughts on addressing this issue would be much appreciated.
I have some questions about making tiff/box files for tesseract 4.
The TrainingTesseract 4.00 document says:
Making Box Files
As with base Tesseract, there is a choice between rendering synthetic training data from fonts, or labeling some pre-existing images (like ancient manuscripts for example).
But it does not explain how to train with pre-existing images.
I want to train Tesseract 4 (LSTM) for the Persian language. I have some images of ancient manuscripts and want to train on those images and their texts instead of a font, so I can't use the text2image command. I know that the old-format box files will not work for LSTM training.
How can I make tif/box files for Tesseract 4 LSTM, label them, and
adjust the tesseract commands accordingly?
Should I use other tools for generating the box files (given that Persian
is written right to left)?
Should I fine-tune or train from scratch?
I was struggling just like you, until I found this GitHub repository:
https://github.com/OCR-D/ocrd-train
It will make your life much easier. All you need to do is put your images there in .tif format; each transcription must have the same name as its image, with the extension .gt.txt. The repository takes care of all the rest for you. (You may need to update the Makefile for your local machine.)
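As a sketch of the pairing convention (the file names here are hypothetical; the .tif plus same-stem .gt.txt rule is what the repository expects):

```shell
# One line image per .tif, plus a transcription with the same stem
# and the extension .gt.txt (file names below are placeholders).
mkdir -p data
touch data/line_0001.tif                                  # the line image (empty placeholder here)
printf 'ground-truth transcription\n' > data/line_0001.gt.txt
ls data

# Then, inside the cloned repository, something like:
#   make training MODEL_NAME=fas
# (the exact make target and variables are defined in the repository's Makefile)
```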
Whether to train from scratch or fine-tune depends on your language, your data, and the problem you are trying to solve. For me, fine-tuning is what I needed, because I am happy with the current performance but need to build on it.
All the useful details you might need can be found in this answer
1) Use the command below to make an lstmbox file:
tesseract test.tif test-lstmbox -l eng --psm 6 lstmbox
It will make an lstmbox for you, but you will have to correct the characters in the box file by hand.
2) Training from scratch requires a lot of data, so I suggest that fine-tuning is the better option.
Kindly consider the published output in the attached image. Is it possible to arrange the figures in a fixed order, for example one below another? I searched, but couldn't resolve it.
Thanks
You could/should use "snapnow" instead of creating figures to get pictures in a well-defined order.
New to MATLAB.
When I try to load my own data using the NN pattern recognition app window, I can load the source data, but not the target (it never appears in the drop-down list). Both source and target are in the same directory. The source is 5000 observations with 400 variables per observation, and the target can take on 10 different values (recognizing digits). Any ideas?
Before you do anything with your own data, you might want to try out the example data sets available in the toolbox. Since those definitely work, many problems become easier to find later on: you can see what's wrong with your code rather than your data.
Regarding your actual question: without more details, e.g. what your matrices contain and what their dimensions are, it's hard to help you. Some of the problems mentioned here might be similar to yours:
http://www.mathworks.com/matlabcentral/answers/17531-problem-with-targets-in-nprtool
From what I understand about nprtool, your targets have to be a matrix with exactly one 1 per observation (marking the correct class) in either the rows or the columns (depending on the orientation of the input matrix), so make sure that's the case.
Does anyone use the MATLAB wrapper for the Caffe framework? Is there a way to extract a 4096-dimensional feature vector from an image?
I was already following
https://github.com/BVLC/caffe/issues/432
and also tried removing the last lines in imagenet_deploy.prototxt to strip off layers, as suggested in another thread on GitHub.
But still, when I run "matcaffe_demo(im, 1)", I only get a 1000-dimensional vector of scores (for the ImageNet classes).
Any help would be appreciated
Kind regards
It seems that you might not be calling the correct prototxt file: if the last layer defined in the prototxt has a top blob of 4096 dimensions, there is no way for the output to be 1000-dimensional.
To be sure, try introducing a deliberate error into the prototxt file and see whether the program crashes. If it doesn't, the program is indeed reading some other prototxt file.
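As a sketch: in the reference ImageNet deploy prototxt, the 4096-dimensional blob is produced by the fc7 layer, so the file should end after a layer like the one below (with fc8 and prob removed). Layer names here are the ones used by the standard reference model; check them against your own prototxt, and note that older matcaffe-era files use the V1 "layers"/enum syntax shown:

```
layers {
  name: "fc7"
  type: INNER_PRODUCT
  bottom: "fc6"
  top: "fc7"
  inner_product_param {
    num_output: 4096
  }
}
```

If this is truly the last layer in the file the wrapper loads, the network's output blob will be 4096-dimensional.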