regionprops() is giving an error in MATLAB

I'm using regionprops() but I'm getting an error with the following code:
J = imread('E:\Canopy New Exp\Data Set\input.jpg');
I = rgb2gray(J);
BW = regionprops(J,'basic');
stats = regionprops('table',BW,'Centroid',...
'MajorAxisLength','MinorAxisLength');
centers = stats.Centroid;
diameters = mean([stats.MajorAxisLength stats.MinorAxisLength],2);
radii = diameters/2;
hold on;
viscircles(centers,radii);
hold off;
But I'm getting the following error:
Error using regionprops
Expected input number 1, L, to be one of these types:
double, single, uint8, uint16, uint32, uint64, int8, int16, int32, int64
Instead its type was char.
Error in regionprops (line 142)
validateattributes(L, {'numeric'}, ...
Error in Untitled (line 8)
stats = regionprops('table',BW,'Centroid',...
Any suggestions?
Thanks in advance!

You are calling regionprops twice, and the second time with 'table' as the first argument. regionprops expects an image (binary, connected components, or labelled) as its first input, so that is why you are getting the error about type char.
Instead, feed a binary (black and white) image into a single call of regionprops, and that should do it:
thresh = graythresh(I); % get a threshold (you could just pick one)
I_BW = im2bw(I,thresh); % make the image binary with the given threshold
stats = regionprops(I_BW,'basic'); % do regionprops on the thresholded image
You can also call regionprops with two image arguments, one defining the regions to measure in the other, so instead of the regionprops call above you could possibly try:
stats = regionprops(I_BW, J, 'basic');
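Putting the pieces together, a corrected version of the whole script might look like the sketch below (not from the original answer); it avoids the 'table' syntax, since the error suggests this MATLAB version does not accept it:
J = imread('E:\Canopy New Exp\Data Set\input.jpg');
I = rgb2gray(J);
BW = im2bw(I, graythresh(I)); % binarise instead of calling regionprops on J
stats = regionprops(BW, 'Centroid', 'MajorAxisLength', 'MinorAxisLength');
centers = cat(1, stats.Centroid); % one row per region
diameters = mean([[stats.MajorAxisLength]' [stats.MinorAxisLength]'], 2);
radii = diameters/2;
imshow(BW);
hold on;
viscircles(centers, radii);
hold off;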

regionprops outputs a struct array. In the third line of the code sample above you call it on J, an image, which is fine and returns a struct BW. But then in the following line you call it again on that struct, and that is where the error comes from. Calling it twice on successive outputs isn't meaningful; more likely that wasn't your intention, and you meant to binarise the image first with im2bw.
When you read the error messages output by MATLAB, be aware that the bottom line is the line in your code where the error occurred. If you are supplying the wrong kind of input to one of MATLAB's built-in functions (by far the most common kind of error in my own experience), the error won't manifest until you've gone deeper into MATLAB's internal functions.
So reading the error report from the bottom upwards, you go deeper into the call stack until the top line, which is the 'actual' error. That top line gives you the cause of the conflict, which is half the story; you can then take that half back to the line of your code to see why it happened and how to fix it.

You're probably feeding it an RGB array (NxMx3). According to the documentation, regionprops takes an NxM binary array.


MATLAB Canny edge detection: Maximum recursion limit of 500 reached

I'm getting this error:
Maximum recursion limit of 500 reached
when I try to execute the function edge(img,'canny').
Fun fact: the function is called within a script which worked until now (and now it doesn't).
I tried to increase the recursion limit (set(0,'RecursionLimit',value)), but with values too low the same error appears, and with values too high the system crashes.
What can I do?
--Update--
I tried to execute the edge() function without specifying 'canny'... This way it works, but I absolutely need the Canny edge method!!
--Update--
It also works with the 'sobel' method. Could the problem be on 'canny'?
--Update--
Solved! The problem was a function I created this morning named "gradient", which shadowed the built-in function "gradient" called by the Canny edge detector.
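A quick way to diagnose this kind of name shadowing (a general MATLAB tip, not part of the original post) is to ask which file actually gets called:
% list every function named "gradient" on the path; the first entry is the
% one that gets called, so a user file at the top means the built-in is shadowed
which gradient -all
% fix: rename your file or remove its folder from the path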
You didn't scale your image to the intensity range that edge() expects. The values stored in your 512-by-512 matrix are on the scale 0 to 255, but a matrix of type double is treated as an intensity image with range 0 to 1. To reduce it to that range, simply divide by 255.
% Load data file
load('lenna512.mat')
% Scale to proper intensity range for the type double (0 to 1)
lenna512_bw = lenna512/255;
% Preview figure if it went alright
figure(1)
imshow(lenna512_bw);
% Detect the edges
edges_result = edge(lenna512_bw,'canny');
% Show result
figure(2)
imshow(edges_result)
Note: intensity images of type double have the range 0 to 1, while intensity images of type uint8 have the range 0 to 255 (and uint16 the range 0 to 65535). So instead of using:
lenna512_bw = lenna512/255;
You could also use:
lenna512_bw = uint8(lenna512);
or
lenna512_bw = uint16(lenna512);
This converts the matrix to type uint8 or uint16, classes for which the 0-to-255 values already stored in your matrix are valid (though note that uint16 spans 0 to 65535, so uint8 is the more natural fit here).
See the MATLAB documentation on image types and on numeric types for more.
Good luck!

Train Cascade Object Detector bounding box out of bound?

I am trying to train a cascade object detector using the built-in function in MATLAB (vision toolbox). However, the following message came up after running the command:
Error using trainCascadeObjectDetector (line 245)
Error reading instance 1 from image 2, bounding box possibly out of image bounds.
I don't understand why the bounding box can be out of bounds. All the parameters for my positive images are set up correctly (starting point x, y, width, and height: I used createMask(h) to create a mask, took the minimum x and y coordinates as the starting point, and max minus min in each dimension as the width and height), and the negative images (as far as I know) are just images that need no setup.
Has anyone ever run into the same problem? How did you solve it?
EDIT:
Here's the code. I don't have the toolbox for training the "data" struct, so I wrote one myself
positive_samples=struct;
list=dir('my_folder_name_which_I_took_out');
L=length(list)-3; % set L to the number of image files in the list
for i=1:length(list)
    positive_samples(i).imageFilename=list(i).name;
end
positive_samples(:,1)=[]; % the first 3 dir entries do not contain file names
positive_samples(:,1)=[];
positive_samples(:,1)=[];
for j=1:1
    imshow(positive_samples(j).imageFilename);
    title(positive_samples(j).imageFilename);
    h=imrect;
    h1=createMask(h);
    I=imread(positive_samples(j).imageFilename);
    [le, wi, hi]=size(I);
    tempmat=[];
    count=1;
    for l=1:le
        for m=1:wi
            if h1(l,m)==1
                tempmat(count,1)=l;
                tempmat(count,2)=m;
                count=count+1;
            end
        end
    end
    positive_samples(j).objectBoundingBoxes(1,1)=min(tempmat(:,1));
    positive_samples(j).objectBoundingBoxes(1,2)=min(tempmat(:,2));
    positive_samples(j).objectBoundingBoxes(1,3)=max(tempmat(:,2))-min(tempmat(:,2));
    positive_samples(j).objectBoundingBoxes(1,4)=max(tempmat(:,1))-min(tempmat(:,1));
    imtool close all
end
trainCascadeObjectDetector('animalfinder.xml', positive_samples, 'my_negative_folder_name', 'FalseAlarmRate', 0.2, 'NumCascadeStages', 3);
sorry if it's messy......
I did not run the code, because I don't own the toolbox, but the following lines are very "suspicious":
positive_samples(j).objectBoundingBoxes(1,1)=min(tempmat(:,1));
positive_samples(j).objectBoundingBoxes(1,2)=min(tempmat(:,2));
positive_samples(j).objectBoundingBoxes(1,3)=max(tempmat(:,2))-min(tempmat(:,2));
positive_samples(j).objectBoundingBoxes(1,4)=max(tempmat(:,1))-min(tempmat(:,1));
I would expect:
positive_samples(j).objectBoundingBoxes(1,1)=min(tempmat(:,2));
positive_samples(j).objectBoundingBoxes(1,2)=min(tempmat(:,1));
positive_samples(j).objectBoundingBoxes(1,3)=max(tempmat(:,2))-min(tempmat(:,2));
positive_samples(j).objectBoundingBoxes(1,4)=max(tempmat(:,1))-min(tempmat(:,1));
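For reference, objectBoundingBoxes uses the [x y width height] convention, where x indexes columns, while matrix subscripts come back as (row, column). A minimal sketch of computing the box straight from the mask and checking it against the image bounds before training (hypothetical helper code, not from the original answer):
[y, x] = find(h1); % rows are y, columns are x
bbox = [min(x), min(y), max(x)-min(x), max(y)-min(y)]; % [x y width height]
% catch out-of-bounds boxes here rather than inside the trainer
assert(bbox(1) >= 1 && bbox(2) >= 1 && ...
    bbox(1)+bbox(3) <= size(h1,2) && bbox(2)+bbox(4) <= size(h1,1), ...
    'bounding box out of image bounds');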
Some suggestions to shorten your code; they are not related to the problem:
You can shorten lines 4 to 9 to a single line, avoiding the loop: [positive_samples(1:L).imageFilename]=list(4:end).name
And this loop can be replaced as well:
tempmat=[];
count=1;
for l=1:le
    for m=1:wi
        if h1(l,m)==1
            tempmat(count,1)=l;
            tempmat(count,2)=m;
            count=count+1;
        end
    end
end
shorter and faster code:
[y,x]=find(h1);
tempmat=[y x];
There is a better way to label your positive samples. The Computer Vision System Toolbox now includes the Training Image Labeler app (as of release 2014a). If you do not have R2014a you should try the Cascade Training GUI app.

Column to block using sliding window in MATLAB

Using im2col with a sliding window in MATLAB, I have converted the input image blocks into columns, and by using col2im I do the inverse process, but the output is not the same as the input image. How can I recover the input image? Can anyone please help me?
Here is the code
in=imread('tire.tif');
[mm nn]=size(in);
m=8;n=8;
figure,imshow(in);
i1=im2col(in,[8 8],'sliding');
i2 = reshape( sum(i1),mm-m+1,nn-n+1);
out=col2im(i2,[m n],[mm nn],'sliding');
figure,imshow(out,[]);
thanks in advance...
You didn't specify exactly what the problem is, but I see a few potential sources:
You shouldn't expect the output to be exactly the same as the input, since you're replacing each pixel value with the sum of pixels in an 8-by-8 neighborhood. Also, you will get a shrinkage of the resulting image by 7 pixels in each direction (i.e. [m-1 n-1]) since the 'sliding' option of IM2COL does not pad the array with zeroes to create neighborhoods for pixels near the edges.
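To see the size bookkeeping concretely, here is a quick check with made-up dimensions (not from the original answer):
% For a 10-by-12 input and an 8-by-8 window, 'sliding' produces one column
% per neighborhood: 64 rows and (10-8+1)*(12-8+1) = 15 columns
A = rand(10,12);
C = im2col(A,[8 8],'sliding');
size(C) % returns [64 15]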
These two lines are redundant:
i2 = reshape( sum(i1),mm-m+1,nn-n+1);
out=col2im(i2,[m n],[mm nn],'sliding');
You only need one or the other, not both:
%# Use this:
out = reshape(sum(i1),mm-m+1,nn-n+1);
%# OR this:
out = col2im(sum(i1),[m n],[mm nn],'sliding');
Image data in MATLAB is typically of type 'uint8', meaning each pixel is represented as an unsigned 8-bit integer spanning the range 0 to 255. Assuming this is what in is, when you perform your sum operation you will implicitly end up converting it to type 'double' (since an unsigned 8-bit integer will likely not be big enough to hold the sum totals). When image pixel values are represented with a double type, the pixel values are expected to span the range 0 to 1, so you will want to scale your resulting image by its maximum value to get it to display properly:
out = out./max(out(:));
Lastly, check what kind of input image you are using. For your code, you are essentially assuming in is 2-D (i.e. a grayscale intensity image). If it is a truecolor (i.e. RGB) image, the third dimension is going to cause you some trouble, and you will have to either process each color plane separately and recombine them or convert the RGB image to grayscale. If it is an indexed image (with an associated color map), you will not be able to do the sort of processing you describe above without first converting it to a grayscale representation.
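A hedged sketch of that input guard, using the Image Processing Toolbox functions rgb2gray and ind2gray (an addition, not part of the original answer):
[in, map] = imread('tire.tif'); % tire.tif happens to be grayscale; shown for the general pattern
if ~isempty(map)
    in = ind2gray(in, map); % indexed image: convert via its color map
elseif ndims(in) == 3
    in = rgb2gray(in); % truecolor image: collapse to a single intensity plane
end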
Why are you expecting the output to be the same?
i2 is the result of performing a SUM over each pixel neighborhood (essentially a low-pass filter), which is the final blurry image that you see, i.e. you are NOT doing an inverse process with the COL2IM call.
The i1 obtained with the 'sliding' option also contains the information you would get from the 'distinct' option; you just need to pick out the right columns. Now, this may not be the best way to code it up, but it works. Assume that mm is a multiple of m and nn is a multiple of n; if this is not the case, you'll have to zero-pad accordingly to make it so (see the padding sketch after the code).
in=imread('tire.tif');
[mm nn]=size(in);
m=8;n=8;
i1 = im2col(in,[m,n],'sliding');
inSel = [];
for k=0:mm/m-1
    inSel = [inSel 1:n:nn+(nn-n+1)*n*k];
end
out = col2im(i1(:,inSel),[m,n],[mm,nn],'distinct');
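For the zero-padding mentioned above, one possible sketch using the Image Processing Toolbox's padarray (an assumption, not part of the original answer):
in = imread('tire.tif');
[mm, nn] = size(in);
m = 8; n = 8;
% pad on the bottom/right so the dimensions become multiples of m and n
in = padarray(in, [mod(m - mod(mm,m), m), mod(n - mod(nn,n), n)], 0, 'post');
[mm, nn] = size(in); % updated sizes, now multiples of m and n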

BMP2AVI program in MATLAB

Hi,
I wrote a program that used to work (swear to god) and has stopped working. This code takes a series of BMPs and converts them into an AVI file. This is the code:
path4avi='C:/FadeOutMask/'; % don't forget the '/' at the end of the path
pathOfFrames='C:/FadeOutMask/';
NumberOfFiles=1;
NumberOfFrames=10;
%1:1:(NumberOfFiles)
for i=0:1:(NumberOfFiles-1)
    FileName=strcat(path4avi,'FadeMaskAsael',int2str(i),'.avi') % the generated file
    aviobj = avifile(FileName,'compression','None');
    aviobj.fps=10;
    for j=0:1:(NumberOfFrames-1)
        Frame=strcat(pathOfFrames,'MaskFade',int2str(i*10+j),'.bmp') % not a good name for the directory
        [Fa,map]=imread(Frame);
        imshow(Fa,map);
        F=getframe();
        aviobj=addframe(aviobj,F)
    end
    aviobj=close(aviobj);
end
And this is the error I get:
??? Error using ==> checkDisplayRange at 22
HIGH must be greater than LOW.
Error in ==> imageDisplayValidateParams at 57
common_args.DisplayRange = checkDisplayRange(common_args.DisplayRange,mfilename);
Error in ==> imageDisplayParseInputs at 79
common_args = imageDisplayValidateParams(common_args);
Error in ==> imshow at 199
[common_args,specific_args] = ...
Error in ==> ConverterDosenWorkd at 19
imshow(Fa,map);
For some reason I can't put it as code segments, sorry.
thank you
Ariel
Are the BMP images indexed? I think the map parameter only applies to images with indexed color maps.
The only way I am able to reproduce the error that you get is when map is a two-element vector where the first element is greater than the second. Note first that the function IMSHOW can be called with the following syntax:
imshow(I,[low high]);
In which I is a grayscale image and low and high specify the display range for the pixel intensities. The extra argument is ignored when I is an RGB image, but even then the value of high must be greater than the value of low or an error is thrown (the one you see above).
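You can reproduce that check in isolation (a small demonstration, not from the original answer):
% a display range where low >= high triggers the same validation error
imshow(rand(10), [0.9 0.1]) % errors: HIGH must be greater than LOW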
What's confusing is why map would be a two-element vector. When loading an image with IMREAD, the map output will either be empty (if the image is not an indexed image) or it will be an N-by-3 color map. I can't think of a situation where the built-in IMREAD would return a map argument with just 2 elements.
Based on the fact that you said it was working, and now suddenly isn't, I would suggest first checking to see if you have inadvertently created an m-file somewhere with the name imread. Doing so could cause that new imread function to be called instead of the built-in one, giving you different outputs than you expect.
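Two quick diagnostics along those lines (general MATLAB checks, not part of the original answer):
% see whether a user-defined imread shadows the built-in one
which imread -all
% inspect what imread actually returns for one frame
[Fa, map] = imread('C:/FadeOutMask/MaskFade0.bmp');
size(map) % an indexed BMP should give an N-by-3 color map, not 2 elements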

Problem with array type "DAMPAR" in MATLAB deconvlucy.m

Below is part of the code that I tried to adapt from MATLAB's deconvlucy.
It appears to have a problem with DAMPAR, where the class type does not match.
Can anyone help, or does anyone know a better way to call in an image that I (as in deconvlucy.m) would tolerate?
[Perhaps I should convert the image into an array before use? How do I do so?]
% -- code -- %
I = imread('C:\Users\Lem\Desktop\III\TIFF\69_M.000.tif', 'tif');
class(I)
PSF = fspecial('gaussian',7,10);
V = .0001;
BlurredNoisy = imnoise(imfilter(I,PSF),'gaussian',0,V);
WT = zeros(size(I));
WT(5:end-4,5:end-4) = 1;
J1 = deconvlucy(BlurredNoisy,PSF);
J2 = deconvlucy(BlurredNoisy,PSF,20,sqrt(V));
J3 = deconvlucy(BlurredNoisy,PSF,20,sqrt(V),WT);
% ......... %
??? Error using ==> deconvlucy>parse_inputs at 316
In function deconvlucy, DAMPAR has to be of the same class as the input image.
Error in ==> deconvlucy at 102
[J,PSF,NUMIT,DAMPAR,READOUT,WEIGHT,SUBSMPL,sizeI,classI,numNSdim]=...
You have read in an image using imread. So it is probably coming in as uint8? The help for imread says the result will be integer of some order for a tiff image. What class was I when it was returned?
You then filtered the image. It appears that imfilter will return an integer image for an integer input image.
Next, you add noise, using imnoise. From the online help for imnoise, it internally converts the image to a [0,1] (double) number, adds the Gaussian noise, then converts back to integer output. So your blurred image should still be integer, probably uint8 elements.
The help for fspecial says it will return a double precision output for PSF.
You called deconvlucy with only two arguments, so it is using the default value for DAMPAR. (I'll argue that this should not fail here. The author of deconvlucy appears not to have supplied a default value that is consistent in type with the inputs.)
Not knowing enough about the IPT or deconvlucy, I might first suggest re-running this code using two different calls:
J1 = deconvlucy(BlurredNoisy,PSF,[],0);
J1 = deconvlucy(BlurredNoisy,PSF,[],uint8(0));
If neither of these calls fixes the problem, it suggests that deconvlucy expects a double input for the image, BlurredNoisy. The online help for deconvlucy is not specific here: it says only that I may be an N-dimensional array or a cell array, and further on it calls the result a numeric array. So I believe the image for deconvlucy is expected to be a floating-point image. (By my standards, this is a flaw in the help.)
I would then probably try scaling your image to [0,1] as a double. It is just a guess however. So something like:
BlurredNoisy = double(BlurredNoisy)/255;
This assumes your image was uint8 in class originally.
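Putting that together, a hedged version of the fix (a sketch; the division by 255 assumes the TIFF comes in as uint8):
I = imread('C:\Users\Lem\Desktop\III\TIFF\69_M.000.tif', 'tif');
I = double(I)/255; % scale the uint8 image to [0,1] doubles
PSF = fspecial('gaussian',7,10);
V = .0001;
BlurredNoisy = imnoise(imfilter(I,PSF),'gaussian',0,V);
J2 = deconvlucy(BlurredNoisy,PSF,20,sqrt(V)); % DAMPAR is now double, same class as the image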