imrect doesn't return the correct size - MATLAB

I am displaying an image so the user can select different ROIs, always rectangular. My problem is that the x, y, width and height returned by getPosition look strange: if I use these numbers to extract the region, I get a totally different one, and they are too small to match the ROI I actually selected.
I think the problem is that when I display the image, MATLAB scales it down to 67% because it says the image is too big to show, so imrect is probably returning coordinates in the reduced image. Is there any way to get the real positions without this scaling? I tried dividing the numbers by 0.67, but the result was not right, so maybe MATLAB does not scale height and width by the same factor.

OK, here is the counter-intuitive answer:
MATLAB's x corresponds to the column index (the second dimension) of your image array.
MATLAB's y corresponds to the row index (the first dimension).
The display scaling has nothing to do with it.
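A minimal sketch of extracting the ROI from the position returned by getPosition, remembering that image arrays are indexed as (row, column), i.e. (y, x); hRect and im are placeholder names for the imrect handle and the image:

```matlab
% pos = [x y width height] as returned by getPosition (placeholder handle hRect)
pos = round(getPosition(hRect));
x = pos(1); y = pos(2); w = pos(3); h = pos(4);

% Image arrays are indexed (row, column) = (y, x), so rows come from y
roi = im(y : y + h - 1, x : x + w - 1, :);
```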

Related

Problems in implementing the integral image in MATLAB

I tried to implement the integral image in MATLAB by the following:
im = imread('image.jpg');
ii_im = cumsum(cumsum(double(im)')');
im is the original image and ii_im is the integral image.
The problem is that the values in ii_im overflow the 0 to 255 range.
When using imshow(ii_im), I always get a very bright image, and I am not sure this is the correct result. Am I doing this right?
You're implementing the integral image calculations right, but I don't understand why you would want to visualize it - especially since the sums will go beyond any normal integer range. This is expected as you are performing a summation of intensities bounded by larger and larger rectangular neighbourhoods as you move to the bottom right of the image. It is inevitable that you will get large numbers towards the bottom right. Also, you will obviously get a white image when trying to show this image because most of the values will go beyond 255, which is visualized as white.
If I can add something, one small optimization I have is to get rid of the transposing and use cumsum to specify the dimension you want to work on. Specifically, you can do this:
ii_im = cumsum(cumsum(double(im), 1), 2);
It doesn't matter what direction you specify first (2 then 1, or 1 then 2). The summation of all pixels within each bounded area, as long as you specify all directions to operate on, should be the same.
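As a side note on why the integral image is useful in the first place: the sum of intensities over any rectangular region can then be computed with just four lookups. A minimal sketch (the zero padding simplifies the corner arithmetic; the region coordinates are illustrative):

```matlab
im = double(imread('cameraman.tif'));
ii = cumsum(cumsum(im, 1), 2);
ii = padarray(ii, [1 1], 0, 'pre');   % pad so the r1-1 / c1-1 lookups are safe

% Sum of intensities over rows r1..r2, columns c1..c2 via four corner lookups
r1 = 50; r2 = 100; c1 = 60; c2 = 120;
regionSum = ii(r2+1, c2+1) - ii(r1, c2+1) - ii(r2+1, c1) + ii(r1, c1);

% Sanity check against the direct sum
assert(abs(regionSum - sum(sum(im(r1:r2, c1:c2)))) < 1e-6);
```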
Back to your question for display, if you really, really, really really... I mean really want to, you can normalize the contrast by doing:
imshow(ii_im, []);
However, what you should expect is a gradient image which starts to be dark from the top, then becomes brighter when you get to the bottom right of the image. Remember, each point in the integral image calculates the total summation of pixel intensities bounded by the top left corner of the image to this point, thus forming a rectangle of intensities you need to sum over. Therefore, as we move further down and to the right of the integral image, the total summation should increase.
With the cameraman.tif image, here is the original image, as well as its integral image visualized using the above command:
Either way, there is really no reason why you would want to visualize it. You would use it directly in whatever application requires it (adaptive thresholding, the Viola-Jones detector, etc.).
Another option could be applying a log operation for each value in the integral image. Something like:
imshow(log(1 + ii_im), []);
However, this will make most of the pixels have the same contrast and this is probably not useful. This is what I get with cameraman.tif:
The moral of this story is that you need some sort of contrast normalization so that you can fit all of the values in your integral image within the confines of the data type that is used to display the image on the screen using imshow.

Image Pyramids - Dealing with Arbitrary Dimensions

Looking at the Image Pyramids tutorial I see the following note:
Notice that it is important that the input image can be divided by a factor of two (in both dimensions). Otherwise, an error will be shown.
I was wondering: how can an image pyramid be built for an arbitrary image size while keeping the reconstruction exact (up to round-off errors)?
Taking an image of size 101 x 101, the first "downsample" step using 1:2:101 yields an image of size 51 x 51.
After another iteration a 26 x 26 image is yielded, so how can we handle both odd and even sizes?
I'd be happy for a MATLAB code dealing with the "Upsample" / "Downsample" procedure for any size.
I have seen some techniques where the original image is resized to even dimensions before it is subsampled. This odd/even issue is unfortunately unavoidable, as you have already seen, so you have to do some work yourself before passing the image to an image decomposition routine. That way there is no ambiguity when constructing your image pyramid, and when you're done you can crop out the portions of the original image that you don't want. Another technique is to eliminate the last row and column as needed so that both dimensions are even.
As such, all you'd have to do is extend your image's dimensions to ensure that each dimension is even. In other words, you'd do something like this:
im = imread('...'); %// Place image here
rows = size(im,1);
cols = size(im,2);
imResize = imresize(im, [rows + mod(rows,2), cols + mod(cols,2)], 'bilinear');
This reads in an image and its dimensions (rows and columns), then resizes the image so that both dimensions are even. This is done by checking whether each value is odd using mod: mod(rows,2) (and likewise mod(cols,2)) is 1 when the dimension is odd and 0 when it is even, so adding it pads an odd dimension up to the next even number.
Also, you can simply crop out the last row or column if they're odd too by doing:
im = imread('...'); % // Place image here
rows = size(im,1);
cols = size(im,2);
imResize = im(1:rows-mod(rows,2), 1:cols-mod(cols,2), :);
The mod here is used in the same way: if a dimension is odd, we subtract 1, eliminating the last row or last column as needed.
As CST-Link has already stated, upsampling from an image that had odd dimensions cannot reconstruct the original dimensions exactly: the information needed to recover those odd rows and columns was lost during downsampling.
For downsampling, the better approach would be to delete the last row and column from the image, so a 101×101 image generates a 50×50 one. Though it may be a matter of taste, I think it is better to ignore real information than to introduce bogus information.
For upsampling, I'm afraid what you ask is impossible. Let's say you have a 200×200 pixel image; can you tell from its size that is was downsampled from a 400×400 pixel image, or from 401×401 pixel image? Both would give the same result.
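A minimal sketch of the crop-to-even approach for one pyramid level (downsampleEven is a hypothetical helper name; in a real pyramid you would also low-pass filter before decimating to avoid aliasing):

```matlab
function smaller = downsampleEven(im)
% Crop the last row/column if the dimension is odd, then keep every other pixel.
% Example: a 101x101 input is cropped to 100x100, then decimated to 50x50.
rows = size(im, 1);
cols = size(im, 2);
imEven = im(1 : rows - mod(rows, 2), 1 : cols - mod(cols, 2), :);
smaller = imEven(1:2:end, 1:2:end, :);
end
```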

Can't drag 'imrect' object for high resolution images

I'm making a GUI (with GUIDE) in which there is an axes object used to display image sequences. To let the user select a region of interest in the sequence, I'm using imrect. The problem is the following: everything goes fine when images are smaller than roughly 512x512 pixels; however, for larger images (I tried 600x600 and 1024x1024) the rectangle does appear and I can change its size, but I can't drag it around. I thought it had to do with the axis units, so I changed the property from 'pixels' to 'normalized' and used normalized coordinates, but that did not work.
Here is my code to create the rectangle and restrain its movement to the axis limits:
hROI = imrect(hVideo, [Width/4 Height/4 Width/2 Height/2]); % Arbitrary size and position of the rectangle, centered on the image.
fcn = makeConstrainToRectFcn('imrect',get(gca,'XLim'),get(gca,'YLim'));
setPositionConstraintFcn(hROI,fcn);
When I perform the same operation on those large images outside the GUI it works. Any hint is welcome!
Thanks!
I found a workaround to the problem, in case it can help someone:
In the call to imshow just before calling imrect, we need to specify the axis limits as the "XData" and "YData" parameters.
Example:
imshow(Movie{Frame},'parent',handles.axes1_Video,'XData',get(gca,'XLim'),'YData',get(gca,'YLim'))
It works for images up to 1024x1024.

Drawing 256 squares/rectangles in MATLAB

I am trying to draw 256 small squares using the MATLAB rectangle function. If I draw about 20 squares, the following works fine:
for i = 1:2:40
    rectangle('Position', [5, 3+i, 0.3, 0.3], ...
              'Curvature', [0, 0], ...
              'LineStyle', '-', 'FaceColor', 'black')
end
axis off;
daspect([1,1,1])
But when I change the last value of for loop to 512 (to draw 256 squares), it is not printing properly:
Here is the magnified version of a section of the above image:
This image clearly shows something is wrong somewhere, as the sides of the squares are not perfectly equal and the squares become smaller as their number grows. Can anybody help me draw the squares with exact, non-diminishing sizes? (I don't have any issues with memory, and I can tolerate scrolling through multiple pages to cover all the squares.)
It is a classical Moiré effect. You can't show that many rectangles on your monitor because there aren't enough pixels, so MATLAB does some downsampling for you. This introduces a new spatial frequency that did not exist in the original.
I tried your code and it works fine even when the loop does 512 iterations, as long as you zoom in on the final MATLAB figure. The artifacts you describe are probably caused by the monitor resolution, or by a low resolution when exporting to non-vector file formats.
Try to export your image as a vector file (eps or svg) to see that everything looks fine when you zoom in.
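A minimal sketch of exporting the figure as a vector file using MATLAB's built-in print command (the output file names are illustrative):

```matlab
axis off;
daspect([1, 1, 1]);
print(gcf, '-depsc2', 'squares.eps');   % EPS, a vector format
print(gcf, '-dsvg',   'squares.svg');   % SVG, supported in newer MATLAB releases
```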

what does MajorAxisLength property in regionprop matlab function mean?

I am using the regionprops function in MATLAB to get the MajorAxisLength of an image. Logically, this number should not be greater than sqrt(a^2+b^2), where a and b are the width and height of the image, but for my image it is. My black and white image contains a black circle in the center. This seems strange to me. Can anybody help?
Thanks.
If you look at the code of regionprops (subfunction ComputeEllipseParams), you see that they use the second moment to estimate the ellipsoid radius. This works very well for ellipsoid-shaped features, but not very well for features with holes. The second moment increases if you remove pixels from around the centroid (which is, btw, why they make I-beams). Thus, the bigger the 'hole' in the middle of your image, the bigger the apparent ellipsoid radius.
In your case, you may be better off using the Extrema property of regionprops and calculating the largest radius from there.
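A minimal sketch of estimating the largest radius from the Extrema and Centroid properties (bw is a placeholder for your binary image):

```matlab
stats = regionprops(bw, 'Extrema', 'Centroid');
ext = stats(1).Extrema;            % 8 x 2 matrix of [x y] extremal points
c   = stats(1).Centroid;           % [x y] centroid of the region

% Largest distance from the centroid to any extremal point
d = sqrt(sum((ext - c).^2, 2));    % uses implicit expansion (R2016b+)
maxRadius = max(d);
```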