I tried to implement the integral image in MATLAB as follows:
im = imread('image.jpg');
ii_im = cumsum(cumsum(double(im)')');
im is the original image and ii_im is the integral image.
The problem is that the values in ii_im overflow the 0 to 255 range.
When using imshow(ii_im), I always get a very bright image, and I am not sure this is the correct result. Am I doing this correctly?
You're implementing the integral image calculation correctly, but I don't understand why you would want to visualize it, especially since the sums will go beyond any normal integer range. This is expected: you are summing intensities over larger and larger rectangular neighbourhoods as you move towards the bottom right of the image, so large numbers in that corner are inevitable. You will also, obviously, get a white image when trying to display the result, because most of the values go beyond 255, which is visualized as white.
If I can add something, one small optimization I have is to get rid of the transposing and use cumsum to specify the dimension you want to work on. Specifically, you can do this:
ii_im = cumsum(cumsum(double(im), 1), 2);
It doesn't matter which dimension you specify first (2 then 1, or 1 then 2): as long as you sum over both dimensions, the result at each point is the same.
Back to your question about display: if you really, really, really really... I mean really want to, you can normalize the contrast by doing:
imshow(ii_im, []);
However, what you should expect is a gradient image that starts dark at the top and becomes brighter towards the bottom right of the image. Remember, each point in the integral image stores the total sum of pixel intensities in the rectangle spanning from the top left corner of the image to that point. Therefore, as we move further down and to the right in the integral image, the total sum should increase.
With the cameraman.tif image, this is the original image, as well as its integral image visualized using the above command:
Either way, there is absolutely no reason why you would want to visualize it. You would use this directly with whatever application requires it (adaptive thresholding, Viola-Jones detector, etc.)
Another option could be applying a log operation to each value in the integral image. Something like:
imshow(log(1 + ii_im), []);
However, this will compress most of the pixels into the same contrast range, which is probably not useful. This is what I get with cameraman.tif:
The moral of this story is that you need some sort of contrast normalization so that you can fit all of the values in your integral image within the confines of the data type that is used to display the image on the screen using imshow.
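Since the real use of ii_im is direct lookups rather than display, here is a minimal sketch of how those lookups work: the sum over any rectangle comes from just four reads of the integral image. The rectangle coordinates below are arbitrary example values, and padarray is from the Image Processing Toolbox.
% Pad with a leading row/column of zeros so the (r1-1) and (c1-1)
% lookups never fall off the image.
ii_pad = padarray(ii_im, [1 1], 0, 'pre');
r1 = 50; r2 = 100; c1 = 60; c2 = 120; % example rectangle (arbitrary values)
rectSum = ii_pad(r2+1, c2+1) - ii_pad(r1, c2+1) ...
        - ii_pad(r2+1, c1) + ii_pad(r1, c1);
% rectSum now equals sum(sum(double(im(r1:r2, c1:c2)))), in constant time.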
I'm trying to calculate and present the subtraction images of a dynamic MRI sequence. However, I've looked for quite some time and I can't seem to find how to relate the individual Rescale Slope and Rescale Intercept (and even Window Center and Window Width) fields to the corresponding fields of the new subtracted image.
I'm sorry if it is a repost, but I can't find the answer to this particular problem.
I guess that for Slope and Intercept I probably should just apply the old ones, subtract the images and make sure they are within uint16 range, but what about Window Center and Width?
Thanks in advance
About Rescale Slope and Rescale Intercept:
You can apply the original values if they are equal in the two slices you subtract from each other. If they are not equal, you will have to rescale one of the slices to the slope/intercept of the other before doing the subtraction; otherwise the subtraction will yield wrong grayscales. The resulting subtracted image is then assigned the slope and intercept of the slice you rescaled the other one to.
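As a hedged sketch of that idea (all variable names here are assumptions: raw1/raw2 are the stored pixel arrays, and the slopes/intercepts would come from the DICOM headers, e.g. via dicominfo):
% Convert both slices to real-world values with their own slope/intercept.
real1 = double(raw1) * slope1 + intercept1;
real2 = double(raw2) * slope2 + intercept2;
% Subtract in real-world units.
subReal = real1 - real2;
% Encode the result with the slope/intercept of slice 1;
% uint16() saturates any out-of-range values.
subStored = uint16((subReal - intercept1) / slope1);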
About Window Center and Window Width:
There is no right or wrong answer here. Windowing depends on the taste of the person viewing the images - ask three physicians and receive four different answers ;-)
I would rather recommend calculating new values from the histogram of the subtracted image than trying to derive them from the original slices. Subtraction means that you eliminate tissue, and the original values were probably adjusted in such a way that this tissue was visible. Now that you have subtracted it, you want a window that emphasizes the vessels - the rest is just noise.
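One possible heuristic, purely as a suggestion on my part: derive the new window from robust percentiles of the subtracted image, so the remaining vessel intensities set the range rather than the noise. Here subReal is the subtracted image from the sketch above, and prctile requires the Statistics Toolbox.
v  = double(subReal(:));
lo = prctile(v, 1);  % clip 1% outliers on each side
hi = prctile(v, 99);
windowWidth  = hi - lo;
windowCenter = (hi + lo) / 2;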
I want to obtain an n*m matrix with an approximated "height" at each discrete point. The input is a picture (see link below) of the contours from a map; each contour line represents a 5 m increase or decrease in height.
My thoughts:
I imported the picture as a logical PNG into a matrix called A, which means that every contour line in the matrix is a connected strip of '1's and everything else is 0.
My initial thought was to just start in the upper left corner of the matrix, set that height to zero, declare a new matrix 'height', and start by figuring out height(:,1), adding 5 m each time we meet a '1' in the A matrix. With the whole first column known, I then, for each row, start from the left and add 5 m each time we meet a '1'.
I quickly realized, however, that this wouldn't work, since there is no way for the algorithm to know whether it should add or subtract height, i.e. whether we are going uphill or downhill.
If I could somehow approximate the gradient from the density of the contour lines, that would be great; it would still always be possible to mistake an uphill for a downhill and vice versa, but then I could manually decide which of the two cases is true.
Picture:
WORK IN PROGRESS
%% Read and binarize the image
I = imread('https://i.stack.imgur.com/pRkiY.jpg');
I = rgb2gray(I);
I = I > graythresh(I)*255; % graythresh returns a level in [0,1], so scale it to uint8
%% Get the skeleton, i.e. the lines!
sk = bwmorph(~I, 'skel', Inf);
%% The lines are too thin, dilate them
dilated = ~imdilate(sk, strel('disk', 2, 4));
%% Label the image!
test = bwlabel(dilated, 8);
imshow(test, []); colormap(plasma); % plasma is a third-party colormap; use the built-in parula if you prefer.
Missing: label each adjacent area with a number +1 (or -1) relative to its neighbours (no idea how to do this yet).
Missing: interpolate the flat areas. This should be doable once the altitudes are known: one can set the pixels in the skeleton image to their altitudes and interpolate the rest using griddata, which will be slow, but still doable (a rough sketch follows below).
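A rough sketch of that interpolation step, assuming alt is a matrix holding the (already determined) altitude at each skeleton pixel and NaN everywhere else:
[rows, cols] = size(alt);
[X, Y] = meshgrid(1:cols, 1:rows);
known  = ~isnan(alt); % skeleton pixels with known heights
height = griddata(X(known), Y(known), alt(known), X, Y); % linear by default
% griddata over a full-resolution grid is slow; downsampling the known
% points first, or using scatteredInterpolant, speeds this up considerably.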
Disclaimer: not a full answer yet; feel free to edit or reuse the code in this answer to further it!
I have an image and I apply thresholding to it to produce a binary mask. I draw the histogram before and after the thresholding process. The histograms look like below.
The second figure, which is after thresholding, doesn't show any peaks. Does that mean my thresholding is wrong? Can anyone please explain these histograms?
Update
Image after thresholding
To summarize Sardar's comment, the horizontal range of your plot is tight. Simply loosen the range a bit so you can see the result better. Doing xlim([-0.5 1.5]); will certainly do that, and we can see it in the last figure of your update. As for how you interpret the histogram... well, for black and white images, examining the histogram is never meaningful, because there are only two possible intensities to examine - 0 and 1. Histograms usually give a glimpse of the contrast of an image: if the histogram is spread out, the image usually has high contrast, and if the histogram occupies only a small range, the image usually has poor contrast.
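As a minimal reproduction of that fix (cameraman.tif is just a stand-in; any binary mask will do):
bw = imbinarize(imread('cameraman.tif')); % threshold to get a binary image
histogram(double(bw(:)));                 % only two bars: one at 0, one at 1
xlim([-0.5 1.5]);                         % loosen the horizontal range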
Remember that a histogram simply counts the occurrences of values in a data set. In this case, we are counting how many times we see 0 and 1 in the image. Referring to your last plot, there are approximately 9000 pixels with intensity 0 and approximately 4000 pixels with intensity 1. This gives absolutely no indication of the contrast or the spread of the intensities in your image, because there are only two possible intensities to be seen. As such, to answer your question in a long-winded way: you can't really interpret anything.
The only thing I can possibly suggest is that it tells you the ratio of object pixels to background pixels, which could serve as a measure of quality. Usually, when we determine what are object and what are background pixels, we expect there to be more background pixels than object pixels in order to discern the object from the background. Therefore, the more black pixels you have, the better it may be. That being said, I can't really say more unless you actually show us what your image looks like after you threshold it.
I am looking for a method that looks for shapes in a 3D image in MATLAB. I don't have a real 3D sample image right now; in fact, my 3D image is actually a set of quantized 2D images.
The figure below is what I am trying to accomplish:
Although the example figure above is a 2D image, please understand that I am trying to do this in 3D. The input shape has these "tentacles", and I have to look for irregular shapes among them. The size of the tentacle can change from one point to another, but at a "consistent and smooth" pace - that is, it can be big at first, then gradually become smaller later. But if the shape suddenly gets bigger, not so gradually, like the red bottom-right area in the figure above, then this is one of the volumes of interest. Note that these shapes tend to be rounded and spherical, but some of them are completely arbitrary and random.
I've tried the following methods so far:
1. Erode n times and dilate n times: given that the "tentacles" are always smaller than the volume of interest, this method will work as long as the volume is not too small. However, we would need some mechanism to deal with thicker portions of the tentacle that somehow become false positives.
2. Hough transform: although I was suggested this method earlier (from Segmenting circle-like shapes out of Binary Image), I see that it works for some of the more rounded shapes, but at the same time, more difficult cases, such as less-rounded, distorted, and/or arbitrary shapes, can slip through this method.
3. Isosurface: because my input is a set of quantized 2D images, using an isosurface allows me to reconstruct the image in 3D and see things more clearly. However, I'm not sure what could be done further in this case.
So can anyone suggest some other techniques for segmenting such shapes out of these "tentacles"?
Every point in your image has the property that it is either part of a tentacle or part of a volume of interest. If it is unknown a priori what the expected girth of the tentacle is, then method 1 won't work, because we can't set n. However, we know that the n that erases the tentacle is smaller than the n that erases the node. You can replace each point with an integer representing its distance to the edge. Effectively, this can be done via successive single-pixel erosion, replacing each pixel with the count of the iteration at which it was erased. Let's call this the thickness at the pixel, though my rusty old mind tells me there is a term of art for this.
Now we want to search for regions that have a higher-than-typical morphological distance from the boundary. I would do this by first skeletonizing the image (http://www.mathworks.com/help/toolbox/images/ref/bwmorph.html) and then searching for local maxima of the thickness along the skeleton. These are points on the skeleton where the thickness is larger than at neighbouring points.
Finally, I would sort the local maxima by thickness; a threshold on that should help separate the volumes of interest from the false positives.
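A minimal 2D sketch of this pipeline: the per-pixel erosion count described above is exactly what the distance transform computes, which MATLAB provides as bwdist (and both it and the skeletonization generalize to 3D volumes). The input file name and the threshold value are assumptions to adapt to your data.
bw = imread('shape.png') > 0;   % hypothetical binary input
D  = bwdist(~bw);               % distance to the boundary = "thickness"
sk = bwmorph(bw, 'skel', Inf);  % skeletonize the shape
maxima = imregionalmax(D) & sk; % thickness maxima restricted to the skeleton
thresh = 10;                    % assumed value; tune for your data
voi = maxima & (D > thresh);    % candidate volumes of interest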
I am using the regionprops function in MATLAB to get the MajorAxisLength of an image. I think that, logically, this number should not be greater than sqrt(a^2+b^2), in which a and b are the width and height of the image, but for my image it is. My black and white image contains a black circle in the center of the image. I think this is strange. Can anybody help me?
Thanks.
If you look at the code of regionprops (subfunction ComputeEllipseParams), you see that they use the second moment to estimate the ellipsoid radius. This works very well for ellipsoid-shaped features, but not very well for features with holes. The second moment increases if you remove pixels from around the centroid (which is, btw, why they make I-beams). Thus, the bigger the 'hole' in the middle of your image, the bigger the apparent ellipsoid radius.
In your case, you may be better off using the Extrema property of regionprops and calculating the largest radius from there.
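A hedged sketch of that alternative (bw is a stand-in binary image; 'Extrema' returns the eight extremal points of each region):
bw = imread('mask.png') > 0;         % hypothetical binary input
s  = regionprops(bw, 'Extrema', 'Centroid');
d  = s(1).Extrema - s(1).Centroid;   % offsets of the 8 extremal points (implicit expansion, R2016b+)
maxRadius = max(sqrt(sum(d.^2, 2))); % largest centroid-to-extremum distance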