Using rectangle in MATLAB. Using sum()

I have performed rgb2gray on an image and then ran a Sobel edge detection on it.
Then I did
faceEdges = faceNoNoise(:,:) > 50; % binary threshold
so it turns the outline of the image (a picture of a face) into black and white: a value of 1 is a white pixel and 0 is a black pixel. Someone said I could use this,
mouthsquare = rectangle('position',[recX-mouthBoxBuffer, recY-mouthBoxBuffer, recXDiff*2+mouthBoxBuffer/2, recYDiff*2+mouthBoxBuffer/2],... % see the change in coordinates
'edgecolor','r');
numWhite = sum(sum(mouthsquare));
He said to use two sum() calls because that sums over the rows and columns of the pixels contained within the rectangle. However, numWhite always returns 178 and some decimals.

If you have a 2D matrix M (for example, an image), the way to count how many elements have the value 1 is:
count_1 = sum(M(:)==1)
or
count_1 = sum(reshape(M,1,[])==1)
If the target values are not exactly 1 but lie within a tolerance of, say, ±0.02, then one should ask for:
count_1_pm02 = sum((M(:)>=0.98) & (M(:)<=1.02))
or the equivalent using reshape.
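Back to the rectangle question itself: rectangle() returns a graphics handle, not the pixels it encloses, so sum(sum(mouthsquare)) operates on that handle value rather than on any image data, which is why numWhite is always the same number. To count white pixels inside the box, index into the binary image directly. A minimal sketch using the question's variable names (the box bounds are assumed to match the drawn rectangle, and bounds checking against the image size is omitted):

x1 = round(recX - mouthBoxBuffer);
y1 = round(recY - mouthBoxBuffer);
x2 = round(x1 + recXDiff*2 + mouthBoxBuffer/2);
y2 = round(y1 + recYDiff*2 + mouthBoxBuffer/2);
boxPixels = faceEdges(y1:y2, x1:x2); % rows are y, columns are x
numWhite = sum(boxPixels(:));        % count of white (1) pixels in the box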

Related

Automatically convert pixels to millimeters in Mathematica

I can get the drop contour through a GetDropProfile command.
However, I can't find the conversion factor from pixels to millimeters. The contour of the drop is obtained point by point from left to right, so the first ordered pair in the list gives the coordinates of the leftmost pixel and the last ordered pair gives the rightmost one. Since they lie opposite each other they have the same y, so the difference in x between these two points is the diameter of the drop. How can I automate this conversion from pixels to millimeters, view the graph in millimeters, and smooth the contour of the discrete curve, automatically choosing how many points to take on the right and left?
Below are the image of the drop and the contour obtained in pixels.
As posted here, assuming the axes are in millimetres, the scale can be obtained from the x-axis ticks, which can be sampled from the 33rd row from the bottom. As can be observed by executing the code below, the left- and rightmost ticks occupy one pixel each, coloured RGB {0.4, 0.4, 0.4}. So there are 427 pixels per 80mm.
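That corresponds to a scale of 80/427 ≈ 0.187 mm per pixel, so multiplying the pixel coordinates of the contour by this factor converts them to millimetres.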
img = Import["https://i.stack.imgur.com/GIuYq.png"];
{wd, ht} = ImageDimensions[img];
data = ImageData[img];
(* View the left- and rightmost pixel data *)
Take[data[[-33]], 20]
Take[data[[-33]], -20]
p1 = LengthWhile[data[[-33]], # == {1., 1., 1.} &];
p2 = LengthWhile[Reverse[data[[-33]]], # == {1., 1., 1.} &];
p120 = wd - p1 - p2 - 1
427
(* Showing the sampled row in the graphic *)
data[[-33]] = ConstantArray[{1, 0, 0}, wd];
Graphics[Raster[Reverse[data]]]
You might ask about smoothing the curve here https://mathematica.stackexchange.com

How to perform an orthographic projection on a z-Buffer image in Matlab?

I am facing the same problem as mentioned in this post: Depth as distance to camera plane in GLSL. However, I am not facing it with OpenGL, but simply with MATLAB.
I have a depth image rendered from the Z-buffer in 3ds Max, and I was not able to get an orthographic representation of the z-buffer. For a better understanding, I will use the same sketch as in that post:
      *                    |--*
     /                     |
    /                      |
   C-----*                 C-----*
    \                      |
     \                     |
      *                    |--*
The 3 asterisks are pixels and the C is the camera. The lines from the asterisks are the "depth". In the first case (left), I get the distance from each pixel to the camera. In the second (right), I want to get the distance from each pixel to the camera plane.
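(Put differently: if d is the distance from the camera to a pixel and θ is the angle between that viewing ray and the optical axis, the planar depth I am after is z = d·cos(θ).)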
The settings of my camera are the following:
WIDTH = 512;
HEIGHT = 424;
FOV = 89.971;
aspect_ratio = WIDTH/HEIGHT;
%clipping planes
near = 500;
far = 5000;
I calculate the frustum settings as follows:
%calculate frustum settings
top = tan((FOV/2)*5000)
bottom = -top
right = top*aspect_ratio
left = -top*aspect_ratio
And set the projection matrix like this:
%Generate matrix
O_p = [2/(right-left) 0 0 -((right+left)/(right-left)); ...
0 2/(top-bottom) 0 -((top+bottom)/(top-bottom));...
0 0 -2/(far-near) -(far+near)/(far-near);...
0 0 0 1];
After this I read in the depth image, which was saved as a 48-bit RGB image where each channel is identical, so only one channel needs to be used.
%Read in image
img = imread('KinectImage.png');
%Keep only one channel (all hold the same information)
c1 = img(:,:,1);
The pixel values have to be inverted, since the closer a point is to the camera, the brighter its pixel. If a pixel is 0 (no object to render) it is set to 2^16, so that after the bit complement its value is still 0.
%Invert the bits that are not zero, so that the z-image has the correct values
c1(c1 == 0) = 2^16
c1_cmp = bitcmp(c1);
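(For a uint16 channel, bitcmp(v) equals 65535 - v; the pixels set to 2^16 saturate to 65535 when assigned, so the complement maps them back to 0 as intended.)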
To apply the matrix to each z-buffer value, I reshape the image into a one-dimensional vector and build a row of the form [0 0 z 1] for every element.
c1_cmp1d = squeeze(reshape(c1_cmp,[512*424,1]));
converted = double([zeros(WIDTH*HEIGHT,1) zeros(WIDTH*HEIGHT,1) c1_cmp1d zeros(WIDTH*HEIGHT,1)]) * double(O_p);
After that, I pick out the 4th element of each result vector and reshape it into an image:
img_con = converted(:,4);
img_con = reshape(img_con,[424,512]);
However, the effect that the Z-buffer is not orthographic is still there, so did I get something wrong? Is my calculation flawed, or did I make a mistake here?
Depth Image coming from 3ds max
After the computation (the white is still "0" , but the color axis has changed)
It would be great to achieve this directly in 3ds Max, which would resolve the issue; however, I was not able to find such a setting for the z-buffer. Thus, I want to solve this using MATLAB.

How to find maximum pixel intensity of the two regions obtained after finding the threshold of the image

We are working on a DIP project where we found the threshold of a grayscale image. Now we have to find the maximum intensity of the two regions that we got: one region whose pixels are less than the threshold and the other whose pixels are greater than it.
PS: We are not converting the image into a binary image after finding the threshold. We just have to separate the pixels into two regions and find the maximum intensity in each region.
PS: We are working in MATLAB.
Actually it is pretty simple to do.
Here is a function to do so:
function [lowThrMaxIntns, highThrMaxIntns] = FindMaxIntensity(mInputImage, thrLevel)
    % Pixels which are lower than the threshold level
    mLowThrImage = zeros(size(mInputImage));
    % Pixels which are higher than (or equal to) the threshold level
    mHighThrImage = zeros(size(mInputImage));
    mLowThrImage(mInputImage < thrLevel) = mInputImage(mInputImage < thrLevel);
    mHighThrImage(mInputImage >= thrLevel) = mInputImage(mInputImage >= thrLevel);
    [lowThrMaxIntns, lowThrMaxIntnsIdx] = max(mLowThrImage(:));
    [highThrMaxIntns, highThrMaxIntnsIdx] = max(mHighThrImage(:));
end
The outputs are only the intensities of the pixels.
If you need the pixel locations, use the Idx variables (use ind2sub to convert them to subscript form).
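For example, a minimal usage sketch (the file name is hypothetical, and graythresh is used here only to pick some threshold):

mImage = im2double(imread('cells.png')); % grayscale image scaled to [0, 1]
thr = graythresh(mImage);                % Otsu threshold, also in [0, 1]
[maxBelow, maxAbove] = FindMaxIntensity(mImage, thr);
fprintf('max below threshold: %.3f, max above: %.3f\n', maxBelow, maxAbove);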

Matlab and OpenCV calculate different image moment m00 for the same image

For exactly the same image
OpenCV code:
Mat img = imread("testImg.png", 0);
Mat img_bw;
threshold(img, img_bw, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
Mat tmp;
img_bw.copyTo(tmp);
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(tmp, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// Get the moments
vector<Moments> mu(contours.size());
for (int i = 0; i < contours.size(); i++)
{
    mu[i] = moments(contours[i], false);
}
// Display area (m00)
for (int i = 0; i < contours.size(); i++)
{
    cout << mu[i].m00 << endl;
    // I also tried
    // cout << contourArea(contours.at(i)) << endl;
    // but the result is the same
}
Matlab code:
Img = imread('testImg.png');
lvl = graythresh(Img);
bw = im2bw(Img, lvl);
stats = regionprops(bw, 'Area');
for k = 1:length(stats)
    Area = stats(k).Area; % m00
end
Does anyone have any thoughts on this? How can I unify them? I think they use different methods to find contours.
I uploaded the test image at the link below so that anyone who is interested can reproduce the procedure.
It is a small 100-by-100, 8-bit grayscale image with only 0 and 255 pixel intensities. For simplicity, it has only one blob in it.
For OpenCV, the area of the contour (image moment m00) is 609.5 (a very odd value).
For MATLAB, the area of the contour (image moment m00) is 763.
Thanks
Many different definitions exist for how contours should be extracted from a binary image. For example, the contour can be the polygon that forms the perimeter of a white object in the binary image. If this definition were used by OpenCV, then the areas of contours would be the same as the areas of the connected components found by MATLAB. But this is not the case. The contour found by the findContours() function is the polygon that connects the centers of neighboring "edge pixels", where an edge pixel is a white pixel that has a black neighbor in its N4 neighborhood.
Example: suppose you have an image of size 100x100 pixels. Every pixel above the diagonal is black; every pixel below or on the diagonal is white (a black triangle and a white triangle). The exact separation polygon would have almost 200 vertices at a distance of 1 pixel from each other: (0,0), (1,0), (1,1), (2,1), (2,2), ..., (100,99), (100,100), (0,100). As you can see, this definition is not very practical. The polygon returned by OpenCV has exactly the 3 vertices needed to define the triangle: (0,0), (99,99), (0,99). Its area is (99 x 99 / 2) pixels. It is not equal to the number of white pixels, and it is not even an integer. But this polygon is more practical than the previous one.
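(To make that concrete: the white triangle in this example contains 100·101/2 = 5050 pixels, which is the kind of count regionprops reports, while the polygon area is 99·99/2 = 4900.5.)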
These are not the only possible definitions for polygon extraction; many others exist. Some of them (in my opinion) may be better than the one used by OpenCV, but this is the one that was implemented and is used by a lot of people.
Currently there is no direct workaround for your problem. If you want to get exactly the same numbers from MATLAB and OpenCV, you will have to draw the contours found by findContours onto a black image and use the moments() function on that image. I know that the upcoming OpenCV 3 has a function that finds connected components, but I haven't tried it myself.
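If you only want to see the definitional difference from the MATLAB side, a rough sketch (not an exact equivalent of findContours, so it will generally not reproduce OpenCV's number exactly) is to trace the boundary pixels with bwboundaries and take the area of that polygon:

% bw and stats come from the MATLAB snippet in the question; a single blob is assumed
B = bwboundaries(bw, 8, 'noholes');
for k = 1:numel(B)
    b = B{k};                             % boundary as [row, col] pixel centers
    polyAreaK = polyarea(b(:,2), b(:,1)); % area of the polygon through pixel centers
    fprintf('blob %d: polygon area = %.1f, pixel count = %d\n', k, polyAreaK, stats(k).Area);
end

Such a value typically lands closer to the polygon-based 609.5 than to the pixel count of 763, which illustrates that the gap comes from the contour definition rather than from a bug.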

maximum intensity projection matlab with color

Hi all, I have a stack of images of fluorescently labeled particles that are moving through time. The image stack is grayscale.
I computed a maximum intensity projection by taking the maximum of the image stack in the 3rd dimension.
Example:
ImageStack(x,y,N), where N = 31 image frames.
projection2D = max(ImageStack,[],3);
Now, since the 2D projection image is black and white, I was hoping to assign a color gradient so that I can get a sense of the flow of particles through time. Is there a way that I can overlay this image with color, so that I will know where a particle started, and where it ended up?
Thanks!
You could use the second output of max to get which frame the particular maximum came from. max returns an index matrix which indicates the index of each maximal value, which in your case will be the particular frame in which it occurred. If you use this with the imagesc function, you will be able to plot how the particles move with time. For instance:
ImageStack(x,y,N) where N = 31 image frames.
[projection2D, FrameInfo] = max(ImageStack,[],3);
imagesc(FrameInfo);
set(gca,'ydir','normal'); % Otherwise the y-axis would be flipped
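Building on that idea, here is a minimal sketch; the transparency step at the end is an extra suggestion (not part of the answer above) that hides the dark background by using the intensity projection as an alpha mask:

[projection2D, FrameInfo] = max(ImageStack, [], 3);
h = imagesc(FrameInfo);                      % color now encodes the frame index
colormap(jet(size(ImageStack, 3)));          % one color per frame
colorbar;                                    % legend: which color is which frame
set(gca, 'ydir', 'normal');                  % keep the y-axis unflipped
set(h, 'AlphaData', mat2gray(projection2D)); % dim pixels become transparent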
You can color each frame and then sum the colored images together. That way overlapping areas get mixed colors, which you would miss using the max function. (Although I like the previous answer more than mine.)
hStep = 1/N;
currentH = 0;
resultImage = uint8(zeros(x, y, 3));
for i = 1 : N
    % Pick a color for this frame by stepping through the hue circle
    rgbColor = hsv2rgb([currentH, 1, 0.5]);
    resultImage(:,:,1) = resultImage(:,:,1) + im(:,:,i) * rgbColor(1);
    resultImage(:,:,2) = resultImage(:,:,2) + im(:,:,i) * rgbColor(2);
    resultImage(:,:,3) = resultImage(:,:,3) + im(:,:,i) * rgbColor(3);
    currentH = currentH + hStep;
end
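Note that resultImage is uint8, so the per-channel sums saturate at 255; if that is a problem, accumulate into a double array instead and convert for display at the end (for example with im2uint8 or mat2gray).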