Can someone explain how a histogram can be used to measure the degree of similarity between images, for example using the Euclidean distance?
http://en.wikipedia.org/wiki/Image_histogram
http://en.wikipedia.org/wiki/Euclidean_distance
So, based on color: look at the statistics of which colors occur most and how they are distributed throughout the image.
The Euclidean distance is the difference between two points, so it captures:
1. the difference in colors and in the amount of each color, and
2. how far apart the pixels are; counting per component, [x=2 y=4 z=2] and [x=4 y=-4 z=9] differ by [x=2 y=8 z=7].
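As a minimal sketch of the histogram idea (my own illustration; the file names are placeholders and imhist needs the Image Processing Toolbox), two grey-scale images can be compared by the Euclidean distance between their normalized histograms:
im1 = imread('image1.png');      % grey-scale images assumed
im2 = imread('image2.png');
h1 = imhist(im1) / numel(im1);   % 256-bin histogram, normalized
h2 = imhist(im2) / numel(im2);
d = sqrt(sum((h1 - h2).^2));     % smaller d means more similar histograms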
I have a BW image and have to calculate its average intensity. For this I have to store the individual intensity values of all pixels of the image and then compute the average. In this calculation only non-zero pixel intensities should be counted (fully black pixels, i.e. intensity value zero, should not be taken into the calculation). How can I do that?
You can try this, but note it averages the per-column means, and it doesn't work if any column of the image is all zeros!
im = imread('imageBW.jpg');
intensity = mean(sum(im) ./ sum(im ~= 0));   % per-column mean of non-zero pixels, then averaged
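A variant that sidesteps the all-zero-column problem (a sketch; it averages over all non-zero pixels at once instead of per column):
im = imread('imageBW.jpg');
intensity = sum(im(:)) / nnz(im);   % total intensity over the number of non-zero pixels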
Given the 4 points of an area, I want to generate 30 random points inside the area. Is there a fast way to do that? I ask because if there is a library for it, I could generate many more than 30 random points.
Do you mean 4 corners? Find the transform that maps your shape to a square, calculate random uniform coordinates in the square, then map back to your shape with the inverse transform.
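A sketch of that idea in MATLAB (Image Processing Toolbox; the corner coordinates are placeholders, and note that a projective map keeps the distribution exactly uniform only when the quadrilateral is a parallelogram):
corners = [0 0; 10 1; 12 8; 1 9];                  % the 4 corners, in order
square  = [0 0; 1 0; 1 1; 0 1];                    % unit square, matching order
tform = fitgeotrans(square, corners, 'projective');
pts = transformPointsForward(tform, rand(30, 2));  % 30 random points inside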
I have a binary image containing an object, as illustrated in the figure below. The centerline of the object is depicted in red. I would like to relabel each pixel belonging to the object with a color. For instance, pixels whose orthogonal distance to the centerline is less than half of the distance from the centerline to the object boundary should be labeled blue, the rest green. An illustration is given below. Any ideas?
Also, how could I fit a 1D Gaussian centered on the object centerline and oriented orthogonally to it?
The image in full resolution can be found under: http://imgur.com/AUK9Hs9
Here is what comes to mind (provided you have the Image Processing Toolbox):
Create two binary images: one, BWin, with 1 (true) pixels at the location of your red line, and one, BWout, that is the opposite of your white region (1 (true) outside the region and 0 (false) inside).
Like this:
BWin: (binary image of the centerline)
BWout: (binary image of everything outside the region)
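For instance, assuming BW is the binary object mask and CL is a binary image of the red centerline (variable names are mine), the two images could be built as:
BWin  = CL;    % 1 (true) on the centerline, 0 elsewhere
BWout = ~BW;   % 1 (true) outside the white region, 0 (false) inside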
Then apply the Euclidean distance transform to both using bwdist:
Din = bwdist(BWin);
Dout = bwdist(BWout);
You now have two images whose pixel intensities represent the Euclidean distance to the closest nonzero pixel.
Now subtract one from the other; the values of the difference will be positive on one side of the equidistant line and negative on the other:
blueMask = Din - Dout > 0;       % true where a pixel is farther from the centerline than from the boundary
greenMask = ~BWout & blueMask;   % that outer band, restricted to the inside of the region
You can then populate the RGB channels using the masks:
Result = zeros([size(BWin) 3]);      % RGB output image
Result(:,:,1) = BWin;                % red channel: the centerline
Result(:,:,2) = greenMask;           % green channel: outer band
Result(:,:,3) = ~blueMask & ~BWin;   % blue channel: inner band, minus the centerline
imshow(Result);
I'm trying to implement Naive Bayes Nearest Neighbor (NBNN) for image classification. In the algorithm it asks for the Euclidean distance between two pixels belonging to different images.
I have 1) a set of m images in an m-by-40,000 matrix (where 40,000 is the number of pixels in one image) and 2) another set of n images in an n-by-40,000 matrix.
1) is the training set and 2) is the validation set.
In order to apply NBNN, from my understanding, I need to find the Euclidean distance between each pixel of 2) and the corresponding pixel of 1).
My question is, given two grey scale values, one from 1) and the other from 2), how would I find the Euclidean distance between them in order to apply k-NN?
Let x, y be two gray-scale 200-by-200 images with pixel levels x1, x2, ..., x40000 and y1, y2, ..., y40000.
The Euclidean distance between x and y is d(x, y) = sqrt(sum_i (xi - yi)^2).
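In MATLAB, with each image stored as a 1-by-40000 row vector, that is (a sketch; the cast to double keeps unsigned integer subtraction from saturating):
d = sqrt(sum((double(x) - double(y)).^2));   % Euclidean distance between images x and y
% equivalently: d = norm(double(x) - double(y));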
I will refer to the notation and definition given on Wikipedia.
You have 1d data, thus p = (p1) and q = (q1), and d(p, q) = sqrt((p1 - q1)^2) = |p1 - q1|.
For the 1d case, the Euclidean distance is simply the absolute difference of the grey values.
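In MATLAB this is simply (casting to double so that unsigned integer grey values don't saturate on subtraction):
d = abs(double(p1) - double(q1));   % 1d Euclidean distance = absolute difference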
I'm a bit confused about the meaning of edge density. From the equation
edge density = ( sum(x=1..w) sum(y=1..h) e(x, y) ) / N
where e is the edge map image (the magnitude of the vertical edge at (x, y)), there are two versions of N:
1st version - N = w x h (width x height)
2nd version - N = number of nonzero vertical edge pixels
What I don't understand is how to calculate the edge density. Is it just the sum of the white edge pixels?
Edited
Hi all, from what I understand of the paper given by #Gilgamesh, N is the area of the region, its width times its height; but from the answer given there seems to be a conflict, whereby N refers to the number of non-white pixels (black pixels). So which is the correct one? Here is another reference on the value of N for calculating edge density.
Basically the edge density is really just a (local) average density, which you can calculate either over binarized images or, more commonly, over grey-scale images.
And yes, in most cases it is basically just a sum over both the x and y coordinates of a subimage, see equation (1) here
http://ro.uow.edu.au/cgi/viewcontent.cgi?article=1517&context=infopapers
followed by averaging.
Regards,
G.
From what I understand about edge density, it is defined as edge density = sum(e(x, y)) / N, where e(x, y) marks the white (edge) pixels and N is the total number of pixels, i.e. N = w x h.
N cannot be the number of black pixels, because that count is arbitrary, ranging from zero to all pixels, and the range of the edge density would then be unbounded.
When N is the area, the range will be [0, 1], which portrays just what we want: the places where edges are dense (or sparse, depending on your requirement).
The edge map is the map of gradient magnitudes (i.e. the length of the gradient vector at each pixel). So the edge density is the average of the gradient magnitude over a neighborhood.
If you have a binary edge map where 0 means no edge and 1 means edge (obtained, for example, by thresholding the gradient magnitude), then the edge density is just the fraction of edge pixels.
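As a sketch of both readings in MATLAB (the file name and the 0.1 threshold are assumptions, not from the papers cited above):
I = im2double(imread('image.png'));   % placeholder file name
[Gx, Gy] = imgradientxy(I);           % Sobel gradient components
e = hypot(Gx, Gy);                    % gradient magnitude (the edge map)
density1 = sum(e(:)) / numel(e);      % 1st version: N = w x h, average edge strength
bw = e > 0.1;                         % binarized edge map (assumed threshold)
density2 = nnz(bw) / numel(bw);       % binary case: fraction of edge pixels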