I'm implementing metric rectification of an image with projective distortion in the following manner:
From the original image I find two sets of parallel lines and compute their intersection points (the vanishing points, i.e. the images of points at infinity).
I select five non-collinear points on a circle to fit a conic, then check where that conic intersects the imaged line at infinity, which I obtain from the two vanishing points above.
I use those points to find the distorted dual degenerate conic.
Theoretically, since the distorted conic is given by C*' = H C* H' (where C* is the dual degenerate conic, ' denotes transpose, and H is my homography), I should be able to run an SVD to determine H. Undistorted, C* is the 3x3 identity matrix with the last diagonal element set to zero. However, when I run the SVD I don't get ones on the diagonal of the singular-value matrix. For some matrices I can avoid this by using a Cholesky factorization instead (which factors C*' = HH', which is mostly okay for this purpose), but that requires a positive definite matrix. Is there a way to distribute the scale in the diagonal matrix returned by the SVD equally into the U and V' factors while keeping them equal (i.e. U = V)?
I'm using MATLAB for this. I'm sure I'm missing something obvious...
The lack of positive definiteness of the resulting matrices was due to noise; the image used had too much radial distortion, rendering even the selection of many points on the circle fairly useless in this approach.
The point missed in the SVD approach was to remove the scale from the diagonal matrix by right- and left-multiplying by its square root (with the last diagonal element set to 1: that singular value should be zero, but a zero component there would not yield a usable result).
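For completeness, that factorization step can be sketched numerically (here in NumPy for illustration; the MATLAB version is a direct transliteration using svd and diag, and the homography below is made up purely for the demonstration):

```python
import numpy as np

# Build a synthetic distorted dual conic from a known H, then factor it back.
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.3, 0.9, 2.0],
                   [0.001, 0.002, 1.0]])
C_star = np.diag([1.0, 1.0, 0.0])      # undistorted dual degenerate conic
C_dist = H_true @ C_star @ H_true.T    # distorted dual conic (symmetric, rank 2)

U, s, Vt = np.linalg.svd(C_dist)
# C_dist is symmetric PSD, so U == V up to sign, and the scale can be
# absorbed as H = U * sqrt(S).  The last singular value is ~0; set its
# square-root entry to 1 so H stays invertible (it only affects the
# null direction, which C* zeroes out anyway).
d = np.sqrt(s)
d[2] = 1.0
H_rec = U @ np.diag(d)

# H_rec reproduces the distorted conic; it matches H_true only up to a
# similarity, which is the inherent ambiguity of metric rectification.
assert np.allclose(H_rec @ C_star @ H_rec.T, C_dist, atol=1e-8)
```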
I am having trouble understanding the result of the 2D Fourier transform on images. Are the indices in the resulting matrix the horizontal and vertical frequencies respectively, of the image? How can I extract the frequencies that are present in the image from the matrix?
As I recall, in the 1D case, Fourier transforming a signal gives a spectrum representing the magnitude of each frequency in that signal. How does it work for images? How do I interpret the result?
Sample code:
img = imread(image);  % read the image (image holds the filename)
A = rgb2gray(img);    % convert RGB to grayscale
X = fft2(A);          % 2D discrete Fourier transform
How do I interpret the X matrix in this case for example?
Thanks in advance!
Maybe not a complete answer but I'll try.
The frequency domain of the image could look like the one in the image I provided (1). What those frequency-domain plots show is the overall direction that is dominant in the image; for the first frequency plot this would be vertical. If you take a close look you can also see some edges from the input image. Big changes in intensity result in sharp "edges" in the frequency domain, which is caused by the representation of a rectangular impulse in the frequency domain (2).
Not sure if this helps
Letter image, binary:
Rectangular impulse in the frequency domain:
Let's suppose you have an image of 30x30, i.e. 900 sample points, on which you perform the FFT. After computing the FFT you get a matrix of 900 complex values. It's important to understand that these values are not separate sine and cosine components; they are the complex coefficients your signal is made of. For a real-valued input, only 451 (half + 1) of those values contain independent information: the DC component, 449 complex components, and the Nyquist-frequency component. The remaining 449 values are complex conjugates of the first 449 and carry no new information; they only guarantee that the reconstructed signal has no imaginary part.
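That conjugate-symmetry claim is easy to check numerically; a small sketch (in NumPy for illustration, using a random real signal of length 900):

```python
import numpy as np

# For a real-valued signal of length N, X[k] == conj(X[N-k]), so only
# N//2 + 1 bins (DC, N/2 - 1 complex bins, Nyquist) are independent.
N = 900
x = np.random.default_rng(0).standard_normal(N)  # random real signal
X = np.fft.fft(x)

for k in range(1, N // 2):
    assert np.allclose(X[k], np.conj(X[N - k]))
# DC and Nyquist bins are purely real for real input:
assert abs(X[0].imag) < 1e-9 and abs(X[N // 2].imag) < 1e-9

print(N // 2 + 1)  # number of independent bins -> 451
```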
How can you derive information from your matrix:
modulus of a coefficient: amplitude information
angle of a coefficient: phase information
real part: cosine amplitude
imaginary part: sine amplitude
component at index i: frequency (i/N)*Sr, where N is the FFT size and Sr is your sampling rate
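The index-to-frequency rule in the last item can be checked with a test tone (sketched in NumPy; the 1 kHz sampling rate and 50 Hz tone are arbitrary choices for the illustration):

```python
import numpy as np

# Bin i of an N-point FFT corresponds to frequency (i/N)*Sr.
Sr = 1000.0            # sampling rate in Hz (assumed for illustration)
N = 500                # FFT size
f0 = 50.0              # test-tone frequency
t = np.arange(N) / Sr
x = np.sin(2 * np.pi * f0 * t)

X = np.fft.fft(x)
i = np.argmax(np.abs(X[:N // 2]))   # peak bin in the positive-frequency half
assert np.isclose((i / N) * Sr, f0) # peak lands exactly at the tone frequency
```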
Hope this helps. To understand further applications, refer to this page: http://homepages.inf.ed.ac.uk/rbf/HIPR2/fourier.htm
The projection-slice theorem is a powerful tool for understanding and relating the 2D FT to the 1D FT. It states: take a line through the image at any angle theta (angle increasing counterclockwise, in the conventional sense), project the image onto that line (i.e. integrate the image along the direction perpendicular to it), and take the 1D FT of that projection. That 1D FT is equal to
S(f cos(theta), f sin(theta))
where S is the 2D FT of the image, theta is the angle your line is at, and f is the parameter along the line (just like the parameter x of the x axis).
Once you understand this theorem, I believe it becomes easier to understand the concept of the 2D FT.
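For theta = 0 the theorem says the 1D FT of the image's projection onto the horizontal axis equals the horizontal slice of the 2D FT through the origin, which you can verify in a few lines (sketched in NumPy on a random image):

```python
import numpy as np

# Projection-slice check for theta = 0.
rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64))   # random "image"

F = np.fft.fft2(img)
projection = img.sum(axis=0)          # integrate along the vertical axis

# 1D FT of the projection == slice of the 2D FT along the zero row.
assert np.allclose(np.fft.fft(projection), F[0, :])
```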
I want to brainstorm an idea in MATLAB with you guys. Given a matrix with many columns (14K) and few rows (7), where columns are items and rows are features of the items, I would like to compute the similarity between all items and keep it in a structure that is:
Easy to compute
Easy to access
for 1., I came up with a brilliant idea of using pdist() which is very fast:
A % my matrix
S = pdist(A') % computes the similarity between all columns very fast
However, accessing S is not convenient. I would prefer to access the similarity between items i and j, e.g. as S(i,j):
S(4,5) % is the similarity between item 4 and 5
In its original form, S is an array, not a matrix. Is storing it as a 2D matrix a bad idea storage-wise? Can we think of a neat way to quickly find which similarity corresponds to which pair of items?
Thank you.
You can use pdist2(A',A'). What is returned is essentially the distance matrix in its standard form, where element (i,j) is the dissimilarity (or similarity) between the i-th and j-th patterns.
Also, if you want to use pdist(), which is fine, you can convert the resulting array into the well-known square distance matrix using the function squareform().
So, in conclusion, if A is your dataset and S the distance matrix, you can use either
S=pdist(A');
S=squareform(S);
or
S=pdist2(A',A');
Now, regarding the storage point of view: you will certainly notice that such a matrix is symmetric. What MATLAB essentially proposes with the array returned by pdist() is to save space: because the matrix is symmetric, you can store half of it in a vector. Indeed, the array S has m(m-1)/2 elements, whereas the full matrix has m^2 elements (where m is the number of patterns in your training set). On the other hand, it is certainly trickier to access the vector, whereas the matrix form is absolutely straightforward.
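The same trade-off can be seen with the SciPy equivalents (pdist/squareform/cdist mirror MATLAB's pdist/squareform/pdist2; the data below is random, with items as rows rather than columns):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform, cdist

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 7))      # 10 items, 7 features each

condensed = pdist(A)                  # compact vector, m(m-1)/2 = 45 entries
S = squareform(condensed)             # full 10x10 symmetric distance matrix

assert condensed.shape == (10 * 9 // 2,)
assert np.allclose(S, cdist(A, A))    # same result as the pdist2-style route
# Convenient pairwise access, e.g. distance between items 4 and 5:
assert np.isclose(S[4, 5], np.linalg.norm(A[4] - A[5]))
```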
I'm not completely sure to understand what your question is, but if you want to access S(i, j) easily then the function squareform is made for this:
S = squareform(pdist(A'));
Best,
Please explain what happens to an image when we use the histeq function in MATLAB. A mathematical explanation would be really helpful.
Histogram equalization seeks to flatten your image histogram. Basically, it models the image as a probability density function (or, in simpler terms, a histogram where you normalize each entry by the total number of pixels in the image) and tries to ensure that every intensity is equally probable.
Histogram equalization is aimed at images that have poor contrast. Images that look too dark, too washed out, or too bright are good candidates. If you plot the histogram of such an image, the spread of the pixels is limited to a very narrow range. Histogram equalization flattens the histogram and gives you a better-contrast image; the effect on the histogram is that its dynamic range is stretched.
In terms of the mathematical definition, I won't bore you with the details and I would love to have some LaTeX to do it here, but it isn't supported. As such, I defer you to this link that explains it in more detail: http://www.math.uci.edu/icamp/courses/math77c/demos/hist_eq.pdf
However, the final equation that you get for performing histogram equalization is essentially a 1-to-1 mapping. For each pixel in your image, you extract its intensity, then run it through this function. It then gives you an output intensity to be placed in your output image.
Suppose that p_n is the probability of encountering a pixel with intensity n in your image (take the histogram bin count for intensity n and divide by the total number of pixels in the image). Given that you have L intensities in your image, the output intensity for an input intensity i is given by:
g_i = floor( (L-1) * sum_{n=0}^{i} p_n )
You add up all of the probabilities from pixel intensity 0, then 1, then 2, all the way up to intensity i. This is familiarly known as the Cumulative Distribution Function.
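A tiny worked example of that formula (sketched in NumPy on a made-up 8-level image) shows how the CDF mapping stretches a bunched-up histogram:

```python
import numpy as np

# g_i = floor((L-1) * sum_{n=0}^{i} p_n) on a toy 8-level image (L = 8).
L = 8
im = np.array([[0, 1, 1, 2],
               [1, 1, 2, 3],
               [1, 2, 3, 3]])        # intensities bunched in 0..3

counts = np.bincount(im.ravel(), minlength=L)
p = counts / im.size                 # histogram as probabilities
cdf = np.cumsum(p)                   # cumulative distribution function
g = np.floor((L - 1) * cdf).astype(int)

out = g[im]                          # apply the 1-to-1 mapping to each pixel
print(g)   # -> [0 3 5 7 7 7 7 7]: intensities 0..3 spread over the full 0..7 range
```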
MATLAB essentially performs histogram equalization using this approach. However, if you want to implement this yourself, it's actually pretty simple. Assume that you have an input image im that is of an unsigned 8-bit integer type.
function [out] = hist_eq(im, L)
    if (~exist('L', 'var'))
        L = 256;
    end
    h = imhist(im) / numel(im);            % probability of each intensity
    cdf = cumsum(h);                       % cumulative distribution function
    out = floor((L-1)*cdf(double(im)+1));  % map every pixel through the CDF
    out = uint8(out);                      % cast back to unsigned 8-bit
end
This function takes in an image that is assumed to be of unsigned 8-bit integer type. You can optionally specify the number of levels for the output; usually L = 256 for an 8-bit image, so if you omit the second parameter, L is assumed to be 256. The first line computes the probabilities, the next computes the Cumulative Distribution Function (CDF), and the two lines after that apply the equalization mapping and convert back to unsigned 8-bit integer. Note the explicit floor before the cast: MATLAB's uint8 rounds to the nearest integer rather than flooring. Also note the offset of 1 when indexing the CDF, because MATLAB starts indexing at 1 while the intensities in your image start at 0.
The MATLAB command histeq does pretty much the same thing, except that if you call histeq(im) it defaults to 64 output intensity levels. You can override this by specifying an additional parameter for how many intensity values should appear in the output, just as we did above: histeq(im, 256);. Calling this in MATLAB and using the function I wrote above should give you very similar results.
As a bit of an exercise, let's use an image that is part of the MATLAB distribution called pout.tif. Let's also show its histogram.
im = imread('pout.tif');
figure;
subplot(2,1,1);
imshow(im);
subplot(2,1,2);
imhist(im);
As you can see, the image has poor contrast because most of the intensity values fall in a narrow range. Histogram equalization will flatten the histogram and thus increase the contrast of the image. As such, try doing this:
out = histeq(im, 256); %//or you can use my function: out = hist_eq(im);
figure;
subplot(2,1,1);
imshow(out);
subplot(2,1,2);
imhist(out);
This is what we get:
As you can see the contrast is better. Darker pixels tend to move towards the darker end, while lighter pixels get pushed towards the lighter end. Successful result I think! Bear in mind that not all images will give you a good result when you try and do histogram equalization. Image processing is mostly a trial and error thing, and so you put a mishmash of different techniques together until you get a good result.
This should hopefully get you started. Good luck!
I'm looking for available code that can estimate the kernel density of a set of 2D weighted points. So far I found this option for non-weighted 2D KDE in MATLAB: http://www.mathworks.com/matlabcentral/fileexchange/17204-kernel-density-estimation
However, it does not support weights. Is there another implemented function or library that would come in handy here? I thought about "hacking" the problem: given a simple weight vector, say [2 1 3 1], I could literally repeat each sampled point twice, once, three times and once, respectively. I'm not sure that computation would be mathematically valid, though. The bigger issue is that my actual weight vector is decimal, so normalizing by the smallest weight and then scaling every other entry introduces rounding errors, especially when the weights are of the same order of magnitude.
Note: The ksdensity function in MATLAB has the weighted option but it is only for 1D data.
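One option outside MATLAB: SciPy's gaussian_kde has supported a weights argument since version 1.2, which handles decimal weights exactly and avoids the point-repetition hack; a minimal sketch with random data:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
points = rng.standard_normal((2, 100))   # 2 x N array of 2D samples
w = rng.uniform(0.5, 2.0, size=100)      # arbitrary decimal weights

# Weighted 2D KDE; the weights are normalized internally, so no
# rounding or repetition is needed.
kde = gaussian_kde(points, weights=w)
density = kde(np.array([[0.0], [0.0]]))  # evaluate at the origin
assert density[0] > 0
```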
Found this, so problem solved. (I guess): http://www.ics.uci.edu/~ihler/code/kde.html
I used this function and found it to be excellent. I discuss varying the n parameter (area over which density is calculated) in this Stack Overflow post, and it contains some examples of 2D KDE plots using contour3.
I have a very basic question. What is the basis of the normal probability plot i.e. what do the probabilities represent? I am testing for a standard normal distribution. My normplot (in MATLAB) revealed that the values were more or less in a straight line BUT the probability of 0.5 corresponded to a value other than zero.
My question is, how do I interpret this? Does this mean that my data is normally distributed but has a non-zero mean (i.e. not standard normal), or does this probability reflect something else? I tried Google, and one link said the probabilities are the cumulative probabilities from the z-table, but I can't figure out what to make of that.
Also in MATLAB, as long as the values fit the line drawn by the program (the red dotted line), do they come from a normal distribution? In one of my graphs the dotted line is very steep but the values fit it; does that mean the one or two values way outside this line are just outliers?
I'm very new to stats, so please help!
Thanks!
My question is, how do I interpret this? Does this mean that my data is normally distributed but has a non-zero mean (i.e. not standard normal) or does this probability only reflect something else?
You are correct. If you run normplot and get data very close to the fitted line, that means your data has a cumulative distribution function that is very close to a normal distribution. The 0.5 CDF point corresponds to the mean value of the fitted normal distribution. (Looks like about 0.002 in your case)
The reason you get a straight line is that the y-axis is nonlinear, and it's made to be "warped" in such a way that a perfect Gaussian cumulative distribution would map into a line: the y-axis marks are linear with the inverse error function.
When you look at the ends and they have steeper slopes than the fitted line, that means your distribution has shorter tails than a normal distribution, i.e. there are fewer outliers, perhaps due to some physical constraint that prevents excessive variation from the mean.
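What normplot computes can be sketched by hand (here in SciPy; the mean 0.002 and standard deviation 0.01 below are assumed to mimic the case above):

```python
import numpy as np
from scipy.stats import norm

# A normal probability plot pairs sorted data with normal quantiles.
# For normal data the pairs fall on a straight line, and the point at
# probability 0.5 sits at the fitted mean, not at zero.
rng = np.random.default_rng(0)
x = np.sort(rng.normal(loc=0.002, scale=0.01, size=2001))

n = len(x)
probs = (np.arange(1, n + 1) - 0.5) / n   # plotting positions
q = norm.ppf(probs)                       # theoretical quantiles (the y-axis warp)

# Data value at p = 0.5 is the sample median, close to the mean 0.002:
assert abs(x[n // 2] - 0.002) < 0.0015
# Near-perfect linearity of (q, x) for normal data:
assert np.corrcoef(q, x)[0, 1] > 0.995
```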
The normal distribution is a density function. The probability of any single value is 0, because the total probability (= 1) is distributed over an infinite number of values (it's a continuous function).
What you have in the graph (of the normal density) is how the probability is distributed (y axis) over the values (x axis). So what you can read off is the probability of an interval: between two points, from -infinity to any point, or from any point to +infinity. That probability is obtained by integrating the density from point1 to point2.
But you don't have to do this integral, since you have the z-table. The z-table gives you the probability of x lying between -infinity and x (applying the equation that relates x to z).
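A one-line check of that z-table relation (using SciPy's normal CDF; the mu and sigma values are arbitrary):

```python
from scipy.stats import norm

# The z-table lookup is just the standard normal CDF: for x drawn from
# N(mu, sigma), P(X <= x) = Phi((x - mu) / sigma).
mu, sigma = 5.0, 2.0
x = 7.0
z = (x - mu) / sigma      # z = 1.0
p = norm.cdf(z)

print(round(p, 4))   # -> 0.8413, the familiar z-table value for z = 1
```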
I don't have MATLAB here, but I guess the straight line you mention is the cumulative distribution function, which tells you the probability of x lying in [-infinity, x]; it is determined by the integral of the density from -infinity to x (or can be read from the z-table).
Sorry if my English is bad. I hope this was helpful.