I want to find ridges for a given image. (Ridges not edges!)
An example is like the image below
Intuitively, I think the Hessian matrix will work. So I hand-coded the Hessian kernels, starting from the 2D Gaussian equation as the link below describes.
How to build 2D hessian matrix kernel
I used surf to visualize the three second-order derivative kernels (D_xx, D_yy and D_xy) I created, and they all look correct.
I then applied these kernels and did 2D convolution with my image.
I am not sure what to do next. Should I compute eigenvalues and eigenvectors from D_xx, D_yy and D_xy? How can I pull ridge lines out of the image using the eigen-analysis of the 2-by-2 Hessian at each pixel?
Any idea, formula, or even code would be very helpful.
Attached is the code to generate the 2D Hessian kernels:
% N is the kernel size and sigma the Gaussian scale (both assumed defined)
[x, y] = meshgrid(round(-N/2):round(N/2), round(-N/2):round(N/2));
common = x.^2 + y.^2;
Lxx = ((-1 + x.^2/sigma^2).*exp(-common/(2*sigma^2))) / (2*pi*sigma^4);
Lyy = ((-1 + y.^2/sigma^2).*exp(-common/(2*sigma^2))) / (2*pi*sigma^4);
Lxy = ((x.*y)/(2*pi*sigma^6)).*exp(-common/(2*sigma^2));
% Note: do not normalize these by sum(kernel(:)). Second-derivative kernels
% sum to (nearly) zero -- sum(Lxy(:)) is exactly zero by symmetry -- so the
% division blows up or yields NaNs; for scale-normalized responses multiply
% by sigma^2 instead.
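For the eigen-analysis itself, a 2-by-2 symmetric matrix has closed-form eigenvalues, so no per-pixel loop is needed. A minimal sketch (assuming D_xx, D_yy and D_xy now denote the images produced by convolving with the kernels above; the threshold at the end is an arbitrary placeholder):
% Per-pixel eigenvalues of the symmetric 2x2 Hessian [D_xx D_xy; D_xy D_yy]
tmp  = sqrt((D_xx - D_yy).^2 + 4*D_xy.^2);
lam1 = (D_xx + D_yy + tmp)/2;  % larger eigenvalue
lam2 = (D_xx + D_yy - tmp)/2;  % smaller (more negative) eigenvalue
% A bright ridge has strongly negative curvature across the ridge, so -lam2
% serves as a simple ridge-strength measure (swap signs for dark ridges).
ridge_strength = -lam2;
ridge_mask = ridge_strength > 0.5*max(ridge_strength(:)); % placeholder threshold
The Hessian eigenvector associated with lam2 points across the ridge, so it can also be used to trace ridge lines if a mask is not enough.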
I've got an arbitrary probability density function discretized as a matrix in Matlab, that means that for every pair x,y the probability is stored in the matrix:
A(x,y) = probability
This is a 100x100 matrix, and I would like to be able to generate two-dimensional random samples (x,y) from this matrix and also, if possible, to calculate the mean and other moments of the PDF. I want to do this because, after resampling, I want to fit the samples to an approximated Gaussian Mixture Model.
I've been looking everywhere but I haven't found anything as specific as this. I hope you may be able to help me.
Thank you.
If you really have a discrete probability density function defined by A (as opposed to a continuous probability density function that is merely sampled into A), you can "cheat" by turning your 2D problem into a 1D problem.
%define the possible values for the (x,y) pair
row_vals = [1:size(A,1)]'*ones(1,size(A,2)); %all x values
col_vals = ones(size(A,1),1)*[1:size(A,2)]; %all y values
%convert your 2D problem into a 1D problem
A = A(:);
row_vals = row_vals(:);
col_vals = col_vals(:);
%calculate your fake 1D CDF, assumes sum(A(:))==1
CDF = cumsum(A); %remember, the first term out of cumsum is not zero
%(note: if A contains zero-probability bins, CDF has repeated values and
% the interp1 call below will error; strip zero bins first in that case)
%because of the operation we're doing below (interp1 followed by ceil)
%we need the CDF to start at zero
CDF = [0; CDF(:)];
%generate random values
N_vals = 1000; %give me 1000 values
rand_vals = rand(N_vals,1); %spans zero to one
%look into CDF to see which index the rand val corresponds to
out_val = interp1(CDF,[0:1/(length(CDF)-1):1],rand_vals); %spans zero to one
ind = ceil(out_val*length(A));
%using the inds, you can lookup each pair of values
xy_values = [row_vals(ind) col_vals(ind)];
I hope that this helps!
Chip
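For what it's worth, if you have the Statistics Toolbox, weighted sampling of linear indices with randsample is a shorter route (a sketch starting from the original, unflattened 100x100 matrix A):
%sample linear indices with replacement, weighted by the probabilities in A
N_vals = 1000;
ind = randsample(numel(A), N_vals, true, A(:)); %requires Statistics Toolbox
[x_samples, y_samples] = ind2sub(size(A), ind); %convert back to (x,y) pairs
xy_values = [x_samples, y_samples];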
I don't believe MATLAB has built-in functionality for generating multivariate random variables with an arbitrary distribution. As a matter of fact, the same is true for univariate random numbers. But while the latter can easily be generated by inverting the cumulative distribution function, inverse transform sampling does not extend directly to multivariate distributions, so generating such numbers is much messier (the main problem being that two or more variables can be correlated). So this part of your question is far beyond the scope of this site.
Since half an answer is better than no answer, here's how you can compute the mean and higher moments numerically using matlab:
%generate some dummy input
xv=linspace(-50,50,101);
yv=linspace(-30,30,100);
[x y]=meshgrid(xv,yv);
%define a discretized two-hump Gaussian distribution
A=floor(15*exp(-((x-10).^2+y.^2)/100)+15*exp(-((x+25).^2+y.^2)/100));
A=A/sum(A(:)); %normalized to sum to 1
%plot it if you like
%figure;
%surf(x,y,A)
%actual half-answer starts here
%get normalized pdf
weight=trapz(xv,trapz(yv,A));
A=A/weight; %A normalized to 1 according to trapz^2
%mean
mean_x=trapz(xv,trapz(yv,A.*x));
mean_y=trapz(xv,trapz(yv,A.*y));
So, the point is that you can perform a double integral on a rectangular mesh using two consecutive calls to trapz. This lets you integrate any quantity that has the same shape as your mesh; a drawback is that vector components have to be computed independently. If you only wish to compute quantities that can be parametrized with x and y (which are naturally the same size as your mesh), then you can get along without any additional thinking.
You could also define a function for the integration:
function res = trapz2(xv,yv,A,arg)
    if ~isscalar(arg) && any(size(arg)~=size(A))
        error('Size of A and arg must be the same!')
    end
    res = trapz(xv,trapz(yv,A.*arg));
end
This way you can compute stuff like
weight=trapz2(xv,yv,A,1);
mean_x=trapz2(xv,yv,A,x);
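Higher moments follow the same pattern; for example the variances and covariance (using the trapz2 helper and the means computed above):
var_x  = trapz2(xv,yv,A,(x-mean_x).^2);
var_y  = trapz2(xv,yv,A,(y-mean_y).^2);
cov_xy = trapz2(xv,yv,A,(x-mean_x).*(y-mean_y));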
NOTE: the reason I used a 101x100 mesh in the example is that the double call to trapz should be performed in the proper order. If you interchange xv and yv in the calls, you get the wrong answer due to inconsistency with the definition of A, but this will not be evident if A is square. I suggest avoiding symmetric quantities during the development stage.
I have a vector of x and y coordinates drawn from two separate unknown Gaussian distributions. I would like to fit a two-dimensional Gaussian (a 3D surface) to these points and evaluate this function at any x and y.
So far the only manner I've found of doing this is using a Gaussian Mixture model with a maximum of 1 component (see code below) and going into the handle of ezcontour to take the X, Y, and Z data out.
The problems with this method are firstly that it's a very ugly, roundabout way of getting this done, and secondly that the ezcontour command only gives me a 60x60 grid while I need a much higher resolution.
Does anyone know a more elegant and useful method that will allow me to find the underlying Gauss function and extract its value at any x and y?
Code:
GaussDistribution = fitgmdist([varX varY],1); %Not exactly the intention of fitgmdist, but it gets the job done.
h = ezcontour(@(x,y)pdf(GaussDistribution,[x y]),[-500 -400], [-40 40]);
The Gaussian distribution in general form is as follows (I am not allowed to upload a picture, so here is the formula):
1/((2*pi)^(D/2)*sqrt(det(Sigma)))*exp(-1/2*(x-Mu)*Sigma^-1*(x-Mu)');
where D is the data dimension (2 in your case), Sigma is the covariance matrix, and Mu is the mean of the data vectors.
Here is an example, in which a Gaussian is fitted to two vectors of samples randomly drawn from normal distributions with parameters N1(4,7) and N2(-2,4):
Data = [random('norm',4,7,30,1),random('norm',-2,4,30,1)];
X = -25:.2:25;
Y = -25:.2:25;
D = length(Data(1,:));
Mu = mean(Data);
Sigma = cov(Data);
P_Gaussian = zeros(length(X),length(Y));
for i = 1:length(X)
    for j = 1:length(Y)
        x = [X(i),Y(j)];
        P_Gaussian(i,j) = 1/((2*pi)^(D/2)*sqrt(det(Sigma)))...
            *exp(-1/2*(x-Mu)*Sigma^-1*(x-Mu)');
    end
end
mesh(P_Gaussian)
Run the code in MATLAB. For the sake of clarity I wrote the code this way; it can be written more efficiently from a programming point of view.
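As a side note: if the Statistics Toolbox is available, the double loop can be replaced by a single call to mvnpdf (a sketch under that assumption, reusing X, Y, Mu and Sigma from above):
[Xg,Yg] = meshgrid(X,Y); %grid of evaluation points
P = reshape(mvnpdf([Xg(:) Yg(:)],Mu,Sigma),size(Xg));
mesh(X,Y,P) %same surface, computed without loops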
I have a certain problem while implementing multivariate Gaussian distribution for anomaly detection.
I have referred the formula from Andrew Ng notes
http://www.holehouse.org/mlclass/15_Anomaly_Detection.html
below is the problem I face
Suppose I have a data set with n = 2 features and m training examples, and I want to compute the multivariate Gaussian probability p(x; mu, sigma), which should be an [m*1] matrix because it produces one estimated Gaussian value per example while accounting for feature correlation.
The problem I face is that I am unable to use the formula to produce the [m*1] matrix.
I am using Octave as IDE to develop the algorithm.
Below is a snapshot showcasing my problem
The multiplication highlighted in red confuses me, because its left-hand side evaluates to just a real number, not the [m*1] vector I expect.
Please help me understand where I am going wrong.
Thanks
I think you got the dimensions wrong.
Let's assume you have a 2-dimensional (n=2) data of m instances. We can store this data as a n-by-m matrix in MATLAB (columns are data instances, rows represent features/dimensions). In this case we have:
X is the data matrix of size nxm; each instance x = X(:,i) is a vector of size nx1 (a column vector in our convention).
mu is the mean vector (mu = mean(X,2)). This is also a column vector of the same size as an instance, nx1.
sigma is the covariance matrix (sigma = cov(X.')). It has size nxn (it describes how each dimension co-varies with every other dimension).
So the part that you highlighted in red involves expressions of the following sizes:
= ([nx1] - [nx1])' * [nxn] * ([nx1] - [nx1])
= [1xn] * [nxn] * [nx1]
= 1x1
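So for a single instance the quadratic form collapses to a scalar; to obtain the [m x 1] vector, evaluate it once per column of X. A minimal sketch (assuming X, mu, sigma as defined above; implicit broadcasting requires Octave or MATLAB R2016b+):
n  = size(X,1);                  %number of features
Xc = X - mu;                     %center every column (n-by-m)
q  = sum(Xc .* (sigma \ Xc), 1); %1-by-m, one quadratic form per instance
p  = exp(-0.5*q).' / ((2*pi)^(n/2)*sqrt(det(sigma))); %the [m x 1] result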
I am trying to find a discrete approximation to the Gaussian smoothing operation, as shown in the link :
http://bit.ly/1cSgkwt
G is some discrete smoothing kernel, a Gaussian in this case and * is the convolution operation. Basically, I want to apply a smoothing kernel to each pixel in the image. I am doing this in MATLAB and using the following code to create the matrix G, which is naive and hence painfully slow :
z = rgb2gray(imread('train_02463_1.bmp'));
im_sz = size(z);
ksize = 5;
% Gaussian kernel of size (2*ksize+1) x (2*ksize+1)
gw_mat = g_sigma(1,2*ksize+1)'*g_sigma(1,2*ksize+1);
G = sparse(prod(im_sz), prod(im_sz)); % one row per image pixel (row_num indexes the full image)
for i = ksize+1:im_sz(1)-ksize
    for j = ksize+1:im_sz(2)-ksize
        [x,y] = meshgrid(i-ksize:i+ksize, j-ksize:j+ksize);
        row_num = sub2ind(im_sz,i,j);
        colnums = sub2ind(im_sz,x,y);
        G(row_num,colnums(:)) = gw_mat(:)';
    end
end
Is there a more efficient way to do this ?
EDIT: I should apologize for not completely specifying the problem. Most of the answers below are valid, but the issue is that the above approximation is part of an optimization objective in which the variable is z. The whole problem looks something like this:
http://goo.gl/rEE02y
Thus, I have to pre-generate a matrix G which approximates a smoothing function in order to feed this objective to a solver. I am using cvx, if that helps.
Typically people do this by applying two 1D Gaussian filters sequentially instead of one 2D filter (the idea of separable filters). You can compute a fast 1D horizontal Gaussian pass over each row, save the result in a temporary image, and then apply a vertical Gaussian pass to that. The expected speedup is significant: O(ksize*ksize) -> O(ksize+ksize) per pixel.
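For instance, a minimal sketch with conv2 (no toolboxes needed; sigma and the kernel radius are placeholders, and z is the image from the question):
sigma = 1; ksize = 5;
t = -ksize:ksize;
g = exp(-t.^2/(2*sigma^2));
g = g/sum(g); % normalized 1D Gaussian
smoothed = conv2(g(:), g(:)', double(z), 'same'); % column pass, then row pass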
You can use approximate Gaussian smoothing via box filters, e.g. see integgaussfilt.m: "This function approximates Gaussian filtering by repeatedly applying averaging filters. The averaging is performed via integral images, which results in a fixed and very low computational cost that is independent of the Gaussian size."
The fastest way to apply a spatially invariant filter to an image in MATLAB is imfilter.
imfilter automatically detects separable kernels and applies them in two passes (vertical / horizontal).
For creating some known and useful kernels, have a look at fspecial.
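A usage sketch, assuming the Image Processing Toolbox and the sigma/ksize values from above:
h = fspecial('gaussian', 2*ksize+1, sigma); % 2D Gaussian kernel
smoothed = imfilter(double(z), h, 'replicate'); % separable path applied automatically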
I finished training an SVM and got data X, Y: X is the feature matrix with only 2 dimensions, and Y contains the classification labels. Because the data has only two dimensions, I would like to draw the decision boundary produced by the support vectors.
I use contourf in MATLAB to do the trick, but I really find it hard to understand how to use the function.
I wrote:
#1 try:
contourf(X);
#2 try:
contourf([X(:,1) X(:,2) Y]);
#3 try:
Z(:,:,1)=X(Y==1,:);
Z(:,:,2)=X(Y==2,:);
contourf(Z);
None of these work correctly. I checked the MATLAB help files, and most of them construct Z from a function, so I really do not know how to form the correct Z matrix.
If you're using the svmtrain and svmclassify commands from Bioinformatics Toolbox, you can just use the additional input argument (...'showplot', true), and it will display a scatter plot with a decision boundary and the support vectors highlighted.
If you're using your own SVM, or a third-party tool such as libSVM, what you probably need to do is to:
Create a grid of points in your 2D input feature space using the meshgrid command
Classify those points using your trained SVM
Plot the grid of points and the classifications using contourf.
For example, in kind-of-MATLAB-but-pseudocode, assuming your input features are called X1 and X2:
numPtsInGrid = 100;
x1Range = linspace(x1lower, x1upper, numPtsInGrid);
x2Range = linspace(x2lower, x2upper, numPtsInGrid);
[X1, X2] = meshgrid(x1Range, x2Range);
Z = classifyWithMySVMSomehow([X1(:), X2(:)]);
contourf(X1, X2, reshape(Z, size(X1)))
Hope that helps.
I know it's been a while but I will give it a try in case someone else will come up with that issue.
Assume we have a 2D training set so as to train an SVM model, in other words the feature space is a 2D space. We know that a kernel SVM model leads to a score (or decision) function of the form:
f(x) = sum_{i=1}^{N} a_i y_i k(x, x_i) + b
where N is the number of support vectors, x_i is the i-th support vector, a_i is the estimated Lagrange multiplier and y_i the associated class label. The values (scores) of the decision function indicate, in a way, the distance of the observation x from the decision boundary.
Now assume that for every point (X,Y) in the 2D feature space we can find the corresponding score of the decision function. We can plot the results in 3D Euclidean space, where X corresponds to values of the first feature f1, Y to values of the second feature f2, and Z to the value of the decision function at (X,Y). The intersection of this 3D surface with the Z=0 plane gives us the decision boundary in the two-dimensional feature space. In other words, the decision boundary is formed by the (X,Y) points whose scores equal 0. Seems logical, right?
Now in MATLAB you can easily do that, by first creating a grid in X,Y space:
d = 0.02;
[x1Grid,x2Grid] = meshgrid(minimum_X:d:maximum_X,minimum_Y:d:maximum_Y);
d is selected according to the desired resolution of the grid.
Then, for a trained model SVMModel, find the scores at every grid point:
xGrid = [x1Grid(:),x2Grid(:)];
[~,scores] = predict(SVMModel,xGrid);
Finally plot the decision boundary
figure;
contour(x1Grid,x2Grid,reshape(scores(:,2),size(x1Grid)),[0 0],'k');
contour gives us a 2D graph where information about the 3rd dimension is depicted as solid lines in the 2D plane. These lines trace iso-response values, in other words (X,Y) points with the same Z value. In our case, the [0 0] level requested from contour is exactly the decision boundary.
I hope this helps make it all clearer. You can find very useful information and examples in the following links:
MATLAB's example
Representation of decision function in 3D space