I am trying to find the first derivative of a Gaussian for an image (using Matlab) and I tried two ways: one using gradient and one calculating the derivative analytically, but the results look different from each other.
Method 1
k = 7; s = 3; % kernel size, standard deviation
f = fspecial('gaussian', [k k], s)
[Gx,Gy] = gradient(f)
Method 2
k = 7; s = 3; % kernel size, standard deviation
[x,y] = meshgrid(-floor(k/2):floor(k/2), -floor(k/2):floor(k/2))
G = exp(-(x.^2+y.^2)/(2*s^2))/(2*pi*(s^2))
Gn=G/sum(G(:))
Gx = -x.*Gn/(s^2)
Gy = -y.*Gn/(s^2)
Gx and Gy should be the same from the two methods, but there is a difference in the values. Does anyone know why that is? I was expecting them to be the same. Is there a preferred way to calculate the derivative?
Thank you.
Edit: I changed the G definition per Conrad's suggestion, but the problem still persists.
This looks wrong:
G = exp(-(x.^2+y.^2)/(2*s^2))/(2*pi*s);
Assuming this is supposed to be the normal density for (X,Y), where X and Y are independent zero-mean random variables with equal standard deviation s, this should be:
G = exp(-(x.^2+y.^2)/(2*s^2))/(2*pi*(s^2));
(The term in front of the exponential in each one-dimensional component is 1/(sqrt(2*pi)*s), and it appears twice, giving 1/(2*pi*s^2).)
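As a quick numeric sanity check of that constant (my own sketch, not part of the answer), the 2-D density should integrate to approximately 1:
s = 3;
dx = 0.1;
[x,y] = meshgrid(-30:dx:30);
G = exp(-(x.^2 + y.^2)/(2*s^2))/(2*pi*s^2);
sum(G(:))*dx^2   % should be very close to 1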
According to the docs of fspecial (http://www.mathworks.com/help/images/ref/fspecial.html), the Gaussian kernel is computed from the bare exponential and then normalized so that its entries sum to 1, so it seems that you should alter G to be:
G = exp(-(x.^2+y.^2)/(2*s^2));
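If it helps, here is a small check (my own, hedged) that the normalized exponential matches fspecial's output for the kernel size and sigma from the question:
k = 7; s = 3;
[x,y] = meshgrid(-floor(k/2):floor(k/2));
G  = exp(-(x.^2 + y.^2)/(2*s^2));
Gn = G/sum(G(:));                  % normalize to sum to 1, as fspecial does
f  = fspecial('gaussian', [k k], s);
max(abs(Gn(:) - f(:)))             % should be ~0 up to round-off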
Given a signal x(t), we need to find the signal that is symmetric with respect to the Y-axis, i.e. x(-t).
If it's helpful to you, here's my code so far:
t = [-5:0.01:5];
wt = (t>=0)&(t<=1);
r = @(t) t/5;
x = r(t).*wt;
%reflection - HERE IS WHERE I AM STUCK, basically looking for v(t) = x(-t)
%Shift by 2
y = v(t-2);
%The rest of the program - printing plots basically
I have tried using these:
v = x(t(1:end));
v = x(t(end:-1:1));
v = x(fliplr(t));
But it's not correct, since I get the error "Array indices must be positive integers or logical values.", as expected. Any ideas?
One solution is to use something like this:
x=signal; % with length 2*N+1 and symmetric
t= -N:N;
Now, say you want the value at index t = -2:
x(find(t==-2))
For a ramp signal, for instance:
signal=[r(end:-1:1) 0 r]
assuming that r is a row vector of length N.
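Putting it together, a minimal runnable sketch of this idea (the concrete values of N and r are my own, just for illustration):
N = 5;
r = (1:N)/N;                  % example ramp samples (row vector of length N)
signal = [r(end:-1:1) 0 r];   % symmetric signal of length 2*N+1
t = -N:N;                     % matching time axis
signal(find(t == -2))         % value of the signal at t = -2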
You should first define the signal as a function and then sample it, not the other way round.
For instance, here I define a signal s(t) which is a windowed ramp:
s = @(t) t.*((t>=0)&(t<=1))
Then I can evaluate samples of the signal and of its reflection:
t = -5:0.01:5;
plot(t,s(t),t,s(-t))
What worked for me was defining a reflect function as follows:
function val = reflect(t)
val = -t;
end
And then I used it together with the shift in order to achieve my goal.
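For example (a hedged sketch; the signal handle below is my own restatement of the windowed ramp from the question):
s = @(t) (t/5).*((t>=0)&(t<=1));   % windowed ramp as a function handle
t = -5:0.01:5;
y = s(reflect(t - 2));             % reflect, then shift by 2: y(t) = s(2 - t)
plot(t, y)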
Given any sampled signal in an arbitrary time interval:
t = [-5.003:0.01:10];
x = randn(size(t));
You can reflect x around t=0 with:
t = -flip(t);
x = flip(x);
Note that in the example above, t=0 is not sampled. This is not necessary for this method.
I have a cubic expression here
I am trying to determine and plot δ in this expression for P values from 0.0 to 5000. I'm really struggling to get an expression for δ in terms of the pressure P.
clear all;
close all;
t = 0.335*1e-9;
r = 62*1e-6;
delta = 1.2*1e+9;
E = 1e+12;
v = 0.17;
P = 0:100:5000
P = (4*delta*t)*w/r^2 + (2*E*t)*w^3/((1-v)*r^4);
I would appreciate if anyone could provide pointers.
I suggest two simple methods.
You evaluate P as a function of delta and then plot(P,delta). This is quick and dirty, but if all you need is a plot it will do. The inconvenience is that you may have to do some trial and error so that the resulting P values cover the range you want, but you can also take a large enough value of delta_max and then restrict the x-axis limits of the plot (see the sketch below).
Your function is a simple cubic, which you can solve analytically (see here if you are lost) to invert P(delta) into delta(P).
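For the first option, a minimal sketch (using the parameter values and the variable name w from your code; the sweep range for w is a guess and may need adjusting):
t = 0.335*1e-9; r = 62*1e-6; delta = 1.2*1e9; E = 1e12; v = 0.17;
w = linspace(0, 5e-6, 1000);                          % guessed range for the unknown w (your delta)
P = (4*delta*t)*w/r^2 + (2*E*t)*w.^3/((1-v)*r^4);     % evaluate P(w)
plot(P, w), xlim([0 5000])                            % read w off as a function of P
xlabel('P'), ylabel('w')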
What you want is the functional inverse of your expression, i.e., δ as a function of P. Since it's a cubic polynomial, you can expect up to three solutions (roots) for a given value of P. However, I'm guessing that you're only interested in real-valued solutions and nonnegative values of P. In that case there's just one real root for each value of P.
Given the values of your parameters, it makes most sense to solve this numerically using fzero. Using the parameter names in your code (different from equations):
t = 0.335*1e-9;
r = 62*1e-6;
delta = 1.2*1e9;
E = 1e12;
v = 0.17;
f = @(w,p)2*E*t*w.^3/((1-v)*r^4)+4*delta*t*w/r^2-p;
P = 0:100:5000;
w0 = [0 1]; % Bounded initial guess, valid up to very large values of P
w_sol = zeros(length(P),1);
for i = 1:length(P)
w_sol(i) = fzero(@(w)f(w,P(i)),w0); % Find solution for each P
end
figure;
plot(P,w_sol);
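As an optional sanity check (my addition, not part of the original approach), you can confirm that the residual of the cubic is essentially zero at each returned solution:
residuals = arrayfun(@(i) f(w_sol(i), P(i)), 1:length(P));
max(abs(residuals))   % should be close to zero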
You could also solve this using symbolic math:
syms w p
t = 0.335*sym(1e-9);
r = 62*sym(1e-6);
delta = 1.2*sym(1e9);
E = sym(1e12);
v = sym(0.17);
w_sol = solve(p==2*E*t*w^3/((1-v)*r^4)+4*delta*t*w/r^2,w);
P = 0:100:5000;
w_sol = double(subs(w_sol(1),p,P)); % Plug in P values and convert to floating point
figure;
plot(P,w_sol);
Because of your numeric parameter values, solve returns an answer in terms of three RootOf objects, the first of which is the real one you want.
So I'm just trying to plot 4 different subplots, varying the increment: first dx=5, then dx=1, dx=0.1 and dx=0.01, for 0<=x<=20.
I tried this:
%for dx = 5
x = 0:5:20;
fx = 2*pi*x *sin(x^2)
plot(x,fx)
However, I get the error "Inner matrix dimensions must agree." Then I tried this:
x = 0:5:20
fx = (2*pi).*x.*sin(x.^2)
plot(x,fx)
I get a figure, but I'm not entirely sure if this would be the same as what I am trying to do initially. Is this correct?
The initial error arose because ^ and * are matrix operations: a vector cannot be matrix-squared (x^2), and two vectors of the same shape cannot be matrix-multiplied (x * sin(x^2)). Adding the . before the * and ^ operators is correct here, since that performs the operation element-wise on the vectors. So yes, this is correct.
Also, as a slightly more advanced feature, you can use an anonymous function to tidy up the expression:
fx = @(x) 2*pi.*x.*sin(x.^2); % function of x
x = 0:5:20;
plot(x,fx(x));
hold('on');
x = 0:1:20;
plot(x,fx(x));
hold('off');
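And since the original goal was four subplots for dx = 5, 1, 0.1 and 0.01, here is one possible way to loop over the step sizes (a sketch; the anonymous function is repeated so the snippet is self-contained):
fx = @(x) 2*pi*x.*sin(x.^2);   % same function of x as above
dx = [5 1 0.1 0.01];
figure;
for i = 1:numel(dx)
    x = 0:dx(i):20;
    subplot(2,2,i);
    plot(x, fx(x));
    title(sprintf('dx = %g', dx(i)));
end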
I am trying to estimate the pose of a camera from two images taken with it: I detect features in the images, match them, compute the fundamental matrix, use the camera intrinsics to compute the essential matrix, and then decompose it to find the rotation and translation.
Here is the Matlab code:
I1 = rgb2gray(imread('1.png'));
I2 = rgb2gray(imread('2.png'));
points1 = detectSURFFeatures(I1);
points2 = detectSURFFeatures(I2);
points1 = points1.selectStrongest(40);
points2 = points2.selectStrongest(40);
[features1, valid_points1] = extractFeatures(I1, points1);
[features2, valid_points2] = extractFeatures(I2, points2);
indexPairs = matchFeatures(features1, features2);
matchedPoints1 = valid_points1(indexPairs(:, 1), :);
matchedPoints2 = valid_points2(indexPairs(:, 2), :);
F = estimateFundamentalMatrix(matchedPoints1,matchedPoints2);
K = [2755.30930612600,0,0;0,2757.82356074384,0;1652.43432833339,1234.09417974414,1];
%figure; showMatchedFeatures(I1, I2, matchedPoints1, matchedPoints2);
E = transpose(K)*F*K;
W = [0,-1,0;1,0,0;0,0,1];
Z = [0,1,0;-1,0,0;0,0,0];
[U,S,V] = svd(E);
R = U*inv(W)*transpose(V);
T = U(:,3);
thetaX = radtodeg(atan2(R(3,2),R(3,3)));
thetaY = radtodeg(atan2(-R(3,1),sqrt(R(3,2)^2 +R(3,3)^2)));
thetaZ = radtodeg(atan2(R(2,1),R(1,1)));
The problem I am facing is that R and T are always incorrect.
ThetaZ is most of the time equal to ~90. If I repeat the calculation many times I sometimes get the expected angles, but only in some cases.
I don't understand why. It might be because the fundamental matrix I calculated is wrong, or is there somewhere else I am going wrong?
Also, what scale/units is T (the translation vector) in? Or is it determined differently?
P.S. New to computer vision...
Please note that from decomposing E, 4 solutions are possible (2 possible rotations X 2 possible translations).
Specifically regarding R, it can also be:
R = U*W*transpose(V);
Similarly, T can also be:
T = -U(:,3);
To check if this is your bug, please post here all the 4 possible solutions for a given case where you get ThetaZ~90.
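For reference, a minimal sketch of how you might list all four candidates (reusing E and W from your code; this is only an illustration, not a full pose-recovery routine):
[U,S,V] = svd(E);
R1 = U*W*transpose(V);            % first rotation candidate
R2 = U*transpose(W)*transpose(V); % second rotation candidate
T1 =  U(:,3);                     % first translation candidate
T2 = -U(:,3);                     % second translation candidate
% The physically valid (R,T) pair is normally selected by triangulating a few matched
% points and keeping the combination that places them in front of both cameras.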
Another thing I would check (since you have K), is estimating the essential matrix directly (without going through fundamental matrix): http://www.mathworks.com/matlabcentral/fileexchange/47032-camera-geometry-algorithms/content//CV/CameraGeometry/EssentialMatrixFrom2DPoints.m
Try transposing K. The K that you get from estimateCameraParameters assumes row-vectors post-multiplied by a matrix, while the K in most textbooks assumes column-vectors pre-multiplied by a matrix.
Edit: In the R2015b release of the Computer Vision System Toolbox there is a cameraPose function, which computes relative orientation and location from the fundamental matrix.
U and V need to be enforced to be in SO(3): http://mathworld.wolfram.com/SpecialOrthogonalMatrix.html
In other words, if U and/or V has a negative determinant, the last column of that matrix needs to be negated: (det(U) < 0) => U(:,3) = -U(:,3).
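In code, that correction might look like this (a sketch; U and V as returned by svd(E) above):
if det(U) < 0, U(:,3) = -U(:,3); end   % flip the last column so det(U) = +1
if det(V) < 0, V(:,3) = -V(:,3); end   % flip the last column so det(V) = +1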
Best Regards.
I’m currently a Physics student and for several weeks have been compiling data related to ‘Quantum Entanglement’. I’ve now got to a point where I have to fit my data (which should resemble a cos² graph, and does) to a sort of “best fit” cos² curve. The lab script says the following:
A more precise determination of the visibility V (this is basically how 'clean' the data is) follows from the best fit to the measured data using the function:
f(b) = (A/2) * [1 - V*sin((b - b_center)/P)]
Granted this probably doesn’t mean much out of context, but essentially A is the amplitude, b is an angle and P is the periodicity. Hence this is also a “wave” like the experimental data I have found.
From this I understand, as previously mentioned, I am making a “best fit” curve. However, I have been told that this isn’t possible with Excel and that the best approach is Matlab.
I know intermediate JavaScript but do not know Matlab and was hoping for some direction.
Is there a tutorial I can read for this? Is it possible for someone to go through it with me? I really have no idea what it entails, so any feed back would be greatly appreciated.
Thanks a lot!
Initial steps
I guess we should begin by getting a representation in Matlab of the function that you're trying to model. A direct translation of your formula looks like this:
function y = targetfunction(A,V,P,bc,b)
y = (A/2) * (1 - V * sin((b-bc) / P));
end
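Just to get a feel for the model's shape, you could plot it with some illustrative parameter values (my own choice, the same ones used to generate data below):
b = linspace(0, 2*pi, 200);
plot(b, targetfunction(2, 1, 0.7, 0, b))   % A = 2, V = 1, P = 0.7, bc = 0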
Getting hold of the data
My next step is going to be to generate some data to work with (you'll use your own data, naturally). So here's a function that generates some noisy data. Notice that I've supplied some values for the parameters.
function [y b] = generateData(npoints,noise)
A = 2;
V = 1;
P = 0.7;
bc = 0;
b = 2 * pi * rand(npoints,1);
y = targetfunction(A,V,P,bc,b) + noise * randn(npoints,1);
end
The function rand generates random points on the interval [0,1], and I multiplied those by 2*pi to get points randomly on the interval [0, 2*pi]. I then applied the target function at those points, and added a bit of noise (the function randn generates normally distributed random variables).
Fitting parameters
The most complicated function is the one that fits a model to your data. For this I use the function fminunc, which does unconstrained minimization. The routine looks like this:
function [A V P bc] = bestfit(y,b)
x0(1) = 1; %# A
x0(2) = 1; %# V
x0(3) = 0.5; %# P
x0(4) = 0; %# bc
f = @(x) norm(y - targetfunction(x(1),x(2),x(3),x(4),b));
x = fminunc(f,x0);
A = x(1);
V = x(2);
P = x(3);
bc = x(4);
end
Let's go through it line by line. To minimize a function in Matlab, it needs to take a single vector as its parameter, so we pack our four parameters into one vector. The first four lines set the initial guess for that vector; I used values that are close to, but not the same as, the ones I used to generate the data.
Then I define the function f that I want to minimize. It takes a single argument x, which it unpacks and feeds to targetfunction, along with the points b in our dataset. Hopefully the results are close to y. We measure how far they are from y by subtracting from y and applying the function norm, which squares every component, adds them up and takes the square root (i.e. it computes the Euclidean length of the residual vector).
Then I call fminunc with our function to be minimized and the initial guess for the parameters. It runs an iterative optimization to find the parameter values that minimize f, and returns them in the vector x.
Finally, I unpack the parameters from the vector x.
Putting it all together
We now have all the components we need, so we just want one final function to tie them together. Here it is:
function master
%# Generate some data (you should read in your own data here)
[f b] = generateData(1000,1);
%# Find the best fitting parameters
[A V P bc] = bestfit(f,b);
%# Print them to the screen
fprintf('A = %f\n',A)
fprintf('V = %f\n',V)
fprintf('P = %f\n',P)
fprintf('bc = %f\n',bc)
%# Make plots of the data and the function we have fitted
plot(b,f,'.');
hold on
plot(sort(b),targetfunction(A,V,P,bc,sort(b)),'r','LineWidth',2)
end
If I run this function, I see this being printed to the screen:
>> master
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the function tolerance.
A = 1.991727
V = 0.979819
P = 0.695265
bc = 0.067431
And the following plot appears:
That fit looks good enough to me. Let me know if you have any questions about anything I've done here.
I am a bit surprised as you mention f(a) and your function does not contain an a, but in general, suppose you want to plot f(x) = cos(x)^2
First determine for which values of x you want to make a plot, for example
xmin = 0;
stepsize = 1/100;
xmax = 6.5;
x = xmin:stepsize:xmax;
y = cos(x).^2;
plot(x,y)
However, note that this approach works just as well in Excel; you just have to do some work to get your x values and the function into the right cells.