I'm trying to implement the homogeneous transformation from the disparity image to the virtual disparity image, following the paper by Suganuma et al., "An Obstacle Extraction Method Using Virtual Disparity Image".
After doing the matrix computations described in the paper, I arrive at a global homogeneous transformation matrix that describes just a translation of -27.7 in the v direction, which makes sense.
Now, to make this transformation, I implemented a loop in MATLAB:
virtual_disparity = zeros(size(disparityMap));
% Homogeneous vector of a point of the disparityMap: U = [u/d v/d 1/d 1]' (4x1)
U = zeros(4,1);
U_v = zeros(4,1);
for i = 1:size(disparityMap,1)      % rows --> y
    for j = 1:size(disparityMap,2)  % cols --> x
        d = disparityMap(i,j);      % (i,j) --> (row,col) --> (y,x)
        U = [j/d i/d 1/d 1]';       % [u/d v/d 1/d 1]'
        U_v = B*U;                  % B is the whole homogeneous transform
        U_v = U_v./U_v(4);
        u_v_x = U_v(1);             % u_v_j
        u_v_y = U_v(2);             % u_v_i
        if (u_v_x > 1) && (u_v_x <= size(virtual_disparity,2)) && ...
           (u_v_y > 1) && (u_v_y <= size(virtual_disparity,1))
            virtual_disparity(round(u_v_y), round(u_v_x)) = disparityMap(i,j);
        end
    end
end
Now, the problem is that the virtual disparity I get doesn't make any sense, since it doesn't even correspond to the transformation described by B, which, as I said, is:
1.0000 0 0.0000 0
0 1.0000 0.0000 -27.7003
0 0 1.0000 0
0 0 0 1.0000
These are the disparity map and the virtual disparity, respectively (images omitted).
I've been rechecking it all day and I can't find the error.
Finally, a colleague helped me and we found what was wrong. I was assuming that the final coordinates U_v would be given in the form [u v 0 1]', which actually doesn't make much sense. In fact they are given, like the input coordinates, in the form [u/d v/d 1/d 1]'. So, instead of normalizing them by dividing by element 4, as I was doing, I must divide them by element 3 (1/d).
To sum up, it was just an error in the line:
U_v = U_v./U_v(4);
Which must be substituted by:
U_v = U_v./U_v(3);
Now the image, although a little sparser than I expected, is similar to the one in the paper.
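For reference, here is a minimal vectorized sketch of the corrected transform. This is my own rewrite, not code from the paper; it assumes R2016b+ implicit expansion and that invalid disparities are 0 or NaN (they get filtered out after the division):
[rows, cols] = size(disparityMap);
[jj, ii] = meshgrid(1:cols, 1:rows);                  % jj = u (x), ii = v (y)
d = disparityMap(:)';                                 % disparities as a 1xN row vector
U = [jj(:)'./d; ii(:)'./d; 1./d; ones(1, numel(d))];  % 4xN homogeneous points
U_v = B*U;
U_v = U_v ./ U_v(3,:);                                % normalize by 1/d (row 3), not row 4
u_v = round(U_v(1,:));                                % virtual column indices
v_v = round(U_v(2,:));                                % virtual row indices
valid = isfinite(u_v) & isfinite(v_v) & ...
        u_v >= 1 & u_v <= cols & v_v >= 1 & v_v <= rows;
virtual_disparity = zeros(rows, cols);
virtual_disparity(sub2ind([rows cols], v_v(valid), u_v(valid))) = d(valid);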
I'm multiplying two matrices, A and B. The product should be the identity matrix, but sometimes, instead of exact 0s and 1s, I obtain entries like 1.0000 or -0.0000. Of course this is caused by one of the matrices containing floating-point numbers.
Is it possible to convert these entries to integers automatically (i.e. -0.0000 does not make any sense, and 1.0000 could simply be 1)?
If I understood right, you just want to check that the result of the multiplication does not differ much from the identity matrix. You could check this with code like:
Result = A*B
Id = eye(size(Result))
tol = .2  % tolerance (renamed from eps, which would shadow the MATLAB built-in)
% bigError == 1 if any entry differs from the identity by more than tol, 0 otherwise:
bigError = any(abs(Result(:) - Id(:)) > tol)
You can follow the code below to achieve the rounding suggested above without unexpectedly hiding a bug, since it only rounds values that are already very close to 0 or 1.
% Values in the matrix to be rounded to, if within tol
RoundElim = [0 1];
tol = 1e-12;  % ensure that tol is greater than eps
              % (floating-point relative accuracy)
n = 4;
In = eye(n);
A = rand(n,n);
B = In/A;     % inverse of A
C = A*B
Cprev = C;
% Eliminate rounding error in entries near 0 and 1
for i = 1:numel(RoundElim)
    C(abs(C - RoundElim(i)) < tol) = RoundElim(i);
end
C
ErrorInf = norm(C - Cprev, 'inf')
If you just want to check that every entry in your matrix is close to that in the corresponding identity matrix, I would suggest using the 'inf' norm.
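For example, a minimal sketch of that check (taking the infinity norm of the vectorized difference, which is the largest entry-wise deviation):
C = A*B;
Id = eye(size(C));
maxErr = norm(C(:) - Id(:), inf)  % largest absolute entry-wise deviation from identity
ok = maxErr < 1e-10               % the tolerance is an arbitrary example; pick one for your application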
I have two variables, both of class double:
X = 11x3 matrix (showing the number of Negative, Neutral, and Positive elements in each row)
Y = 11x1 vector (showing prices)
How would I show the correlation between these two variables, and how would I fit a linear regression model to them?
I have tried :
corrcoef([X,Y])
ans =
1.0000 0.3119 0.6753 0.0996
0.3119 1.0000 0.4582 -0.0565
0.6753 0.4582 1.0000 -0.0627
0.0996 -0.0565 -0.0627 1.0000
But I'm not sure if this is correct.
Many thanks
The specific problem with your code is that in your line corrcoef([X,Y]) you lumped your X and Y into one variable. You can definitely get the answer you want out of this matrix (the off-diagonal terms of the last column are the correlations between the columns of X and your Y), but this might not be quite what you were expecting.
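Concretely, with the 4x4 matrix you printed, the correlations of the three columns of X with Y are the off-diagonal entries of the last column:
R = corrcoef([X, Y]);
rXY = R(1:3, 4)   % from your output: [0.0996; -0.0565; -0.0627]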
When you are unsure, I always recommend breaking the problem down into the smallest steps. In this case, things are perhaps confusing for you because your X has three columns while your Y only has one column. What does corrcoef do in this case? If you're not sure, I'd suggest breaking it down into smaller steps...
For the operation that you are interested in (correlation with Y and a linear regression), there is no interdependence between the three columns of X. So, a good simplifying step would be to deal with the 3 columns independently. You can do it in a for loop (yes, you can do it all vectorized at once, but doing it in a for loop makes it easier to understand when one is unsure)...
% See the correlation between the two variables
for I = 1:3
    x_foo = X(:,I);
    % http://www.mathworks.com/help/matlab/ref/corrcoef.html
    c = corrcoef(x_foo, Y)
end
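And if you later want the vectorized version mentioned above, here is one base-MATLAB sketch (it assumes R2016b+ for implicit expansion; use bsxfun on older releases):
Xc = X - mean(X, 1);   % center each column of X
Yc = Y - mean(Y);      % center Y
r = (Xc' * Yc) ./ (sqrt(sum(Xc.^2, 1))' * sqrt(sum(Yc.^2)))  % 3x1 Pearson correlations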
Then you can do the next step: the linear regression. Use the polyfit function to fit a line.
figure;
for I = 1:3
    x_foo = X(:,I);
    % http://www.mathworks.com/help/matlab/ref/polyfit.html
    N = 1;  % order of the desired polynomial; N=1 means a line
    p = polyfit(x_foo, Y, N);  % N=1 fits a line
    % plot the data and the fit
    subplot(3,1,I)
    plot(x_foo, Y, 'o', x_foo, polyval(p, x_foo), 's');
    legend('Data', 'Linear Fit');
end
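If you also want a goodness-of-fit number for each line, a manual R^2 is only a few lines; this is a sketch meant to go inside the loop above, reusing its p and x_foo:
yhat = polyval(p, x_foo);        % fitted values
SSres = sum((Y - yhat).^2);      % residual sum of squares
SStot = sum((Y - mean(Y)).^2);   % total sum of squares
R2 = 1 - SSres/SStot             % coefficient of determination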
I'm trying to write an algorithm in Matlab to calculate the received power, in dBm, of a logarithmic model of a wireless telecommunication system.
My algorithm calculates the received power for a number of distances in km that the user specifies in the input, and stores them in a vector:
vector_distances = [1, 5, 10, 50, 75]
vector_Prx = [131.5266, 145.5060, 151.5266, 165.5060, 169.0278]
The thing is that I almost have everything I need, but for graphing purposes I need to plot a figure where the x axis has my vector of received powers and the y axis shows the same received power, but computed with the most complete logarithmic model (the one that also has noise, with a log-normal distribution in the formula). For this, for every distance in my vector I need to choose 50 numbers spaced 0.5 apart (like a matrix), and then, for every new point at the same distance, evaluate the logarithmic model, so that I can later plot both functions on the same graph: the model with no noise (a straight line) and the one with noise, like this picture: http://imgur.com/gLSrKor
My question is: is there a way to choose 50 numbers spaced 0.5 apart around an existing number?
I know for example, if you have a vector
EDU>> m = zeros(1,5)
m =
0 0 0 0 0
EDU>> v = 5 %this is the starter distance%
v =
5
EDU>> m(1) = 5
m =
5 0 0 0 0
% I want to create a vector with 5 numbers with 0.5 distance between them %
EDU>> for i=2:5
m(i) = m(i-1) + 0.5
end
EDU>> m
m =
5.0000 5.5000 6.0000 6.5000 7.0000
But I have two problems. The first one is: could this be simpler? I am new to Matlab. And the other one: could I create a vector like this, with the initial number in the center?
EDU>> m
m =
4.0000 4.5000 **5.0000** 5.5000 6.0000
Sorry for my English, and thank you so much for helping me.
In MATLAB, if you want to create a vector from a number n to a number m, you use the colon syntax:
A = 5:10;
% A = [5,6,7,8,9,10]
You can also specify the step of the vector by including a third argument between the other two, like so:
A = 5:0.5:10;
% A = [5,5.5,6,6.5,7,7.5,8,8.5,9,9.5,10]
You can also use this to count backwards:
A = 10:-1:5;
% A = [10,9,8,7,6,5]
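And to address the centering part of the question: the same colon syntax can start below your reference value. A sketch (the numbers are examples):
v = 5;                 % starter distance
m = v-1 : 0.5 : v+1    % m = [4.0, 4.5, 5.0, 5.5, 6.0]

% 50 points spaced 0.5 apart and centered on a distance d:
% 49 gaps of 0.5 span 24.5, i.e. 12.25 on each side.
d = 50;                % example distance
m50 = (d - 12.25) : 0.5 : (d + 12.25);
numel(m50)             % 50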
This is my attempt to simulate a water surface. It works fine when I use the surf() function, but when I change it to bar3() this error occurs: "Matrix dimensions must agree, not rendering mesh". Can someone please tell me how to fix this? Here's my code:
n = 60;
i = 2:n-1;
j = 2:n-1;
H = ones(n,n);
Dropx = 30;  % x and y coordinates of the droplet
Dropy = 30;
width = 20;
r = width/2;
dt = 0.1;
dx = 0.3;
%%% add droplet to the surface %%%
[x,y] = ndgrid(-1.5:(2/(width/1.5-1)):1);
D = 8*exp(-5*(x.^2+y.^2));
w = size(D,1);
i2 = (Dropx-r):w+(Dropx-r)-1;
j2 = (Dropy-r):w+(Dropy-r)-1;
H(i2,j2) = H(i2,j2) + D;
oldH = H;
newH = H;
h = surf(newH);  % cannot change this to bar3
axis([1 n 1 n -2 8]);
k = 0.2;  % damping constant
c = 2;    % wave speed
while 1==1
    newH(i,j) = H(i,j) + (1-k*dt)*(H(i,j)-oldH(i,j)) - ...
        dt^2*c^2/dx^2*((4*H(i,j)-H(i+1,j)-H(i-1,j)-H(i,j+1)-H(i,j-1)) ...
        + 0.4*(4*H(i,j)-H(i+1,j+1)-H(i+1,j-1)-H(i-1,j+1)-H(i-1,j-1)));
    set(h, 'Zdata', newH(i,j));
    oldH = H;
    H = newH;
    pause(0.05);
end
The problem, as stated by David, is that bar3 transforms the original data matrix into a special ZData. This new one is a cell array of length n (60 in your code), one cell per bar column, each containing an array of size [n*6, 4]. So you cannot assign your new matrix directly to ZData.
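You can inspect that structure yourself with a quick sketch:
h = bar3(ones(60));
zd = get(h, 'ZData');   % 60x1 cell array, one cell per bar column
size(zd{1})             % [360 4], i.e. [n*6, 4]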
There is one solution besides recreating the plot each time: modify ZData directly, element by element. To do that, use the following code instead of calling set(h,'Zdata', newH(i,j)):
for ih = j
    set(h(ih), 'ZData', kron(newH(i,ih), [nan 0 0 nan; 0 1 1 0; 0 1 1 0; nan 0 0 nan; nan 0 0 nan; nan nan nan nan]));
end
h is the array of handles returned by the plot; in the case of bar3, its length is n, the first dimension of your matrix. So, for each bar column, you set the ZData according to its format. Each element V of the matrix is transformed into this matrix:
NaN 0 0 NaN
0 V V 0
0 V V 0
NaN 0 0 NaN
NaN 0 0 NaN
NaN NaN NaN NaN
So, in order to build the complete ZData of each column, you call the function kron with the corresponding column of the updated matrix and this atomic matrix.
This is not very fast; on my computer the display lags from time to time, but it is still faster than recreating the bar plot each time. Using surf is faster yet because there are fewer patches to draw.
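As a small refinement (my own suggestion, not part of the original answer), the atomic matrix can be precomputed once outside the animation loop instead of being rebuilt every frame:
% Patch template replicated for each matrix element (NaNs hide unused vertices)
template = [nan 0 0 nan; 0 1 1 0; 0 1 1 0; nan 0 0 nan; nan 0 0 nan; nan nan nan nan];
for ih = j
    set(h(ih), 'ZData', kron(newH(i,ih), template));
end
drawnow;  % flush the graphics update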
The problem lies in the way you handle the plotting.
h=bar3(newH);
plots the data and stores handles to patch graphics objects in h. When you then write the following:
set(h,'Zdata', newH(i,j));
you assume that the 'Zdata' property is a 60x60 array, which is not the case for bar3. Just write
output = get(h,'Zdata')
to see that. Updating the data in that format is possible, but it requires a bit more data handling and seems tedious.
I propose an easy solution to this: simply replot at every timestep:
oldH = H;
newH = H;
h = bar3(newH);
axis([1 n 1 n -2 8]);
k = 0.2;  % damping constant
c = 2;    % wave speed
while 1==1
    newH(i,j) = H(i,j) + (1-k*dt)*(H(i,j)-oldH(i,j)) - ...
        dt^2*c^2/dx^2*((4*H(i,j)-H(i+1,j)-H(i-1,j)-H(i,j+1)-H(i,j-1)) ...
        + 0.4*(4*H(i,j)-H(i+1,j+1)-H(i+1,j-1)-H(i-1,j+1)-H(i-1,j-1)));
    h = bar3(newH);
    axis([1 n 1 n -2 8]);
    oldH = H;
    H = newH;
    pause(0.05);
end
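One small tweak (my own suggestion, not part of the original answer): throttling rendering with drawnow instead of a fixed pause often smooths this kind of replot loop. Inside the while loop, that would look like:
h = bar3(newH);
axis([1 n 1 n -2 8]);
drawnow limitrate;  % flush pending graphics updates, skipping frames if needed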
I am currently creating different signals using Matlab, mixing them by multiplying them by a mixing matrix A, and then trying to get back the original signals using FastICA.
So far, the recovered signals are really bad when compared to the original ones, which was not what I expected.
I'm trying to see whether I'm doing anything wrong. The signals I'm generating are the following (amplitudes are in the range [0,1]):
s1 = (-x.^2 + 100*x + 500) / 3000; % quadratic
s2 = exp(-x / 10); % -ve exponential
s3 = (sin(x)+ 1) * 0.5; % sine
s4 = 0.5 + 0.1 * randn(length(x), 1); % gaussian (length works whether x is a row or a column)
s5 = (sawtooth(x, 0.75)+ 1) * 0.5; % sawtooth
One condition for ICA to be successful is that at most one signal is Gaussian, and I've observed this in my signal generation.
However, another condition is that all signals are statistically independent.
All I know is that this means that, given two signals A & B, knowing one signal does not give any information about the other, i.e. P(A|B) = P(A), where P is the probability.
Now my question is this: Are my signals statistically independent? Is there any way I can determine this? Perhaps some property that must be observed?
Another thing I've noticed is that when I calculate the eigenvalues of the covariance matrix (calculated for the matrix containing the mixed signals), the eigenspectrum seems to show that there is only one (main) principal component. What does this really mean? Shouldn't there be 5, since I have 5 (supposedly) independent signals?
For example, when using the following mixing matrix:
A =
0.2000 0.4267 0.2133 0.1067 0.0533
0.2909 0.2000 0.2909 0.1455 0.0727
0.1333 0.2667 0.2000 0.2667 0.1333
0.0727 0.1455 0.2909 0.2000 0.2909
0.0533 0.1067 0.2133 0.4267 0.2000
The eigenvalues are: 0.0000 0.0005 0.0022 0.0042 0.0345 (only 4 nonzero!)
When using the identity matrix as the mixing matrix (i.e. the mixed signals are the same as the original ones), the eigenspectrum is: 0.0103 0.0199 0.0330 0.0811 0.1762. There is still one value much larger than the rest.
Thank you for your help.
I apologise if the answers to my questions are painfully obvious, but I'm really new to statistics, ICA and Matlab. Thanks again.
EDIT - I have 500 samples of each signal, in the range [0.2, 100], in steps of 0.2, i.e. x = 0.2:0.2:100.
EDIT - Regarding the ICA model X = As + n: I'm not adding any noise at the moment, and I am referring to the eigenspectrum of the transpose of X, i.e. eig(cov(X')).
Your signals are correlated (not independent). Right off the bat, the sawtooth and the sine have the same period: tell me the value of one and I'll tell you the value of the other. Perfect correlation.
If you change up the period of one of them that'll make them more independent.
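For instance (a sketch; the scale factor 3 is arbitrary), stretching the sawtooth's period breaks the lock-step with the sine:
s5 = (sawtooth(x/3, 0.75) + 1) * 0.5;  % period 6*pi instead of 2*pi, no longer matching sin(x)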
Also S1 and S2 are kinda correlated.
As for the eigenvalues, first of all your signals are not independent (see above).
Second of all, your mixing matrix A is not well conditioned, which spreads your eigenvalues out further.
Even if you were to pipe in five fully independent (iid, yada yada) signals the covariance would be:
E[ A y y' A' ] = A E[ y y' ] A' = A I A' = A A'
The eigenvalues of that are:
eig(A*A')
ans =
0.000167972216475
0.025688510850262
0.035666735304024
0.148813869149738
1.042451912479502
So you're really filtering/squishing all the signals down onto one basis function / degree of freedom and of course they'll be hard to recover, whatever method you use.
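A quick way to quantify this, as a one-line sketch:
kappa = cond(A)  % ratio of largest to smallest singular value of the mixing matrix;
                 % here sqrt(1.0425/0.000168), roughly 80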
To find out whether the signals are mutually independent, you could look at the techniques described here. In general, two zero-mean random variables are uncorrelated if they are orthogonal, which means that E{s1*s2} = 0: the expectation of the random variable s1 multiplied by the random variable s2 is zero. (Independence implies this, but the converse does not hold in general.) This orthogonality condition is extremely important in statistics and probability and shows up everywhere. Unfortunately it applies to two variables at a time; there are multivariable techniques, but none that I would feel comfortable recommending. Another link I dug up was this one; not sure what your application is, but that paper is very well done.
When I calculate the covariance matrix (here A is the 500x5 matrix of signals built in the code below, not your mixing matrix) I get:
cov(A) =
0.0619 -0.0284 -0.0002 -0.0028 -0.0010
-0.0284 0.0393 0.0049 0.0007 -0.0026
-0.0002 0.0049 0.1259 0.0001 -0.0682
-0.0028 0.0007 0.0001 0.0099 -0.0012
-0.0010 -0.0026 -0.0682 -0.0012 0.0831
With eigenvectors,V and values D:
[V,D] = eig(cov(A))
V =
-0.0871 0.5534 0.0268 -0.8279 0.0063
-0.0592 0.8264 -0.0007 0.5584 -0.0415
-0.0166 -0.0352 0.5914 -0.0087 -0.8054
-0.9937 -0.0973 -0.0400 0.0382 -0.0050
-0.0343 0.0033 0.8050 0.0364 0.5912
D =
0.0097 0 0 0 0
0 0.0200 0 0 0
0 0 0.0330 0 0
0 0 0 0.0812 0
0 0 0 0 0.1762
Here's my code:
x = transpose(0.2:0.2:100);
s1 = (-x.^2 + 100*x + 500) / 3000; % quadratic
s2 = exp(-x / 10); % -ve exponential
s3 = (sin(x)+ 1) * 0.5; % sine
s4 = 0.5 + 0.1 * randn(length(x), 1); % gaussian
s5 = (sawtooth(x, 0.75)+ 1) * 0.5; % sawtooth
A = [s1 s2 s3 s4 s5];
cov(A)
[V,D] = eig(cov(A))
Let me know if I can help any more, or if I misunderstood.
EDIT - Properly referred to eigenvalues and eigenvectors, used the 0.2 sampling interval, and added code.