vector of n numbers around a specific number - matlab

I'm trying to write an algorithm in MATLAB to calculate the received power, in dBm, of a logarithmic model of a wireless telecommunication system.
My algorithm calculates the received power for a number of distances in km that the user specifies as input, and stores the results in a vector:
vector_distances = [1, 5, 10, 50, 75]
vector_Prx = [131.5266, 145.5060, 151.5266, 165.5060, 169.0278]
The thing is that I almost have everything I need, but for plotting purposes I need a graph where the x-axis holds my vector of distances and the y-axis the received power, computed with the more complete logarithmic model (the one that also includes noise, via a log-normal distribution in the formula). For this, for every distance in my vector I need to choose 50 numbers spaced 0.5 apart (like a matrix), and then evaluate the logarithmic model at every new point, so that I can later plot both functions in the same graph: the model with no noise (a straight line) and the one with noise, like in this picture:
http://imgur.com/gLSrKor
My question is: is there a way to generate 50 numbers spaced 0.5 apart around an existing number?
I know for example, if you have a vector
EDU>> m = zeros(1,5)
m =
0 0 0 0 0
EDU>> v = 5 % this is the starting distance
v =
5
EDU>> m(1) = 5
m =
5 0 0 0 0
% I want to create a vector with 5 numbers with 0.5 distance between them %
EDU>> for i=2:5
          m(i) = m(i-1) + 0.5
      end
EDU>> m
m =
5.0000 5.5000 6.0000 6.5000 7.0000
But I have two problems. The first one is: could this be simpler? I am new to MATLAB. The other one: could I create a vector like this, with the initial number in the center?
EDU>> m
m =
4.0000 4.5000 5.0000 5.5000 6.0000
Sorry for my English, and thank you so much for helping me.

In MATLAB, if you want to create a vector from a number n to a number m, you use the format
A = 5:10;
% A = [5,6,7,8,9,10]
You can also specify the step of the vector by including a third argument between the other two, like so:
A = 5:0.5:10;
% A = [5,5.5,6,6.5,7,7.5,8,8.5,9,9.5,10]
You can also use this to count backwards:
A = 10:-1:5;
% A = [10,9,8,7,6,5]
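To center such a vector on an existing number, build the range around it with the same colon syntax. A minimal sketch using your example's values (for 50 points there is no exact middle element, so one side gets one extra point):
EDU>> v = 5;
EDU>> m = v-1 : 0.5 : v+1
m =
5.0000 4.5000 5.0000 5.5000 6.0000
EDU>> m50 = v + 0.5*(-25:24); % 50 points spaced 0.5 apart, with v at position 26
Alternatively, linspace(a,b,n) gives n evenly spaced points between a and b, which is handy when you know the endpoints rather than the step.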

Related

Loop to build virtual disparity image in MATLAB

I'm trying to implement the homogeneous transformation from the disparity image to the virtual disparity image, following the paper by Suganuma et al., "An Obstacle Extraction Method Using Virtual Disparity Image".
After doing the matrix computations described in the paper, I reach a global homogeneous transformation matrix that describes just a translation of -27.7 in the direction v, which makes sense.
Now, to make this transformation, I implemented a loop in MATLAB:
virtual_disparity = zeros(size(disparityMap));
% Homogeneous vector of a point of the disparityMap: U = [u/d v/d 1/d 1]' (4x1)
U = zeros(4,1);
U_v = zeros(4,1);
for i = 1:size(disparityMap,1)          % rows --> v (y)
    for j = 1:size(disparityMap,2)      % cols --> u (x)
        d = disparityMap(i, j);
        U = [j/d i/d 1/d 1]';           % [u/d v/d 1/d 1]'
        U_v = B*U;                      % B is the whole homogeneous transform
        U_v = U_v./U_v(4);
        u_v_x = U_v(1);                 % u_v_j
        u_v_y = U_v(2);                 % u_v_i
        if (u_v_x > 1) && (u_v_x <= size(virtual_disparity, 2)) && ...
           (u_v_y > 1) && (u_v_y <= size(virtual_disparity, 1))
            virtual_disparity(round(u_v_y), round(u_v_x)) = disparityMap(i, j);
        end
    end
end
Now, the problem is that the virtual disparity I get doesn't make any sense, since it doesn't even correspond to the transformation described in B, which, as I said, is:
1.0000 0 0.0000 0
0 1.0000 0.0000 -27.7003
0 0 1.0000 0
0 0 0 1.0000
These are the disparity map and the virtual disparity, respectively (images omitted):
I've been rechecking it all day long and I can't find the error.
Finally, a colleague helped me and we found what's wrong. I was supposing that the final coordinates U_v would be given in the form [u v 0 1]', which actually doesn't make much sense. They are actually given, like the input coordinates, in the form [u/d v/d 1/d 1]'. So, instead of normalizing them by dividing by element 4, as I was doing, I must divide them by element 3 (1/d).
To sum up, it was just an error in the line:
U_v = U_v./U_v(4);
Which must be substituted by:
U_v = U_v./U_v(3);
Now the image, although a little sparser than I expected, is similar to the one in the paper.
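For what it's worth, the per-pixel loop can also be vectorized. Here's a rough sketch, assuming B is the 4x4 homogeneous transform above and disparityMap has no zero entries (zero disparities would need masking first):
[cols, rows] = meshgrid(1:size(disparityMap,2), 1:size(disparityMap,1));
d = disparityMap(:);
U = [cols(:)./d, rows(:)./d, 1./d, ones(numel(d),1)]';  % 4 x N points [u/d v/d 1/d 1]'
U_v = B * U;                                            % apply the homogeneous transform
U_v = bsxfun(@rdivide, U_v, U_v(3,:));                  % normalize by the 1/d component (the fix above)
u = round(U_v(1,:));
v = round(U_v(2,:));
valid = u > 1 & u <= size(disparityMap,2) & v > 1 & v <= size(disparityMap,1);
virtual_disparity = zeros(size(disparityMap));
virtual_disparity(sub2ind(size(virtual_disparity), v(valid), u(valid))) = disparityMap(valid);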

Calculating Angles between connected line segments using MATLAB

I'm using MATLAB for the first time, and I have little experience with programming.
I have three coordinate points connected by line segments to form a sort of zig-zag path. If the line segment from the origin to the first point were extended past the first point, I need the angle between that extension and the segment from the first point to the second point. This needs to be done for the second to the third point as well. I've read the solutions to similar questions, but I wasn't able to interpret and modify them for my situation.
Let's say your coordinates are:
coord = [1 2; 2 4; 1.5 1; 4 2]
coord =
1.0000 2.0000
2.0000 4.0000
1.5000 1.0000
4.0000 2.0000
This gives the following zig-zag pattern (plot omitted):
To find the angles of each line segment, you can do the following:
coord_diff = diff(coord) %// Find the difference between each
%// coordinate (i.e. the line between the points)
%// Make use of complex numbers. A vector is
%// given by x + i*y, where i is the imaginary unit
vector = coord_diff(:,1) + 1i * coord_diff(:,2);
line_angles = angle(vector) * 180/pi; %// Line angles given in degrees
diff_line_angle = diff(line_angles) %// The difference in angle between each line segment
This gives the following angles, which upon inspection of the graph seem reasonable.
line_angles =
63.4349
-99.4623
21.8014
diff_line_angle =
-162.8973
121.2637
Update after comments
coord = [0 0; 3 4; -1 7; 3 10]
coord =
0 0
3 4
-1 7
3 10
coord_diff = diff(coord) %// Find the difference between each
%// coordinate (i.e. the line between the points)
coord_diff =
3 4
-4 3
4 3
%// The angles of these lines are approximately 36.86 and 53.13 degrees
%// Make use of complex numbers. A vector is
%// given by x + i*y, where i is the imaginary unit
vector = coord_diff(:,1) + 1i * coord_diff(:,2);
line_angles = angle(vector) * 180/pi; %// Line angles given in degrees
line_angles =
53.1301
143.1301
36.8699
I'm not sure how you want to treat different signs etc., but something like this should work:
[90-line_angles(1), arrayfun(@(n) line_angles(n+1)-line_angles(n), ...
    1:numel(line_angles)-1)].'
ans =
36.8699
90.0000
-106.2602
This is simpler, but harder to adapt in case you need to change signs or something similar:
[90-line_angles(1); diff(line_angles)]
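As a side note, the complex-number trick can be swapped for atan2d, which returns the four-quadrant angle in degrees directly (available in R2012b and later); a minimal sketch:
coord_diff = diff(coord);
line_angles = atan2d(coord_diff(:,2), coord_diff(:,1));  %// same as angle(vector)*180/pi
turn_angles = [90 - line_angles(1); diff(line_angles)];  %// same convention as above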

Calculate Euclidean distance between RGB vectors in a large matrix

I have an RGB matrix for a set of pixels (N pixels => N rows, RGB => 3 columns). I have to calculate the minimum RGB distance between any two pixels in this matrix. I tried the loop approach, but because the set is too big (say N = 24000), it looks like it will take forever for the program to finish. Is there another approach? I read about pdist, but the RGB Euclidean distance cannot be used with it.
k = 1;
for i = 1:N
    for j = 1:N
        if (i ~= j)
            dist_vect(k) = RGB_dist(U(i,1),U(j,1),U(i,2),U(j,2),U(i,3),U(j,3));
            k = k + 1;
        end
    end
end
Euclidean distance between two pixels, weighted per channel:
distance = sqrt(3*(R1-R2)^2 + 4*(G1-G2)^2 + 2*(B1-B2)^2)
So, the pdist syntax would be like this: D = pdist2(U, U, @calc_distance()); where U is obtained like this:
rgbImage = imread('peppers.png');
rgb_columns = reshape(rgbImage, [], 3)
[U, m, n] = unique(rgb_columns, 'rows','stable');
But if pdist2 does the loops itself, how should I enter the parameters for my function?
function [distance] = RGB_dist(R1, R2, G1, G2, B1, B2)
where R1, G1, B1, R2, G2, B2 are the components of each pixel.
I made a new function like this:
function [distance] = RGB_dist(x, y)
    distance = sqrt(sum(((x-y)*[3;4;2]).^2, 2));
end
and I called it with D = pdist(U, U, @RGB_dist); and I got: "Error using pdist (line 132). The 'DISTANCE' argument must be a string or a function."
Testing the new RGB_dist function alone, with this input set:
x = [62,29,64;
     63,31,62;
     65,29,60;
     63,29,62;
     63,31,62];
d = RGB_dist(x,x);
disp(d);
outputs only values of 0.
Contrary to what your post says, you can use the Euclidean distance as part of pdist. You have to specify it as a flag when you call pdist.
The loop you have described above can simply be computed by:
dist_vect = pdist(U, 'euclidean');
This should compute the L2 norm between each unique pair of rows. Seeing that your matrix has a RGB pixel per row, and each column represents a single channel, pdist should totally be fine for your application.
If you want to display this as a distance matrix, where row i and column j corresponds to the distance between a pixel in row i and row j of your matrix U, you can use squareform.
dist_matrix = squareform(dist_vect);
As an additional bonus, if you want to find which two pixels in your matrix share the smallest distance, you can simply do a find search on the lower triangular half of dist_matrix. The diagonal of dist_matrix is all zeros, as the distance from any vector to itself is 0. In addition, this matrix is symmetric, so the upper triangular half equals the lower triangular half. Therefore, we can set the diagonal and the upper triangular half to Inf, then search for the minimum among the remaining elements. In other words:
indices_to_set = true(size(dist_matrix));
indices_to_set = triu(indices_to_set);
dist_matrix(indices_to_set) = Inf;
[v1,v2] = find(dist_matrix == min(dist_matrix(:)), 1);
v1 and v2 will thus contain the rows of U where those RGB pixels have the smallest Euclidean distance. Note that we specify the second parameter as 1 because we want to find just one match, as your post states as a requirement. If you wish to find all pairs that share the same smallest distance, simply remove the second parameter 1.
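Putting the pieces together on your image, here's a sketch (peppers.png ships with MATLAB; note that squareform builds a full N x N matrix, so with tens of thousands of unique pixels you may prefer to work with the pdist vector directly):
rgbImage = imread('peppers.png');
rgb_columns = reshape(rgbImage, [], 3);
U = unique(rgb_columns, 'rows', 'stable');
dist_vect = pdist(double(U), 'euclidean');          %// cast uint8 -> double first
dist_matrix = squareform(dist_vect);
dist_matrix(triu(true(size(dist_matrix)))) = Inf;   %// mask diagonal and upper half
[v1, v2] = find(dist_matrix == min(dist_matrix(:)), 1);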
Edit - June 25th, 2014
Seeing as how you want to weight each component of the Euclidean distance, you can define your own custom function to calculate distances between two RGB pixels. As such, instead of specifying euclidean, you can specify your own function which can calculate the distances between two vectors within your matrix by calling pdist like so:
pdist(x, @(XI,XJ) ...);
@(XI,XJ)... is an anonymous function that takes in a vector XI and a matrix XJ. For pdist, you need to make sure that the custom distance function takes in XI as a 1 x N vector, i.e. a single row of pixels. XJ is then an M x N matrix that contains multiple rows of pixels. As such, this function needs to return an M x 1 vector of distances. Therefore, we can achieve your weighted Euclidean distance like so:
weights = [3;4;2];
weuc = @(XI, XJ, W) sqrt(bsxfun(@minus, XI, XJ).^2 * W);
dist_matrix = pdist(double(U), @(XI, XJ) weuc(XI, XJ, weights));
bsxfun can handle that nicely, as it will replicate XI over as many rows as needed and compute the difference with every row in XJ. We thus square each of the differences, weight and sum them via the matrix product, then take the square root. Note that I didn't use sum(X,2); I used vector algebra to compute the sum. If you recall, we are simply computing the dot product between the squared difference of each component and a weight. In other words, x^{T}y, where x holds the squared differences per component and y the weights. You could do sum(X,2) if you like, but I find this more elegant and easy to read... plus it's less code!
Now that I know how you're obtaining U, the type is uint8 so you need to cast the image to double before we do anything. This should achieve your weighted Euclidean distance as we talked about.
As a check, let's put in the matrix from your example, then run it through pdist and squareform:
x = [62,29,64;
     63,31,62;
     65,29,60;
     63,29,62;
     63,31,62];
weights = [3;4;2];
weuc = @(XI, XJ, W) sqrt(bsxfun(@minus, XI, XJ).^2 * W);
%// Make sure you CAST TO DOUBLE, as your image is uint8.
%// We don't have to do it here as x is already a double, but
%// I would like to remind you to do so!
dist_vector = pdist(double(x), @(XI, XJ) weuc(XI, XJ, weights));
dist_matrix = squareform(dist_vector)
dist_matrix =
0 5.1962 7.6811 3.3166 5.1962
5.1962 0 6.0000 4.0000 0
7.6811 6.0000 0 4.4721 6.0000
3.3166 4.0000 4.4721 0 4.0000
5.1962 0 6.0000 4.0000 0
As you can see, the distance between pixels 1 and 2 is 5.1962. To check, sqrt(3*(63-62)^2 + 4*(31-29)^2 + 2*(64-62)^2) = sqrt(3 + 16 + 8) = sqrt(27) = 5.1962. You can do similar checks among elements within this matrix. We can tell that the distance between pixels 5 and 2 is 0 as you have made these rows of pixels the same. Also, the distance between each of themselves is also 0 (along the diagonal). Cool!

Calculate threshold for vector

I have a vector for which I need to calculate a threshold to convert it to a binary vector (above threshold = 1, below = 0). The values of the vector are either close to zero or far from it: if I plot the vector, the values either lie near the x-axis or shoot up high, so there is a clear difference between them. The values in the vector change each time, so I need to calculate the threshold dynamically, and there is no limit on the max or min values the vector can take. I know that Otsu's method is used for grayscale images, but since the range of my vector's values varies, I think I cannot use it. Is there any standard way to calculate a threshold for my case? If not, are there any good workarounds?
I suggest you specify the percentage of values that should fall below the threshold (and thus become 0), and use the corresponding percentile value as the threshold (computed with the prctile function from the Statistics Toolbox):
x = [3 45 0.1 0.4 10 5 6 1.2];
p = 70; %// percent of values that should become 0
threshold = prctile(x,p);
x_quant = x>=threshold;
This approach makes the threshold automatically adapt to your values. Since your data are unbounded, using percentiles may be better than using averages, because with the average a single large value can shift your threshold more than desired.
In the example,
x_quant =
0 1 0 0 1 0 0 0
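If you don't have the Statistics Toolbox, a rough substitute is to sort and index; a sketch (unlike prctile this picks an actual sample instead of interpolating, so the cut can land slightly differently near the boundary):
x = [3 45 0.1 0.4 10 5 6 1.2];
p = 70; %// percent of values that should become 0
xs = sort(x);
threshold = xs(max(1, ceil(p/100 * numel(xs)))); %// sample at (roughly) the p-th percentile
x_quant = x >= threshold;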
If the limits don't differ within a single vector and the 0 and 1 values are roughly equal in probability, why don't you simply use the mean of the vector as a threshold?
>> X=[6 .5 .9 3 .4 .6 7]
X =
6.0000 0.5000 0.9000 3.0000 0.4000 0.6000 7.0000
>> X>=mean(X)
ans =
1 0 0 1 0 0 1
If the probability is different for ones and zeros, you might want to scale the mean in the comparison to fit again. Note that this is a very simplistic approach, which can surely be improved to better fit your problem.
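For instance, a sketch where the scale factor k is a knob you'd tune by inspecting your data:
k = 1.5;                 % scale factor on the mean, chosen by eye
X_bin = X >= k*mean(X);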

ICA - Statistical Independence & Eigenvalues of Covariance Matrix

I am currently creating different signals using Matlab, mixing them by multiplying them by a mixing matrix A, and then trying to get back the original signals using FastICA.
So far, the recovered signals are really bad when compared to the original ones, which was not what I expected.
I'm trying to see whether I'm doing anything wrong. The signals I'm generating are the following: (Amplitudes are in the range [0,1].)
s1 = (-x.^2 + 100*x + 500) / 3000; % quadratic
s2 = exp(-x / 10); % -ve exponential
s3 = (sin(x)+ 1) * 0.5; % sine
s4 = 0.5 + 0.1 * randn(size(x, 2), 1); % gaussian
s5 = (sawtooth(x, 0.75)+ 1) * 0.5; % sawtooth
One condition for ICA to succeed is that at most one signal is Gaussian, and I've made sure of this in my signal generation.
However, another condition is that all signals are statistically independent.
All I know is that this means that, given two signals A and B, knowing one signal gives no information about the other, i.e. P(A|B) = P(A), where P is the probability.
Now my question is this: Are my signals statistically independent? Is there any way I can determine this? Perhaps some property that must be observed?
Another thing I've noticed is that when I calculate the eigenvalues of the covariance matrix (calculated for the matrix containing the mixed signals), the eigenspectrum seems to show that there is only one (main) principal component. What does this really mean? Shouldn't there be 5, since I have 5 (supposedly) independent signals?
For example, when using the following mixing matrix:
A =
0.2000 0.4267 0.2133 0.1067 0.0533
0.2909 0.2000 0.2909 0.1455 0.0727
0.1333 0.2667 0.2000 0.2667 0.1333
0.0727 0.1455 0.2909 0.2000 0.2909
0.0533 0.1067 0.2133 0.4267 0.2000
The eigenvalues are: 0.0000 0.0005 0.0022 0.0042 0.0345 (effectively only 4 nonzero!)
When using the identity matrix as the mixing matrix (i.e. the mixed signals are the same as the original ones), the eigenspectrum is: 0.0103 0.0199 0.0330 0.0811 0.1762. There is still one value much larger than the rest.
Thank you for your help.
I apologise if the answers to my questions are painfully obvious, but I'm really new to statistics, ICA and Matlab. Thanks again.
EDIT - I have 500 samples of each signal, in the range [0.2, 100], in steps of 0.2, i.e. x = 0.2:0.2:100.
EDIT - Given the ICA Model: X = As + n (I'm not adding any noise at the moment), but I am referring to the eigenspectrum of the transpose of X, i.e. eig(cov(X')).
Your signals are correlated (not independent). Right off the bat, the sawtooth and the sine have the same period: tell me the value of one and I'll tell you the value of the other. Perfect correlation.
If you change the period of one of them, that will make them more independent; see the sketch below.
Also, s1 and s2 are somewhat correlated.
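As suggested above, one cheap fix is to decouple the periods; the frequency factor below is an arbitrary pick:
s5 = (sawtooth(0.37*x, 0.75) + 1) * 0.5; % sawtooth with a period incommensurate with the sine's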
As for the eigenvalues, first of all your signals are not independent (see above).
Second of all, your filter matrix A is also not well conditioned, spreading out your eigenvalues further.
Even if you were to pipe in five fully independent (iid, yada yada) signals the covariance would be:
E[ A y y' A' ] = A E[ y y' ] A' = A I A' = A A'
The eigenvalues of that are:
eig(A*A')
ans =
0.000167972216475
0.025688510850262
0.035666735304024
0.148813869149738
1.042451912479502
So you're really filtering/squishing all the signals down onto one basis function / degree of freedom, and of course they'll be hard to recover, whatever method you use.
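A quick way to quantify that squashing (for square A the singular values are the square roots of the eigenvalues of A*A', so this is consistent with the eigenvalues above):
cond(A) % ratio of largest to smallest singular value
% here ~ sqrt(1.0425/0.000168) ~ 79, i.e. A is poorly conditioned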
To find out whether the signals are mutually independent, you could look at the techniques described here. In general, independence implies uncorrelatedness: two zero-mean random variables can be independent only if they are orthogonal, meaning that E{s1*s2} = 0, i.e. the expectation of the random variable s1 multiplied by the random variable s2 is zero. This orthogonality condition is extremely important in statistics and probability and shows up everywhere. Unfortunately it applies to two variables at a time, and it is necessary but not sufficient for independence. There are multivariable techniques, but none that I would feel comfortable recommending. Another link I dug up was this one; not sure what your application is, but that paper is very well done.
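As a concrete check in MATLAB, you could look at the normalized covariance of the generated signals; off-diagonal entries near 0 indicate they are at least pairwise uncorrelated, a necessary condition for independence. A sketch:
S = [s1 s2 s3 s4 s5];   % generated signals as columns
C = corrcoef(S)         % inspect the off-diagonal entries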
When I calculate the covariance matrix I get:
cov(A) =
0.0619 -0.0284 -0.0002 -0.0028 -0.0010
-0.0284 0.0393 0.0049 0.0007 -0.0026
-0.0002 0.0049 0.1259 0.0001 -0.0682
-0.0028 0.0007 0.0001 0.0099 -0.0012
-0.0010 -0.0026 -0.0682 -0.0012 0.0831
With eigenvectors,V and values D:
[V,D] = eig(cov(A))
V =
-0.0871 0.5534 0.0268 -0.8279 0.0063
-0.0592 0.8264 -0.0007 0.5584 -0.0415
-0.0166 -0.0352 0.5914 -0.0087 -0.8054
-0.9937 -0.0973 -0.0400 0.0382 -0.0050
-0.0343 0.0033 0.8050 0.0364 0.5912
D =
0.0097 0 0 0 0
0 0.0200 0 0 0
0 0 0.0330 0 0
0 0 0 0.0812 0
0 0 0 0 0.1762
Here's my code:
x = transpose(0.2:0.2:100);
s1 = (-x.^2 + 100*x + 500) / 3000; % quadratic
s2 = exp(-x / 10); % -ve exponential
s3 = (sin(x)+ 1) * 0.5; % sine
s4 = 0.5 + 0.1 * randn(length(x), 1); % gaussian
s5 = (sawtooth(x, 0.75)+ 1) * 0.5; % sawtooth
A = [s1 s2 s3 s4 s5];
cov(A)
[V,D] = eig(cov(A))
Let me know if I can help any more, or if I misunderstood.
EDIT: Properly referred to eigenvalues and eigenvectors, used the 0.2 sampling interval, and added code.