Determine the position of a point in 3D space given the distance to N points with known coordinates - matlab

I am trying to determine the (x,y,z) coordinates of a point p. What I have are the distances to 4 different points m1, m2, m3, m4 with known coordinates.
In detail: I have the coordinates of the 4 points (m1, m2, m3, m4), which do not all lie in the same plane:
m1: (x1,y1,z1),
m2: (x2,y2,z2),
m3: (x3,y3,z3),
m4: (x4,y4,z4)
and the Euclidean distances from m1->p, m2->p, m3->p and m4->p, which are
D1 = sqrt( (x-x1)^2 + (y-y1)^2 + (z-z1)^2);
D2 = sqrt( (x-x2)^2 + (y-y2)^2 + (z-z2)^2);
D3 = sqrt( (x-x3)^2 + (y-y3)^2 + (z-z3)^2);
D4 = sqrt( (x-x4)^2 + (y-y4)^2 + (z-z4)^2);
I am looking for (x,y,z). I tried to solve this non-linear system of 4 equations in 3 unknowns with MATLAB's fsolve, using the Euclidean distances, but didn't manage.
There are two questions:
How can I find the unknown coordinates (x,y,z) of point p?
What is the minimum number of points m with known coordinates and
distances to p that I need in order to find (x,y,z)?
EDIT:
Here is a piece of code that gives no solutions:
Let's say that the points I have are:
m1 = [ 370; 1810; 863];
m2 = [1586; 185; 1580];
m3 = [1284; 1948; 348];
m4 = [1732; 1674; 1974];
x = cat(2,m1,m2,m3,m4)';
And the distances from each point to p are
d = [1387.5; 1532.5; 1104.7; 855.6]
From what I understand, if I want to run fsolve I have to do the following:
1. Create a function
2. Call fsolve
function F = calculateED(p)
m1 = [ 370; 1810; 863];
m2 = [1586; 185; 1580];
m3 = [1284; 1948; 348];
m4 = [1732; 1674; 1974];
x = cat(2,m1,m2,m3,m4)';
d = [1387.5; 1532.5; 1104.7; 855.6];
F = [d(1,1)^2 - (p(1)-x(1,1))^2 - (p(2)-x(1,2))^2 - (p(3)-x(1,3))^2;
d(2,1)^2 - (p(1)-x(2,1))^2 - (p(2)-x(2,2))^2 - (p(3)-x(2,3))^2;
d(3,1)^2 - (p(1)-x(3,1))^2 - (p(2)-x(3,2))^2 - (p(3)-x(3,3))^2;
d(4,1)^2 - (p(1)-x(4,1))^2 - (p(2)-x(4,2))^2 - (p(3)-x(4,3))^2;];
and then call fsolve:
p0 = [1500,1500,1189]; % initial guess
options = optimset('Algorithm',{'levenberg-marquardt',.001},'Display','iter','TolX',1e-1);
[p,Fval,exitflag] = fsolve(@calculateED,p0,options);
I am running Matlab 2011b.
Am I missing something?
What would the least squares solution look like?
One note here: the m1, m2, m3, m4 and d values may not be given accurately, but for an analytical solution that shouldn't be a problem.

Mathematica readily solves the three-point problem numerically:
p = Table[ RandomReal[{-1, 1}, {3}], {3}]
r = RandomReal[{1, 2}, {3}]
Reduce[Simplify[ Table[Norm[{x, y, z} - p[[i]]] == r[[i]] , {i, 3}],
Assumptions -> {Element[x | y | z, Reals]}], {x, y, z}, Reals]
This will typically return False, as random spheres will typically not have a triple intersection point.
When you do have a solution you'll typically get a pair like this:
(* (x == -0.218969 && y == -0.760452 && z == -0.136958) ||
(x == 0.725312 && y == 0.466006 && z == -0.290347) *)
Somewhat surprisingly, this has a fairly elegant analytic solution. It's a bit involved, so I'll wait to see if someone has it handy, and if not and there is interest I'll try to remember the steps.
Edit, approximate solution following Dmity's least squares suggestion:
p = {{370, 1810, 863}, {1586, 185, 1580}, {1284, 1948, 348}, {1732,
1674, 1974}};
r = {1387.5, 1532.5, 1104.7, 855.6};
solution = {x, y, z} /.
Last@FindMinimum[
Sum[(Norm[{x, y, z} - p[[i]]] - r[[i]] )^2, {i, 1, 4}] , {x, y, z}]
Table[ Norm[ solution - p[[i]]], {i, 4}]
As you can see, you are pretty far from exact:
(* solution point {1761.3, 1624.18, 1178.65} *)
(* solution radii: {1438.71, 1504.34, 1011.26, 797.446} *)
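For reference, a sketch of the same least-squares fit in MATLAB with lsqnonlin (assumes the Optimization Toolbox; the residual function is my own formulation, not from the original posts):
m = [ 370 1810 863; 1586 185 1580; 1284 1948 348; 1732 1674 1974];
d = [1387.5; 1532.5; 1104.7; 855.6];
res = @(p) sqrt(sum(bsxfun(@minus, m, p(:)').^2, 2)) - d; % distance residuals, one per point
p = lsqnonlin(res, [1500 1500 1189]); % minimizes sum(res.^2)
This should land at roughly the same approximate point as the FindMinimum result above.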

I'll answer the second question. Let's name the unknown point X. If you have only one known point A and the distance from X to A, then X can be anywhere on a sphere with its center in A.
If you have two points A, B, then X lies on a circle given by the intersection of the spheres with centers in A and B (if they intersect, that is).
A third point adds another sphere, and the final intersection of the three spheres gives two points.
The fourth point finally decides which of those two points you're looking for.
This is how GPS actually works. You have to see at least three satellites. The GPS then guesses which of the two points is the correct one, since the other one is out in space, but it won't be able to tell you the altitude. Technically it should, but there are also errors, so the more satellites you "see" the smaller the error.
I have found this question, which might be a starting point.

Take the first three equations and solve them for the 3 unknowns in MATLAB. Solving them will give you two sets of coordinates for p.
Plug each set into the 4th equation; the set that satisfies the equation is the answer.
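As an alternative sketch (my addition, a linearization rather than the two-candidate method above): subtracting the first sphere equation from the other three cancels the quadratic terms in x, y, z and leaves a plain 3x3 linear system, using all four points at once. With the data from the question:
m = [ 370 1810 863; 1586 185 1580; 1284 1948 348; 1732 1674 1974];
d = [1387.5; 1532.5; 1104.7; 855.6];
A = 2*bsxfun(@minus, m(2:4,:), m(1,:));                        % rows are 2*(m_i - m_1)
b = d(1)^2 - d(2:4).^2 + sum(m(2:4,:).^2,2) - sum(m(1,:).^2);  % from expanding the squared norms
p = A \ b  % solves the 3x3 linear system
This is exact when the distances are consistent; with noisy distances it still returns a sensible estimate, since the nonlinearity has been removed before solving.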

Related

Optimization of matrix on matlab using fmincon

I have a 30x30 matrix as a base matrix (OD_b1), and I also have two base vectors (bg and Ag). My aim is to optimize a matrix (X) whose dimensions are 30x30 such that:
1) the squared difference between vector (bg) and the vector of sums of all the columns is minimized,
2) the squared difference between vector (Ag) and the vector of sums of all the rows is minimized,
3) the squared difference between the elements of matrix (X) and matrix (OD_b1) is minimized.
The mathematical form of the objective is to minimize, over X,
(bg - sum(X,2))'*(bg - sum(X,2)) + (Ag - sum(X,1)')'*(Ag - sum(X,1)') + sum(sum((OD_b1 - X).^2))
I have tried this:
fun=@(X)transpose(bg-sum(X,2))*(bg-sum(X,2))+ (Ag-sum(X,1))*transpose(Ag-sum(X,1))+sumsqr(X_b-X);
[val,X]=fmincon(fun,OD_b1,AA,BB,Aeq,beq,LB,UB)
I don't get errors but it seems like it's stuck.
Is it because I have too many variables or is there another reason?
Thanks in advance
This is a simple, unconstrained least squares problem and hence has a simple solution that can be expressed as the solution to a linear system.
I will show you (1) the precise and efficient way to solve this and (2) how to solve with fmincon.
The precise, efficient solution:
Problem setup
Just so we're on the same page, I initialize the variables as follows:
n = 30;
Ag = randn(n, 1); % observe the dimensions
X_b = randn(n, n);
bg = randn(n, 1);
The code:
A1 = kron(ones(1,n), eye(n));
A2 = kron(eye(n), ones(1,n));
A = (A1'*A1 + A2'*A2 + eye(n^2));
b = A1'*bg + A2'*Ag + X_b(:);
x = A \ b; % solves A*x = b
Xstar = reshape(x, n, n);
Why it works:
I first reformulated your problem so the optimization is over a vector x, not a matrix X. Observe that z = bg - sum(X,2) is equivalent to:
x = X(:) % vectorize X
A1 = kron(ones(1,n), eye(n)); % creates a special matrix that sums up
% stuff appropriately
z = A1*x;
Similarly, A2 is set up so that A2*x is equivalent to sum(X,1)', the vector of column sums. Your problem is then equivalent to:
minimize (over x) (bg - A1*x)'*(bg - A1*x) + (Ag - A2*x)'*(Ag - A2*x) + (y - x)'*(y - x), where y = X_b(:). That is, y is a vectorized version of X_b.
This problem is convex, so the first order condition is a necessary and sufficient condition for the optimum. Take the derivative with respect to x, and that equation defines your solution! Sample math for an almost equivalent (but slightly simpler) problem is below:
minimize(over x) (b - A*x)'*(b - A*x) + (y - x)' * (y - x)
rewriting the objective:
b'*b - b'*A*x - x'*A'*b + x'*A'*A*x + y'*y - 2*y'*x + x'*x
which is equivalent to:
minimize (over x) (-2*b'*A - 2*y')*x + x'*(A'*A + I)*x
The first order condition is:
(A'*A + I + (A'*A + I)')*x - 2*A'*b - 2*y = 0
(A'*A + I)*x = A'*b + y
Your problem is essentially the same. It has the first order condition:
(A1'*A1 + A2'*A2 + I)*x = A1'*bg + A2'*Ag + y
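As a quick numerical sanity check of this condition (a sketch reusing the A, b and x already computed in the code above):
grad = 2*(A*x - b); % gradient of x'*A*x - 2*b'*x (+ constant)
norm(grad)          % should be ~0 at the optimum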
How to solve with fmincon
You can do the following:
f = @(X) transpose(bg-sum(X,2))*(bg-sum(X,2)) + (Ag'-sum(X,1))*transpose(Ag'-sum(X,1))+sum(sum((X_b-X).^2));
o = optimoptions('fmincon');
o.MaxFunEvals = 30000;
Xstar2 = fmincon(f,zeros(n,n),[],[],[],[],[],[],[],o);
You can then check the answers are about the same with:
normdif = norm(Xstar - Xstar2)
And you can see that the gap is small, but that the linear algebra based solution is somewhat more precise:
gap = f(Xstar2) - f(Xstar)
If the fmincon approach hangs, try it with a smaller n just to gain confidence that the linear algebra based solution is more precise and way, way faster... n = 30 means solving a 30^2 = 900 variable optimization problem: not easy. With the linear algebra approach, you can go up to n = 100 (i.e. a 10000 variable problem) or even larger.
I would probably solve this as a QP using quadprog, with the following reformulation (keeping the objective as simple as possible to make the problem "less nonlinear"):
min sum(i, v(i)^2) + sum(i, w(i)^2) + sum((i,j), z(i,j)^2)
v = bg - sum(c, x)
w = ag - sum(r, x)
z = xbase - x
The QP solver is more precise (no finite-difference gradients). This approach also allows you to add additional bounds and linear equality and inequality constraints.
The other suggestion, to form the first order conditions explicitly, is also a good one: it likewise has no issue with imprecise gradients (the first order conditions are linear). I usually prefer a quadratic model because of its flexibility.
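For completeness, a minimal sketch of the unconstrained QP in quadprog form (my code, without the auxiliary v, w, z variables; it reuses A1, A2, X_b, bg, Ag and n from the previous answer and assumes the Optimization Toolbox):
H = 2*(A1'*A1 + A2'*A2 + eye(n^2)); % quadratic term of the objective
f = -2*(A1'*bg + A2'*Ag + X_b(:));  % linear term
x = quadprog(H, f);                 % pass A,b,Aeq,beq,lb,ub here to add constraints
Xstar3 = reshape(x, n, n);          % should match the linear algebra solution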

Optimization with Unknown Number of Variables

Since the original problem is more complicated, the idea is described using a simple example below.
For example, suppose we want to put several router antennas somewhere in a room so that a cellphone gets the most signal strength on the table (received power > Pmax) and the weakest signal strength on the bed (received power < Pmin). What is the best (minimum) number of antennas that should be used, and where should they be placed, in order to achieve the goal?
Mathematically, SIGNAL_STRENGTH depends on the variables (x, y, z) and on the number of variables, i.e. on the location and the number of antennas.
Besides, assume
PREDICTION = f((x1, y1, z1), (x2, y2, z2), ... (xi, yi, zi), ... (xn, yn, zn))
where n and the (xi, yi, zi) are to be optimized. The goal is to minimize the
cost function = ||SIGNAL_STRENGTH - PREDICTION||
I tried to implement this in MATLAB using GA with mixed integer programming: two optimization functions are used, where the outer one optimizes n and the inner one optimizes (x, y, z) for a given n. This method is slow, and I haven't seen a single result from it so far.
Does anyone have a more efficient way to solve this problem? Any suggestion is appreciated. Thanks in advance.
Terminology | Problem Definition
An antenna is sending at position a in R^3 with constant power. Its signal strength can be measured by some S: R^3 -> R, where S has a single maximum S_0 at a and every superlevel set {x : S(x) > const} is simply connected, e.g. S(x) = S_0 * exp(-const * (x-a)^2).
Given a set of antennas A, the resulting signal strength is the maximum over the single antennas,
S_A(x) = max{S_a(x) : for all a in A} ,
which means we "lock on" to the strongest antenna, which is what cell phones do.
Let K = R^3 x R denote a space of points (position, intensity). Now consider two finite subsets POI_min and POI_max of K. We want to find the set A with the minimal number of antennas (|A| -> min.) that satisfies
for all (x,w) in POI_min : S_A(x) < w and for all (x,w) in POI_max : S_A(x) > w .
Implication
As {x : S(x) > const} is simply connected, there has to be an antenna within a sphere around the position of each element (x,w) in POI_max, with radius r = max{||xi - x|| : S(xi) = w}. In other words: if we put an antenna at the position of (x,w), then r is the furthest we could move away from x and still get signal strength w, so an actual antenna has to be positioned within that radius.
By a similar argument for POI_min, it follows that there can be no antenna within r = min{||xi - x|| : S(xi) = w}.
Solution
Instead of solving a nonlinear optimization task, we can intersect spheres to obtain the optimal solution. If k spheres around the POI_max positions intersect, we can place a single antenna in the intersection, reducing the number of antennas needed by k-1.
However, each antenna that is placed must satisfy all constraints given by the elements of POI_min. Assuming that antennas are omnidirectional, so the orientation of an antenna doesn't matter, we can do the following (pseudocode):
min_sphere = {(x_i, r_i) : from POI_min}
spheres_to_cover = {(x_i, r_i) : from POI_max}
A = {}
while not is_empty(spheres_to_cover)
    power_set_score = struct // holds score, k
    PS <- construct power set of spheres_to_cover
    for i = 1:number_of_elements(PS)
        k = PS[i]
        if intersection(k) \ min_sphere is not empty
            power_set_score[i].score = |k|
        else
            power_set_score[i].score = 0
        end if
        power_set_score[i].k = k
    end for
    sort(power_set_score) // sort by score, biggest first
    A <- add arbitrary point in (intersection(power_set_score[1].k) \ min_sphere)
    spheres_to_cover = spheres_to_cover \ power_set_score[1].k
end while
On the other hand, you have just given an example problem, so this solution may not be applicable or broad enough for your case. I did make a few assumptions, so being more specific in the question might get you an even better answer.

Iteration of matrix-vector multiplication which stores specific index-positions

I need to solve a min distance problem; to see some of the work which has been tried, take a look at the linked question.
I have four elements. Two column vectors: alpha of dim (px1) and beta of dim (qx1). In this case p = q = 50, giving two column vectors of dim (50x1) each. They are defined as follows:
alpha = 0:0.05:2;
beta = 0:0.05:2;
and I have two matrices: L1 and L2.
L1 is composed of three column-vectors of dimension (kx1) each.
L2 is composed of three column-vectors of dimension (mx1) each.
In this case, they have equal size, meaning that k = m = 1000 giving: L1 and L2 of dim (1000x3) each. The values of these matrices are predefined.
They have, nevertheless, the following structure:
L1(kx3) = [t1(kx1) t2(kx1) t3(kx1)];
L2(mx3) = [t1(mx1) t2(mx1) t3(mx1)];
The min. distance problem I need to solve is given (mathematically) as follows:
d = min( (x-(alpha_p*t1_k - beta_q*t1_m)).^2 + (y-(alpha_p*t2_k - beta_q*t2_m)).^2 +
(z-(alpha_p*t3_k - beta_q*t3_m)).^2 )
the values x,y,z are three fixed constants.
My problem
I need to develop an iteration which can give me back the index positions from the combination of alpha, beta, L1 and L2 that fulfills the min-distance problem above.
I hope the formulation of the problem is clear; I have been very careful with the index notations. But in case it is still not clear, the index ranges are:
alpha is p = 1,...50
beta is q = 1,...50
for L1; t1, t2, t3 is k = 1,...,1000
for L2; t1, t2, t3 is m = 1,...,1000
And I need to find the indices of p, q, k and m which give the min. distance to the point (x,y,z).
Thanks in advance for your help!
I don't know your values, so I wasn't able to check my code. I am using loops because they are the most obvious solution. Pretty sure that someone from the bsxfun-brigade ( ;-D ) will find a shorter/more effective solution.
alpha = 0:0.05:2;
beta = 0:0.05:2;
% L1 (k-by-3) and L2 (m-by-3) are assumed given, as in the question
idx_smallest_d = [1,1,1,1];
smallest_d = (x-(alpha(1)*L1(1,1) - beta(1)*L2(1,1)))^2 + ...
             (y-(alpha(1)*L1(1,2) - beta(1)*L2(1,2)))^2 + ...
             (z-(alpha(1)*L1(1,3) - beta(1)*L2(1,3)))^2;
% check every combination of p, q, k, m:
for p = 1:numel(alpha)
    for q = 1:numel(beta)
        for k = 1:size(L1,1)
            for m = 1:size(L2,1)
                d = (x-(alpha(p)*L1(k,1) - beta(q)*L2(m,1)))^2 + ...
                    (y-(alpha(p)*L1(k,2) - beta(q)*L2(m,2)))^2 + ...
                    (z-(alpha(p)*L1(k,3) - beta(q)*L2(m,3)))^2;
                if d < smallest_d
                    smallest_d = d;
                    idx_smallest_d = [p,q,k,m];
                end
            end
        end
    end
end
What I am doing is predefining the smallest distance as the distance of the first combination, and then checking for each combination whether its distance is smaller than the previous shortest distance.
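For the record, a sketch of a partially vectorized variant (my code, under the same assumptions: L1 is k-by-3, L2 is m-by-3, and x, y, z are given) that removes the innermost loop by computing the distances for all m at once:
t = [x y z];
smallest_d = inf;
idx_smallest_d = [1,1,1,1];
for p = 1:numel(alpha)
    for q = 1:numel(beta)
        for k = 1:size(L1,1)
            % t - (alpha(p)*L1(k,:) - beta(q)*L2(m,:)) for all m at once
            R = bsxfun(@plus, t - alpha(p)*L1(k,:), beta(q)*L2);
            [d, m] = min(sum(R.^2, 2));
            if d < smallest_d
                smallest_d = d;
                idx_smallest_d = [p,q,k,m];
            end
        end
    end
end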

Angle between two vectors matlab

I want to calculate the angle between 2 vectors V = [Vx Vy Vz] and B = [Bx By Bz].
Is this formula correct?
VdotB = (Vx*Bx + Vy*By + Vz*Bz)
Angle = acosd (VdotB / norm(V)*norm(B))
And is there any other way to calculate it?
My question is not about normalizing the vectors or making it easier. I am asking how to get the angle between these two vectors.
Based on this link, this seems to be the most stable solution:
atan2(norm(cross(a,b)), dot(a,b))
There are a lot of options:
a1 = atan2(norm(cross(v1,v2)), dot(v1,v2))
a2 = acos(dot(v1, v2) / (norm(v1) * norm(v2)))
a3 = acos(dot(v1 / norm(v1), v2 / norm(v2)))
a4 = subspace(v1,v2)
All formulas are from this MathWorks thread. It is said that a3 is the most stable, but I don't know why.
For multiple vectors stored in the columns of a matrix, one can calculate the angles using this code:
% Calculate the angle between each column of V (d-by-N) and v2 (d-by-1)
% d = dimensions, N = number of vectors
% i.e. atan2(norm(cross(V,v2)), dot(V,v2)) applied column-wise
c = bsxfun(@cross,V,v2);
d = sum(bsxfun(@times,V,v2),1); % dot products
angles = atan2(sqrt(sum(c.^2,1)),d)*180/pi;
The traditional approach to obtaining the angle between two vectors (i.e. arccos(dot(u, v) / (norm(u) * norm(v))), as presented in some of the other answers) suffers from numerical instability in several corner cases. The following code works for n dimensions and in all corner cases (it doesn't check for zero-length vectors, but that's easy to add). See the notes below.
% Get the angle between two vectors
function a = angle_btw(v1, v2)
    % Returns true if the sign of x is negative, otherwise false.
    signbit = @(x) x < 0;
    u1 = v1 / norm(v1);
    u2 = v2 / norm(v2);
    y = u1 - u2;
    x = u1 + u2;
    a0 = 2 * atan(norm(y) / norm(x));
    if not(signbit(a0) || signbit(pi - a0))
        a = a0;
    elseif signbit(a0)
        a = 0.0;
    else
        a = pi;
    end
end
This code is adapted from a Julia implementation by Jeffrey Sarnoff (MIT license), in turn based on these notes by Prof. W. Kahan (page 15).
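A quick usage check (my example, not from the original post):
angle_btw([1 1 0], [0 1 0]) % returns pi/4 (0.7854)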
You can compute VdotB much faster, and for vectors of arbitrary length, using elementwise multiplication:
VdotB = sum(V(:).*B(:));
Additionally, as mentioned in the comments, MATLAB has the dot function to compute inner products directly.
Besides that, the formula itself is right, but watch the precedence: VdotB / norm(V)*norm(B) evaluates as (VdotB/norm(V))*norm(B), so you need VdotB / (norm(V)*norm(B)).
This function should return the angle in radians.
function [ alpharad ] = anglevec( veca, vecb )
% Calculate angle between two vectors
alpharad = acos(dot(veca, vecb) / sqrt( dot(veca, veca) * dot(vecb, vecb)));
end
anglevec([1 1 0],[0 1 0])/(2 * pi/360)
ans = 45.0000
The solution of Dennis Jaheruddin is excellent for 3D vectors; for higher dimensional vectors I would suggest using:
acos(min(max(dot(a,b)/sqrt(dot(a,a)*dot(b,b)),-1),1))
This fixes the numerical issue that the argument of acos can land just above 1 or just below -1. It is, however, still problematic when one of the vectors is a null vector. This method also only requires 3*N+1 multiplications and 1 sqrt. It does, however, require 2 comparisons, which the atan method does not need.

linear combination of curves to match a single curve

I have a set of vectors (each of length 50; essentially a set of curves) that I want to combine to match another single curve (vector), obtaining the coefficient of each vector in the first set so that the combination matches the second curve. The coefficients need to be >= 0.0. I.e., I want a linear combination of the first set of curves that matches the single curve. Any help on which direction I should go would be appreciated.
If I understand correctly, you have a set of curves, each of which you want to multiply by a scaling factor so that their sum reproduces some target curve as closely as possible.
This is easily done with a linear least squares approximation.
%# create some sample curves
x = -10:0.1:10;
g1 = exp(-(x-3).^2/4);
g2 = exp(-(x-0).^2/4);
g3 = exp(-(x+2).^2/4);
%# make a target curve, corrupt with noise
y = 2*g1+4*g2+g3+randn(size(x))*0.2;
%# use the backslash (mldivide) operator to solve an equation of the form
%#   A*x = B
%# so that x (= fact here) is x = A^-1*B or, in MATLAB terms, A\B.
%# Note the transposes: A should be an n-by-3 array, B an n-by-1 array,
%# so that x is a 3-by-1 array of factors
fact = [g1;g2;g3]'\y'
fact =
1.9524
3.9978
1.0105
%# Show the result
figure,plot(x,y)
hold on,plot(x,fact(1)*g1+fact(2)*g2+fact(3)*g3,'m')
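Since the coefficients need to be >= 0, note that MATLAB can enforce that constraint directly with lsqnonneg; a sketch reusing g1, g2, g3 and y from above:
%# nonnegative least squares: min norm(C*x - d) subject to x >= 0
fact_nn = lsqnonneg([g1;g2;g3]', y');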
so that's what he meant!
Mathematica version:
x = Table[i, {i, -10, 10, .1}];
basis = {
Exp[-(# - 3)^2/4] & /@ x,
Exp[-(# - 0)^2/4] & /@ x,
Exp[-(# + 2)^2/4] & /@ x
};
Show[
ListPlot[Table[{x[[i]], #[[i]]}, {i, Length[x]}] ,
Joined -> True , PlotStyle -> Hue[Random[]]] & /@ basis ]
y = Table[ 2 basis[[1, i]] + 4 basis[[2, i]] + basis[[3, i]] +
RandomReal[{-.5, .5}], {i, Length[x]}];
dataplot = ListPlot[Table[{x[[i]], y[[i]]}, {i, Length[x]}] ]
Mathematica does not magically do least squares if you simply solve the overdetermined system, so find a least squares result explicitly:
coefs = FindMinimum[
Total[(#^2 & /@ (Sum[a[k] basis[[k]], {k, Length[basis]}] - y))],
Array[a, Length[basis]]][[2]]
Show[dataplot,
ListPlot[i = 0; {x[[++i]], #} & /@
(Sum[a[k] basis[[k]], {k, 3}] /. coefs),
Joined -> True]]
Note that if you want to restrict the coefficients to be >= 0 as stated, you can simply square the values in the formulation, like this:
coefs = FindMinimum[
Total[(#^2 & /@
(Sum[a[k]^2 basis[[k]], {k, Length[basis]}] - y))],
Array[a, Length[basis]]][[2]]
Show[dataplot,
ListPlot[i = 0; {x[[++i]], #} & /@
(Sum[a[k]^2 basis[[k]], {k, 3}] /. coefs),
Joined -> True]]
You will get predictably poor results if the actual best fit wants a negative coefficient.