Maple: How to convert Cylindrical coordinates to Cartesian coordinates?

We get some expression in Cylindrical coordinates (r, ϕ, z), for example expr := r*z^2*sin((1/3)*ϕ). We need to convert it into Cartesian coordinates and then back to Cylindrical coordinates. How can we do such a thing?
So I found something like this: eval(expr, {r = sqrt(x^2+y^2), z = z, ϕ = arctan(y, x)}), but it seems incorrect. How do I correct it, and how do I make eval convert backwards from Cartesian to Cylindrical?
So I try:
R := 1;
H := h;
sigma[0] := sig0;
sigma := sigma[0]*z^2*sin((1/3)*`ϕ`);
toCar := eval(sigma, {r = sqrt(x^2+y^2), z = z, `ϕ` = arctan(y, x)});
toCyl := collect(eval(toCar, {x = r*cos(`ϕ`), y = r*sin(`ϕ`), z = z}), `ϕ`)
It looks close to correct, but look:
why is arctan(r*sin(ϕ), r*cos(ϕ)) not shown as ϕ?
Actually this is only the beginning of the fun for me, because I also need to calculate
Q := int(int(int(toCar, x = 0 .. r), y = 0 .. 2*Pi), z = 0 .. H)
and to get it back into Cylindrical coordinates...

simplify(toCyl) assuming r>=0, `ϕ`<=Pi, `ϕ`>-Pi;
Notice,
arctan(sin(Pi/4), cos(Pi/4));
                              (1/4) Pi
arctan(sin(Pi/4 + 10*Pi), cos(Pi/4 + 10*Pi));
                              (1/4) Pi
arctan(sin(-7*Pi/4), cos(-7*Pi/4));
                              (1/4) Pi
arctan(sin(-15*Pi/4), cos(-15*Pi/4));
                              (1/4) Pi
arctan(sin(-Pi), cos(-Pi));
                              Pi
K := arctan(r*sin(Pi/4), r*cos(Pi/4));
                              arctan(r, r)
simplify(K) assuming r < 0;
                              -(3/4) Pi
simplify(K) assuming r > 0;
                              (1/4) Pi
Once you've converted from cylindrical to rectangular, any information about how many times the original angle might have wrapped around (past -Pi) is lost.
So you won't recover the original ϕ unless it was in (-Pi, Pi]. If you tell Maple that this is the case (along with r >= 0, so that it knows which half-plane), using assumptions, then it can simplify to what you're expecting.
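In other words, arctan(sin(θ), cos(θ)) always returns the representative of θ in (-Pi, Pi], i.e. θ - 2*k*Pi for whichever integer k lands it in that interval; that is why all the wrapped Pi/4-type inputs above come back as (1/4)*Pi.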

Related

Converting / translating from Python to Octave or Matlab

I have a Python code and want to rewrite it in Octave, but I am running into many problems during the conversion. I have found solutions for some of them, while others still need your help. I would start with this part of the code:
import numpy as np
from math import pi, sin, cos, atan2

# Nc() and the geo structure (from get_geo()) come from the rest of the original code.

INVOLUTE_FI = 0
INVOLUTE_FO = 1
INVOLUTE_OI = 2
INVOLUTE_OO = 3

def coords_inv(phi, geo, theta, inv):
    """
    Coordinates of the involutes

    Parameters
    ----------
    phi : float
        The involute angle
    geo : struct
        The structure with the geometry obtained from get_geo()
    theta : float
        The crank angle, between 0 and 2*pi
    inv : int
        The key for the involute to be considered
    """
    rb = geo.rb
    ro = rb*(pi - geo.phi_fi0 + geo.phi_oo0)
    Theta = geo.phi_fie - theta - pi/2.0
    if inv == INVOLUTE_FI:
        x = rb*cos(phi) + rb*(phi - geo.phi_fi0)*sin(phi)
        y = rb*sin(phi) - rb*(phi - geo.phi_fi0)*cos(phi)
    elif inv == INVOLUTE_FO:
        x = rb*cos(phi) + rb*(phi - geo.phi_fo0)*sin(phi)
        y = rb*sin(phi) - rb*(phi - geo.phi_fo0)*cos(phi)
    elif inv == INVOLUTE_OI:
        x = -rb*cos(phi) - rb*(phi - geo.phi_oi0)*sin(phi) + ro*cos(Theta)
        y = -rb*sin(phi) + rb*(phi - geo.phi_oi0)*cos(phi) + ro*sin(Theta)
    elif inv == INVOLUTE_OO:
        x = -rb*cos(phi) - rb*(phi - geo.phi_oo0)*sin(phi) + ro*cos(Theta)
        y = -rb*sin(phi) + rb*(phi - geo.phi_oo0)*cos(phi) + ro*sin(Theta)
    else:
        raise ValueError('flag not valid')
    return x, y

def CVcoords(CVkey, geo, theta, N = 1000):
    """
    Return a tuple of numpy arrays for x,y coordinates for the lines which
    determine the boundary of the control volume

    Parameters
    ----------
    CVkey : string
        The key for the control volume for which the polygon is desired
    geo : struct
        The structure with the geometry obtained from get_geo()
    theta : float
        The crank angle, between 0 and 2*pi
    N : int
        How many elements to include in each entry in the polygon

    Returns
    -------
    x : numpy array
        X-coordinates of the outline of the control volume
    y : numpy array
        Y-coordinates of the outline of the control volume
    """
    Nc1 = Nc(theta, geo, 1)
    Nc2 = Nc(theta, geo, 2)
    if CVkey == 'sa':
        r = (2*pi*geo.rb - geo.t)/2.0
        xee, yee = coords_inv(geo.phi_fie, geo, 0.0, 'fi')
        xse, yse = coords_inv(geo.phi_foe - 2*pi, geo, 0.0, 'fo')
        xoie, yoie = coords_inv(geo.phi_oie, geo, theta, 'oi')
        xooe, yooe = coords_inv(geo.phi_ooe, geo, theta, 'oo')
        x0, y0 = (xee + xse)/2, (yee + yse)/2
        beta = atan2(yee - y0, xee - x0)
        t = np.linspace(beta, beta + pi, 1000)
        x, y = x0 + r*np.cos(t), y0 + r*np.sin(t)
        return np.r_[x, xoie, xooe, x[0]], np.r_[y, yoie, yooe, y[0]]
https://docs.scipy.org/doc/numpy/reference/generated/numpy.r_.html I just don't understand the last output, and I am still confused about what r_ means here and how I can write it in Octave. I read what is written in the link, but it is still not clear to me.
return np.r_[x,xoie,xooe,x[0]], np.r_[y,yoie,yooe,y[0]]
The function returns 2 values, both arrays created by np.r_.
np.r_[....] has indexing syntax, and ends up being translated into a function call to the np.r_ object. The result is just the concatenation of the arguments:
In [355]: np.r_[1, 3, 6:8, np.array([3,2,1])]
Out[355]: array([1, 3, 6, 7, 3, 2, 1])
With the [] notation it can accept slice-like objects (6:8), though I don't see any of those here. I'd have to study the rest of the code to identify whether the other arguments are scalars (single values) or arrays.
My Octave is rusty (though I could experiment with the conversion).
t = np.linspace... # I think that exists in Octave, 1000 values
x = x0+r*np.cos(t) # a derived array of 1000 values
xoie is one of the values returned by coords_inv; it may be a scalar or an array. x[0] is the first value of x. So the r_ expression probably produces a 1d array made up of x followed by the subsequent values.
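If it helps, here is a minimal Octave/MATLAB sketch of that concatenation; the stand-in values are made up, only the pattern matters:
% Octave/MATLAB equivalent of np.r_[x, xoie, xooe, x[0]]:
% plain horizontal concatenation of a row vector and scalars.
x    = linspace(0, pi, 5);      % stand-in for the 1000-element array
xoie = 2.5;  xooe = 3.5;        % stand-ins for the coords_inv results
xout = [x, xoie, xooe, x(1)];   % x(1) in Octave corresponds to x[0] in Python
The same pattern with y, yoie, yooe gives the second return value.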

How can I get all solutions to this equation in MATLAB?

I would like to solve the following equation: tan(x) = 1/x
What I did:
syms x
eq = tan(x) == 1/x;
sol = solve(eq,x)
But this gives me only one numerical approximation of the solution. After that I read about the following:
[sol, params, conds] = solve(eq, x, 'ReturnConditions', true)
But this tells me that it can't find an explicit solution.
How can I find numerical solutions to this equation within some given range?
I've never liked using solvers "blindly", that is, without some sort of decent initial value selection scheme. In my experience, the values you will find when doing things blindly, will be without context as well. Meaning, you'll often miss solutions, think something is a solution while in reality the solver exploded, etc.
For this particular case, it is important to realize that fzero uses numerical derivatives to find increasingly better approximations. But, derivatives for f(x) = x · tan(x) - 1 get increasingly difficult to accurately compute for increasing x:
As you can see, the larger x becomes, the better f(x) approximates a vertical line; fzero will simply explode! Therefore it is imperative to get an estimate as closely to the solution as possible before even entering fzero.
So, here's a way to get good initial values.
Consider the function
f(x) = x · tan(x) - 1
Knowing that tan(x) has Taylor expansion:
tan(x) ≈ x + (1/3)·x³ + (2/15)·x⁵ + (7/315)·x⁷ + ...
we can use that to approximate the function f(x). Truncating after the second term, we can write:
f(x) ≈ x · (x + (1/3)·x³) - 1
Now, the key thing to realize is that tan(x) repeats with period π. Therefore, it is most useful to consider the family of functions:
fₙ(x) ≈ x · ( (x - n·π) + (1/3)·(x - n·π)³) - 1
Evaluating this for a couple of multiples and collecting terms gives the following generalization:
f₀(x) = x⁴/3 - 0π·x³ + ( 0π² + 1)x² - (0π + (0π³)/3)·x - 1
f₁(x) = x⁴/3 - 1π·x³ + ( 1π² + 1)x² - (1π + (1π³)/3)·x - 1
f₂(x) = x⁴/3 - 2π·x³ + ( 4π² + 1)x² - (2π + (8π³)/3)·x - 1
f₃(x) = x⁴/3 - 3π·x³ + ( 9π² + 1)x² - (3π + (27π³)/3)·x - 1
f₄(x) = x⁴/3 - 4π·x³ + (16π² + 1)x² - (4π + (64π³)/3)·x - 1
⋮
fₙ(x) = x⁴/3 - nπ·x³ + (n²π² + 1)x² - (nπ + (n³π³)/3)·x - 1
Implementing all this in a simple MATLAB test:
% Replace this with the whole number of pi's you want to
% use as offset
n = 5;
% The coefficients of the approximating polynomial for this offset
C = @(npi) [1/3
-npi
npi^2 + 1
-npi - npi^3/3
-1];
% Find the real, positive polynomial roots
R = roots(C(n*pi));
R = R(imag(R)==0);
R = R(R > 0);
% And use these as initial values for fzero()
x_npi = fzero(@(x) x.*tan(x) - 1, R)
In a loop, this can produce the following table:
%   Estimate (polynomial)    Solution (fzero)
     0.889543617524132        0.860333589019380    %  0·π
     3.425836967935954        3.425618459481728    %  1·π
     6.437309348195653        6.437298179171947    %  2·π
     9.529336042900365        9.529334405361963    %  3·π
    12.645287627956868       12.645287223856643    %  4·π
    15.771285009691695       15.771284874815882    %  5·π
    18.902410011613000       18.902409956860023    %  6·π
    22.036496753426441       22.036496727938566    %  7·π
    25.172446339768143       25.172446326646664    %  8·π
    28.309642861751708       28.309642854452012    %  9·π
    31.447714641852869       31.447714637546234    % 10·π
    34.586424217960058       34.586424215288922    % 11·π
As you can see, the approximant is basically equal to the solution.
To find a numerical solution to a function within some range, you can use fzero like this:
fun = @(x)x*tan(x)-1; % Multiplied by x so fzero has no issue evaluating it at x=0.
range = [0 pi/2];
sol = fzero(fun,range);
The above would return just one solution (0.8603). If you want additional solutions, you will have to call fzero more times. This can be done, for example, in a loop:
fun = @(x)tan(x)-1/x;
RANGE_START = 0;
RANGE_END = 3*pi;
RANGE_STEP = pi/2;
intervals = repelem(RANGE_START:RANGE_STEP:RANGE_END,2);
intervals = reshape(intervals(2:end-1),2,[]).';
sol = NaN(size(intervals,1),1);
for ind1 = 1:numel(sol)
sol(ind1) = fzero(fun, mean(intervals(ind1,:)));
end
sol = sol(~isnan(sol)); % In case you specified more intervals than solutions.
Which gives:
[0.86033358901938;
1.57079632679490; % Wrong
3.42561845948173;
4.71238898038469; % Wrong
6.43729817917195;
7.85398163397449] % Wrong
Note that:
The function is symmetric, and so are its roots. This means you can solve on just the positive interval (for example) and get the negative roots "for free".
Every other entry in sol is wrong because this is where we have asymptotic discontinuities (tan transitions from +Inf to -Inf), which is mistakenly recognized by MATLAB as a solution. So you can just ignore them (i.e. sol = sol(1:2:end);).
Multiply the equation by x and cos(x) to avoid any denominators that can take the value 0:
f(x)=x*sin(x)-cos(x)==0
Consider the normalized function
h(x)=(x*sin(x)-cos(x)) / (abs(x)+1)
For large x this will be increasingly close to sin(x) (or -sin(x) for large negative x). Indeed, plotting it, this is already visually true, up to an amplitude factor, for x > pi.
For the first root in [0, pi/2], use the second-degree Taylor approximation at x = 0, x^2 - (1 - 0.5*x^2) == 0, to get x[0] = sqrt(2.0/3) as the root approximation. For the higher roots, take the sine roots x[n] = n*pi, n = 1, 2, 3, ... as initial approximations in the Newton iteration xnext = x - f(x)/f'(x). This gives (a small MATLAB sketch of the iteration follows the table):
 n    initial               1. Newton              limit of Newton
0 0.816496580927726 0.863034004302817 0.860333589019380
1 3.141592653589793 3.336084918413964 3.425618459480901
2 6.283185307179586 6.403911810682199 6.437298179171945
3 9.424777960769379 9.512307014150883 9.529334405361963
4 12.566370614359172 12.635021895208379 12.645287223856643
5 15.707963267948966 15.764435036320542 15.771284874815882
6 18.849555921538759 18.897518573777646 18.902409956860023
7 21.991148575128552 22.032830614521892 22.036496727938566
8 25.132741228718345 25.169597069842926 25.172446326646664
9 28.274333882308138 28.307365162331923 28.309642854452012
10 31.415926535897931 31.445852385744583 31.447714637546234
11 34.557519189487721 34.584873343220551 34.586424215288922
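A minimal MATLAB sketch of this Newton iteration (my own illustration of the procedure above; the intermediate values depend on exactly which form of f you differentiate, but the iteration should converge to the values in the last column):
% Newton iteration for f(x) = x*sin(x) - cos(x), using the starting values
% described above: sqrt(2/3) for the first root, n*pi for the higher ones.
f  = @(x) x.*sin(x) - cos(x);
df = @(x) 2*sin(x) + x.*cos(x);          % f'(x)
x0 = [sqrt(2/3), (1:11)*pi];             % initial approximations, n = 0..11
xr = zeros(size(x0));
for n = 1:numel(x0)
    x = x0(n);
    for k = 1:10                         % a handful of Newton steps suffices
        x = x - f(x)/df(x);
    end
    xr(n) = x;
end
[x0(:), xr(:)]                           % initial guesses vs. converged roots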

How to solve a matrix equation in Maple

I have a program that does some work to get a matrix w, which is 3(n+1) by 3(n+1). I have a vector fbar that is 3(n+1) by 1. I want to get the matrix that, when w is multiplied by it, gives fbar.
In mathematical notation, w * A = fbar. I have w and fbar, and I want A.
I tried to solve it with this command:
fsolve({seq(multiply(w, A)[i, 1] = fbar[i, 1], i = 1 .. 3*(n+1))})
but I don't understand the response Maple gave:
fsolve({2.025881905 A1[2,1] + 7.814009150 A1[3,1] + ...
        - 7.071067816 10^(-13) A1[3,1] - 0.0004999999990 A1[4,1]
        - 0.0007071067294 A1[5,1] - 0.0004999999990 A1[6,1]) A3[6,1] = 0},
       {A1[1,1], A1[2,1], A1[3,1], A1[4,1], A1[5,1], A1[6,1],
        A2[1,1], A2[2,1], A2[3,1], A2[4,1], A2[5,1], A2[6,1],
        A3[1,1], A3[2,1], A3[3,1], A3[4,1], A3[5,1], A3[6,1]})
What does this mean, and how can I get a more meaningful answer?
You can do this directly with the LinearSolve function from the LinearAlgebra package if w and fbar are defined as a matrix and a vector respectively. The code below gives a reproducible example. Note that the solution of LinearSolve should be equal to x.
w := Matrix(<<1,2,3>|<4,5,6>|<7,8,10>>);
LinearAlgebra[ReducedRowEchelonForm](%); ## Full rank => 1 solution
x := <1,2,3>;
fbar := w.x;
## Solve the equation w.x = fbar
LinearAlgebra[LinearSolve](w,fbar);

Determine the position of a point in 3D space given the distance to N points with known coordinates

I am trying to determine the (x,y,z) coordinates of a point p. What I have are the distances to 4 different points m1, m2, m3, m4 with known coordinates.
In detail: what I have is the coordinates of 4 points (m1,m2,m3,m4) and they are not in the same plane:
m1: (x1,y1,z1),
m2: (x2,y2,z2),
m3: (x3,y3,z3),
m4: (x4,y4,z4)
and the Euclidean distances from m1->p, m2->p, m3->p and m4->p, which are
D1 = sqrt( (x-x1)^2 + (y-y1)^2 + (z-z1)^2);
D2 = sqrt( (x-x2)^2 + (y-y2)^2 + (z-z2)^2);
D3 = sqrt( (x-x3)^2 + (y-y3)^2 + (z-z3)^2);
D4 = sqrt( (x-x4)^2 + (y-y4)^2 + (z-z4)^2);
I am looking for (x,y,z). I tried to solve this non-linear system of 4 equations and 3 unknowns with MATLAB's fsolve, using the Euclidean distances, but didn't manage.
There are two questions:
How can I find the unknown coordinates of point p: (x,y,z)
What is the minimum number of points m with known coordinates and
distances to p that I need in order to find (x,y,z)?
EDIT:
Here is a piece of code that gives no solutions:
Let's say that the points I have are:
m1 = [ 370; 1810; 863];
m2 = [1586; 185; 1580];
m3 = [1284; 1948; 348];
m4 = [1732; 1674; 1974];
x = cat(2,m1,m2,m3,m4)';
And the distance from each point to p are
d = [1387.5; 1532.5; 1104.7; 0855.6]
From what I understood if I want to run fsolve I have to use the following:
1. Create a function
2. Call fsolve
function F = calculateED(p)
m1 = [ 370; 1810; 863];
m2 = [1586; 185; 1580];
m3 = [1284; 1948; 348];
m4 = [1732; 1674; 1974];
x = cat(2,m1,m2,m3,m4)';
d = [1387.5; 1532.5; 1104.7; 0855.6]
F = [d(1,1)^2 - (p(1)-x(1,1))^2 - (p(2)-x(1,2))^2 - (p(3)-x(1,3))^2;
d(2,1)^2 - (p(1)-x(2,1))^2 - (p(2)-x(2,2))^2 - (p(3)-x(2,3))^2;
d(3,1)^2 - (p(1)-x(3,1))^2 - (p(2)-x(3,2))^2 - (p(3)-x(3,3))^2;
d(4,1)^2 - (p(1)-x(4,1))^2 - (p(2)-x(4,2))^2 - (p(3)-x(4,3))^2;];
and then call fsolve:
p0 = [1500,1500,1189]; % initial guess
options = optimset('Algorithm',{'levenberg-marquardt',.001},'Display','iter','TolX',1e-1);
[p,Fval,exitflag] = fsolve(@calculateED,p0,options);
I am running Matlab 2011b.
Am I missing something?
What would the least-squares solution look like?
One note here is that m1, m2, m3, m4 and d values may not be given accurately but for an analytical solution that shouldn't be a problem.
Mathematica readily solves the three-point problem numerically:
p = Table[ RandomReal[{-1, 1}, {3}], {3}]
r = RandomReal[{1, 2}, {3}]
Reduce[Simplify[ Table[Norm[{x, y, z} - p[[i]]] == r[[i]] , {i, 3}],
Assumptions -> {Element[x | y | z, Reals]}], {x, y, z}, Reals]
This will typically return False, as random spheres will usually not have a triple intersection point.
When you have a solution you'll typically get a pair like this:
(* (x == -0.218969 && y == -0.760452 && z == -0.136958) ||
(x == 0.725312 && y == 0.466006 && z == -0.290347) *)
This somewhat surprisingly has a fairly elegant analytic solution. It's a bit involved, so I'll wait to see if someone has it handy; if not, and there is interest, I'll try to remember the steps.
Edit, approximate solution following Dmitys least squares suggestion:
p = {{370, 1810, 863}, {1586, 185, 1580}, {1284, 1948, 348}, {1732,
1674, 1974}};
r = {1387.5, 1532.5, 1104.7, 0855.6};
solution = {x, y, z} /.
Last@FindMinimum[
Sum[(Norm[{x, y, z} - p[[i]]] - r[[i]] )^2, {i, 1, 4}] , {x, y, z}]
Table[ Norm[ solution - p[[i]]], {i, 4}]
As you see you are pretty far from exact..
(* solution point {1761.3, 1624.18, 1178.65} *)
(* solution radii: {1438.71, 1504.34, 1011.26, 797.446} *)
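For comparison, a rough MATLAB analogue of that least-squares fit (my own sketch using fminsearch so no toolbox is needed; the data are the numbers from the question):
% Minimize sum_i (||p - m_i|| - d_i)^2, as in the FindMinimum call above.
m = [ 370 1810  863;
     1586  185 1580;
     1284 1948  348;
     1732 1674 1974];
d = [1387.5; 1532.5; 1104.7; 855.6];
obj  = @(p) sum((sqrt(sum(bsxfun(@minus, m, p(:).').^2, 2)) - d).^2);
p    = fminsearch(obj, [1500 1500 1189])             % initial guess from the question
rfit = sqrt(sum(bsxfun(@minus, m, p(:).').^2, 2))    % fitted distances, compare with d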
I'll answer the second question. Let's name the unknown point X. If you have only one known point A and know the distance from X to A, then X can be anywhere on a sphere centered at A.
If you have two points A, B, then X is on the circle given by the intersection of the spheres centered at A and B (if they intersect, that is).
A third point will add another sphere, and the final intersection between the three spheres will give two points.
The fourth point will finally decide which of those two points you're looking for.
This is how GPS actually works. You have to have at least three satellites. Then the GPS will guess which of the two points is the correct one, since the other one is in space, but it won't be able to tell you the altitude. Technically it should, but there are also errors, so the more satellites you "see" the less the error.
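One way to see this algebraically (my own sketch, not part of the answer above): subtracting the first sphere equation from the other three cancels the quadratic terms and leaves a 3x3 linear system for (x, y, z), which is why four points in general position determine p.
% ||p - m_i||^2 = d_i^2; subtracting the i = 1 equation from i = 2..4 gives
% 2*(m_i - m_1)'*p = ||m_i||^2 - ||m_1||^2 + d_1^2 - d_i^2, a linear system.
m = [ 370 1810  863;
     1586  185 1580;
     1284 1948  348;
     1732 1674 1974];
d = [1387.5; 1532.5; 1104.7; 855.6];
A = 2*bsxfun(@minus, m(2:4,:), m(1,:));
b = sum(m(2:4,:).^2, 2) - sum(m(1,:).^2) + d(1)^2 - d(2:4).^2;
p = (A \ b).'      % exact only if the measured distances are mutually consistent
With noisy distances, as in the question's data, this only gives an approximation; a least-squares fit such as the one above handles the inconsistency better.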
I have found this question, which might be a starting point.
Take the first three equations and solve them as a system of 3 equations in 3 unknowns in MATLAB. After solving you will get two sets of values, i.e. two candidate coordinates for p.
Substitute each set into the 4th equation; the set that satisfies it is the answer.

Angle between two vectors in MATLAB

I want to calculate the angle between 2 vectors V = [Vx Vy Vz] and B = [Bx By Bz].
Is this formula correct?
VdotB = (Vx*Bx + Vy*By + Vz*Bz)
Angle = acosd (VdotB / norm(V)*norm(B))
and is there any other way to calculate it?
My question is not about normalizing the vectors or making it easier; I am asking how to get the angle between these two vectors.
Based on this link, this seems to be the most stable solution:
atan2(norm(cross(a,b)), dot(a,b))
There are a lot of options:
a1 = atan2(norm(cross(v1,v2)), dot(v1,v2))
a2 = acos(dot(v1, v2) / (norm(v1) * norm(v2)))
a3 = acos(dot(v1 / norm(v1), v2 / norm(v2)))
a4 = subspace(v1,v2)
All formulas from this mathworks thread. It is said that a3 is the most stable, but I don't know why.
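A tiny sketch of why the acos-based forms can lose accuracy (my own illustration): for nearly parallel vectors the acos argument rounds to exactly 1, while the atan2 form keeps the small angle.
% Nearly parallel vectors; the true angle is about 1e-9 rad.
v1 = [1 0 0];
v2 = [1 1e-9 0];
acos(dot(v1,v2) / (norm(v1)*norm(v2)))    % returns 0: the angle is lost
atan2(norm(cross(v1,v2)), dot(v1,v2))     % returns ~1e-9: accurate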
For multiple vectors stored on the columns of a matrix, one can calculate the angles using this code:
% Calculate the angle between each column of V (d,N) and the vector v2 (d,1)
% d = dimensions, N = number of vectors
% i.e. atan2(norm(cross(V,v2)), dot(V,v2)) applied column-wise
c = bsxfun(@cross,V,v2);
d = sum(bsxfun(@times,V,v2),1); % dot products
angles = atan2(sqrt(sum(c.^2,1)),d)*180/pi;
The traditional approach to obtaining an angle between two vectors (i.e. arccos(dot(u, v) / (norm(u) * norm(v))), as presented in some of the other answers) suffers from numerical instability in several corner cases. The following code works for n-dimensions and in all corner cases (it doesn't check for zero length vectors, but that's easy to add). See notes below.
% Get angle between two vectors
function a = angle_btw(v1, v2)
% Returns true if the value of the sign of x is negative, otherwise false.
signbit = @(x) x < 0;
u1 = v1 / norm(v1);
u2 = v2 / norm(v2);
y = u1 - u2;
x = u1 + u2;
a0 = 2 * atan(norm(y) / norm(x));
if not(signbit(a0) || signbit(pi - a0))
a = a0;
elseif signbit(a0)
a = 0.0;
else
a = pi;
end;
end
This code is adapted from a Julia implementation by Jeffrey Sarnoff (MIT license), in turn based on these notes by Prof. W. Kahan (page 15).
You can compute VdotB much faster and for vectors of arbitrary length using the dot operator, namely:
VdotB = sum(V(:).*B(:));
Additionally, as mentioned in the comments, matlab has the dot function to compute inner products directly.
Besides that, the formula is what it is so what you are doing is correct.
This function should return the angle in radians.
function [ alpharad ] = anglevec( veca, vecb )
% Calculate angle between two vectors
alpharad = acos(dot(veca, vecb) / sqrt( dot(veca, veca) * dot(vecb, vecb)));
end
>> anglevec([1 1 0],[0 1 0])/(2 * pi/360)
ans = 45.00
The solution of Dennis Jaheruddin is excellent for 3D vectors; for higher-dimensional vectors I would suggest using:
acos(min(max(dot(a,b)/sqrt(dot(a,a)*dot(b,b)),-1),1))
This fixes numerical issues that could bring the argument of acos just above 1 or below -1. It is, however, still problematic when one of the vectors is a null vector. This method also requires only 3*N+1 multiplications and 1 sqrt. It does, however, also require 2 comparisons, which the atan2 method does not need.