I wonder if there's a standard function in Julia for MATLAB's mrdivide, x = B/A, which would solve the system of linear equations x*A = B for x. As far as I have seen, the standard linalg package does not have it.
It just works. To solve x * A = B for x (mrdivide):
B / A
To solve A * x = B for x (mldivide):
A \ B
The \ symbol means divide from the left, and the / symbol means divide from the right, as in MATLAB. You are correct that / is not documented for some reason. I do not know why.
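In matrix terms (just a restatement, assuming A is invertible), the two operators put the inverse on opposite sides; in practice both MATLAB and Julia solve the system via a factorization rather than forming the inverse explicitly:

x = B / A \iff xA = B \iff x = BA^{-1} \quad\text{(mrdivide)}
x = A \backslash B \iff Ax = B \iff x = A^{-1}B \quad\text{(mldivide)}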
My question is about DSL design. It is related to internal vs. external DSLs, but is more specific than that.
Background info: I have gone through DSL in Action and other tutorials. The difference between internal and external is clear to me. I also have experience developing external DSLs in Haskell.
Let's take a very simple example. Below is a (simplified) relational algebra expression:
SELECT conditions (
CROSS (A,B)
)
Algebra expressions (an ADT in Haskell) can easily be rewritten. For example, the expression above can be trivially rewritten into:
JOIN conditions (A,B)
In the DSLs I have developed, I have always taken this approach: write a parser which creates algebraic expressions like the one above. Then, with a language that allows pattern matching like Haskell, apply a number of rewrites and eventually translate into a target language.
Here comes the question.
For a new DSL I would like to develop, I'd rather opt for an internal DSL, mainly because I want to take advantage of the host language's capabilities (probably Scala in this case). Whether this is the right choice is not the point here; let's assume it's a good choice.
What I don't see is this: if I go for an internal DSL, then there is no parsing into an ADT. Where does my beloved pattern-matching rewrite fit into this? Do I have to give up on it? Are there best practices for getting the best of both worlds? Or am I not seeing things correctly here?
I'll demonstrate this using an internal expression language for arithmetic in Haskell. We'll implement double-negation elimination.
Internal DSL "embeddings" are either deep or shallow. Shallow embeddings mean that you rely upon sharing operations from the host language to make the domain language run. In our example, this almost annihilates the very DSL-ness of our problem. I'll show it anyway.
newtype Shallow = Shallow { runShallow :: Int }
underShallow1 :: (Int -> Int) -> (Shallow -> Shallow)
underShallow1 f (Shallow a) = Shallow (f a)
underShallow2 :: (Int -> Int -> Int) -> (Shallow -> Shallow -> Shallow)
underShallow2 f (Shallow a) (Shallow b) = Shallow (f a b)
-- DSL definition
instance Num Shallow where
  fromInteger n = Shallow (fromInteger n)   -- embed constants
  (+)    = underShallow2 (+)                -- lift the host implementation into the DSL
  (*)    = underShallow2 (*)
  (-)    = underShallow2 (-)
  negate = underShallow1 negate
  abs    = underShallow1 abs
  signum = underShallow1 signum
So now we write and execute our Shallow DSL using the overloaded Num methods and runShallow :: Shallow -> Int
>>> runShallow (2 + 2 :: Shallow)
4
Notably, everything in this Shallow embedding is represented internally with almost no structure besides the result, since all of the work has been pushed down to the host language, where our domain language can't "see" it.
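For instance (continuing the example above), a double negation is already gone by the time we can observe anything, so there is nothing left for a rewrite to match on:

>>> runShallow (negate (negate (2 + 2)) :: Shallow)
4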
A deep embedding clearly separates the representation and the interpretation of a DSL. Typically, a good way to represent it is an ADT whose branches and arities match a minimal basis API. We'll just reflect the whole Num class:
data Deep
  = FromInteger Integer
  | Plus Deep Deep
  | Mult Deep Deep
  | Subt Deep Deep
  | Negate Deep
  | Abs Deep
  | Signum Deep
  deriving ( Eq, Show )
Notably, this representation will admit equality (note that this is the smallest equality possible since it ignores "values" and "equivalences") and showing, which is nice. We tie it into the same internal API by instantiating Num
instance Num Deep where
  fromInteger = FromInteger
  (+)    = Plus
  (*)    = Mult
  (-)    = Subt
  negate = Negate
  abs    = Abs
  signum = Signum
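Unlike the shallow version, the same kind of expression now evaluates to a syntax tree instead of a number:

>>> 2 + 2 :: Deep
Plus (FromInteger 2) (FromInteger 2)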
But now we have to create an interpreter which ties the deep embedding to values represented in the host language. Here an advantage of deep embeddings arises: we can trivially introduce multiple interpreters. For instance, "showing" can be considered an interpreter from Deep to String:
interpretString :: Deep -> String
interpretString = show
We can count the number of embedded constants as an interpreter
countConsts :: Deep -> Int
countConsts x = case x of
  FromInteger _ -> 1
  Plus x y      -> countConsts x + countConsts y
  Mult x y      -> countConsts x + countConsts y
  Subt x y      -> countConsts x + countConsts y
  Negate x      -> countConsts x
  Abs x         -> countConsts x
  Signum x      -> countConsts x
And finally we can interpret the thing into not just an Int but anything else that follows the Num API:
interp :: Num a => Deep -> a
interp x = case x of
  FromInteger n -> fromInteger n
  Plus x y      -> interp x + interp y
  Mult x y      -> interp x * interp y
  Subt x y      -> interp x - interp y
  Negate x      -> negate (interp x)
  Abs x         -> abs (interp x)
  Signum x      -> signum (interp x)
So, finally, we can create a deep embedding and execute it in several ways
>>> let x = 3 + 4 * 5 in (interpretString x, countConsts x, interp x)
("Plus (FromInteger 3) (Mult (FromInteger 4) (FromInteger 5))",3,23)
And, here's the finale: we can use our Deep ADT to implement optimizations such as double-negation elimination:
opt :: Deep -> Deep
opt x = case x of
  Negate (Negate x) -> opt x
  FromInteger n     -> FromInteger n
  Plus x y          -> Plus (opt x) (opt y)
  Mult x y          -> Mult (opt x) (opt y)
  Subt x y          -> Subt (opt x) (opt y)
  Negate x          -> Negate (opt x)
  Abs x             -> Abs (opt x)
  Signum x          -> Signum (opt x)
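A quick check of the rewrite in GHCi (continuing the example above):

>>> opt (negate (negate (3 + 4 * 5)) :: Deep)
Plus (FromInteger 3) (Mult (FromInteger 4) (FromInteger 5))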
I am trying to do the successive mean quantization transform (SMQT), and I am stuck on the condition while making D0 and D1. I don't know what I should put in if the condition is false (making it 0 is not the solution, because that changes the mean of the new matrix).
Let x be a data point and D(x) be a set of |D(x)| = D data points. The value of a data point will be denoted V(x).

D0(x) = {x | V(x) ≤ mean(V(x)), ∀x ∈ D}
D1(x) = {x | V(x) > mean(V(x)), ∀x ∈ D}

where D0(x) propagates left and D1(x) right in the binary tree.
Article about SMQT
Has anyone studied SMQT before, or does anyone have an idea how to solve this problem?
Thanks.
D0 and D1 are arrays
D0=V(V<=mean(V(:)));
D1=V(V>mean(V(:)));
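In other words, nothing gets written in place of the elements that fail the condition: the logical indexing above simply drops them, so each level just partitions the current set and recurses on the two halves with their own means. Below is a minimal sketch of that recursion in Haskell (my own illustration, not code from the SMQT paper; the function name smqt and the bit-list representation are just for exposition). Each value ends up tagged with the 0/1 bits collected along its path down the tree, which, as I understand the paper, form its transformed value:

import Data.List (partition)

-- One way to view the SMQT recursion: at each level, split the current
-- set around its own mean and recurse, tagging values with a 0 (<= mean)
-- or a 1 (> mean). After l levels each value carries its l-bit path.
smqt :: Int -> [Double] -> [(Double, [Int])]
smqt _ [] = []
smqt 0 xs = [ (x, []) | x <- xs ]
smqt l xs = [ (x, 0 : bits) | (x, bits) <- smqt (l - 1) d0 ]
         ++ [ (x, 1 : bits) | (x, bits) <- smqt (l - 1) d1 ]
  where
    m        = sum xs / fromIntegral (length xs)
    (d0, d1) = partition (<= m) xs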
I am trying to determine the (x,y,z) coordinates of a point p. What I have are the distances to 4 different points m1, m2, m3, m4 with known coordinates.
In detail: what I have is the coordinates of 4 points (m1,m2,m3,m4) and they are not in the same plane:
m1: (x1,y1,z1),
m2: (x2,y2,z2),
m3: (x3,y3,z3),
m4: (x4,y4,z4)
and the Euclidean distances from m1->p, m2->p, m3->p and m4->p, which are
D1 = sqrt( (x-x1)^2 + (y-y1)^2 + (z-z1)^2);
D2 = sqrt( (x-x2)^2 + (y-y2)^2 + (z-z2)^2);
D3 = sqrt( (x-x3)^2 + (y-y3)^2 + (z-z3)^2);
D4 = sqrt( (x-x4)^2 + (y-y4)^2 + (z-z4)^2);
I am looking for (x,y,z). I tried to solve this non-linear system of 4 equations and 3 unknowns with MATLAB's fsolve, using the Euclidean distances, but didn't manage.
There are two questions:
How can I find the unknown coordinates of point p: (x,y,z)
What is the minimum number of points m with known coordinates and
distances to p that I need in order to find (x,y,z)?
EDIT:
Here is a piece of code that gives no solutions:
Let's say that the points I have are:
m1 = [ 370; 1810; 863];
m2 = [1586; 185; 1580];
m3 = [1284; 1948; 348];
m4 = [1732; 1674; 1974];
x = cat(2,m1,m2,m3,m4)';
And the distances from each point to p are
d = [1387.5; 1532.5; 1104.7; 0855.6]
From what I understood, if I want to run fsolve I have to do the following:
1. Create a function
2. Call fsolve
function F = calculateED(p)
m1 = [ 370; 1810; 863];
m2 = [1586; 185; 1580];
m3 = [1284; 1948; 348];
m4 = [1732; 1674; 1974];
x = cat(2,m1,m2,m3,m4)';
d = [1387.5; 1532.5; 1104.7; 0855.6]
F = [d(1,1)^2 - (p(1)-x(1,1))^2 - (p(2)-x(1,2))^2 - (p(3)-x(1,3))^2;
d(2,1)^2 - (p(1)-x(2,1))^2 - (p(2)-x(2,2))^2 - (p(3)-x(2,3))^2;
d(3,1)^2 - (p(1)-x(3,1))^2 - (p(2)-x(3,2))^2 - (p(3)-x(3,3))^2;
d(4,1)^2 - (p(1)-x(4,1))^2 - (p(2)-x(4,2))^2 - (p(3)-x(4,3))^2;];
and then call fsolve:
p0 = [1500,1500,1189]; % initial guess
options = optimset('Algorithm',{'levenberg-marquardt',.001},'Display','iter','TolX',1e-1);
[p,Fval,exitflag] = fsolve(@calculateED,p0,options);
I am running Matlab 2011b.
Am I missing something?
What would the least squares solution be?
One note here is that the m1, m2, m3, m4 and d values may not be given accurately, but for an analytical solution that shouldn't be a problem.
Mathematica readily solves the three-point problem numerically:
p = Table[ RandomReal[{-1, 1}, {3}], {3}]
r = RandomReal[{1, 2}, {3}]
Reduce[Simplify[ Table[Norm[{x, y, z} - p[[i]]] == r[[i]] , {i, 3}],
Assumptions -> {Element[x | y | z, Reals]}], {x, y, z}, Reals]
This will typically return False, since random spheres will usually not have a triple intersection point.
When you have a solution, you'll typically get a pair like this:
(* (x == -0.218969 && y == -0.760452 && z == -0.136958) ||
(x == 0.725312 && y == 0.466006 && z == -0.290347) *)
This somewhat surprisingly has a fairly elegant analytic solution. It's a bit involved, so I'll wait to see if someone has it handy; if not, and there is interest, I'll try to remember the steps.
Edit: approximate solution following Dmitys least squares suggestion:
p = {{370, 1810, 863}, {1586, 185, 1580}, {1284, 1948, 348}, {1732,
1674, 1974}};
r = {1387.5, 1532.5, 1104.7, 0855.6};
solution = {x, y, z} /.
Last@FindMinimum[
Sum[(Norm[{x, y, z} - p[[i]]] - r[[i]] )^2, {i, 1, 4}] , {x, y, z}]
Table[ Norm[ solution - p[[i]]], {i, 4}]
As you can see, you are pretty far from exact:
(* solution point {1761.3, 1624.18, 1178.65} *)
(* solution radii: {1438.71, 1504.34, 1011.26, 797.446} *)
I'll answer the second question. Let's name the unknown point X. If you have only one known point A and know the distance from X to A, then X can be anywhere on a sphere centered at A.
If you have two points A and B, then X is on a circle given by the intersection of the two spheres centered at A and B (if they intersect, that is).
A third point adds another sphere, and the final intersection of the three spheres will generally give two points.
The fourth point will finally decide which of those two points you're looking for.
This is how GPS actually works. You need at least three satellites. The GPS then guesses which of the two points is the correct one, since the other one is out in space, but it won't be able to tell you the altitude. Technically it should, but there are also errors, so the more satellites you "see", the smaller the error.
I have found this question, which might be a starting point.
Take the first three equations and solve them as a system of 3 equations in 3 variables in MATLAB. After solving, you will get two sets of values, i.e. two candidate sets of coordinates for p.
Substitute each set into the 4th equation; the set which satisfies that equation is the answer.
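For reference, here is a sketch (my own, not necessarily the "elegant analytic solution" mentioned earlier) of why this reduces to a small linear problem: subtracting the first squared-distance equation from the i-th one cancels the quadratic terms x^2 + y^2 + z^2, leaving a linear equation in (x, y, z):

D_i^2 - D_1^2 = -2(x_i - x_1)\,x - 2(y_i - y_1)\,y - 2(z_i - z_1)\,z + (x_i^2 + y_i^2 + z_i^2) - (x_1^2 + y_1^2 + z_1^2), \qquad i = 2, 3, 4

With three points this gives two linear equations plus one sphere equation, hence the two candidate solutions; with four non-coplanar points, i = 2, 3, 4 yields a 3x3 linear system that determines p directly for consistent data, or in the least-squares sense when the distances are noisy.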
We have some expression in cylindrical coordinates (r, ϕ, z), like expr := r*z^2*sin((1/3)*ϕ), and we need to convert it into Cartesian coordinates and then back to cylindrical coordinates. How can we do such a thing?
So I found something like this: eval(expr, {r = sqrt(x^2+y^2), z = z, ϕ = arctan(y, x)}), but it seems incorrect. How can I correct it, and how can I make eval convert backwards from Cartesian to cylindrical?
So I try:
R := 1;
H := h;
sigma[0] := sig0;
sigma := sigma[0]*z^2*sin((1/3)*`ϕ`);
toCar := eval(sigma, {r = sqrt(x^2+y^2), z = z, `ϕ` = arctan(y, x)});
toCyl := collect(eval(toCar, {x = r*cos(`ϕ`), y = r*sin(`ϕ`), z = z}), `ϕ`)
It looks close to correct, but look:
why is arctan(r*sin(ϕ), r*cos(ϕ)) not shown as ϕ?
Actually this is only the beginning of the fun for me, because I also need to calculate
Q := int(int(int(toCar, x = 0 .. r), y = 0 .. 2*Pi), z = 0 .. H)
and to get it back into Cylindrical coordinates...
simplify(toCyl) assuming r>=0, `ϕ`<=Pi, `ϕ`>-Pi;
Notice,
arctan(sin(Pi/4),cos(Pi/4));
        (1/4)*Pi

arctan(sin(Pi/4 + 10*Pi),cos(Pi/4 + 10*Pi));
        (1/4)*Pi

arctan(sin(-7*Pi/4),cos(-7*Pi/4));
        (1/4)*Pi

arctan(sin(-15*Pi/4),cos(-15*Pi/4));
        (1/4)*Pi

arctan(sin(-Pi),cos(-Pi));
        Pi

K:=arctan(r*sin(Pi/4),r*cos(Pi/4));
        arctan(r, r)

simplify(K) assuming r<0;
        -(3/4)*Pi

simplify(K) assuming r>0;
        (1/4)*Pi
Once you've converted from cylindrical to rectangular, any information about how many times the original angle ϕ might have wrapped around (past -Pi) is lost.
So you won't recover the original ϕ unless it was in (-Pi, Pi]. If you tell Maple that this is the case (along with r >= 0, so that it knows which half-plane), using assumptions, then it can simplify to what you're expecting.
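Stated as an identity (just the principal-value property of the two-argument arctangent):

\arctan(r \sin\varphi,\; r \cos\varphi) = \varphi \qquad \text{for } r > 0 \text{ and } \varphi \in (-\pi, \pi].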