Convert coordinate from one world coordinate system to object coordinate system - matlab

I am more interested in the mathematics behind this problem, but am using MATLAB to try and solve it. I have an object positioned at (Wx, Wy, Wz) in the world coordinate system.
I would like to calculate the coordinates of this point in the object coordinate system (Ox, Oy, Oz).
To do this, I first need to calculate the axes of the object coordinate system.
Step 1 is to find the Normal (Nx, Ny, Nz)
(assuming that this is not the world z-axis). My object has yaw, pitch and roll angles applied, so I need to find my normal relative to these.
To do this I use a rotation matrix and perform the rotations in the order stated above.
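As a sketch of this step (Python/NumPy here rather than MATLAB; the multiplication order and angle conventions below are assumptions you should match to your own setup), the normal can be obtained by rotating the world z-axis:

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Build a combined rotation matrix (angles in radians).
    The order used here (yaw about z, then pitch about y, then roll
    about x) is one common convention -- adjust to match your own
    definition of the angles."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    return Rz @ Ry @ Rx

# The object's normal is the rotated world z-axis:
N = rotation_matrix(0.3, 0.2, 0.1) @ np.array([0.0, 0.0, 1.0])
```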
Step 2 is to calculate the arbitrary axes.
If abs(Nx) < 1/64 and abs(Ny) < 1/64
(Axx, Axy, Axz) = cross product of world y-axis (0,1,0) and the Normal
else
(Axx, Axy, Axz) = cross product of world z-axis (0,0,1) and the Normal
(Ayx, Ayy, Ayz) = cross product of N and Ax
I then normalize my arbitrary axes by dividing each by the square root of the sum of its squared components.
Step 3 - Transform the coordinate
To transform the coordinate, you take the dot product of the initial coordinate with each of the axes.
Ox = Wx * Axx + Wy * Axy + Wz * Axz
Oy = Wx * Ayx + Wy * Ayy + Wz * Ayz
Oz = Wx * Nx + Wy * Ny + Wz * Nz
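Steps 2 and 3 above can be sketched as follows (Python/NumPy standing in for MATLAB; this follows the arbitrary axis selection and the dot products exactly as described):

```python
import numpy as np

def world_to_object(W, N):
    """Express world point W in the object coordinate system whose
    z-axis is the normal N, using the arbitrary axis construction."""
    N = np.asarray(N, dtype=float)
    N = N / np.linalg.norm(N)
    # Step 2: pick the arbitrary x-axis.
    if abs(N[0]) < 1/64 and abs(N[1]) < 1/64:
        Ax = np.cross([0.0, 1.0, 0.0], N)   # world y-axis x N
    else:
        Ax = np.cross([0.0, 0.0, 1.0], N)   # world z-axis x N
    Ax = Ax / np.linalg.norm(Ax)
    Ay = np.cross(N, Ax)                     # already unit length
    # Step 3: dot the world point with each axis.
    return np.array([np.dot(W, Ax), np.dot(W, Ay), np.dot(W, N)])
```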
DXF for autocad takes the object coordinates, the Normal vector and a rotation about the normal vector.
This appears to be working reasonably well at positioning the coordinate, but with some issues:
When I use the method above, I find that sometimes my objects are rotated 180 degrees. Digging into this, the arbitrary x-axis is sometimes negative and sometimes positive, which may account for some objects being rotated. But AutoCAD does not actually reference this Ax vector; it calculates it itself. This means I may have to correct the flip with a rotation about the normal, but I do not want it to always apply (I cannot simply look for a negative value and rotate if negative, as sometimes the object is required to be placed in that direction). I do not know how to overcome this.
If I apply a roll angle to the object and work through this process, the roll angle is not applied correctly. Instead it appears to be translated into a yaw and pitch change rather than the intended rotation. I cannot see what I have done wrong in the formulas above.

Related

How to convert cartesian to spherical coordinates for hotspots on a 360 image

I have an equirectangular 360 image which will have hotspots mapped onto it in standard X/Y coordinate space.
In Unity, this image will be mapped to a sphere, and I will position the hotspots to the inner surface of the sphere.
I need the math for converting these cartesian coordinates to spherical coordinates measured from the centre of the sphere (where the camera will be).
Peter O. is right, of course, although there is an easy standard way to perform an inverse equirectangular projection.
There are two primary ways of writing spherical coordinates, the 'mathematical' and the 'physical'. The only difference is the naming of the coordinates. See the two illustrations of coordinate systems at the top of this article: https://en.wikipedia.org/wiki/Spherical_coordinate_system.
I will assume we use the mathematical one, with θ in the x-y-plane and φ measured from the z-axis. Then, the projection is simply:
θ = 2π * x / w - π, where w is the width of the image and x is the x-position in pixels. This will position midpoints of the image along the x-axis of the sphere, which is probably preferred. If the coordinate system takes values in the [0, 2π] range, you should do (2π * x / w + π) % 2π instead.
φ = π * y / h, where h is the height of the image, and y is the y-position in pixels.
And r is just some constant which can be freely chosen, of course.
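The two formulas above can be sketched in Python (a minimal version; the conversion back to Cartesian for placing the hotspot is included, assuming the mathematical convention described):

```python
import math

def pixel_to_spherical(x, y, w, h, r=1.0):
    """Map an equirectangular pixel (x, y) to spherical (r, theta, phi),
    with theta in the x-y plane and phi measured from the +z axis."""
    theta = 2 * math.pi * x / w - math.pi   # longitude in [-pi, pi]
    phi = math.pi * y / h                   # colatitude in [0, pi]
    return r, theta, phi

def spherical_to_cartesian(r, theta, phi):
    """Convert back to Cartesian for positioning the hotspot in the scene."""
    return (r * math.sin(phi) * math.cos(theta),
            r * math.sin(phi) * math.sin(theta),
            r * math.cos(phi))
```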
I hope this helps.

Character Jump Arc? Alter the Y axis during Vector3.MoveTowards?

I'm trying to make a character jump between two points. The two points are varying distances apart, and at different heights.
I have the character moving from point to point using Vector3.MoveTowards in an IEnumerator. But how can I modify the Y axis so that the character moves along a curved path, to appear as if jumping?
The character needs to land exactly at each point, so I cannot use physics.
Thanks! :-)
Image Example
Extra bonus points if you can adjust where you want the peak of the jump to occur (so the curve isn't perfectly circular, but more like an arc) E.g. so that the peak of the jump is closer to destination.
Looking at your given image, I would suggest using the projectile-motion equations to calculate the path between the source and destination in a given time, with a given start velocity (Vo) and a given angle (theta).
In case you are not familiar with the projectile equations, have a look here:
https://en.wikipedia.org/wiki/Projectile_motion
In the Displacement section you'll find 2 equations like this:
x = Vo * T * cos(theta)
y = Vo * T * sin(theta) - 0.5 * g * pow(T,2)
So, in the Update function, don't move the object directly towards the target; instead, take temporary targets along the projectile path, which you can calculate using the two equations above. You can then use,
Vector3.MoveTowards(curPosition,new Vector3(x,y,0),step);
Considering, the z value is 0.
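The sampling idea can be sketched as follows (Python rather than C#; the gravitational constant, launch speed, angle, and flat-ground landing time are illustrative assumptions — in Unity you would feed each waypoint to Vector3.MoveTowards):

```python
import math

G = 9.81  # gravitational acceleration, assumed constant

def projectile_point(v0, theta, t):
    """Displacement after time t for launch speed v0 and angle theta
    (radians), from the standard projectile-motion equations."""
    x = v0 * t * math.cos(theta)
    y = v0 * t * math.sin(theta) - 0.5 * G * t * t
    return x, y

def waypoints(v0, theta, n):
    """Sample n+1 temporary targets along the arc, up to the landing
    time t_land = 2*v0*sin(theta)/g (flat ground assumed)."""
    t_land = 2 * v0 * math.sin(theta) / G
    return [projectile_point(v0, theta, t_land * i / n) for i in range(n + 1)]
```

To land exactly at a point with a different height, you would instead solve the second equation for the t at which y equals the height difference, and scale v0 and theta accordingly.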

finding the intersection of a line with an arbitrary surface?

I am using ray tracing and at the beginning I assumed a plane surface so I used the equation of the plane surface which is :
Ax + By + Cz + d = 0
where A, B and C are the components of the plane's normal vector: Normal = [A B C]
and using the Ray equation : Ray = Source + t*Direction
And then solve it for t and I can find the intersection points.
My question: I now have a function in MATLAB to read the surface of the object, but the object may not have a plane surface. I am getting the surface data [X Y Z], but I don't know which equation I should use to find t and then the intersection point. I even have a function that gives me the normal vector at each point.
If you can edit the tags to get the right ones, please do it.
It might not be a plane, but you can always calculate a normal vector at each point. You'll just have to work harder to do it. Take two partial derivatives in planar coordinates, cross those vectors, and that's the normal at the point.
If your surface is defined as a height Z on some X-Y grid, you can solve it easily using fzero. This excludes some complex shapes, but might work for standard optics problems like a ray hitting a lens. Assume that X, Y and Z are 2-D matrices with the same shape. You can then do a 2-D interpolation like
z_interp = interp2(X,Y,Z,x_interp,y_interp)
If this is not the case, you should try to define your own function that can calculate z based on x and y.
For the line, we have
x_ray = x_source + t * x_dir
y_ray = y_source + t * y_dir
z_ray = z_source + t * z_dir
So you can now define a function that calculates the height above the surface as a function of t as
height_above_plane = @(t) z_source + t * z_dir - interp2(X, Y, Z, ...
    x_source + t*x_dir, y_source + t*y_dir);
Note that this might not be the shortest distance from the point to the plane, it is just the height measured along the z-direction. The time the ray hits the surface can now be found by searching for the t for which the height is zero. This can be done for arbitrary functions using fzero:
t_intercept = fzero(height_above_plane, 0);
This should work well for simple cases where the function defining the surface is relatively smooth and the ray crosses the surface only once. It might be possible to transform cases with more complex geometry into such a simple case.
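The same approach can be sketched in Python, with a hand-rolled bisection standing in for fzero (the flat test surface `lambda x, y: 0.0` and the bracket [t_lo, t_hi] are assumptions for illustration; a real surface would use interpolated height data as in the MATLAB version above):

```python
def intersect_ray_surface(source, direction, surface_z,
                          t_lo=0.0, t_hi=100.0, tol=1e-10):
    """Find t where the ray source + t*direction meets the surface
    z = surface_z(x, y), by bisecting on the signed height above the
    surface. Assumes the height changes sign exactly once in the
    bracket [t_lo, t_hi]."""
    def height(t):
        x = source[0] + t * direction[0]
        y = source[1] + t * direction[1]
        z = source[2] + t * direction[2]
        return z - surface_z(x, y)
    lo, hi = t_lo, t_hi
    if height(lo) * height(hi) > 0:
        raise ValueError("no sign change in bracket")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if height(lo) * height(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: a ray descending onto the plane z = 0
t = intersect_ray_surface((0.0, 0.0, 1.0), (1.0, 0.0, -1.0),
                          lambda x, y: 0.0)
```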
If you can get the X Y Z of the surface, and you said you can get the normal vector at each point, then what is your problem now?
The X Y Z of the surface are the intersection points, and if you have the normal vector at each point you can calculate whatever you want (the reflected or the refracted rays).
I think you have no trouble at all.

IOS openGL best way to rotate Sphere with touchesMoved

I drew a globe object using OpenGL and I can rotate it with a finger touch, but it doesn't work well in some cases because I am rotating using the difference between x and y:
Rotation3D rot = sphere.currentRotation;
rot.x += diffX ;
rot.y += diffY ;
rot.z += 10 ;
sphere.currentRotation = rot;
When you move your finger from top right to bottom left, it doesn't work well.
Any ideas ?
Thanks
Peter Gabra
To arbitrarily rotate objects, it's easiest to store their current orientation as a transformation matrix and manipulate the elements. I explain this in detail here.
The only difference is that in that other question, the OP wanted to apply rotations from two controls (horizontal and vertical), whereas you are after drag-based rotation. The technique is basically the same, but instead of rotating around either the X or Y axis, you need to compute an arbitrary axis of rotation from the touch's delta vector as follows:
axis = [0, 0, 1] ⨯ [diffX, diffY, 0]
(⨯ = "cross product")
Then you rotate the U, V and W vectors (as described in my other answer) around the axis by some angle in proportion to the length of the delta vector:
M = rotation(k * length([diffX, diffY, 0]), axis)
U = M * U
V = M * V
W = M * W
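This update can be sketched in Python/NumPy (Rodrigues' rotation formula stands in for the `rotation(angle, axis)` helper; the sensitivity constant `k` is an assumption you would tune):

```python
import numpy as np

def rotation_about_axis(angle, axis):
    """Rodrigues' formula: matrix rotating by `angle` (radians)
    about the unit vector `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def apply_drag(U, V, W, diffX, diffY, k=0.01):
    """Rotate the orientation basis U, V, W by an angle proportional
    to the drag length, about the axis [0,0,1] x [diffX,diffY,0]."""
    delta = np.array([diffX, diffY, 0.0])
    if np.linalg.norm(delta) == 0:
        return U, V, W  # no drag, no rotation
    axis = np.cross([0.0, 0.0, 1.0], delta)
    M = rotation_about_axis(k * np.linalg.norm(delta), axis)
    return M @ U, M @ V, M @ W
```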
If you find the object rotating in the opposite direction to what you expect, there are three possibilities:
If it's only the vertical rotation that goes the wrong way, you need to negate diffY. This is a common mistake I make due to inconsistencies between OpenGL and UIKit coordinate systems.
If it's all rotation, you can either swap the arguments in the cross-product or use [0, 0, -1]. This is usually because of confusion between left- and right-handed coordinate systems.
If it's just the horizontal rotation, make both adjustments. (Don't negate diffX; no one uses right-to-left X-coordinates.)
In case you're using Euler angles: I recommend not modelling rotations with Euler angles. Use Quaternions instead. It might seem to make your code more complicated, but rotations work much better with Quaternions. Here are some advantages:
it's very straightforward to apply user interaction to current rotation state
no gimbal lock problems
no need for matrix drift adjustments after repeated rotations
you can interpolate rotations easily
Note that Apple gives you a Quaternion type to use: GLKQuaternion. No need to write your own Quaternion class.
See also:
http://www.ogre3d.org/tikiwiki/Quaternion+and+Rotation+Primer
Euler angles vs. Quaternions - problems caused by the tension between internal storage and presentation to the user?

biot savart matlab (couldn't find a matlab forum)

I want to calculate the magnetic field from a given image using the Biot-Savart law. For example, given a picture of a triangle, I say that this triangle forms a closed wire carrying current. Using image derivatives I can get the coordinates and direction of the current (normals included). I am struggling to implement this and need a bit of help with the logic too. Here is what I have:
Img = imread('littletriangle.bmp');
Img = Img(:,:,1);
Img = double(Img);
[x,y] = size(Img);
[Ix, Iy] = gradient(Img);
biot savart equation is:
B = (mu0 / (4*pi)) * sum(I dl × r̂ / r^2)
where mu0/(4*pi) is a constant, I is the current magnitude, dl is the wire element, r̂ is the unit vector from the current element to the pixel, and r^2 is the squared magnitude of the displacement between the pixel and the current element.
So just to start off, I read the image in, turn it into a binary and then take the image gradient. This gives me the location and orientation of the 'current'. I now need to calculate the magnetic field from this 'current' at every pixel in the image. I am only interested in getting the magnetic field in the x-y plane. anything just to start me off would be brilliant!
For a long straight wire:
B = mu * I / (2*pi*r)
B is a vector; its direction is perpendicular to the line between the wire and the point of interest. The fastest way to rotate a vector by 90° is swapping the coordinates with a sign flip, so (x, y) becomes (-y, x).
What about the current? The current is homogeneous inside the wire (which can be the triangle), so the current at each point is just the whole current I normalized per point.
So how to do this?
Get the current per pixel (current / number of pixels in the shape).
For each point, calculate B (with r from Pythagoras) as the sum over all the other mini-wires, each pixel treated as a wire using the equation above. (B is a vector and has a direction, so keep track of it as (x, y).)
A 100*100 picture will yield (100*100)*(100*100) evaluations of the B equation, or somewhat fewer if you do not calculate the field in empty space.
In the end, instead of just mu * I / (2*pi*r), B is a sum over all the wire elements, and I becomes dI.
You do not need to apply any derivatives, just integration (a sum).
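A minimal Python sketch of this per-pixel summation (treating each shape pixel as a long straight wire perpendicular to the image plane; the grid size and total current are illustrative assumptions):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability

def field_from_wire_pixels(wire_pixels, grid_w, grid_h, current=1.0):
    """In-plane B field summed over all wire pixels. Each pixel carries
    current / len(wire_pixels); each contribution has magnitude
    mu0*dI/(2*pi*r) and points perpendicular to the separation vector
    (rotate (dx, dy) by 90 degrees to (-dy, dx))."""
    dI = current / len(wire_pixels)
    Bx = [[0.0] * grid_w for _ in range(grid_h)]
    By = [[0.0] * grid_w for _ in range(grid_h)]
    for py in range(grid_h):
        for px in range(grid_w):
            for wx, wy in wire_pixels:
                dx, dy = px - wx, py - wy
                r = math.hypot(dx, dy)
                if r == 0:
                    continue  # skip the wire pixel itself
                b = MU0 * dI / (2 * math.pi * r)
                # unit vector perpendicular to (dx, dy):
                Bx[py][px] += b * (-dy) / r
                By[py][px] += b * dx / r
    return Bx, By
```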