How to properly rotate a Matrix with respect to a parent Matrix? - unity3d

I am working with Matrix4x4's in Unity and have some local-space Matrices under parent Transforms that have their own rotation, scale and position. Whenever a parent Transform is moved, scaled or rotated, I need to reflect that change on the child Matrices as well.
For translating the position of the child into worldspace relative to the parent, that seems pretty easy:
WorldSpaceMatrix = LocalSpaceMatrix * transform.localToWorldMatrix;
However, I am really struggling with how to reflect a change in the parent's rotation onto a child Matrix while also updating the child's position to account for its offset from the parent. The line above picks up the local rotation of the Matrix itself, which is fine, but it doesn't update any change in position. Effectively I need to "RotateAround" the parent, but I don't know how.
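For reference, a minimal sketch of the usual composition in Unity: Matrix4x4 uses column vectors, so when chaining a child matrix with its parent, the parent's matrix goes on the left (the reverse of the operand order in the snippet above). The localSpaceMatrix field is illustrative, and the component is assumed to sit on the parent Transform:

using UnityEngine;

public class MatrixChild : MonoBehaviour
{
    // Illustrative child matrix, expressed in this transform's local space.
    public Matrix4x4 localSpaceMatrix = Matrix4x4.identity;

    // Column-vector convention: transforms apply right-to-left, so the
    // parent's local-to-world matrix goes on the LEFT. This single multiply
    // carries the parent's rotation, scale and position into the child,
    // including the "RotateAround the parent" position offset.
    public Matrix4x4 WorldSpaceMatrix =>
        transform.localToWorldMatrix * localSpaceMatrix;
}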

Related

Some problem with rotation angles in unity?

I just attached one object to another. I don't understand what happened, but the rotation angles of the child object shown in the Inspector and those returned by myChildObject.transform.rotation.eulerAngles (or myChildObject.transform.localRotation.eulerAngles) are different from one another.
The Inspector shows the localRotation of objects, which will not change unless you change (or reparent) that specific object. Changing a parent's rotation or localRotation will never change its (existing) child's localRotation.
A child object with 0,0,0 Euler angles just means it isn't rotated compared to its parent -- i.e. its world rotation should match its parent's world rotation.
Rotating the parent should change the child's global rotation, but if the child does not have a localRotation of 0,0,0 they won't match. For example if the parent had a rotation of 45,0,0 and the child had a localRotation of 0,0,90, the child would end up with a global rotation of 45,0,90. (Things can get much weirder than that, though.)
To put it another way, the child's global rotation is its own localRotation combined with the rotations of all of its parents.
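A quick way to verify this in code - a minimal sketch, assuming the object actually has a parent:

using UnityEngine;

public class RotationCheck : MonoBehaviour
{
    void Update()
    {
        // A child's world rotation is the parent's world rotation composed
        // with the child's own localRotation (quaternion multiplication,
        // parent first). Assumes transform.parent is not null.
        Quaternion expected = transform.parent.rotation * transform.localRotation;
        Debug.Log(expected == transform.rotation); // true, within tolerance
    }
}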

SpriteKit sprite position inside node tree

Currently working on a simple 2D game, where I have a player character that is split into multiple sprites (head, torso, legs, arms, ...).
The absolute coordinates are correct in Aseprite (if I take the individual sprites and position them there, everything lines up).
When I move everything into Swift and use negative y instead of positive, everything gets totally weird.
For example, in Aseprite I have coordinates as follows: head (30, 17), torso (30, 24) and legs (28, 35). Everything aligns perfectly.
In SpriteKit I extend the SKNode class and put every sub-sprite inside, just with a negative number for y, so instead of going up I draw down. It looks like the coordinates given in pixels are not correct - the sprites are off by a few pixels. Mostly the y coordinate is off, but in some cases (character rotation) the x is too.
How to get from those absolute upper-left coordinates to correct SpriteKit coordinates then?
OK, I figured out what was wrong (several things):
- Your parent class can be SKNode, but the children have to have their anchor point set to (0, 1).
- When changing the texture of a child sprite, always make sure the sprite's size is set to the new texture's size. If not, it will keep the previous texture's size and resize the new texture to the old size, which introduces additional positioning problems. So you have to call child.size = child.texture!.size() (I used a force unwrap because I set the (new) texture one step before, so I'm 100% sure it exists); see the sketch below.
- When setting a new texture, set the anchor point again (it seems to get reset when you change a child sprite's texture).
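A short Swift sketch of the last two points, using a hypothetical helper on SKSpriteNode:

import SpriteKit

extension SKSpriteNode {
    // Hypothetical helper: swap the texture without breaking layout.
    func setTextureKeepingLayout(_ newTexture: SKTexture) {
        texture = newTexture
        // Without this, the sprite keeps its old size and stretches the
        // new texture to fit it.
        size = newTexture.size()
        // Re-apply the top-left anchor from the first point, since the
        // anchor can appear to reset after a texture change.
        anchorPoint = CGPoint(x: 0, y: 1)
    }
}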

Why am I getting an incorrect vector when trying to find HingeJoint2D.anchor in world space?

In the scene, I have a long chain of children that are connected via hinge to their parent. For my code, I need the position of the hinge anchors in world space, so I use:
public Vector2 hingeVector => hinge.anchor + (Vector2)gameObject.transform.position;
For the first hinge, that code gives the correct position. But for the second hinge this happens:
The red point is the vector I get, the blue point is the actual position. As you can see, it's a somewhat small but still problematic difference.
Is there any way I can fix this? I couldn't find anything like this online.
You need to account for the object's rotation.
The anchor values are axis-aligned and aren't affected by rotation, so to calculate the anchor point in world space, knowing the transform's position, you need to rotate the anchor values by the object's rotation and then add them to the position:
// Rotate the local anchor by the object's rotation (Quaternion * vector),
// then offset by the world position. (Vector2 has no built-in Rotate method.)
Vector2 p = (Vector2)(gameObject.transform.rotation * (Vector3)hinge.anchor)
          + (Vector2)gameObject.transform.position;
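If the object can also be scaled, Transform.TransformPoint may be a simpler route, since it applies position, rotation and scale in one call:

// Full local-to-world conversion of the local anchor
// (position + rotation + scale); Vector3 converts to Vector2 implicitly.
Vector2 p = hinge.transform.TransformPoint(hinge.anchor);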

Unity3D localposition relation to parent

I'm trying to position some gameObjects relative to their parent. I'm using localPosition, but the gameObjects are not placed where I expect. If I use local position, the parent's width is considered to be 1, right? So if I have a plane, Unity considers it to be a 1x1 square.
I tried to put some models at local position (1, 1), but they are not placed in the top right corner of the plane...
Do you guys have some thoughts on what might be the problem?
You're incorrect: local coordinates are not normalized to the parent's size. A child's localPosition is measured in the parent's local space, which is then multiplied by the parent's x/y/z scale; one local unit only equals one world unit when the parent's scale is (1, 1, 1). If your plane's mesh is 100x100 horizontally, unscaled and centred at (0, 0, 0), you need to put the model at (50, 0, 50) to be at the corner.
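A quick sketch that demonstrates the scale factor, assuming the model is a child of a default Unity plane (a 10x10 mesh) scaled to (10, 1, 10) so it covers 100x100 world units:

using UnityEngine;

public class CornerPlacement : MonoBehaviour
{
    void Start()
    {
        // localPosition is expressed in the parent's local space, so the
        // parent's scale multiplies it: with a (10, 1, 10) scale, a local
        // offset of (5, 0, 5) lands 50 world units from the centre -- the
        // corner of the 100x100 plane -- not at a normalized (1, 1).
        transform.localPosition = new Vector3(5f, 0f, 5f);
        Debug.Log(transform.position); // ~(50, 0, 50) if the plane is at the origin
    }
}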

iphone cocoa : how to drag an image along a path

I am trying to figure out how you can drag an image while constraining its movement along a certain path.
I tried several tricks, including animating along a path, but couldn't get the animation to play, pause and play backwards - so that seems out of the question.
Any ideas? Anyone?
What you're basically trying to do is match finger movement to a 'translation' transform.
As the user touches down and starts to move their finger, you want to use the current touch point value to create a translation transform which you apply to your UIImageView. Here's how you would do it (a Swift sketch of these steps follows below):
1. On touch down, save the image view's starting x,y position.
2. On move, calculate the delta from the old point to the new one. This is where you can clamp the values: ignore, say, the y change and only use the x deltas, and the image will only move left and right; ignore x and use y instead, and it only moves up and down.
3. Once you have the 'new' calculated/clamped x,y values, use them to create a new transform with CGAffineTransformMakeTranslation(x, y). Assign this transform to the UIImageView and the image moves to that place.
4. Once the finger lifts, figure out the delta from the original starting x,y point to the lift-off point, then adjust the image view's frame and reset the transform to CGAffineTransformIdentity. This doesn't move the object, but it means subsequent accesses to the image view use the actual position and don't have to keep adjusting for transforms.
Moving along a grid is easy too: just round the x,y values in step 2 so they're a multiple of the grid size (i.e. round to every 10 pixels) before you pass them on to make the translation transform.
If you want to make it extra smooth, surround the code where you assign the transform with UIView animation blocks and mess around with the easing and timing settings. The image should drag behind a bit but smoothly 'rubber-band' from one touch point to the next.
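The original answer predates Swift, but a minimal sketch of steps 1-4 might look like this (clamped to horizontal movement; names are illustrative):

import UIKit

class DraggableImageView: UIImageView {
    // Where the current touch sequence started, in the superview's coordinates.
    private var touchStart = CGPoint.zero

    override init(image: UIImage?) {
        super.init(image: image)
        // UIImageView ignores touches unless this is enabled.
        isUserInteractionEnabled = true
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Step 1: save the starting point on touch down.
        touchStart = touches.first?.location(in: superview) ?? .zero
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: superview) else { return }
        // Step 2: compute the delta and clamp it; y is ignored here, so the
        // image only slides horizontally. Round dx to a multiple of the grid
        // size here if you want grid snapping.
        let dx = point.x - touchStart.x
        // Step 3: apply the clamped delta as a translation transform.
        transform = CGAffineTransform(translationX: dx, y: 0)
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Step 4: bake the offset into the view's real position and reset the
        // transform, so later code sees the actual frame.
        center.x += transform.tx
        transform = .identity
    }
}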
See this sample code: Move Me