Conversion of coordinate systems in Unity 2D

2D game.
Given specific points (x, y), each ranging from 0 to 600, measured from the top-left corner.
How do I convert the view so that instantiated objects don't all appear in one corner, but instead adjust to the camera (the aspect ratio)?

Hard to answer questions without details, but you probably need to try
Instantiate(YourGameObject, Camera.main.transform.position, Quaternion.identity);
to create a game object at the camera's position; to move it right/left and up/down, just add another vector to the position parameter.
If you want to use normalized coordinates, map them with a simple proportion (A1 / A2 = B1 / B2), but that's harder and you shouldn't need to.
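For the original question, here is a minimal sketch of such a mapping, assuming the 0-600 design space from the question and a typical 2D setup (objects on the z = 0 plane, camera in front of it); the class and method names are illustrative:

using UnityEngine;

public static class DesignSpaceMapper
{
    // Maps a point from a 600x600 design space (origin top-left, y down)
    // to a world position inside the camera's view, at any aspect ratio.
    public static Vector3 ToWorld(Camera cam, float x, float y, float designSize = 600f)
    {
        // Normalize to viewport coordinates: (0,0) bottom-left, (1,1) top-right.
        float u = x / designSize;
        float v = 1f - y / designSize;   // flip y: the design space grows downwards

        // Distance from the camera to the z = 0 plane where the 2D objects live.
        float depth = -cam.transform.position.z;
        return cam.ViewportToWorldPoint(new Vector3(u, v, depth));
    }
}

Then Instantiate(YourGameObject, DesignSpaceMapper.ToWorld(Camera.main, x, y), Quaternion.identity) spawns the object where the design coordinates say it should be. Note this stretches the design space to fill the view; if proportions must be preserved, scale both axes by the same factor instead.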

Related

Does z-axis guarantee correct layer sorting in 2D?

Suppose I have a lot of SpriteRenderers in the scene and I don't want to sort them with sorting layers, but with the z-axis instead: the bigger the z of an object, the farther it is from the camera, so it should be rendered behind other objects whose z is smaller.
I tried this out with an orthographic camera and it seems to work fine, but my question is: is it guaranteed that an object with a bigger z will be rendered behind an object with a smaller z?
More formally: let A and B be two objects with z-positions ZA and ZB, respectively. If ZA < ZB, is A necessarily rendered in front of B?
There are no sorting layers, and all objects have the same order in layer (but not necessarily the same layer). The camera is orthographic.
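For reference, the setup being asked about looks like this hypothetical sketch (the two sprite fields are placeholders to be assigned in the Inspector):

using UnityEngine;

public class ZSortingDemo : MonoBehaviour
{
    public SpriteRenderer near;   // placeholder sprites
    public SpriteRenderer far;

    void Start()
    {
        // Same sorting layer and order in layer; only z differs.
        // The question asks whether the smaller-z sprite is guaranteed
        // to be rendered in front under an orthographic camera.
        near.transform.position = new Vector3(0f, 0f, 1f);
        far.transform.position  = new Vector3(0f, 0f, 5f);
    }
}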

Move 3d model forwards based on rotation

I have a 3D model in Xcode using SceneKit that can rotate around itself, and I would like it to move forwards based on its rotation. For example, if it is rotated 236 degrees around the z-axis, it wouldn't go straight along x or y, but a bit of both, so it would move forwards. Is it possible? Do I have to get any plugins?
No plugins required.
You can do this in two main ways:
Move the object's position relative to its rotation by changing its transform over time (a sketch of this approach follows below).
Apply force (and/or impulses) over time (or instantly) in the direction you'd like your entity to travel.
Within these two approaches there are a lot of other considerations regarding scene size, resistance, speed, immediacy, etc.
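A minimal sketch of the math behind the first approach, written in C# to match the rest of this page (SceneKit itself is Objective-C/Swift, so treat this as pseudocode for driving an SCNNode's position; all names are illustrative):

using System;

public static class ForwardMover
{
    // Advances a 2D position along the heading implied by a rotation of
    // `degrees` around the z-axis, where 0 degrees means facing +y.
    // A rotation of 236 degrees therefore moves "a bit of both" x and y.
    public static (double X, double Y) Step(double x, double y,
                                            double degrees, double speed, double dt)
    {
        double radians = degrees * Math.PI / 180.0;
        return (x - Math.Sin(radians) * speed * dt,
                y + Math.Cos(radians) * speed * dt);
    }
}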

3D trajectory reconstruction from video (taken by a single camera)

I am currently trying to reconstruct the 3D trajectory of a falling object, like a ball or a rock, from a sequence of images taken from an iPhone video.
Where should I start looking? I know I have to calibrate the camera (I think I'll use the MATLAB calibration toolbox by Jean-Yves Bouguet) and then find the vanishing point from the same sequence, but then I'm really stuck.
Read this: http://www.cs.auckland.ac.nz/courses/compsci773s1c/lectures/773-GG/lectA-773.htm
It explains 3D reconstruction using two cameras. Now, for a simple summary, look at the figure from that site:
You only know pr/pl, the image points. By tracing a line from the respective focal points Or/Ol through them, you get two lines (Pr/Pl) that both contain the point P. Because you know the two cameras' origins and orientations, you can construct 3D equations for these lines. Their intersection is thus the 3D point; voila, it's that simple.
But when you discard one camera (let's say the left one), you only know the line Pr for sure. What's missing is depth. Luckily, you know the radius of your ball; this extra information can give you the missing depth. See the next figure (don't mind my paint skills):
Now you know the depth, using the intercept theorem.
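Concretely, with a pinhole model the intercept theorem reads r / f = R / Z, where f is the focal length in pixels, R the real ball radius, and r the imaged radius in pixels, so Z = f * R / r. A minimal sketch (names illustrative, C# as elsewhere on this page):

using System;

public static class DepthFromSize
{
    // Depth of the ball along the viewing ray, via the intercept theorem:
    // imagedRadius / focalLength = realRadius / depth  =>  depth = f * R / r
    public static double Depth(double focalLengthPixels,
                               double realRadiusMeters,
                               double imagedRadiusPixels)
    {
        return focalLengthPixels * realRadiusMeters / imagedRadiusPixels;
    }
}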
I see one last issue: the shape of the ball changes when it is projected at an angle (i.e. not perpendicular to your capture plane). However, you do know the angle, so compensation is possible, but I leave that up to you :p
edit: #ripkars' comment (comment box was too small)
1) ok
2) aha, the correspondence problem :D Typically solved by correlation analysis or matching features (mostly matching followed by tracking in a video). (other methods exist too)
I haven't used the image/vision toolbox myself, but there should definitely be some things to help you on the way.
3) = calibration of your cameras. Normally you only do this once, when installing the cameras (and any time you change their relative pose).
4) Yes, just put the Longuet-Higgins equations to work, i.e. solve
P = C1 + mu1*R1*K1^(-1)*p1
P = C2 + mu2*R2*K2^(-1)*p2
with
P = 3D point to find
C = camera center (vector)
R = rotation matrix expressing the orientation of the camera in the world frame
K = calibration matrix of the camera (containing internal parameters of the camera, not to be confused with the external parameters contained by R and C)
p1 and p2 = the image points
mu = parameter expressing the position of P on the projection line from camera center C to P (if I'm correct, R*K^(-1)*p is a direction vector pointing from C towards P)
These are 6 scalar equations in 5 unknowns: mu1, mu2, and the three components of P.
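A minimal sketch of solving that system in the least-squares sense, using the standard closest-point-between-two-rays construction; it assumes the direction vectors d = R*K^(-1)*p have already been computed, and uses System.Numerics for the vector math:

using System;
using System.Numerics;

public static class Triangulator
{
    // Solves P = C1 + mu1*d1 and P = C2 + mu2*d2 in the least-squares sense.
    // With noisy image points the two rays don't intersect exactly, so we
    // return the midpoint of the closest-approach segment between them.
    public static Vector3 Triangulate(Vector3 c1, Vector3 d1, Vector3 c2, Vector3 d2)
    {
        Vector3 w0 = c1 - c2;
        float a = Vector3.Dot(d1, d1);
        float b = Vector3.Dot(d1, d2);
        float c = Vector3.Dot(d2, d2);
        float d = Vector3.Dot(d1, w0);
        float e = Vector3.Dot(d2, w0);

        float den = a * c - b * b;      // near zero when the rays are parallel
        float mu1 = (b * e - c * d) / den;
        float mu2 = (a * e - b * d) / den;

        Vector3 p1 = c1 + mu1 * d1;     // closest point on the first ray
        Vector3 p2 = c2 + mu2 * d2;     // closest point on the second ray
        return 0.5f * (p1 + p2);
    }
}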
edit: #ripkars' comment (comment box too small once again)
The only computer vision library that pops up in my mind is OpenCV (http://opencv.willowgarage.com/wiki). But that's a C/C++ library, not MATLAB... I guess Google is your friend ;)
About the calibration: yes, if those two images contain enough information to match some features. If you change the relative pose of the cameras, you'll have to recalibrate, of course.
The choice of the world frame is arbitrary; it only becomes important when you want to analyze the retrieved 3D data afterwards. For example, you could align one of the world planes with the plane of motion, which gives a simplified motion equation if you want to fit one.
This world frame is just a reference frame, changeable with a change-of-reference-frame transformation (a translation and/or rotation).
Unless you have a stereo camera, you will never be able to know the position for sure, even with a calibrated camera, because you don't know whether the ball is small and close or large and far away.
There are other methods with a single camera, based on a series of images with different focus, but I doubt you can control the camera of your cell phone in that way.
Edit(1):
As #GuntherStruyf correctly points out, you can know the position if one of your inputs is the size of the ball.

Not able to calibrate camera view to 3D Model

I am developing an app which uses Lucas-Kanade (LK) for tracking and POSIT for pose estimation. I successfully obtain the rotation matrix and projection matrix, and tracking works perfectly, but I am not able to translate the 3D object properly: it does not fit into the place where it should.
Will someone help me with this?
Check these links; they may provide you with some ideas.
http://computer-vision-talks.com/2011/11/pose-estimation-problem/
http://www.morethantechnical.com/2010/11/10/20-lines-ar-in-opencv-wcode/
Now, you should also check whether the intrinsic camera parameters are correct. Even a small error in estimating the field of view can cause trouble when trying to reconstruct 3D space, and from your description it seems the problem is bad fov angles (field of view).
You can try to measure them, or feed half or double the value to your algorithm.
There are two conventions for fov: half-angle (from the image center to the top or to the left) and full-angle (from bottom to top, respectively from left to right). Maybe you just mixed them up, using full-angle instead of half-angle, or vice versa.
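To make the two conventions concrete, a small sketch of the pinhole relations (assuming a centered principal point; names are illustrative):

using System;

public static class FovUtil
{
    // Full field of view (degrees) from a focal length in pixels:
    // tan(fullFov / 2) = (imageSize / 2) / focalLength
    public static double FullFovDegrees(double focalLengthPixels, double imageSizePixels)
    {
        return 2.0 * Math.Atan(imageSizePixels / (2.0 * focalLengthPixels))
                   * 180.0 / Math.PI;
    }

    // Converting between the two conventions mentioned above:
    public static double HalfToFull(double halfAngle) => 2.0 * halfAngle;
    public static double FullToHalf(double fullAngle) => fullAngle / 2.0;
}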
Maybe you can show us how you build a transformation matrix from R and T components?
Remember that the cv::solvePnP function returns an inverse transformation (e.g. camera in world): it finds the object pose in 3D space where the camera is at (0;0;0). For almost all cases you need to invert it to get the correct result: {R^T; -R^T * T}.
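To make the inversion explicit, a minimal sketch building the inverse from a 3x3 rotation R and a translation T (assuming x_cam = R * x_world + T; arrays are row-major and names are illustrative):

using System;

public static class PoseUtil
{
    // Inverts a rigid transform: if x_cam = R * x_world + T,
    // then x_world = R^T * x_cam - R^T * T.
    public static (double[,] Rinv, double[] Tinv) Invert(double[,] R, double[] T)
    {
        var Rt = new double[3, 3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                Rt[i, j] = R[j, i];             // transpose = inverse for a rotation

        var Tinv = new double[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                Tinv[i] -= Rt[i, j] * T[j];     // -R^T * T
        return (Rt, Tinv);
    }
}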

Xcode translate matrix à la ActionScript?

I am wondering if there is a way to translate the underlying matrix of a layer, much like you can in ActionScript 3.
In AS3 I can get the transform of a layer and shift it to, let's say, make the center of the layer the anchor point rather than the upper-left corner.
The reason I ask is because I am trying to rotate a layer (containing a square) around a diagonal axis. I thought it might be easy if I could rotate the matrix by 45 degrees; then I could just rotate around the x-axis and be done.
But I cannot figure out how to do that.
Any help greatly appreciated, as always.
Cheers,
Chris
Use a CGAffineTransform.
Edit:
I am afraid I don't know what you mean by "rotating an object along a diagonal axis". What you most likely need to do is concatenate two or more transforms.
See Figure 5-8 in the Quartz 2D Programming Guide: the concatenation of two transforms creates the appearance of the image rotating around its lower-left corner.
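The same translate-rotate-translate concatenation, sketched with .NET's System.Numerics matrices for illustration (on iOS the corresponding calls would be CGAffineTransformTranslate and CGAffineTransformRotate, or CGAffineTransformConcat):

using System;
using System.Numerics;

public static class CenterRotation
{
    // Rotates around an arbitrary center by concatenating three transforms:
    // translate the center to the origin, rotate, then translate back.
    public static Matrix3x2 RotateAbout(Vector2 center, float radians)
    {
        return Matrix3x2.CreateTranslation(-center)
             * Matrix3x2.CreateRotation(radians)
             * Matrix3x2.CreateTranslation(center);
    }
}

With the square's center as the pivot, a 45-degree RotateAbout gives the diagonal orientation the question describes; System.Numerics even offers Matrix3x2.CreateRotation(angle, center), which performs the same concatenation internally.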