Cocos2D iPhone - screen coordinates vs. sprite's internal coordinates - iphone

I am new to Cocos2D on iPhone. I see that Cocos2D uses a coordinate system where (0,0) is at the bottom-left corner, positive X goes to the right, and positive Y goes up.
Now I have created a sprite and added several sprites as subsprites of it. To my surprise, the subsprites appear mirrored in Y! The point (10,10) ends up in the upper-left corner of the sprite!
I can simply flip the sprite in Y so it follows the same screen convention, but then its content is reversed.
Is this a bug or what?
thanks.

Without seeing any example code this is a shot in the dark, but I think you need to use anchor points.
Each sprite has an anchor point of x, y.
ccp(0.5f, 0.5f) would be the center of the sprite.
(0,0) is the bottom left and (1.0f, 1.0f) is the top right; values over 1.0 fall outside the sprite.
Child nodes (CCSprite) position their anchor point within the parent node's coordinate space.
MySprite.anchorPoint = ccp(0.5f,0.5f);
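A minimal sketch of the same idea in Swift, using SpriteKit's equivalent API (SpriteKit shares the bottom-left origin, Y-up axes, and normalized anchor points; the node names here are illustrative):
import SpriteKit

// Parent sprite; (0.5, 0.5) is the default anchor, i.e. the sprite's center.
let parent = SKSpriteNode(color: .blue, size: CGSize(width: 200, height: 200))
parent.anchorPoint = CGPoint(x: 0.5, y: 0.5)

// Children are placed in the parent's coordinate space with +X right and
// +Y up, so (10, 10) sits up and to the right of the parent's origin.
let child = SKSpriteNode(color: .red, size: CGSize(width: 20, height: 20))
child.position = CGPoint(x: 10, y: 10)
parent.addChild(child)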

Related

ARKit: Plot a Node at a specific pixel at a specific Z distance from Camera

Referring to the image above: I have a red node at the center of the screen, at a distance of 1.0 unit (1 meter away) [see iPhone Portrait Top View].
What I do is capture a screenshot of the iPhone screen, and the resulting image is 750 x 1334 pixels [see iPhone Portrait Front View]:
sceneView.snapshot()
What I want to do is put 4 red square nodes at the four corners of the iPhone screen, relative to the red circle at the dead center of the screen. I am making this to mark where I took a snapshot. What I want to know is how I can plot a box node precisely at a certain x,y point given a z distance. (The value of z is not fixed; I just used 1.0 as a sample scenario.) I want to plot (0,0), (750,0), (0,1334) and (750,1334) at a given z of 1.0, and, assuming I am on a tripod, the plotted nodes would appear at the four corners of my iPhone screen.
I am terrible at math and this problem is too complicated for me to solve alone with my current math skills. Can anyone help? Please?
Since you need anchor nodes (to mark where a snapshot was taken), using the real-time information from the camera instead of a snapshot would probably be easier. In particular, pixels in a 2D image are referenced from left to right and from top to bottom (with the (0,0) coords located at the top left)...
...and we know the AR nodes are referenced in 3D coordinates, with the center of the local node as the (0,0,0) coords.
I don't have code to test yet, but I think the following calls should help:
let pPoint = sceneView.projectPoint(self.centerBall.position)
let fieldW = sceneView.session.currentFrame?.camera.imageResolution.width
let fieldH = sceneView.session.currentFrame?.camera.imageResolution.height
The "pPoint" should return the 2D coords corresponding to the (0,0,0) 3D coords of "centerBall" from there it should be just add or subtract calculations to obtain all 4 corners in 2D
Finally passing the 2D coords of every corner to the "unprojectPoint(:)" method should provide the 3D "world" coords and that can be converted to "centerBall" coordinates with the "convertPosition( position: SCNVector3, from node: SCNNode?)" method
It seems interesting so will try to code this before weekend
At the end I suspect ARanchor nodes may not be 100% stables
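A rough, untested sketch of that plan in Swift (assuming sceneView is the ARSCNView and centerBall is the node from the question; everything else is illustrative). Note that projectPoint/unprojectPoint work in view points, while the snapshot is 750 x 1334 pixels, so a scale factor would be needed to go between the two:
import ARKit
import SceneKit

func addCornerMarkers(in sceneView: ARSCNView, around centerBall: SCNNode) {
    // Project the ball's world position into 2D; the returned z is the
    // normalized depth that must be reused when unprojecting back to 3D.
    let center = sceneView.projectPoint(centerBall.worldPosition)
    let w = Float(sceneView.bounds.width)
    let h = Float(sceneView.bounds.height)
    // The four corners of the view (origin top-left), at the ball's depth.
    let corners = [SCNVector3(0, 0, center.z), SCNVector3(w, 0, center.z),
                   SCNVector3(0, h, center.z), SCNVector3(w, h, center.z)]
    for corner in corners {
        let marker = SCNNode(geometry: SCNBox(width: 0.02, height: 0.02,
                                              length: 0.02, chamferRadius: 0))
        // Back to 3D world coordinates, then into centerBall's local space.
        let world = sceneView.unprojectPoint(corner)
        marker.position = centerBall.convertPosition(world, from: nil)
        centerBall.addChildNode(marker)
    }
}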

Working with the coordinate system and game screen in Unity 2d?

So I've developed games on other platforms where the x/y coordinate system made sense to me: the top left of the game screen had coordinates (0,0) and the bottom right was (width, height). Now I'm trying to make the jump to Unity 2D and I can't understand how the game screen works. If I have a background object and a character object on the screen, when I move the character around its x and y values vary between -3 and 3: very small coordinates that don't match the game resolution I have set up (1024x768). Are there good tutorials for understanding the game grid in Unity? Or can anyone explain how I can accomplish what I'm trying to do?
There are three coordinate systems in Unity: screen coordinates, viewport coordinates, and world coordinates.
World coordinates: think of the absolute positioning of the objects in your scene, using "points". You can choose to have the units represent any length you want, for example 1 unit = 10 meters. What is actually shown on the screen is determined by where the camera is placed and how it is oriented.
Viewport coordinates: the coordinates in the viewport of a given camera. The viewport is the imaginary rectangle through which the world is viewed. These coordinates are normalized and range from (0,0) to (1,1).
Screen Coordinates: The actual pixel coordinates denoting the position on the device's screen.
Note that the world coordinates of any given object will always be the same regardless of which camera is used to view it, whereas the viewport coordinates depend on the camera being used. The screen coordinates additionally depend on the resolution of the device and the placement of the camera's view on the screen.
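For example, with a single full-screen camera on a 1024x768 display, the viewport point (0.5, 0.5) corresponds to the screen point (512, 384) and viewport (1,1) to screen (1024, 768); which world point ends up there depends entirely on where the camera sits and how it is oriented.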
The "Camera" object provides several methods to convert between these different coordinate systems like "ScreenToViewportPoint" "ScreenToWorldPoint" etc.
Example: Place object on top left of screen
float distanceFromCamera = 10.0f;
// (0, pixelHeight) is the top-left corner of the screen in pixel coordinates;
// the z component is the desired distance from the camera.
Vector3 pos = Camera.main.ScreenToWorldPoint(new Vector3(0, Camera.main.pixelHeight, distanceFromCamera));
transform.position = pos;
The ScreenToWorldPoint function takes a Vector3 as an argument, where x and y denote the pixel position on the screen ((0,0) is the bottom left) and the z component denotes the desired distance from the camera. An infinite number of 3D locations can map to the same screen position, so you need to provide this value.
Just make sure that the desired position falls within the clipping region of the camera. Also, you might need to pick a proper pivot for your object depending on which part of your object you want centered on the top left.
Using:
Camera.main.WorldToScreenPoint(transform.position);
lets me convert my GameObject's transform position to the screen's x and y coordinate system.
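For example, if the object sits 10 units straight ahead of the camera, dead center in its view, WorldToScreenPoint returns roughly (Screen.width / 2, Screen.height / 2, 10): the screen center, with the z component holding the object's distance from the camera.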

Align the camera to make screen space origin at the bottom-left corner

I'm reading the book "Learn Unity for 2D game development", and I don't know how to do this:
"The camera has been aligned in the world so that the screen space origin is at the bottom-left corner; meaning positive X spans across the screen width, and positive Y across the screen height from bottom to top."
I'm new to Unity and the book doesn't explain how to do it.
By the way, I'm using Unity 4.3.3f1 on a Windows 7.
How can I align the camera to make screen space origin at the bottom-left corner?
In a 2D game, you have an X-axis and Y-axis. When increasing an object's X-value, you could say the object is going right. When increasing the Y-value, you could say the object is going up.
In a 3D game, there is an additional axis, the Z-axis. This makes it possible to gain 'depth' in games.
Example:
If you want to create a 2D game in a 3D environment, you'll have to 'remove' one of the axes. The most common choice is to remove the Z-axis, to keep the naming in line (X and Y remain, as in a 2D game).
To 'remove' an axis in a 3D environment, your view has to look straight down that axis. For the Z-axis this means your X and Y rotations have to be 0; the Z rotation (a roll around the view axis) should also be 0 if you want X pointing right and Y pointing up on screen.
Example: [image of an X/Y axis graph]
Consider the above picture to have a Z-axis as well. Because you are looking from along the Z-axis toward the origin, that axis doesn't extend right, left, up, or down; it collapses to roughly a single pixel.
When you position the camera this way, so that the world origin is in front of you, higher X values are to your right, and higher Y values are above you, you've achieved this. It also means the screen's X=0 is at the far left and the screen's Y=0 is at the very bottom. This gives a screen-space origin at the bottom-left corner, meaning positive X spans the screen width and positive Y spans the screen height from bottom to top.
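Concretely: a camera at position (0, 0, -10) with rotation (0, 0, 0) looks straight down the +Z axis, so world +X maps to screen-right and world +Y to screen-up, which is exactly the alignment the book describes.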
By saying "camera has been aligned", he doesn't mean that you manually align it in the scene, he's saying how screen space origin is at the bottom-left corner by default.
Source: Unity Script Reference

Cocos2D-iphone : Difference between Anchor point and position

Can anyone explain the difference between position and anchor point in Cocos2D, with some example? I searched Google but cannot find a good explanation. Thanks in advance.
Suppose you have a square which is 10x10. If you say you want to position it on your screen at (50,40), then you need to know what that position refers to: the top left of your square, the bottom left, etc.
The anchor point determines this. In Cocos2D the anchor point is normalized, and (0,0) is the bottom left. So if your anchor point is (0,0), then the position (50,40) will be the position of the bottom-left corner of your square.
If your anchor point is (1,0), then the position (50,40) will be the position of the bottom-right corner of your square, and so the bottom-left corner will be at (40,40).
So, the anchor point is the point that gets positioned, and it is expressed relative to your square.
Another example - suppose you have a building 100 floors high. Now, suppose you are a giant and you are 4 floors tall. If you are told to put your feet (that's your anchor point) on the 3rd floor, then your head will be on the 7th floor. If you were told to put your head (that's now your anchor point) on the 7th floor, then your feet would be on the 3rd. You are still in the same place, but your reference point (the anchor) has been changed.
The position property is a CGPoint that specifies the position of the layer relative to its superlayer, and is expressed in the superlayer's coordinate system.
The anchorPoint property is a CGPoint that specifies a location within the bounds of a layer that corresponds to the position coordinate. The anchor point determines how the bounds are positioned relative to the position property, and it also serves as the point around which transforms are applied. It is expressed in the unit coordinate system: the value (0.0, 0.0) is located closest to the layer's origin and (1.0, 1.0) is in the opposite corner. Applying a transform to the layer's parent (if one exists) can alter the anchorPoint orientation, depending on the parent's coordinate system on the y-axis.
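A small Swift sketch of how those two properties interact on a CALayer (note that on iOS a layer's unit coordinate (0,0) is its top-left corner; the numbers mirror the 10x10 square example above):
import UIKit

// The 10x10 square from the example above, as a CALayer.
let square = CALayer()
square.bounds = CGRect(x: 0, y: 0, width: 10, height: 10)

square.anchorPoint = CGPoint(x: 0, y: 0)   // anchor one corner of the square
square.position = CGPoint(x: 50, y: 40)    // that corner lands at (50, 40)
// square.frame.origin is now (50, 40)

square.anchorPoint = CGPoint(x: 1, y: 0)   // anchor the opposite corner
// position is unchanged, but the layer shifts left by its width:
// square.frame.origin is now (40, 40)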

Rotate UIImageView and still get x/y click position

I have subclassed UIImageView to create a soccer ball.
The ball bounces and moves along an x & y axis, and depending on where you touch the ball, the ball will move as you'd expect a soccer ball would.
My problem is that I would like to rotate the ball (UIImageView), but still know the x & y positions from its original position.
I am rotating it with the following code:
ball.superview.transform = CGAffineTransformMakeRotation(M_PI+(ball.center.x*.015));
ball.transform= CGAffineTransformMakeRotation(M_PI+(ball.center.x*.015));
When I rotate it, the x & y positions also rotate. Can I somehow get the x/y distance from the centre of the UIImageView? Any other ideas?
Thanks,
James.
I think if you set the anchor point of your UIImageView's CALayer to the center of the UIImageView you'll be OK; right now it's set to the upper-left corner, and so you are experiencing the x and y moving.
Why are you rotating the superview as well? If you don't do that, the center x,y will not be affected.
Why not just use ball.center as the position? That way, rotations will have no effect on the position (assuming your image is correctly centered).
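A small Swift sketch of that last suggestion (ball is the image view and touch stands for a UITouch from your event handler; both are hypothetical names): rotate only the ball, and measure distances against ball.center, which lives in the superview's coordinate space and is unaffected by the ball's own transform:
import UIKit

// Rotate only the ball view, not its superview.
ball.transform = CGAffineTransform(rotationAngle: .pi + ball.center.x * 0.015)

// Offset of a touch from the ball's center, measured in the superview's
// (unrotated) coordinate space:
let p = touch.location(in: ball.superview)
let dx = p.x - ball.center.x
let dy = p.y - ball.center.y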