Clipping node to camera for only one axis - swift

I would like to have a node in the scene that is somehow clipped to the camera node for the y axis only, so that when the camera moves, this node stays at the same y but moves in x and z with the rest of the scene. Is there some special way I could do something like this, or is the only way to move the node every time the camera moves?

You can use SCNTransformConstraint to have a block called at every frame that takes the node's transform and returns a new transform that satisfies your criteria.
There are conversion utilities such as convertPosition:toNode: that will let you get the node's position in the coordinate system of the camera, and then convert back to the coordinate system of the node's parent after you have modified the y coordinate. Just remember to use the presentation nodes if there are animations, actions or physics in your scene.
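Putting that together, here is a minimal sketch, assuming an iOS project where cameraNode holds the camera, trackedNode is the node to constrain, and fixedYInCamera is the y value to hold in the camera's coordinate space (all three names are placeholders):
import SceneKit

let fixedYInCamera: Float = 0

let lockY = SCNTransformConstraint(inWorldSpace: false) { node, transform in
    guard let parent = node.parent else { return transform }
    // Position proposed for this frame, expressed in the parent's coordinate system.
    let proposed = SCNVector3(transform.m41, transform.m42, transform.m43)
    // Convert into the camera's coordinate system, using presentation nodes in case
    // animations, actions or physics are running.
    var inCamera = parent.presentation.convertPosition(proposed, to: cameraNode.presentation)
    inCamera.y = fixedYInCamera   // pin the y axis relative to the camera
    // Convert back to the parent's coordinate system and keep the rest of the transform.
    let corrected = cameraNode.presentation.convertPosition(inCamera, to: parent.presentation)
    var newTransform = transform
    newTransform.m41 = corrected.x
    newTransform.m42 = corrected.y
    newTransform.m43 = corrected.z
    return newTransform
}
trackedNode.constraints = [lockY]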

Related

Swift SceneKit issue with node position on preview

I'm trying to add nodes to my scene view, but all of the nodes report a zero position even though they appear in different places.
Each node has a position of (0, 0, 0), yet on the scene view each node has a different placement. Because every node reports (0, 0, 0), I also can't find the distance between nodes. Please explain to me what is wrong with my nodes.
If you want to fetch the real position of a node, try this:
node.presentation.worldPosition
This should always return the effective position of the node as seen on screen.
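Since the question also mentions distances, here is a small sketch (nodeA and nodeB are placeholder names) that measures the distance between two nodes from their presentation world positions:
import SceneKit
import simd

let distance = simd_distance(nodeA.presentation.simdWorldPosition,
                             nodeB.presentation.simdWorldPosition)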
The problem was in the 3D object: its original position was not centered. I loaded each 3D object into Blender and changed its origin to (0, 0, 0).

How to smoothly move a node in an ARKit scene view based on device motion?

Swift beginner struggling with moving a scene node in ARKit in response to device motion.
What I want to achieve is: first detect the floor plane, then place a sphere on the floor. From that point onwards, depending on the movement of the device, I want to move the sphere along its x and z axes so it travels around the floor of the room. (The sphere, once created, needs to stay in the center of the device screen, locked to that view.)
So far I can detect the floor and place a node, no problem. I can use device motion to obtain the device attitude (pitch, roll and yaw), but how do I translate these values into meaningful x, y, z positions that I can update my node with?
Are there any formulas or methods used to calculate such information, or is this the wrong approach? I would appreciate a link to some info or an explanation of how to go about this. I am also unsure how to ensure the node stays at the center of the device screen.
So, as far as I understand, you want the following workflow:
Step 1. You create a sphere on a plane (which is already done)
Step 2. Move the sphere with respect to the camera's horizontal plane (i.e. along its x and z axes, so it moves around the floor of the room depending on the movement of the device)
Assuming that Step 1 is done, here is what you can do:
Get the position of the camera and the sphere
This should first be done within the function that is invoked after the sphere is created (be it a tapGestureRecognizer() handler, touchesBegan(), etc.).
You can get the sphere's position from its SCNNode position property, and the camera's position and/or orientation from sceneView.session.currentFrame's camera.transform, which contains all the necessary parameters describing the current state of the camera.
Move the sphere as the camera moves
Having the sphere's position in the scene and the transformation matrix of the camera, you can work out the offset between them. Here you can find a good explanation of how exactly you can do it.
Once you have those pieces, implement the appropriate logic within renderer(_:didUpdate:for:) to keep the ball continuously locked to the camera position, roughly as sketched below.
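A rough sketch of that update logic (the helper name and the floorY/distance parameters are my own additions, not from the answer): project a point a fixed distance in front of the camera, clamp it to the detected floor height, and move the sphere there.
import ARKit
import SceneKit
import simd

// Called from an update callback such as the one mentioned above,
// with sceneView.session.currentFrame as the frame argument.
func lockSphereToCamera(_ sphere: SCNNode, frame: ARFrame, floorY: Float, distance: Float = 1.0) {
    let cam = frame.camera.transform   // simd_float4x4
    // Camera position (translation column) and forward direction (negative z column).
    let position = simd_float3(cam.columns.3.x, cam.columns.3.y, cam.columns.3.z)
    let forward = -simd_float3(cam.columns.2.x, cam.columns.2.y, cam.columns.2.z)
    // A point `distance` metres ahead of the camera, pinned to the floor height,
    // so the sphere only moves along x and z as the device moves.
    var target = position + forward * distance
    target.y = floorY
    sphere.simdWorldPosition = target
}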
If you are interested in the math behind it, you can start by reading more about transformation matrices, which are a big part of image processing and many other areas.
Hope that this will help!

SpriteKit sprite position inside node tree

I'm currently working on a simple 2D game where the player character is split into multiple sprites (head, torso, legs, arms, ...).
I have the absolute coordinates right in Aseprite (if I take the individual sprites and position them there, I get correct coordinates).
When I move everything into Swift and use negative y values instead of positive, everything gets totally weird.
For example, in Aseprite I have the following coordinates: head (30, 17), torso (30, 24) and legs (28, 35). Everything aligns perfectly.
In SpriteKit I extend the SKNode class and add every subsprite inside it, just with a negative number for y, so instead of going up I draw down. It looks like the coordinates given in pixels are not correct: the sprites are off by a few pixels. Mostly the y coordinate is off, but in some cases (character rotation) the x is too.
How do I get from those absolute upper-left coordinates to correct SpriteKit coordinates, then?
OK, I figured out what was wrong (several things):
Your parent class can be SKNode, but the children have to have their anchor point set to (0, 1).
When changing the texture of a child sprite, always make sure the sprite's size is set to the new texture's size. If not, it will keep the previous texture's size and scale the new texture to the old size, which introduces additional positioning problems. So you have to call child.size = child.texture!.size() (I used a force unwrap because I set the (new) texture one step before, so I'm 100% sure it exists).
When setting a new texture, set the anchor point again (it seems to get reset when changing the texture of a child sprite).
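A compact sketch of those fixes (the class name and the "head" asset name are placeholders, and the (30, 17) coordinate is taken from the question):
import SpriteKit

final class Character: SKNode {
    let head = SKSpriteNode(imageNamed: "head")   // placeholder asset name

    override init() {
        super.init()
        head.anchorPoint = CGPoint(x: 0, y: 1)    // top-left anchor, matching Aseprite
        head.position = CGPoint(x: 30, y: -17)    // Aseprite (30, 17) with y negated
        addChild(head)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    func setHeadTexture(_ texture: SKTexture) {
        head.texture = texture
        head.size = texture.size()                // adopt the new texture's size
        head.anchorPoint = CGPoint(x: 0, y: 1)    // re-apply the anchor after a texture change
    }
}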

Make a SKLabelNode in spritekit appear above all other sprites

Hi, I'm pretty new to programming and I have encountered a small issue. I am creating a simple game which has a score label. However, the score label is covered whenever a downward-scrolling obstacle passes. I would like the score label not to be covered by the obstacles. Is there a way I can change the sprite hierarchy in SpriteKit? Thanks
I would recommend creating a node for the static stuff that should stay fixed on screen (like a game HUD). Then you set this node's zPosition to be higher than the gameplay node (where all game objects are placed and manipulated).
This also makes it easier to implement a camera, or to move the gameplay node if needed, without the score label or any other fixed HUD element moving as well.
So you will have something like this:
SKNode *hudNode = [SKNode node];
hudNode.zPosition = 1000;
[hudNode addChild:scoreLabel];
SKNode *gamePlayNode = [SKNode node];
gamePlayNode.zPosition = 0; // Not mandatory as it defaults to 0 but I put this here for clarification
[self addChild:hudNode]; // self is your scene instance
[self addChild:gamePlayNode];
You can change the zPosition property of your label node; by making it 100 it will be above all other nodes (the default value is 0.0):
labelNode.zPosition = 100;
Nodes are drawn in the order they are added to the scene (or parent node). You can change the order in which nodes are drawn by setting their zPosition property.
From Apple's documentation...
...The z position is the node's height relative to its parent node, much as a node's position property represents its x and y position relative to the parent's position. So you use the z position to place a node above or below the parent's position.
When you take z positions into account, here is how the node tree is rendered:
Each node's global z position is calculated.
Nodes are drawn in order from smallest z value to largest z value.
If two nodes share the same z value, ancestors are rendered first, and siblings are rendered in child order.
You can also use zPosition to optimize rendering performance...
...it might be better if Sprite Kit could gather all of the nodes that share the same texture and drawing mode and draw them with a single drawing pass. To enable these sorts of optimizations, you set the view's ignoresSiblingOrder property to YES.
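In Swift, that setup could look roughly like this (skView and scene are assumed to be your SKView and scene instances):
skView.ignoresSiblingOrder = true   // draw order is then decided by zPosition alone

let hudNode = SKNode()
hudNode.zPosition = 1000            // HUD stays above the gameplay nodes
scene.addChild(hudNode)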

Working with the coordinate system and game screen in Unity 2d?

So I've developed games on other platforms where the x/y coordinate system made sense to me: the top left of the game screen was (0, 0) and the bottom right was (width, height). Now I'm trying to make the jump to Unity 2D and I can't understand how the game screen works. If I have a background object and a character object on the screen, when I move the character around its x and y values vary between -3 and 3: very small coordinates that don't match the game resolution I have set up (1024x768). Are there good tutorials for understanding the game grid in Unity? Or can anyone explain how I can accomplish what I'm trying to do?
There are three coordinates systems in Unity: Screen coordinates, view coordinates and the world coordinates.
World coordinates: Think of the absolute positioning of the objects in your scene, using "points". You can choose to have the units represent any length you want, for example 1 unit = 10 meters. What is actually shown on the screen is determined by where the camera is placed and how it is oriented.
View Coordinates: The coordinates in the viewport of a given camera. The viewport is the imaginary rectangle through which the world is viewed. These coordinates are proportional, and range from (0,0) to (1,1).
Screen Coordinates: The actual pixel coordinates denoting the position on the device's screen.
Note that the world coordinates of any given object will always be the same regardless of which camera is used to view it, whereas the view coordinates depend on the camera being used. The screen coordinates additionally depend on the resolution of the device and the placement of the camera view on the screen.
The "Camera" object provides several methods to convert between these different coordinate systems like "ScreenToViewportPoint" "ScreenToWorldPoint" etc.
Example: Place object on top left of screen
float distanceFromCamera = 10.0f;
Vector3 pos = Camera.main.ScreenToWorldPoint (new Vector3 (0, Camera.main.pixelHeight, distanceFromCamera));
transform.position = pos;
The ScreenToWorldPoint function takes a Vector3 as an argument, where x and y denote the pixel position on the screen ((0, 0) is the bottom left) and the z component denotes the desired distance from the camera. An infinite number of 3D locations can map to the same screen position, so you need to provide this value.
Just make sure that the desired position falls within the clipping region of the camera. Also, you might need to pick a proper pivot for your object depending on which part of your object you want centered on the top left.
Using:
Camera.main.WorldToScreenPoint (transform.position);
lets me convert my GameObject's transform position to the screen's x and y coordinate system.