Math doesn't work for the size and position of Game Pieces WRT the size of their container "room" - swift

The positions and sizes of my Game Pieces, as set by CGPoint(..) and CGRect(..), don't make arithmetic sense to me when compared against the width and height of the surrounding container that holds all the Game Pieces.
Let me illustrate with just one specific example.
I call the surrounding container the "room".
One of many specific Game Pieces is the "rock".
Here's the math:
roomWidth = Double(UIScreen.main.bounds.width)
roomHeight = Double(UIScreen.main.bounds.height)
While in Portrait mode:
roomWidth = 744.0
roomHeight = 1133.0
When rotated to Landscape mode:
roomWidth = 1133.0
roomHeight = 744.0
So far so good. Here's the problem:
When I look at my .sks file, the widths of the "rock" and its adjacent Game Pieces far exceed the roomWidth. For example, the widths of rock + paddle + tree = 507 + 768 + 998 = 2273, which obviously exceeds the room's width in either Portrait or Landscape mode, and this math doesn't even account for the separation between Game Pieces.
The final math "craziness" concerns the xPos values for each Game Piece as specified in my .sks file:
Room: xPos = 40,
Rock: xPos = -390,
Paddle: xPos = -259,
Tree: xPos = 224
I cannot grasp the two large negative numbers. To me, that means the Rock and the Paddle shouldn't even be visible; they'd be seriously off-screen.
One significant addition: I did set the Autoresizing Mask to center horizontally and vertically.
I need a serious infusion of "smarts" here.

The default anchorPoint of an .sks file (SpriteKit Scene file) is (0.5, 0.5), so the origin (0, 0) of the scene is drawn at the center of the SKView. You can change the anchor point in the Attributes inspector when editing the .sks file. The default means that negative coordinates not too far from the origin are still visible in the SKView: the Rock's xPos of -390, for example, is measured from the scene's center rather than from its left edge, so it sits 390 scene-points to the left of center, not off-screen.
The scene also has a scaleMode property which determines how the scene is scaled if its size doesn't match the SKView's size. The default is .fill, which means the view scales the scene's axes independently so the scene's size exactly fills the view.
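If you'd rather work with the origin at a corner, you can override both defaults in code when presenting the scene. A minimal sketch, assuming the scene is loaded from a file named GameScene.sks (a placeholder name) into an SKView:

import SpriteKit

func present(in view: SKView) {
    // "GameScene" is an assumed file name; substitute your own .sks file.
    guard let scene = SKScene(fileNamed: "GameScene") else { return }
    scene.anchorPoint = CGPoint(x: 0, y: 0)  // (0, 0) now maps to the view's bottom-left corner
    scene.scaleMode = .aspectFill            // keep proportions instead of the default .fill stretch
    view.presentScene(scene)
}

With the anchor point at (0, 0), node positions in the scene read as distances from the bottom-left corner rather than from the center.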

Related

Positioning UI or Camera relatively to each other (and screen border)

I'm struggling with this sort of screen disposition.
I want to position my Camera so that the world is positioned like in the image, with the origin at the bottom left. It's easy to set the orthographicSize of the camera, as I know how many units I want vertically. It is also easy to calculate the Y position of the camera, as I just want it to be centered vertically. But I cannot find how to compute the X position of the camera that puts the origin of the world in this position, no matter what the aspect ratio of the screen is.
This brings up two questions:
How can I calculate the X position of the camera so that the origin of the world is always at the same distance from the screen's left and bottom borders?
Instead of positioning the camera relative to the UI, should I use RenderMode WorldSpace for the UI canvas? And if so, how could I manage responsiveness?
I don't understand the second question, but regarding positioning the Camera on the X axis so that the lower-left corner of the screen is always at world x = 0, you could do the following:
// The bottom-left corner of the screen, 10 units in front of the camera.
var lowerLeftScreen = new Vector3(0, 0, 10);
var pos = transform.position;

// World-space X of the screen's lower-left corner at the current camera position.
var lowerLeftScreenPoint = Camera.main.ScreenToWorldPoint(lowerLeftScreen).x;

// Shift the camera by that offset so the lower-left corner lands on world x = 0
// (both branches amount to pos.x -= lowerLeftScreenPoint).
if (lowerLeftScreenPoint > 0)
{
    pos.x -= lowerLeftScreenPoint;
}
else
{
    pos.x += Mathf.Abs(lowerLeftScreenPoint);
}

transform.position = pos;
Debug.Log(Camera.main.ScreenToWorldPoint(lowerLeftScreen));
Not the nicest code, but it gets the job done.
Also, the Z component of the vector does not really matter for our orthographic camera.

Draw images on a canvas next to each other without space in flutter

I'm creating a game in Flutter. I'm using a tilesheet to render tiles; I chop the image into 50x50 sections and then render them depending on their surroundings.
The tiles are rendered at the position of their corresponding "game object". In order to keep the code clean of details about converting game-world positions and sizes to actual screen sizes, my painting classes always work in world space and then scale it up to screen space.
However, after scaling up the tiles I'm sometimes left with gaps between the tiles.
I've tried:
Placing the tiles at their appropriate screen position, without scaling the canvas first, in case the scaling produces the problem.
Adding borders around the tiles in the tilesheet in case the canvas.drawImageRect samples too many pixels.
Making the board take a size divisible by the number of tiles, in case there is a rounding error.
Drawing a subsection of the tile.
I can't seem to make the gap disappear. Do you have any suggestions?
Below is the relevant drawing code, where size and position are in world space and frameX and frameY are the positions to extract from the spritesheet.
void drawFrameAt(int x, int y, Canvas canvas, Offset position, Size size) {
  // Source rect: the (x, y)-th tile within the spritesheet.
  var frameX = x * frameWidth;
  var frameY = y * frameHeight;
  var rect = Rect.fromLTWH(frameX, frameY, frameWidth, frameHeight);

  // Destination rect: the tile's position and size (world space, scaled by the canvas).
  var width = size.width;
  var height = size.height;
  var destination = Rect.fromLTWH(position.dx, position.dy, width, height);

  canvas.drawImageRect(image, rect, destination, Paint());
}

spritekit how to selectively scale nodes

As background, let's assume I have a map, literally a road map, being rendered inside my SKScene. Roads are represented by SKShapeNodes with their path set to an array of CGPoints. I want the user to be able to zoom in/out, so I created a camera node:
var cam: SKCameraNode = SKCameraNode()
and, as the user zooms in/out by scrolling on the trackpad:
let zoomInAction = SKAction.scale(to: CGFloat(scale), duration: 0.0)
camera?.run(zoomInAction)
This works great; however, I have an additional complexity which I'm not sure how to handle. I want some nodes (for example road name labels, icons, the map legend) to be exempt from scaling, such that as a user zooms in/out the road name label stays the same size while the road shape scales proportionally.
Not sure how to handle this. Can I have a hierarchy of scenes so one layer scales and the other doesn't? Can that be achieved by attaching the camera node to the "scalable" layer? Any help appreciated!
If you want a node's scale not to change with the camera, add the node to the camera's node tree. Don't forget to add the camera node to the scene, otherwise the nodes attached to the camera won't be rendered.
In the following, the label is rendered via the camera and does not change scale.
// Build a label that should keep its on-screen size regardless of zoom.
let label = SKLabelNode(text: "GFFFGGG")
label.fontSize = 30
label.fontColor = UIColor.black
label.name = "cool"
label.zPosition = 100

// Attach the label to the camera, then attach the camera to the scene.
let camera = SKCameraNode()
camera.addChild(label)
scene.addChild(camera)
scene.camera = camera
camera.position = CGPoint(x: 0, y: 0)

// Scaling the camera zooms the world, but the label riding on the camera keeps its size.
camera.xScale = 2.0
If you have nodes that were previously attached to the scene, you can remove each one from its parent and then add it to the camera; a small function to handle them in a batch shouldn't be as hard as it sounds. Maybe not necessary: you can carry their positions over into the camera's coordinate space via camera.convert(_:from:) etc., as in the sketch below.
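A minimal sketch of that re-parenting step, assuming label is currently a child of the scene and the scene already has its camera set (the names are placeholders):

if let cam = scene.camera, let parent = label.parent {
    // Where does the label currently sit, expressed in the camera's coordinate space?
    let positionInCamera = cam.convert(label.position, from: parent)
    label.removeFromParent()
    cam.addChild(label)
    label.position = positionInCamera  // same on-screen spot, now riding on the camera
}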

ARKit: Placing an SCNText at a particular point in front of the camera

I've managed to get a cube (SCNNode) placed on a surface where the camera is pointed; however, I am finding it very difficult to do the simple (?) task of also placing text in the same position.
I've created the SCNText and a subsequent SCNNode, however when I add it to the rootNode the text always seems to be added above my head and off to the right of the camera (which tells me that's the global origin point).
Even when I use the exact same position values I used for the cube, the SCNText node still gets placed above my head in the same spot.
Apologies if this is a basic question, I've never worked in SceneKit before.
The coordinate center of an SCNGeometry is its center point, but when you create an SCNText the center point is somewhere near the bottom-left corner of the text.
You need to center the text first. This can be done by checking the bounding box of the node containing your text and setting a pivot transform to move the text's center to its actual center:
func center(node: SCNNode) {
    // Midpoint of the node's bounding box, in its local coordinates.
    let (min, max) = node.boundingBox
    let dx = min.x + 0.5 * (max.x - min.x)
    let dy = min.y + 0.5 * (max.y - min.y)
    let dz = min.z + 0.5 * (max.z - min.z)
    // Shift the pivot so the node is positioned around its visual center.
    node.pivot = SCNMatrix4MakeTranslation(dx, dy, dz)
}
Edit:
Also note this answer, which explains an additional pitfall:
Text with a 16 pt font size is 16 SceneKit units tall, but in ARKit 1 SceneKit unit = 1 meter!
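Putting both points together, a rough sketch (the node and view names are assumptions, not from the question):

let textGeometry = SCNText(string: "Hello", extrusionDepth: 1)
textGeometry.font = UIFont.systemFont(ofSize: 16)

let textNode = SCNNode(geometry: textGeometry)
center(node: textNode)                          // move the pivot to the text's visual center
textNode.scale = SCNVector3(0.01, 0.01, 0.01)   // 16 units would be 16 m in ARKit, so shrink it
textNode.position = SCNVector3(0, 0, -0.5)      // e.g. half a meter in front of the world origin
sceneView.scene.rootNode.addChildNode(textNode) // 'sceneView' is an assumed ARSCNView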

SpriteKit bodyInRect finding node when it shouldn't

I'm attempting to detect if there's a sprite node immediately to the left or right of the current sprite node.
This seems straightforward, but I'm seeing an odd behaviour.
I've created a thin rect (width = 1 point) that's the same height as the current node and has the same origin as the current node.
e.g:
// Create a thin rect that's aligned to the left edge of 'block'
CGRect adjacentFrame;
adjacentFrame = CGRectMake(block.frame.origin.x,
                           block.frame.origin.y,
                           1,
                           block.frame.size.height);

// Shift the rect left a few points to position it to the left of 'block'
adjacentFrame.origin.x -= 10;
Then I test to see if that rect (adjacentFrame) is intersecting a node:
SKPhysicsBody* obstructingBody;
obstructingBody = [self.physicsWorld bodyInRect:adjacentFrame];
Now, the weird thing is, obstructingBody contains 'block' itself!
I've even added code to add a SpriteNode to the scene with a frame of adjacentFrame so I can check the rect's positioning. It's clearly displaying a few points left of 'block' and is clearly not touching it!
Any ideas what could be going on here?
Thanks,
Chris
bodyInRect: expects scene coordinates, but you are providing coordinates in the coordinate space of block.parent. Unless block.parent is the scene itself, you need to convert the origin with:
CGRect blockFrame = block.frame;
// block.frame is expressed in block.parent's coordinate space; convert it into scene space.
blockFrame.origin = [block.parent convertPoint:blockFrame.origin toNode:block.scene];
Also, block's width must be less than 10; otherwise your -10 offset isn't enough to move the rect outside of block's frame.
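For reference, the same idea written as a small Swift helper (the function name hasBody(leftOf:in:) is a made-up example, not part of the original post):

import SpriteKit

func hasBody(leftOf block: SKNode, in scene: SKScene, gap: CGFloat = 10) -> SKPhysicsBody? {
    // Thin probe rect just to the left of 'block', built in block.parent's coordinate space.
    var probe = CGRect(x: block.frame.origin.x - gap,
                       y: block.frame.origin.y,
                       width: 1,
                       height: block.frame.size.height)
    // body(in:) wants scene coordinates, so convert if the parent isn't the scene itself.
    if let parent = block.parent, parent !== scene {
        probe.origin = parent.convert(probe.origin, to: scene)
    }
    return scene.physicsWorld.body(in: probe)
}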