I'm attempting to detect if there's a sprite node immediately to the left or right of the current sprite node.
This seems straightforward, but I'm seeing an odd behaviour.
I've created a thin rect (width = 1 point) that's the same height as the current node and has the same origin as the current node.
e.g.:
// Create a thin rect that's aligned to the left edge of 'block'
CGRect adjacentFrame;
adjacentFrame = CGRectMake(block.frame.origin.x,
                           block.frame.origin.y,
                           1,
                           block.frame.size.height);
// Shift the rect left a few points to position it to the left of 'block'
adjacentFrame.origin.x -= 10;
Then I test to see if that rect (adjacentFrame) is intersecting a node:
SKPhysicsBody* obstructingBody;
obstructingBody = [self.physicsWorld bodyInRect:adjacentFrame];
Now, the weird thing is, obstructingBody contains 'block' itself!
I've even added code to add a SpriteNode to the scene with a frame of adjacentFrame so I can check the rect's positioning. It's clearly displaying a few points left of 'block' and is clearly not touching it!
Any ideas what could be going on here?
Thanks,
Chris
bodyInRect needs scene coordinates. You are providing coordinates in the coordinate space of block.parent. Unless block.parent is the scene itself, you need to convert the origin with:
CGRect blockFrame = block.frame;
blockFrame.origin = [self convertPoint:blockFrame.origin toNode:self.scene];
Also, block's width must be less than 10 points; otherwise your -10 offset isn't enough to move the frame outside the block's frame.
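In Swift, the same check might look like this (just a sketch, assuming the code runs inside the SKScene subclass as in the question; body(in:) is the Swift name of bodyInRect:):
var adjacentFrame = block.frame
if let parent = block.parent, parent !== self {
    // frame is expressed in the parent's coordinate space; body(in:) expects scene coordinates
    adjacentFrame.origin = parent.convert(adjacentFrame.origin, to: self)
}
// shrink the rect to a 1-point-wide probe and shift it to the left of 'block'
adjacentFrame.size.width = 1
adjacentFrame.origin.x -= 10
if let obstructingBody = physicsWorld.body(in: adjacentFrame) {
    print("Found body to the left: \(obstructingBody)")
}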
Related
The positions and sizes of my Game Pieces, as set by CGPoint(..) and CGRect(..), don't make arithmetic sense to me when compared with the width and height of the container that surrounds all the Game Pieces.
Let me illustrate with just one specific example –
I call the surrounding container = “room”.
One of many specific Game Pieces = “rock”.
Here’s the math
roomWidth = Double(UIScreen.main.bounds.width)
roomHeight = Double(UIScreen.main.bounds.height)
While in Portrait mode:
roomWidth = 744.0
roomHeight = 1133.0
When rotated to Landscape mode:
roomWidth = 1133.0,
roomHeight = 744.0
So far so good .. here’s the problem:
When I look at my .sks file, the widths of the “rock” and its adjacent game pieces far exceed the roomWidth; for example,
Widths of rock + paddle + tree = 507 + 768 + 998 = 2273, which obviously exceeds the room’s width in either Portrait or Landscape mode – and this math doesn’t even address the separation between Game Pieces.
The final math “craziness” looks at the Swift xPos values for each Game Piece as specified in my .sks file:
Room: xPos = 40,
Rock: xPos = -390,
Paddle: xPos = -259,
Tree: xPos = 224
I cannot grasp the two large negative numbers .. to me, that means the Rock and the Paddle shouldn’t even be visible .. seriously off-screen.
One significant addition = I did set the Autoresizing Mask to center horizontally and vertically
I need a serious infusion of “smarts” here.
The default anchorPoint of an sks file (SpriteKit Scene file) is (0.5, 0.5). So the origin (0, 0) of the scene is drawn at the center of the SKView. You can change the anchor point in the Attributes inspector when editing the sks file. The default means that negative coordinates not too far from the origin will be visible in the SKView.
The scene also has a scaleMode property which determines how the scene is scaled if its size doesn't match the SKView's size. The default is .fill, which means the view scales the scene's axes independently so the scene's size exactly fills the view.
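For example, both properties can be set in code before presenting the scene (a minimal sketch; "GameScene" and skView are placeholder names):
if let scene = SKScene(fileNamed: "GameScene") {
    // (0, 0) puts the scene origin at the lower-left corner of the view;
    // the .sks default of (0.5, 0.5) puts it at the center
    scene.anchorPoint = CGPoint(x: 0, y: 0)
    // .fill is the default; .aspectFill / .aspectFit / .resizeFill handle the aspect ratio differently
    scene.scaleMode = .aspectFill
    skView.presentScene(scene)
}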
I'm struggling with this sort of screen disposition.
I want to position my Camera so that the world is positioned as in the image, with the origin at the bottom left. It's easy to set the orthographicSize of the camera, as I know how many units I want vertically. It is also easy to calculate the Y position of the camera, as I just want it centered vertically. But I cannot find how to compute the X position of the camera to put the origin of the world in this position, no matter what the aspect ratio of the screen is.
This brings me to two questions:
How can I calculate the X position of the camera so that the origin of the world is always at the same distance from the screen's left and bottom borders?
Instead of positioning the camera relative to the UI, should I use RenderMode.WorldSpace for the UI canvas? And if so, how could I manage responsiveness?
I don't understand the second question, but regarding positioning the camera on the X axis so that the lower-left corner is always at the world origin, you could do the following:
// the lower-left corner of the screen, in screen coordinates (z = distance from the camera)
var lowerLeftScreen = new Vector3(0, 0, 10);
var pos = transform.position;
// where that corner currently falls in world space along X
var lowerLeftScreenPoint = Camera.main.ScreenToWorldPoint(lowerLeftScreen).x;
// shift the camera so the corner ends up at world X = 0
if (lowerLeftScreenPoint > 0)
{
    pos.x -= lowerLeftScreenPoint;
}
else
{
    pos.x += Mathf.Abs(lowerLeftScreenPoint);
}
transform.position = pos;
Debug.Log(Camera.main.ScreenToWorldPoint(lowerLeftScreen));
Not the nicest code, but gets the job done.
Also, the Z component of the vector does not really matter for an orthographic camera.
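With an orthographic camera the same alignment can also be computed directly from orthographicSize and aspect, which avoids the corrective shift (a sketch; the class name is illustrative):
using UnityEngine;

public class AlignOriginToLowerLeft : MonoBehaviour
{
    void Start()
    {
        var cam = Camera.main;
        float halfHeight = cam.orthographicSize;     // half of the visible height in world units
        float halfWidth = halfHeight * cam.aspect;   // half of the visible width in world units

        // Centering the camera at (halfWidth, halfHeight) places world (0, 0)
        // at the screen's lower-left corner for any aspect ratio.
        cam.transform.position = new Vector3(halfWidth, halfHeight, cam.transform.position.z);
    }
}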
This is a bizarre one for me and after having spent two days trying to fix it and reading what I could find on apple sites and stack overflow I still have no solution. Hopefully someone can help me.
So I am rotating a CAShapeLayer which is in the coordinate system of the view. After rotation the frame coordinates are updated, but those of the path are not.
On screen, the path and frame both display as rotated! So if I use path.contains to see whether a point belongs to the CAShapeLayer after rotation, I get the wrong answer. Using the rotated frame does not work either, because frames of adjacent paths can overlap and give the wrong answer.
Here is the code that rotates the relevant CAShapeLayer:
let shapeCopy = CAShapeLayer()
let inputShape = tempShapeList[index]
shapeCopy.backgroundColor = UIColor.red.withAlphaComponent(0.75).cgColor
shapeCopy.frame = inputShape.frame
shapeCopy.bounds = inputShape.bounds
shapeCopy.path = inputShape.path
shapeCopy.position = inputShape.position
shapeCopy.anchorPoint = inputShape.anchorPoint
print("bounding rect pre rotation: \(shapeCopy.frame)")
print("path pre rotation: \((shapeCopy.path)!)")
let transform = CATransform3DMakeRotation(CGFloat(75*Double.pi/180.0), 0, 0, 1)
shapeCopy.transform = transform
print("bounding rect post rotation:\(shapeCopy.frame)")
print("path post rotation: \((shapeCopy.path)!)")
if ((shapeCopy.path)!.contains(newPoint)) {
    containingView.layer.addSublayer(shapeCopy)
    answer = index
    print("Prize is:\(String(describing: textLabelList[index].text))")
    break
}
The message in the debugger:
bounding rect pre rotation: (139.075809065823, 236.846930318145, 174.164592138914, 163.153069681855)
path pre rotation: Path 0x600000236a60:
moveto (207, 400)
lineto (138.901, 266.349)
curveto (196.803, 236.847) (267.115, 247.983) (313.066, 293.934)
lineto (207, 400)
closepath
bounding rect post rotation:(189.419925763055, 292.163148046286, 202.670877072107, 210.457199272682)
path post rotation: Path 0x600000236a60:
moveto (207, 400)
lineto (138.901, 266.349)
curveto (196.803, 236.847) (267.115, 247.983) (313.066, 293.934)
lineto (207, 400)
closepath
Screenshot of the simulator:
In the screenshot you can see the rotated path and the frame of the path: the dark-colored pie slice and the slightly translucent frame.
However, the coordinates of the path haven't changed. So the program believes that the red dot belongs to the shaded slice that was rotated away! If the path updated correctly, the red dot would belong to the yellow slice labelled "e6 ¢".
Also note that the background fortune wheel is a view etc in its own coordinate system. The rotated dark pie is in the coordinate system of the top level view as is the red dot.
Not sure if the post is fully clear; apologies in advance for the verbose post. If I have missed any detail that could help, please let me know.
Thanks in advance....
Applying a transform to a layer doesn't change the way the layer's content is stored. If the layer contains an image, the image is stored unrotated, and if the layer contains a path, the path is stored unrotated.
Instead, when the window server builds up (“composites”) the screen image, it applies the transform as it is drawing the layer's content into the frame buffer.
The frame property is different. It is actually computed from several other properties: position, bounds.size, anchorPoint, and transform.
You want to test whether a point is inside the on-screen appearance of the layer's path—that is, the path with the transform applied.
One way to do this is to convert the point into the layer's coordinate system. To convert it, you also need to know the original coordinate system of the point. Then you can use -[CALayer convertPoint:fromLayer:] or -[CALayer convertPoint:toLayer:]. For example, suppose you have a tap gesture recognizer and you want to know if the tap is inside the path:
@IBAction func tapperDidFire(_ tapper: UITapGestureRecognizer) {
    let newPoint = tapper.location(in: view)
    let newPointInShapeLayer = shapeLayer.convert(newPoint, from: view.layer)
    if shapeLayer.path?.contains(newPointInShapeLayer) ?? false {
        print("Hit!")
    }
}
I am trying to put together a game using SpriteKit, in Swift.
I have a moving sprite which is a rectangle of width (sprite.frame.size.width) and height 2*(sprite.frame.size.width)
I only want to check collision of the bottom half which is a square of width (sprite.frame.size.width) and height (sprite.frame.size.width)
I set sprite.anchorPoint = CGPointMake(0.5, 0.25) and use the CGRectIntersectsRect method to check for collision with another sprite. But this does not work. The collision area remains the full original rectangle.
I do not want to use methods that call for physicsBody because there is no other physics in the game.
What am I missing here?
I actually found the UIEdgeInsetsInsetRect function, which can be used to shrink a frame. If spriteB is the sprite in question, then instead of testing like this:
if CGRectIntersectsRect(spriteA.frame, spriteB.frame) {
// something happens
}
just inset the edges to the bottom half of spriteB, like this:
if CGRectIntersectsRect(spriteA.frame, UIEdgeInsetsInsetRect(spriteB.frame, UIEdgeInsetsMake(spriteB.frame.height/2 , 0 , 0 , 0))) {
// something happens
}
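In current Swift syntax the same test could be written with CGRect.inset(by:) and CGRect.intersects(_:) (a sketch that mirrors the answer's inset, using the same spriteA/spriteB):
// inset the top edge by half the height, mirroring UIEdgeInsetsInsetRect above
let bottomHalf = spriteB.frame.inset(by: UIEdgeInsets(top: spriteB.frame.height / 2,
                                                      left: 0, bottom: 0, right: 0))
if spriteA.frame.intersects(bottomHalf) {
    // something happens
}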
In my Unity2D project, I am trying to spawn copies of my sprite on top of each other across the entire height of the device's screen. For example, to give an idea, think of boxes stacked on top of each other spanning the whole screen height. In my case, I'm spawning arrow sprites instead of boxes.
I already have the sprites spawning on top of each other successfully. My problem now is how to calculate how many sprites to spawn to make sure they spread across the entire screen height.
I currently have this snippet of code:
public void SpawnInitialArrows()
{
    // get the size of our sprite first
    Vector3 arrowSizeInWorld = dummyArrow.GetComponent<Renderer>().bounds.size;
    // get screen.height in world coords
    float screenHeightInWorld = Camera.main.ScreenToWorldPoint(new Vector3(0, Screen.height, 0)).y;
    // get the bottom edge of the screen in world coords
    Vector3 bottomEdgeInWorld = Camera.main.ScreenToWorldPoint(new Vector3(0, 0, 0));
    // calculate how many arrows to spawn based on screen.height/arrow.size.y
    int numberOfArrowsToSpawn = (int)screenHeightInWorld / (int)arrowSizeInWorld.y;
    // create a vector3 to store the position of the previous arrow
    Vector3 lastArrowPos = Vector3.zero;
    for (int i = 0; i < numberOfArrowsToSpawn; ++i)
    {
        GameObject newArrow = this.SpawnArrow();
        // if this is the first arrow in the list, spawn at the bottom of the screen
        if (LevelManager.current.arrowList.Count == 0)
        {
            // we only handle the y position because we're stacking them on top of each other!
            newArrow.transform.position = new Vector3(newArrow.transform.position.x,
                                                      bottomEdgeInWorld.y + arrowSizeInWorld.y / 2,
                                                      newArrow.transform.position.z);
        }
        else
        {
            // else, spawn on top of the previous arrow
            newArrow.transform.position = new Vector3(newArrow.transform.position.x,
                                                      lastArrowPos.y + arrowSizeInWorld.y,
                                                      newArrow.transform.position.z);
        }
        // save the position of this arrow so that we know where to spawn the next arrow!
        lastArrowPos = new Vector3(newArrow.transform.position.x,
                                   newArrow.transform.position.y,
                                   newArrow.transform.position.z);
        LevelManager.current.arrowList.Add(newArrow);
    }
}
The problem with my current code is that it doesn't spawn the correct number of sprites to cover the entire height of the device's screen. It only spawns my arrow sprites approximately up to the middle of the screen. What I want is for it to be able to spawn up to the top edge of the screen.
Anyone know where the calculation went wrong, and how to make the current code cleaner?
If the camera renders in perspective mode and the sprites appear at varying sizes on screen (sprites farther from the camera are smaller than sprites closer to it), then a different way to calculate the numberOfArrowsToSpawn value is needed.
You could try adding sprites with a while loop instead of a for loop: just keep creating sprites until the calculated world position for the next sprite would no longer be visible to the camera. You can check whether a point is visible to the camera using the technique Jessy provides in this link:
http://forum.unity3d.com/threads/point-in-camera-view.72523/
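Put together, that approach might look roughly like this (a sketch only; it reuses dummyArrow, SpawnArrow() and LevelManager from the question and checks visibility through viewport coordinates):
float arrowHeight = dummyArrow.GetComponent<Renderer>().bounds.size.y;
// start at the bottom edge of the screen, at the center of the first arrow
Vector3 nextPos = Camera.main.ScreenToWorldPoint(new Vector3(0, 0, 0));
nextPos.y += arrowHeight / 2f;
nextPos.z = 0f;

// keep spawning while the candidate position is still inside the camera's view
while (Camera.main.WorldToViewportPoint(nextPos).y <= 1f)
{
    GameObject newArrow = this.SpawnArrow();
    newArrow.transform.position = new Vector3(newArrow.transform.position.x, nextPos.y,
                                              newArrow.transform.position.z);
    LevelManager.current.arrowList.Add(newArrow);
    nextPos.y += arrowHeight;
}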
I think your screenHeightInWorld is really a screenTopInWorld; that point can be anywhere in space.
You need the screen height in world coordinates.
With an orthographic projection that is twice the camera's orthographicSize (orthographicSize is half the vertical extent of the view):
float screenHeightInWorld = Camera.main.orthographicSize * 2.0f;
I did not read the rest, but it is probably fine; it is up to you how you implement this.
I'd simply create an arrow-spawning method, something like bool SpawnArrowAboveIfFits(), which can call itself repeatedly for each new instance.
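A rough sketch of that idea (SpawnArrow() and LevelManager are the question's own members; everything else is illustrative):
bool SpawnArrowAboveIfFits(Vector3 previousPos, float arrowHeight)
{
    Vector3 candidate = previousPos + new Vector3(0f, arrowHeight, 0f);
    // stop once the candidate position leaves the camera's view
    if (Camera.main.WorldToViewportPoint(candidate).y > 1f)
        return false;

    GameObject newArrow = this.SpawnArrow();
    newArrow.transform.position = candidate;
    LevelManager.current.arrowList.Add(newArrow);
    // try to place another arrow above the one just spawned
    return SpawnArrowAboveIfFits(candidate, arrowHeight);
}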