I have an SKSpriteNode whose texture has a significant alpha margin around it. The texture is 92x92 points, but the touchable frame of the node ends up much smaller (40x40) because of that margin. I would like a touch to register if it lands anywhere within the 92x92 texture.
I detect the nodes in touchesBegan with nodesAtPoint. However, if you touch in the alpha margin, the node is not detected. I tried changing this by overriding calculateAccumulatedFrame in my SKSpriteNode subclass, but it doesn't seem to have done anything; my override is called rarely and unpredictably. I assume that if a node has no children, SpriteKit consults the frame property directly instead of calling calculateAccumulatedFrame.
Any solutions?
I do not know how to actually change the way SpriteKit behaves regarding alpha in textures, but as an alternative solution, I overrode my scene's nodesAtPoint.
If you have to do this for some reason, keep in mind that CGRect's origin point is at the lower-left of the rect, not at the center the way an SKNode's position is.
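For reference, here is a minimal Swift sketch of the kind of override I mean (the scene class name is made up, and it assumes you want to hit-test against the full texture size of every sprite):

```swift
import SpriteKit

class TouchScene: SKScene {  // hypothetical scene subclass
    // Also return sprites whose full texture rect contains the point,
    // even if the node's frame (shrunk by the alpha margin) does not.
    override func nodes(at p: CGPoint) -> [SKNode] {
        var hits = super.nodes(at: p)
        enumerateChildNodes(withName: "//*") { node, _ in
            guard let sprite = node as? SKSpriteNode,
                  let texture = sprite.texture,
                  !hits.contains(sprite) else { return }
            // Build a rect of the full texture size, placed according to the anchor point.
            // Remember: a CGRect's origin is its lower-left corner, not the node's center.
            let size = texture.size()
            let origin = CGPoint(x: sprite.position.x - size.width * sprite.anchorPoint.x,
                                 y: sprite.position.y - size.height * sprite.anchorPoint.y)
            let fullRect = CGRect(origin: origin, size: size)
            // The rect lives in the sprite's parent's coordinate space, so convert the
            // touch point (given in scene coordinates) before testing.
            if let parent = sprite.parent,
               fullRect.contains(self.convert(p, to: parent)) {
                hits.append(sprite)
            }
        }
        return hits
    }
}
```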
Related
I used to have a game object, used as a sword, with sprites of a specific size (despite including quite a bit of transparent space) to ensure they aligned properly with the player game object:
This ensures the sword follows the player as he jumps, by copying the hero's transform like this: transform.position = hero.transform.position;. There may be issues when the sprites change, but I planned to address those later.
However, since I want to have several different pieces of equipment, and other sprites of this same sword might need bigger dimensions to look good (such as a sword attack while standing on the ground), I could either make even bigger sprites, which would eventually hurt performance because of all the transparent pixels being loaded, or make sprites cropped to specific sizes:
(If this works, I'd make sure to draw them close together instead of keeping them separate.)
However, although when I prepare the animation I make sure to shift the position of the new sword to where it should be relative to the player sprite in its air-attack animation (which meant modifying it frame by frame), the sword doesn't seem to follow the player, even though its game object still uses the script that copies the player's transform position:
I'm assuming something else has to be changed frame by frame, but what could it be? Is there a way to align or anchor a smaller sprite to follow the pivot of a bigger sprite?
All rotation or changing of sprites is done relative to the sprite's Pivot Point.
When you currently swap your sprites, your sword looks like it is rotating around its blade rather than the handle.
Change the Pivot point to the handle, and it will do most of the work.
The rest is just making sure the handle of the sword follows the character's hands.
I'm making a child game object to hold sword sprites under a parent object. To accomplish this, I've made sword sprites that are bigger than the parent sprite (the parent is 256x256, the sword is 384x384), so that the weapon's size is not limited by the size of the parent sprite:
I've made it so all of these images' pivots are at the center, so in theory, when the sword attack animation plays, the sword game object should show the sprites centered on the parent. While that does appear to happen, it seems the inclusion of these bigger images affects the parent game object's dimensions:
So, aside from addressing possible delays and z-index issues, it seems that because the image is bigger, the parent game object gets pushed down so that the top of the bigger image (the sword in this case) stays stuck to the parent's top. Is there a way to make child image dimensions be ignored so that the parent image isn't displaced?
So it turns out that, because I had all sprites' pivots set to Bottom Center and had changed the sprites specific to this animation to Center (so the sword and the hero would align), the position of those sprites changed relative to the rest, producing the abrupt downward shift. The lack of synchronization was also not due (at least not strictly) to lack of optimization, but to the fact that the sword animation's duration was 15 milliseconds while the body's and all its other child sprites' attack animations were 18 milliseconds, so the system may have tried to compensate somewhere. Changing the sword animation to last 18 milliseconds (by adding 3 more at the end without a sprite) synced them.
When using SKCropNode in SpriteKit, does the masking affect the physics of the node? (E.g. if I crop half of the sprite, will a ball fall through the masked part of the image?) If so, how would I go about creating the SKCropNode so it crops wherever I touch?
Cheers
SKCropNode only affects how a node appears on screen; it does not affect physics bodies. You can, however, use SKPhysicsBody(polygonFrom:) with a CGPath that matches the shape you are trying to mimic, gap and all. I recommend using the program PhysicsEditor to build such paths: https://www.codeandweb.com/physicseditor
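A minimal Swift sketch of that idea, with a hand-built rectangular path standing in for whatever PhysicsEditor would give you (the texture name and the "left half" crop are just placeholders):

```swift
import SpriteKit

// A sprite cropped to its left half, with a physics body that matches
// only the visible half. The crop itself has no effect on physics.
let sprite = SKSpriteNode(imageNamed: "ball")   // hypothetical texture name
let crop = SKCropNode()
let mask = SKSpriteNode(color: .white, size: CGSize(width: sprite.size.width / 2,
                                                    height: sprite.size.height))
mask.position = CGPoint(x: -sprite.size.width / 4, y: 0)  // cover the left half only
crop.maskNode = mask
crop.addChild(sprite)

// You have to describe the visible shape yourself; here it is simply
// the rectangle covering the un-masked left half of the sprite.
let visibleRect = CGRect(x: -sprite.size.width / 2,
                         y: -sprite.size.height / 2,
                         width: sprite.size.width / 2,
                         height: sprite.size.height)
crop.physicsBody = SKPhysicsBody(polygonFrom: CGPath(rect: visibleRect, transform: nil))
```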
I have a situation where I need to use the resizeToHeight action (down to zero), but I want to achieve a sliding-door effect, where the resize happens from the bottom and not from the center of the image (with an anchorPoint of (0.5, 0.5), the resize pretty much happens around the middle).
Now, if I change the anchorPoint to (0,0), the resize occurs the way I want, but the physicsBody of the object is not affected by the change of anchorPoint, which messes with my collision detection (the invisible frame collides rather than the visible part of the image).
Based on what I could find online, it looks like it may not be the best idea to change the anchor point to CGPointZero. If that is the case, how can I handle this properly? Or, if CGPointZero is the way to go, how do I handle the physicsBody discrepancy?
Disconnect the sprite from the physics body, i.e. have one node that represents the body and another that represents the door image; that way you can move and scale them independently.
The best solution is probably to use an SKNode with the physics body and add the SKSpriteNode as a child, so you can offset it relative to the physics body however you like without having to constantly synchronize their position/rotation.
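A minimal Swift sketch of that arrangement (the door size, color, and duration are placeholders):

```swift
import SpriteKit

// An invisible node owns the physics body, and the visible door sprite is a
// child that can be resized from its bottom edge without disturbing that body.
let doorSize = CGSize(width: 32, height: 128)

let doorBody = SKNode()                                   // carries the physics
doorBody.physicsBody = SKPhysicsBody(rectangleOf: doorSize)
doorBody.physicsBody?.isDynamic = false

let doorSprite = SKSpriteNode(color: .brown, size: doorSize)
doorSprite.anchorPoint = CGPoint(x: 0.5, y: 0)            // grow/shrink from the bottom edge
doorSprite.position = CGPoint(x: 0, y: -doorSize.height / 2) // line up with the body's bottom
doorBody.addChild(doorSprite)

// Visual "sliding door" close; the physics body is untouched and can be
// disabled or removed separately whenever the door should stop colliding.
doorSprite.run(SKAction.resize(toHeight: 0, duration: 0.5))
```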
I am implementing a paint program in SpriteKit and I am super impressed by the simplicity of implementation SpriteKit enables. Very cool indeed!
Each brush stroke is implemented as a SKSpriteNode with attached SKShader. Everything works as I expect.
Issue: Currently I am steadily growing the SKScene graph as brush strokes are appended. This is not a scalable solution, as resource consumption mushrooms and the framerate slows to a crawl. So, what I want to do is "bake" the SKScene graph after each brush stroke is painted.
I tried setting myScene.shouldRasterize = YES on my SKScene instance but it appeared to have no effect. How do I "bake" rendering results in SpriteKit? Do I have to roll my own?
Thanks,
Doug
The only way I can think of to do this is to call textureFromNode on your SKView, passing in the SKNode that you want to "bake". Then take that texture, apply it to a SKSpriteNode, and remove the SKNode(s) that you just "baked" into the texture.
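Roughly, in Swift (the function and node names here are hypothetical, not part of any SpriteKit API):

```swift
import SpriteKit

// Render the existing stroke nodes into a texture, swap in a single sprite,
// and remove the originals.
func bakeStrokes(in scene: SKScene, strokeLayer: SKNode, view: SKView) {
    guard let baked = view.texture(from: strokeLayer) else { return }

    let bakedSprite = SKSpriteNode(texture: baked)
    // texture(from:) captures the node's accumulated frame, so position the
    // sprite at that frame's center to keep everything visually in place.
    let frame = strokeLayer.calculateAccumulatedFrame()
    bakedSprite.position = CGPoint(x: frame.midX, y: frame.midY)
    scene.addChild(bakedSprite)

    // The individual stroke nodes are no longer needed once they're baked in.
    strokeLayer.removeFromParent()
}
```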
Wanted to add a comment but reputation won't allow me. Just would like to add that you can rasterize a group of sprites by using SKEffectNode. Add any sprites to this node, then use the same shouldRasterize property. You can even apply Core Image filters to it, such as a Gaussian blur.
The downside is obviously the performance cost when creating the rasterized node.
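For illustration, a minimal Swift sketch of that grouping (existingStrokeSprites and scene are assumed to exist in your own code):

```swift
import SpriteKit
import CoreImage

// Group the stroke sprites under an effect node, rasterize it, and
// optionally apply a Core Image filter to the cached result.
let strokeGroup = SKEffectNode()
strokeGroup.shouldRasterize = true                     // cache the rendered result
strokeGroup.filter = CIFilter(name: "CIGaussianBlur",
                              parameters: [kCIInputRadiusKey: 4.0])  // optional

for stroke in existingStrokeSprites {                  // hypothetical [SKSpriteNode]
    stroke.removeFromParent()
    strokeGroup.addChild(stroke)
}
scene.addChild(strokeGroup)                            // assumes `scene` is your SKScene
```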