SpriteKit texture atlas leads to image artifacts and blank areas - swift

I have a SpriteKit game in which there's a shape created via SKShapeNode(splinePoints:count:).
The shape has a fillTexture, loaded from a texture atlas via textureNamed(_:).
In my texture atlas there are 5 images -- 1 of which is used for the shape.
If I use the regular .xcassets folder instead of an atlas, the shape is textured correctly. So it's definitely an issue with the atlas. Also, the texture works correctly if it's the only image in the atlas. It's when I add additional images that the problem occurs.
Here's the code that results in the correct texture:
var splinePoints = mainData.getSplinePointsForGround()
let ground = SKShapeNode(splinePoints: &splinePoints, count: splinePoints.count)
let texture = SKTexture(imageNamed: "groundTexture")
ground.fillTexture = texture
ground.lineWidth = 1
ground.fillColor = UIColor(Color.purple)
ground.strokeColor = UIColor(Color.clear)
addChild(ground)
Expected results:
The shape, which has a purple gradient image, should look like this (please ignore the dotted white line):
Actual results:
Instead, the shape looks like this, with strange blank areas and little artifacts from the other images located in the atlas:
Here's the version of code that uses the atlas:
let textureAtlas = SKTextureAtlas(named: "assets")
var splinePoints = mainData.getSplinePointsForGround()
let ground = SKShapeNode(splinePoints: &splinePoints, count: splinePoints.count)
let texture = textureAtlas.textureNamed("groundTexture2.png")
ground.fillTexture = texture
ground.lineWidth = 1
ground.fillColor = UIColor(Color.purple)
ground.strokeColor = UIColor(Color.clear)
addChild(ground)
Why is this problem happening, and how do I resolve it?
Thank you!

It looks like I've solved the problem, though I'm not sure why my solution works.
Instead of having a top-level .atlas folder, I've put a .spriteatlas folder inside Assets.xcassets.
For whatever reason, this results in the correct texture being shown, without any nasty artifacts or transparent areas. (My best guess, and it is only a guess: fillTexture seems to sample the atlas's whole backing texture rather than the sub-image's rect, so a packed .atlas bleeds neighboring images into the shape, while the .spriteatlas pipeline avoids this.)
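For reference, the loading code itself didn't need to change, only the folder type did. A minimal sketch, with the sprite atlas name being illustrative:
// "Ground.spriteatlas" lives inside Assets.xcassets (hypothetical name)
let textureAtlas = SKTextureAtlas(named: "Ground")
ground.fillTexture = textureAtlas.textureNamed("groundTexture2")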

Related

Where should a shadow plane be defined in SceneKit?

This is so confusing to me; I'd be grateful if anyone could help me with it.
I have a shadow plane to show the shadow below an AR object. I've read articles where this shadow plane is defined in viewDidLoad and added as a child node to sceneView.scene. My question is: should it be defined only once for the floor surface?
For instance, I can add the shadow plane in renderer(_:didAdd:for:), which is called once when a new surface is detected. That would suit me well. But should the position of the shadow plane be updated as well? Can someone explain where it should be defined and where/when it should be updated?
Here is how I define the shadow plane:
private func addShadowPlane(node: SCNNode, planeAnchor: ARPlaneAnchor) {
    let anchorX = planeAnchor.center.x
    let anchorY = planeAnchor.center.y
    let anchorZ = planeAnchor.center.z
    let floor = SCNFloor()
    let floorNode = SCNNode(geometry: floor)
    floorNode.position = SCNVector3(anchorX, anchorY, anchorZ)
    floor.length = CGFloat(planeAnchor.extent.z)
    floor.width = CGFloat(planeAnchor.extent.x)
    floor.reflectivity = 0
    floor.materials = [shadowMaterialStandard()]
    node.addChildNode(floorNode)
}
func shadowMaterialStandard() -> SCNMaterial {
    let material = SCNMaterial()
    material.lightingModel = .physicallyBased
    material.writesToDepthBuffer = true
    material.readsFromDepthBuffer = true
    material.colorBufferWriteMask = []   // draws no color: an invisible surface that still receives shadows
    return material
}
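For reference, here is roughly how I imagine the update callback would look (just a sketch on my part, assuming one floor node was added per plane anchor in renderer(_:didAdd:for:)):
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let floorNode = node.childNodes.first,
          let floor = floorNode.geometry as? SCNFloor else { return }
    // Re-center and resize the floor as ARKit refines the plane estimate
    floorNode.position = SCNVector3(planeAnchor.center.x, planeAnchor.center.y, planeAnchor.center.z)
    floor.width = CGFloat(planeAnchor.extent.x)
    floor.length = CGFloat(planeAnchor.extent.z)
}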
The issue you might run into is this: do you want one single shadow plane, defined in some initial position, that then stays there (or can be repositioned)? Or do you want lots of shadow planes, one for every surface ARKit captures? The problem with the latter is that those planes will match the surfaces they are created on only more or less. You can build more accurate shapes for surfaces, but they are refined in an ongoing process and need more time to complete (imagine scanning a table by walking around it).

I have also made some AR apps with shadow planes. I usually create one single, large shadow plane (something like 20x20 meters) on request, using a focus square: I fetch the worldPosition from the focus square, then add a plane at that location using SceneKit (and not the renderer for plane anchors). Keep in mind there are many ways to do this; there is no single best way.
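To make that single-plane approach concrete, here is a minimal sketch. It assumes the world position comes from your own focus-square code, sceneView is your ARSCNView, and shadowMaterialStandard() is the function from the question:
// A sketch of the single-plane approach, under the assumptions above
func addSingleShadowPlane(at worldPosition: SCNVector3, in sceneView: ARSCNView) {
    let floor = SCNFloor()
    floor.reflectivity = 0
    floor.width = 20     // large enough to catch shadows anywhere nearby
    floor.length = 20
    floor.materials = [shadowMaterialStandard()]
    let floorNode = SCNNode(geometry: floor)
    floorNode.position = worldPosition
    sceneView.scene.rootNode.addChildNode(floorNode)
}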
Try to study this Apple Sample App for more information on placing objects, casting shadows etc:
https://developer.apple.com/documentation/arkit/environmental_analysis/placing_objects_and_handling_3d_interaction

Swift ARKit: Get face anchor transform relative to camera

My Swift ARKit app needs the position and orientation of the face relative to the front-facing camera. If I set ARConfiguration.worldAlignment = .camera, all I need to do is read faceAnchor.transform, which works perfectly; but I need to run with the default worldAlignment = .gravity. In that mode I can get faceAnchor.transform and camera.transform, both supplied in world coordinates. How can I use those transforms to get the face anchor in camera coordinates? I've tried multiplying them together, as well as multiplying one by the other's inverse, in all four order combinations, but none of the results works. I just don't understand matrix operations well enough to succeed here. Can someone shed light on this for me?
I finally figured this out using SceneKit functions!
let currentFaceTransform = currentFaceAnchor!.transform
let currentCameraTransform = frame.camera.transform
let newFaceMatrix = SCNMatrix4(currentFaceTransform)
let newCameraMatrix = SCNMatrix4(currentCameraTransform)
let cameraNode = SCNNode()
cameraNode.transform = newCameraMatrix
let originNode = SCNNode()
originNode.transform = SCNMatrix4Identity
// Converts a transform from the node's local coordinate space to that of another node
let transformInCameraSpace = originNode.convertTransform(newFaceMatrix, to: cameraNode)
let faceTransformFromCamera = simd_float4x4(transformInCameraSpace)
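For what it's worth, the same result can be computed directly with simd, without the helper nodes; the face anchor in camera coordinates is the inverse of the camera's world transform multiplied by the face's world transform:
// Equivalent simd-only computation: inverse(camera) * face maps the
// face's world transform into camera coordinates
let faceInCameraSpace = simd_mul(currentCameraTransform.inverse, currentFaceTransform)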
Hope this helps some others out there!

SpriteKit: how to selectively scale nodes

As background, let's assume I have a map (literally a road map) being rendered inside my SKScene. Roads are represented by SKShapeNodes, each with its path set from an array of CGPoints. I want the user to be able to zoom in/out, so I created a camera node:
var cam: SKCameraNode = SKCameraNode()
and when the user zooms in/out by scrolling on the trackpad:
let zoomInAction = SKAction.scale(to: CGFloat(scale), duration: 0.0)
camera?.run(zoomInAction)
This works great. However, there's an additional complexity I'm not sure how to handle: I want some nodes (for example road name labels, icons, the map legend) to be exempt from scaling, so that as the user zooms in/out, a road name label stays the same size while the road shape scales proportionally.
I'm not sure how to handle this. Can I have a hierarchy of scenes, so that one layer scales and the other doesn't? Can that be achieved by attaching the camera node to the "scalable" layer? Any help appreciated!
Here is how it works: if you want a node's scale not to change with the camera, add the node to the camera's node tree. Don't forget to add the cameraNode to the scene itself; otherwise, the nodes attached to the camera won't be rendered.
In the following, the label is rendered via the camera and won't change scale:
let label = SKLabelNode(text: "GFFFGGG")
label.fontSize = 30
label.fontColor = UIColor.black
label.name = "cool"
label.zPosition = 100
let camera = SKCameraNode()
camera.addChild(label)   // the label lives in the camera's tree
scene.addChild(camera)   // the camera node itself must be in the scene
scene.camera = camera
camera.position = CGPoint(x: 0, y: 0)
camera.xScale = 2.0      // zooms the scene, but the label keeps its on-screen size
If you have nodes that were previously attached to the scene, you can remove each one from its parent and add it to the camera; a function that batch-processes them, as in the sketch below, makes this less tedious than it sounds.
Maybe not necessary: to preserve positions, you can transfer them into the cameraNode's tree via SKNode's convert(_:from:) and related methods.
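A rough sketch of such a helper (the function name is mine; it assumes the camera has already been added to the scene) that keeps each node at the same on-screen spot:
// Illustrative helper: moves a node into the camera's tree while
// preserving its on-screen position
func move(_ node: SKNode, toCamera camera: SKCameraNode, in scene: SKScene) {
    let scenePosition = node.parent.map { scene.convert(node.position, from: $0) } ?? node.position
    node.removeFromParent()
    node.position = camera.convert(scenePosition, from: scene)
    camera.addChild(node)
}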

How to fit a texture to a surface? ARKit SceneKit Swift

I am very new to SceneKit and 3D development in general, and I'm playing around with ARKit, trying to fit a texture to a plane (well, really an SCNBox, but only its top surface). I'm seriously failing, and also failing to find anything helpful on the web.
I have a road texture that is a very long rectangular PNG image; its width:height ratio is about 20:1.
I want to apply this texture to the surface of a table, once arkit has found the plane for me. I do not know the dimensions of the table before the app starts.
I can currently apply a texture to this plane, and also rotate the texture as desired.
What I would like to accomplish is to stretch the texture (keeping its original ratio) so that the short sides of the plane and the texture line up, and the texture then continues to the end of the plane, cutting off or repeating depending on the plane's length.
Here is the function that gets the SCNMaterial object:
class func getRunwayMaterial() -> SCNMaterial {
    let name = "runway"
    if let cached = materials[name] {
        return cached
    }
    let mat = SCNMaterial()
    mat.lightingModel = .physicallyBased
    mat.diffuse.contents = UIImage(named: "./Assets.scnassets/Materials/runway/runway.png")
    mat.diffuse.wrapS = .repeat
    mat.diffuse.wrapT = .repeat
    materials[name] = mat
    return mat
}
This is the function that should be doing the scaling and rotating of the texture on the plane.
func setRunwayTextureScale(rotation: Float? = nil, material: SCNMaterial? = nil) {
    // Fall back to the plane's material if none was passed in
    let material = material ?? planeGeometry.materials[4]
    if let rotation = rotation {
        textureRotation += rotation
    }
    var m = SCNMatrix4MakeScale(1, 1, 1)
    m = SCNMatrix4Rotate(m, textureRotation, 0, 1, 0)
    material.diffuse.contentsTransform = m
}
Please help me fill in the blanks here, and if anyone has links to articles on this kind of texture manipulation, please share them!
Thanks!
Ethan
Edit: by the way, I'm using Xcode 9.
Try using:
material.diffuse.wrapS = SCNWrapModeRepeat;
material.diffuse.wrapT = SCNWrapModeRepeat;
This would help the material not stretch, but simply keep adding more and more of the same png to itself.
You can also set the scale of the material by applying a width and height:
CGFloat width = self.planeGeometry.width;
CGFloat height = self.planeGeometry.length;
material.diffuse.contentsTransform = SCNMatrix4MakeScale(width, height, 1);
Sorry, I'm working in Objective-C here, but it should be pretty straightforward to translate.
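A hedged Swift sketch of the same idea, extended toward the aspect-ratio behavior asked about above. It assumes the texture's long axis runs along the plane's length (e.g. after the rotation step in the question) and that the image is about 20 times wider than it is tall:
let width = Float(planeGeometry.width)
let length = Float(planeGeometry.length)
material.diffuse.wrapS = .repeat
material.diffuse.wrapT = .repeat
// One aspect-correct tile spans the full plane width and covers width * 20
// of its length, so the repeat count along the length is length / (width * 20);
// the repeat wrap mode tiles or cuts off whatever doesn't fit evenly
material.diffuse.contentsTransform = SCNMatrix4MakeScale(1, length / (width * 20), 1)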
Also, some good tutorials can be found at this link:
https://blog.markdaws.net/apple-arkit-by-example-ef1c8578fb59

How to avoid resizing a texture in Swift

I am adding a sprite node to the scene; its size is given.
But when I change the texture of the sprite node, the size automatically changes to the original size of the texture's image (PNG).
How can I avoid this?
My code:
let bomba = SKSpriteNode(imageNamed: "bomba2")
var actionbomba = [SKAction]()
bomba.size = CGSize(width: frame2.size.width / 18, height: frame2.size.width / 18)
let bomba3 = SKTexture(imageNamed: "bomba3.png")
actionbomba.append(SKAction.move(by: CGVector(dx: 0, dy: frame.size.height / 2.65), duration: beweegsnelheid))
actionbomba.append(SKAction.setTexture(bomba3, resize: false))
addChild(bomba)
bomba.run(SKAction.repeatForever(SKAction.sequence(actionbomba)))
Do not set the size explicitly; SpriteKit will not automatically work out a scale factor for each texture and apply it.
Instead, set the scale factor on the node itself, and each texture will have that scale applied to it:
playernode.setScale(x)
Something like this. You only have to set it once, when you create the node, and each texture will then be the size you expect, given that your textures share the same dimensions.
I use this method for all of my nodes that are animated with multiple textures, and it works every time.
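A minimal sketch of that approach (the scale value and texture names are illustrative, and both textures are assumed to share the same native size):
let bomba = SKSpriteNode(imageNamed: "bomba2")
bomba.setScale(0.5)   // set once at creation; don't set bomba.size
addChild(bomba)
// Later texture swaps keep the same on-screen size
bomba.run(SKAction.setTexture(SKTexture(imageNamed: "bomba3"), resize: false))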