How to move around in a SceneKit scene (Swift)

In my Mac Catalyst app, I am trying to make an SCNScene where the user can move around (with W, A, S and D), but it doesn't work.
I want to first make it possible to simply move around in the scene, and then adjust the movement to the direction the camera is facing. But I have already failed at the first step.
I tried setting the SCNView's point of view to a new SCNNode and moving that around:
// initializing the scene
let scene = SCNScene(named: "SceneKit Scene.scn")!
sceneView.scene = scene

// try to set the point of view of the scene
let node = SCNNode()
node.position = SCNVector3(x: 10, y: 10, z: 10)
sceneView.pointOfView = node
But it changed nothing.
I have also tried to move the node (that holds all the other nodes of the scene) around:
override func pressesBegan(_ presses: Set<UIPress>, with event: UIPressesEvent) {
    guard let press = presses.first else { return }
    guard let key = press.key else { return }
    if key.charactersIgnoringModifiers == "w" {
        node.position.z += 1
    }
}
This works better, but it does not feel like moving around in the scene, and if I change the direction the camera is facing, I move in the wrong direction.
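I suspect the movement has to happen along the camera node's own axes. Something like this untested sketch is what I imagine (assuming a cameraNode that is set as the view's pointOfView and has an SCNCamera attached):
if key.charactersIgnoringModifiers == "w" {
    cameraNode.simdPosition += cameraNode.simdWorldFront * 0.5   // forward
}
if key.charactersIgnoringModifiers == "s" {
    cameraNode.simdPosition -= cameraNode.simdWorldFront * 0.5   // backward
}
if key.charactersIgnoringModifiers == "a" {
    cameraNode.simdPosition -= cameraNode.simdWorldRight * 0.5   // strafe left
}
if key.charactersIgnoringModifiers == "d" {
    cameraNode.simdPosition += cameraNode.simdWorldRight * 0.5   // strafe right
}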
So how do I fix this?
Thanks for helping!

Related

ARKit SCNNode always in the center when the camera moves

I am working on a project where I have to place a green dot that always stays in the center, even when we rotate the camera in ARKit. I am using ARSCNView and I have added the node; so far everything is good. Now I know I need to modify the position of the node in
func session(_ session: ARSession, didUpdate frame: ARFrame)
But I have no idea how to do that. I saw some example code which was close to what I have, but it does not run as it's supposed to.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let location = sceneView.center
    let hitTest = sceneView.hitTest(location, types: .featurePoint)
    if hitTest.isEmpty {
        print("No Plane Detected")
        return
    } else {
        let columns = hitTest.first?.worldTransform.columns.3
        let position = SCNVector3(x: columns!.x, y: columns!.y, z: columns!.z)
        var node = sceneView.scene.rootNode.childNode(withName: "CenterShip", recursively: false) ?? nil
        if node == nil {
            let scene = SCNScene(named: "art.scnassets/ship.scn")!
            node = scene.rootNode.childNode(withName: "ship", recursively: false)
            node?.opacity = 0.7
            let columns = hitTest.first?.worldTransform.columns.3
            node!.name = "CenterShip"
            node!.position = SCNVector3(x: columns!.x, y: columns!.y, z: columns!.z)
            sceneView.scene.rootNode.addChildNode(node!)
        }
        let position2 = node?.position
        if position == position2! {
            return
        } else {
            // action
            let action = SCNAction.move(to: position, duration: 0.1)
            node?.runAction(action)
        }
    }
}
No matter how I rotate the camera, this dot must stay in the middle.
It's not clear exactly what you're trying to do, but I assume it's one of the following:
A) Place the green dot centered in front of the camera at a fixed distance, e.g. always exactly 1 meter in front of the camera.
B) Place the green dot centered in front of the camera at the depth of the nearest detected plane, i.e. using the results of a raycast from the midpoint of the ARSCNView.
I would have assumed A, but your example code is using the (now deprecated) sceneView.hitTest() function, which in this case would give you the depth of whatever is behind the pixel at sceneView.center.
Anyway, here's both:
Fixed Depth Solution
This is pretty straightforward, though there are a few options. The simplest is to make the green dot a child node of the scene's camera node, and give it a position with a negative z value, since z increases as a position moves toward the camera.
// Make the dot a child of the camera node so it follows the camera automatically.
cameraNode.addChildNode(textNode)
textNode.position = SCNVector3(x: 0, y: 0, z: -1)   // 1 meter in front of the camera
As the camera moves, so too will its child nodes. More details in this very thorough answer.
Scene Depth Solution
To determine the estimated depth behind a pixel, you should use ARSession.raycast instead of sceneView.hitTest, because the latter is deprecated.
Note that if the raycast() (or hitTest()) methods return an empty result set (not uncommon, given the complexity of the scene estimation going on in ARKit), you won't have a position to update the node with, and thus it might not be centered in every frame. Handling that is a bit more complex, as you'd need to decide exactly what you want to do in that case.
The SCNAction is unnecessary and potentially causing problems. These delegate methods run at 60 fps, so simply updating the position directly will produce smooth results.
Adapting and simplifying the code you posted:
func createCenterShipNode() -> SCNNode {
    let scene = SCNScene(named: "art.scnassets/ship.scn")!
    let node = scene.rootNode.childNode(withName: "ship", recursively: false)
    node!.opacity = 0.7
    node!.name = "CenterShip"
    sceneView.scene.rootNode.addChildNode(node!)
    return node!
}

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Check the docs for what the different raycast query parameters mean, but these
    // give you the depth of anything ARKit has detected
    guard let query = sceneView.raycastQuery(from: sceneView.center, allowing: .estimatedPlane, alignment: .any) else {
        return
    }
    let results = session.raycast(query)
    if let hit = results.first {
        let node = sceneView.scene.rootNode.childNode(withName: "CenterShip", recursively: false) ?? createCenterShipNode()
        let pos = hit.worldTransform.columns.3
        node.simdPosition = simd_float3(pos.x, pos.y, pos.z)
    }
}
See also: ARRaycastQuery
One last note: you generally don't want to do scene manipulation within this delegate method. It runs on a different thread than the SceneKit rendering thread, and SceneKit is very thread sensitive. This will likely work fine, but doing much beyond adding or moving a node will certainly cause crashes from time to time. You'd ideally want to store the new position, and then update the actual scene contents from within the renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) delegate method.
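A minimal sketch of that hand-off pattern (the pendingPosition property and the node lookup are illustrative, not from the original code):
var pendingPosition: simd_float3?

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let query = sceneView.raycastQuery(from: sceneView.center, allowing: .estimatedPlane, alignment: .any),
          let hit = session.raycast(query).first else { return }
    let pos = hit.worldTransform.columns.3
    pendingPosition = simd_float3(pos.x, pos.y, pos.z)   // just record it; don't touch the scene here
}

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // This callback runs on the SceneKit rendering thread, so touching the scene is safe.
    if let position = pendingPosition {
        sceneView.scene.rootNode.childNode(withName: "CenterShip", recursively: false)?.simdPosition = position
        pendingPosition = nil
    }
}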

ARKit – How to display the feed from a virtual SCNCamera placed on SCNPlane?

I put some objects in AR space using ARKit and SceneKit. That works well. Now I'd like to add an additional camera (SCNCamera) that is placed elsewhere in the scene, attached to and positioned by a common SCNNode. It is oriented to show me the current scene from another (fixed) perspective.
Now I'd like to show this additional SCNCamera feed on, e.g., an SCNPlane (as the first material's diffuse contents), like a TV screen. Of course I am aware that it will only display the SceneKit content that is in that camera's view, not the rest of the ARKit image (which is only possible with the main camera, of course). A simple colored background would then be fine.
I have seen tutorials that describe how to play a video file on a virtual display in AR space, but I need a realtime camera feed from my own current scene.
I defined these objects:
let camera = SCNCamera()
let cameraNode = SCNNode()
Then in viewDidLoad I do this:
camera.usesOrthographicProjection = true
camera.orthographicScale = 9
camera.zNear = 0
camera.zFar = 100
cameraNode.camera = camera
sceneView.scene.rootNode.addChildNode(cameraNode)
Then I call my setup function to place the virtual display next to all my AR stuff, and position the cameraNode as well (pointing toward the objects in the scene):
cameraNode.position = SCNVector3(initialStartPosition.x, initialStartPosition.y + 0.5, initialStartPosition.z)
let cameraPlane = SCNNode(geometry: SCNPlane(width: 0.5, height: 0.3))
cameraPlane.geometry?.firstMaterial?.diffuse.contents = cameraNode.camera
cameraPlane.position = SCNVector3(initialStartPosition.x - 1.0, initialStartPosition.y + 0.5, initialStartPosition.z)
sceneView.scene.rootNode.addChildNode(cameraPlane)
Everything compiles and loads... The display shows up at the given position, but it stays entirely gray. Nothing is displayed at all from the SCNCamera I put in the scene. Everything else in the AR scene works well, I just don't get any feed from that camera.
Does anyone have an approach to get this scenario working?
To visualize it even better, I add some more screenshots.
The following shows the image through the SCNCamera according to ARGeo's input. But it takes up the whole screen, instead of displaying its contents on an SCNPlane as I need.
The next screenshot actually shows the current AR view result as I got it using my posted code. As you can see, the gray display plane remains gray; it shows nothing.
The last screenshot is a photomontage showing the expected result, as I'd like to get it.
How could this be realized? Am I missing something fundamental here?
After some research and sleep, I came to the following working solution (including some inexplicable obstacles):
Currently, the additional SCNCamera feed is not linked to an SCNMaterial on an SCNPlane, as was the initial idea; instead I use an additional SCNView (for the moment).
In the definitions I add another view, like so:
let overlayView = SCNView() // (also tested with ARSCNView(), no difference)
let camera = SCNCamera()
let cameraNode = SCNNode()
Then, in viewDidLoad, I set up the stuff like so...
camera.automaticallyAdjustsZRange = true
camera.usesOrthographicProjection = false
cameraNode.camera = camera
cameraNode.camera?.focalLength = 50
sceneView.scene.rootNode.addChildNode(cameraNode) // add the node to the default scene
overlayView.scene = scene // the same scene as sceneView
overlayView.allowsCameraControl = false
overlayView.isUserInteractionEnabled = false
overlayView.pointOfView = cameraNode // this links the new SCNView to the created SCNCamera
self.view.addSubview(overlayView) // don't forget to add as subview
// Size and place the view on the bottom
overlayView.frame = CGRect(x: 0, y: 0, width: self.view.bounds.width * 0.8, height: self.view.bounds.height * 0.25)
overlayView.center = CGPoint(x: self.view.bounds.width * 0.5, y: self.view.bounds.height - 175)
Then, in some other function, I place the node containing the SCNCamera at my desired position and angle.
// (exemplary; degreesToRadians and the SCNVector3 "+" operator are custom extensions)
cameraNode.position = initialStartPosition + SCNVector3(x: -0.5, y: 0.5, z: -(Float(shiftCurrentDistance * 2.0 - 2.0)))
cameraNode.eulerAngles = SCNVector3(-15.0.degreesToRadians, -15.0.degreesToRadians, 0.0)
The result is a kind of window (the new SCNView) at the bottom of the screen, displaying the same SceneKit content as the main sceneView, viewed through the perspective of the SCNCamera at its node's position, and it does so very nicely.
In a common iOS/Swift/ARKit project, this construct generates some side effects that one may run into.
1) Mainly, the new SCNView shows the SceneKit content from the desired perspective, but the background is always the actual physical camera feed. I could not figure out how to make the background a static color while still displaying all the SceneKit content. Changing the new scene's background property also affects the whole main scene, which is actually NOT desired.
2) It might sound confusing, but as soon as the following code gets included (which is essential to make it work):
overlayView.scene = scene
the animation speed of both scenes DOUBLES! (Why?)
I got this corrected by adding/changing the following property, which restores the animation speed behaviour to almost normal (default):
// add or change this in the scene setup
scene.physicsWorld.speed = 0.5
3) If there are actions like SCNAction.playAudio in the project, none of the effects will play any longer, as long as I don't do this:
overlayView.scene = nil
Of course, the additional SCNView stops working, but everything else gets back to normal.
Use this code (as a starting point) to find out how to set up a virtual camera.
Just create a default ARKit project in Xcode and copy-paste my code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true
        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 1)
        cameraNode.camera?.focalLength = 70
        cameraNode.camera?.categoryBitMask = 1
        scene.rootNode.addChildNode(cameraNode)

        sceneView.pointOfView = cameraNode
        sceneView.allowsCameraControl = true
        sceneView.backgroundColor = UIColor.darkGray

        let plane = SCNNode(geometry: SCNPlane(width: 0.8, height: 0.45))
        plane.position = SCNVector3(0, 0, -1.5)

        // ASSIGN A VIDEO STREAM FROM SCENEKIT-RECORDER TO YOUR MATERIAL
        plane.geometry?.materials.first?.diffuse.contents = capturedVideoFromSceneKitRecorder
        scene.rootNode.addChildNode(plane)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}
UPDATED:
Here's a SceneKit Recorder App that you can tailor to your needs (you don't need to write a video to disk, just use a CVPixelBuffer stream and assign it as a texture for a diffuse material).
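For instance, a hedged sketch of pushing one CVPixelBuffer frame into the material (how you obtain pixelBuffer depends on your recorder setup):
// Convert the pixel buffer to a CGImage and reassign it each frame.
// (In practice, create the CIContext once and reuse it.)
let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
let context = CIContext()
if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
    plane.geometry?.firstMaterial?.diffuse.contents = cgImage
}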
Hope this helps.
I'm a little late to the party, but I've had a similar issue recently.
As far as I can tell, you cannot directly connect a camera to a node's material. You can, however, use a scene's layer as a texture for a node.
The code below is not verified, but should be more or less ok:
class MyViewController: UIViewController {
    override func loadView() {
        let projectedScene = createProjectedScene()
        let receivingScene = createReceivingScene()
        let projectionPlane = receivingScene.scene?.rootNode.childNode(withName: "ProjectionPlane", recursively: true)

        // Here's the important part:
        // You can't directly connect a camera to a material's diffuse texture.
        // But you can connect a scene's layer as a texture.
        projectionPlane?.geometry?.firstMaterial?.diffuse.contents = projectedScene.layer
        projectedScene.layer.contentsScale = 1

        // Note how we only need to connect the receiving view to the controller.
        // The projected view is not directly connected as a subview,
        // but updates in projectedScene will still be reflected in receivingScene.
        self.view = receivingScene
    }

    func createProjectedScene() -> SCNView {
        let view = SCNView()
        // ... set up scene ...
        return view
    }

    func createReceivingScene() -> SCNView {
        let view = SCNView()
        // ... set up scene ...
        let projectionPlane = SCNNode(geometry: SCNPlane(width: 2, height: 2))
        projectionPlane.name = "ProjectionPlane"
        view.scene?.rootNode.addChildNode(projectionPlane)
        return view
    }
}

How to update the pointOfView in ARKit

I'm trying to build my first ARKit app. The purpose of the app is to shoot little blocks in the direction that the camera is facing. Right now, here is the code I have.
sceneView.scene.physicsWorld.gravity = SCNVector3(x: 0, y: 0, z: -9.8)

@IBAction func tapScreen() {
    if let camera = self.sceneView.pointOfView {
        let sphere = NodeGenerator.generateCubeInFrontOf(node: camera, physics: true)
        self.sceneView.scene.rootNode.addChildNode(sphere)
        var isSphereAdded = true
        print("Added box to scene")
    }
}
The gravity works fine; a block shoots out each and every time I tap the screen. However, they all shoot toward the same point, no matter which direction the camera is facing. I'm trying to understand how pointOfView works. Would I need to re-render the whole scene? Or is it something else that I can't quite think of? Thanks for any help!
Change this line from
self.sceneView.scene.rootNode.addChildNode(sphere)
to
self.sceneView.pointOfView?.addChildNode(sphere)
This adds the node as a child of the camera's point-of-view node, so its position is interpreted relative to the camera rather than the world origin.
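If you'd rather keep the blocks in world space, so they fly independently of later camera movement, here is a hedged sketch of the same idea (it reuses the question's NodeGenerator, whose implementation isn't shown):
if let camera = self.sceneView.pointOfView {
    let sphere = NodeGenerator.generateCubeInFrontOf(node: camera, physics: true)
    // Spawn half a meter in front of the lens, in world coordinates.
    sphere.simdPosition = camera.simdWorldPosition + camera.simdWorldFront * 0.5
    self.sceneView.scene.rootNode.addChildNode(sphere)
    // Push the block along the direction the camera is facing.
    let front = camera.simdWorldFront * 2
    sphere.physicsBody?.applyForce(SCNVector3(front.x, front.y, front.z), asImpulse: true)
}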

How do I programmatically move an ARAnchor?

I'm trying out the new ARKit to replace another similar solution I have. It's pretty great! But I can't seem to figure out how to move an ARAnchor programmatically. I want to slowly move the anchor to the left of the user.
Creating the anchor to be 2 meters in front of the user:
var translation = matrix_identity_float4x4
translation.columns.3.z = -2.0
let transform = simd_mul(currentFrame.camera.transform, translation)
let anchor = ARAnchor(transform: transform)
sceneView.session.add(anchor: anchor)
Later, moving the object to the left/right of the user (x-axis)...
anchor.transform.columns.3.x = anchor.transform.columns.3.x + 0.1
repeated every 50 milliseconds (or whatever).
The above does not work because transform is a get-only property.
I need a way to change the position of an AR object in space relative to the user in a way that keeps the AR experience intact. That means that if you move your device, the AR object will be moving, but it also won't be "stuck" to the camera as if it were simply painted on; it moves the way you would see a person move while you were walking by: they are moving and you are moving, and it looks natural.
Please note the scope of this question relates only to how to move an object in space in relation to the user (ARAnchor), not in relation to a plane (ARPlaneAnchor) or to another detected surface (ARHitTestResult).
Thanks!
You don't need to move anchors. (hand wave) That's not the API you're looking for...
Adding ARAnchor objects to a session is effectively about "labeling" a point in real-world space so that you can refer to it later. The point (1,1,1) (for example) is always the point (1,1,1) — you can't move it someplace else because then it's not the point (1,1,1) anymore.
To make a 2D analogy: anchors are reference points, sort of like the bounds of a view. The system (or another piece of your code) tells the view where its boundaries are, and the view draws its content relative to those boundaries. Anchors in AR give you reference points you can use for drawing content in 3D.
What you're asking is really about moving (and animating the movement of) virtual content between two points. And ARKit itself really isn't about displaying or animating virtual content — there are plenty of great graphics engines out there, so ARKit doesn't need to reinvent that wheel. What ARKit does is provide a real-world frame of reference for you to display or animate content using an existing graphics technology like SceneKit or SpriteKit (or Unity or Unreal, or a custom engine built with Metal or GL).
Since you mentioned trying to do this with SpriteKit... beware, it gets messy. SpriteKit is a 2D engine, and while ARSKView provides some ways to shoehorn a third dimension in there, those ways have their limits.
ARSKView automatically updates the xScale, yScale, and zRotation of each sprite associated with an ARAnchor, providing the illusion of 3D perspective. But that applies only to nodes attached to anchors, and as noted above, anchors are static.
You can, however, add other nodes to your scene, and use those same properties to make those nodes match the ARSKView-managed nodes. Here's some code you can add/replace in the ARKit/SpriteKit Xcode template project to do that. We'll start with some basic logic to run a bouncing animation on the third tap (after using the first two taps to place anchors).
var anchors: [ARAnchor] = []

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // Start bouncing on touch after placing 2 anchors (don't allow more)
    if anchors.count > 1 {
        startBouncing(time: 1)
        return
    }
    // Create anchor using the camera's current position
    guard let sceneView = self.view as? ARSKView else { return }
    if let currentFrame = sceneView.session.currentFrame {
        // Create a transform with a translation of 30 cm in front of the camera
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -0.3
        let transform = simd_mul(currentFrame.camera.transform, translation)

        // Add a new anchor to the session
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
        anchors.append(anchor)
    }
}
Then, some SpriteKit fun for making that animation happen:
var ballNode: SKLabelNode = {
    let labelNode = SKLabelNode(text: "🏀")
    labelNode.horizontalAlignmentMode = .center
    labelNode.verticalAlignmentMode = .center
    return labelNode
}()

func startBouncing(time: TimeInterval) {
    guard
        let sceneView = self.view as? ARSKView,
        let first = anchors.first, let start = sceneView.node(for: first),
        let last = anchors.last, let end = sceneView.node(for: last)
        else { return }

    if ballNode.parent == nil {
        addChild(ballNode)
    }
    ballNode.setScale(start.xScale)
    ballNode.zRotation = start.zRotation
    ballNode.position = start.position

    let scale = SKAction.scale(to: end.xScale, duration: time)
    let rotate = SKAction.rotate(toAngle: end.zRotation, duration: time)
    let move = SKAction.move(to: end.position, duration: time)

    let scaleBack = SKAction.scale(to: start.xScale, duration: time)
    let rotateBack = SKAction.rotate(toAngle: start.zRotation, duration: time)
    let moveBack = SKAction.move(to: start.position, duration: time)

    let action = SKAction.repeatForever(.sequence([
        .group([scale, rotate, move]),
        .group([scaleBack, rotateBack, moveBack])
    ]))
    ballNode.removeAllActions()
    ballNode.run(action)
}
Here's a video so you can see this code in action. You'll notice that the illusion only works as long as you don't move the camera — not so great for AR. When using SKAction, we can't adjust the start/end states of the animation while animating, so the ball keeps bouncing back and forth between its original (screen-space) positions/rotations/scales.
You could do better by animating the ball directly, but it's a lot of work. You'd need to, on every frame (or every view(_:didUpdate:for:) delegate callback):
1) Save off the updated position, rotation, and scale values for the anchor-based nodes at each end of the animation. You'll need to do this twice per didUpdate callback, because you'll get one callback for each node.
2) Work out position, rotation, and scale values for the node being animated, by interpolating between the two endpoint values based on the current time (sketched below).
3) Set the new attributes on the node. (Or maybe animate it to those attributes over a very short duration, so it doesn't jump too much in one frame?)
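A hypothetical sketch of step 2, where t runs from 0 to 1 over one bounce (the lerp helper and t are illustrative, not from the template):
func lerp(_ a: CGFloat, _ b: CGFloat, _ t: CGFloat) -> CGFloat {
    return a + (b - a) * t   // simple linear interpolation
}
ballNode.position = CGPoint(x: lerp(start.position.x, end.position.x, t),
                            y: lerp(start.position.y, end.position.y, t))
ballNode.setScale(lerp(start.xScale, end.xScale, t))
ballNode.zRotation = lerp(start.zRotation, end.zRotation, t)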
That's kind of a lot of work to shoehorn a fake 3D illusion into a 2D graphics toolkit — hence my comments about SpriteKit not being a great first step into ARKit.
If you want 3D positioning and animation for your AR overlays, it's a lot easier to use a 3D graphics toolkit. Here's a repeat of the previous example, but using SceneKit instead. Start with the ARKit/SceneKit Xcode template, take the spaceship out, and paste the same touchesBegan function from above into the ViewController. (Change the as ARSKView casts to as ARSCNView, too.)
Then, some quick code for placing 2D billboarded sprites, matching via SceneKit the behavior of the ARKit/SpriteKit template:
// in global scope
func makeBillboardNode(image: UIImage) -> SCNNode {
    let plane = SCNPlane(width: 0.1, height: 0.1)
    plane.firstMaterial!.diffuse.contents = image
    let node = SCNNode(geometry: plane)
    node.constraints = [SCNBillboardConstraint()]
    return node
}
// inside ViewController
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // emoji to image based on https://stackoverflow.com/a/41021662/957768
    let billboard = makeBillboardNode(image: "⛹️".image())
    node.addChildNode(billboard)
}
Finally, adding the animation for the bouncing ball:
let ballNode = makeBillboardNode(image: "🏀".image())

func startBouncing(time: TimeInterval) {
    guard
        let sceneView = self.view as? ARSCNView,
        let first = anchors.first, let start = sceneView.node(for: first),
        let last = anchors.last, let end = sceneView.node(for: last)
        else { return }

    if ballNode.parent == nil {
        sceneView.scene.rootNode.addChildNode(ballNode)
    }

    let animation = CABasicAnimation(keyPath: #keyPath(SCNNode.transform))
    animation.fromValue = start.transform
    animation.toValue = end.transform
    animation.duration = time
    animation.autoreverses = true
    animation.repeatCount = .infinity
    ballNode.removeAllAnimations()
    ballNode.addAnimation(animation, forKey: nil)
}
This time the animation code is a lot shorter than the SpriteKit version.
Here's how it looks in action.
Because we're working in 3D to start with, we're actually animating between two 3D positions — unlike in the SpriteKit version, the animation stays where it's supposed to. (And without the extra work for directly interpolating and animating attributes.)

Why are objects in the same SKNode layer not interacting with each other?

I have been using SpriteKit for less than a year, so I didn't use SKNodes as layers until recently.
I have an SKNode layer that holds all of the fish and the user's position, for example:
var layerMainGame = SKNode()
layerMainGame.zPosition = 50
layerMainGame.addChild(userPosition)
layerMainGame.addChild(pipFish)
addChild(layerMainGame)
Whether the user touched a fish is handled with this function, which basically checks if their frames intersect:
if CGRectIntersectsRect(CGRectInset(node.frame, delta.dx, delta.dy), self.userPosition.frame) {
    print("You got hit by \(name).")
    gameOver()
}
It works. The interaction between the userPosition and pipFish works. What doesn't work is fish that are added as the game progresses. I have a function spawning different types of fish at intervals, like this:
func spawnNew(fish: SKSpriteNode) {
    layerMainGame.addChild(fish)
}
The interaction between the user and those fish that get added to the same layer later in the game does not work. I can pass right through them and no game over happens. When I completely remove the entire layerMainGame variable and just add them to the scene like normal, all the interactions work. Adding them all to the same SKNode layer doesn't work.
This is the function that creates a hit collision for every fish.
func createHitCollisionFor(name: String, GameOver gameOver: String!, delta: (dx: CGFloat, dy: CGFloat), index: Int = -1) {
    enumerateChildNodesWithName(name) { [unowned me = self] node, _ in
        if CGRectIntersectsRect(CGRectInset(node.frame, delta.dx, delta.dy), self.userPosition.frame) {
            me.gameOverImage.texture = SKTexture(imageNamed: gameOver)
            didGetHitActions()
            me.runAction(audio.playSound(hit)!)
            if index != -1 {
                me.trophySet.encounterTrophy.didEncounter[index] = true
            }
            print("You got hit by \(name).")
        }
    }
}
And I call it like this:
createHitCollisionFor("GoldPiranha", GameOver: model.gameOverImage["Gold"], delta: (dx: 50, dy: 50), index: 1)
It works when the fish are not in the layer, but doesn't work when they are added to the layer.
When a node is placed in the node tree, its position property places it within a coordinate system provided by its parent.
SpriteKit uses a coordinate orientation that starts from the bottom-left corner of the screen (0, 0), and the x and y values increase as you move up and to the right.
For SKScene, the default value of the origin, anchorPoint, is (0, 0), which corresponds to the lower-left corner of the view's frame rectangle. To change it to the center you can specify (0.5, 0.5).
For SKSpriteNode, the content origin is defined by its anchorPoint, which by default is (0.5, 0.5), the center of the node; children of a plain SKNode are simply positioned relative to the node's own position.
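For instance (a small sketch, not from the original code):
// Put the scene's origin at the center of the view instead of the bottom-left.
scene.anchorPoint = CGPoint(x: 0.5, y: 0.5)
// A sprite's anchorPoint centers (or offsets) its texture around its position.
sprite.anchorPoint = CGPoint(x: 0.5, y: 0.5)   // default: texture centered on position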
In your project you have layerMainGame added, for example, to the scene; the origin for its children, like your fish, is the layer's own position (its center). You can see this if you change the fish positions like:
func spawnNew(fish: SKSpriteNode) {
    layerMainGame.addChild(fish)
    fish.position = CGPointZero // position 0,0 = parent center
}
Hope it helps you understand how to solve your issue.
Update: (after your changes to the main question)
To help you better understand what happens I will give an example right away:
override func didMoveToView(view: SKView) {
    var layerMainGame = SKNode()
    addChild(layerMainGame)

    let pipFish = SKSpriteNode(color: UIColor.yellowColor(), size: CGSizeMake(50, 50))
    pipFish.name = "son"
    self.addChild(pipFish)

    let layerPipFish = SKSpriteNode(color: UIColor.yellowColor(), size: CGSizeMake(50, 50))
    layerPipFish.name = "son"
    layerMainGame.addChild(layerPipFish)

    enumerateChildNodesWithName("son") { [unowned me = self] node, _ in
        print(node)
    }
}
Output:
Now I will simply change the line:
layerMainGame.addChild(layerPipFish)
with:
self.addChild(layerPipFish)
Output:
What happened?
As you can see, enumerateChildNodesWithName written as in your code and mine prints only the children added directly to self (because we are actually calling enumerateChildNodesWithName on self; it is equivalent to calling self.enumerateChildNodesWithName).
How can I search in the full node tree?
If you have a node named "GoldPiranha" then you can search through all descendants by putting a // before the name. So you would search for "//GoldPiranha":
enumerateChildNodesWithName("//GoldPiranha") { [unowned me = self] ...
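A complete call might then look like this (a sketch that reuses the collision check from earlier):
enumerateChildNodesWithName("//GoldPiranha") { [unowned me = self] node, _ in
    // The "//" prefix searches the whole node tree, not just direct children.
    if CGRectIntersectsRect(CGRectInset(node.frame, 50, 50), me.userPosition.frame) {
        print("You got hit by GoldPiranha.")
        me.gameOver()
    }
}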