Is there a way to include a SpriteKit scene in a SceneKit Scene? - sprite-kit

I am wondering if it is possible to include a SpriteKit scene in a SceneKit scene, and if it is, how to do that?

Yes there is!
You can assign a Sprite Kit scene (SKScene) as the contents of a material property (SCNMaterialProperty) in Scene Kit.

I've come across a few questions about this exact topic and there don't seem to be a whole lot of actual code examples, so here's another one. Hopefully this is helpful:
SKSpriteNode *someSKSpriteNode;
// initialize your SKSpriteNode and set it up..
SKScene *tvSKScene = [SKScene sceneWithSize:CGSizeMake(100, 100)];
tvSKScene.anchorPoint = CGPointMake(0.5, 0.5);
[tvSKScene addChild:someSKSpriteNode];
// use the SpriteKit scene as the plane's material
SCNMaterial *materialProperty = [SCNMaterial material];
materialProperty.diffuse.contents = tvSKScene;
// this will likely change to wherever you want to show this scene.
SCNVector3 tvLocationCoordinates = SCNVector3Make(0, 0, 0);
SCNPlane *scnPlane = [SCNPlane planeWithWidth:100.0 height:100.0];
SCNNode *scnNode = [SCNNode nodeWithGeometry:scnPlane];
scnNode.geometry.firstMaterial = materialProperty;
scnNode.position = tvLocationCoordinates;
// Assume we have an SCNCamera and its SCNNode (cameraNode) set up already.
SCNLookAtConstraint *constraint = [SCNLookAtConstraint lookAtConstraintWithTarget:cameraNode];
constraint.gimbalLockEnabled = NO;
scnNode.constraints = @[constraint];
// Assume we have a SCNView *sceneView set up already.
[sceneView.scene.rootNode addChildNode:scnNode];
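For reference, here is a minimal Swift sketch of the same idea (assuming an existing SCNView called sceneView; the names are illustrative, not from the answer above):
let skScene = SKScene(size: CGSize(width: 100, height: 100))
skScene.anchorPoint = CGPoint(x: 0.5, y: 0.5)
// ... add your SKSpriteNode(s) to skScene here ...
let plane = SCNPlane(width: 100, height: 100)
plane.firstMaterial?.diffuse.contents = skScene // an SKScene can be a material property's contents
let planeNode = SCNNode(geometry: plane)
sceneView.scene?.rootNode.addChildNode(planeNode)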

Related

Swift - How to add scene boundaries for specific nodes?

I'm trying to develop an iOS game using SpriteKit, and I want to add a physics body to the scene so that the player won't be able to go through the edges of the screen. At the same time, I want some nodes (for example, bombs that fall from the sky) to be able to go through the edges of the screen.
I know that I can use the following line to add a physics body to the scene:
self.physicsBody = SKPhysicsBody(edgeLoopFrom: self.frame)
My question is: how can I allow a "bomb" object to pass through such a body while keeping a "player" object bound by those boundaries?
The answer lies in the categoryBitMask and collisionBitMask of the physics bodies involved.
For example, for the scene:
if let scenePB = scene.physicsBody {
    scenePB.categoryBitMask = 1
    scenePB.collisionBitMask = 2 // collides with player
}
For the player:
if let playerPB = player.physicsBody {
    playerPB.categoryBitMask = 2
    playerPB.collisionBitMask = 1 + 4 // collides with scene and bombs
}
For any bomb:
if let bombPB = bomb.physicsBody {
    bombPB.categoryBitMask = 4
    bombPB.collisionBitMask = 2 // collides with player
}
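Putting it together, a minimal sketch of the setup described above (modern Swift; the asset names and category constants are illustrative, not from the original question):
import SpriteKit

class GameScene: SKScene {
    // Hypothetical category constants for this sketch
    let edgeCategory: UInt32 = 1
    let playerCategory: UInt32 = 2
    let bombCategory: UInt32 = 4

    override func didMove(to view: SKView) {
        // Screen edges: only the player collides with them
        physicsBody = SKPhysicsBody(edgeLoopFrom: frame)
        physicsBody?.categoryBitMask = edgeCategory
        physicsBody?.collisionBitMask = playerCategory

        // Player: collides with the edges and with bombs
        let player = SKSpriteNode(imageNamed: "player") // hypothetical asset
        player.physicsBody = SKPhysicsBody(rectangleOf: player.size)
        player.physicsBody?.categoryBitMask = playerCategory
        player.physicsBody?.collisionBitMask = edgeCategory | bombCategory
        addChild(player)

        // Bomb: its collision mask omits edgeCategory, so it falls straight through the edges
        let bomb = SKSpriteNode(imageNamed: "bomb") // hypothetical asset
        bomb.physicsBody = SKPhysicsBody(circleOfRadius: bomb.size.width / 2)
        bomb.physicsBody?.categoryBitMask = bombCategory
        bomb.physicsBody?.collisionBitMask = playerCategory
        addChild(bomb)
    }
}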

ARKit – How to display the feed from a virtual SCNCamera placed on SCNPlane?

I put some objects in AR space using ARKit and SceneKit. That works well. Now I'd like to add an additional camera (SCNCamera) that is placed elsewhere in the scene, attached to and positioned by a common SCNNode. It is oriented to show me the current scene from another (fixed) perspective.
Now I'd like to show this additional SCNCamera feed on, for example, an SCNPlane (as the first material's diffuse contents), like a TV screen. Of course, I am aware that it will only display the SceneKit content that is in that camera's view, and not the rest of the ARKit image (which is only possible with the main camera). A simple colored background would then be fine.
I have seen tutorials that describe how to play a video file on a virtual display in AR space, but I need a real-time camera feed from my own current scene.
I defined these objects:
let camera = SCNCamera()
let cameraNode = SCNNode()
Then in viewDidLoad I do this:
camera.usesOrthographicProjection = true
camera.orthographicScale = 9
camera.zNear = 0
camera.zFar = 100
cameraNode.camera = camera
sceneView.scene.rootNode.addChildNode(cameraNode)
Then I call my setup function to place the virtual display next to all my AR content and to position the cameraNode as well (pointing in the direction where the objects are in the scene):
cameraNode.position = SCNVector3(initialStartPosition.x, initialStartPosition.y + 0.5, initialStartPosition.z)
let cameraPlane = SCNNode(geometry: SCNPlane(width: 0.5, height: 0.3))
cameraPlane.geometry?.firstMaterial?.diffuse.contents = cameraNode.camera
cameraPlane.position = SCNVector3(initialStartPosition.x - 1.0, initialStartPosition.y + 0.5, initialStartPosition.z)
sceneView.scene.rootNode.addChildNode(cameraPlane)
Everything compiles and loads... The display shows up at the given position, but it stays entirely gray. Nothing at all is displayed from the SCNCamera I put in the scene. Everything else in the AR scene works well; I just don't get any feed from that camera.
Does anyone have an approach to get this scenario working?
To visualize this better, I've added some screenshots.
The following shows the image through the SCNCamera according to ARGeo's input. But it takes up the whole screen, instead of displaying its contents on an SCNPlane, as I need.
The next screenshot shows the actual ARView result from my posted code. As you can see, the gray display plane remains gray; it shows nothing.
The last screenshot is a photomontage showing the expected result, as I'd like to get it.
How could this be realized? Am I missing something fundamental here?
After some research and sleep, I came to the following working solution (including some inexplicable obstacles):
Currently, the additional SCNCamera feed is not linked to an SCNMaterial on an SCNPlane, as was the initial idea; instead, I will use an additional SCNView (for the moment).
In the definitions I add another view like so:
let overlayView = SCNView() // (also tested with ARSCNView(), no difference)
let camera = SCNCamera()
let cameraNode = SCNNode()
Then, in viewDidLoad, I set things up like so:
camera.automaticallyAdjustsZRange = true
camera.usesOrthographicProjection = false
cameraNode.camera = camera
cameraNode.camera?.focalLength = 50
sceneView.scene.rootNode.addChildNode(cameraNode) // add the node to the default scene
overlayView.scene = scene // the same scene as sceneView
overlayView.allowsCameraControl = false
overlayView.isUserInteractionEnabled = false
overlayView.pointOfView = cameraNode // this links the new SCNView to the created SCNCamera
self.view.addSubview(overlayView) // don't forget to add as subview
// Size and place the view on the bottom
overlayView.frame = CGRect(x: 0, y: 0, width: self.view.bounds.width * 0.8, height: self.view.bounds.height * 0.25)
overlayView.center = CGPoint(x: self.view.bounds.width * 0.5, y: self.view.bounds.height - 175)
Then, in some other function, I place the node containing the SCNCamera at my desired position and angle.
// (exemplary)
cameraNode.position = initialStartPosition + SCNVector3(x: -0.5, y: 0.5, z: -(Float(shiftCurrentDistance * 2.0 - 2.0)))
cameraNode.eulerAngles = SCNVector3(-15.0.degreesToRadians, -15.0.degreesToRadians, 0.0)
The result is a kind of window (the new SCNView) at the bottom of the screen, displaying the same SceneKit content as the main sceneView, viewed through the perspective of the SCNCamera at its node's position, and it does so very nicely.
In a typical iOS/Swift/ARKit project, this construct produces some side effects that one may struggle with.
1) Mainly, the new SCNView shows the SceneKit content from the desired perspective, but the background is always the actual physical camera feed. I could not figure out how to make the background a static color while still displaying all the SceneKit content. Changing the new scene's background property also affects the whole main scene, which is NOT desired.
2) It might sound confusing, but as soon as the following code gets included (which is essential to make it work):
overlayView.scene = scene
the animation speed of both scenes DOUBLES! (Why?)
I corrected this by adding/changing the following property, which restores the animation speed to almost normal (the default):
// add or change this in the scene setup
scene.physicsWorld.speed = 0.5
3) If there are actions like SCNAction.playAudio in the project, none of the effects will play anymore, unless I do this:
overlayView.scene = nil
Of course, the additional SCNView stops working, but everything else goes back to normal.
Use this code (as a starting point) to find out how to set up a virtual camera.
Just create a default ARKit project in Xcode and copy-paste my code:
import UIKit
import SceneKit
import ARKit
class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true
        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 1)
        cameraNode.camera?.focalLength = 70
        cameraNode.camera?.categoryBitMask = 1
        scene.rootNode.addChildNode(cameraNode)
        sceneView.pointOfView = cameraNode
        sceneView.allowsCameraControl = true
        sceneView.backgroundColor = UIColor.darkGray

        let plane = SCNNode(geometry: SCNPlane(width: 0.8, height: 0.45))
        plane.position = SCNVector3(0, 0, -1.5)
        // ASSIGN A VIDEO STREAM FROM SCENEKIT-RECORDER TO YOUR MATERIAL
        plane.geometry?.materials.first?.diffuse.contents = capturedVideoFromSceneKitRecorder
        scene.rootNode.addChildNode(plane)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}
UPDATED:
Here's a SceneKit Recorder App that you can tailor to your needs (you don't need to write a video to disk, just use a CVPixelBuffer stream and assign it as a texture for a diffuse material).
Hope this helps.
I'm a little late to the party, but I've had a similar issue recently.
As far as I can tell, you cannot directly connect a camera to a node's material. You can, however, use a scene's layer as a texture for a node.
The code below is not verified, but should be more or less ok:
class MyViewController: UIViewController {

    override func loadView() {
        let projectedScene = createProjectedScene()
        let receivingScene = createReceivingScene()
        let projectionPlane = receivingScene.scene!.rootNode.childNode(withName: "ProjectionPlane", recursively: true)!

        // Here's the important part:
        // You can't directly connect a camera to a material's diffuse texture.
        // But you can connect a scene's layer as a texture.
        projectionPlane.geometry?.firstMaterial?.diffuse.contents = projectedScene.layer
        projectedScene.layer.contentsScale = 1

        // Note how we only need to connect the receiving view to the controller.
        // The projected view is not directly connected as a subview,
        // but updates in projectedScene will still be reflected in receivingScene.
        self.view = receivingScene
    }

    func createProjectedScene() -> SCNView {
        let view = SCNView()
        // ... set up scene ...
        return view
    }

    func createReceivingScene() -> SCNView {
        let view = SCNView()
        // ... set up scene ...
        let projectionPlane = SCNNode(geometry: SCNPlane(width: 2, height: 2))
        projectionPlane.name = "ProjectionPlane"
        view.scene?.rootNode.addChildNode(projectionPlane)
        return view
    }
}

Diffuse an animated SpriteKit Scene over an SCNPlane (ARKit)

tl;dr I'm trying to animate an SKScene on a plane in ARKit (i.e. a table). I can get the SKScene to appear by texturing an SCNPlane with it, but it does not animate. How can I get the animations to work?
Hi,
I'm trying to make an ARKit game which is essentially just a 2D game rendered on a plane anchored in the 3D world. I have the game as an SKScene object. Usually, when rendering an SKScene in ARKit, I'll create an SCNPlane and set its material to an SCNMaterial with the SKScene as the diffuse contents. This is what my current code looks like (in my renderer(_:didAdd:for:)):
guard let scene = SKScene(fileNamed: "GameScene") else { return }
let scnMaterial = SCNMaterial()
scnMaterial.diffuse.contents = scene
let plane = SCNPlane(width: 0.8, height: 0.6)
plane.firstMaterial = scnMaterial
let planeNode = SCNNode(geometry: plane)
This renders the SKScene fine, but the update(_:) function is never called. After some research, I found out that when an SKScene is used as an SCNMaterial's diffuse contents, the scene gets paused. So, I added self.isPaused = false to the sceneDidLoad() of the SKScene. This gets the update(_:) function to run, but the texture on the SCNPlane is never updated, so the SKScene keeps its starting look. When the SKScene is run on its own in a UIViewController, it works fine.
Does anyone know how I can get an animated SKScene to animate on an SCNPlane? At the end of the day, I just want an SKScene to animate on a plane (i.e. a table) in ARKit, so any alternative methods would work.
A SceneKit view runs its own rendering loop, and it does not update/render nodes that are not being moved by actions or physics.
SceneKit automatically decides when to update/render the nodes and when not to.
Try applying a continuous SCNAction or CAAnimation with some movement to your SCNPlane node, and the diffuse-property animation should work.
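For example, a minimal sketch using the planeNode and SKScene from the question (the tiny repeating action is just a nudge to keep SceneKit rendering; rendersContinuously, available on iOS 11+, is an alternative):
scene.isPaused = false // un-pause the SKScene used as the texture
// A never-ending, barely visible action keeps SceneKit re-rendering the node,
// so the SpriteKit texture keeps animating
let nudge = SCNAction.rotateBy(x: 0, y: 0, z: 0.0001, duration: 1)
planeNode.runAction(SCNAction.repeatForever(nudge))
// Alternatively (iOS 11+), force the view to render every frame:
// sceneView.rendersContinuously = true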

Can not use CGPoint to set the position in Swift with Sprite Kit

I don't seem to be able to set the position of my node in Sprite Kit using Swift:
let sprite = SKSpriteNode(imageNamed:"Spaceship")
let point: CGPoint = CGPoint(x:10,y:10)
sprite.position = point
self.addChild(sprite)
It works when I do:
sprite.position = CGPoint(x:CGRectGetMidX(self.frame), y:CGRectGetMidY(self.frame));
Any ideas?
I think your problem is that you just don't see the node. It gets created, but at a point where you can't see it. To change that, open your GameViewController file and add the following line before skView.presentScene(scene):
scene.size = skView.bounds.size
That line makes sure the size of your scene matches the size of your screen, so now you should be able to see the node.
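For context, a minimal sketch of where that line goes, assuming the default SpriteKit template's GameViewController and GameScene (modern Swift syntax):
import UIKit
import SpriteKit

class GameViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        guard let skView = self.view as? SKView,
              let scene = GameScene(fileNamed: "GameScene") else { return }
        scene.size = skView.bounds.size // match the scene size to the screen size
        scene.scaleMode = .aspectFill
        skView.presentScene(scene)
    }
}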

Get a list of nodes in a specific area?

I'm working on a side-scrolling game and I need to know which nodes are in an area in order to implement something like "line of sight". Right now I'm trying enumerateBodiesInRect(), however it detects bodies that are 20px or more away from the evaluated rect, and I cannot figure out why it is so imprecise.
This is what I'm trying now:
import SpriteKit
import CoreMotion
class GameScene: SKScene, SKPhysicsContactDelegate {
    var player = SKShapeNode()
    var world = SKShapeNode()
    var rShape = SKShapeNode()

    override func didMoveToView(view: SKView) {
        self.physicsWorld.contactDelegate = self
        self.scaleMode = SKSceneScaleMode.AspectFit
        self.size = view.bounds.size

        // Add world
        world = SKShapeNode(rectOfSize: view.bounds.size)
        world.physicsBody = SKPhysicsBody(edgeLoopFromPath: world.path)
        world.position = CGPointMake(self.frame.size.width/2, self.frame.size.height/2) // Move camera
        self.addChild(world)

        // Add player
        player = SKShapeNode(rectOfSize: CGSize(width: 25, height: 25))
        player.physicsBody = SKPhysicsBody(rectangleOfSize: player.frame.size)
        player.physicsBody?.dynamic = false
        player.strokeColor = SKColor.blueColor()
        player.fillColor = SKColor.blueColor()
        player.position = CGPointMake(90, -50)
        world.addChild(player)
    }

    override func update(currentTime: CFTimeInterval) {
        // Define rect position and size (area that will be evaluated for bodies)
        let r: CGRect = CGRect(x: 200, y: 200, width: 25, height: 25)

        // Show rect for debug
        rShape.removeFromParent()
        rShape = SKShapeNode(rect: r)
        rShape.strokeColor = SKColor.redColor()
        self.addChild(rShape)

        // Evaluate rect
        rShape.fillColor = SKColor.clearColor()
        self.physicsWorld.enumerateBodiesInRect(r) {
            (body: SKPhysicsBody!, stop: UnsafePointer<ObjCBool>) in
            self.rShape.fillColor = SKColor.redColor() // Paint the area red if it detects a body
        }
    }
}
This code should show the evaluated rect and ray on screen (for debugging purposes) and paint them red if they contact the player node. However, you can see in the screenshot how it turns red when the player is 25px or more away from it; it's as if the drawing is a little bit off, or smaller than the actual area being evaluated. You can copy-paste it into a project to reproduce the problem.
Could this be because this is just beta or am I doing something wrong?
You are creating a physics world where there is a specific rectangle that has 'special properties' - this is the rectangle that you use in enumerateBodiesInRect(). Why not create an invisible, inert physics body with the required rectangular dimensions and then use SKPhysicsBody to check for collisions and/or contacts? You could then use allContactedBodies() or some delegate callbacks to learn which other bodies are inside your special rectangle.
Think of it like a 'tractor beam' or a 'warp rectangle'.
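A minimal sketch of that idea, assuming it runs inside an SKScene (modern Swift; the category values are illustrative, not from the original post):
// An invisible, inert "sensor" body that only reports what touches it
let sensor = SKNode()
sensor.position = CGPoint(x: 200, y: 200)
sensor.physicsBody = SKPhysicsBody(rectangleOf: CGSize(width: 25, height: 25))
sensor.physicsBody?.affectedByGravity = false
sensor.physicsBody?.categoryBitMask = 0b100    // hypothetical "sensor" category
sensor.physicsBody?.contactTestBitMask = 0b001 // hypothetical "player" category
sensor.physicsBody?.collisionBitMask = 0       // never pushes anything around
addChild(sensor)
// Later (e.g. in update(_:)), ask which bodies are currently touching the sensor
let bodiesInside = sensor.physicsBody?.allContactedBodies() ?? []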
I believe you want SKPhysicsWorld's enumerateBodiesInRect() instance method, which will iterate over all bodies in a given rectangle. If you're looking to get at the physics world through your scene, usage could look like this:
self.physicsWorld.enumerateBodiesInRect(CGRect(x: 0, y: 0, width: 50, height: 50)) {(body: SKPhysicsBody!, stop: UnsafePointer<ObjCBool>) in
// enumerates all nodes in given frame
}
I've experimented quite a bit with enumerateBodiesInRect now, and I've found it to be incredibly inaccurate. It seems not to have any of the claimed functionality and instead produces random results; I honestly cannot even determine a pattern in its output.
enumerateBodiesAlongRay seems better, but still very buggy. The problem with that function seems to be the conversion between screen and physics-world coordinates. I would avoid that one as well.
I think your solution should simply be to use the existing contact detection system. All of your desired functionality can be written in the didBeginContact() and didEndContact() functions. This has the added benefit of allowing you to specify distinct behavior for both entering and leaving the area. You can also add particle effects, animations, and the like, as well as intentionally ignore specific types of nodes.
The only thing needed to ensure success with this method is to make sure that the contact area has a unique category, that its contactTestBitMask contains all desired nodes, and that its collisionBitMask is set to 0.
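A minimal sketch of that contact-based setup (modern Swift, using the current didBegin/didEnd delegate names; the categories and names are illustrative):
import SpriteKit

class SightScene: SKScene, SKPhysicsContactDelegate {
    let playerCategory: UInt32 = 0b01
    let areaCategory: UInt32 = 0b10

    override func didMove(to view: SKView) {
        physicsWorld.contactDelegate = self

        // Unique category, contact tests against the player, collision mask 0
        let area = SKNode()
        area.position = CGPoint(x: 200, y: 200)
        area.physicsBody = SKPhysicsBody(rectangleOf: CGSize(width: 25, height: 25))
        area.physicsBody?.affectedByGravity = false
        area.physicsBody?.categoryBitMask = areaCategory
        area.physicsBody?.contactTestBitMask = playerCategory
        area.physicsBody?.collisionBitMask = 0
        addChild(area)
    }

    func didBegin(_ contact: SKPhysicsContact) {
        // the player entered the area ("line of sight" begins)
    }

    func didEnd(_ contact: SKPhysicsContact) {
        // the player left the area ("line of sight" ends)
    }
}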
The enumerateBodiesInRect method of SKPhysicsWorld expects the rect parameter to be in scene coordinates. This is important. If you have a scene hierarchy of nodes, you need to convert the rect you calculate from a reference node to the scene coordinates.
I faced a lot of issues with this method returning bodies that were off by values like 30px to the left, etc., and finally realized the issue was that the rect parameter was not defined in scene coordinate space.
In my case, I had a worldNode inside my scene, and all objects were created in the worldNode. My camera was moving the worldNode about, and applying scaling to it for zooming out and in.
In order to use enumerateBodiesInRect correctly, I had to do something as follows:
// get your world rect based on game logic
let worldRect = getWorldRect()
// calculate the scene rect
let sceneRectOrigin = scene.convertPoint(worldRect.origin, fromNode:scene.worldNode)
let worldScale = scene.worldNode.xScale // assert this is not 0
// now to get the scene rect relative to the world rect, in scene coordinates
let sceneRect = CGRectMake(sceneRectOrigin.x, sceneRectOrigin.y, worldRect.width / worldScale, worldRect.height / worldScale)
scene.physicsWorld.enumerateBodiesInRect(sceneRect) { body, stop in
    // your code here
}
Hope this helps.
I am not sure if this is good practice (correct me if not), but I am using:
let shapeNode = SKShapeNode()
shapeNode.intersects(playerNode)
I check selected nodes with a simple loop to see whether they intersect the player. Additionally, I created SKShapeNodes, drawn in front of the nodes, that represent the field of view of other actors in the game; they are moved along with those actors.
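As a minimal sketch of that loop (sightShapes and playerNode are illustrative names, not from the original post):
// True if any of the tracked sight shapes currently overlaps the player
let playerIsSeen = sightShapes.contains { $0.intersects(playerNode) }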
There is only the nodesAtPoint: method.
To achieve what you want, you'd be better off storing all enemies in an array and keeping an int variable, something like nextEnemyIndex. This approach lets you easily return the next enemy node; it's much more efficient than trying to find a node in the scene.
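A minimal sketch of that array-based suggestion (names are illustrative):
var enemies: [SKNode] = []
var nextEnemyIndex = 0

// Returns the next enemy in round-robin order without searching the scene
func nextEnemy() -> SKNode? {
    guard !enemies.isEmpty else { return nil }
    let enemy = enemies[nextEnemyIndex % enemies.count]
    nextEnemyIndex += 1
    return enemy
}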
Yes, the problem may occur because of your player's image; for example, try using a physics body about 10px smaller than the sprite:
player.physicsBody = SKPhysicsBody(rectangleOfSize: CGSizeMake(player.frame.size.width - 10, player.frame.size.height - 10))