Why does my ball bounce when I use SKShapeNode, but not when I use SKSpriteNode with an image? - swift

I am very new to Swift and I am trying to create a ball that bounces up and down. When I use:
import Foundation
import SpriteKit

class Ball {
    let movingObject: SKShapeNode

    init() {
        movingObject = SKShapeNode(circleOfRadius: 25)
        movingObject.physicsBody = SKPhysicsBody(circleOfRadius: 25)
        movingObject.physicsBody?.affectedByGravity = true
        movingObject.physicsBody?.restitution = 1
        movingObject.physicsBody?.linearDamping = 0
    }
}
it works fine. However, when I try to use an image, it doesn't bounce.
class Ball {
    let movingObject: SKSpriteNode
    let picture: String

    init(picture: String) {
        self.picture = picture
        movingObject = SKSpriteNode(imageNamed: picture)
        movingObject.physicsBody = SKPhysicsBody(circleOfRadius: movingObject.size.width * 0.5)
        movingObject.physicsBody?.affectedByGravity = true
        movingObject.physicsBody?.restitution = 1
        movingObject.physicsBody?.linearDamping = 0
    }
}

The only difference between your two physics bodies is the radius, so that has to be where the problem lies.
Try setting the radius to 25, as you did with the shape node, to confirm, then work out why movingObject.size.width * 0.5 isn't coming out to a reasonable value. You can set a breakpoint and print movingObject.size in the debugger to help.
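For example, a quick check right where the physics body is created might look like this (just an illustrative sketch; the extra print line is not part of the original code):
movingObject = SKSpriteNode(imageNamed: picture)
let radius = movingObject.size.width * 0.5
print("sprite size: \(movingObject.size), physics radius: \(radius)") // compare with the 25 that worked for the shape node
movingObject.physicsBody = SKPhysicsBody(circleOfRadius: radius)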

From the SKSpriteNode header documentation:
/**
Initialize a sprite with an image from your app bundle (An SKTexture is created for the image and set on the sprite. Its size is set to the SKTexture's pixel width/height)
The position of the sprite is (0, 0) and the texture anchored at (0.5, 0.5), so that it is offset by half the width and half the height.
Thus the sprite has the texture centered about the position. If you wish to have the texture anchored at a different offset set the anchorPoint to another pair of values in the interval from 0.0 up to and including 1.0.
@param name the name or path of the image to load.
*/
public convenience init(imageNamed name: String)
In the first case you use 25 as the radius; in the second you must check whether movingObject.size.width * 0.5 is a valid measure.
While debugging, help yourself by turning on the showsPhysics property of your SKView.
Example code:
override func viewDidLoad() {
    super.viewDidLoad()
    if let scene = GameScene(fileNamed: "GameScene") {
        // Configure the view.
        let skView = self.view as! SKView
        skView.showsPhysics = true
        ...
    }
}
You can then easily see the physicsBody boundaries of your objects and immediately notice if something is wrong.

Related

ARKit – How to display the feed from a virtual SCNCamera placed on SCNPlane?

I put some objects in AR space using ARKit and SceneKit. That works well. Now I'd like to add an additional camera (SCNCamera) that is placed elsewhere in the scene, attached to and positioned by a common SCNNode, and oriented to show the current scene from another (fixed) perspective.
Now I'd like to show this additional SCNCamera feed on, say, an SCNPlane (as the first material's diffuse contents), like a TV screen. Of course I am aware that it will only display the SceneKit content that lies in that camera's view, and not the rest of the ARKit image (which is only possible with the main camera, of course). A simple colored background would then be fine.
I have seen tutorials that describe how to play a video file on a virtual display in AR space, but I need a realtime camera feed from my own current scene.
I defined these objects:
let camera = SCNCamera()
let cameraNode = SCNNode()
Then in viewDidLoad I do this:
camera.usesOrthographicProjection = true
camera.orthographicScale = 9
camera.zNear = 0
camera.zFar = 100
cameraNode.camera = camera
sceneView.scene.rootNode.addChildNode(cameraNode)
Then I call my setup function to place the virtual display next to all my AR stuff, and to position the cameraNode as well (pointing in the direction where the objects sit in the scene):
cameraNode.position = SCNVector3(initialStartPosition.x, initialStartPosition.y + 0.5, initialStartPosition.z)
let cameraPlane = SCNNode(geometry: SCNPlane(width: 0.5, height: 0.3))
cameraPlane.geometry?.firstMaterial?.diffuse.contents = cameraNode.camera
cameraPlane.position = SCNVector3(initialStartPosition.x - 1.0, initialStartPosition.y + 0.5, initialStartPosition.z)
sceneView.scene.rootNode.addChildNode(cameraPlane)
Everything compiles and loads... The display shows up at the given position, but it stays entirely gray. Nothing is displayed at all from the SCNCamera I put in the scene. Everything else in the AR scene works well, I just don't get any feed from that camera.
Does anyone have an approach to get this scenario working?
To visualize it better, I'm adding some screenshots.
The following shows the image through the SCNCamera, per ARGeo's input. But it takes up the whole screen, instead of displaying its contents on an SCNPlane as I need.
The next screenshot shows the current AR view result as I got it using my posted code. As you can see, the gray display plane remains gray; it shows nothing.
The last screenshot is a photomontage showing the expected result, as I'd like to get it.
How could this be realized? Am I missing something fundamental here?
After some research and sleep, I came to the following working solution (including some inexplicable obstacles):
Currently, the additional SCNCamera feed is not linked to an SCNMaterial on an SCNPlane, as was the initial idea; instead I will use an additional SCNView (for the moment).
In the definitions I add another view like so:
let overlayView = SCNView() // (also tested with ARSCNView(), no difference)
let camera = SCNCamera()
let cameraNode = SCNNode()
Then, in viewDidLoad, I set things up like so:
camera.automaticallyAdjustsZRange = true
camera.usesOrthographicProjection = false
cameraNode.camera = camera
cameraNode.camera?.focalLength = 50
sceneView.scene.rootNode.addChildNode(cameraNode) // add the node to the default scene
overlayView.scene = scene // the same scene as sceneView
overlayView.allowsCameraControl = false
overlayView.isUserInteractionEnabled = false
overlayView.pointOfView = cameraNode // this links the new SCNView to the created SCNCamera
self.view.addSubview(overlayView) // don't forget to add as subview
// Size and place the view on the bottom
overlayView.frame = CGRect(x: 0, y: 0, width: self.view.bounds.width * 0.8, height: self.view.bounds.height * 0.25)
overlayView.center = CGPoint(x: self.view.bounds.width * 0.5, y: self.view.bounds.height - 175)
Then, in some other function, I place the node containing the SCNCamera at my desired position and angle.
// (exemplary)
cameraNode.position = initialStartPosition + SCNVector3(x: -0.5, y: 0.5, z: -(Float(shiftCurrentDistance * 2.0 - 2.0)))
cameraNode.eulerAngles = SCNVector3(-15.0.degreesToRadians, -15.0.degreesToRadians, 0.0)
The result is a kind of window (the new SCNView) at the bottom of the screen, displaying the same SceneKit content as the main sceneView, viewed through the perspective of the SCNCamera at its node's position, and it works very nicely.
In a common iOS/Swift/ARKit project, this construct produces some side effects one may run into.
1) Mainly, the new SCNView shows the SceneKit content from the desired perspective, but the background is always the actual physical camera feed. I could not figure out how to make the background a static color while still displaying all the SceneKit content. Changing the new scene's background property also affects the whole main scene, which is NOT desired.
2) It might sound confusing, but as soon as the following code gets included (which is essential to make it work):
overlayView.scene = scene
the animation speed of both scenes DOUBLES! (Why?)
I got this corrected by adding/changing the following property, which restores the animation speed behaviour to almost normal (default):
// add or change this in the scene setup
scene.physicsWorld.speed = 0.5
3) If there are actions like SCNAction.playAudio in the project, none of the effects will play any more, unless I do this:
overlayView.scene = nil
Of course, the additional SCNView then stops working, but everything else gets back to normal.
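Based on those observations, a small hypothetical toggle could switch the overlay on and off as needed (the function name is assumed, not taken from the original code):
func setOverlayEnabled(_ enabled: Bool) {
    if enabled {
        overlayView.scene = sceneView.scene   // enables the extra camera view (audio actions stop playing)
    } else {
        overlayView.scene = nil               // restores SCNAction.playAudio, disables the overlay
    }
    overlayView.isHidden = !enabled
}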
Use this code (as a starting point) to find out how to set up a virtual camera.
Just create a default ARKit project in Xcode and copy-paste my code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true

        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        sceneView.scene = scene

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 1)
        cameraNode.camera?.focalLength = 70
        cameraNode.camera?.categoryBitMask = 1
        scene.rootNode.addChildNode(cameraNode)

        sceneView.pointOfView = cameraNode
        sceneView.allowsCameraControl = true
        sceneView.backgroundColor = UIColor.darkGray

        let plane = SCNNode(geometry: SCNPlane(width: 0.8, height: 0.45))
        plane.position = SCNVector3(0, 0, -1.5)

        // ASSIGN A VIDEO STREAM FROM SCENEKIT-RECORDER TO YOUR MATERIAL
        plane.geometry?.materials.first?.diffuse.contents = capturedVideoFromSceneKitRecorder
        scene.rootNode.addChildNode(plane)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}
UPDATED:
Here's a SceneKit Recorder App that you can tailor to your needs (you don't need to write a video to disk, just use a CVPixelBuffer stream and assign it as a texture for a diffuse material).
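As a rough illustration of that last idea (not verified against the linked recorder, and the per-frame hookup is assumed), each captured CVPixelBuffer could be turned into a CGImage and pushed into the plane's diffuse contents:
import SceneKit
import CoreImage

let ciContext = CIContext()

// Call this once per captured frame; `planeMaterial` would be the SCNPlane's first material.
func update(_ planeMaterial: SCNMaterial, with pixelBuffer: CVPixelBuffer) {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
    DispatchQueue.main.async {
        planeMaterial.diffuse.contents = cgImage // SceneKit accepts a CGImage as material contents
    }
}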
Hope this helps.
I'm a little late to the party, but I've had a similar issue recently.
As far as I can tell, you cannot directly connect a camera to a node's material. You can, however, use a scene's layer as a texture for a node.
The code below is not verified, but should be more or less ok:
class MyViewController: UIViewController {
    override func loadView() {
        let projectedScene = createProjectedScene()
        let receivingScene = createReceivingScene()
        let projectionPlane = receivingScene.scene!.rootNode.childNode(withName: "ProjectionPlane", recursively: true)!

        // Here's the important part:
        // You can't directly connect a camera to a material's diffuse texture.
        // But you can connect a scene's layer as a texture.
        projectionPlane.geometry?.firstMaterial?.diffuse.contents = projectedScene.layer
        projectedScene.layer.contentsScale = 1

        // Note how we only need to connect the receiving view to the controller.
        // The projected view is not directly connected as a subview,
        // but updates in projectedScene will still be reflected in receivingScene.
        self.view = receivingScene
    }

    func createProjectedScene() -> SCNView {
        let view = SCNView()
        // ... set up scene ...
        return view
    }

    func createReceivingScene() -> SCNView {
        let view = SCNView()
        view.scene = SCNScene()
        // ... set up scene ...
        let projectionPlane = SCNNode(geometry: SCNPlane(width: 2, height: 2))
        projectionPlane.name = "ProjectionPlane"
        view.scene!.rootNode.addChildNode(projectionPlane)
        return view
    }
}

Swift: Strange Collision Behavior

I'm building a Breakout game (#cs193p) and I've got the beginnings generally set up and working: the bricks, ball, and paddle all draw as they should. The collisions sort of work, except the collision boundaries appear to be wrong. They're generally larger than the paths I've constructed them with, but not always.
I've set the elasticity of the ball to zero, so that it rests on the paddle and the discrepancy is clear. This screenshot shows the ball resting on the incorrect collision boundary of the paddle.
The bricks and the black area at the bottom respond a little differently. For the bricks, the ball seems to be colliding with the bottom row of bricks when it has visually reached the next row. I've got the bricks disappearing "correctly", so the collision is doubly confirmed by the bricks disappearing. The black area at the bottom (the space for panning to move the paddle) has a similar issue: the ball dips a little into this area before bouncing.
Here is, well, a bunch of code, because I don't know where the problem might lie.
From my BreakoutBehavior class (I'm leaving out the gravity section because it doesn't seem to be a part of the problem):
let collider: UICollisionBehavior = {
    let collider = UICollisionBehavior()
    collider.translatesReferenceBoundsIntoBoundary = true
    return collider
}()

private let ballBehavior: UIDynamicItemBehavior = {
    let behavior = UIDynamicItemBehavior()
    behavior.allowsRotation = true
    behavior.elasticity = 1.25
    return behavior
}()

internal func addColliderBoundary(path: UIBezierPath, named name: String) {
    collider.removeBoundary(withIdentifier: name as NSCopying)
    collider.addBoundary(withIdentifier: name as NSCopying, for: path)
}

override init() {
    super.init()
    addChildBehavior(gravity)
    addChildBehavior(collider)
    addChildBehavior(ballBehavior)
}

internal func addItem(_ item: UIDynamicItem) {
    gravity.addItem(item)
    collider.addItem(item)
    ballBehavior.addItem(item)
}
And here's some code from my BreakoutView class:
I'm giving you the paddle as just one example of a boundary problem, rather than also giving the bricks and pan area, to be more efficient, so of course some of the variables I refer to won't be present in my excerpts. Note that I've left out some things like adding the behavior to the animator (a sketch of that wiring follows the excerpts below), but know that everything is indeed working; I'm just having these boundary problems.
private var paddleSize: CGSize {
    return CGSize(width: brickSize.width * 3, height: brickSize.height)
}

private var paddleOrigin: CGPoint {
    let x = frame.midX - paddleSize.width / 2
    let y = panOrigin.y - paddleSize.height
    return CGPoint(x: x, y: y)
}

private func addBallAndPaddle() {
    let paddleFrame = CGRect(origin: paddleOrigin, size: paddleSize)
    let paddleView = UIView(frame: paddleFrame)
    paddleView.backgroundColor = UIColor.white
    paddleView.layer.borderWidth = 0.25
    addSubview(paddleView)
    paddle = paddleView
    behavior.addColliderBoundary(path: UIBezierPath(rect: paddleFrame), named: Boundaries.paddle)

    let ballFrame = CGRect(origin: ballOrigin, size: ballSize)
    let ballView = UIView(frame: ballFrame)
    ballView.backgroundColor = UIColor.white
    ballView.layer.borderWidth = 0.25
    addSubview(ballView)
    behavior.addItem(ballView)
    ball = ballView
}

private lazy var animator: UIDynamicAnimator = {
    let animator = UIDynamicAnimator(referenceView: self.superview!)
    animator.delegate = self
    return animator
}()

private lazy var behavior: BreakoutBehavior = {
    let behavior = BreakoutBehavior()
    behavior.collider.collisionDelegate = self
    return behavior
}()
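For reference, the animator wiring mentioned above as left out would presumably look something like this (a hypothetical sketch; the function name is assumed, not taken from the question):
private func startAnimating() {
    // Attaching the composite behavior activates gravity, the collider and ballBehavior at once.
    animator.addBehavior(behavior)
}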
And here's code from the BreakoutViewController. configureGameView() is called in viewDidLayoutSubviews
private func configureGameView() {
    gameView.frame = topView.frame // topView is the view in IB
    gameView.backgroundColor = UIColor.lightGray
    gameView.addViews() // adds bricks, ball, paddle, pan area
    firstTap.numberOfTapsRequired = 1
    gameView.addGestureRecognizer(firstTap) // DOESN'T WORK
    if let panView = gameView.panner {
        panView.addGestureRecognizer(UIPanGestureRecognizer(target: gameView, action: #selector(gameView.movePaddle(_:))))
        panView.addGestureRecognizer(firstTap) // DOESN'T WORK
    }
}
Thanks!

How can I increase the area of a sprite only for touch recognition?

My sprites are quite small, so occasionally taps are not very accurate. I would like to know how I can increase the area around the sprite for touch recognition only, so that other things like physics are not affected.
For example, say my sprite sizes are 40x40 and I want the touch to be recognized by a larger area of 60x60 for a given sprite.
Please see code below:
func singleTapHandler(recognizer: UITapGestureRecognizer) {
    let locationInView = recognizer.location(in: self.view)
    let locationInScene = self.convertPoint(fromView: locationInView)
    print(locationInScene)
    let node = atPoint(locationInScene)
    if let nodeName = node.name {
        if nodeName == "group1" {
            fliesGroup1.changePosition()
        }
    }
}
UPDATE
Based on Hola's answer I did the following to create my empty SKNode, but it isn't recognized when I use flyTouchName to check the location in the scene with the code above.
var touchNode = SKNode()
touchNode.physicsBody = SKPhysicsBody(rectangleOf: CGSize(width: 60, height: 60))
touchNode.physicsBody?.collisionBitMask = 0
touchNode.isUserInteractionEnabled = true
touchNode.physicsBody?.contactTestBitMask = 0
touchNode.name = flyTouchName
fly.addChild(touchNode)
You could create an empty SKNode, set its dimensions, enable user interaction, and add it as a child of your sprite.
Then use that node as your input handler.
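A minimal sketch of that idea, using a transparent SKSpriteNode as the enlarged tap target (the 60x60 size and reusing the fly's name are assumptions for illustration):
// 60x60 invisible tap target around a 40x40 fly; no physics body, so physics is unaffected
let touchArea = SKSpriteNode(color: .clear, size: CGSize(width: 60, height: 60))
touchArea.name = fly.name // same name, so the scene's atPoint(_:) name check still matches
fly.addChild(touchArea)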

Why are objects in the same SKNode layer not interacting with each other?

I've been using SpriteKit for less than a year, so I didn't use SKNodes as layers until recently.
I have an SKNode layer that holds all of the fish and the user's position, for example:
var layerMainGame = SKNode()
layerMainGame.zPosition = 50
layerMainGame.addChild(userPosition)
layerMainGame.addChild(pipFish)
addChild(layerMainGame)
Whether the user touched a fish is handled with this check, which basically tests whether their frames intersect:
if CGRectIntersectsRect(CGRectInset(node.frame, delta.dx, delta.dy), self.userPosition.frame) {
    print("You got hit by \(name).")
    gameOver()
}
It works. The interaction between the userPosition and pipFish works. What doesn't work is fish that are added as the game progresses. I have a function spawning different types of fish in intervals like this:
func spawnNew(fish: SKSpriteNode) {
    layerMainGame.addChild(fish)
}
The interaction between the user and those fish that get added to the same layer later in the game does not work. I can pass right through them and no game over happens. When I completely remove the entire layerMainGame variable and just add them to the scene like normal, all the interactions work. Adding them all to the same SKNode layer doesn't work.
This is the function that creates a hit collision for every fish.
func createHitCollisionFor(name: String, GameOver gameOver: String!, delta: (dx: CGFloat, dy: CGFloat), index: Int = -1) {
    enumerateChildNodesWithName(name) { [unowned me = self] node, _ in
        if CGRectIntersectsRect(CGRectInset(node.frame, delta.dx, delta.dy), self.userPosition.frame) {
            me.gameOverImage.texture = SKTexture(imageNamed: gameOver)
            didGetHitActions()
            me.runAction(audio.playSound(hit)!)
            if index != -1 {
                me.trophySet.encounterTrophy.didEncounter[index] = true
            }
            print("You got hit by \(name).")
        }
    }
}
And I call it like this:
createHitCollisionFor("GoldPiranha", GameOver: model.gameOverImage["Gold"], delta: (dx: 50, dy: 50), index: 1)
It works when the fish are not in the layer, but doesn't work when they are added to the layer.
When a node is placed in the node tree, its position property places it within a coordinate system provided by its parent.
Sprite Kit uses a coordinate orientation that starts from the bottom left corner of the screen (0, 0), and the x and y values increase as you move up and to the right.
For SKScene, the default value of its origin (the anchorPoint) is (0, 0), which corresponds to the lower-left corner of the view's frame rectangle. To change it to the center you can specify (0.5, 0.5).
For SKSpriteNode, the coordinate origin for its children is defined by its anchorPoint, which by default is (0.5, 0.5), the center of the node (a plain SKNode has no anchorPoint; its children are simply positioned relative to its own position).
In your project you have layerMainGame added, for example, to the scene, so the positions of children like your fish are interpreted in the layer's coordinate system; you can see it if you change the fish positions like:
func spawnNew(fish: SKSpriteNode) {
    layerMainGame.addChild(fish)
    fish.position = CGPointZero // position (0, 0) in the layer's coordinate system
}
Hope it helps you understand how to solve your issue.
Update: (after your changes to the main question)
To help you better understand what happens I will give an example right away:
override func didMoveToView(view: SKView) {
    let layerMainGame = SKNode()
    addChild(layerMainGame)

    let pipFish = SKSpriteNode(color: UIColor.yellowColor(), size: CGSizeMake(50, 50))
    pipFish.name = "son"
    self.addChild(pipFish)

    let layerPipFish = SKSpriteNode(color: UIColor.yellowColor(), size: CGSizeMake(50, 50))
    layerPipFish.name = "son"
    layerMainGame.addChild(layerPipFish)

    enumerateChildNodesWithName("son") { [unowned me = self] node, _ in
        print(node)
    }
}
Output: (only pipFish, the sprite added directly to the scene, is printed)
Now I will simply change the line:
layerMainGame.addChild(layerPipFish)
with:
self.addChild(layerPipFish)
Output: (now both sprites are printed)
What happened?
As you can see, enumerateChildNodesWithName, used the way your code and mine use it, only prints children added directly to self (calling enumerateChildNodesWithName is the same as calling self.enumerateChildNodesWithName).
How can I search in the full node tree?
If you have a node named "GoldPiranha" then you can search through all descendants by putting a // before the name. So you would search for "//GoldPiranha":
enumerateChildNodesWithName("//GoldPiranha") { [unowned me = self] ...
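Applied to the helper from the question, that just means prefixing the name when enumerating (a sketch only; the rest of the body stays as posted):
func createHitCollisionFor(name: String, GameOver gameOver: String!, delta: (dx: CGFloat, dy: CGFloat), index: Int = -1) {
    // "//" makes the search recursive, so fish nested inside layerMainGame are found too
    enumerateChildNodesWithName("//" + name) { [unowned me = self] node, _ in
        // ... same hit test and game-over handling as before ...
    }
}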

How to drag a SKSpriteNode without touching it with Swift

I'm trying to move an SKSpriteNode horizontally by dragging. To get an idea of what I'm trying to achieve you can watch this. I want the player to be able to drag the sprite without touching it, but I don't really know how to implement it correctly.
I tried to do something like:
override func didMoveToView(view: SKView) {
    /* Setup your scene here */
    capLeft.size = self.capLeft.size
    self.capLeft.position = CGPointMake(CGRectGetMinX(self.frame) + self.capLeft.size.height * 2, CGRectGetMinY(self.frame) + self.capLeft.size.height * 1.5)
    capLeft.zPosition = 1
    self.addChild(capLeft)

    let panLeftCap: UIPanGestureRecognizer = UIPanGestureRecognizer(target: capLeft, action: Selector("moveLeftCap:"))
And when I define the moveLeftCap function, the code I've found for UIPanGestureRecognizer requires a view and gives me an error. I also want to limit the minimum and maximum positions the sprite can move between.
Any ideas how to implement that?
You probably get that error because you can't just access the view from any node in the tree. You could refer to it as scene!.view, or you could handle the gesture within your scene instead, which is preferable if you want to keep things simple.
I gave it a try and came up with this basic scene:
import SpriteKit

class GameScene: SKScene {

    var shape: SKNode!

    override func didMoveToView(view: SKView) {
        // creates the shape to be moved
        shape = SKShapeNode(circleOfRadius: 30.0)
        shape.position = CGPointMake(frame.midX, frame.midY)
        addChild(shape)

        // sets up the gesture recognizer
        let pan = UIPanGestureRecognizer(target: self, action: "panned:")
        view.addGestureRecognizer(pan)
    }

    var previousTranslateX: CGFloat = 0.0

    func panned(sender: UIPanGestureRecognizer) {
        // retrieve pan movement along the x-axis of the view since the gesture began
        let currentTranslateX = sender.translationInView(view!).x

        // calculate translation since last measurement
        let translateX = currentTranslateX - previousTranslateX

        // move shape within frame boundaries
        let newShapeX = shape.position.x + translateX
        if newShapeX < frame.maxX && newShapeX > frame.minX {
            shape.position = CGPointMake(shape.position.x + translateX, shape.position.y)
        }

        // (re-)set previous measurement
        if sender.state == .Ended {
            previousTranslateX = 0
        } else {
            previousTranslateX = currentTranslateX
        }
    }
}
When you move your finger across the screen, the circle moves along the x-axis accordingly.
If you want to move the sprite in both x and y directions, remember to invert the y-values from the view (up in the view is down in the scene), as sketched below.
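A minimal sketch of that two-axis variant, extending the panned function above (tracking the previous translation as a CGPoint is an assumption of this sketch):
var previousTranslate = CGPointZero

func panned(sender: UIPanGestureRecognizer) {
    let currentTranslate = sender.translationInView(view!)
    let translateX = currentTranslate.x - previousTranslate.x
    // invert the y delta: y grows downward in the view but upward in the scene
    let translateY = previousTranslate.y - currentTranslate.y
    shape.position = CGPointMake(shape.position.x + translateX, shape.position.y + translateY)

    previousTranslate = sender.state == .Ended ? CGPointZero : currentTranslate
}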