Adding ARAnchor at camera position, not World Origin - swift

By default, when adding an ARAnchor it is added at the world origin. I am adding map anchors, and these are updated when the location changes: each anchor is pushed out a certain distance and placed on a bearing. If the distance is far the image is too small, so it's capped at 14 meters. If I continue walking towards the pin I can eventually pass it, because the position gets pushed out again from the world origin.
I have used the code below to add the pin at the camera origin, but it still seems to be placed at the world origin. Any idea why?
public class MapAnchor: ARAnchor {
    public var calloutString: String?
    public var distance: Float?
    public var location: CLLocation?
    public var originLocation: CLLocation?

    public convenience init(originLocation: CLLocation, location: CLLocation, pointOfView: SCNNode) {
        pointOfView.rotation = SCNVector4(0, 0, 0, 0)
        let povTransform = pointOfView.simdWorldTransform
        let transform = povTransform.transformMatrix(originLocation: originLocation, location: location)
        self.init(transform: transform)
        self.distance = Float(location.distance(from: originLocation))
        self.location = location
        self.originLocation = originLocation
    }
}

internal extension simd_float4x4 {
    func transformMatrix(originLocation: CLLocation, location: CLLocation) -> simd_float4x4 {
        // Determine the distance and bearing between the start and end locations
        let bearing = GLKMathDegreesToRadians(Float(originLocation.coordinate.direction(to: location.coordinate)))
        var distance = Float(location.distance(from: originLocation))
        distance = distance / 10
        if distance > 14 { distance = 14 }
        //if computedDistance < 2 { computedDistance = 2 }
        let position = vector_float4(0.0, 0.0, -distance, 0.0)
        let translationMatrix = matrix_identity_float4x4.translationMatrix(position)
        let rotationMatrix = matrix_identity_float4x4.rotationAroundY(radians: bearing)
        let transformMatrix = simd_mul(rotationMatrix, translationMatrix)
        return simd_mul(self, transformMatrix)
    }
}

internal extension matrix_float4x4 {
    func rotationAroundY(radians: Float) -> matrix_float4x4 {
        var m: matrix_float4x4 = self
        m.columns.0.x = cos(radians)
        m.columns.0.z = -sin(radians)
        m.columns.2.x = sin(radians)
        m.columns.2.z = cos(radians)
        return m.inverse
    }

    func translationMatrix(_ translation: vector_float4) -> matrix_float4x4 {
        var m: matrix_float4x4 = self
        m.columns.3 = translation
        return m
    }
}
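For what it's worth, the matrix chain in transformMatrix can be reproduced outside ARKit to see where the camera's position goes. The sketch below uses plain Python with hand-rolled row-major 4x4 helpers (hypothetical names, not the simd API). One detail it highlights: `vector_float4(0.0, 0.0, -distance, 0.0)` has a w component of 0, and a translation column whose w is 0 throws away the camera's own translation when the matrices are multiplied, which would leave the anchor positioned relative to the world origin; with w = 1.0 the offset is applied from the camera's position.

```python
import math

def matmul(a, b):
    """Multiply two 4x4 matrices stored as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rotation_y(radians):
    c, s = math.cos(radians), math.sin(radians)
    return [[c,   0.0, s,   0.0],
            [0.0, 1.0, 0.0, 0.0],
            [-s,  0.0, c,   0.0],
            [0.0, 0.0, 0.0, 1.0]]

def translation(x, y, z, w=1.0):
    """A translation matrix; note the w component of the last column."""
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, w]]

# A camera transform sitting 5 m along +x from the world origin.
camera = translation(5.0, 0.0, 0.0)

# Mirror of simd_mul(rotationMatrix, translationMatrix) with w = 1.0:
bearing = math.radians(90.0)
offset = matmul(rotation_y(bearing), translation(0.0, 0.0, -14.0))

# Mirror of simd_mul(self, transformMatrix):
anchor = matmul(camera, offset)
print([round(v, 3) for v in (anchor[0][3], anchor[1][3], anchor[2][3])])
# -> [-9.0, 0.0, 0.0]: the camera's x = 5 survives in the result.

# With w = 0.0 in the translation column, as in the question's vector_float4,
# the camera's translation is discarded and the anchor sits relative to the
# world origin instead:
bad_offset = matmul(rotation_y(bearing), translation(0.0, 0.0, -14.0, w=0.0))
bad_anchor = matmul(camera, bad_offset)
print([round(v, 3) for v in (bad_anchor[0][3], bad_anchor[1][3], bad_anchor[2][3])])
# -> [-14.0, 0.0, 0.0]: the camera's position is gone.
```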

Related

Implementing AI for Air Hockey in SpriteKit?

I am working on a game similar to air hockey in SpriteKit, for fun and to learn Swift/Xcode. I expect the AI to be quite a challenge, as there are other elements to the game which will need to be accounted for; I know I'll have to keep tackling each issue one by one. I have created the two-player mode for the game, and I'm working on the AI now. Here is some code I have used for calculating and delegating the impulse from mallet to puck (in the two-player mode):
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    bottomTouchIsActive = true
    var releventTouch: UITouch!
    //convert set to known type
    let touchSet = touches
    //get array of touches so we can loop through them
    let orderedTouches = Array(touchSet)
    for touch in orderedTouches {
        //if we've not yet found a relevant touch
        if releventTouch == nil {
            //look for a touch that is in the activeArea (avoid touches by the opponent)
            if activeArea.contains(CGPoint(x: touch.location(in: parent!).x, y: touch.location(in: parent!).y + frame.height * 0.24)) {
                isUserInteractionEnabled = true
                releventTouch = touch
            } else {
                releventTouch = nil
            }
        }
    }
    if releventTouch != nil {
        //get touch position and relocate player
        let location = CGPoint(x: releventTouch!.location(in: parent!).x, y: releventTouch!.location(in: parent!).y + frame.height * 0.24)
        position = location
        //find the old location and use Pythagoras to determine the length between both points
        let oldLocation = CGPoint(x: releventTouch!.previousLocation(in: parent!).x, y: releventTouch!.previousLocation(in: parent!).y + frame.height * 0.24)
        let xOffset = location.x - oldLocation.x
        let yOffset = location.y - oldLocation.y
        let vectorLength = sqrt(xOffset * xOffset + yOffset * yOffset)
        //get the elapsed time and use it to calculate the speed
        if lastTouchTimeStamp != nil {
            let seconds = releventTouch.timestamp - lastTouchTimeStamp!
            let velocity = 0.01 * Double(vectorLength) / seconds
            //to calculate the vector, the velocity needs to be converted to a CGFloat
            let velocityCGFloat = CGFloat(velocity)
            //calculate the impulse
            let directionVector = CGVector(dx: velocityCGFloat * xOffset / vectorLength, dy: velocityCGFloat * yOffset / vectorLength)
            //pass the vector to the scene (so it can apply an impulse to the puck)
            delegate?.bottomForce(directionVector, fromBottomPlayer: self)
            delegate?.bottomTouchIsActive(bottomTouchIsActive, fromBottomPlayer: self)
        }
        //update the latest touch time for the next calculation
        lastTouchTimeStamp = releventTouch.timestamp
    }
}
I am wondering how I can convert this code for the AI. I have been adding some AI logic to the update function, which I believe could also use timestamps and calculate the distance traveled between frames to work out the impulse. I just don't know exactly how to implement that idea. Any help is greatly appreciated :)
Here is some bare-bones code I have so far, mostly for testing purposes, for the AI mode in the update function:
if (ball?.position.y)! < frame.height / 2 {
    if (botPlayer?.position.y)! < frame.height * 0.75 {
        botPlayer?.position.y += 1
    }
} else {
    if (botPlayer?.position.y)! > (ball?.position.y)! {
        if (botPlayer?.position.y)! - (ball?.position.y)! > frame.height * 0.1 {
            botPlayer?.position.y -= 1
        } else {
            botPlayer?.position.y -= 3
        }
    } else {
        botPlayer?.position.y += 1
    }
}
if ((botPlayer?.position.x)! - (ball?.position.x)!) < 2 {
    botPlayer?.position.x = (ball?.position.x)!
}
if (botPlayer?.position.x)! > (ball?.position.x)! {
    botPlayer?.position.x -= 2
} else if (botPlayer?.position.x)! < (ball?.position.x)! {
    botPlayer?.position.x += 2
}
For the AI to make a decision, it must have information about the game state. To do this, you can write a function that reads all available game data (placement of the pucks, player scores, previous moves, etc.) and returns the state as a dictionary. Then write a function that takes in this dictionary and outputs a decision. Consider this workflow in Python:
# we want the AI to make a decision
# start by grabbing the game state
gameState = getGameState()
# pass this to the decision function
decision = getDecision(gameState)
# implement the decision
if decision == "move left":
    moveLeft()
else:
    moveRight()
Is this what you're looking for?
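To carry the question's timestamp idea over to the AI, the same delta computation from touchesMoved can be applied to the bot mallet's own position each frame: store the previous position and timestamp, then derive the impulse from the distance travelled. A minimal sketch of that arithmetic in plain Python, with made-up positions and a 60 fps frame time standing in for the update loop:

```python
import math

def impulse_from_motion(old_pos, new_pos, old_time, new_time, scale=0.01):
    """Derive an impulse vector from the distance travelled between two
    frames, mirroring the touchesMoved calculation in the question."""
    dx = new_pos[0] - old_pos[0]
    dy = new_pos[1] - old_pos[1]
    length = math.hypot(dx, dy)
    seconds = new_time - old_time
    if length == 0 or seconds <= 0:
        return (0.0, 0.0)
    velocity = scale * length / seconds
    # normalise the offset and scale it by the velocity, as in the question
    return (velocity * dx / length, velocity * dy / length)

# e.g. the bot mallet moved 3 points right and 4 up over one 60 fps frame
print(impulse_from_motion((100, 100), (103, 104), 0.0, 1.0 / 60.0))
```

In the SpriteKit update loop the `old_pos`/`old_time` pair would be stored properties updated once per frame, the same way `lastTouchTimeStamp` is updated for the touch-driven player.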

Swift SceneKit - can't apply velocity WHILE doing rotation? Direction is off?

OK, so I'm going straight off Apple's tutorial here, which uses a joystick to move an SCNNode in SceneKit.
I've copied the code and gotten the joystick to both move and rotate the character - but not simultaneously, and not in the right direction relative to the node.
All the correct code is in that download. Here is where I get the angle offset of the joystick handle and the float2 from the joystick UI:
characterDirection = float2(Float(padNode.stickPosition.x), -Float(padNode.stickPosition.y))
let direction = theDude.characterDirection(withPointOfView: renderer.pointOfView)
directionAngle = CGFloat(atan2f(direction.x, direction.z))

public func characterDirection(withPointOfView pointOfView: SCNNode?) -> float3 {
    let controllerDir = theDude.direction // THIS ISN'T BEING UPDATED
    if controllerDir.allZero() {
        return float3.zero
    }
    var directionWorld = float3.zero
    if let pov = pointOfView {
        let p1 = pov.presentation.simdConvertPosition(float3(controllerDir.x, 0.0, controllerDir.y), to: nil)
        let p0 = pov.presentation.simdConvertPosition(float3.zero, to: nil)
        directionWorld = p1 - p0
        directionWorld.y = 0
        if simd_any(directionWorld != float3.zero) {
            let minControllerSpeedFactor = Float(0.2)
            let maxControllerSpeedFactor = Float(1.0)
            let speed = simd_length(controllerDir) * (maxControllerSpeedFactor - minControllerSpeedFactor) + minControllerSpeedFactor
            directionWorld = speed * simd_normalize(directionWorld)
        }
    }
    return directionWorld
}
I didn't write the last part and am still trying to understand it. What is relevant is that I have a float3 and an angle, and they conflict when I run them both as SCNActions in my renderer's update func.
Here is what Apple basically had in update:
// move
if !direction.allZero() {
    theDude.characterVelocity = direction * Float(characterSpeed)
    var runModifier = Float(1.0)
    theDude.walkSpeed = CGFloat(runModifier * simd_length(direction))
    // move character - IMPORTANT
    theDude.directionAngle = CGFloat(atan2f(direction.x, direction.z))
    theDude.node.runAction(SCNAction.move(by: SCNVector3(theDude.characterDirection(withPointOfView: theDude.node)), duration: TimeInterval(40))) // HERE - random time
    theDude.isWalking = true
} else {
    theDude.isWalking = false
    theDude.node.removeAllActions()
}
On the commented line I applied the move, and here is where Apple had the rotation applied:
var directionAngle: CGFloat = 0.0 {
    didSet {
        theDude.node.runAction(
            SCNAction.rotateTo(x: 0.0, y: directionAngle, z: 0.0, duration: 0.1, usesShortestUnitArc: true))
    }
}
They are both happening; the problem is that I don't really know what to put as my duration, and my node moves, say, left when I have the joystick pointed right, etc., because I am not doing the move correctly.
I tried to copy the demo but they have a moving floor, so it is different. What am I doing wrong here?
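The conversion that characterDirection performs, taking the 2D stick vector into the camera's frame and deriving a facing angle, can be sketched outside SceneKit. This is plain Python with the camera reduced to a single yaw angle, which is an assumption for illustration; the real code converts through the full presentation transform:

```python
import math

def world_direction(stick_x, stick_y, camera_yaw):
    """Rotate the 2D stick vector, treated as (x, z) on the ground plane,
    by the camera's yaw so movement is relative to the point of view.
    (A plain-Python stand-in for characterDirection(withPointOfView:).)"""
    dx = stick_x * math.cos(camera_yaw) + stick_y * math.sin(camera_yaw)
    dz = -stick_x * math.sin(camera_yaw) + stick_y * math.cos(camera_yaw)
    return dx, dz

def direction_angle(dx, dz):
    """Yaw needed to face along (dx, dz); same idea as
    atan2f(direction.x, direction.z) in the question."""
    return math.atan2(dx, dz)

# Stick pushed toward the screen top maps to -z ("forward") when the camera
# has zero yaw, so the character should face yaw = pi relative to +z.
dx, dz = world_direction(0.0, -1.0, 0.0)
print(dx, dz, direction_angle(dx, dz))
```

The key point the sketch shows: the movement vector and the rotation angle are derived from the same camera-relative direction, so they stay consistent only if both are recomputed from the current point of view every frame, rather than applying the raw stick values as a world-space move.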

REQ: Swift refactor / code layout / review

I have made a mess of my translation from Obj-C to Swift, so I'd really appreciate a refactor/code-layout review. The curly braces are really throwing me. Are there any Xcode plugins or similar to help me better manage my code blocks?
Some of my functions and calculations may not be very efficient either, so if you have any suggestions for those areas that would be great too. For example, if you have used or seen better filter algorithms, etc.
p.s. thanks Martin.
import UIKit
import Foundation
import AVFoundation
import CoreMedia
import CoreVideo

let minFramesForFilterToSettle = 10

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    let captureSession = AVCaptureSession()
    // If we find a device we'll store it here for later use
    var captureDevice: AVCaptureDevice?
    var validFrameCounter: Int = 0
    var detector: Detector!
    var filter: Filter!
    // var currentState = CurrentState.stateSampling // Is this initialized correctly?

    override func viewDidLoad() {
        super.viewDidLoad()
        self.detector = Detector()
        self.filter = Filter()
        // startCameraCapture() // call to un-used function.
        captureSession.sessionPreset = AVCaptureSessionPresetHigh
        let devices = AVCaptureDevice.devices()
        // Loop through all the capture devices on this phone
        for device in devices {
            // Make sure this particular device supports video
            if (device.hasMediaType(AVMediaTypeVideo)) {
                // Finally check the position and confirm we've got the back camera
                if (device.position == AVCaptureDevicePosition.Front) {
                    captureDevice = device as? AVCaptureDevice
                    if captureDevice != nil {
                        //println("Capture device found")
                        beginSession()
                    }
                }
            }
        }
    } // end of viewDidLoad ???
    // configure device for camera and focus mode // maybe not needed since we don't use focus?
    func configureDevice() {
        if let device = captureDevice {
            device.lockForConfiguration(nil)
            //device.focusMode = .Locked
            device.unlockForConfiguration()
        }
    }

    // start capturing frames
    func beginSession() {
        // Create the AVCapture Session
        configureDevice()
        var err: NSError? = nil
        captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &err))
        if err != nil {
            println("error: \(err?.localizedDescription)")
        }
        // Automatic switch ON torch mode
        if captureDevice!.hasTorch {
            // lock your device for configuration
            captureDevice!.lockForConfiguration(nil)
            // check if your torchMode is on or off. If on, turn it off, otherwise turn it on
            captureDevice!.torchMode = captureDevice!.torchActive ? AVCaptureTorchMode.Off : AVCaptureTorchMode.On
            // sets the torch intensity to 100%
            captureDevice!.setTorchModeOnWithLevel(1.0, error: nil)
            // unlock your device
            captureDevice!.unlockForConfiguration()
        }
        // Create a AVCaptureInput with the camera device
        var deviceInput: AVCaptureInput = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &err) as! AVCaptureInput
        if deviceInput == nil! {
            println("error: \(err?.localizedDescription)")
        }
        // Set the output
        var videoOutput: AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
        // create a queue to run the capture on
        var captureQueue: dispatch_queue_t = dispatch_queue_create("captureQueue", nil)
        // set ourselves up as the capture delegate
        videoOutput.setSampleBufferDelegate(self, queue: captureQueue)
        // configure the pixel format
        videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)] // kCVPixelBufferPixelFormatTypeKey is a CFString btw.
        // set the minimum acceptable frame rate to 10 fps
        captureDevice!.activeVideoMinFrameDuration = CMTimeMake(1, 10)
        // and the size of the frames we want - we'll use the smallest frame size available
        captureSession.sessionPreset = AVCaptureSessionPresetLow
        // Add the input and output
        captureSession.addInput(deviceInput)
        captureSession.addOutput(videoOutput)
        // Start the session
        captureSession.startRunning()
        // we're now sampling from the camera

        enum CurrentState {
            case statePaused
            case stateSampling
        }
        var currentState = CurrentState.statePaused

        func setState(state: CurrentState) {
            switch state {
            case .statePaused:
                // what goes here? Something like this?
                UIApplication.sharedApplication().idleTimerDisabled = false
            case .stateSampling:
                // what goes here? Something like this?
                UIApplication.sharedApplication().idleTimerDisabled = true // singletons
            }
        }
        // we're now sampling from the camera
        currentState = CurrentState.stateSampling
        // stop the app from sleeping
        UIApplication.sharedApplication().idleTimerDisabled = true
        // update our UI on a timer every 0.1 seconds
        NSTimer.scheduledTimerWithTimeInterval(0.1, target: self, selector: Selector("update"), userInfo: nil, repeats: true)

        func stopCameraCapture() {
            captureSession.stopRunning()
            captureSession = nil
        }
    // pragma mark Pause and Resume of detection
    func pause() {
        if currentState == CurrentState.statePaused {
            return
        }
        // switch off the torch
        if captureDevice!.isTorchModeSupported(AVCaptureTorchMode.On) {
            captureDevice!.lockForConfiguration(nil)
            captureDevice!.torchMode = AVCaptureTorchMode.Off
            captureDevice!.unlockForConfiguration()
        }
        currentState = CurrentState.statePaused
        // let the application go to sleep if the phone is idle
        UIApplication.sharedApplication().idleTimerDisabled = false
    }

    func resume() {
        if currentState != CurrentState.statePaused {
            return
        }
        // switch on the torch
        if captureDevice!.isTorchModeSupported(AVCaptureTorchMode.On) {
            captureDevice!.lockForConfiguration(nil)
            captureDevice!.torchMode = AVCaptureTorchMode.On
            captureDevice!.unlockForConfiguration()
        }
        currentState = CurrentState.stateSampling
        // stop the app from sleeping
        UIApplication.sharedApplication().idleTimerDisabled = true
    }
    // beginning of paste
    // r,g,b values are from 0 to 1 // h = [0,360], s = [0,1], v = [0,1]
    // if s == 0, then h = -1 (undefined)
    func RGBtoHSV(r: Float, g: Float, b: Float, inout h: Float, inout s: Float, inout v: Float) {
        let rgbMin = min(r, g, b)
        let rgbMax = max(r, g, b)
        let delta = rgbMax - rgbMin
        v = rgbMax
        s = delta / rgbMax
        h = Float(0.0)
        // start of calculation
        if (rgbMax != 0) {
            s = delta / rgbMax
        } else {
            // r = g = b = 0
            s = 0
            h = -1
            return
        }
        if r == rgbMax {
            h = (g - b) / delta
        } else if (g == rgbMax) {
            h = 2 + (b - r) / delta
        } else {
            h = 4 + (r - g) / delta
            h = 60
        }
        if (h < 0) {
            h += 360
        }
    }
    // process the frame of video
    func captureOutput(captureOutput: AVCaptureOutput, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, fromConnection connection: AVCaptureConnection) {
        // if we're paused don't do anything
        if currentState == CurrentState.statePaused {
            // reset our frame counter
            self.validFrameCounter = 0
            return
        }
        // this is the image buffer
        var cvimgRef: CVImageBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)
        // Lock the image buffer
        CVPixelBufferLockBaseAddress(cvimgRef, 0)
        // access the data
        var width: size_t = CVPixelBufferGetWidth(cvimgRef)
        var height: size_t = CVPixelBufferGetHeight(cvimgRef)
        // get the raw image bytes
        var buf = UnsafeMutablePointer<UInt8>(CVPixelBufferGetBaseAddress(cvimgRef))
        var bprow: size_t = CVPixelBufferGetBytesPerRow(cvimgRef)
        var r = 0
        var g = 0
        var b = 0
        for var y = 0; y < height; y++ {
            for var x = 0; x < width * 4; x += 4 {
                b += Int(buf[x])
                g += Int(buf[x + 1])
                r += Int(buf[x + 2])
            }
            buf += bprow
        }
        r /= 255 * (width * height)
        g /= 255 * (width * height)
        b /= 255 * (width * height)
    }
    // convert from rgb to hsv colourspace
    var h = Float()
    var s = Float()
    var v = Float()
    RGBtoHSV(r, g, b, &h, &s, &v)
    // do a sanity check for blackness
    if s > 0.5 && v > 0.5 {
        // increment the valid frame count
        validFrameCounter++
        // filter the hue value - the filter is a simple band pass filter that removes any DC component and any high frequency noise
        var filtered: Float = filter.processValue(h)
        // have we collected enough frames for the filter to settle?
        if validFrameCounter > minFramesForFilterToSettle {
            // add the new value to the detector
            detector.addNewValue(filtered, atTime: CACurrentMediaTime())
        }
    } else {
        validFrameCounter = 0
        // clear the detector - we only really need to do this once, just before we start adding valid samples
        detector.reset()
    }
}
You can actually write that call with argument labels:
RGBtoHSV(r: r, g: g, b: b, h: &h, s: &s, v: &v)
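As a cross-check on the colour-space code: the standard hue-sector algorithm that RGBtoHSV is translating looks like this in Python. Note the final `h *= 60` sector-to-degrees step, which the Swift above applies as `h = 60` inside one branch only, and the `delta == 0` guard, which the posted version lacks (it would divide by zero on grey pixels). This is a sketch of the textbook algorithm, not a drop-in replacement for the posted function:

```python
def rgb_to_hsv(r, g, b):
    """RGB in [0, 1] -> (h, s, v) with h in [0, 360), s and v in [0, 1].
    Returns h = -1 for achromatic colours, matching the convention stated
    in the question's comments."""
    rgb_max = max(r, g, b)
    rgb_min = min(r, g, b)
    delta = rgb_max - rgb_min
    v = rgb_max
    if rgb_max == 0 or delta == 0:
        # black or grey: saturation is 0 and hue is undefined
        return -1.0, 0.0, v
    s = delta / rgb_max
    if r == rgb_max:
        h = (g - b) / delta        # between yellow and magenta
    elif g == rgb_max:
        h = 2 + (b - r) / delta    # between cyan and yellow
    else:
        h = 4 + (r - g) / delta    # between magenta and cyan
    h *= 60                        # convert sector number to degrees
    if h < 0:
        h += 360
    return h, s, v

print(rgb_to_hsv(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 1.0)
```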

SpriteKit - touchesMoved, aim crosshair

I have a sprite node and a crosshair node. When the player touches the sprite node and moves their finger, I want the crosshair to move as well.
override func touchesMoved(touches: NSSet, withEvent event: UIEvent) {
    for touch: AnyObject in touches {
        let location = touch.locationInNode(self)
        var body = self.nodeAtPoint(location)
        if var name: String = body.name {
            if body.name == "aim-button" {
                crossHair.position = CGPointMake(crossHair.position.x + 10, crossHair.position.y + 10)
            }
        }
    }
}
The crosshair does move, but only in one direction, and I have no idea how to make it accurate with respect to the distance the touch moved (which is obviously much smaller than what the crosshair should actually move on the screen) and its direction.
for touch: AnyObject in touches {
    //Get the current position in scene of the touch.
    let location = touch.locationInNode(self)
    //Get the previous position in scene of the touch.
    let previousLocation = touch.previousLocationInNode(self)
    //Calculate the translation.
    let translation = CGPointMake(location.x - previousLocation.x, location.y - previousLocation.y)
    //Get the current position in scene of the crossHair.
    let position = crossHair.position
    //Get the body touched
    var body = self.nodeAtPoint(location)
    if var name: String = body.name {
        if body.name == "aim-button" {
            //Set the position of the crosshair to its current position plus the translation.
            crossHair.position = CGPointMake(position.x + translation.x * 2, position.y + translation.y * 2)
            //Set the position of the body
            body.position = location
        }
    }
}
If the crosshair should move a greater distance than the touch, just multiply the translation by a factor (I'm using 2 in the example above).
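The arithmetic behind this is just the touch's frame-to-frame delta scaled by a factor. A quick sketch outside SpriteKit, using plain Python tuples with made-up point values:

```python
def translated(position, location, previous_location, factor=2.0):
    """Move a point by the touch's frame-to-frame translation, scaled by
    a sensitivity factor (mirrors the answer's crosshair update)."""
    dx = location[0] - previous_location[0]
    dy = location[1] - previous_location[1]
    return (position[0] + factor * dx, position[1] + factor * dy)

# Touch moved 2 points right and 3 down; crosshair moves twice as far.
print(translated((50, 50), (12, 10), (10, 13)))  # -> (54.0, 44.0)
```

Because the delta is applied per move event, the crosshair tracks the direction of the finger rather than always stepping by a constant (+10, +10) as in the question's version.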

Exception 'NSInvalidArgumentException', reason: 'Attempted to add a SKNode which already has a parent'

I can't figure out this problem. I'm deleting a certain SpriteNode, then sometimes re-adding it on a condition; however, it crashes every time I call addChild(). I know a SpriteNode can only have one parent, so I don't understand this. Here is the relevant code:
override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
    var touch: UITouch = touches.anyObject() as UITouch
    var location = touch.locationInNode(self)
    var node = self.nodeAtPoint(location)
    for var i = 0; i < tileNodeArray.count; i++ {
        if (node == tileNodeArray[i]) {
            flippedTilesCount++
            flippedTilesArray.append(tileNodeArray[i])
            let removeAction = SKAction.removeFromParent()
            tileNodeArray[i].runAction(removeAction)
            if flippedTilesCount == 2 {
                var helperNode1 = newMemoLabelNode("first", x: 0, y: 0, aka: "first")
                var helperNode2 = newMemoLabelNode("second", x: 0, y: 0, aka: "second")
                for var k = 0; k < labelNodeArray.count; k++ {
                    if labelNodeArray[k].position == flippedTilesArray[0].position {
                        helperNode1 = labelNodeArray[k]
                    }
                    if labelNodeArray[k].position == flippedTilesArray[1].position {
                        helperNode2 = labelNodeArray[k]
                    }
                }
                if helperNode1.name == helperNode2.name {
                    erasedTiles = erasedTiles + 2
                } else {
                    for var j = 0; j < flippedTilesArray.count; j++ {
                        let waitAction = SKAction.waitForDuration(1.0)
                        flippedTilesArray[j].runAction(waitAction)
                        //self.addChild(flippedTilesArray[j]);
                    }
                }
                flippedTilesCount = 0
                flippedTilesArray = []
                println("erased tiles:")
                println(erasedTiles)
            }
        }
    }
}
Appreciate your help!
I would recommend that you not use SKAction.removeFromParent, but instead remove the node directly by calling:
tileNodeArray[i].removeFromParent()
instead of:
let removeAction = SKAction.removeFromParent()
tileNodeArray[i].runAction(removeAction)
The problem might be that the SKActions don't wait for each other to finish. For example, if you run the waitAction, the other actions will keep running alongside it.
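The single-parent rule behind the crash, and why removing the node synchronously helps, can be modelled in a few lines. This is a plain-Python stand-in, not the SpriteKit API: adding a child throws while the node still has a parent, and an SKAction scheduled to remove it may not have run yet by the time you call addChild again.

```python
class Node:
    """Tiny stand-in for SKNode's single-parent rule (not SpriteKit)."""
    def __init__(self, name):
        self.name = name
        self.parent = None
        self.children = []

    def add_child(self, node):
        # SpriteKit raises NSInvalidArgumentException in this situation
        if node.parent is not None:
            raise ValueError("Attempted to add a Node which already has a parent")
        node.parent = self
        self.children.append(node)

    def remove_from_parent(self):
        if self.parent is not None:
            self.parent.children.remove(self)
            self.parent = None

scene = Node("scene")
tile = Node("tile")
scene.add_child(tile)

# Re-adding before the node has actually been detached reproduces the crash:
try:
    scene.add_child(tile)
except ValueError as e:
    print(e)

# Detaching synchronously first (the answer's advice) makes the re-add safe:
tile.remove_from_parent()
scene.add_child(tile)
```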