Hit-test on Animated Character – Getting the correct worldCoordinates

In my ARKit app I have an animated character (stored as a T-pose model in an SCN file). The animations are taken from several DAE files and applied to the model using SCNAnimationPlayer, like so:
let myAnimation = Animations.configMyAnimationFunction()
myAnimation.stop()
enemyNode.childNodes[2].addAnimationPlayer(myAnimation, forKey: "myKey")
enemyNode.childNodes[2].animationPlayer(forKey: "myKey")?.play()
The animation plays perfectly.
Now I do a hit-test against the animated geometry, like this:
let currentTouchPoint = touches.first?.location(in: self.sceneView)
let hitTest = sceneView.hitTest(currentTouchPoint!, options: [
    SCNHitTestOption.categoryBitMask: NodeCategory.catEnemy.rawValue,
    SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber
])
let hitObject = hitTest.first?.node // the node found by the hit-test
if hitObject != nil {
    // code...
    let hitLocation = hitTest.first?.worldCoordinates
    // code...
}
I want to use the result from worldCoordinates, but it seems that the result always contains coordinates from the static T-pose model instead of the geometry's current location while the animation is running.
Imagine the animated model is clapping its hands (as a humanoid character) or touching the ground. When I touch the model's hands, the hit-test works and returns a result, but at the wrong coordinates.
Apple's documentation describes worldCoordinates as "The point of intersection between the geometry and the search ray, in the scene’s world coordinate system." I also tried localCoordinates, but with even less success.
How can I determine the real coordinates at the current touch location on the geometry while the model is animated?
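For context, SceneKit keeps the animated state on each node's presentation copy, while the model tree keeps the bind-pose values. Here is a minimal sketch of reading a joint's animated world position; "handBone" is a hypothetical joint name, not taken from the question's model:
if let hand = enemyNode.childNode(withName: "handBone", recursively: true) {
    // The presentation node reflects the state mid-animation;
    // the model tree still holds the bind (T-pose) transform.
    let animatedHandPosition = hand.presentation.worldPosition
    print("Animated hand position: \(animatedHandPosition)")
}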

Related

How to detect which CAShapeLayer was clicked on in macOS using Swift?

I have a view with many CAShapeLayer objects (there can be other CALayer objects as well) and I want to modify the CAShapeLayer object that the user clicks on.
I was experimenting with the two methods below, but neither of them works. Any tips would be great.
Approach one:
private func modifyDrawing(at point: NSPoint) {
    for layer in view.layer!.sublayers! {
        let s = layer.hitTest(point)
        if s != nil && s is CAShapeLayer {
            selectedShape = s as? CAShapeLayer
        }
    }
    // modify some properties
    selectedShape?.shadowRadius = 20
    selectedShape?.shadowOpacity = 1
    selectedShape?.shadowColor = CGColor.black
}
Approach two:
private func modifyDrawing(at point: NSPoint) {
    let drawingsAtMouseClick: [CAShapeLayer] = view.layer!.sublayers!.compactMap { $0 as? CAShapeLayer }
    if drawingsAtMouseClick.isEmpty {
        return
    }
    for drawing in drawingsAtMouseClick {
        if drawing.contains(point) {
            selectedShape = drawing
            break
        }
    }
    // modify some properties
    selectedShape?.shadowRadius = 20
    selectedShape?.shadowOpacity = 1
    selectedShape?.shadowColor = CGColor.black
}
The point parameter passed to these functions is the NSEvent.locationInWindow. I'm not sure whether I should convert it into the CAShapeLayer's coordinate space or something.
P.S.: This isn't production code so please kindly ignore Swift best practices, etc.
The CALayer.hitTest(_:) method will tell you which layer was hit, searching that layer's sublayers as well.
So you shouldn't need to check every sublayer yourself; just ask the view's top-level layer which layer was hit.
A view's layer's coordinates are generally the same as the view's bounds. It's anchored at 0,0 in the parent layer, and the sublayers use that coordinate space. Thus, you should convert your point to view/layer coordinates before hit testing.
(I always have to go check to see which coordinate systems are flipped from the other between iOS and Mac OS and views and layers. You might need to flip coordinates. I leave that research up to you.)
Edit:
I seem to remember that CALayer.hitTest(_:) just checks that the layer's frame contains the point, not that it actually contains an opaque pixel at that position. It's more complex if you want to check to see if the point contains an opaque pixel.
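Putting that together, a minimal sketch, assuming a layer-backed view and a point taken from NSEvent.locationInWindow (any coordinate flipping, per the note above, is left to you):
private func modifyDrawing(at locationInWindow: NSPoint) {
    // Convert from window coordinates into the view's (root layer's) space.
    let viewPoint = view.convert(locationInWindow, from: nil)
    // Ask the top-level layer once; it searches its sublayer tree for us.
    if let hit = view.layer?.hitTest(viewPoint) as? CAShapeLayer {
        selectedShape = hit
        selectedShape?.shadowRadius = 20
        selectedShape?.shadowOpacity = 1
        selectedShape?.shadowColor = CGColor.black
    }
}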

sceneView projectPoint after changing eulerAngles

I'm working on an app that shows location-based AR using ARKit-CoreLocation. I have two SCNNodes: a sceneNode, and a visibleNode (a visible AR element) as its child.
sceneNode.addChildNode(visibleNode)
There are instances where I need to align the AR with other UI elements in the app. In order to do it, I rotate my sceneNode by 1 degree and check the position of the visibleNode until it is centered on the screen. I do something like this in my SCNView:
// 1. visibleNode x,y before rotating sceneNode
let nodePositionBefore = self.projectPoint(visibleNode.position)
// 2. rotate sceneNode
sceneNode?.eulerAngles.y -= Float(1).degreesToRadians
// 3. visibleNode x,y after rotating sceneNode
let nodePositionAfter = self.projectPoint(visibleNode.position)
When I run the code, I see the visibleNode moving left/right on the screen. But if I look at nodePositionBefore and nodePositionAfter, they have exactly the same value.
What am I missing?
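One plausible explanation, offered as a guess: position is expressed in the parent node's coordinate space, so rotating the parent never changes it, while projectPoint expects world coordinates. A sketch of the likely fix, reusing the question's degreesToRadians helper:
// 1. visibleNode x,y before rotating sceneNode
// worldPosition reflects the parent's rotation; the local position does not.
let nodePositionBefore = self.projectPoint(visibleNode.worldPosition)
// 2. rotate sceneNode
sceneNode?.eulerAngles.y -= Float(1).degreesToRadians
// 3. visibleNode x,y after rotating sceneNode
let nodePositionAfter = self.projectPoint(visibleNode.worldPosition)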

How to prevent Entities from overlapping?

I'm trying to create an AR experience with RealityKit but I'm finding that by default, entities will move into each other and overlap when they are moved by user interaction.
I want to prevent the objects from overlapping and entering each other, so that when they are moved by the user they just hit/bounce off without overlapping.
I'm loading the entities from a Reality Composer file like this and adding them to the scene (inside a do/catch block, with other details omitted from this simplified version):
let entity = try Experience.loadBallSort()
anchorEntity.addChild(entity)
// anchorEntity is an AnchorEntity that is already attached to the scene
I'm enabling user interaction with the default gestures, like this; this is how the objects come to overlap, because they don't stop once they touch:
arView.installGestures([.rotation, .translation], for: entity)
Within Reality Composer, I've got Physics enabled with a Static motion type, and the default physics material/collision shape for each object. I've also tried generateCollisionShapes, like this, but it doesn't change the collision behaviour:
entity.generateCollisionShapes(recursive: true)
How can I prevent entities from overlapping in RealityKit?
There's no overlapping when colliding
To implement such a scenario, let's take two objects: one dynamic and the other kinematic.
PhysicsBodyMode.dynamic: forces and collisions control body movement.
PhysicsBodyMode.kinematic: the user controls body movement. This type of physics body is unaffected by forces and collisions, but it can cause collisions that affect other bodies when it is moved.
Code:
var arView = ARView(frame: .zero)
arView.frame = self.view.frame
self.view.addSubview(arView)

let scene = try! Experience.loadModels()

// Kinematic
let red = scene.redBox!.children[0] as! (Entity & HasCollision & HasPhysicsBody)
red.physicsBody = .init()
red.physicsBody?.massProperties.mass = 5
red.physicsBody?.mode = .kinematic
red.generateCollisionShapes(recursive: true)
arView.installGestures([.translation], for: red)

// Dynamic
let green = scene.greenCube!.children[0] as! (Entity & HasCollision & HasPhysicsBody)
green.physicsBody = .init()
green.physicsBody?.massProperties.mass = 5
green.physicsBody?.mode = .dynamic
green.generateCollisionShapes(recursive: true)
P.S.
Don't apply physics in Reality Composer; apply it programmatically in RealityKit.
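In the same spirit, here's a minimal fully programmatic sketch that needs no Reality Composer scene; the box size, color, and plane anchor are illustrative choices, not from the original answer:
// A kinematic box the user can drag; dynamic bodies it hits get pushed away.
let box = ModelEntity(mesh: .generateBox(size: 0.2),
                      materials: [SimpleMaterial(color: .red, isMetallic: false)])
box.generateCollisionShapes(recursive: true)
box.physicsBody = PhysicsBodyComponent(massProperties: .default,
                                       material: .default,
                                       mode: .kinematic)
arView.installGestures([.translation], for: box)

let anchor = AnchorEntity(plane: .horizontal)
anchor.addChild(box)
arView.scene.addAnchor(anchor)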

How do I programmatically move an ARAnchor?

I'm trying out the new ARKit to replace another similar solution I have. It's pretty great! But I can't seem to figure out how to move an ARAnchor programmatically. I want to slowly move the anchor to the left of the user.
Creating the anchor to be 2 meters in front of the user:
var translation = matrix_identity_float4x4
translation.columns.3.z = -2.0
let transform = simd_mul(currentFrame.camera.transform, translation)
let anchor = ARAnchor(transform: transform)
sceneView.session.add(anchor: anchor)
Later, moving the object to the left/right of the user (x-axis)...
anchor.transform.columns.3.x = anchor.transform.columns.3.x + 0.1
repeated every 50 milliseconds (or whatever).
The above does not work because transform is a get-only property.
I need a way to change the position of an AR object in space relative to the user that keeps the AR experience intact: if you move your device, the AR object keeps moving, but it isn't "stuck" to the camera as if simply painted on; it moves the way you'd see a person move while you were walking by. They are moving and you are moving, and it looks natural.
Please note the scope of this question relates only to how to move an object in space in relation to the user (ARAnchor), not in relation to a plane (ARPlaneAnchor) or to another detected surface (ARHitTestResult).
Thanks!
You don't need to move anchors. (hand wave) That's not the API you're looking for...
Adding ARAnchor objects to a session is effectively about "labeling" a point in real-world space so that you can refer to it later. The point (1,1,1) (for example) is always the point (1,1,1) — you can't move it someplace else because then it's not the point (1,1,1) anymore.
To make a 2D analogy: anchors are reference points, sort of like the bounds of a view. The system (or another piece of your code) tells the view where its boundaries are, and the view draws its content relative to those boundaries. Anchors in AR give you reference points you can use for drawing content in 3D.
What you're asking is really about moving (and animating the movement of) virtual content between two points. And ARKit itself really isn't about displaying or animating virtual content — there are plenty of great graphics engines out there, so ARKit doesn't need to reinvent that wheel. What ARKit does is provide a real-world frame of reference for you to display or animate content using an existing graphics technology like SceneKit or SpriteKit (or Unity or Unreal, or a custom engine built with Metal or GL).
Since you mentioned trying to do this with SpriteKit... beware, it gets messy. SpriteKit is a 2D engine, and while ARSKView provides some ways to shoehorn a third dimension in there, those ways have their limits.
ARSKView automatically updates the xScale, yScale, and zRotation of each sprite associated with an ARAnchor, providing the illusion of 3D perspective. But that applies only to nodes attached to anchors, and as noted above, anchors are static.
You can, however, add other nodes to your scene, and use those same properties to make those nodes match the ARSKView-managed nodes. Here's some code you can add/replace in the ARKit/SpriteKit Xcode template project to do that. We'll start with some basic logic to run a bouncing animation on the third tap (after using the first two taps to place anchors).
var anchors: [ARAnchor] = []

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // Start bouncing on touch after placing 2 anchors (don't allow more)
    if anchors.count > 1 {
        startBouncing(time: 1)
        return
    }
    // Create anchor using the camera's current position
    guard let sceneView = self.view as? ARSKView else { return }
    if let currentFrame = sceneView.session.currentFrame {
        // Create a transform with a translation of 30 cm in front of the camera
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -0.3
        let transform = simd_mul(currentFrame.camera.transform, translation)
        // Add a new anchor to the session
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
        anchors.append(anchor)
    }
}
Then, some SpriteKit fun for making that animation happen:
var ballNode: SKLabelNode = {
    let labelNode = SKLabelNode(text: "🏀")
    labelNode.horizontalAlignmentMode = .center
    labelNode.verticalAlignmentMode = .center
    return labelNode
}()

func startBouncing(time: TimeInterval) {
    guard
        let sceneView = self.view as? ARSKView,
        let first = anchors.first, let start = sceneView.node(for: first),
        let last = anchors.last, let end = sceneView.node(for: last)
    else { return }

    if ballNode.parent == nil {
        addChild(ballNode)
    }
    ballNode.setScale(start.xScale)
    ballNode.zRotation = start.zRotation
    ballNode.position = start.position

    let scale = SKAction.scale(to: end.xScale, duration: time)
    let rotate = SKAction.rotate(toAngle: end.zRotation, duration: time)
    let move = SKAction.move(to: end.position, duration: time)

    let scaleBack = SKAction.scale(to: start.xScale, duration: time)
    let rotateBack = SKAction.rotate(toAngle: start.zRotation, duration: time)
    let moveBack = SKAction.move(to: start.position, duration: time)

    let action = SKAction.repeatForever(.sequence([
        .group([scale, rotate, move]),
        .group([scaleBack, rotateBack, moveBack])
    ]))
    ballNode.removeAllActions()
    ballNode.run(action)
}
If you run this code, you'll notice the illusion only works as long as you don't move the camera, which is not so great for AR. When using SKAction, we can't adjust the start/end states of the animation while animating, so the ball keeps bouncing back and forth between its original (screen-space) positions/rotations/scales.
You could do better by animating the ball directly, but it's a lot of work. On every frame (or every view(_:didUpdate:for:) delegate callback), you'd need to:
1. Save off the updated position, rotation, and scale values for the anchor-based nodes at each end of the animation. You'll need to do this twice per didUpdate callback, because you'll get one callback for each node.
2. Work out position, rotation, and scale values for the node being animated, by interpolating between the two endpoint values based on the current time.
3. Set the new attributes on the node. (Or maybe animate it to those attributes over a very short duration, so it doesn't jump too much in one frame?)
That's kind of a lot of work to shoehorn a fake 3D illusion into a 2D graphics toolkit — hence my comments about SpriteKit not being a great first step into ARKit.
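For illustration only, here is a rough sketch of what step 2's interpolation could look like; this is a hypothetical method, not from the original answer, and the reverse half of the bounce is omitted:
// Lerp the ball between the two anchor-backed nodes using 0...1 progress.
func updateBall(from startNode: SKNode, to endNode: SKNode,
                currentTime: TimeInterval, startTime: TimeInterval,
                duration: TimeInterval) {
    // Progress through the forward half of the bounce, wrapped to 0...1.
    let elapsed = (currentTime - startTime).truncatingRemainder(dividingBy: duration)
    let t = CGFloat(elapsed / duration)
    // Interpolate each attribute between the two endpoint nodes.
    ballNode.position = CGPoint(
        x: startNode.position.x + (endNode.position.x - startNode.position.x) * t,
        y: startNode.position.y + (endNode.position.y - startNode.position.y) * t)
    ballNode.setScale(startNode.xScale + (endNode.xScale - startNode.xScale) * t)
    ballNode.zRotation = startNode.zRotation + (endNode.zRotation - startNode.zRotation) * t
}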
If you want 3D positioning and animation for your AR overlays, it's a lot easier to use a 3D graphics toolkit. Here's a repeat of the previous example, but using SceneKit instead. Start with the ARKit/SceneKit Xcode template, take the spaceship out, and paste the same touchesBegan function from above into the ViewController. (Change the as ARSKView casts to as ARSCNView, too.)
Then, some quick code for placing 2D billboarded sprites, matching the behavior of the ARKit/SpriteKit template via SceneKit:
// in global scope
func makeBillboardNode(image: UIImage) -> SCNNode {
    let plane = SCNPlane(width: 0.1, height: 0.1)
    plane.firstMaterial!.diffuse.contents = image
    let node = SCNNode(geometry: plane)
    node.constraints = [SCNBillboardConstraint()]
    return node
}

// inside ViewController
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // emoji to image based on https://stackoverflow.com/a/41021662/957768
    let billboard = makeBillboardNode(image: "⛹️".image())
    node.addChildNode(billboard)
}
Finally, adding the animation for the bouncing ball:
let ballNode = makeBillboardNode(image: "🏀".image())

func startBouncing(time: TimeInterval) {
    guard
        let sceneView = self.view as? ARSCNView,
        let first = anchors.first, let start = sceneView.node(for: first),
        let last = anchors.last, let end = sceneView.node(for: last)
    else { return }

    if ballNode.parent == nil {
        sceneView.scene.rootNode.addChildNode(ballNode)
    }

    let animation = CABasicAnimation(keyPath: #keyPath(SCNNode.transform))
    animation.fromValue = start.transform
    animation.toValue = end.transform
    animation.duration = time
    animation.autoreverses = true
    animation.repeatCount = .infinity
    ballNode.removeAllAnimations()
    ballNode.addAnimation(animation, forKey: nil)
}
This time the animation code is a lot shorter than the SpriteKit version.
Because we're working in 3D to start with, we're actually animating between two 3D positions; unlike in the SpriteKit version, the animation stays where it's supposed to (and without the extra work of directly interpolating and animating attributes).

SpriteNode position versus touch location

So I'm using a sprite node with an image, and I don't set its position. Funny enough, I was having issues setting the position, so I didn't, and by default it was set to the center of the page (for obvious reasons), which ended up being perfect. So now, in my touchesBegan method, I want to change that image IF the original image was pressed, so I'm checking whether the touched location is equal to the position of the image's node. If that is true, I replace it with the new image, which is called "nowaves".
for touch in touches {
    let location = touch.location(in: self)
    if location == audioPlaying.position {
        audioPlaying = SKSpriteNode(imageNamed: "nowaves")
    } else {
        // ...
    }
}
Do you guys think I need to re-add it? Well, right as I asked that, I tested it with no result. This is what I tried:
audioPlaying.size = CGSize(width: frame.size.width / 13, height: frame.size.height / 20)
addChild(audioPlaying)
If anyone has any idea what the problem is I would love some feedback, thank you :D
By default, the position property gives the coordinates of the node's center. In other words, if (location == audioPlaying.position) is true only when you touch the sprite's exact center point, which is practically impossible.
So we use another approach: check whether the touched node is the node you want.
let touchedNode = self.nodes(at: touch.location(in: self)).first
if touchedNode == audioPlaying {
    // I think setting the texture is more suitable here, instead of creating a new node.
    audioPlaying.texture = SKTexture(imageNamed: "nowaves")
}
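For context, here is the same check wrapped in a touchesBegan override (a sketch; it assumes audioPlaying was already added to the scene):
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    // nodes(at:) returns every node under the touch, topmost first.
    let touchedNode = self.nodes(at: touch.location(in: self)).first
    if touchedNode == audioPlaying {
        audioPlaying.texture = SKTexture(imageNamed: "nowaves")
    }
}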
To add on to @sweeper's answer:
I had all my initial image content in the sceneDidLoad method. Instead, I moved the initial image setup into a method I was already using for other functionality to start the game, and now everything works well.