How to detect which CAShapeLayer was clicked on in macOS using Swift?

I have a view with many CAShapeLayer objects (there can be other CALayer objects as well) and I want to modify the CAShapeLayer object that the user clicks on.
I was experimenting with the two methods below, but neither of them works. Any tips would be great.
Approach one:
private func modifyDrawing(at point: NSPoint) {
    for layer in view.layer!.sublayers! {
        let s = layer.hitTest(point)
        if s != nil && s is CAShapeLayer {
            selectedShape = s as? CAShapeLayer
        }
    }
    // modify some properties
    selectedShape?.shadowRadius = 20
    selectedShape?.shadowOpacity = 1
    selectedShape?.shadowColor = CGColor.black
}
Approach two:
private func modifyDrawing(at point: NSPoint) {
    let drawingsAtMouseClick: [CAShapeLayer] = view.layer!.sublayers!.compactMap { $0 as? CAShapeLayer }
    if drawingsAtMouseClick.isEmpty {
        return
    }
    for drawing in drawingsAtMouseClick {
        if drawing.contains(point) {
            selectedShape = drawing
            break
        }
    }
    // modify some properties
    selectedShape?.shadowRadius = 20
    selectedShape?.shadowOpacity = 1
    selectedShape?.shadowColor = CGColor.black
}
The point parameter passed to these functions is NSEvent.locationInWindow. I'm not sure whether I should convert it into the coordinate space of the CAShapeLayer or something.
P.S.: This isn't production code, so please kindly ignore Swift best practices, etc.

The CALayer.hitTest(_:) method will tell you the layer that was hit, including in a layer's sublayers.
You shouldn't need to check every sublayer yourself; you should be able to ask the view's top-level layer which layer was hit.
A view's layer's coordinates are generally the same as the view's bounds. It's anchored at 0,0 in the parent layer, and the sublayers use that coordinate space. Thus, you should convert your point to view/layer coordinates before hit testing.
(I always have to go check to see which coordinate systems are flipped from the other between iOS and Mac OS and views and layers. You might need to flip coordinates. I leave that research up to you.)
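A minimal sketch of that approach, assuming the point passed in is still the raw NSEvent.locationInWindow and the view is layer-backed (the flipped-coordinates caveat above still applies):
private func modifyDrawing(at locationInWindow: NSPoint) {
    // Convert from window coordinates into this view's coordinate space.
    let pointInView = view.convert(locationInWindow, from: nil)
    // hitTest(_:) walks the sublayer tree for us and returns the deepest layer hit.
    if let shape = view.layer?.hitTest(pointInView) as? CAShapeLayer {
        selectedShape = shape
    }
}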
Edit:
I seem to remember that CALayer.hitTest(_:) just checks that the layer's frame contains the point, not that the layer actually has an opaque pixel at that position. It's more complex if you want to check whether the point lands on an opaque pixel.
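If you only care about shape layers, one hedged shortcut is to test the shape's path rather than its pixels. A sketch, assuming pointInView from the snippet above and macOS 10.13+ for CGPath.contains(_:using:transform:):
if let shape = view.layer?.hitTest(pointInView) as? CAShapeLayer,
   let path = shape.path,
   path.contains(shape.convert(pointInView, from: view.layer)) {
    // The click landed inside the shape's outline, not just inside its frame.
    selectedShape = shape
}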

Related

How to adjust position of CAShapeLayer based upon device size?

I'm attempting to create a CAShapeLayer animation that draws an outline around the frame of a UILabel. Here's the code:
func newQuestionOutline() -> CAShapeLayer {
    let outlineShape = CAShapeLayer()
    outlineShape.isHidden = false
    let circularPath = UIBezierPath(roundedRect: questionLabel.frame, cornerRadius: 5)
    outlineShape.path = circularPath.cgPath
    outlineShape.fillColor = UIColor.clear.cgColor
    outlineShape.strokeColor = UIColor.yellow.cgColor
    outlineShape.lineWidth = 5
    outlineShape.strokeEnd = 0
    view.layer.addSublayer(outlineShape)
    return outlineShape
}
func newQuestionAnimation() {
    let outlineAnimation = CABasicAnimation(keyPath: "strokeEnd")
    outlineAnimation.toValue = 1
    outlineAnimation.duration = 5
    newQuestionOutline().add(outlineAnimation, forKey: "key")
}
The animation performs as expected when running on the simulator for an iPhone 11, which is the device size I used in the storyboard. However, when running the project on a device with different screen dimensions (like an iPhone 8 Plus), the shape is drawn out of place and not around the UILabel as it should be. I used Auto Layout to horizontally and vertically center the UILabel in the view, so the UILabel is centered no matter the device.
Any suggestions? Thanks in advance!
Cheers!
A shape layer is not a view, so it is not subject to Auto Layout. And any time you say something like roundedRect: questionLabel.frame, you are making yourself dependent on what questionLabel.frame is at that moment, which is a huge mistake, because that is exactly what is not determined until Auto Layout determines what the frame will be (and it can change later if Auto Layout changes its mind due to changing conditions, such as rotation).
There are two kinds of solution:
1. Host the shape layer in a view. Now you have something that is subject to Auto Layout. You will still need to redraw the shape layer whenever the view changes its frame, but you can detect that and perform the redraw.
2. Implement your view controller's viewDidLayoutSubviews to detect that Auto Layout has just done its work. Respond by (for example) removing the shape layer and making a new one based on the current conditions, as in the sketch below.
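A minimal sketch of the second option, assuming outlineShape is kept as an optional property so the previous layer can be torn down (newQuestionOutline() is the helper from the question):
var outlineShape: CAShapeLayer?

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // Auto Layout has just settled questionLabel's frame; rebuild the outline from it.
    outlineShape?.removeFromSuperlayer()
    outlineShape = newQuestionOutline()
}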

Hit-test on Animated Character – Getting the correct worldCoordinates

In my ARKit app I have an animated character (stored as a T-Bone model in an SCN file). The animations are taken from several DAE files and applied to the model using SCNAnimationPlayer, like so:
let myAnimation = Animations.configMyAnimationFunction()
myAnimation.stop()
enemyNode.childNodes[2].addAnimationPlayer(myAnimation, forKey: "myKey")
enemyNode.childNodes[2].animationPlayer(forKey: "myKey")?.play()
The animation plays perfectly.
Now I hit-test against the animated geometry, like this:
let currentTouchPoint = touches.first?.location(in: self.sceneView)
let hitTest = sceneView.hitTest(currentTouchPoint!, options: [
    SCNHitTestOption.categoryBitMask: NodeCategory.catEnemy.rawValue,
    SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber
])
let hitObject = hitTest.first?.node // the node returned by the hit test
if hitObject != nil {
    // code...
    let hitLocation = hitTest.first?.worldCoordinates
    // code...
}
I want to use the result from worldCoordinates. But it seems that the result always contains the coordinates from the static T-Bone model, instead of the location the geometry occupies during the animation.
Imagine the animated model is clapping its hands (as a humanoid character) or touching the ground. When I touch the model's hands, the hit test works and returns a result, but at the wrong coordinates.
Apple's documentation describes worldCoordinates as "The point of intersection between the geometry and the search ray, in the scene's world coordinate system." I also tried localCoordinates, but with even less success.
How can I determine the real coordinates at the current touch location on the geometry while the model is animated?

How do I programmatically move an ARAnchor?

I'm trying out the new ARKit to replace another similar solution I have. It's pretty great! But I can't seem to figure out how to move an ARAnchor programmatically. I want to slowly move the anchor to the left of the user.
Creating the anchor to be 2 meters in front of the user:
var translation = matrix_identity_float4x4
translation.columns.3.z = -2.0
let transform = simd_mul(currentFrame.camera.transform, translation)
let anchor = ARAnchor(transform: transform)
sceneView.session.add(anchor: anchor)
Later, moving the object to the left/right of the user (x-axis)...
anchor.transform.columns.3.x = anchor.transform.columns.3.x + 0.1
repeated every 50 milliseconds (or whatever).
The above does not work because transform is a get-only property.
I need a way to change the position of an AR object in space relative to the user in a way that keeps the AR experience intact - meaning, if you move your device, the AR object will be moving but also won't be "stuck" to the camera like it's simply painted on, but moves like you would see a person move while you were walking by - they are moving and you are moving and it looks natural.
Please note the scope of this question relates only to how to move an object in space in relation to the user (ARAnchor), not in relation to a plane (ARPlaneAnchor) or to another detected surface (ARHitTestResult).
Thanks!
You don't need to move anchors. (hand wave) That's not the API you're looking for...
Adding ARAnchor objects to a session is effectively about "labeling" a point in real-world space so that you can refer to it later. The point (1,1,1) (for example) is always the point (1,1,1); you can't move it someplace else, because then it's not the point (1,1,1) anymore.
To make a 2D analogy: anchors are reference points, sort of like the bounds of a view. The system (or another piece of your code) tells the view where its boundaries are, and the view draws its content relative to those boundaries. Anchors in AR give you reference points you can use for drawing content in 3D.
What you're asking is really about moving (and animating the movement of) virtual content between two points. And ARKit itself really isn't about displaying or animating virtual content; there are plenty of great graphics engines out there, so ARKit doesn't need to reinvent that wheel. What ARKit does is provide a real-world frame of reference for you to display or animate content using an existing graphics technology like SceneKit or SpriteKit (or Unity or Unreal, or a custom engine built with Metal or GL).
Since you mentioned trying to do this with SpriteKit... beware, it gets messy. SpriteKit is a 2D engine, and while ARSKView provides some ways to shoehorn a third dimension in there, those ways have their limits.
ARSKView automatically updates the xScale, yScale, and zRotation of each sprite associated with an ARAnchor, providing the illusion of 3D perspective. But that applies only to nodes attached to anchors, and as noted above, anchors are static.
You can, however, add other nodes to your scene, and use those same properties to make those nodes match the ARSKView-managed nodes. Here's some code you can add/replace in the ARKit/SpriteKit Xcode template project to do that. We'll start with some basic logic to run a bouncing animation on the third tap (after using the first two taps to place anchors).
var anchors: [ARAnchor] = []

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // Start bouncing on touch after placing 2 anchors (don't allow more)
    if anchors.count > 1 {
        startBouncing(time: 1)
        return
    }
    // Create anchor using the camera's current position
    guard let sceneView = self.view as? ARSKView else { return }
    if let currentFrame = sceneView.session.currentFrame {
        // Create a transform with a translation of 30 cm in front of the camera
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -0.3
        let transform = simd_mul(currentFrame.camera.transform, translation)

        // Add a new anchor to the session
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
        anchors.append(anchor)
    }
}
Then, some SpriteKit fun for making that animation happen:
var ballNode: SKLabelNode = {
    let labelNode = SKLabelNode(text: "🏀")
    labelNode.horizontalAlignmentMode = .center
    labelNode.verticalAlignmentMode = .center
    return labelNode
}()
func startBouncing(time: TimeInterval) {
    guard
        let sceneView = self.view as? ARSKView,
        let first = anchors.first, let start = sceneView.node(for: first),
        let last = anchors.last, let end = sceneView.node(for: last)
        else { return }

    if ballNode.parent == nil {
        addChild(ballNode)
    }
    ballNode.setScale(start.xScale)
    ballNode.zRotation = start.zRotation
    ballNode.position = start.position

    let scale = SKAction.scale(to: end.xScale, duration: time)
    let rotate = SKAction.rotate(toAngle: end.zRotation, duration: time)
    let move = SKAction.move(to: end.position, duration: time)

    let scaleBack = SKAction.scale(to: start.xScale, duration: time)
    let rotateBack = SKAction.rotate(toAngle: start.zRotation, duration: time)
    let moveBack = SKAction.move(to: start.position, duration: time)

    let action = SKAction.repeatForever(.sequence([
        .group([scale, rotate, move]),
        .group([scaleBack, rotateBack, moveBack])
    ]))
    ballNode.removeAllActions()
    ballNode.run(action)
}
Here's a video so you can see this code in action. You'll notice that the illusion only works as long as you don't move the camera; not so great for AR. When using SKAction, we can't adjust the start/end states of the animation while animating, so the ball keeps bouncing back and forth between its original (screen-space) positions/rotations/scales.
You could do better by animating the ball directly, but it's a lot of work. You'd need to, on every frame (or every view(_:didUpdate:for:) delegate callback):
1. Save off the updated position, rotation, and scale values for the anchor-based nodes at each end of the animation. You'll need to do this twice per didUpdate callback, because you'll get one callback for each node.
2. Work out position, rotation, and scale values for the node being animated, by interpolating between the two endpoint values based on the current time (sketched below).
3. Set the new attributes on the node. (Or maybe animate it to those attributes over a very short duration, so it doesn't jump too much in one frame?)
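A hedged sketch of step 2, using a hypothetical lerp helper; startState and endState stand for the endpoint snapshots saved in step 1, and t is a CGFloat running from 0 to 1 over the bounce:
func lerp(_ a: CGFloat, _ b: CGFloat, _ t: CGFloat) -> CGFloat {
    return a + (b - a) * t
}

// Inside the per-frame update, with startState/endState captured in step 1:
ballNode.position = CGPoint(x: lerp(startState.position.x, endState.position.x, t),
                            y: lerp(startState.position.y, endState.position.y, t))
ballNode.setScale(lerp(startState.xScale, endState.xScale, t))
ballNode.zRotation = lerp(startState.zRotation, endState.zRotation, t)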
That's kind of a lot of work to shoehorn a fake 3D illusion into a 2D graphics toolkit; hence my comments about SpriteKit not being a great first step into ARKit.
If you want 3D positioning and animation for your AR overlays, it's a lot easier to use a 3D graphics toolkit. Here's a repeat of the previous example, but using SceneKit instead. Start with the ARKit/SceneKit Xcode template, take the spaceship out, and paste the same touchesBegan function from above into the ViewController. (Change the as ARSKView casts to as ARSCNView, too.)
Then, some quick code for placing 2D billboarded sprites, matching via SceneKit the behavior of the ARKit/SpriteKit template:
// in global scope
func makeBillboardNode(image: UIImage) -> SCNNode {
    let plane = SCNPlane(width: 0.1, height: 0.1)
    plane.firstMaterial!.diffuse.contents = image
    let node = SCNNode(geometry: plane)
    node.constraints = [SCNBillboardConstraint()]
    return node
}
// inside ViewController
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // emoji to image based on https://stackoverflow.com/a/41021662/957768
    let billboard = makeBillboardNode(image: "⛹️".image())
    node.addChildNode(billboard)
}
Finally, adding the animation for the bouncing ball:
let ballNode = makeBillboardNode(image: "🏀".image())

func startBouncing(time: TimeInterval) {
    guard
        let sceneView = self.view as? ARSCNView,
        let first = anchors.first, let start = sceneView.node(for: first),
        let last = anchors.last, let end = sceneView.node(for: last)
        else { return }

    if ballNode.parent == nil {
        sceneView.scene.rootNode.addChildNode(ballNode)
    }

    let animation = CABasicAnimation(keyPath: #keyPath(SCNNode.transform))
    animation.fromValue = start.transform
    animation.toValue = end.transform
    animation.duration = time
    animation.autoreverses = true
    animation.repeatCount = .infinity
    ballNode.removeAllAnimations()
    ballNode.addAnimation(animation, forKey: nil)
}
This time the animation code is a lot shorter than the SpriteKit version.
Here's how it looks in action.
Because we're working in 3D to start with, we're actually animating between two 3D positions; unlike in the SpriteKit version, the animation stays where it's supposed to. (And without the extra work of directly interpolating and animating attributes.)

Animation in Core Graphics

I followed this awesome Ray Wenderlich tutorial to make a Bezier arc and increment/decrement values. But how do I animate the arc instead of just stepping it up and down?
http://www.raywenderlich.com/90690/modern-core-graphics-with-swift-part-1
I tried putting the animation block in the custom property declaration, which I don't think is the right place to do it, and Xcode doesn't let me do it anyway.
@IBInspectable var counter: Int = 5 {
    didSet {
        if counter <= NoOfGlasses {
            // the view needs to be refreshed
            UIView.animateWithDuration(0.2, animations: {
                self.setNeedsDisplay()
            }, completion: nil)
        }
    }
}
I also tried putting the increment in an animation block in the view controller; that didn't work either.
@IBAction func btnPushButton(sender: AnyObject) {
    UIView.animateWithDuration(0.2, animations: {
        self.arcView.counter = self.arcView.counter + 10
        self.counterLabel.text = String(self.arcView.counter)
    }, completion: nil)
}
Are you describing this sort of thing?
That's a simpler example - it's just a drawn triangle - but it's the same idea, if I'm understanding you correctly: we are animating the difference between one drawing and another.
Basically you have two choices. The easy way is to use CAShapeLayer, which animates for you automatically when you change its path. The other choice is to do what I'm doing here, which is to create a custom animatable property - in this case, a property representing the x-position of the bottom point of the triangle.
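Two minimal sketches, one per choice; names like arcLayer, oldPath, newPath, and fraction are hypothetical stand-ins for your own:
// Choice 1: CAShapeLayer. Changing `path` on a standalone shape layer
// animates implicitly; an explicit animation gives you control of the timing.
let animation = CABasicAnimation(keyPath: "path")
animation.fromValue = oldPath
animation.toValue = newPath
animation.duration = 0.2
arcLayer.add(animation, forKey: "path")
arcLayer.path = newPath // update the model value so the new shape sticks afterwards

// Choice 2: a custom animatable property on a CALayer subclass.
class ArcLayer: CALayer {
    @NSManaged var fraction: CGFloat // drives how much of the arc is drawn

    override class func needsDisplay(forKey key: String) -> Bool {
        return key == "fraction" || super.needsDisplay(forKey: key)
    }

    override func draw(in ctx: CGContext) {
        // Redraw the arc using `fraction`; during an animation the presentation
        // layer's interpolated value lands here on every frame.
    }
}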

Why does overriding the position variable of my SKSpriteNode subclass slow things down so much?

In Scratch there is a cool function called penDown which causes your sprite to trace a line of some color across the screen whenever it moves from A to B. I wanted to recreate this behavior by subclassing SKSpriteNode and getting notified whenever the position changes. However, this simple override is causing the whole thing to slow down a ton (FPS drops from 20 to 7 with only two sprites):
override var position: CGPoint {
    get {
        return super.position
    }
    set {
        super.position = newValue
        // Add this new point to the bezier path of the line so that I can trace it.
    }
}
Why is this happening?
In this case you should be using property observers instead of overriding get and set.
override var position: CGPoint {
    didSet {
        // Add this new point to the bezier path of the line so that I can trace it.
    }
}
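A slightly fuller sketch of the pen-down idea, assuming a hypothetical penPath that a separate SKShapeNode could render; the isEmpty guard matters because addLine(to:) needs a current point to extend from:
import SpriteKit

class PenSprite: SKSpriteNode {
    let penPath = CGMutablePath()

    override var position: CGPoint {
        didSet {
            if penPath.isEmpty {
                penPath.move(to: oldValue)
            }
            penPath.addLine(to: position)
        }
    }
}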