Core Graphics with DisplayLink Unexpected Behavior - swift

I'm trying to learn Core Graphics and am having trouble understanding the behavior of the code I've written, which uses a subclassed UIView and an override of the draw(_ rect:) function.
I've written a basic bouncing ball demo. Any number of random balls are created with random position and speed. They then bounce around the screen.
My issue is that the way the balls appear to move is unexpected, and there is a lot of flicker. Here is the sequence, inside for loops that iterate over all the balls:
Check for collisions.
If there is a collision with the wall, multiply speed by -1.
Increment ball position by ball speed.
I'm currently not clearing the context, so I would expect the existing balls to stay put. Instead they seem to slide smoothly along with the ball that's moving.
I'd like to understand why this is the case.
Here is an image of how the current code runs at 4 fps so that you can see how the shapes are being drawn and shift back and forth:
Here is my code:
class ViewController: UIViewController {
    let myView = MyView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBlue
        myView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(myView)
        NSLayoutConstraint.activate([
            myView.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            myView.centerYAnchor.constraint(equalTo: view.centerYAnchor),
            myView.widthAnchor.constraint(equalTo: view.widthAnchor),
            myView.heightAnchor.constraint(equalTo: view.heightAnchor)
        ])
        createDisplayLink(fps: 60)
    }

    func createDisplayLink(fps: Int) {
        let displaylink = CADisplayLink(target: self,
                                        selector: #selector(step))
        displaylink.preferredFramesPerSecond = fps
        displaylink.add(to: .current,
                        forMode: RunLoop.Mode.default)
    }

    @objc func step(displaylink: CADisplayLink) {
        myView.setNeedsDisplay()
    }
}
class MyView: UIView {
    let numBalls = 5
    var balls = [Ball]()

    override init(frame: CGRect) {
        super.init(frame: frame)
        for _ in 0..<numBalls {
            balls.append(
                Ball(
                    ballPosition: Vec2(x: CGFloat.random(in: 0...UIScreen.main.bounds.width), y: CGFloat.random(in: 0...UIScreen.main.bounds.height)),
                    ballSpeed: Vec2(x: CGFloat.random(in: 0.5...2), y: CGFloat.random(in: 0.5...2))))
        }
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        for i in 0..<numBalls {
            if balls[i].ballPosition.x > self.bounds.width - balls[i].ballSize || balls[i].ballPosition.x < 0 {
                balls[i].ballSpeed.x *= -1
            }
            balls[i].ballPosition.x += balls[i].ballSpeed.x
            if balls[i].ballPosition.y > self.bounds.height - balls[i].ballSize || balls[i].ballPosition.y < 0 {
                balls[i].ballSpeed.y *= -1
            }
            balls[i].ballPosition.y += balls[i].ballSpeed.y
        }
        for i in 0..<numBalls {
            context.setFillColor(UIColor.red.cgColor)
            context.setStrokeColor(UIColor.green.cgColor)
            context.setLineWidth(0)
            let rectangle = CGRect(x: balls[i].ballPosition.x, y: balls[i].ballPosition.y, width: balls[i].ballSize, height: balls[i].ballSize)
            context.addEllipse(in: rectangle)
            context.drawPath(using: .fillStroke)
        }
    }
}

There are a lot of misunderstandings here, so I'll try to take them one by one:
CADisplayLink does not promise it will call your step method every 1/60 of a second. There's a reason the property is called preferred frames per second. It's just a hint to the system of what you'd like. It may call you less often, and in any case there will be some amount of error.
To perform your own animations by hand, you need to look at what time is actually attached to the given frame, and use that to determine where things are. The CADisplayLink includes a timestamp to let you know that. You can't just increment by speed. You need to multiply speed by actual time to determine distance.
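For illustration, here's a minimal sketch of time-based movement (the names ball and speed are placeholders for your model, not code from the question):

var lastTimestamp: CFTimeInterval?

@objc func step(displaylink: CADisplayLink) {
    defer { lastTimestamp = displaylink.timestamp }
    guard let last = lastTimestamp else { return } // nothing to diff against on the first frame
    let elapsed = CGFloat(displaylink.timestamp - last)
    // Speed is points per second, so distance = speed * elapsed time.
    ball.position.x += ball.speed.x * elapsed
    ball.position.y += ball.speed.y * elapsed
    myView.setNeedsDisplay()
}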
"I'm currently not clearing the context, so I would expect the existing balls to stay put." Every time draw(rect:) is called, you receive a fresh context. It is your responsibility to draw everything for the current frame. There is no persistence between frames. (Core Animation generally provides those kinds of features by efficiently composing CALayers together; but you've chosen to use Core Graphics, and there you need to draw everything every time. We generally do not use Core Graphics this way.)
myView.setNeedsDisplay() does not mean "draw this frame right now." It means "the next time you're going to draw, this view needs to be redrawn." Depending on exactly when the CADisplayLink fires, you may drop a frame, or you might not. Using Core Graphics, you would need to update all the circles' locations before calling setNeedsDisplay(). Then draw(rect:) should just draw them, not compute what they are. (CADisplayLink is really designed to work with CALayers, though, and UIView drawing isn't designed to be updated so often, so this still may be a little tricky to keep smooth.)
The more normal way to create this system would be to generate a CAShapeLayer for each ball and position them on the UIView's layer. Then in the CADisplayLink callback, you would adjust their positions based on the timestamp of the next frame. Alternately, you could just set up a repeating Timer or DispatchSourceTimer (rather than a CADisplayLink) at something well below the screen refresh speed (like 1/20 s) and move the layer positions in that callback. This would be nice and simple and avoid the complexities of CADisplayLink (which is much more powerful, but expects you to use the timestamp and consider other soft real-time concerns).
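Here's a rough sketch of that layer-based approach, for a single ball with made-up sizes and speeds (a real version would manage an array of layers and invalidate the display link):

import UIKit

final class BouncingBallView: UIView {
    private let ballLayer = CAShapeLayer()
    private var speed = CGVector(dx: 120, dy: 90) // points per second

    override init(frame: CGRect) {
        super.init(frame: frame)
        ballLayer.path = CGPath(ellipseIn: CGRect(x: 0, y: 0, width: 20, height: 20), transform: nil)
        ballLayer.fillColor = UIColor.red.cgColor
        layer.addSublayer(ballLayer)
        // Note: CADisplayLink retains its target, so a real version must invalidate it.
        let link = CADisplayLink(target: self, selector: #selector(step))
        link.add(to: .current, forMode: RunLoop.Mode.default)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func step(_ link: CADisplayLink) {
        // Approximate frame duration: time from this callback until the next frame is due.
        let dt = CGFloat(link.targetTimestamp - link.timestamp)
        var p = ballLayer.position
        p.x += speed.dx * dt
        p.y += speed.dy * dt
        if p.x < 0 || p.x > bounds.width { speed.dx *= -1 }
        if p.y < 0 || p.y > bounds.height { speed.dy *= -1 }
        CATransaction.begin()
        CATransaction.setDisableActions(true) // move the layer without implicit animation
        ballLayer.position = p
        CATransaction.commit()
    }
}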

Related

GameplayKit GKGoal: can't get wandering to work

Trying to learn how to use GameplayKit, and in particular, agents & behaviors. Trying to boil down all the tutorials and examples out there into a small, simple piece of code that I can use as a starting point for my own app. Unfortunately, what I've come up with doesn't work properly, and I can't figure out why. It's just a simple sprite with a simple GKGoal(toWander:). Rather than wandering, it just moves in a straight line to the right, forever. It also starts out impossibly slowly and speeds up impossibly slowly, despite my setting the max speed & acceleration to ridiculously high values. I can't figure out the fundamental difference between my simple code and all the complex examples out there. Here's the code, minus required init?(coder aDecoder: NSCoder):
class GremlinAgent: GKAgent2D {
    override init() {
        super.init()
        maxAcceleration = 100000
        maxSpeed = 1000000
        radius = 20
    }

    override func update(deltaTime seconds: TimeInterval) {
        super.update(deltaTime: seconds)
        let goal = GKGoal(toWander: 100)
        behavior = GKBehavior(goal: goal, weight: 1)
    }
}
class Gremlin: GKEntity {
    let sprite: SKShapeNode

    init(scene: GameScene) {
        sprite = SKShapeNode(circleOfRadius: 20)
        sprite.fillColor = .blue
        scene.addChild(sprite)
        super.init()
        let agent = GremlinAgent()
        addComponent(agent)
        let node = GKSKNodeComponent(node: sprite)
        addComponent(node)
        agent.delegate = node
    }
}
And in GameScene.swift, didMove(to view:):
let gremlin = Gremlin(scene: self)
entities.append(gremlin)
Can anyone help me out?
As has been pointed out elsewhere, you have to set the weight for the goal very high. Try 100, or even 1000, and notice the difference in behavior. But even with these large weights, there's still a problem in your example: the maxSpeed value. You can't set it so high, or your sprite will just fly off in a straight line. Set it to a value closer to the speed you set in the GKGoal object.
Also notice that the wandering will always start off in the direction your sprite is pointing, so if you don't want it to always start off moving to the right, set zRotation to some random value.
Finally, don't create a new behavior in every call to update(). For wandering, you can just set it once, say, in init().
Here's some code that works:
class GremlinAgent: GKAgent2D {
    override init() {
        super.init()
        maxAcceleration = 100000
        maxSpeed = 100
        let goal = GKGoal(toWander: 100)
        behavior = GKBehavior(goal: goal, weight: 1000)
    }
}
class Gremlin: GKEntity {
    let sprite: SKShapeNode

    init(scene: GameScene) {
        sprite = SKShapeNode(circleOfRadius: 20)
        sprite.fillColor = .blue
        // zRotation is in radians, so convert the random 0-360 value from degrees.
        sprite.zRotation = CGFloat(GKRandomDistribution(lowestValue: 0, highestValue: 360).nextInt()) * .pi / 180
        scene.addChild(sprite)
        super.init()
        let agent = GremlinAgent()
        addComponent(agent)
        let node = GKSKNodeComponent(node: sprite)
        addComponent(node)
        agent.delegate = node
    }
}

How do I programmatically move an ARAnchor?

I'm trying out the new ARKit to replace another similar solution I have. It's pretty great! But I can't seem to figure out how to move an ARAnchor programmatically. I want to slowly move the anchor to the left of the user.
Creating the anchor to be 2 meters in front of the user:
var translation = matrix_identity_float4x4
translation.columns.3.z = -2.0
let transform = simd_mul(currentFrame.camera.transform, translation)
let anchor = ARAnchor(transform: transform)
sceneView.session.add(anchor: anchor)
later, moving the object to the left/right of the user (x-axis)...
anchor.transform.columns.3.x = anchor.transform.columns.3.x + 0.1
repeated every 50 milliseconds (or whatever).
The above does not work because transform is a get-only property.
I need a way to change the position of an AR object in space relative to the user in a way that keeps the AR experience intact - meaning, if you move your device, the AR object will be moving but also won't be "stuck" to the camera like it's simply painted on, but moves like you would see a person move while you were walking by - they are moving and you are moving and it looks natural.
Please note the scope of this question relates only to how to move an object in space in relation to the user (ARAnchor), not in relation to a plane (ARPlaneAnchor) or to another detected surface (ARHitTestResult).
Thanks!
You don't need to move anchors. (hand wave) That's not the API you're looking for...
Adding ARAnchor objects to a session is effectively about "labeling" a point in real-world space so that you can refer to it later. The point (1,1,1) (for example) is always the point (1,1,1) — you can't move it someplace else because then it's not the point (1,1,1) anymore.
To make a 2D analogy: anchors are reference points sort of like the bounds of a view. The system (or another piece of your code) tells the view where its boundaries are, and the view draws its content relative to those boundaries. Anchors in AR give you reference points you can use for drawing content in 3D.
What you're asking is really about moving (and animating the movement of) virtual content between two points. And ARKit itself really isn't about displaying or animating virtual content — there are plenty of great graphics engines out there, so ARKit doesn't need to reinvent that wheel. What ARKit does is provide a real-world frame of reference for you to display or animate content using an existing graphics technology like SceneKit or SpriteKit (or Unity or Unreal, or a custom engine built with Metal or GL).
Since you mentioned trying to do this with SpriteKit... beware, it gets messy. SpriteKit is a 2D engine, and while ARSKView provides some ways to shoehorn a third dimension in there, those ways have their limits.
ARSKView automatically updates the xScale, yScale, and zRotation of each sprite associated with an ARAnchor, providing the illusion of 3D perspective. But that applies only to nodes attached to anchors, and as noted above, anchors are static.
You can, however, add other nodes to your scene, and use those same properties to make those nodes match the ARSKView-managed nodes. Here's some code you can add/replace in the ARKit/SpriteKit Xcode template project to do that. We'll start with some basic logic to run a bouncing animation on the third tap (after using the first two taps to place anchors).
var anchors: [ARAnchor] = []

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // Start bouncing on touch after placing 2 anchors (don't allow more)
    if anchors.count > 1 {
        startBouncing(time: 1)
        return
    }
    // Create anchor using the camera's current position
    guard let sceneView = self.view as? ARSKView else { return }
    if let currentFrame = sceneView.session.currentFrame {
        // Create a transform with a translation of 30 cm in front of the camera
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -0.3
        let transform = simd_mul(currentFrame.camera.transform, translation)
        // Add a new anchor to the session
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
        anchors.append(anchor)
    }
}
Then, some SpriteKit fun for making that animation happen:
var ballNode: SKLabelNode = {
    let labelNode = SKLabelNode(text: "🏀")
    labelNode.horizontalAlignmentMode = .center
    labelNode.verticalAlignmentMode = .center
    return labelNode
}()
func startBouncing(time: TimeInterval) {
    guard
        let sceneView = self.view as? ARSKView,
        let first = anchors.first, let start = sceneView.node(for: first),
        let last = anchors.last, let end = sceneView.node(for: last)
        else { return }

    if ballNode.parent == nil {
        addChild(ballNode)
    }
    ballNode.setScale(start.xScale)
    ballNode.zRotation = start.zRotation
    ballNode.position = start.position

    let scale = SKAction.scale(to: end.xScale, duration: time)
    let rotate = SKAction.rotate(toAngle: end.zRotation, duration: time)
    let move = SKAction.move(to: end.position, duration: time)

    let scaleBack = SKAction.scale(to: start.xScale, duration: time)
    let rotateBack = SKAction.rotate(toAngle: start.zRotation, duration: time)
    let moveBack = SKAction.move(to: start.position, duration: time)

    let action = SKAction.repeatForever(.sequence([
        .group([scale, rotate, move]),
        .group([scaleBack, rotateBack, moveBack])
    ]))
    ballNode.removeAllActions()
    ballNode.run(action)
}
Here's a video so you can see this code in action. You'll notice that the illusion only works as long as you don't move the camera — not so great for AR. When using SKAction, we can't adjust the start/end states of the animation while animating, so the ball keeps bouncing back and forth between its original (screen-space) positions/rotations/scales.
You could do better by animating the ball directly, but it's a lot of work. You'd need to, on every frame (or every view(_:didUpdate:for:) delegate callback):
Save off the updated position, rotation, and scale values for the anchor-based nodes at each end of the animation. You'll need to do this twice per didUpdate callback, because you'll get one callback for each node.
Work out position, rotation, and scale values for the node being animated, by interpolating between the two endpoint values based on the current time.
Set the new attributes on the node. (Or maybe animate it to those attributes over a very short duration, so it doesn't jump too much in one frame?)
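A minimal sketch of step 2, the interpolation (all names here are hypothetical; start and end stand for the saved endpoint states, and t is the 0...1 animation progress derived from the current frame time):

func updateBallNode(_ ballNode: SKNode, from start: SKNode, to end: SKNode, progress t: CGFloat) {
    // Linear interpolation between the two endpoint values.
    func lerp(_ a: CGFloat, _ b: CGFloat) -> CGFloat { return a + (b - a) * t }
    ballNode.position = CGPoint(x: lerp(start.position.x, end.position.x),
                                y: lerp(start.position.y, end.position.y))
    ballNode.zRotation = lerp(start.zRotation, end.zRotation)
    ballNode.setScale(lerp(start.xScale, end.xScale))
}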
That's kind of a lot of work to shoehorn a fake 3D illusion into a 2D graphics toolkit — hence my comments about SpriteKit not being a great first step into ARKit.
If you want 3D positioning and animation for your AR overlays, it's a lot easier to use a 3D graphics toolkit. Here's a repeat of the previous example, but using SceneKit instead. Start with the ARKit/SceneKit Xcode template, take the spaceship out, and paste the same touchesBegan function from above into the ViewController. (Change the as ARSKView casts to as ARSCNView, too.)
Then, some quick code for placing 2D billboarded sprites, matching via SceneKit the behavior of the ARKit/SpriteKit template:
// in global scope
func makeBillboardNode(image: UIImage) -> SCNNode {
    let plane = SCNPlane(width: 0.1, height: 0.1)
    plane.firstMaterial!.diffuse.contents = image
    let node = SCNNode(geometry: plane)
    node.constraints = [SCNBillboardConstraint()]
    return node
}
// inside ViewController
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // emoji to image based on https://stackoverflow.com/a/41021662/957768
    let billboard = makeBillboardNode(image: "⛹️".image())
    node.addChildNode(billboard)
}
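The image() method on String isn't a UIKit API; it comes from the linked answer. A minimal sketch of such an extension (the canvas and font sizes here are arbitrary choices):

extension String {
    func image() -> UIImage {
        let size = CGSize(width: 100, height: 100)
        return UIGraphicsImageRenderer(size: size).image { _ in
            // Draw the string (here, an emoji) into a transparent bitmap.
            (self as NSString).draw(in: CGRect(origin: .zero, size: size),
                                    withAttributes: [.font: UIFont.systemFont(ofSize: 90)])
        }
    }
}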
Finally, adding the animation for the bouncing ball:
let ballNode = makeBillboardNode(image: "🏀".image())

func startBouncing(time: TimeInterval) {
    guard
        let sceneView = self.view as? ARSCNView,
        let first = anchors.first, let start = sceneView.node(for: first),
        let last = anchors.last, let end = sceneView.node(for: last)
        else { return }

    if ballNode.parent == nil {
        sceneView.scene.rootNode.addChildNode(ballNode)
    }

    let animation = CABasicAnimation(keyPath: #keyPath(SCNNode.transform))
    animation.fromValue = start.transform
    animation.toValue = end.transform
    animation.duration = time
    animation.autoreverses = true
    animation.repeatCount = .infinity
    ballNode.removeAllAnimations()
    ballNode.addAnimation(animation, forKey: nil)
}
This time the animation code is a lot shorter than the SpriteKit version.
Here's how it looks in action.
Because we're working in 3D to start with, we're actually animating between two 3D positions — unlike in the SpriteKit version, the animation stays where it's supposed to. (And without the extra work for directly interpolating and animating attributes.)

How to reproduce this Xcode blue drag line

I'd like to reproduce the Xcode blue drag line in my app.
Do you know a way to code this?
I know how to draw a line using Core Graphics...
But this line has to be over the top of all other items (on the screen).
I'm posting this after you've posted your own answer, so this is probably a huge waste of time. But your answer only covers drawing a really bare-bones line on the screen and doesn't cover a bunch of other interesting stuff that you need to take care of to really replicate Xcode's behavior and even go beyond it:
drawing a nice connection line like Xcode's (with a shadow, an outline, and big rounded ends),
drawing the line across multiple screens,
using Cocoa drag and drop to find the drag target and to support spring-loading.
Here's a demo of what I'm going to explain in this answer:
In this github repo, you can find an Xcode project containing all the code in this answer plus the remaining glue code necessary to run a demo app.
Drawing a nice connection line like Xcode's
Xcode's connection line looks like an old-timey barbell. It has a straight bar of arbitrary length, with a circular bell at each end:
What do we know about that shape? The user provides the start and end points (the centers of the bells) by dragging the mouse, and our user interface designer specifies the radius of the bells and the thickness of the bar:
The length of the bar is the distance from startPoint to endPoint: length = hypot(endPoint.x - startPoint.x, endPoint.y - startPoint.y).
To simplify the process of creating a path for this shape, let's draw it in a standard pose, with the left bell at the origin and the bar parallel to the x axis. In this pose, here's what we know:
We can create this shape as a path by making a circular arc centered at the origin, connected to another (mirror image) circular arc centered at (length, 0). To create these arcs, we need this mysteryAngle:
We can figure out mysteryAngle if we can find any of the arc endpoints where the bell meets the bar. Specifically, we'll find the coordinates of this point:
What do we know about that mysteryPoint? We know it's at the intersection of the bell and the top of the bar. So we know it's at distance bellRadius from the origin, and at distance barThickness / 2 from the x axis:
So immediately we know that mysteryPoint.y = barThickness / 2, and we can use the Pythagorean theorem to compute mysteryPoint.x = sqrt(bellRadius² - mysteryPoint.y²).
With mysteryPoint located, we can compute mysteryAngle using our choice of inverse trigonometry function. Arcsine, I choose you! mysteryAngle = asin(mysteryPoint.y / bellRadius).
We now know everything we need to create the path in the standard pose. To move it from the standard pose to the desired pose (which goes from startPoint to endPoint, remember?), we'll apply an affine transform. The transform will translate (move) the path so the left bell is centered at startPoint and rotate the path so the right bell ends up at endPoint.
In writing the code to create the path, we want to be careful of a few things:
What if the length is so short that the bells overlap? We should handle that gracefully by adjusting mysteryAngle so the bells connect seamlessly with no weird “negative bar” between them.
What if bellRadius is smaller than barThickness / 2? We should handle that gracefully by forcing bellRadius to be at least barThickness / 2.
What if length is zero? We need to avoid division by zero.
Here's my code to create the path, handling all those cases:
extension CGPath {
    class func barbell(from start: CGPoint, to end: CGPoint, barThickness proposedBarThickness: CGFloat, bellRadius proposedBellRadius: CGFloat) -> CGPath {
        let barThickness = max(0, proposedBarThickness)
        let bellRadius = max(barThickness / 2, proposedBellRadius)

        let vector = CGPoint(x: end.x - start.x, y: end.y - start.y)
        let length = hypot(vector.x, vector.y)

        if length == 0 {
            return CGPath(ellipseIn: CGRect(origin: start, size: .zero).insetBy(dx: -bellRadius, dy: -bellRadius), transform: nil)
        }

        var yOffset = barThickness / 2
        var xOffset = sqrt(bellRadius * bellRadius - yOffset * yOffset)
        let halfLength = length / 2
        if xOffset > halfLength {
            xOffset = halfLength
            yOffset = sqrt(bellRadius * bellRadius - xOffset * xOffset)
        }
        let jointRadians = asin(yOffset / bellRadius)

        let path = CGMutablePath()
        path.addArc(center: .zero, radius: bellRadius, startAngle: jointRadians, endAngle: -jointRadians, clockwise: false)
        path.addArc(center: CGPoint(x: length, y: 0), radius: bellRadius, startAngle: .pi + jointRadians, endAngle: .pi - jointRadians, clockwise: false)
        path.closeSubpath()

        let unitVector = CGPoint(x: vector.x / length, y: vector.y / length)
        var transform = CGAffineTransform(a: unitVector.x, b: unitVector.y, c: -unitVector.y, d: unitVector.x, tx: start.x, ty: start.y)
        return path.copy(using: &transform)!
    }
}
Once we have the path, we need to fill it with the correct color, stroke it with the correct color and line width, and draw a shadow around it. I used Hopper Disassembler on IDEInterfaceBuilderKit to figure out Xcode's exact sizes and colors. Xcode draws it all into a graphics context in a custom view's drawRect:, but we'll make our custom view use a CAShapeLayer. We won't end up drawing the shadow precisely the same as Xcode, but it's close enough.
class ConnectionView: NSView {
    struct Parameters {
        var startPoint = CGPoint.zero
        var endPoint = CGPoint.zero
        var barThickness = CGFloat(2)
        var ballRadius = CGFloat(3)
    }

    var parameters = Parameters() { didSet { needsLayout = true } }

    override init(frame: CGRect) {
        super.init(frame: frame)
        commonInit()
    }

    required init?(coder decoder: NSCoder) {
        super.init(coder: decoder)
        commonInit()
    }

    let shapeLayer = CAShapeLayer()
    override func makeBackingLayer() -> CALayer { return shapeLayer }

    override func layout() {
        super.layout()
        shapeLayer.path = CGPath.barbell(from: parameters.startPoint, to: parameters.endPoint, barThickness: parameters.barThickness, bellRadius: parameters.ballRadius)
        shapeLayer.shadowPath = CGPath.barbell(from: parameters.startPoint, to: parameters.endPoint, barThickness: parameters.barThickness + shapeLayer.lineWidth / 2, bellRadius: parameters.ballRadius + shapeLayer.lineWidth / 2)
    }

    private func commonInit() {
        wantsLayer = true
        shapeLayer.lineJoin = kCALineJoinMiter
        shapeLayer.lineWidth = 0.75
        shapeLayer.strokeColor = NSColor.white.cgColor
        shapeLayer.fillColor = NSColor(calibratedHue: 209/360, saturation: 0.83, brightness: 1, alpha: 1).cgColor
        shapeLayer.shadowColor = NSColor.selectedControlColor.blended(withFraction: 0.2, of: .black)?.withAlphaComponent(0.85).cgColor
        shapeLayer.shadowRadius = 3
        shapeLayer.shadowOpacity = 1
        shapeLayer.shadowOffset = .zero
    }
}
We can test this in a playground to make sure it looks good:
import PlaygroundSupport

let view = NSView()
view.setFrameSize(CGSize(width: 400, height: 200))
view.wantsLayer = true
view.layer!.backgroundColor = NSColor.white.cgColor

PlaygroundPage.current.liveView = view

for i: CGFloat in stride(from: 0, through: 9, by: CGFloat(0.4)) {
    let connectionView = ConnectionView(frame: view.bounds)
    connectionView.parameters.startPoint = CGPoint(x: CGFloat(i) * 40 + 15, y: 50)
    connectionView.parameters.endPoint = CGPoint(x: CGFloat(i) * 40 + 15, y: 50 + CGFloat(i))
    view.addSubview(connectionView)
}

let connectionView = ConnectionView(frame: view.bounds)
connectionView.parameters.startPoint = CGPoint(x: 50, y: 100)
connectionView.parameters.endPoint = CGPoint(x: 350, y: 150)
view.addSubview(connectionView)
Here's the result:
Drawing across multiple screens
If you have multiple screens (displays) attached to your Mac, and if you have “Displays have separate Spaces” turned on (which is the default) in the Mission Control panel of your System Preferences, then macOS will not let a window span two screens. This means that you can't use a single window to draw the connecting line across multiple monitors. This matters if you want to let the user connect an object in one window to an object in another window, like Xcode does:
Here's the checklist for drawing the line, across multiple screens, on top of our other windows:
We need to create one window per screen.
We need to set up each window to fill its screen and be completely transparent with no shadow.
We need to set the window level of each window to 1 to keep it above our normal windows (which have a window level of 0).
We need to tell each window not to release itself when closed, because we don't like mysterious autorelease pool crashes.
Each window needs its own ConnectionView.
To keep the coordinate systems uniform, we'll adjust the bounds of each ConnectionView so that its coordinate system matches the screen coordinate system.
We'll tell each ConnectionView to draw the entire connecting line; each view will clip what it draws to its own bounds.
It probably won't happen, but we'll arrange to be notified if the screen arrangement changes. If that happens, we'll add/remove/update windows to cover the new arrangement.
Let's make a class to encapsulate all these details. With an instance of LineOverlay, we can update the start and end points of the connection as needed, and remove the overlay from the screen when we're done.
class LineOverlay {

    init(startScreenPoint: CGPoint, endScreenPoint: CGPoint) {
        self.startScreenPoint = startScreenPoint
        self.endScreenPoint = endScreenPoint
        NotificationCenter.default.addObserver(self, selector: #selector(LineOverlay.screenLayoutDidChange(_:)), name: .NSApplicationDidChangeScreenParameters, object: nil)
        synchronizeWindowsToScreens()
    }

    var startScreenPoint: CGPoint { didSet { setViewPoints() } }
    var endScreenPoint: CGPoint { didSet { setViewPoints() } }

    func removeFromScreen() {
        windows.forEach { $0.close() }
        windows.removeAll()
    }

    private var windows = [NSWindow]()

    deinit {
        NotificationCenter.default.removeObserver(self)
        removeFromScreen()
    }

    @objc private func screenLayoutDidChange(_ note: Notification) {
        synchronizeWindowsToScreens()
    }

    private func synchronizeWindowsToScreens() {
        var spareWindows = windows
        windows.removeAll()
        for screen in NSScreen.screens() ?? [] {
            let window: NSWindow
            if let index = spareWindows.index(where: { $0.screen === screen}) {
                window = spareWindows.remove(at: index)
            } else {
                let styleMask = NSWindowStyleMask.borderless
                window = NSWindow(contentRect: .zero, styleMask: styleMask, backing: .buffered, defer: true, screen: screen)
                window.contentView = ConnectionView()
                window.isReleasedWhenClosed = false
                window.ignoresMouseEvents = true
            }
            windows.append(window)
            window.setFrame(screen.frame, display: true)

            // Make the view's geometry match the screen geometry for simplicity.
            let view = window.contentView!
            var rect = view.bounds
            rect = view.convert(rect, to: nil)
            rect = window.convertToScreen(rect)
            view.bounds = rect

            window.backgroundColor = .clear
            window.isOpaque = false
            window.hasShadow = false
            window.isOneShot = true
            window.level = 1

            window.contentView?.needsLayout = true
            window.orderFront(nil)
        }

        spareWindows.forEach { $0.close() }
    }

    private func setViewPoints() {
        for window in windows {
            let view = window.contentView! as! ConnectionView
            view.parameters.startPoint = startScreenPoint
            view.parameters.endPoint = endScreenPoint
        }
    }
}
Using Cocoa drag and drop to find the drag target and perform spring-loading
We need a way to find the (potential) drop target of the connection as the user drags the mouse around. It would also be nice to support spring loading.
In case you don't know, spring loading is a macOS feature in which, if you hover a drag over a container for a moment, macOS will automatically open the container without interrupting the drag. Examples:
If you drag onto a window that's not the frontmost window, macOS will bring the window to the front.
If you drag onto a Finder folder icon, the Finder will open the folder window to let you drag onto an item in the folder.
If you drag onto a tab handle (at the top of the window) in Safari or Chrome, the browser will select the tab, letting you drop your item in the tab.
If you control-drag a connection in Xcode onto a menu item in the menu bar in your storyboard or xib, Xcode will open the item's menu.
If we use the standard Cocoa drag and drop support to track the drag and find the drop target, then we'll get spring loading support “for free”.
To support standard Cocoa drag and drop, we need to implement the NSDraggingSource protocol on some object, so we can drag from something, and the NSDraggingDestination protocol on some other object, so we can drag to something. We'll implement NSDraggingSource in a class called ConnectionDragController, and we'll implement NSDraggingDestination in a custom view class called DragEndpoint.
First, let's look at DragEndpoint (an NSView subclass). NSView already conforms to NSDraggingDestination, but doesn't do much with it. We need to implement four methods of the NSDraggingDestination protocol. The drag session will call these methods to let us know when the drag enters and leaves the destination, when the drag ends entirely, and when to “perform” the drag (assuming this destination was where the drag actually ended). We also need to register the type of dragged data that we can accept.
We want to be careful of two things:
We only want to accept a drag that is a connection attempt. We can figure out whether a drag is a connection attempt by checking whether the source is our custom drag source, ConnectionDragController.
We'll make DragEndpoint appear to be the drag source (visually only, not programmatically). We don't want to let the user connect an endpoint to itself, so we need to make sure the endpoint that is the source of the connection cannot also be used as the target of the connection. We'll do that using a state property that tracks whether this endpoint is idle, acting as the source, or acting as the target.
When the user finally releases the mouse button over a valid drop destination, the drag session makes it the destination's responsibility to “perform” the drag by sending it performDragOperation(_:). The session doesn't tell the drag source where the drop finally happened. But we probably want to do the work of making the connection (in our data model) back in the source. Think about how it works in Xcode: when you control-drag from a button in Main.storyboard to ViewController.swift and create an action, the connection is not recorded in ViewController.swift where the drag ended; it's recorded in Main.storyboard, as part of the button's persistent data. So when the drag session tells the destination to “perform” the drag, we'll make our destination (DragEndpoint) pass itself back to a connect(to:) method on the drag source where the real work can happen.
class DragEndpoint: NSView {
    enum State {
        case idle
        case source
        case target
    }

    var state: State = State.idle { didSet { needsLayout = true } }

    public override func draggingEntered(_ sender: NSDraggingInfo) -> NSDragOperation {
        guard case .idle = state else { return [] }
        guard (sender.draggingSource() as? ConnectionDragController)?.sourceEndpoint != nil else { return [] }
        state = .target
        return sender.draggingSourceOperationMask()
    }

    public override func draggingExited(_ sender: NSDraggingInfo?) {
        guard case .target = state else { return }
        state = .idle
    }

    public override func draggingEnded(_ sender: NSDraggingInfo?) {
        guard case .target = state else { return }
        state = .idle
    }

    public override func performDragOperation(_ sender: NSDraggingInfo) -> Bool {
        guard let controller = sender.draggingSource() as? ConnectionDragController else { return false }
        controller.connect(to: self)
        return true
    }

    override init(frame: NSRect) {
        super.init(frame: frame)
        commonInit()
    }

    required init?(coder decoder: NSCoder) {
        super.init(coder: decoder)
        commonInit()
    }

    private func commonInit() {
        wantsLayer = true
        register(forDraggedTypes: [kUTTypeData as String])
    }

    // Drawing code omitted here but is in my github repo.
}
Now we can implement ConnectionDragController to act as the drag source and to manage the drag session and the LineOverlay.
To start a drag session, we have to call beginDraggingSession(with:event:source:) on a view; it'll be the DragEndpoint where the mouse-down event happened.
The session notifies the source when the drag actually starts, when it moves, and when it ends. We use those notifications to create and update the LineOverlay.
Since we're not providing any images as part of our NSDraggingItem, the session won't draw anything being dragged. This is good.
By default, if the drag ends outside of a valid destination, the session will animate… nothing… back to the start of the drag, before notifying the source that the drag has ended. During this animation, the line overlay hangs around, frozen. It looks broken. We tell the session not to animate back to the start to avoid this.
Since this is just a demo, the “work” we do to connect the endpoints in connect(to:) is just printing their descriptions. In a real app, you'd actually modify your data model.
class ConnectionDragController: NSObject, NSDraggingSource {

    var sourceEndpoint: DragEndpoint?

    func connect(to target: DragEndpoint) {
        Swift.print("Connect \(sourceEndpoint!) to \(target)")
    }

    func trackDrag(forMouseDownEvent mouseDownEvent: NSEvent, in sourceEndpoint: DragEndpoint) {
        self.sourceEndpoint = sourceEndpoint
        let item = NSDraggingItem(pasteboardWriter: NSPasteboardItem(pasteboardPropertyList: "\(sourceEndpoint)", ofType: kUTTypeData as String)!)
        let session = sourceEndpoint.beginDraggingSession(with: [item], event: mouseDownEvent, source: self)
        session.animatesToStartingPositionsOnCancelOrFail = false
    }

    func draggingSession(_ session: NSDraggingSession, sourceOperationMaskFor context: NSDraggingContext) -> NSDragOperation {
        switch context {
        case .withinApplication: return .generic
        case .outsideApplication: return []
        }
    }

    func draggingSession(_ session: NSDraggingSession, willBeginAt screenPoint: NSPoint) {
        sourceEndpoint?.state = .source
        lineOverlay = LineOverlay(startScreenPoint: screenPoint, endScreenPoint: screenPoint)
    }

    func draggingSession(_ session: NSDraggingSession, movedTo screenPoint: NSPoint) {
        lineOverlay?.endScreenPoint = screenPoint
    }

    func draggingSession(_ session: NSDraggingSession, endedAt screenPoint: NSPoint, operation: NSDragOperation) {
        lineOverlay?.removeFromScreen()
        sourceEndpoint?.state = .idle
    }

    func ignoreModifierKeys(for session: NSDraggingSession) -> Bool { return true }

    private var lineOverlay: LineOverlay?
}
That's all you need. As a reminder, you can find a link at the top of this answer to a github repo containing a complete demo project.
Using a transparent NSWindow:

var window: NSWindow!

func createLinePath(from: NSPoint, to: NSPoint) -> CGPath {
    let path = CGMutablePath()
    path.move(to: from)
    path.addLine(to: to)
    return path
}

override func viewDidLoad() {
    super.viewDidLoad()

    // Transparent window
    window = NSWindow()
    window.styleMask = .borderless
    window.backgroundColor = .clear
    window.isOpaque = false
    window.hasShadow = false

    // Line
    let line = CAShapeLayer()
    line.path = createLinePath(from: NSPoint(x: 0, y: 0), to: NSPoint(x: 100, y: 100))
    line.lineWidth = 10.0
    line.strokeColor = NSColor.blue.cgColor

    // Update
    NSEvent.addLocalMonitorForEvents(matching: [.mouseMoved]) {
        let newPos = NSEvent.mouseLocation()
        line.path = self.createLinePath(from: NSPoint(x: 0, y: 0), to: newPos)
        return $0
    }

    window.contentView!.layer = line
    window.contentView!.wantsLayer = true
    window.setFrame(NSScreen.main()!.frame, display: true)
    window.makeKeyAndOrderFront(nil)
}
Trying to adopt Rob Mayoff's excellent solution above into my own project's interface, which is based around an NSOutlineView, I ran into a few problems. In case it helps anyone trying to achieve the same thing, I'll detail those pitfalls in this answer.
The sample code provided in the solution detects the start of a drag by implementing mouseDown(with:) on the view controller, and then calling hitTest(_:) on the window's content view in order to obtain the DragEndpoint subview where the (potential) drag is originating. When using outline views, this causes two pitfalls detailed in the next sections.
1. Mouse-Down Event
It seems that when a table view or outline view is involved, mouseDown(with:) never gets called on the view controller, and we need to instead override that method in the outline view itself.
2. Hit Testing
NSTableView (and, by extension, NSOutlineView) overrides the NSResponder method validateProposedFirstResponder(_:for:), and this causes the hitTest(_:) method to fail: it always returns the outline view itself, and all subviews (including our target DragEndpoint subview inside the cell) remain inaccessible.
From the documentation:
Views or controls in a table sometimes need to respond to incoming events. To determine whether a particular subview should receive the current mouse event, a table view calls validateProposedFirstResponder:forEvent: in its implementation of hitTest. If you create a table view subclass, you can override validateProposedFirstResponder:forEvent: to specify which views can become the first responder. In this way, you receive mouse events.
At first I tried overriding:
override func validateProposedFirstResponder(_ responder: NSResponder, for event: NSEvent?) -> Bool {
    if responder is DragEndpoint {
        return true
    }
    return super.validateProposedFirstResponder(responder, for: event)
}
...and it worked, but reading the documentation further suggests a smarter, less intrusive approach:
The default NSTableView implementation of validateProposedFirstResponder:forEvent: uses the following logic:
Return YES for all proposed first responder views unless they are instances or subclasses of NSControl.
Determine whether the proposed first responder is an NSControl instance or subclass. If the control is an NSButton object, return YES. If the control is not an NSButton, call the control's hitTestForEvent:inRect:ofView: to see whether the hit area is trackable (that is, NSCellHitTrackableArea) or is an editable text area (that is, NSCellHitEditableTextArea), and return the appropriate value. Note that if a text area is hit, NSTableView also delays the first responder action.
(emphasis mine)
...which is weird, because it feels like it should say:
Return NO for all proposed first responder views unless they are instances or subclasses of NSControl.
But anyway, I instead modified Rob's code to make DragEndpoint a subclass of NSControl (not just NSView), and that works too.
3. Managing the Dragging Session
Because NSOutlineView only exposes a limited number of drag-and-drop events through its data source protocol (and the drag session itself can not be meaningfully modified from the data source's side), it seems that taking full control of the drag session is not possible unless we subclass the outline view and override the NSDraggingSource methods.
Only by overriding draggingSession(_:willBeginAt:) at the outline view itself can we prevent calling the superclass implementation and starting an actual item drag (which displays the dragged row image).
We could start a separate drag session from the mouseDown(with:) method of the DragEndpoint subview: when implemented, it is called before the same method on the outline view (which in turn is what triggers the dragging session to be started). But if we move the dragging session away from the outline view, it seems like it will be impossible to have springloading "for free" when dragging above an expandable item.
So instead, I discarded the ConnectionDragController class and moved all its logic to the outline view subclass: the trackDrag() method, the active DragEndpoint property, and all methods of the NSDraggingSource protocol.
Ideally, I would have liked to avoid subclassing NSOutlineView (it is discouraged) and instead implement this behaviour more cleanly, exclusively through the outline view's delegate/data source and/or external classes (like the original ConnectionDragController), but it seems that it is impossible.
I haven't got the spring-loading part to work yet (it was working at one point, but not now, so I'm still looking into it...).
I too made a sample project, but I'm still fixing minor issues. I'll post a link to the GitHub repository as soon as it is ready.

How would I make a simple spinning dash in Swift that spins on its center, like a loader in a terminal?

I'm trying to create a simple spinning loading dash. I know how to do the loop but I can't seem to make it on a single line. Any ideas?
let loop = 1
while loop > 0 {
    // spinning dash
}
I will not provide you with all the code to your question but rather a guideline of what to do. In general, it's a two-step algorithm.
Draw a line
Perform a 360° rotation of it for a desired time, t
The code posted below implements the first portion. I have added comments and I believe it should be self explanatory. After reviewing it, I'd recommend you read about UIBezierPath.
As for the second part, there are two ways of going about this.
1. Rotate the line itself (recommended)
Should you choose this method, here's a tutorial from Ray Wenderlich which covers it extensively along with the Math behind it. Follow through both portions of the tutorial if possible.
2. Rotate the view encompassing the line
Changing the outer view's background color to clear and then rotating the view itself will give the illusion that the line inside is the one being rotated. Here's a video guide for view rotations. A minimal rotation sketch appears after the drawing code below.
import UIKit

class ViewController: UIViewController
{
    override func viewDidLoad()
    {
        super.viewDidLoad()
        // This is the black subview in which the line will be drawn into
        let lineView: GeneralDraw = GeneralDraw(frame: CGRect(origin: CGPoint(x: 20, y: 30), size: CGSize(width: 300, height: 300)))
        // uncomment this to remove the black colour
        // lineView.backgroundColor = .clear
        // add this lineView to the mainView
        self.view.addSubview(lineView)
    }
}

// This handles the drawing inside a given view
class GeneralDraw: UIView
{
    override init(frame: CGRect)
    {
        super.init(frame: frame)
    }

    required init?(coder aDecoder: NSCoder)
    {
        fatalError("init(coder:) has not been implemented")
    }

    override func draw(_ rect: CGRect)
    {
        let linePath = UIBezierPath()
        // start point of the line
        linePath.move(to: CGPoint(x:50, y:10))
        // end point of the line
        linePath.addLine(to: CGPoint(x:200, y:10))
        linePath.close()
        // cosmetic settings for the line
        UIColor.red.set()
        linePath.stroke()
        linePath.fill()
    }
}
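As for the rotation mentioned above, here's a minimal sketch using Core Animation (it assumes the lineView from the snippet above; note that the dash must be drawn centered in the view for it to spin about its own center, which the drawing code above doesn't do yet):

let rotation = CABasicAnimation(keyPath: "transform.rotation.z")
rotation.fromValue = 0
rotation.toValue = 2 * Double.pi // one full turn
rotation.duration = 1            // seconds per revolution
rotation.repeatCount = .infinity
lineView.layer.add(rotation, forKey: "spin")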
I would use a CAReplicatorLayer for this. You start with a layer that draws a horizontal bar and combine it with transforms that show the bar in the other positions. Then you animate the fading out of the bar, with an offset coordinated to the fading.
In this gif, I've deliberately slowed down the animation. (There is a mild glitch at the point where the gif repeats, but ignore that; the real project doesn't have that glitch.)
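I won't reproduce the whole project here, but a rough sketch of the replicator setup looks like this (the counts, sizes, and timing are illustrative, not the exact values from my project; it assumes it runs inside a view controller so view is available):

let replicator = CAReplicatorLayer()
replicator.frame = CGRect(x: 0, y: 0, width: 100, height: 100)

// One horizontal bar, offset from the spinner's center.
let bar = CALayer()
bar.frame = CGRect(x: 60, y: 48, width: 30, height: 4)
bar.backgroundColor = UIColor.darkGray.cgColor
replicator.addSublayer(bar)

let count = 12
replicator.instanceCount = count
// Each copy is rotated a little further around the center...
replicator.instanceTransform = CATransform3DMakeRotation(2 * CGFloat.pi / CGFloat(count), 0, 0, 1)
// ...and its animation is offset in time, which produces the chasing fade.
replicator.instanceDelay = 1.0 / Double(count)

let fade = CABasicAnimation(keyPath: "opacity")
fade.fromValue = 1
fade.toValue = 0
fade.duration = 1
fade.repeatCount = .infinity
bar.add(fade, forKey: "fade")

view.layer.addSublayer(replicator)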
Solution 1: rotate images
Create a set of images that show the dash rotating.
Assign the images to the view's animationImages array, then start the animation with UIImageView.startAnimating().
See the section "Animating a Sequence of Images" in the UIImageView documentation.
Solution 2: standard iOS activity indicator
Better yet, go with the standard UIActivityIndicatorView.
see also:
iOS Human Interface Guidelines: Progress Indicators
Reference for UIActivityIndicatorView
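For reference, the standard indicator takes only a few lines (assuming this runs inside a view controller):

let indicator = UIActivityIndicatorView(activityIndicatorStyle: .gray)
indicator.center = view.center
view.addSubview(indicator)
indicator.startAnimating() // call stopAnimating() when the work is done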

I'm having some trouble using x and y coordinates from touchesBegan as the center key in a CI filter

I'm trying to setup having the users tap a location in an image view and the X,Y of the tap becomes the center point (kCIInputCenterKey) of the current image filter in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function that checks if the user is touching inside the image view or not, if not then sets the filter center key to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.imageView)
        if touch.view == imageView {
            print("touchesBegan | This is an ImageView")
            x = position.x * 4
            y = position.y * 4
            imgChecker = 1
        } else {
            print("touchesBegan | This is not an ImageView")
            x = 0
            y = 0
            imgChecker = 0
        }
        print("x: \(x)")
        print("y: \(y)")
    }
}
As you can see, I have the checker there to make the filter center appear in the middle of the image if the tap lands outside the image view. I'm also printing the tapped coordinates to Xcode's console, and they appear without issue.
This is the part where i apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)
if imgChecker == 1 {
self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
}else{
self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
x = 0
y = 0
let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the drawing function I'm using (ignore the "frame" in there; it's just an image view in front of the first one that lets me save a "frame" over the current filtered image):
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
When I do set the x,y by tapping inside the image view, the center of the filter keeps appearing in the lower left-hand side of it regardless of where I tapped inside. If I keep tapping around the image view, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated, thank you.
Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is lower left, not top left.
Second (and I think this is the issue), UIKit coordinates (meaning UIImage and potentially CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and turn it into the CIImage.extent coordinate.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan is CGPoint(200,100). (NOTE: If your UIImageView is part of a larger superview, it could be something more like 250,400 - I'm working on the point within the UIImageView.)
Scaling down the image size (remember, AspectFit) means the image is actually centered vertically (landscape appearing) within the UIImageView at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image itself. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing by 4 gives you the scaled down image view coordinates, and as you'd expect, multiplying up will give you the needed vector coordinates. So a touchesBegan CGPoint(200,100) turns into a CIVector(800,400).
I have some code written - not much in the way of comments, done in Swift 2 (I think) and very poorly written - that is part of a subclass (probably should have been an extension) of UIImageView that computes all this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind - images in AspectFit can also be scaled up!
One last note on CIImage - extent. Many times it's a UIImage's size. But many masks and generated output may have an infinite extent.
SECOND EDIT:
I made a stupid mistake in my scaling example. Remember, the CIImage origin is bottom left, not upper left. So in my example a CGPoint(200,100), scaled to CGPoint(800,400), would be CIVector(800,100).
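Putting the pieces together, a sketch of the conversion might look like this (a hypothetical helper, not the code from my UIImageView subclass; unlike the quick multiply-by-4 example above, it also accounts for the letterbox offset):

// Convert a touch point in an aspect-fit UIImageView into CIImage coordinates.
func ciVector(forTouch point: CGPoint, viewSize: CGSize, imageSize: CGSize) -> CIVector {
    let scale = min(viewSize.width / imageSize.width, viewSize.height / imageSize.height)
    let fittedSize = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)
    let origin = CGPoint(x: (viewSize.width - fittedSize.width) / 2,
                         y: (viewSize.height - fittedSize.height) / 2)
    // Map from view coordinates into the displayed image, then up to full image size.
    let x = (point.x - origin.x) / scale
    let y = (point.y - origin.y) / scale
    // Flip Y: Core Image's origin is bottom left, UIKit's is top left.
    return CIVector(x: x, y: imageSize.height - y)
}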
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last one was due to my stupidity! Worth noting, but still.)
Now we're talking "near real time" updating using a Core Image filter. I'm planning to eventually have some blog posts on this, but the real source you want is Simon Gladman (he's moved on, look back to his posts in 2015-16), and his eBook Core Image for Swift (uses Swift 2 but most is automatically upgraded to Swift 3). Just giving credit where it is due.
If you want "near real time" usage of Core Image, you need to use the GPU. UIView, and all it's subclasses (meaning UIKit) uses the CPU. That's okay, using the GPU means using a Core Graphics, and specifically using a GLKView. It's the CG equivalent of a UIImage.
Here's my subclass of it:
open class GLKViewDFD: GLKView {

    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    open var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        enableSetNeedsDisplay = true
    }

    override open func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            // Letterbox or pillarbox the image to preserve its aspect ratio (AspectFit).
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            rgb = (0, 0, 0)
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!) / 255.0, Float(rgb.1!) / 255.0, Float(rgb.2!) / 255.0, 0.0)
            glClear(0x00004000) // GL_COLOR_BUFFER_BIT
            // set the blend mode to "source over" so that CI will use that
            glEnable(0x0BE2) // GL_BLEND
            glBlendFunc(1, 0x0303) // GL_ONE, GL_ONE_MINUS_SRC_ALPHA
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
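One thing the class above doesn't show is the rgb() extension on UIColor that it calls; a minimal sketch might be (returning 0-255 components to match the glClearColor math):

extension UIColor {
    // Returns the RGB components as 0-255 integers, or nils if they can't be read.
    func rgb() -> (Int?, Int?, Int?) {
        var red: CGFloat = 0, green: CGFloat = 0, blue: CGFloat = 0, alpha: CGFloat = 0
        guard getRed(&red, green: &green, blue: &blue, alpha: &alpha) else {
            return (nil, nil, nil)
        }
        return (Int(red * 255), Int(green * 255), Int(blue * 255))
    }
}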
A few notes.
I absolutely need to credit Objc.io for much of this. This is also a great resource for Swift and UIKit coding.
I wanted AspectFit content mode with the potential to change the "backgroundColor" of the GLKView, which is why I subclassed it and added the clearColor property.
Between the two resources I linked to, you should have what you need for a well-performing, near real time use of Core Image, using the GPU. One reason my aforementioned code to use scaling after getting the output of a filter was never updated? It didn't need it.
Lots here to process, I know. But I've found this side of things (Core Image effects) to be the most fun side (and pretty cool too) of iOS.