How to detect mouse events outside the current NSView in macOS? - swift

Because I am using a CALayer mask to draw and clip my content, the mask is sized like the NSView plus a 10 pixel border, so some of my content extends 10 pixels beyond the current NSView's size. But currently, mouse-move events only seem to be detected when the mouse moves into the NSView's own area. I tried giving the NSTrackingArea a negative origin, but that doesn't seem to work.
Does anybody know how to detect mouse-move events outside the current view?
Thanks,
Eric
More detail on my question:
I need to draw some content outside my view (which sits inside a bigger view) and detect mouse events on that content. My current limitation is that I can't use a larger view to draw the whole content; just consider the overflow a special type of shadow on which I need to detect mouse events, so I use a mask to clip the whole content. Here is the code I use to apply the mask; the margin parameter is how many pixels I want to extend beyond the view. (Apply this function to an NSView directly.)
func applyMaskToView(src: NSView, margin: CGFloat) {
    src.wantsLayer = true
    let mask = CALayer()
    mask.backgroundColor = NSColor.black.cgColor
    let maskFrame = CGRect(x: -margin,
                           y: -margin,
                           width: src.frame.size.width + 2 * margin,
                           height: src.frame.size.height + 2 * margin)
    mask.frame = maskFrame
    src.layer?.masksToBounds = false
    src.layer?.mask = mask
}
The code I use to set up the tracking area is here (inside the NSView subclass):
override func updateTrackingAreas() {
    var trackingArea: NSTrackingArea?
    let options1: NSTrackingArea.Options = [.mouseEnteredAndExited,
                                            .activeAlways,
                                            .mouseMoved,
                                            .enabledDuringMouseDrag]
    let options2: NSTrackingArea.Options = [options1, .inVisibleRect]
    for area in self.trackingAreas {
        self.removeTrackingArea(area)
    }
    if let activeBound = self.layer?.mask?.frame {
        trackingArea = NSTrackingArea(rect: activeBound, options: options1, owner: self, userInfo: nil)
    } else {
        trackingArea = NSTrackingArea(rect: self.bounds, options: options2, owner: self, userInfo: nil)
    }
    self.addTrackingArea(trackingArea!)
    super.updateTrackingAreas()
}
Without the tracking area I can't receive mouse events in the margin area mentioned in the code above at all; with it, I currently receive only "mouse moved" events.
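One workaround worth sketching (an assumption on my part, not something from the original thread): since tracking-area rects are effectively clipped to the view, you can install a local event monitor and hit-test the enlarged mask frame yourself. This assumes the window has acceptsMouseMovedEvents set to true; handleMouseInMaskedArea(at:) is a hypothetical handler:
var mouseMonitor: Any?

func startMonitoringMouse(margin: CGFloat) {
    mouseMonitor = NSEvent.addLocalMonitorForEvents(matching: [.mouseMoved]) { [weak self] event in
        guard let self = self else { return event }
        // Convert from window coordinates into this view's coordinates
        // (assumes the event belongs to this view's window).
        let point = self.convert(event.locationInWindow, from: nil)
        if self.bounds.insetBy(dx: -margin, dy: -margin).contains(point) {
            // The cursor is over the view or its extended mask margin.
            self.handleMouseInMaskedArea(at: point) // hypothetical handler
        }
        return event
    }
}
Remember to remove the monitor with NSEvent.removeMonitor(_:) when the view goes away.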

Related

Rotating UIControl with CAGradientLayer not updating correctly Swift

Rather than using a normal button, I subclassed UIControl because I needed to add a gradient to it. I also have a way to add a shadow and an activity indicator (not visible in the image below), making it a stateful button that stops users hammering it while (for example) an API call is being made.
Getting the UIControl to rotate was really tricky; to make it work, I added the shadow as a separate view inside a container view that also contains the UIControl.
Now the issue is that the control does not behave quite like a view on rotation; let me show you a screen grab for context:
This is mid-rotation, but it is just about visible to the eye: the image shows that the gradient is 75% of the length of a blue UIView in the image.
https://github.com/stevencurtis/statefulbutton
In order to perform this rotation I remove the shadow view and then change the frame of the gradient layer to the control's bounds, and this is the problem.
func viewRotated() {
    CATransaction.setDisableActions(true)
    shadowView!.removeFromSuperview()
    shadowView!.frame = self.frame
    shadowView!.layer.masksToBounds = false
    shadowView!.layer.shadowOffset = CGSize(width: 0, height: 3)
    shadowView!.layer.shadowRadius = 3
    shadowView!.layer.shadowOpacity = 0.3
    shadowView!.layer.shadowPath = UIBezierPath(roundedRect: self.bounds, byRoundingCorners: .allCorners, cornerRadii: CGSize(width: 20, height: 20)).cgPath
    shadowView!.layer.shouldRasterize = true
    shadowView!.layer.rasterizationScale = UIScreen.main.scale
    self.gradientViewLayer.frame = self.bounds
    self.selectedViewLayer.frame = self.bounds
    CATransaction.commit()
    self.insertSubview(shadowView!, at: 0)
}
So this rotation method is called through the parent view controller:
override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
    super.viewWillTransition(to: size, with: coordinator)
    coordinator.animate(alongsideTransition: { context in
        context.viewController(forKey: UITransitionContextViewControllerKey.from)
        // Inform the loginButton that it is being rotated.
        self.loginButton.viewRotated()
    }, completion: { context in
        // Called here when the transition has completed.
    })
}
I know this is the problem; I guess it is not happening at quite the right time to act the same way as a UIView does. I have tried many things to get this to work, and my best solution (above) is not quite there.
It isn't helpful to suggest using a UIButton, using an image for the gradient (please don't suggest a gradient image as a UIButton background; I've tried that), or a third-party library. This is my work; it functions, but not acceptably to me, and I want it to work as well as a usual view (or at least to know why it can't). I know I can lock the view during an API call, or use other ways to stop the user pressing the button too many times. I'm trying to fix my solution, not invent ways of getting around this issue with CAGradientLayer.
The problem: I need to make a UIControl with a CAGradientLayer as a background rotate in the same way as a UIView, without exhibiting the issue shown in the image above.
Full Example:
https://github.com/stevencurtis/statefulbutton
Here is working code:
https://gist.github.com/alldne/22d340b36613ae5870b3472fa1c64654
These are my recommendations for your code:
1. A proper place to set the size and position of sublayers
The size of a view (here, your button) is only determined once layout is done, so you should set the size of its sublayers after layout. I recommend setting the size and position of the gradient sublayers in layoutSubviews:
override func layoutSubviews() {
    super.layoutSubviews()
    let center = CGPoint(x: self.bounds.width / 2, y: self.bounds.height / 2)
    selectedViewLayer.bounds = self.bounds
    selectedViewLayer.position = center
    gradientViewLayer.bounds = self.bounds
    gradientViewLayer.position = center
}
2. You don't need an extra view to draw the shadow
Remove shadowView and just set the layer properties:
layer.shadowOffset = CGSize(width: 0, height: 3)
layer.shadowRadius = 3
layer.shadowOpacity = 0.3
layer.shadowColor = UIColor.black.cgColor
clipsToBounds = false
If you do have to use an extra view to draw the shadow, you can add that view once in init() and set its size and position in layoutSubviews, or just pin it to its superview with programmatic Auto Layout constraints; a sketch follows.
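A minimal sketch of that init-once approach (the names and values here are illustrative, not from the original project); call setUpShadowView() from both initializers:
private let shadowView = UIView()

private func setUpShadowView() {
    shadowView.translatesAutoresizingMaskIntoConstraints = false
    shadowView.layer.shadowOffset = CGSize(width: 0, height: 3)
    shadowView.layer.shadowRadius = 3
    shadowView.layer.shadowOpacity = 0.3
    shadowView.layer.shadowColor = UIColor.black.cgColor
    insertSubview(shadowView, at: 0)
    // Pin the shadow view to its superview's edges.
    NSLayoutConstraint.activate([
        shadowView.topAnchor.constraint(equalTo: topAnchor),
        shadowView.bottomAnchor.constraint(equalTo: bottomAnchor),
        shadowView.leadingAnchor.constraint(equalTo: leadingAnchor),
        shadowView.trailingAnchor.constraint(equalTo: trailingAnchor),
    ])
}
If you also set a shadowPath, update it in layoutSubviews, since the bounds change during rotation.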
3. Animation duration & timing function
After setting proper sizes, the animations of the gradient layers and the container view still don't sync well.
It seems that:
During the rotation transition, the coordinator (UIViewControllerTransitionCoordinator) has its own transition duration and easing function.
That duration and easing function are applied automatically to all the subviews (UIViews).
However, they are not applied to a CALayer that has no associated UIView; such a layer falls back to Core Animation's default timing function and duration.
To sync the animations, explicitly set the animation duration and the timing function like below:
class ViewController: UIViewController {
    ...
    override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)
        CATransaction.setAnimationDuration(coordinator.transitionDuration)
        CATransaction.setAnimationTimingFunction(coordinator.completionCurve.timingFunction)
    }
    ...
}
// Swift 4.2
extension UIView.AnimationCurve {
    var timingFunction: CAMediaTimingFunction {
        let functionName: CAMediaTimingFunctionName
        switch self {
        case .easeIn:
            functionName = .easeIn
        case .easeInOut:
            functionName = .easeInEaseOut
        case .easeOut:
            functionName = .easeOut
        case .linear:
            functionName = .linear
        }
        return CAMediaTimingFunction(name: functionName)
    }
}

How to reproduce this Xcode blue drag line

I'd like to reproduce the Xcode blue drag line in my app.
Do you know a way to code this?
I know how to draw a line using Core Graphics ...
But this line has to be over the top of all other items (on the screen).
I'm posting this after you've posted your own answer, so this is probably a huge waste of time. But your answer only covers drawing a really bare-bones line on the screen and doesn't cover a bunch of other interesting stuff that you need to take care of to really replicate Xcode's behavior and even go beyond it:
drawing a nice connection line like Xcode's (with a shadow, an outline, and big rounded ends),
drawing the line across multiple screens,
using Cocoa drag and drop to find the drag target and to support spring-loading.
Here's a demo of what I'm going to explain in this answer:
In this github repo, you can find an Xcode project containing all the code in this answer plus the remaining glue code necessary to run a demo app.
Drawing a nice connection line like Xcode's
Xcode's connection line looks like an old-timey barbell. It has a straight bar of arbitrary length, with a circular bell at each end:
What do we know about that shape? The user provides the start and end points (the centers of the bells) by dragging the mouse, and our user interface designer specifies the radius of the bells and the thickness of the bar:
The length of the bar is the distance from startPoint to endPoint: length = hypot(endPoint.x - startPoint.x, endPoint.y - startPoint.y).
To simplify the process of creating a path for this shape, let's draw it in a standard pose, with the left bell at the origin and the bar parallel to the x axis. In this pose, here's what we know:
We can create this shape as a path by making a circular arc centered at the origin, connected to another (mirror image) circular arc centered at (length, 0). To create these arcs, we need this mysteryAngle:
We can figure out mysteryAngle if we can find any of the arc endpoints where the bell meets the bar. Specifically, we'll find the coordinates of this point:
What do we know about that mysteryPoint? We know it's at the intersection of the bell and the top of the bar. So we know it's at distance bellRadius from the origin, and at distance barThickness / 2 from the x axis:
So immediately we know that mysteryPoint.y = barThickness / 2, and we can use the Pythagorean theorem to compute mysteryPoint.x = sqrt(bellRadius² - mysteryPoint.y²).
With mysteryPoint located, we can compute mysteryAngle using our choice of inverse trigonometry function. Arcsine, I choose you! mysteryAngle = asin(mysteryPoint.y / bellRadius).
We now know everything we need to create the path in the standard pose. To move it from the standard pose to the desired pose (which goes from startPoint to endPoint, remember?), we'll apply an affine transform. The transform will translate (move) the path so the left bell is centered at startPoint and rotate the path so the right bell ends up at endPoint.
In writing the code to create the path, we want to be careful of a few things:
What if the length is so short that the bells overlap? We should handle that gracefully by adjusting mysteryAngle so the bells connect seamlessly with no weird “negative bar” between them.
What if bellRadius is smaller than barThickness / 2? We should handle that gracefully by forcing bellRadius to be at least barThickness / 2.
What if length is zero? We need to avoid division by zero.
Here's my code to create the path, handling all those cases:
extension CGPath {
    class func barbell(from start: CGPoint, to end: CGPoint, barThickness proposedBarThickness: CGFloat, bellRadius proposedBellRadius: CGFloat) -> CGPath {
        let barThickness = max(0, proposedBarThickness)
        let bellRadius = max(barThickness / 2, proposedBellRadius)

        let vector = CGPoint(x: end.x - start.x, y: end.y - start.y)
        let length = hypot(vector.x, vector.y)
        if length == 0 {
            // Degenerate case: draw a single circle.
            return CGPath(ellipseIn: CGRect(origin: start, size: .zero).insetBy(dx: -bellRadius, dy: -bellRadius), transform: nil)
        }

        var yOffset = barThickness / 2
        var xOffset = sqrt(bellRadius * bellRadius - yOffset * yOffset)
        let halfLength = length / 2
        if xOffset > halfLength {
            // The bells overlap; join them seamlessly.
            xOffset = halfLength
            yOffset = sqrt(bellRadius * bellRadius - xOffset * xOffset)
        }

        let jointRadians = asin(yOffset / bellRadius)
        let path = CGMutablePath()
        path.addArc(center: .zero, radius: bellRadius, startAngle: jointRadians, endAngle: -jointRadians, clockwise: false)
        path.addArc(center: CGPoint(x: length, y: 0), radius: bellRadius, startAngle: .pi + jointRadians, endAngle: .pi - jointRadians, clockwise: false)
        path.closeSubpath()

        // Rotate the standard pose so the bar points from start toward end, then translate to start.
        let unitVector = CGPoint(x: vector.x / length, y: vector.y / length)
        var transform = CGAffineTransform(a: unitVector.x, b: unitVector.y, c: -unitVector.y, d: unitVector.x, tx: start.x, ty: start.y)
        return path.copy(using: &transform)!
    }
}
Once we have the path, we need to fill it with the correct color, stroke it with the correct color and line width, and draw a shadow around it. I used Hopper Disassembler on IDEInterfaceBuilderKit to figure out Xcode's exact sizes and colors. Xcode draws it all into a graphics context in a custom view's drawRect:, but we'll make our custom view use a CAShapeLayer. We won't end up drawing the shadow precisely the same as Xcode, but it's close enough.
class ConnectionView: NSView {
    struct Parameters {
        var startPoint = CGPoint.zero
        var endPoint = CGPoint.zero
        var barThickness = CGFloat(2)
        var ballRadius = CGFloat(3)
    }

    var parameters = Parameters() { didSet { needsLayout = true } }

    override init(frame: CGRect) {
        super.init(frame: frame)
        commonInit()
    }

    required init?(coder decoder: NSCoder) {
        super.init(coder: decoder)
        commonInit()
    }

    let shapeLayer = CAShapeLayer()
    override func makeBackingLayer() -> CALayer { return shapeLayer }

    override func layout() {
        super.layout()
        shapeLayer.path = CGPath.barbell(from: parameters.startPoint, to: parameters.endPoint, barThickness: parameters.barThickness, bellRadius: parameters.ballRadius)
        shapeLayer.shadowPath = CGPath.barbell(from: parameters.startPoint, to: parameters.endPoint, barThickness: parameters.barThickness + shapeLayer.lineWidth / 2, bellRadius: parameters.ballRadius + shapeLayer.lineWidth / 2)
    }

    private func commonInit() {
        wantsLayer = true
        shapeLayer.lineJoin = kCALineJoinMiter
        shapeLayer.lineWidth = 0.75
        shapeLayer.strokeColor = NSColor.white.cgColor
        shapeLayer.fillColor = NSColor(calibratedHue: 209/360, saturation: 0.83, brightness: 1, alpha: 1).cgColor
        shapeLayer.shadowColor = NSColor.selectedControlColor.blended(withFraction: 0.2, of: .black)?.withAlphaComponent(0.85).cgColor
        shapeLayer.shadowRadius = 3
        shapeLayer.shadowOpacity = 1
        shapeLayer.shadowOffset = .zero
    }
}
We can test this in a playground to make sure it looks good:
import Cocoa
import PlaygroundSupport

let view = NSView()
view.setFrameSize(CGSize(width: 400, height: 200))
view.wantsLayer = true
view.layer!.backgroundColor = NSColor.white.cgColor
PlaygroundPage.current.liveView = view
for i: CGFloat in stride(from: 0, through: 9, by: 0.4) {
    let connectionView = ConnectionView(frame: view.bounds)
    connectionView.parameters.startPoint = CGPoint(x: i * 40 + 15, y: 50)
    connectionView.parameters.endPoint = CGPoint(x: i * 40 + 15, y: 50 + i)
    view.addSubview(connectionView)
}
let connectionView = ConnectionView(frame: view.bounds)
connectionView.parameters.startPoint = CGPoint(x: 50, y: 100)
connectionView.parameters.endPoint = CGPoint(x: 350, y: 150)
view.addSubview(connectionView)
Here's the result:
Drawing across multiple screens
If you have multiple screens (displays) attached to your Mac, and if you have “Displays have separate Spaces” turned on (which is the default) in the Mission Control panel of your System Preferences, then macOS will not let a window span two screens. This means that you can't use a single window to draw the connecting line across multiple monitors. This matters if you want to let the user connect an object in one window to an object in another window, like Xcode does:
Here's the checklist for drawing the line, across multiple screens, on top of our other windows:
We need to create one window per screen.
We need to set up each window to fill its screen and be completely transparent with no shadow.
We need to set the window level of each window to 1 to keep it above our normal windows (which have a window level of 0).
We need to tell each window not to release itself when closed, because we don't like mysterious autorelease pool crashes.
Each window needs its own ConnectionView.
To keep the coordinate systems uniform, we'll adjust the bounds of each ConnectionView so that its coordinate system matches the screen coordinate system.
We'll tell each ConnectionView to draw the entire connecting line; each view will clip what it draws to its own bounds.
It probably won't happen, but we'll arrange to be notified if the screen arrangement changes. If that happens, we'll add/remove/update windows to cover the new arrangement.
Let's make a class to encapsulate all these details. With an instance of LineOverlay, we can update the start and end points of the connection as needed, and remove the overlay from the screen when we're done.
class LineOverlay {
    init(startScreenPoint: CGPoint, endScreenPoint: CGPoint) {
        self.startScreenPoint = startScreenPoint
        self.endScreenPoint = endScreenPoint
        NotificationCenter.default.addObserver(self, selector: #selector(LineOverlay.screenLayoutDidChange(_:)), name: .NSApplicationDidChangeScreenParameters, object: nil)
        synchronizeWindowsToScreens()
    }

    var startScreenPoint: CGPoint { didSet { setViewPoints() } }
    var endScreenPoint: CGPoint { didSet { setViewPoints() } }

    func removeFromScreen() {
        windows.forEach { $0.close() }
        windows.removeAll()
    }

    private var windows = [NSWindow]()

    deinit {
        NotificationCenter.default.removeObserver(self)
        removeFromScreen()
    }

    @objc private func screenLayoutDidChange(_ note: Notification) {
        synchronizeWindowsToScreens()
    }

    private func synchronizeWindowsToScreens() {
        var spareWindows = windows
        windows.removeAll()
        for screen in NSScreen.screens() ?? [] {
            let window: NSWindow
            if let index = spareWindows.index(where: { $0.screen === screen }) {
                window = spareWindows.remove(at: index)
            } else {
                let styleMask = NSWindowStyleMask.borderless
                window = NSWindow(contentRect: .zero, styleMask: styleMask, backing: .buffered, defer: true, screen: screen)
                window.contentView = ConnectionView()
                window.isReleasedWhenClosed = false
                window.ignoresMouseEvents = true
            }
            windows.append(window)
            window.setFrame(screen.frame, display: true)

            // Make the view's geometry match the screen geometry for simplicity.
            let view = window.contentView!
            var rect = view.bounds
            rect = view.convert(rect, to: nil)
            rect = window.convertToScreen(rect)
            view.bounds = rect

            window.backgroundColor = .clear
            window.isOpaque = false
            window.hasShadow = false
            window.isOneShot = true
            window.level = 1

            window.contentView?.needsLayout = true
            window.orderFront(nil)
        }
        spareWindows.forEach { $0.close() }
    }

    private func setViewPoints() {
        for window in windows {
            let view = window.contentView! as! ConnectionView
            view.parameters.startPoint = startScreenPoint
            view.parameters.endPoint = endScreenPoint
        }
    }
}
Using Cocoa drag and drop to find the drag target and perform spring-loading
We need a way to find the (potential) drop target of the connection as the user drags the mouse around. It would also be nice to support spring loading.
In case you don't know, spring loading is a macOS feature in which, if you hover a drag over a container for a moment, macOS will automatically open the container without interrupting the drag. Examples:
If you drag onto a window that's not the frontmost window, macOS will bring the window to the front.
If you drag onto a folder icon in the Finder, the Finder will open the folder window to let you drag onto an item in the folder.
If you drag onto a tab handle (at the top of the window) in Safari or Chrome, the browser will select the tab, letting you drop your item in the tab.
If you control-drag a connection in Xcode onto a menu item in the menu bar in your storyboard or xib, Xcode will open the item's menu.
If we use the standard Cocoa drag and drop support to track the drag and find the drop target, then we'll get spring loading support “for free”.
To support standard Cocoa drag and drop, we need to implement the NSDraggingSource protocol on some object, so we can drag from something, and the NSDraggingDestination protocol on some other object, so we can drag to something. We'll implement NSDraggingSource in a class called ConnectionDragController, and we'll implement NSDraggingDestination in a custom view class called DragEndpoint.
First, let's look at DragEndpoint (an NSView subclass). NSView already conforms to NSDraggingDestination, but doesn't do much with it. We need to implement four methods of the NSDraggingDestination protocol. The drag session will call these methods to let us know when the drag enters and leaves the destination, when the drag ends entirely, and when to “perform” the drag (assuming this destination was where the drag actually ended). We also need to register the type of dragged data that we can accept.
We want to be careful of two things:
We only want to accept a drag that is a connection attempt. We can figure out whether a drag is a connection attempt by checking whether the source is our custom drag source, ConnectionDragController.
We'll make DragEndpoint appear to be the drag source (visually only, not programmatically). We don't want to let the user connect an endpoint to itself, so we need to make sure the endpoint that is the source of the connection cannot also be used as the target of the connection. We'll do that using a state property that tracks whether this endpoint is idle, acting as the source, or acting as the target.
When the user finally releases the mouse button over a valid drop destination, the drag session makes it the destination's responsibility to “perform” the drag by sending it performDragOperation(_:). The session doesn't tell the drag source where the drop finally happened. But we probably want to do the work of making the connection (in our data model) back in the source. Think about how it works in Xcode: when you control-drag from a button in Main.storyboard to ViewController.swift and create an action, the connection is not recorded in ViewController.swift where the drag ended; it's recorded in Main.storyboard, as part of the button's persistent data. So when the drag session tells the destination to “perform” the drag, we'll make our destination (DragEndpoint) pass itself back to a connect(to:) method on the drag source where the real work can happen.
class DragEndpoint: NSView {
    enum State {
        case idle
        case source
        case target
    }

    var state: State = .idle { didSet { needsLayout = true } }

    public override func draggingEntered(_ sender: NSDraggingInfo) -> NSDragOperation {
        guard case .idle = state else { return [] }
        guard (sender.draggingSource() as? ConnectionDragController)?.sourceEndpoint != nil else { return [] }
        state = .target
        return sender.draggingSourceOperationMask()
    }

    public override func draggingExited(_ sender: NSDraggingInfo?) {
        guard case .target = state else { return }
        state = .idle
    }

    public override func draggingEnded(_ sender: NSDraggingInfo?) {
        guard case .target = state else { return }
        state = .idle
    }

    public override func performDragOperation(_ sender: NSDraggingInfo) -> Bool {
        guard let controller = sender.draggingSource() as? ConnectionDragController else { return false }
        controller.connect(to: self)
        return true
    }

    override init(frame: NSRect) {
        super.init(frame: frame)
        commonInit()
    }

    required init?(coder decoder: NSCoder) {
        super.init(coder: decoder)
        commonInit()
    }

    private func commonInit() {
        wantsLayer = true
        register(forDraggedTypes: [kUTTypeData as String])
    }

    // Drawing code omitted here but is in my github repo.
}
Now we can implement ConnectionDragController to act as the drag source and to manage the drag session and the LineOverlay.
To start a drag session, we have to call beginDraggingSession(with:event:source:) on a view; it'll be the DragEndpoint where the mouse-down event happened.
The session notifies the source when the drag actually starts, when it moves, and when it ends. We use those notifications to create and update the LineOverlay.
Since we're not providing any images as part of our NSDraggingItem, the session won't draw anything being dragged. This is good.
By default, if the drag ends outside of a valid destination, the session will animate… nothing… back to the start of the drag, before notifying the source that the drag has ended. During this animation, the line overlay hangs around, frozen. It looks broken. We tell the session not to animate back to the start to avoid this.
Since this is just a demo, the “work” we do to connect the endpoints in connect(to:) is just printing their descriptions. In a real app, you'd actually modify your data model.
class ConnectionDragController: NSObject, NSDraggingSource {
    var sourceEndpoint: DragEndpoint?

    func connect(to target: DragEndpoint) {
        Swift.print("Connect \(sourceEndpoint!) to \(target)")
    }

    func trackDrag(forMouseDownEvent mouseDownEvent: NSEvent, in sourceEndpoint: DragEndpoint) {
        self.sourceEndpoint = sourceEndpoint
        let item = NSDraggingItem(pasteboardWriter: NSPasteboardItem(pasteboardPropertyList: "\(sourceEndpoint)", ofType: kUTTypeData as String)!)
        let session = sourceEndpoint.beginDraggingSession(with: [item], event: mouseDownEvent, source: self)
        session.animatesToStartingPositionsOnCancelOrFail = false
    }

    func draggingSession(_ session: NSDraggingSession, sourceOperationMaskFor context: NSDraggingContext) -> NSDragOperation {
        switch context {
        case .withinApplication: return .generic
        case .outsideApplication: return []
        }
    }

    func draggingSession(_ session: NSDraggingSession, willBeginAt screenPoint: NSPoint) {
        sourceEndpoint?.state = .source
        lineOverlay = LineOverlay(startScreenPoint: screenPoint, endScreenPoint: screenPoint)
    }

    func draggingSession(_ session: NSDraggingSession, movedTo screenPoint: NSPoint) {
        lineOverlay?.endScreenPoint = screenPoint
    }

    func draggingSession(_ session: NSDraggingSession, endedAt screenPoint: NSPoint, operation: NSDragOperation) {
        lineOverlay?.removeFromScreen()
        sourceEndpoint?.state = .idle
    }

    func ignoreModifierKeys(for session: NSDraggingSession) -> Bool { return true }

    private var lineOverlay: LineOverlay?
}
That's all you need. As a reminder, you can find a link at the top of this answer to a github repo containing a complete demo project.
Using a transparent NSWindow:
var window: NSWindow!

func createLinePath(from: NSPoint, to: NSPoint) -> CGPath {
    let path = CGMutablePath()
    path.move(to: from)
    path.addLine(to: to)
    return path
}

override func viewDidLoad() {
    super.viewDidLoad()

    // Transparent window
    window = NSWindow()
    window.styleMask = .borderless
    window.backgroundColor = .clear
    window.isOpaque = false
    window.hasShadow = false

    // Line
    let line = CAShapeLayer()
    line.path = createLinePath(from: NSPoint(x: 0, y: 0), to: NSPoint(x: 100, y: 100))
    line.lineWidth = 10.0
    line.strokeColor = NSColor.blue.cgColor

    // Update the line on mouse moves.
    NSEvent.addLocalMonitorForEvents(matching: [.mouseMoved]) {
        let newPos = NSEvent.mouseLocation()
        line.path = self.createLinePath(from: NSPoint(x: 0, y: 0), to: newPos)
        return $0
    }

    window.contentView!.layer = line
    window.contentView!.wantsLayer = true
    window.setFrame(NSScreen.main()!.frame, display: true)
    window.makeKeyAndOrderFront(nil)
}
Trying to adapt Rob Mayoff's excellent solution above to my own project's interface, which is based around an NSOutlineView, I ran into a few problems. In case it helps anyone trying to achieve the same thing, I'll detail those pitfalls in this answer.
The sample code provided in the solution detects the start of a drag by implementing mouseDown(with:) on the view controller, and then calling hitTest(_:) on the window's content view in order to obtain the DragEndpoint subview where the (potential) drag is originating. When using outline views, this causes the two pitfalls detailed in the next sections.
1. Mouse-Down Event
It seems that when a table view or outline view is involved, mouseDown(with:) never gets called on the view controller; we instead need to override that method in the outline view itself, along these lines:
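A sketch of such an override (this assumes the ConnectionDragController and DragEndpoint types from the answer above, and that DragEndpoint is an NSControl subclass, as explained in section 2 below):
class ConnectionOutlineView: NSOutlineView {
    private var dragController: ConnectionDragController?

    override func mouseDown(with event: NSEvent) {
        guard let superview = superview else {
            super.mouseDown(with: event)
            return
        }
        // hitTest(_:) expects a point in the superview's coordinate system.
        let point = superview.convert(event.locationInWindow, from: nil)
        if let endpoint = hitTest(point) as? DragEndpoint {
            let controller = ConnectionDragController()
            dragController = controller // keep a strong reference for the drag's duration
            controller.trackDrag(forMouseDownEvent: event, in: endpoint)
        } else {
            super.mouseDown(with: event)
        }
    }
}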
2. Hit Testing
NSTableView (and by extension NSOutlineView) overrides the NSResponder method validateProposedFirstResponder(_:for:), and this causes the hitTest(_:) method to fail: it always returns the outline view itself, and all subviews (including our target DragEndpoint subview inside the cell) remain inaccessible.
From the documentation:
Views or controls in a table sometimes need to respond to incoming events. To determine whether a particular subview should receive the current mouse event, a table view calls validateProposedFirstResponder:forEvent: in its implementation of hitTest. If you create a table view subclass, you can override validateProposedFirstResponder:forEvent: to specify which views can become the first responder. In this way, you receive mouse events.
At first I tried overriding:
override func validateProposedFirstResponder(_ responder: NSResponder, for event: NSEvent?) -> Bool {
    if responder is DragEndpoint {
        return true
    }
    return super.validateProposedFirstResponder(responder, for: event)
}
...and it worked, but reading the documentation further suggests a smarter, less intrusive approach:
The default NSTableView implementation of validateProposedFirstResponder:forEvent: uses the following logic:
Return YES for all proposed first responder views unless they are instances or subclasses of NSControl.
Determine whether the proposed first responder is an NSControl instance or subclass. If the control is an NSButton object, return YES. If the control is not an NSButton, call the control's hitTestForEvent:inRect:ofView: to see whether the hit area is trackable (that is, NSCellHitTrackableArea) or is an editable text area (that is, NSCellHitEditableTextArea), and return the appropriate value. Note that if a text area is hit, NSTableView also delays the first responder action.
(emphasis mine)
...which is weird, because it feels like it should say:
Return NO for all proposed first responder views unless they are instances or subclasses of NSControl.
But anyway, I instead modified Rob's code to make DragEndpoint a subclass of NSControl (not just NSView), and that works too.
3. Managing the Dragging Session
Because NSOutlineView only exposes a limited number of drag-and-drop events through its data source protocol (and the drag session itself cannot be meaningfully modified from the data source's side), it seems that taking full control of the drag session is not possible unless we subclass the outline view and override the NSDraggingSource methods.
Only by overriding draggingSession(_:willBeginAt:) in the outline view itself can we avoid calling the superclass implementation, and thereby prevent an actual item drag from starting (which would display the dragged row image). A sketch follows.
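A sketch of that override (assuming, as described below, that the line overlay and source endpoint state have been moved into the outline view subclass):
override func draggingSession(_ session: NSDraggingSession, willBeginAt screenPoint: NSPoint) {
    // Deliberately NOT calling super: that suppresses the default dragged-row
    // image while the session (and spring loading) still runs.
    sourceEndpoint?.state = .source
    lineOverlay = LineOverlay(startScreenPoint: screenPoint, endScreenPoint: screenPoint)
}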
We could start a separate drag session from the mouseDown(with:) method of the DragEndpoint subview: when implemented, it is called before the same method on the outline view (which in turn is what triggers the dragging session). But if we move the dragging session away from the outline view, it seems it becomes impossible to get spring loading "for free" when dragging over an expandable item.
So instead, I discarded the ConnectionDragController class and moved all its logic into the outline view subclass: the trackDrag() method, the active DragEndpoint property, and all the NSDraggingSource protocol methods.
Ideally, I would have liked to avoid subclassing NSOutlineView (it is discouraged) and instead implement this behaviour more cleanly, exclusively through the outline view's delegate/data source and/or external classes (like the original ConnectionDragController), but it seems that is impossible.
I haven't got the spring-loading part to work yet (it was working at one point, but not now, so I'm still looking into it...).
I too made a sample project, but I'm still fixing minor issues. I'll post a link to the GitHub repository as soon as it is ready.

Transform a view by dragging its edge (Swift)

I'm trying to make an app where the user can draw or add an arrow, and then transform it (translate, rotate, ...).
So far I can draw the arrow and apply all the transformations to it; what I would like to do now is let the user modify the arrow by dragging its edges.
To draw an arrow, I just create a UIView with a height of 20 px (the thickness of the arrow) and a width of 400 px (the length of the arrow).
func drawArrow(frame: CGRect) {
    let thicknessArrow: CGFloat = 20
    let viewHorizontalArrow = UIView()
    viewHorizontalArrow.frame.origin = CGPoint(x: 100, y: 600)
    viewHorizontalArrow.frame.size = CGSize(width: 400, height: thicknessArrow)
    drawDoubleArrow(viewHorizontalArrow, startPoint: CGPoint(x: 0, y: thicknessArrow / 2), endPoint: CGPoint(x: viewHorizontalArrow.frame.width, y: thicknessArrow / 2), lineWidth: 10, color: UIColor.black)
}
After that, I transform the UIView using the pan, pinch, and rotate gestures.
The drawDoubleArrow function creates an arrow with a UIBezierPath and adds it to the layer of the UIView.
I hope these explanations are clear enough :).
Could you help me find a solution?
Thanks !
UIBezierPath on its own cannot easily be manipulated by gestures because it doesn't recognize them. But a UIView can recognize gestures. Combine that with a CALayer's ability to be transformed and its hitTest(_:) method, and I think you can achieve what you want.
// I'm declaring the layer as a shape layer, but depending on how thick your arrow is, you may need a rectangular one
let panGesture = UIPanGestureRecognizer()
var myLayer = CAShapeLayer()
In viewDidLoad (of the view controller):
// Assuming you've created myBezierPath, an arrow path
myLayer.path = myBezierPath.cgPath
view.layer.addSublayer(myLayer) // the layer must be in the layer tree to be visible
// Add the pan gesture to the view controller's view
panGesture.addTarget(self, action: #selector(moveLayer))
view.addGestureRecognizer(panGesture)
And moveLayer:
@objc func moveLayer(_ recognizer: UIPanGestureRecognizer) {
    let p = recognizer.location(in: view)
    // You will want to loop through all the layers you wish to transform.
    if myLayer.hitTest(p) != nil {
        myLayer.position = p
    }
}
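To get closer to the original goal (dragging an edge to reshape the arrow), one possible extension of the same idea is to hit-test near the arrow's endpoints and rebuild the path; endPoints and makeArrowPath(from:to:) here are hypothetical names, not part of the answer above:
var grabbedEnd: Int? // 0 = start point, 1 = end point
var endPoints = [CGPoint(x: 50, y: 300), CGPoint(x: 450, y: 300)]

@objc func dragEdge(_ recognizer: UIPanGestureRecognizer) {
    let p = recognizer.location(in: view)
    switch recognizer.state {
    case .began:
        // Grab whichever endpoint is within 30 points of the touch, if any.
        grabbedEnd = endPoints.firstIndex { hypot($0.x - p.x, $0.y - p.y) < 30 }
    case .changed:
        if let i = grabbedEnd {
            endPoints[i] = p
            myLayer.path = makeArrowPath(from: endPoints[0], to: endPoints[1]).cgPath // hypothetical path builder
        }
    default:
        grabbedEnd = nil
    }
}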

I'm having some trouble using x and y coordinates from touchesBegan as the center key in a CI filter

I'm trying to set things up so that the user taps a location in an image view, and the X,Y of the tap becomes the center point (kCIInputCenterKey) of the current image filter in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function; it checks whether the user is touching inside the image view, and if not, it sets the filter's center key to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.imageView)
        if touch.view == imageView {
            print("touchesBegan | This is an ImageView")
            x = position.x * 4
            y = position.y * 4
            imgChecker = 1
        } else {
            print("touchesBegan | This is not an ImageView")
            x = 0
            y = 0
            imgChecker = 0
        }
        print("x: \(x)")
        print("y: \(y)")
    }
}
As you can see, I have the checker there to make the filter center appear in the middle of the image if the tap landed outside the image view. I'm also printing the tapped coordinates to Xcode's console, and they appear without issue.
This is the part where I apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)
if imgChecker == 1 {
    self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
} else {
    self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
x = 0
y = 0
let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the CGRect I'm using; ignore the "frame" in there, it's just an image view in front of the first one that allows me to save a "frame" over the current filtered image:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
When I do set the x,y by tapping inside the image view, the center of the filter keeps appearing in the lower left-hand corner of it, regardless of where I tapped. If I keep tapping around the image view, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated, thank you.
Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is lower left, not top left.
Second (and I think this is the issue), UIKit coordinates (meaning UIImage and potentially CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and turn it into the CIImage.extent coordinate.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan is CGPoint(200,100). (NOTE: If your UIImageView is part of a larger superview, it could be something more like 250,400 - I'm working on the point within the UIImageView.)
Scaling down the image size (remember, AspectFit) means the image is actually centered vertically (appearing landscape) within the UIImageView at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing by 4 gives you the scaled down image view coordinates, and as you'd expect, multiplying up will give you the needed vector coordinates. So a touchesBegan CGPoint(200,100) turns into a CIVector(800,400).
I have some code written - not much in the way of comments, done in Swift 2 (I think), and very poorly written - as part of a subclass (it probably should have been an extension) of UIImageView that computes all this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind: images in AspectFit can also be scaled up!
One last note on CIImage's extent. Many times it's the UIImage's size. But many masks and generated outputs may have an infinite extent.
SECOND EDIT:
I made a stupid mistake in my scaling example. Remember, the CIImage origin is bottom left, not upper left. So in my example, CGPoint(200,100), scaled to CGPoint(800,400), becomes CIVector(800,100) once the y coordinate is flipped (500 - 400 = 100).
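To make the conversion concrete, here is a sketch of a helper of my own (not from the original answer) that turns a touch point in an aspect-fit UIImageView into Core Image coordinates. Note that, unlike the quick multiply-by-4 example above, it also subtracts the letterbox offset before scaling:
func ciCenter(for touchPoint: CGPoint, in imageView: UIImageView) -> CIVector? {
    guard let image = imageView.image else { return nil }
    let viewSize = imageView.bounds.size
    let imageSize = image.size
    // Aspect-fit scale and the letterboxed rect the image occupies in the view.
    let scale = min(viewSize.width / imageSize.width, viewSize.height / imageSize.height)
    let fittedSize = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)
    let origin = CGPoint(x: (viewSize.width - fittedSize.width) / 2,
                         y: (viewSize.height - fittedSize.height) / 2)
    // Point within the displayed image (still top-left origin, y down).
    let p = CGPoint(x: touchPoint.x - origin.x, y: touchPoint.y - origin.y)
    guard p.x >= 0, p.y >= 0, p.x <= fittedSize.width, p.y <= fittedSize.height else {
        return nil // the touch landed in the letterbox area, outside the image
    }
    // Scale up to image pixels, then flip y for Core Image's bottom-left origin.
    return CIVector(x: p.x / scale, y: imageSize.height - p.y / scale)
}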
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last was due to my stupidity! Worthwhile to note, but still.)
Now we're talking about "near real time" updating using a Core Image filter. I'm planning to eventually write some blog posts on this, but the real source you want is Simon Gladman (he's moved on; look back to his posts from 2015-16) and his eBook Core Image for Swift (it uses Swift 2, but most of it upgrades automatically to Swift 3). Just giving credit where it is due.
If you want "near real time" use of Core Image, you need to use the GPU. UIView and all its subclasses (meaning UIKit) use the CPU. Using the GPU means rendering through OpenGL ES, specifically a GLKView, which is roughly the GPU-backed equivalent of a UIImageView.
Here's my subclass of it:
open class GLKViewDFD: GLKView {
    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    open var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        enableSetNeedsDisplay = true
    }

    override open func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            // Letterbox the image within the drawable (AspectFit).
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            rgb = (0, 0, 0)
            rgb = myClearColor.rgb() // rgb() is a custom UIColor extension (not shown here)
            glClearColor(Float(rgb.0!) / 256.0, Float(rgb.1!) / 256.0, Float(rgb.2!) / 256.0, 0.0)
            glClear(0x00004000) // GL_COLOR_BUFFER_BIT
            // Set the blend mode to "source over" so that CI will use that.
            glEnable(0x0BE2) // GL_BLEND
            glBlendFunc(1, 0x0303) // GL_ONE, GL_ONE_MINUS_SRC_ALPHA
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
A few notes.
I absolutely need to credit Objc.io for much of this. It is also a great resource for Swift and UIKit coding.
I wanted AspectFit content mode with the option to change the "background color" of the GLKView, which is why I subclassed it and added the clearColor property.
Between the two resources I linked to, you should have what you need for well-performing, near real time use of Core Image on the GPU. One reason my aforementioned code that scaled coordinates after getting a filter's output was never updated? It didn't need it.
Lots here to process, I know. But I've found this side of things (Core Image effects) to be the most fun side (and pretty cool too) of iOS.

How do I make a UIScrollView scrollable only when touches are inside a custom shape?

I am working on creating an image collage app, and I am going to have multiple UIScrollViews. The scroll views will have boundaries with custom shapes, and the user will be able to dynamically change the corners of the shapes where they intersect. The scroll views have UIImageViews as subviews.
The scroll views are subviews of other UIViews. I applied a CAShapeLayer mask to each of these UIViews; that way I can mask the scroll views with no problem.
But the problem is that I can only scroll the contents of the last scroll view added. Also, I can pan and zoom beyond the boundaries of the masks. I should only be able to pan or zoom when I am touching inside the boundaries of the polygons that I have as masks.
I tried:
scrollView.clipsToBounds = true
scrollView.layer.masksToBounds = true
But the result is the same.
Unfortunately I'm not able to post screenshots, but here is the code that I use to create masks for the UIViews:
func createMask(v: UIView, viewsToMask: [UIView], anchorPoint: CGPoint)
{
    let frame = v.bounds
    var shapeLayer = [CAShapeLayer]()
    var path = [CGMutablePathRef]()
    for _ in 0...3 {
        path.append(CGPathCreateMutable())
        shapeLayer.append(CAShapeLayer())
    }

    // Define frame constants.
    let center = CGPointMake(frame.origin.x + frame.size.width / 2, frame.origin.y + frame.size.height / 2)
    let bottomLeft = CGPointMake(frame.origin.x, frame.origin.y + frame.size.height)
    let bottomRight = CGPointMake(frame.origin.x + frame.size.width, frame.origin.y + frame.size.height)

    switch frameType {
    case 1:
        // First view for frame type 1
        CGPathMoveToPoint(path[0], nil, 0, 0)
        CGPathAddLineToPoint(path[0], nil, bottomLeft.x, bottomLeft.y)
        CGPathAddLineToPoint(path[0], nil, anchorPoint.x, bottomLeft.y)
        CGPathAddLineToPoint(path[0], nil, anchorPoint.x, anchorPoint.y)
        CGPathCloseSubpath(path[0])
        // Second view for frame type 1
        CGPathMoveToPoint(path[1], nil, anchorPoint.x, anchorPoint.y)
        CGPathAddLineToPoint(path[1], nil, anchorPoint.x, bottomLeft.y)
        CGPathAddLineToPoint(path[1], nil, bottomRight.x, bottomRight.y)
        CGPathAddLineToPoint(path[1], nil, bottomRight.x, anchorPoint.y)
        CGPathCloseSubpath(path[1])
        // Third view for frame type 1
        CGPathMoveToPoint(path[2], nil, 0, 0)
        CGPathAddLineToPoint(path[2], nil, anchorPoint.x, anchorPoint.y)
        CGPathAddLineToPoint(path[2], nil, bottomRight.x, anchorPoint.y)
        CGPathAddLineToPoint(path[2], nil, bottomRight.x, 0)
        CGPathCloseSubpath(path[2])
    default:
        break
    }

    for (key, view) in enumerate(viewsToMask) {
        shapeLayer[key].path = path[key]
        view.layer.mask = shapeLayer[key]
    }
}
So, how can I make the scroll views behave in such a way that they will only scroll or zoom content when touches happen inside their corresponding mask boundaries?
EDIT:
According to the answer to this question: UIView's masked-off area still touchable? the masks only modify what you can see, not the area that you can touch. So I subclassed the UIScrollView and tried to override the hitTest:withEvent: method like so,
protocol CoolScrollViewDelegate: class {
    var scrollViewPaths: [CGMutablePathRef] { get set }
}

class CoolScrollView: UIScrollView {
    weak var coolDelegate: CoolScrollViewDelegate?

    override func hitTest(point: CGPoint, withEvent event: UIEvent?) -> UIView? {
        if CGPathContainsPoint(coolDelegate?.scrollViewPaths[tag], nil, point, true) {
            return self
        } else {
            return nil
        }
    }
}
But with this implementation I can only check against the last scroll view, and the path boundaries change when I zoom in. For example, if I zoom in on the image, the hitTest:withEvent: method returns nil.
I would agree with @Kendel in the comments: to start with, it might be an easier approach to create a UIScrollView subclass that knows how to mask itself with a particular shape. Keeping the shape logic within a scroll view subclass will keep things tidy, and will let you easily restrict touches to within the shape (I'll come to that in a minute).
It's a little hard to tell from your description exactly how your shaped views should behave, but as a brief example your ShapedScrollView might look something like this:
import UIKit

class ShapedScrollView: UIScrollView {
    // MARK: Types
    enum Shape {
        case First // Choose a better name!
    }

    // MARK: Properties
    private let shapeLayer = CAShapeLayer()

    var shape: Shape = .First {
        didSet { setNeedsLayout() }
    }

    // MARK: Initializers
    init(frame: CGRect, shape: Shape = .First) {
        self.shape = shape
        super.init(frame: frame)
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
    }

    // MARK: Layout
    override func layoutSubviews() {
        super.layoutSubviews()
        updateShape()
    }

    // MARK: Updating the Shape
    private func updateShape() {
        // Disable Core Animation actions to prevent changes to the shape layer animating implicitly.
        CATransaction.begin()
        CATransaction.setDisableActions(true)
        if bounds.size != shapeLayer.bounds.size {
            // Bounds size has changed; completely update the shape.
            shapeLayer.frame = CGRect(origin: contentOffset, size: bounds.size)
            shapeLayer.path = pathForShape(shape).CGPath
            layer.mask = shapeLayer
        } else {
            // Bounds size has NOT changed; just update the origin of the shape
            // frame to match the content offset, so the mask appears stationary
            // as we scroll.
            var shapeFrame = shapeLayer.frame
            shapeFrame.origin = contentOffset
            shapeLayer.frame = shapeFrame
        }
        CATransaction.commit()
    }

    private func pathForShape(shape: Shape) -> UIBezierPath {
        let path = UIBezierPath()
        switch shape {
        case .First:
            // Build the shape path, whatever that might be...
            // path.moveToPoint(...)
            // ...
            break // a case body can't be empty, so keep this break
        }
        return path
    }
}
So making the touches only work inside the specified shape is the easy part. We already have a reference to a shape layer that describes the shape we want to restrict touches to. UIView provides a helpful hit-testing method that lets you specify whether or not a particular point should be considered to be "inside" that view: pointInside(_:withEvent:). Simply add the following override to ShapedScrollView:
override func pointInside(point: CGPoint, withEvent event: UIEvent?) -> Bool {
    return CGPathContainsPoint(shapeLayer.path, nil, layer.convertPoint(point, toLayer: shapeLayer), false)
}
This just says: "If point (converted to the shape layer's coordinate system) is inside the shape's path, consider it to be inside the view; otherwise consider it outside the view."
If a scroll view that masks itself isn't appropriate, you can still adopt this technique by using a ShapedScrollContainerView: UIView with a scrollView property. Then, apply the shape mask to the container as above, and again use pointInside(_:withEvent:) to test whether it should respond to particular touch points.
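A minimal sketch of that container approach (my own, written in current Swift syntax rather than the Swift 2 used above; the path-contains test requires iOS 11 or later):
class ShapedScrollContainerView: UIView {
    let scrollView = UIScrollView()
    private let shapeLayer = CAShapeLayer()

    // Setting the path masks the container and defines the touchable region.
    var shapePath: UIBezierPath? {
        didSet {
            shapeLayer.path = shapePath?.cgPath
            layer.mask = shapePath == nil ? nil : shapeLayer
        }
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        scrollView.frame = bounds
        scrollView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        addSubview(scrollView)
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        addSubview(scrollView)
    }

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        // Only accept touches that fall inside the mask shape.
        guard let path = shapeLayer.path else { return false }
        return path.contains(layer.convert(point, to: shapeLayer))
    }
}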