How to transform an NSImageView in Cocoa? - Swift

I'm struggling with Cocoa. I'm trying to port this piece of animation from iOS to Cocoa. The idea is to slightly decrease the size of the NSImageView and, after the animation completes, increase it again to its original size, so that it looks as if the button (picture) was pressed.
@IBOutlet weak var vpnButton: UIImageView!

@objc func vpnButtonPressed(pressedGestureRecognizer: UILongPressGestureRecognizer) {
    UserDefaults.standard.set(true, forKey: Constants.vpnButtonTapped)
    if pressedGestureRecognizer.state == .began {
        UIView.animate(withDuration: 0.15, animations: { () -> Void in
            self.vpnButton?.transform = CGAffineTransform(scaleX: 0.965, y: 0.965)
        })
    } else if pressedGestureRecognizer.state == .ended {
        UIView.animate(withDuration: 0.15, animations: { () -> Void in
            self.vpnButton.isHighlighted = !self.vpnButton.isHighlighted
            self.vpnButton?.transform = CGAffineTransform(scaleX: 1, y: 1)
        })
    }
}
In Cocoa, I was able to find the click gesture (NSClickGestureRecognizer); I'm not sure if it is the best choice. So I came up with this:
@objc func vpnButtonPressed(clickedGestureRecognizer: NSGestureRecognizer) {
    UserDefaults.standard.set(true, forKey: Constants.vpnButtonTapped)
    print("clicked")
    NSAnimationContext.runAnimationGroup({ _ in
        // Indicate the duration of the animation
        NSAnimationContext.current.duration = 0.5
        let transform = CGAffineTransform(scaleX: 0.965, y: 0.965)
        self.vpnButton.layer?.setAffineTransform(transform)
    }, completionHandler: {
        // var transform = self.vpnButton.layer?.affineTransform()
        // transform = CGAffineTransform(scaleX: 1, y: 1)
        // self.vpnButton.layer?.setAffineTransform(transform!)
        print("Animation completed")
    })
}
This worked only once, and it moved the image slightly aside rather than making it smaller. If I uncomment the three lines in the completion handler, I don't see the animation moving it back either.

As far as I understand, you need something like the following (sending the action itself, if needed, is out of scope; this is just the animation):
class MyImageView: NSImageView {

    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        self.wantsLayer = true    // back the view with a layer so it can be transformed
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        self.wantsLayer = true
    }

    override func mouseDown(with event: NSEvent) {
        NSAnimationContext.runAnimationGroup({ context in
            context.duration = 0.15
            self.layer?.setAffineTransform(CGAffineTransform(scaleX: 0.965, y: 0.965))
        })
    }

    override func mouseUp(with event: NSEvent) {
        NSAnimationContext.runAnimationGroup({ context in
            context.duration = 0.15
            self.layer?.setAffineTransform(.identity)
        })
    }
}
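Note that an AppKit layer's anchorPoint defaults to (0, 0) (the bottom-left corner), unlike iOS, so a scale transform shrinks the view toward that corner and the image appears to shift aside. A minimal sketch to scale around the center instead, assuming the view is layer-backed as above:
override func viewDidMoveToWindow() {
    super.viewDidMoveToWindow()
    // Re-anchor the layer at its center so scale transforms keep the
    // image centered; compensate the position so the view doesn't jump.
    if let layer = self.layer {
        layer.anchorPoint = CGPoint(x: 0.5, y: 0.5)
        layer.position = CGPoint(x: frame.midX, y: frame.midY)
    }
}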

Related

How to move NSImageView inside NSView in Cocoa?

In my macOS app I need the ability to move an NSImageView around inside an NSView after a mouseDown event (triggered by the user) happens in that NSImageView. When the user triggers a mouseUp event, the view must move in the direction of the last mouseDragged event and stay there. During the move I want the NSImageView to be visible on screen (it should move along with the mouse cursor).
I've read the Handling Mouse Events guide by Apple: https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/EventOverview/HandlingMouseEvents/HandlingMouseEvents.html
I've also downloaded this sample code: https://developer.apple.com/library/content/samplecode/DragItemAround/Introduction/Intro.html
Both links contain code in Objective-C, and the code for DragItemAround is outdated. I've tried to search for solutions on GitHub and in other Stack Overflow threads, but have not found any working ones.
I would be glad to hear answers to this question. I'm using Swift 3.
I've created a custom MovableImageView which is a subclass of NSImageView with this code:
import Cocoa

class MovableImageView: NSImageView {

    override func draw(_ dirtyRect: NSRect) {
        super.draw(dirtyRect)
        // Drawing code here.
        // setup the starting location of the
        // draggable item
    }

    override func mouseDown(with event: NSEvent) {
        Swift.print("mouseDown")
        //let window = self.window!
        //let startingPoint = event.locationInWindow
    }

    override func mouseDragged(with event: NSEvent) {
        Swift.print("mouseDragged")
        // NSView has no backgroundColor property; color the backing layer instead
        self.layer?.backgroundColor = NSColor.black.cgColor
    }

    override func mouseUp(with event: NSEvent) {
        Swift.print("mouseUp")
    }
}
After that I set this class on the NSImageView in Interface Builder. I've also set constraints for this NSImageView in Interface Builder.
Now I'm trying to figure out how to move the NSImageView inside the NSView. (Swift 3, Xcode 8)
You need to save the mouseDown point and use it for the offset later. Please check the following code:
class MovableImageView: NSImageView {

    var firstMouseDownPoint: NSPoint = NSZeroPoint

    init() {
        super.init(frame: NSZeroRect)
        self.wantsLayer = true
        self.layer?.backgroundColor = NSColor.red.cgColor
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func draw(_ dirtyRect: NSRect) {
        super.draw(dirtyRect)
        // Drawing code here.
    }

    override func mouseDown(with event: NSEvent) {
        Swift.print("mouseDown")
        firstMouseDownPoint = (self.window?.contentView?.convert(event.locationInWindow, to: self))!
    }

    override func mouseDragged(with event: NSEvent) {
        Swift.print("mouseDragged")
        let newPoint = (self.window?.contentView?.convert(event.locationInWindow, to: self))!
        let offset = NSPoint(x: newPoint.x - firstMouseDownPoint.x, y: newPoint.y - firstMouseDownPoint.y)
        let origin = self.frame.origin
        let size = self.frame.size
        self.frame = NSRect(x: origin.x + offset.x, y: origin.y + offset.y, width: size.width, height: size.height)
    }

    override func mouseUp(with event: NSEvent) {
        Swift.print("mouseUp")
    }
}
In the parent view, just add this MovableImageView as a subview, like this:
let view = MovableImageView()
view.frame = NSRect(x: 0, y: 0, width: 100, height: 100)
self.view.addSubview(view) // self.view is the parent view
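As a side note, the two-step conversion through window?.contentView can be collapsed: NSView converts straight from window coordinates when you pass nil as the source view. A one-line sketch of the equivalent call:
// Equivalent one-step conversion from window coordinates:
let point = self.convert(event.locationInWindow, from: nil)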
Constraint version of @brianLikeApple's answer:
class MovableImageView: NSImageView {

    var firstMouseDownPoint: NSPoint = NSZeroPoint

    // Connect these to the image view's X and Y position constraints
    var xConstraint: NSLayoutConstraint!
    var yConstraint: NSLayoutConstraint!

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        self.wantsLayer = true
        self.layer?.backgroundColor = NSColor.red.cgColor
    }

    override func draw(_ dirtyRect: NSRect) {
        super.draw(dirtyRect)
        // Drawing code here.
    }

    override func mouseDown(with event: NSEvent) {
        Swift.print("mouseDown")
        firstMouseDownPoint = (self.window?.contentView?.convert(event.locationInWindow, to: self))!
    }

    override func mouseDragged(with event: NSEvent) {
        Swift.print("mouseDragged")
        let newPoint = (self.window?.contentView?.convert(event.locationInWindow, to: self))!
        let offset = NSPoint(x: newPoint.x - firstMouseDownPoint.x, y: newPoint.y - firstMouseDownPoint.y)
        let origin = self.frame.origin
        let size = self.frame.size
        yConstraint.constant = (self.superview?.frame.height)! - origin.y - offset.y - size.height
        xConstraint.constant = origin.x + offset.x
    }

    override func mouseUp(with event: NSEvent) {
        Swift.print("mouseUp")
    }
}
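If the view is built in code rather than in a storyboard, the two constraints can be created and stored when the view is added. A minimal sketch, assuming imageView is an already-created MovableImageView and container is its intended superview:
imageView.translatesAutoresizingMaskIntoConstraints = false
container.addSubview(imageView)
imageView.xConstraint = imageView.leadingAnchor.constraint(equalTo: container.leadingAnchor)
imageView.yConstraint = imageView.topAnchor.constraint(equalTo: container.topAnchor)
NSLayoutConstraint.activate([
    imageView.xConstraint,
    imageView.yConstraint,
    imageView.widthAnchor.constraint(equalToConstant: 100),
    imageView.heightAnchor.constraint(equalToConstant: 100)
])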
Here's an NSView you can drag. It uses constraints, exactly like @ExeRhythm's answer.
import AppKit

class DraggableNSView: NSView {

    var down: NSPoint = NSZeroPoint

    @IBOutlet var across: NSLayoutConstraint!
    @IBOutlet var up: NSLayoutConstraint!

    override func mouseDown(with e: NSEvent) {
        down = window?.contentView?.convert(e.locationInWindow, to: self) ?? NSZeroPoint
    }

    override func mouseDragged(with e: NSEvent) {
        guard let m = window?.contentView?.convert(e.locationInWindow, to: self) else { return }
        let p = frame.origin + (m - down)
        across.constant = p.x
        up.constant = p.y
    }
}
Don't forget that you must connect the across and up constraints in the storyboard!
Note: if you prefer a "down" constraint rather than "up", it is:
// if you prefer a "down" constraint:
down.constant = (superview?.frame.height)! - p.y - frame.size.height
As always, add these two operator functions to your project:
func - (lhs: CGPoint, rhs: CGPoint) -> CGPoint {
    return CGPoint(x: lhs.x - rhs.x, y: lhs.y - rhs.y)
}

func + (lhs: CGPoint, rhs: CGPoint) -> CGPoint {
    return CGPoint(x: lhs.x + rhs.x, y: lhs.y + rhs.y)
}
Note that you can, of course, do this with any subclass of NSView, such as NSImageView:
class DraggableNSImageView: NSImageView {
    ... same

How to make NSView draggable in Swift (OSX Mac app)?

I'm trying to make two NSViews draggable, and I was wondering how to code that in Swift for an OS X Mac app. I'm thinking of putting the two NSViews within a parent NSView and making them draggable within it. For example, create NSViewA, nested within which are NSViewB and NSViewC (both draggable within NSViewA). I would like to code this in Swift. Cheers!
You can do this fairly easily if you take advantage of the information passed on the responder chain. Consider this custom NSView class.
class DraggableView: NSView {

    var startPoint: NSPoint?
    var frameOrigin: NSPoint?

    override func mouseDown(with event: NSEvent) {
        startPoint = event.locationInWindow
        frameOrigin = frame.origin
    }

    override func mouseDragged(with event: NSEvent) {
        let offset = event.locationInWindow - startPoint!
        frame.origin = frameOrigin! + offset
    }

    override func mouseUp(with event: NSEvent) {
        startPoint = nil
        frameOrigin = nil
    }
}
To make the math work for NSPoint, I overloaded the operators in an extension:
extension NSPoint {
    static func - (pointA: NSPoint, pointB: NSPoint) -> NSPoint {
        return NSPoint(x: pointA.x - pointB.x, y: pointA.y - pointB.y)
    }

    static func + (pointA: NSPoint, pointB: NSPoint) -> NSPoint {
        return NSPoint(x: pointA.x + pointB.x, y: pointA.y + pointB.y)
    }
}
Then your ViewController class is just setup code:
class ViewController: NSViewController {

    let viewA: DraggableView = DraggableView()
    let viewB: DraggableView = DraggableView()

    override func viewDidLoad() {
        super.viewDidLoad()
        viewA.frame = NSRect(origin: .zero, size: NSSize(width: 100, height: 100))
        viewB.frame = NSRect(origin: NSPoint(x: 125, y: 0), size: NSSize(width: 100, height: 100))
        viewA.wantsLayer = true
        viewB.wantsLayer = true
        viewA.layer?.backgroundColor = NSColor.blue.cgColor
        viewB.layer?.backgroundColor = NSColor.green.cgColor
        view.addSubview(viewA)
        view.addSubview(viewB)
    }
}
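One caveat: mouseDragged force-unwraps startPoint and frameOrigin, which will crash if a stray drag event ever arrives without a matching mouseDown. A guarded variant of the same method (a sketch using the same properties):
override func mouseDragged(with event: NSEvent) {
    guard let start = startPoint, let origin = frameOrigin else { return }
    frame.origin = origin + (event.locationInWindow - start)
}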

Swift: how to implement a resizable, personalized UIView with four buttons at the vertices

I would like to implement a resizable and draggable UIView, as the following picture shows:
This is the class I have implemented so far, using simple gesture recognizers:
class PincherView: UIView {

    var pinchRec: UIPinchGestureRecognizer!
    var rotateRec: UIRotationGestureRecognizer!
    var panRec: UIPanGestureRecognizer!
    var containerView: UIView!

    init(size: CGRect, container: UIView) {
        super.init(frame: size)
        self.containerView = container

        self.pinchRec = UIPinchGestureRecognizer(target: self, action: "handlePinch:")
        self.addGestureRecognizer(self.pinchRec)
        self.rotateRec = UIRotationGestureRecognizer(target: self, action: "handleRotate:")
        self.addGestureRecognizer(self.rotateRec)
        self.panRec = UIPanGestureRecognizer(target: self, action: "handlePan:")
        self.addGestureRecognizer(self.panRec)

        self.backgroundColor = UIColor(red: 0.0, green: 0.6, blue: 1.0, alpha: 0.4)
        self.userInteractionEnabled = true
        self.multipleTouchEnabled = true
        self.hidden = false
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func handlePinch(recognizer: UIPinchGestureRecognizer) {
        if let view = recognizer.view {
            view.transform = CGAffineTransformScale(view.transform, 1.0, recognizer.scale)
            //view.transform = CGAffineTransformScale(view.transform, recognizer.scale, recognizer.scale)
            recognizer.scale = 1
        }
    }

    func handleRotate(recognizer: UIRotationGestureRecognizer) {
        if let view = recognizer.view {
            view.transform = CGAffineTransformRotate(view.transform, recognizer.rotation)
            recognizer.rotation = 0
        }
    }

    func handlePan(recognizer: UIPanGestureRecognizer) {
        let translation = recognizer.translationInView(containerView)
        if let view = recognizer.view {
            view.center = CGPoint(x: view.center.x + translation.x,
                                  y: view.center.y + translation.y)
        }
        recognizer.setTranslation(CGPointZero, inView: containerView)
    }
}
As you can imagine, with a simple UIPinchGestureRecognizer the UIView is scaled equally in the x and y directions, but I want users to be able to be as precise as possible, so that they can scale in a single direction or alter the shape of the UIView, which need not remain a rectangle. Moreover, it's most important that the user can set the height of the UIView precisely. That's why I need those 4 buttons at the corners.
I would just like some hints about where to start and how. Thank you.
UPDATE
This is what I have done up to now.
This gives me the following hierarchy:
- UIImageView, embedded in
- UIView, embedded in
- UIScrollView, embedded in
- UIViewController
I had to add a further UIView to use as a container, so that all the sub-UIViews it contains are zoomed in/out together with the picture; this gives more precision when overlapping the azure UIView onto photographed objects.
Concerning the pin (the little white circle), this is the class I have tried to implement up to now:
class PinImageView: UIImageView {

    var lastLocation: CGPoint!
    var container: UIView!
    var viewToScale: UIView!
    var id: String!

    init(imageIcon: UIImage?, location: CGPoint, container: UIView, viewToScale: UIView, id: String) {
        super.init(image: imageIcon)
        self.lastLocation = location
        self.frame = CGRect(x: location.x, y: location.y, width: 10.0, height: 10.0)
        self.center = location
        self.userInteractionEnabled = true
        self.container = container
        self.viewToScale = viewToScale
        self.id = id
        self.hidden = false
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func getScaleParameters(arrivalPoint: CGPoint) -> (sx: CGFloat, sy: CGFloat) {
        //var result:(sx:CGFloat, sy:CGFloat) = (1.0, 1.0)
        var scaleX: CGFloat = 1.0
        var scaleY: CGFloat = 1.0
        switch self.id {
        case "up_left":
            /* up_left is moving towards right */
            if self.lastLocation.x < arrivalPoint.x {
                scaleX = -0.99
            }
            /* up_left is moving towards left */
            else {
                scaleX = 0.9
            }
            /* up_left is moving towards up */
            if self.lastLocation.y < arrivalPoint.y {
                scaleY = -0.9
            }
            /* up_left is moving towards down */
            else {
                scaleY = 0.9
            }
        default:
            scaleX = 1.0
            scaleY = 1.0
        }
        return (scaleX, scaleY)
    }

    override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
        if let touch: UITouch = touches.first {
            let scaleParameters: (sx: CGFloat, sy: CGFloat) = self.getScaleParameters(touch.locationInView(self.container))
            self.viewToScale.transform = CGAffineTransformMakeScale(scaleParameters.sx, scaleParameters.sy)
            self.center = touch.locationInView(self.container)
            self.lastLocation = touch.locationInView(self.container)
        }
    }
}
where viewToScale is the azure floating UIView called pincherViewRec. The center of the pin is initially at pincherViewRec.frame.origin. Needless to say, it does not work as expected.
Do you have any idea? Any hint that helps me find the right way would be appreciated.
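One hint, as a sketch rather than an answer from the thread: instead of deriving CGAffineTransform scale factors from the drag direction, it is usually simpler to resize the target view's frame directly, keeping the corner opposite the dragged pin fixed. For the "up_left" pin that could look like this, reusing the viewToScale and container properties above:
override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
    guard let touch = touches.first else { return }
    let point = touch.locationInView(self.container)
    if self.id == "up_left" {
        // Keep the bottom-right corner fixed; move the top-left corner with the finger.
        let frame = self.viewToScale.frame
        self.viewToScale.frame = CGRect(x: point.x,
                                        y: point.y,
                                        width: max(frame.maxX - point.x, 10.0),
                                        height: max(frame.maxY - point.y, 10.0))
        self.center = point
        self.lastLocation = point
    }
}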

How to make Google Maps tiles tappable in Swift

I'm working with the Google Maps SDK on iOS 9, and I'm currently displaying tile layers over the map with different colors that represent a value. I want to add the ability to tap a tile and display information about it, such as its actual value. The only thing I found online was to make a transparent marker for each tile and then write a handler for each marker to display the value, but I think there has to be a more efficient way to do this. Can someone lead me in the right direction? Thanks!
class MyMapView: UIView {

    var myMap: GMSMapView!
    var closestPoint: CLLocationCoordinate2D!

    required init?(coder aDecoder: NSCoder) {
        fatalError("This class does not support NSCoding")
    }

    init(closestPoint: CLLocationCoordinate2D!) {
        super.init(frame: CGRectZero)
        self.backgroundColor = UIVariables.blackOpaque
        self.closestPoint = closestPoint

        let camera = GMSCameraPosition.cameraWithLatitude(AppVariables.location2D!.latitude, longitude: AppVariables.location2D!.longitude, zoom: 6)
        self.myMap = GMSMapView.mapWithFrame(CGRectZero, camera: camera)
        self.myMap.mapType = GoogleMaps.kGMSTypeSatellite

        delay(seconds: 0.5) { () -> () in
            let path = GMSMutablePath()
            path.addCoordinate(self.closestPoint!)
            path.addCoordinate(AppVariables.location2D!)
            let bounds = GMSCoordinateBounds(path: path)
            self.myMap.moveCamera(GMSCameraUpdate.fitBounds(bounds, withPadding: 50.0))
            let layer = MyTileLayer()
            layer.map = self.myMap
        }

        self.myMap.translatesAutoresizingMaskIntoConstraints = false
        //self.frame = CGRectMake(0, 0, UIScreen.mainScreen().bounds.width, 400)
        self.addSubview(self.myMap)

        let viewsDictionary = ["map": self.myMap]
        let metricDictionary = ["width": self.frame.size.width / 2]
        let view_constraint_V1: Array = NSLayoutConstraint.constraintsWithVisualFormat("V:|[map]-15-|", options: NSLayoutFormatOptions.AlignAllLeading, metrics: metricDictionary, views: viewsDictionary)
        let view_constraint_H1: Array = NSLayoutConstraint.constraintsWithVisualFormat("H:|-10-[map]-10-|", options: NSLayoutFormatOptions(rawValue: 0), metrics: metricDictionary, views: viewsDictionary)
        self.addConstraints(view_constraint_V1)
        self.addConstraints(view_constraint_H1)
    }

    func delay(seconds seconds: Double, completion: () -> ()) {
        let popTime = dispatch_time(DISPATCH_TIME_NOW, Int64(Double(NSEC_PER_SEC) * seconds))
        dispatch_after(popTime, dispatch_get_main_queue()) {
            completion()
        }
    }

    override func awakeFromNib() {
        super.awakeFromNib()
        // Initialization code
    }
}

class MyTileLayer: GMSTileLayer {
    var webService = WebService()

    override func requestTileForX(x: UInt, y: UInt, zoom: UInt, receiver: GMSTileReceiver!) {
        webService.getTiles(x, y: y, z: zoom, completion: { (result) -> Void in
            receiver.receiveTileWithX(x, y: y, zoom: zoom, image: UIImage(data: result))
        })
    }
}
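One possible direction, as a sketch rather than a confirmed answer: instead of transparent markers, set a GMSMapViewDelegate (self.myMap.delegate = self) and convert the tapped coordinate into tile indices with the standard Web Mercator tile formula, then look up the value for that tile. The delegate method name follows the Swift 2-era SDK used above; check it against your SDK version:
extension MyMapView: GMSMapViewDelegate {
    func mapView(mapView: GMSMapView!, didTapAtCoordinate coordinate: CLLocationCoordinate2D) {
        let zoom = UInt(mapView.camera.zoom)
        let n = pow(2.0, Double(zoom))
        // Standard Web Mercator tile math: longitude -> x, latitude -> y.
        let x = UInt(floor((coordinate.longitude + 180.0) / 360.0 * n))
        let latRad = coordinate.latitude * M_PI / 180.0
        let y = UInt(floor((1.0 - log(tan(latRad) + 1.0 / cos(latRad)) / M_PI) / 2.0 * n))
        print("Tapped tile x: \(x), y: \(y), zoom: \(zoom)")
        // Fetch and display the value for this tile here.
    }
}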

Looping with animations

I have this code
override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.
    // 'img' is a picture imported in your project
    self.mrock.image = UIImage(named: "bird")
    self.rrock.image = UIImage(named: "rock2")
    self.lrock.image = UIImage(named: "rock3")
}

override func viewDidAppear(animated: Bool) {
    super.viewDidAppear(animated)
    if rockNamesArray[1] == "rock2" {
        UIView.animateWithDuration(3.0, animations: { () -> Void in
            self.mrock.frame = CGRectMake(167, 600, self.mrock.bounds.size.width, self.mrock.bounds.size.height)
        })
    }
    if rockNamesArray[2] == "rock3" {
        UIView.animateWithDuration(3.0, animations: { () -> Void in
            self.rrock.frame = CGRectMake(220, 600, self.rrock.bounds.size.width, self.rrock.bounds.size.height)
        })
    }
    if rockNamesArray[0] == "bird" {
        UIView.animateWithDuration(3.0, animations: { () -> Void in
            self.lrock.frame = CGRectMake(115, 600, self.lrock.bounds.size.width, self.lrock.bounds.size.height)
        })
    }
}
I would like to know how to make it so that, when the png reaches the end of the screen, it goes back to its original position. How would I do this?
If I understand your question correctly, you can pass a completion closure to UIView.animateWithDuration that will be executed once your animation is complete. In the closure you can make the view return to its original position, for example:
override func viewDidAppear(animated: Bool) {
    super.viewDidAppear(animated)
    let originalFrame = self.lrock.frame
    UIView.animateWithDuration(3.0, animations: {
        self.lrock.frame = CGRect(origin: CGPoint(x: 115, y: 600), size: self.lrock.frame.size)
    }, completion: { _ in
        // Jump back to the starting frame once the fall finishes.
        self.lrock.frame = originalFrame
    })
}
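If you want a continuous loop rather than a single reset, UIView.animateWithDuration also takes an options parameter; passing .Repeat restarts the animation from the original frame automatically. A sketch using the same lrock outlet:
UIView.animateWithDuration(3.0, delay: 0.0, options: [.Repeat], animations: {
    self.lrock.frame = CGRect(origin: CGPoint(x: 115, y: 600), size: self.lrock.frame.size)
}, completion: nil)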