Rotating UIControl with CAGradientLayer not updating correctly (Swift)

Rather than using a normal button, I subclassed UIControl because I needed to add a gradient to it. It also acts as a stateful button, with a shadow and an activity indicator (not visible in the image below), to stop users hammering the button while (for example) an API call is in progress.
Getting the UIControl to rotate was really tricky; to make it work I put the control inside a container view and added the shadow as a separate view within that container.
Now the issue is that the control does not behave quite like a view on rotation - let me show you a screen grab for context:
This is mid-rotation, but it is just about visible to the eye - the image shows that the gradient is only about 75% of the length of the blue UIView in the image.
To perform this rotation I remove the shadow view and then set the gradient layer's frame to the control's bounds, and this is where the problem is:
func viewRotated() {
    CATransaction.setDisableActions(true)
    shadowView!.removeFromSuperview()
    shadowView!.frame = self.frame
    shadowView!.layer.masksToBounds = false
    shadowView!.layer.shadowOffset = CGSize(width: 0, height: 3)
    shadowView!.layer.shadowRadius = 3
    shadowView!.layer.shadowOpacity = 0.3
    shadowView!.layer.shadowPath = UIBezierPath(roundedRect: self.bounds, byRoundingCorners: .allCorners, cornerRadii: CGSize(width: 20, height: 20)).cgPath
    shadowView!.layer.shouldRasterize = true
    shadowView!.layer.rasterizationScale = UIScreen.main.scale

    self.gradientViewLayer.frame = self.bounds
    self.selectedViewLayer.frame = self.bounds

    CATransaction.commit()
    self.insertSubview(shadowView!, at: 0)
}
So this rotation method is called through the parent view controller:
override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
    super.viewWillTransition(to: size, with: coordinator)
    coordinator.animate(alongsideTransition: { context in
        context.viewController(forKey: UITransitionContextViewControllerKey.from)
        // inform the loginButton that it is being rotated
        self.loginButton.viewRotated()
    }, completion: { context in
        // can call here when completed the transition
    })
}
I know this is where the problem lies; I suspect the frame change is not happening at quite the right time for the layer to behave the same way as a UIView. I have tried many things to get this to work, and my best solution (above) is still not quite there.
It isn't helpful to suggest using a UIButton, using an image for the gradient (please don't suggest a gradient image as the background of a UIButton, I've tried this), or a third-party library. This is my own work; it functions, but not acceptably to me, and I want it to work as well as a usual view does (or at least know why it can't). I have tried the other approaches above as well, and have settled on my own UIControl. I know I can lock the view while an API call is in flight, or use other ways to stop the user pressing the button too many times. I'm trying to fix my solution, not invent ways of working around this issue with CAGradientLayer.
The problem: I need to make a UIControl with a CAGradientLayer as its background rotate in the same way as a UIView, and not exhibit the issue shown in the image above.
Full Example:
https://github.com/stevencurtis/statefulbutton

Here is working code:
https://gist.github.com/alldne/22d340b36613ae5870b3472fa1c64654
These are my recommendations for your code:
1. A proper place to set the size and position of sublayers
The size of a view (your button, in this case) is only determined once layout has been done, so the right place to set the size of its sublayers is after layout. I therefore recommend setting the size and position of the gradient sublayers in layoutSubviews.
override func layoutSubviews() {
    super.layoutSubviews()
    let center = CGPoint(x: self.bounds.width / 2, y: self.bounds.height / 2)
    selectedViewLayer.bounds = self.bounds
    selectedViewLayer.position = center
    gradientViewLayer.bounds = self.bounds
    gradientViewLayer.position = center
}
2. You don't need an extra view to draw the shadow
Remove shadowView and just set the layer properties:
layer.shadowOffset = CGSize(width: 0, height: 3)
layer.shadowRadius = 3
layer.shadowOpacity = 0.3
layer.shadowColor = UIColor.black.cgColor
clipsToBounds = false
If you do have to use an extra view to draw the shadow, add it once in init(), and either set its size and position in layoutSubviews or simply pin it to its superview with Auto Layout constraints, as in the sketch below.
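For instance, a minimal sketch of the constraint-based variant (the stored shadowView property and the corner radius value are assumptions based on the question's code, not the repo's actual implementation):

// Sketch only: add a dedicated shadow view once and pin it to the control,
// so Auto Layout resizes it on rotation without any manual frame math.
private func installShadowView() {
    let shadow = UIView()
    shadow.translatesAutoresizingMaskIntoConstraints = false
    shadow.backgroundColor = .white              // needs some opaque content to cast a shadow
    shadow.layer.cornerRadius = 20               // illustrative value
    shadow.layer.shadowOffset = CGSize(width: 0, height: 3)
    shadow.layer.shadowRadius = 3
    shadow.layer.shadowOpacity = 0.3
    shadow.layer.shadowColor = UIColor.black.cgColor
    insertSubview(shadow, at: 0)
    NSLayoutConstraint.activate([
        shadow.leadingAnchor.constraint(equalTo: leadingAnchor),
        shadow.trailingAnchor.constraint(equalTo: trailingAnchor),
        shadow.topAnchor.constraint(equalTo: topAnchor),
        shadow.bottomAnchor.constraint(equalTo: bottomAnchor)
    ])
    shadowView = shadow                          // assumes a stored `shadowView` property
}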
3. Animation duration & timing function
Even after the sizes are set properly, the animation of the gradient layers and the animation of the container view don't stay in sync.
It seems that:
During the rotation transition, the coordinator (UIViewControllerTransitionCoordinator) has its own transition duration and easing function.
That duration and easing function are applied automatically to all the subviews (UIViews).
However, those values are not applied to a CALayer without an associated UIView; consequently, such a layer falls back to Core Animation's default duration and timing function.
To sync the animations, explicitly set the animation duration and the timing function like below:
class ViewController: UIViewController {
    ...
    override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)
        CATransaction.setAnimationDuration(coordinator.transitionDuration)
        CATransaction.setAnimationTimingFunction(coordinator.completionCurve.timingFunction)
    }
    ...
}
// Swift 4.2
extension UIView.AnimationCurve {
    var timingFunction: CAMediaTimingFunction {
        let functionName: CAMediaTimingFunctionName
        switch self {
        case .easeIn:
            functionName = .easeIn
        case .easeInOut:
            functionName = .easeInEaseOut
        case .easeOut:
            functionName = .easeOut
        case .linear:
            functionName = .linear
        }
        return CAMediaTimingFunction(name: functionName)
    }
}

Related

Swift: NSBezier

This is the code inside my custom view class:
func drawTestingPoint(_ point: CGPoint, target: Int, output: Int) {
    let path = NSBezierPath()
    path.appendArc(withCenter: point, radius: 5, startAngle: 0, endAngle: 360)
    NSColor.black.setStroke()
    if target == output {
        NSColor.green.setFill()
    } else {
        NSColor.red.setFill()
    }
    path.lineWidth = 3
    path.fill()
    path.stroke()
}

override func draw(_ dirtyRect: NSRect) {
    // If I call the drawTestingPoint function here it works
}
Inside my viewDidLoad method in my NSViewController class I set up the custom view and try to draw the testing point:
let size = getDataViewSize()
let origin = CGPoint(x: view.frame.width/2-size.width/2, y: view.frame.height/2-size.height/2)
dataView = DataView(frame: CGRect(origin: origin, size: size))
view.addSubview(dataView)
dataView.drawTestingPoint(CGPoint(x: view.frame.width/2, y: view.frame.height/2), target: target, output: output)
dataView.needsDisplay = true
My problem is that no point gets drawn. I don't think there is anything wrong with my drawTestingPoint function, because it works when I call it inside draw(_ dirtyRect: NSRect) in my custom NSView class. What can I do so that I can call this function from viewDidLoad, as in the code snippets above, and have my point drawn?
You can't just draw any time you want. Normally you set up a view and implement draw(_:) as you've done. The system calls the draw method when it needs the view to draw its contents. Before calling your draw(_:) method it sets up the drawing context correctly to draw inside your view and clip if you draw outside of the view. That's the bit you're missing.
As a general rule you should NOT draw outside of the view's draw(_:) method. I've drawn outside of the draw(_:) method so infrequently that I don't remember what you'd need to do to set up the drawing context correctly. (To be fair, I do mostly iOS development these days and my macOS is getting rusty.)
So the short answer is "Don't do that."
EDIT:
Instead, set up your custom view to save the information it needs to draw itself. As others have suggested, when you make changes to the view, set needsDisplay = true on it. That will cause the system to call the view's draw(_:) method on the next pass through the event loop.
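For example, a minimal sketch of that approach, reusing the question's DataView and drawTestingPoint names and its drawing code (the stored properties are illustrative):

import Cocoa

class DataView: NSView {
    // Store what the view needs to draw itself instead of drawing immediately.
    private var testingPoint: CGPoint?
    private var pointColor: NSColor = .red

    func drawTestingPoint(_ point: CGPoint, target: Int, output: Int) {
        testingPoint = point
        pointColor = (target == output) ? .green : .red
        needsDisplay = true          // ask AppKit to call draw(_:) on the next pass
    }

    override func draw(_ dirtyRect: NSRect) {
        super.draw(dirtyRect)
        guard let point = testingPoint else { return }
        let path = NSBezierPath()
        path.appendArc(withCenter: point, radius: 5, startAngle: 0, endAngle: 360)
        pointColor.setFill()
        NSColor.black.setStroke()
        path.lineWidth = 3
        path.fill()
        path.stroke()
    }
}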

Dismiss UIView border drawn as CGRect around CALayer.frame

I'm adding borders to views like this:
extension UIView {
    func addTopBorderWithColor(color: UIColor, width: CGFloat) {
        let border = CALayer()
        border.backgroundColor = color.cgColor
        border.frame = CGRect(x: 0, y: 0, width: self.frame.size.width, height: width)
        self.layer.addSublayer(border)
    }
}
in viewDidLoad:
textLbl.addTopBorderWithColor(color: UIColor.white, width: 1)
in viewWillAppear:
textLbl.text = "placeholder text \(variableLengthText)"
How can I dismiss the border I've drawn using the above extension?
In other views (for instance a toolbar), as a sloppy hack, I would just draw over the border with one matching the object's background color so it didn't overlap (for instance on a screen orientation change).
Well, that was always wrong, so the first thing to do is to stop doing it.
This has nothing to do with viewDidDisappear or viewWillAppear. The signal that the view is changing size is that its layoutSubviews is called. So you could implement layoutSubviews to remove the border layer and add a new one at the new size.
But it would be even better to add a subview to the label, consisting of a clear background except for the border. That way, the subview could be configured using constraints to grow and shrink together with the label automatically with no need for any code. Here's a screencast demonstrating that implementation; the red line is the right border:
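As a rough code sketch of that idea (names and the right-border configuration are illustrative; matt demonstrated it with a storyboard rather than code):

// Add a border subview once; constraints keep it sized with the label automatically.
func addRightBorder(to label: UILabel, color: UIColor, width: CGFloat) {
    let border = UIView()
    border.translatesAutoresizingMaskIntoConstraints = false
    border.backgroundColor = color
    label.addSubview(border)
    NSLayoutConstraint.activate([
        border.trailingAnchor.constraint(equalTo: label.trailingAnchor),
        border.topAnchor.constraint(equalTo: label.topAnchor),
        border.bottomAnchor.constraint(equalTo: label.bottomAnchor),
        border.widthAnchor.constraint(equalToConstant: width)
    ])
}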
While #matt's storyboard solution looks elegant, I'm unable to implement it. I did, however, get it to work by setting the layer.name field in the extension and checking for it in viewWillLayoutSubviews.
extension UIView {
    func addTopBorderWithColor(color: UIColor, width: CGFloat) {
        let border = CALayer()
        border.backgroundColor = color.cgColor
        border.frame = CGRect(x: 0, y: 0, width: self.frame.size.width, height: width)
        border.name = "topBorder"
        self.layer.addSublayer(border)
    }
}
In viewWillLayoutSubviews I call:
if self.textLbl.layer.sublayers != nil {
    for layer in textLbl.layer.sublayers! {
        if layer.name == "rightBorder" || layer.name == "leftBorder" || layer.name == "topBorder" {
            layer.removeFromSuperlayer()
        }
    }
}
In viewDidLayoutSubviews I call:
textLbl.addTopBorderWithColor(color: UIColor.white, width: 1.5)

Swift 3 - Custom UIButton radius deletes constraints?

My view controller has a stack view with 3 buttons: one fixed in the center with a fixed width, and the stack view set to fill proportionally so that the other two buttons are symmetrical.
However, I am also customizing the corner radius of the buttons, and as soon as the application loads the button resizes in an undesired fashion.
I've attempted numerous stack view distribution and fill settings, and also tried removing the buttons from the stack view and simply constraining them to the edges of a normal UIView, to no avail; it seems (though I'm not certain) that the constraints get deleted.
Visually, the button should sit at the bottom-right corner of the screen with zero space between the edge and the button. Currently it gets laid out as if there were no constraints at all: on larger displays there is a gap, and on small displays in the simulator it runs off the screen.
Attempted coding efforts to round the desired corners:
@IBOutlet fileprivate weak var button: UIButton! {
    didSet {
        button.round(corners: [.topRight, .bottomLeft], radius: 50, borderColor: UIColor.blue, borderWidth: 5.0)
        button.setNeedsUpdateConstraints()
        button.setNeedsLayout()
    }
}

override func viewDidLayoutSubviews() {
    button.layoutIfNeeded()
}

override func updateViewConstraints() {
    button.updateConstraintsIfNeeded()
    super.updateViewConstraints()
}
The UIView extension that is being used can be found here:
https://stackoverflow.com/a/35621736/5434541
What solution is available to properly adjust the buttons' corner radii while still allowing the constraints to lay the button out as it should be?
Without seeing your full code, I suspect you're getting this because it's drawing multiple layers with the border and never removing any! If the buttons get resized, the old layers are still there. Here's a snippet that shows removing the old layer on redraw, which you should be able to adapt. I tested this inside a stack view, and it also behaves correctly on rotation:
class RoundedCorner: UIButton {
    var lastBorderLayer: CAShapeLayer?

    override func layoutSubviews() {
        super.layoutSubviews()
        setup()
    }

    func setup() {
        let r = self.bounds.size.height / 2
        let path = UIBezierPath(roundedRect: self.bounds,
                                byRoundingCorners: [.topLeft, .bottomRight],
                                cornerRadii: CGSize(width: r, height: r))
        let mask = CAShapeLayer()
        mask.path = path.cgPath
        addBorder(mask: mask, borderColor: UIColor.blue, borderWidth: 5)
        self.layer.mask = mask
    }

    func addBorder(mask: CAShapeLayer, borderColor: UIColor, borderWidth: CGFloat) {
        let borderLayer = CAShapeLayer()
        borderLayer.path = mask.path
        borderLayer.fillColor = UIColor.clear.cgColor
        borderLayer.strokeColor = borderColor.cgColor
        borderLayer.lineWidth = borderWidth
        borderLayer.frame = bounds
        // Remove the previously added border layer before adding the new one.
        if let last = self.lastBorderLayer, let index = layer.sublayers?.index(of: last) {
            layer.sublayers?.remove(at: index)
        }
        layer.addSublayer(borderLayer)
        self.lastBorderLayer = borderLayer
    }
}
You also noted in your comment the app lag. I'm not surprised, since layout can get called many times per update and you're creating a new layer every single time. You can avoid this by saving the last dimensions of the button frame and comparing: if the frame is identical, don't recreate the layer. A sketch of that check follows.
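A minimal sketch of that optimization, building on the RoundedCorner class above (the lastBounds property is illustrative; setup() and addBorder(mask:borderColor:borderWidth:) are as in the class above, with their bodies elided here):

class RoundedCorner: UIButton {
    var lastBorderLayer: CAShapeLayer?
    private var lastBounds = CGRect.null     // sentinel value: nothing built yet

    override func layoutSubviews() {
        super.layoutSubviews()
        // Skip the expensive mask/border rebuild if the size has not changed.
        guard bounds != lastBounds else { return }
        lastBounds = bounds
        setup()
    }

    // setup() and addBorder(mask:borderColor:borderWidth:) unchanged from the class above.
    func setup() { /* ... */ }
}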

I'm having some trouble using x and y coordinates from touchesBegan as the center key in a CI filter

I'm trying to set things up so that the user taps a location in an image view and the X,Y of the tap becomes the center point (kCIInputCenterKey) of the image filter currently in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function, which checks whether the user is touching inside the image view; if not, it sets the filter center key to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.imageView)
        if (touch.view == imageView) {
            print("touchesBegan | This is an ImageView")
            x = position.x * 4
            y = position.y * 4
            imgChecker = 1
        } else {
            print("touchesBegan | This is not an ImageView")
            x = 0
            y = 0
            imgChecker = 0
        }
        print("x: \(x)")
        print("y: \(y)")
    }
}
As you can see, I have the checker there to make the filter center default to the middle of the image if the tap was not inside the image view. I'm also printing the tapped coordinates to Xcode's console, and they appear without issue.
This is the part where i apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)
if imgChecker == 1 {
    self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
} else {
    self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
x = 0
y = 0
let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the CGRect I'm using; ignore the "frame" in there, it's just an image view in front of the first one that allows me to save a "frame" over the current filtered image:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
When I do set x and y by tapping inside the image view, the center of the filter keeps appearing in the lower-left area of the image view regardless of where I tapped. If I keep tapping around the image view, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated, thank you.
Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is lower left, not top left.
Second (and I think this is the issue), UIKit coordinates (meaning UIImage and potentially CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and turn it into the CIImage.extent coordinate.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan is CGPoint(200,100). (NOTE: If your UIImageView is part of a larger superview, it could be something more like 250,400 - I'm working on the point within the UIImageView.)
Scaling down the image size (remember, AspectFit) means the image is actually centered vertically (landscape appearing) within the UIImageView at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing by 4 gives you the scaled down image view coordinates, and as you'd expect, multiplying up will give you the needed vector coordinates. So a touchesBegan CGPoint(200,100) turns into a CIVector(800,400).
I have some code written - not much in the way of comments, done in Swift 2 (I think) and very poorly written - that is part of a subclass (probably should have been an extension) of UIImageView that computes all this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind - images in AspectFit can also be scaled up!
One last note on CIImage - extent. Many times it's a UIImage's size. But many masks and generated output may have an infinite extent.
SECOND EDIT:
I made a stupid mistake in my scaling example. Remember, the CIImage origin is bottom left, not upper left. So in my example a CGPoint(200,100), scaled to CGPoint(800,400), would be CIVector(800,100).
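To make the conversion concrete, here is a rough sketch of turning a touch point inside an aspect-fit UIImageView into CIImage coordinates. It assumes the touch has already been confirmed to lie within the displayed image, and that the image's scale is 1 so its point size matches the CIImage extent; the function name and parameters are illustrative, not the answerer's actual subclass:

// Convert a point in an aspect-fit UIImageView's coordinate space
// into the coordinate space of the displayed image's CIImage extent.
func ciCenter(for touchPoint: CGPoint, in imageView: UIImageView) -> CIVector? {
    guard let image = imageView.image else { return nil }
    let viewSize = imageView.bounds.size
    let imageSize = image.size            // assumes image.scale == 1

    // Aspect-fit scale and the letterboxed image rect inside the view.
    let scale = min(viewSize.width / imageSize.width, viewSize.height / imageSize.height)
    let fittedSize = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)
    let origin = CGPoint(x: (viewSize.width - fittedSize.width) / 2,
                         y: (viewSize.height - fittedSize.height) / 2)

    // Map the touch into image coordinates, flipping Y because Core Image's origin is bottom-left.
    let xInImage = (touchPoint.x - origin.x) / scale
    let yInImage = (touchPoint.y - origin.y) / scale
    return CIVector(x: xInImage, y: imageSize.height - yInImage)
}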
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last was due to my stupidity! Worth noting, but still.)
Now we're talking "near real time" updating using a Core Image filter. I'm planning to eventually have some blog posts on this, but the real source you want is Simon Gladman (he's moved on, look back to his posts in 2015-16), and his eBook Core Image for Swift (uses Swift 2 but most is automatically upgraded to Swift 3). Just giving credit where it is due.
If you want "near real time" usage of Core Image, you need to use the GPU. UIView and all its subclasses (meaning UIKit) use the CPU. That's okay; using the GPU here means using a GLKView, which is roughly the GPU-backed equivalent of a UIImageView.
Here's my subclass of it:
import UIKit
import GLKit
import CoreImage

open class GLKViewDFD: GLKView {

    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    open var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        enableSetNeedsDisplay = true
    }

    override open func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            // Compute an aspect-fit rect for the image inside the drawable.
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            rgb = (0, 0, 0)
            rgb = myClearColor.rgb()          // rgb() is a helper extension on UIColor (not shown here)
            glClearColor(Float(rgb.0!)/256.0, Float(rgb.1!)/256.0, Float(rgb.2!)/256.0, 0.0)
            glClear(0x00004000)               // GL_COLOR_BUFFER_BIT
            // set the blend mode to "source over" so that CI will use that
            glEnable(0x0BE2)                  // GL_BLEND
            glBlendFunc(1, 0x0303)            // GL_ONE, GL_ONE_MINUS_SRC_ALPHA
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
A few notes.
I absolutely need to credit Objc.io for much of this. This is also a great resource for Swift and UIKit coding.
I wanted AspectFit content mode with the ability to change the "backgroundColor" of the GLKView, which is why I subclassed it and added the clearColor property.
Between the two resources I linked to, you should have what you need for good-performing, near real time use of Core Image using the GPU. One reason my aforementioned code for scaling after getting the output of a filter was never updated? It didn't need it.
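For illustration, a minimal sketch of driving the view above with a filter as the user interacts (the function name and the glkView parameter are assumptions, not part of the original code):

// Sketch: push each new filter output straight to the GPU-backed view.
// Setting `image` triggers setNeedsDisplay(), so draw(_:) renders it via the CIContext.
func applyBumpDistortion(at center: CIVector, to inputImage: CIImage, in glkView: GLKViewDFD) {
    guard let filter = CIFilter(name: "CIBumpDistortion") else { return }
    filter.setValue(inputImage, forKey: kCIInputImageKey)
    filter.setValue(center, forKey: kCIInputCenterKey)
    filter.setValue(200, forKey: kCIInputRadiusKey)
    filter.setValue(1, forKey: kCIInputScaleKey)
    glkView.image = filter.outputImage
}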
Lots here to process, I know. But I've found this side of things (Core Image effects) to be the most fun side (and pretty cool too) of iOS.

Borders not covering background

I've got a UILabel using a border the same color as a background that it half obscures, to create a nice visual effect. However, the problem is that there is still a tiny, yet noticeable, sliver of the label's background color on the OUTSIDE of the border.
The border is not covering the whole label!
Changing the border width doesn't change anything either, sadly.
Here's a picture of what's going on, enlarged so you can see it:
And my code follows:
iconLbl.frame = CGRectMake(theWidth/2-20, bottomView.frame.minY-20, 40, 40)
iconLbl.font = UIFont.fontAwesomeOfSize(23)
iconLbl.text = String.fontAwesomeIconWithName(.Info)
iconLbl.layer.masksToBounds = true
iconLbl.layer.cornerRadius = iconLbl.frame.size.width/2
iconLbl.layer.borderWidth = 5
iconLbl.layer.borderColor = topBackgroundColor.CGColor
iconLbl.backgroundColor = UIColor.cyanColor()
iconLbl.textColor = UIColor.whiteColor()
Is there something I'm missing?
Or am I going to have to figure out another way to achieve this effect?
Thanks!
EDIT:
List of things I've tried so far!
Changing layer.borderWidth
Fussing around with clipsToBounds/MasksToBounds
Playing around with the layer.frame
Playing around with an integral frame
EDIT 2:
No fix was found! As a workaround, I added this method as an extension on my UIViewController:
func makeFakeBorder(inputView: UIView, width: CGFloat, color: UIColor) -> UIView {
    let fakeBorder = UIView()
    fakeBorder.frame = CGRectMake(inputView.frame.origin.x - width, inputView.frame.origin.y - width, inputView.frame.size.width + width*2, inputView.frame.size.height + width*2)
    fakeBorder.backgroundColor = color
    fakeBorder.clipsToBounds = true
    fakeBorder.layer.cornerRadius = fakeBorder.frame.size.width/2
    fakeBorder.addSubview(inputView)
    inputView.center = CGPointMake(fakeBorder.frame.size.width/2, fakeBorder.frame.size.height/2)
    return fakeBorder
}
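A hypothetical usage of that helper, in the same Swift 2 style as the question's code (it assumes iconLbl would otherwise have been added directly to the view controller's view):

// Instead of adding iconLbl directly, wrap it in the fake border and add that.
let borderedIcon = makeFakeBorder(iconLbl, width: 5, color: topBackgroundColor)
view.addSubview(borderedIcon)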
I believe this is just the way a border is drawn on a layer in iOS. The documentation says:
When this value is greater than 0.0, the layer draws a border using the current borderColor value. The border is drawn inset from the receiver’s bounds by the value specified in this property. It is composited above the receiver’s contents and sublayers and includes the effects of the cornerRadius property.
One way to fix this is to apply a mask to the view's layer, but I found that even then we could still see a teeny tiny line around the view when doing snapshot tests. So, to fix it further, I put this code in layoutSubviews:
class MyView: UIView {
    override func layoutSubviews() {
        super.layoutSubviews()
        let maskInset: CGFloat = 1
        // Extend the layer's frame.
        layer.frame = layer.frame.insetBy(dx: -maskInset, dy: -maskInset)
        // Increase the border width to match.
        layer.borderWidth = layer.borderWidth + maskInset
        layer.cornerRadius = bounds.height / 2
        layer.masksToBounds = true
        // Create a circle-shaped mask with the true bounds.
        let mask = CAShapeLayer()
        mask.path = UIBezierPath(ovalIn: bounds.insetBy(dx: maskInset, dy: maskInset)).cgPath
        layer.mask = mask
    }
}
(See also the documentation for CALayer's mask property.)