I followed this great Ray Wenderlich tutorial to make a Bézier arc and increment/decrement values. But how do I animate the arc instead of just stepping it up and down?
http://www.raywenderlich.com/90690/modern-core-graphics-with-swift-part-1
I tried putting the animation block in the custom property declaration, which I don't think is the right place for it, and Xcode doesn't let me do it anyway.
@IBInspectable var counter: Int = 5 {
    didSet {
        if counter <= NoOfGlasses {
            // the view needs to be refreshed
            UIView.animateWithDuration(0.2, animations: {
                setNeedsDisplay()
            }, completion: nil)
        }
    }
}
I also tried putting the increment in an animation block in the view controller, but that didn't work either.
@IBAction func btnPushButton(sender: AnyObject) {
    UIView.animateWithDuration(0.2, animations: {
        self.arcView.counter = self.arcView.counter + 10
        self.counterLabel.text = String(self.arcView.counter)
    }, completion: nil)
}
Are you describing this sort of thing?
That's a simpler example - it's just a drawn triangle - but it's the same idea, if I'm understanding you correctly: we are animating the difference between one drawing and another.
Basically you have two choices. The easy way is to use CAShapeLayer, which animates for you automatically when you change its path. The other choice is to do what I'm doing here, which is to create a custom animatable property - in this case, a property representing the x-position of the bottom point of the triangle.
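To illustrate the first (easier) choice, here is a minimal sketch in current Swift of driving the arc with a CAShapeLayer. The ArcView name, the arcPath(for:) helper, and the angle math are hypothetical stand-ins for the tutorial's drawing code, not the tutorial itself:
class ArcView: UIView {
    private let shapeLayer = CAShapeLayer()

    var counter: Int = 5 {
        didSet {
            let newPath = arcPath(for: counter).cgPath
            // Animate explicitly from the currently displayed path.
            let animation = CABasicAnimation(keyPath: "path")
            animation.duration = 0.2
            animation.fromValue = shapeLayer.presentation()?.path ?? shapeLayer.path
            animation.toValue = newPath
            shapeLayer.add(animation, forKey: "path")
            shapeLayer.path = newPath
        }
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        if shapeLayer.superlayer == nil {
            shapeLayer.fillColor = nil
            shapeLayer.strokeColor = UIColor.blue.cgColor
            shapeLayer.lineWidth = 20
            layer.addSublayer(shapeLayer)
        }
        shapeLayer.frame = bounds
    }

    private func arcPath(for value: Int) -> UIBezierPath {
        // Hypothetical mapping from the counter to the arc's end angle.
        let endAngle = -CGFloat.pi / 2 + 2 * CGFloat.pi * CGFloat(value) / 10
        return UIBezierPath(arcCenter: CGPoint(x: bounds.midX, y: bounds.midY),
                            radius: min(bounds.width, bounds.height) / 2 - 10,
                            startAngle: -.pi / 2,
                            endAngle: endAngle,
                            clockwise: true)
    }
}
Because the old and new paths are arcs built the same way, Core Animation can interpolate between them smoothly.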
I have a view with many CAShapeLayer objects (there can be other CALayer objects as well), and I want to modify the CAShapeLayer that the user clicks on.
I was experimenting with the two methods below, but neither of them works. Any tips would be great.
Approach one:
private func modifyDrawing(at point: NSPoint) {
    for layer in view.layer!.sublayers! {
        let s = layer.hitTest(point)
        if s != nil && s is CAShapeLayer {
            selectedShape = s as? CAShapeLayer
        }
    }
    // modify some properties
    selectedShape?.shadowRadius = 20
    selectedShape?.shadowOpacity = 1
    selectedShape?.shadowColor = CGColor.black
}
Approach two:
private func modifyDrawing(at point: NSPoint) {
    let drawingsAtMouseClick: [CAShapeLayer] = view.layer!.sublayers!.compactMap { $0 as? CAShapeLayer }
    if drawingsAtMouseClick.isEmpty {
        return
    }
    for drawing in drawingsAtMouseClick {
        if drawing.contains(point) {
            selectedShape = drawing
            break
        }
    }
    // modify some properties
    selectedShape?.shadowRadius = 20
    selectedShape?.shadowOpacity = 1
    selectedShape?.shadowColor = CGColor.black
}
The point parameter passed to these functions is NSEvent.locationInWindow. I'm not sure whether I should convert it with respect to the CAShapeLayer or something.
P.S.: This isn't production code so please kindly ignore Swift best practices, etc.
The CALayer.hitTest(_:) method will tell you the layer that was hit, including in a layer's sublayers.
You shouldn't need to check every sublayer yourself. You should be able to ask your view's top-level layer which layer was hit.
A view's layer's coordinates are generally the same as the view's bounds. It's anchored at 0,0 in the parent layer, and the sublayers use that coordinate space. Thus, you should convert your point to view/layer coordinates before hit testing.
(I always have to go check to see which coordinate systems are flipped from the other between iOS and Mac OS and views and layers. You might need to flip coordinates. I leave that research up to you.)
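Here's a minimal sketch of that approach, assuming the view and selectedShape from the question and a layer-backed view; the flipping caveat above still applies:
private func modifyDrawing(at locationInWindow: NSPoint) {
    // Convert from window coordinates into the view's coordinate space.
    let pointInView = view.convert(locationInWindow, from: nil)
    // Ask the top-level layer which (sub)layer was hit. CALayer.hitTest(_:)
    // expects the point in the receiver's superlayer coordinates; if the
    // view doesn't sit at its superview's origin, convert accordingly.
    if let hit = view.layer?.hitTest(pointInView) as? CAShapeLayer {
        selectedShape = hit
        selectedShape?.shadowRadius = 20
        selectedShape?.shadowOpacity = 1
        selectedShape?.shadowColor = CGColor.black
    }
}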
Edit:
I seem to remember that CALayer.hitTest(_:) just checks whether the layer's frame contains the point, not whether there's actually an opaque pixel at that position. Checking for an opaque pixel is more complex.
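For CAShapeLayer specifically, a middle ground short of pixel testing is to test the layer's path. A sketch, assuming the pointInView conversion from the previous snippet:
if let hit = view.layer?.hitTest(pointInView) as? CAShapeLayer,
   let path = hit.path {
    // The path is defined in the hit layer's own coordinate space,
    // so convert the point into that space before testing.
    let pointInLayer = hit.convert(pointInView, from: view.layer)
    if path.contains(pointInLayer) {
        selectedShape = hit
    }
}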
I'm trying to learn Core Graphics and am having trouble understanding the behavior of the code I've written, which uses a subclassed UIView and an override of the draw(_ rect:) function.
I've written a basic bouncing ball demo. Any number of random balls are created with random position and speed. They then bounce around the screen.
My issue is that the way the balls appear to move is unexpected, and there is a lot of flicker. Here is the sequence inside the for loops that iterate over all the balls:
Check for collisions.
If there is a collision with the wall, multiply speed by -1.
Increment ball position by ball speed.
I'm currently not clearing the context, so I would expect the existing balls to stay put. Instead they seem to slide smoothly along with the ball that's moving.
I'd like to understand why this is the case.
Here is an image of how the current code runs at 4 fps so that you can see how the shapes are being drawn and shift back and forth:
Here is my code:
class ViewController: UIViewController {
    let myView = MyView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBlue
        myView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(myView)
        NSLayoutConstraint.activate([
            myView.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            myView.centerYAnchor.constraint(equalTo: view.centerYAnchor),
            myView.widthAnchor.constraint(equalTo: view.widthAnchor),
            myView.heightAnchor.constraint(equalTo: view.heightAnchor)
        ])
        createDisplayLink(fps: 60)
    }

    func createDisplayLink(fps: Int) {
        let displaylink = CADisplayLink(target: self, selector: #selector(step))
        displaylink.preferredFramesPerSecond = fps
        displaylink.add(to: .current, forMode: RunLoop.Mode.default)
    }

    @objc func step(displaylink: CADisplayLink) {
        myView.setNeedsDisplay()
    }
}
class MyView: UIView {
    let numBalls = 5
    var balls = [Ball]()

    override init(frame: CGRect) {
        super.init(frame: frame)
        for _ in 0..<numBalls {
            balls.append(
                Ball(
                    ballPosition: Vec2(x: CGFloat.random(in: 0...UIScreen.main.bounds.width),
                                       y: CGFloat.random(in: 0...UIScreen.main.bounds.height)),
                    ballSpeed: Vec2(x: CGFloat.random(in: 0.5...2),
                                    y: CGFloat.random(in: 0.5...2))))
        }
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        for i in 0..<numBalls {
            if balls[i].ballPosition.x > self.bounds.width - balls[i].ballSize || balls[i].ballPosition.x < 0 {
                balls[i].ballSpeed.x *= -1
            }
            balls[i].ballPosition.x += balls[i].ballSpeed.x
            if balls[i].ballPosition.y > self.bounds.height - balls[i].ballSize || balls[i].ballPosition.y < 0 {
                balls[i].ballSpeed.y *= -1
            }
            balls[i].ballPosition.y += balls[i].ballSpeed.y
        }
        for i in 0..<numBalls {
            context.setFillColor(UIColor.red.cgColor)
            context.setStrokeColor(UIColor.green.cgColor)
            context.setLineWidth(0)
            let rectangle = CGRect(x: balls[i].ballPosition.x, y: balls[i].ballPosition.y,
                                   width: balls[i].ballSize, height: balls[i].ballSize)
            context.addEllipse(in: rectangle)
            context.drawPath(using: .fillStroke)
        }
    }
}
There are a lot of misunderstandings here, so I'll try to take them one by one:
CADisplayLink does not promise it will call your step method every 1/60 of a second. There's a reason the property is called preferred frames per second. It's just a hint to the system of what you'd like. It may call you less often, and in any case there will be some amount of error.
To perform your own animations by hand, you need to look at what time is actually attached to the given frame, and use that to determine where things are. The CADisplayLink includes a timestamp to let you know that. You can't just increment by speed. You need to multiply speed by actual time to determine distance.
"I'm currently not clearing the context, so I would expect the existing balls to stay put." Every time draw(rect:) is called, you receive a fresh context. It is your responsibility to draw everything for the current frame. There is no persistence between frames. (Core Animation generally provides those kinds of features by efficiently composing CALayers together; but you've chosen to use Core Graphics, and there you need to draw everything every time. We generally do not use Core Graphics this way.)
myView.setNeedsDisplay() does not mean "draw this frame right now." It means "the next time you're going to draw, this view needs to be redrawn." Depending on exactly when the CADisplayLink fires, you may drop a frame, or you might not. Using Core Graphics, you would need to update all the circles' locations before calling setNeedsDisplay(). Then draw(rect:) should just draw them, not compute what they are. (CADisplayLink is really designed to work with CALayers, though, and UIView drawing isn't designed to be updated this often, so this may still be a little tricky to keep smooth.)
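A minimal sketch of the timestamp advice combined with updating positions before invalidating, reusing the question's step(displaylink:) and treating ballSpeed as points per second (so the random ranges would need retuning):
var lastTimestamp: CFTimeInterval?

@objc func step(displaylink: CADisplayLink) {
    defer { lastTimestamp = displaylink.timestamp }
    guard let last = lastTimestamp else { return }
    // Scale speed (points per second) by the actual elapsed time.
    let dt = CGFloat(displaylink.timestamp - last)
    for i in myView.balls.indices {
        // Bounce checks omitted for brevity.
        myView.balls[i].ballPosition.x += myView.balls[i].ballSpeed.x * dt
        myView.balls[i].ballPosition.y += myView.balls[i].ballSpeed.y * dt
    }
    // Positions are final for this frame; draw(_:) should only draw.
    myView.setNeedsDisplay()
}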
The more normal way to build this system would be to generate a CAShapeLayer for each ball and position them on the view's layer. Then, in the CADisplayLink callback, you would adjust their positions based on the timestamp of the next frame. Alternately, you could just set up a repeating Timer or DispatchSourceTimer (rather than a CADisplayLink) at something well below the screen refresh speed (like 1/20 s) and move the layer positions in that callback. This would be nice and simple and avoid the complexities of CADisplayLink (which is much more powerful, but expects you to use the timestamp and consider other soft real-time concerns).
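A rough sketch of that layer-based setup; the sizes, the count, and the view reference are assumptions, not the question's code:
// One CAShapeLayer per ball, added once (e.g., in viewDidLoad).
func makeBallLayers(in view: UIView, count: Int = 5) -> [CAShapeLayer] {
    (0..<count).map { _ in
        let ball = CAShapeLayer()
        ball.path = CGPath(ellipseIn: CGRect(x: 0, y: 0, width: 20, height: 20), transform: nil)
        ball.fillColor = UIColor.red.cgColor
        ball.position = CGPoint(x: .random(in: 0...view.bounds.width),
                                y: .random(in: 0...view.bounds.height))
        view.layer.addSublayer(ball)
        return ball
    }
}

// In the CADisplayLink callback, move each layer by updating its
// position, disabling implicit actions so the move takes effect
// immediately rather than animating over 0.25 s.
func move(_ ball: CAShapeLayer, to point: CGPoint) {
    CATransaction.begin()
    CATransaction.setDisableActions(true)
    ball.position = point
    CATransaction.commit()
}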
I've been wanting to implement the nice little UICollectionViewCell animation shown below, which I found on Dribbble.
Do you think it's possible?
I looked around for any guides to get a head start on this but found nothing quite similar.
My guess is that a custom flow layout is the way to go here. Would I have to take snapshots of each visible cell, add pan gestures to each cell, and, based on the movement detected through the gesture recognizer, animate the snapshot images? I'd appreciate any help understanding how I could implement this.
Thank you.
This is a pretty interesting challenge.
Instead of doing a custom layout, I would implement scrollViewDidScroll(_:), store the content offset every time it's called, compare it with the last stored offset to get a velocity, and based on that apply a transform to all visibleCells in your collection view.
var lastOffsetX: CGFloat?

func scrollViewDidScroll(_ scrollView: UIScrollView) {
    defer { lastOffsetX = scrollView.contentOffset.x }
    guard let lastOffsetX = lastOffsetX else { return }
    // You'll have to evaluate how large velocity gets to avoid the cells
    // stretching too much
    let maxVelocity: CGFloat = 60
    let maxStretch: CGFloat = 10
    let velocity = min(scrollView.contentOffset.x - lastOffsetX, maxVelocity)
    let stretch = velocity / maxVelocity * maxStretch
    var cumulativeStretch: CGFloat = 0
    collectionView.visibleCells.forEach { cell in
        cumulativeStretch += stretch
        cell.transform = CGAffineTransform(translationX: cumulativeStretch, y: 0)
    }
}
I would start with something like this, and set lastOffsetX = nil when the scroll view stops scrolling (left to the reader; a sketch follows below).
It will probably require some tweaking.
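For the reset, a minimal sketch using the standard UIScrollViewDelegate end-of-scroll callbacks; resetTransforms() is a hypothetical helper and collectionView is assumed from the answer:
func scrollViewDidEndDecelerating(_ scrollView: UIScrollView) {
    lastOffsetX = nil
    resetTransforms()
}

func scrollViewDidEndDragging(_ scrollView: UIScrollView, willDecelerate decelerate: Bool) {
    // If the scroll view won't decelerate, scrolling has already stopped.
    if !decelerate {
        lastOffsetX = nil
        resetTransforms()
    }
}

private func resetTransforms() {
    // Ease the cells back to their resting state.
    UIView.animate(withDuration: 0.3) {
        self.collectionView.visibleCells.forEach { $0.transform = .identity }
    }
}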
I created the method below as part of a custom CAAnimationGroup. The method first adds itself to a weak reference to a CALayer assigned at initialization.
Then it iterates over its own animations array and applies each animation's toValue to the associated keyPath, using KVC on the weak CALayer reference.
final public class FAAnimationGroup: CAAnimationGroup {
    weak var weakLayer: CALayer?

    override init() {
        super.init()
        animations = [CAAnimation]()
        fillMode = kCAFillModeForwards
        removedOnCompletion = true
    }

    override public func copyWithZone(zone: NSZone) -> AnyObject {
        let animationGroup = super.copyWithZone(zone) as! FAAnimationGroup
        animationGroup.weakLayer = weakLayer
        return animationGroup
    }

    ......

    func applyFinalState() {
        guard let animationLayer = weakLayer else {
            return
        }
        animationLayer.addAnimation(self, forKey: self.animationKey)
        if let groupAnimations = animations {
            for animation in groupAnimations {
                if let toValue = animation.toValue {
                    animationLayer.setValue(toValue, forKeyPath: animation.keyPath!)
                }
            }
        }
    }
}
So everything works as expected for bounds, size, transform, and alpha on all my views, with the current removedOnCompletion flag and fillMode values.
Once the animation is complete, I query the UIView and its backing layer. What I see is that the frame reflects the correct result, and the view's alpha reflects the animated opacity value. Great!
But here comes the fun part. When animating the opacity of a UISlider from 0.0 to 1.0: once the animation is complete, I begin to adjust the UISlider's value, and right as I move it, the alpha goes back to 0.0.
I tried setting the removedOnCompletion flag to false, and as expected, keeping the animation around kept the layer in its final state, but that is not what I want. I need the animation to remove itself after finishing, since I set the values directly on the backing layer.
So after setting removedOnCompletion back to true, I tried the following, which has me completely stumped, leading up to my question...
.....
if let groupAnimations = animations {
    for animation in groupAnimations {
        if let toValue = animation.toValue {
            if animation.keyPath! == "opacity" {
                animationLayer.owningView()!.setValue(toValue, forKeyPath: "alpha")
            } else {
                animationLayer.setValue(toValue, forKeyPath: animation.keyPath!)
            }
        }
    }
}
In the code above, instead of setting opacity on the layer, I set the alpha value on the owningView associated with the animating layer (aka the layer's delegate). In this instance everything worked as expected: I adjusted the slider and it did not reset to alpha 0.0.
The fact that this happens only with a UISlider is possibly irrelevant. I thought that by setting the UIView's properties, the backing layer would reflect the equivalent values, and I assumed the reverse to also be true.
Question
Why are the final alpha/opacity values in sync when I set the alpha of the view, but not when I set the opacity on its backing layer? What is the relationship between UIView and CALayer in this specific example?
From what I understand, the two are intricately linked: the UIView is a kind of wrapper providing access to its backing layer, which redraws itself accordingly. What is the opacity/alpha relationship in the context of animations?
Is it possible to adjust the blur radius and transparency of an NSVisualEffectView when it's applied to an NSWindow (Swift or Objective-C)? I tried all variations of NSVisualEffectMaterial (dark, medium, light) - but that's not cutting it. In the image below I've used Apple's non-public API with CGSSetWindowBackgroundBlurRadius on the left, and NSVisualEffectView on the right.
I'm trying to achieve the look of what's on the left, but it seems I'm relegated to use the methods of the right.
Here's my code:
blurView.blendingMode = NSVisualEffectBlendingMode.BehindWindow
blurView.material = NSVisualEffectMaterial.Medium
blurView.state = NSVisualEffectState.Active
self.window!.contentView!.addSubview(blurView)
Possibly related, but doesn't answer my question:
OS X NSVisualEffect decrease blur radius? - no answer
Although I wouldn't recommend this unless you're prepared for it to stop working in a future release, you can subclass NSVisualEffectView with the following to do what you want:
- (void)updateLayer
{
    [super updateLayer];

    [CATransaction begin];
    [CATransaction setDisableActions:YES];

    CALayer *backdropLayer = self.layer.sublayers.firstObject;
    if ([backdropLayer.name hasPrefix:@"kCUIVariantMac"]) {
        for (CALayer *activeLayer in backdropLayer.sublayers) {
            if ([activeLayer.name isEqualToString:@"Active"]) {
                for (CALayer *sublayer in activeLayer.sublayers) {
                    if ([sublayer.name isEqualToString:@"Backdrop"]) {
                        for (id filter in sublayer.filters) {
                            if ([filter respondsToSelector:@selector(name)] && [[filter name] isEqualToString:@"blur"]) {
                                if ([filter respondsToSelector:@selector(setValue:forKey:)]) {
                                    [filter setValue:@5 forKey:@"inputRadius"];
                                }
                            }
                        }
                    }
                }
            }
        }
    }

    [CATransaction commit];
}
Although this doesn't use private APIs per se, it does start to dig into layer hierarchies which you do not own, so be sure to double-check that what you get back is what you expect, and fail gracefully if not. For instance, on 10.10 Yosemite the Backdrop layer was a direct descendant of the visual effect view, so things are likely to change in the future.
I had the same issue and solved it with a little trick that seems to do the job I wanted. I hope it will also help you.
So in my case, I have added the NSVisualEffectView in storyboards and set its properties as follows:
and my View hierarchy is as follows:
All it takes to reduce the blur is the following, in the NSViewController:
override func viewWillAppear() {
    super.viewWillAppear()
    // Adds transparency to the app
    view.window?.isOpaque = false
    view.window?.alphaValue = 0.98 // tweak the alphaValue for your desired effect
}
Going with your example, this Swift 5 code should work in combination with the tweak above:
let blurView = NSVisualEffectView(frame: view.bounds)
blurView.blendingMode = .behindWindow
blurView.material = .fullScreenUI
blurView.state = .active
view.window?.contentView?.addSubview(blurView)
For anyone interested, here is a link to my repo, where I have created a small prank app that uses the code above:
GitHub link