How to animate NSSlider value change in a Cocoa application using Swift

How do I animate NSSlider value change so it looks continuous?
I tried using NSAnimationContext:
private func moveSlider(videoTime: VLCTime) {
    DispatchQueue.main.async { [weak self] in
        NSAnimationContext.beginGrouping()
        NSAnimationContext.current.duration = 1
        self?.playerControls.seekSlider.animator().intValue = videoTime.intValue
        NSAnimationContext.endGrouping()
    }
}
My NSSlider still does not move smoothly.
To put you into the picture, I am trying to make a video player that uses an NSSlider for scrubbing through the movie. That slider should also move as the video goes on. As I said, it does move, but I cannot get it to move smoothly.

There is Apple sample code in Objective-C for exactly what you are looking for. Below is how it would look in Swift.
Basically, you need an extension on NSSlider:
extension NSSlider {
    override open class func defaultAnimation(forKey key: NSAnimatablePropertyKey) -> Any? {
        if key == "floatValue" {
            return CABasicAnimation()
        } else {
            return super.defaultAnimation(forKey: key)
        }
    }
}
Then you can simply use something like this in your moveSlider function:
private func moveSlider(videoTime: Float) {
    NSAnimationContext.beginGrouping()
    NSAnimationContext.current.duration = 0.5
    slider.animator().floatValue = videoTime
    NSAnimationContext.endGrouping()
}
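One caveat, since your original code sets intValue: defaultAnimation(forKey:) is looked up per animatable property key, and the extension above registers an animation for "floatValue" only, so drive the slider through floatValue. A sketch of your moveSlider adapted accordingly (it assumes VLCTime's intValue maps directly onto the slider's range, as in your snippet):

private func moveSlider(videoTime: VLCTime) {
    DispatchQueue.main.async { [weak self] in
        NSAnimationContext.beginGrouping()
        NSAnimationContext.current.duration = 1
        // floatValue matches the key the extension animates; setting intValue
        // would not pick up the CABasicAnimation registered above.
        self?.playerControls.seekSlider.animator().floatValue = Float(videoTime.intValue)
        NSAnimationContext.endGrouping()
    }
}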

Related

Swift: Play a local video and apply CIFilter in realtime

I'm trying to play a local video and apply a CIFilter in realtime with no lag. How can I do that? I already know how to apply a CIFilter to a video in AVPlayer, but the performance is not as fast as I want.
This is my current code:
@objc func exposure(slider: UISlider, event: UIEvent) {
    if let touchEvent = event.allTouches?.first {
        switch touchEvent.phase {
        case .moved:
            player.currentItem?.videoComposition = AVVideoComposition(asset: player.currentItem!.asset, applyingCIFiltersWithHandler: { request in
                let exposureFilter = CIFilter.exposureAdjust()
                exposureFilter.inputImage = request.sourceImage.clampedToExtent()
                exposureFilter.ev = slider.value
                let output = exposureFilter.outputImage!.cropped(to: request.sourceImage.extent)
                // Provide the filter output to the composition
                request.finish(with: output, context: nil)
            })
        default:
            break
        }
    }
}
The problem is that you re-create and re-assign the video composition to the player item every time the slider value changes. This is very costly and unnecessary. You can do the following instead:
Create the filter somewhere outside the composition block and keep a reference to it, for instance in a property.
Also, create the composition only once and let it apply the referenced filter (instead of creating a new one with every frame).
When the slider value changes, only set the corresponding parameter value of the filter. The next time the composition will render a frame, it will automatically use the new parameter value because it uses a reference to the just-changed filter.
Something like this:
let exposureFilter = CIFilter.exposureAdjust()

init() {
    // set initial composition
    self.updateComposition()
}

func updateComposition() {
    player.currentItem?.videoComposition = AVVideoComposition(asset: player.currentItem!.asset, applyingCIFiltersWithHandler: { request in
        self.exposureFilter.inputImage = request.sourceImage.clampedToExtent()
        let output = self.exposureFilter.outputImage!.cropped(to: request.sourceImage.extent)
        request.finish(with: output, context: nil)
    })
}

@objc func exposureChanged(slider: UISlider) {
    self.exposureFilter.ev = slider.value
    // we need to re-set the composition if the player is paused to cause an update (see remark below)
    if player.rate == 0.0 {
        self.updateComposition()
    }
}
(By the way, you can just do slider.addTarget(self, action: #selector(exposureChanged(slider:)), for: .valueChanged) to get notified when the slider value changes. No need to evaluate events.)
One final note: There actually is a use case when you want to re-assign the composition, which is when the player is currently paused but you still want to show a preview of the current frame with the filter values change. Please refer to this technical note from Apple on how to do that.

Drawing directly in an NSView without using the draw(_ updateRect: NSRect) function

I would like to draw CGImage pictures directly to a view, and with the normal method using the draw func I only get 7 pictures per second on a new MacBook Pro. So I decided to use the updateLayer func instead. I have defined wantsUpdateLayer = true, and my new updateLayer func is called as expected.

But then my problem starts. When using the draw func, I get the current CGContext with "NSGraphicsContext.current?.cgContext", but in my updateLayer func "NSGraphicsContext.current?.cgContext" is nil. So I do not know where to put my CGImage so that it will be displayed on the screen. "self.view?.window?.graphicsContext?.cgContext" and "self.window?.graphicsContext?.cgContext" are nil, too.

There are no buttons or other elements in this view or in the window of the view, only one picture filling the complete window. And this picture must change 30 times per second. Generating the pictures is done by a separate thread and needs about 1 millisecond per picture. I think that from "outside" the NSView class it is not possible to write the picture, but my updateLayer func is inside the class.
Here is what the func actually looks like:
override func updateLayer ()
{
    let updateRect: NSRect = NSRect(x: 0.0, y: 0.0, width: 1120.0, height: 768.0)
    let context1 = self.view?.window?.graphicsContext?.cgContext
    let context2 = self.window?.graphicsContext?.cgContext
    let context3 = NSGraphicsContext.current?.cgContext
}
And all three contexts are nil at the time the function is called automatically after I set the needsDisplay flag.
Any ideas where to draw my CGImages?
The updateLayer func is called automatically by the user interface; I do not call it manually. It is called by the view. My problem is where inside this method to put my picture so that it is shown on the screen. Perhaps I have to add a layer or use a default layer of the view, but I do not know how to do this.
Meanwhile I have found the solution, with some tips from a good friend:
override var wantsUpdateLayer: Bool
{
    return (true)
}

override func updateLayer ()
{
    let cgimage: CGImage? = picture // Here comes the picture
    if cgimage != nil
    {
        let nsimage: NSImage? = NSImage(cgImage: cgimage!, size: NSZeroSize)
        if nsimage != nil
        {
            let desiredScaleFactor: CGFloat? = self.window?.backingScaleFactor
            if desiredScaleFactor != nil
            {
                let actualScaleFactor: CGFloat? = nsimage!.recommendedLayerContentsScale(desiredScaleFactor!)
                if actualScaleFactor != nil
                {
                    self.layer!.contents = nsimage!.layerContents(forContentsScale: actualScaleFactor!)
                    self.layer!.contentsScale = actualScaleFactor!
                }
            }
        }
    }
}
This is the way to write directly into the layer. Depending on the format (CGImage or NSImage), you first must convert it. As soon as the func wantsUpdateLayer returns true, the func updateLayer() is used instead of the func draw(). That's all.
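One detail not shown above: updateLayer() is only consulted for layer-backed views, so the view also needs wantsLayer = true. A minimal setup sketch, assuming the picture property used in the snippets above (the class and method names here are illustrative, not from the original code):

class PictureView: NSView
{
    var picture: CGImage? // filled by the generating thread, about 30 times per second

    override init (frame frameRect: NSRect)
    {
        super.init(frame: frameRect)
        self.wantsLayer = true // required so updateLayer() is actually called
    }

    required init? (coder: NSCoder)
    {
        super.init(coder: coder)
        self.wantsLayer = true
    }

    // Call this on the main thread after a new picture has been generated.
    func showNextPicture (_ image: CGImage)
    {
        self.picture = image
        self.needsDisplay = true // triggers updateLayer() because wantsUpdateLayer == true
    }
}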
For all who want to see my "Normal" draw function:
override func draw (_ updateRect: NSRect)
{
    let cgimage: CGImage? = picture // Here comes the picture
    if cgimage != nil
    {
        if #available(macOS 10.10, *)
        {
            NSGraphicsContext.current?.cgContext.draw(cgimage!, in: updateRect)
        }
    }
    else
    {
        super.draw(updateRect)
    }
}
The speedup is 2x or more, depending on the hardware you use. On a modern Mac Pro there is only a small gain, but on a modern MacBook Pro you can get 10x or more. This works with Mojave 10.14.6 and Catalina 10.15.6. I did not test it with older macOS versions. The "normal" draw function works from 10.10.6 to 10.15.6.

Remove SKAction and restore node state

Desired behavior: when an action is removed from a node (with removeAction(forKey:), for instance), it stops animating and all the changes caused by the action are discarded, so the node returns to its previous state. In other words, I want to achieve behavior similar to CAAnimation.
But when an SKAction is removed, the node remains changed. That's not good, because to restore its state I need to know exactly what action was removed. And if I then change the action, I will also need to update the node's state restoration.
Update:
The particular purpose is to show a possible move in a match-3 game. When I show a move, pieces start pulsating (a scale action, repeating forever). And when the user moves, I want to stop showing the move, so I remove the action. As a result, pieces may remain downscaled. Later I would like to add more fancy and complicated animations, so I want to be able to change them easily.
Thanks to the helpful comment and answer I came to my own solution. I think a state machine would be a bit too heavy here. Instead I created a wrapper node whose main purpose is to run the animation. It also has state: the isAnimating property. But, first of all, it keeps the startAnimating() and stopAnimating() methods close to each other and encapsulated, so it's harder to mess up.
class ShowMoveAnimNode: SKNode {
    let animKey = "showMove"

    var isAnimating: Bool = false {
        didSet {
            guard oldValue != isAnimating else { return }
            if isAnimating {
                startAnimating()
            } else {
                stopAnimating()
            }
        }
    }

    private func startAnimating() {
        let shortPeriod = 0.2
        let scaleDown = SKAction.scale(by: 0.75, duration: shortPeriod)
        let seq = SKAction.sequence([scaleDown,
                                     scaleDown.reversed(),
                                     scaleDown,
                                     scaleDown.reversed(),
                                     SKAction.wait(forDuration: shortPeriod * 6)])
        let repeated = SKAction.repeatForever(seq)
        run(repeated, withKey: animKey)
    }

    private func stopAnimating() {
        removeAction(forKey: animKey)
        xScale = 1
        yScale = 1
    }
}
Usage: just add everything that should be animated as children of this node. It works well with simple animations like fade, scale, and move. For example:
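A sketch with assumed names (pieceNode and boardNode stand in for a match-3 piece and its parent layer; they are not from the solution above):

let moveHint = ShowMoveAnimNode()
boardNode.addChild(moveHint)          // assumed parent node holding the board
pieceNode.move(toParent: moveHint)    // re-parent the piece so the wrapper's scaling applies to it
moveHint.isAnimating = true           // start the pulsating hint
// ... later, when the user makes a move:
moveHint.isAnimating = false          // removes the action and restores xScale/yScale to 1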
As @Knight0fDragon suggested, you would be better off using the GKStateMachine functionality. I will give you an example.
First, declare the states of your player/character in your scene:
lazy var playerState: GKStateMachine = GKStateMachine(states: [
    Idle(scene: self),
    Run(scene: self)
])
Then you need to create a class for each of these states; in this example I will show you only the Idle class:
import SpriteKit
import GameplayKit

class Idle: GKState {
    weak var scene: GameScene?

    init(scene: SKScene) {
        self.scene = scene as? GameScene
        super.init()
    }

    override func didEnter(from previousState: GKState?) {
        // Here you can make changes to your character when it enters this state, for example change its texture.
    }

    override func isValidNextState(_ stateClass: AnyClass) -> Bool {
        return stateClass is Run.Type // As the method name says: which states the character can go to from this state.
    }

    override func update(deltaTime seconds: TimeInterval) {
        // The update method for this state. Say a button controls your character's velocity;
        // when the player goes over a certain velocity, enter the Run state.
        if playerVelocity > 500 { // playerVelocity is just an example of a variable holding the player's velocity.
            scene?.playerState.enter(Run.self)
        }
    }
}
Now, of course, in your scene you need to do two things. First, initialize the character to a certain state, or else it will remain stateless; you can do this in the didMove method:
override func didMove(to view: SKView) {
    playerState.enter(Idle.self)
}
And last but not least, make sure the scene's update method calls the state machine's update method:
override func update(_ currentTime: TimeInterval) {
    playerState.update(deltaTime: currentTime)
}
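For completeness, a Run state following the same pattern could look like the sketch below; the threshold and the transition back to Idle are assumptions mirroring the Idle class above, not part of the original example.

import SpriteKit
import GameplayKit

class Run: GKState {
    weak var scene: GameScene?

    init(scene: SKScene) {
        self.scene = scene as? GameScene
        super.init()
    }

    override func didEnter(from previousState: GKState?) {
        // For example, switch to the running texture or start a run animation here.
    }

    override func isValidNextState(_ stateClass: AnyClass) -> Bool {
        return stateClass is Idle.Type // in this sketch, Run can only go back to Idle
    }

    override func update(deltaTime seconds: TimeInterval) {
        // Mirror of the Idle check: drop back to Idle once the player slows down again.
        if playerVelocity <= 500 { // playerVelocity is the same example variable used in Idle
            scene?.playerState.enter(Idle.self)
        }
    }
}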

UIViewPropertyAnimator AutoLayout Completion Issue

I'm using UIViewPropertyAnimator to run an array of interactive animations, and one issue I'm having is that whenever I reverse the animations I can't run them forward again.
I'm using three functions to handle the animations in conjunction with a pan gesture recognizer.
private var runningAnimations = [UIViewPropertyAnimator]()

private func startInteractiveTransition(gestureRecognizer: UIPanGestureRecognizer, state: ForegroundState, duration: TimeInterval) {
    if runningAnimations.isEmpty {
        animateTransitionIfNeeded(gestureRecognizer: gestureRecognizer, state: state, duration: duration)
    }
    for animator in runningAnimations {
        animator.pauseAnimation()
        animationProgressWhenInterrupted = animator.fractionComplete
    }
}

private func animateTransitionIfNeeded(gestureRecognizer: UIPanGestureRecognizer, state: ForegroundState, duration: TimeInterval) {
    guard runningAnimations.isEmpty else {
        return
    }
    let frameAnimator = UIViewPropertyAnimator(duration: duration, dampingRatio: 1) {
        switch state {
        case .expanded:
            break // change frame
        case .collapsed:
            break // change frame
        }
    }
    frameAnimator.isReversed = false
    frameAnimator.addCompletion { _ in
        print("remove all animations")
        self.runningAnimations.removeAll()
    }
    self.runningAnimations.append(frameAnimator)
    for animator in runningAnimations {
        animator.startAnimation()
    }
}

private func updateInteractiveTransition(gestureRecognizer: UIPanGestureRecognizer, fractionComplete: CGFloat) {
    if runningAnimations.isEmpty {
        print("empty")
    }
    for animator in runningAnimations {
        animator.fractionComplete = fractionComplete + animationProgressWhenInterrupted
    }
}
What I've noticed is that after I reverse the animations and then call animateTransitionIfNeeded, frameAnimator is appended to runningAnimations; however, when I call updateInteractiveTransition immediately afterwards and check runningAnimations, it's empty.
So I'm led to believe that this may have to do with how Swift handles memory, or with how UIViewAnimating completes animations.
Any suggestions?
I've come to realize that the issue I was having was the result of how UIViewPropertyAnimator handles layout constraints upon reversal.
I couldn't find much detail on it online or in the official documentation, but I did find this, which helped a lot.
Animator just animates views into new frames. However, reversed or not, the new constraints still hold regardless of whether you reversed the animator or not. Therefore after the animator finishes, if later autolayout again lays out views, I would expect the views to go into places set by currently active constraints. Simply said: The animator animates frame changes, but not constraints themselves. That means reversing animator reverses frames, but it does not reverse constraints - as soon as autolayout does another layout cycle, they will be again applied.
As normal, you set your constraints and call view.layoutIfNeeded():
animator = UIViewPropertyAnimator(duration: duration, dampingRatio: 1) { [unowned self] in
    switch state {
    case .expanded:
        self.constraintA.isActive = false
        self.constraintB.isActive = true
        self.view.layoutIfNeeded()
    case .collapsed:
        self.constraintB.isActive = false
        self.constraintA.isActive = true
        self.view.layoutIfNeeded()
    }
}
And now, since our animator has the ability to reverse, we add a completion handler to ensure that the correct constraints are active upon completion by using the finishing position.
animator.addCompletion { [weak self] (position) in
    if position == .start {
        switch state {
        case .collapsed:
            // The collapse was reversed, so re-activate the expanded-layout constraint.
            self?.constraintA.isActive = false
            self?.constraintB.isActive = true
            self?.view.layoutIfNeeded()
        case .expanded:
            // The expansion was reversed, so re-activate the collapsed-layout constraint.
            self?.constraintB.isActive = false
            self?.constraintA.isActive = true
            self?.view.layoutIfNeeded()
        }
    }
}
The animator operates on animatable properties of views, such as the frame, center, alpha, and transform properties, creating the needed animations from the blocks you provide.
This is the crucial part of the documentation.
You can properly animate frame, center, alpha, and transform, so you cannot properly animate NSLayoutConstraints.
You should modify the frames of your views inside the addAnimations block.
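A minimal frame-driven sketch consistent with that advice, matching the comment-stubbed switch in animateTransitionIfNeeded above; panelView, expandedFrame and collapsedFrame are assumed names, and the view should not be pinned by active constraints while its frame is animated:

let frameAnimator = UIViewPropertyAnimator(duration: duration, dampingRatio: 1) {
    switch state {
    case .expanded:
        self.panelView.frame = self.expandedFrame
    case .collapsed:
        self.panelView.frame = self.collapsedFrame
    }
}
frameAnimator.addCompletion { _ in
    // Reversing simply animates the frame back to where it started,
    // so no constraint bookkeeping is needed in this variant.
    self.runningAnimations.removeAll()
}
self.runningAnimations.append(frameAnimator)
frameAnimator.startAnimation()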

Sliding animation of NSView background color in Swift

Trying to modify the color of an NSView with a sliding animation, like Google Trends:
let hexColors = ["56A55B", "4F86EC", "F2BC42", "DA5040"]

@IBAction func changeColor(sender: NSButton) {
    let randomIndex = Int(arc4random_uniform(UInt32(hexColors.count)))
    NSAnimationContext.runAnimationGroup({ _ in
        // duration
        NSAnimationContext.current.duration = 5.0
        self.view.animator().layer?.backgroundColor = NSColor(hex: hexColors[randomIndex]).cgColor
    }, completionHandler: {
        print("completed")
    })
}
I tried using NSAnimationContext to set the duration of the color change, but it does not work. However, it works with the alphaValue of the view.
I'm not sure if you have gotten your answer yet, but this might get it to work:
let hexColors = ["56A55B", "4F86EC", "F2BC42", "DA5040"]

@IBAction func changeColor(sender: NSButton) {
    let randomIndex = Int(arc4random_uniform(UInt32(hexColors.count)))
    NSAnimationContext.runAnimationGroup({ context in
        // duration
        context.duration = 5.0
        // This is the key property that needs to be set
        context.allowsImplicitAnimation = true
        self.view.animator().layer?.backgroundColor = NSColor(hex: hexColors[randomIndex]).cgColor
    }, completionHandler: {
        print("completed")
    })
}
Here is what the documentation says:
/* Determine if animations are enabled or not. Using the -animator proxy will automatically set allowsImplicitAnimation to YES. When YES, other properties can implicitly animate along with the initially changed property. For instance, calling [[view animator] setFrame:frame] will allow subviews to also animate their frame positions. This is only applicable when layer backed on Mac OS 10.8 and later. The default value is NO.
*/
@available(macOS 10.8, *)
open var allowsImplicitAnimation: Bool
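One more hedged note: because implicit animation is "only applicable when layer backed", make sure the view actually has a backing layer before the animation runs, for example in the view controller's viewDidLoad:

override func viewDidLoad() {
    super.viewDidLoad()
    // Without a backing layer, layer?.backgroundColor is nil and nothing can animate.
    view.wantsLayer = true
}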