UIScreenEdgePanGestureRecognizer smooth like Safari - Swift

My container view controller has a screen edge pan gesture to change the views. The code for panning the views looks as follows:
func changeView(recognizer: UIScreenEdgePanGestureRecognizer) {
    println("INITIAL: \(recognizer.translationInView(view))")
    if recognizer.state == .Began {
        // Create and configure the view
        println("BEGAN: \(recognizer.translationInView(view))")
    }
    if recognizer.state == .Changed {
        println("CHANGED: \(recognizer.translationInView(view))")
        let translation = recognizer.translationInView(view)
        currentView.view.center.x += translation.x
        pendingView.view.center.x += translation.x
        recognizer.setTranslation(CGPointZero, inView: view)
    }
    if recognizer.state == .Ended {
        if recognizer.view!.center.x > view.bounds.size.width {
            // Animate the view to position
        } else {
            // Animate the view back to original
        }
    }
}
While this works, I'm still having an issue with the start of the pan. When a user swipes quickly, the translation is already large enough by the time the gesture begins to make the start of the pan look jerky.
For example, with a quick swipe the translation starts at a value around 100. That value is then added to the center.x of the views, causing the undesired jump.
I noticed Safari has a screen edge gesture as well to change views and this effect doesn't occur no matter how quick the swipe is. Nor does this happen with a normal UIPanGestureRecognizer.
I've tried wrapping the "animation" in UIView.animateWithDuration(). It does look smoother, but then it feels like it's just lagging behind the actual gesture, unlike how it's done in Safari.
Can someone please tell me a better way to pan the views so it will look as smooth as in Safari?
EDIT:
I've added several lines to check the value of the translation, and the problem is that it jumps from 0 to some large value, causing the unwanted behavior. It doesn't matter where I put recognizer.setTranslation(CGPointZero, inView: view).
The output is:
INITIAL: (21.5, 0.0)
BEGAN: (21.5, 0.0)
INITIAL: (188.0, -3.0)
CHANGED: (188.0, -3.0)
After some more testing:
func changeView(recognizer: UIScreenEdgePanGestureRecognizer) {
    println("INITIAL: \(recognizer.translationInView(view))")
    recognizer.setTranslation(CGPointZero, inView: view)
}
INITIAL: (0.0, 0.0)
INITIAL: (130.5, -35.5)
FINAL:
Seems like creating and preparing the new view causes a slight lag in .Began. That small amount of lag is enough to create a difference in translation of 100-200 points.
I'll probably have to preload the views somewhere else, I guess.
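A sketch of that preloading idea, assuming a hypothetical pendingView child controller and writing in the same Swift 1.x style as the question (the names and off-screen placement are illustrative, not from the original code):

```swift
// Hypothetical sketch: build the pending view up front (e.g. in viewDidLoad)
// so the .Began handler has no expensive setup left to do.
override func viewDidLoad() {
    super.viewDidLoad()
    preparePendingView()
}

func preparePendingView() {
    // Illustrative only: park the pending view just off-screen so that
    // .Began only has to start moving it, not construct and configure it.
    addChildViewController(pendingView)
    pendingView.view.frame = view.bounds.offsetBy(dx: -view.bounds.width, dy: 0)
    view.addSubview(pendingView.view)
    pendingView.didMoveToParentViewController(self)
}
```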

This won't solve all your problems, since, as you have rightly said, a screen edge pan gesture recognizer is a little crusty in its behavior; but do note that you are omitting one valuable piece of data - the question of what recognizer.translationInView is in the .Began state. At that time, obviously, the finger has already moved considerably; for, if it had not, we would not have recognized this as a screen edge pan gesture! You will thus be much happier, I think, if you construct your tests like this:
switch recognizer.state {
case .Began:
    // ... do initial setup
    fallthrough // <-- TAKE NOTE
case .Changed:
    // respond to changes
default: break
}
In that way, you will capture the missing datum and respond to it, and the jump will not be quite so bad.
I tried logging in both began and changed and my numbers (showing translationInView with no setTranslation back to zero) are this sort of thing:
began
changed
(-16.5, 0.0)
changed
(-41.5, 0.0)
changed
(-41.5, 0.0)
changed
(-58.5, 0.0)
(The first one, preceded by began, is the fallthrough execution of changed.) So yes, we do go from nothing to -41 very fast, but at least there is an intermediate value of -16.5 so it isn't quite so abrupt.
Also I should add that if there is a serious delay and jump it may well be that you have multiple conflicting gesture recognizers. If so, you can detect this fact by using delegate methods such as gestureRecognizer:shouldRequireFailureOfGestureRecognizer: - which will also let you prioritize between them and perhaps make the other g.r. give way sooner.
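As a sketch, such a delegate method might look like this. The delegate method itself is real UIGestureRecognizerDelegate API; making the edge pan win every conflict is just one illustrative policy, written in the same Swift 1.x style as the question:

```swift
class ContainerViewController: UIViewController, UIGestureRecognizerDelegate {

    // Illustrative policy: every other recognizer whose delegate is set to
    // this controller must wait for the screen edge pan to fail first,
    // so the edge pan wins any conflict near the screen edge.
    func gestureRecognizer(gestureRecognizer: UIGestureRecognizer,
        shouldRequireFailureOfGestureRecognizer otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return otherGestureRecognizer is UIScreenEdgePanGestureRecognizer
    }
}
```

Remember that this delegate method is only consulted for recognizers whose delegate property you have actually set.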

Related

Core Graphics with DisplayLink Unexpected Behavior

I'm trying to learn Core Graphics and am having trouble understanding the behavior of the code I've written, which uses a subclassed UIView and an override of the draw(_ rect:) function.
I've written a basic bouncing ball demo. Any number of random balls are created with random position and speed. They then bounce around the screen.
My issue is that the way the balls appear to move is unexpected, and there is a lot of flicker. Here is the sequence, inside for loops that iterate through all the balls:
Check for collisions.
If there is a collision with the wall, multiply speed by -1.
Increment ball position by ball speed.
I'm currently not clearing the context, so I would expect the existing balls to stay put. Instead they seem to slide smoothly along with the ball that's moving.
I'd like to understand why this is the case.
Here is an image of how the current code runs at 4 fps so that you can see how the shapes are being drawn and shift back and forth:
Here is my code:
class ViewController: UIViewController {
    let myView = MyView()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBlue
        myView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(myView)
        NSLayoutConstraint.activate([
            myView.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            myView.centerYAnchor.constraint(equalTo: view.centerYAnchor),
            myView.widthAnchor.constraint(equalTo: view.widthAnchor),
            myView.heightAnchor.constraint(equalTo: view.heightAnchor)
        ])
        createDisplayLink(fps: 60)
    }

    func createDisplayLink(fps: Int) {
        let displaylink = CADisplayLink(target: self, selector: #selector(step))
        displaylink.preferredFramesPerSecond = fps
        displaylink.add(to: .current, forMode: RunLoop.Mode.default)
    }

    @objc func step(displaylink: CADisplayLink) {
        myView.setNeedsDisplay()
    }
}

class MyView: UIView {
    let numBalls = 5
    var balls = [Ball]()

    override init(frame: CGRect) {
        super.init(frame: frame)
        for _ in 0..<numBalls {
            balls.append(
                Ball(
                    ballPosition: Vec2(x: CGFloat.random(in: 0...UIScreen.main.bounds.width),
                                       y: CGFloat.random(in: 0...UIScreen.main.bounds.height)),
                    ballSpeed: Vec2(x: CGFloat.random(in: 0.5...2),
                                    y: CGFloat.random(in: 0.5...2))))
        }
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        for i in 0..<numBalls {
            if balls[i].ballPosition.x > self.bounds.width - balls[i].ballSize || balls[i].ballPosition.x < 0 {
                balls[i].ballSpeed.x *= -1
            }
            balls[i].ballPosition.x += balls[i].ballSpeed.x
            if balls[i].ballPosition.y > self.bounds.height - balls[i].ballSize || balls[i].ballPosition.y < 0 {
                balls[i].ballSpeed.y *= -1
            }
            balls[i].ballPosition.y += balls[i].ballSpeed.y
        }
        for i in 0..<numBalls {
            context.setFillColor(UIColor.red.cgColor)
            context.setStrokeColor(UIColor.green.cgColor)
            context.setLineWidth(0)
            let rectangle = CGRect(x: balls[i].ballPosition.x, y: balls[i].ballPosition.y,
                                   width: balls[i].ballSize, height: balls[i].ballSize)
            context.addEllipse(in: rectangle)
            context.drawPath(using: .fillStroke)
        }
    }
}
There are a lot of misunderstandings here, so I'll try to take them one by one:
CADisplayLink does not promise it will call your step method every 1/60 of a second. There's a reason the property is called preferred frames per second. It's just a hint to the system of what you'd like. It may call you less often, and in any case there will be some amount of error.
To perform your own animations by hand, you need to look at what time is actually attached to the given frame, and use that to determine where things are. The CADisplayLink includes a timestamp to let you know that. You can't just increment by speed. You need to multiply speed by actual time to determine distance.
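The timestamp idea can be sketched like this, based on the question's step method. Note this assumes the ballSpeed values are re-expressed in points per second (an adjustment to the question's per-frame model), and that the controller moves the balls rather than draw(_:):

```swift
@objc func step(displaylink: CADisplayLink) {
    // targetTimestamp is when the next frame will be shown; the difference
    // from timestamp is the real frame duration, which may not be 1/60 s.
    let dt = CGFloat(displaylink.targetTimestamp - displaylink.timestamp)
    for i in 0..<myView.balls.count {
        // distance = speed (points per second) * elapsed time (seconds)
        myView.balls[i].ballPosition.x += myView.balls[i].ballSpeed.x * dt
        myView.balls[i].ballPosition.y += myView.balls[i].ballSpeed.y * dt
    }
    // Positions are now final for this frame; drawing just renders them.
    myView.setNeedsDisplay()
}
```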
"I'm currently not clearing the context, so I would expect the existing balls to stay put." Every time draw(rect:) is called, you receive a fresh context. It is your responsibility to draw everything for the current frame. There is no persistence between frames. (Core Animation generally provides those kinds of features by efficiently composing CALayers together; but you've chosen to use Core Graphics, and there you need to draw everything every time. We generally do not use Core Graphics this way.)
myView.setNeedsDisplay() does not mean "draw this frame right now." It means "the next time you're going to draw, this view needs to be redrawn." Depending on exactly when the CADisplayLink fires, you may drop a frame, or you might not. Using Core Graphics, you would need to update all the circles' locations before calling setNeedsDisplay(). Then draw(rect:) should just draw them, not compute what they are. (CADisplayLink is really designed to work with CALayers, though, and UIView drawing isn't designed to be updated this often, so this still may be a little tricky to keep smooth.)
The more normal way to build this system would be to create a CAShapeLayer for each ball and position them on the UIView's layer. Then, in the CADisplayLink callback, you would adjust their positions based on the timestamp of the next frame. Alternately, you could just set up a repeating Timer or DispatchSourceTimer (rather than a CADisplayLink) at something well below the screen refresh rate (like 1/20 s) and move the layer positions in that callback. This would be nice and simple and avoid the complexities of CADisplayLink (which is much more powerful, but expects you to use the timestamp and consider other soft real-time concerns).
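A hedged sketch of that layer-based setup, assuming the question's Ball model with speeds re-expressed in points per second (makeBallLayers and ballLayers are illustrative names, not from the original code):

```swift
var ballLayers = [CAShapeLayer]()

// One CAShapeLayer per ball; created once, then only repositioned.
func makeBallLayers(in view: UIView) {
    for ball in balls {
        let shape = CAShapeLayer()
        shape.path = UIBezierPath(ovalIn: CGRect(x: 0, y: 0,
                                                 width: ball.ballSize,
                                                 height: ball.ballSize)).cgPath
        shape.fillColor = UIColor.red.cgColor
        view.layer.addSublayer(shape)
        ballLayers.append(shape)
    }
}

@objc func step(displaylink: CADisplayLink) {
    let dt = CGFloat(displaylink.targetTimestamp - displaylink.timestamp)
    // Disable implicit animations so position changes apply immediately
    // instead of being animated over 0.25 s by Core Animation.
    CATransaction.begin()
    CATransaction.setDisableActions(true)
    for (i, shape) in ballLayers.enumerated() {
        balls[i].ballPosition.x += balls[i].ballSpeed.x * dt
        balls[i].ballPosition.y += balls[i].ballSpeed.y * dt
        shape.position = CGPoint(x: balls[i].ballPosition.x,
                                 y: balls[i].ballPosition.y)
    }
    CATransaction.commit()
}
```

The design point is that nothing is redrawn each frame; the ball shapes are rasterized once and Core Animation merely composites them at new positions.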

Flipping a UIImageView horizontally on tap results in "reversed" gesture recognizers and only works once

I implemented Giphy stickers, like Instagram's, in my camera app.
I want to flip a sticker horizontally on tap with this code:
@objc func tapGesture(_ gesture: UITapGestureRecognizer) {
    guard let gestureView = gesture.view else { return }
    gestureView.transform = CGAffineTransform(scaleX: -1, y: 1)
}
1: This works only one time. The flip cannot be reversed.
2: I have added several gesture recognizers. When I flip the image, the gestures are also reversed (rotating in the opposite direction, etc.)
What is the best way to flip and reflip the image, and keep the original gesture recognizer behavior?
Well, I think your code works as implemented.
If you set the transform to a scale of (-1, 1), i.e. flip the x-axis, this flip is applied to the "original" coordinate system. In other words, you are not applying a transform to the existing transform matrix (flip, then flip again); you are just replacing the original matrix (the identity matrix) every time.
What you might want is:
gestureView.transform = gestureView.transform.scaledBy(x: -1, y:1)
If you try to rotate a view clockwise, but the view is flipped, then you see the rotation from behind, i.e. it appears as if the rotation were anti-clockwise. The gesture recognizers themselves still work as expected.
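Putting that together, a tap handler that toggles the flip on each tap might look like this (a sketch based on the question's handler; concatenating onto the current transform is what makes it reversible):

```swift
@objc func tapGesture(_ gesture: UITapGestureRecognizer) {
    guard let gestureView = gesture.view else { return }
    // Concatenate the flip onto the current transform instead of replacing
    // it, so a second tap flips the view back to its original orientation.
    gestureView.transform = gestureView.transform.scaledBy(x: -1, y: 1)
}
```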

Pre-Emptive Swift Animation Object Move - Confusing

Good day,
I have a very simple animation function, that drops a button by 200.
However, I discover that before the animation begins, the object is moved up (-) by 200! Therefore, after the animation, the button is back where it started.
I tried to set self.button1.center = self.view.center in viewDidAppear before calling the function.
func dropStep() {
    UIView.animate(withDuration: 6, animations: {
        self.button1.center.y += 200
    }, completion: nil)
}
I expected the animation to start from where it is intended (at the center of the view) and not pre-emptively shifted up by 200 points.
Try,
func dropStep() {
    button1.center = self.view.center
    UIView.animate(withDuration: 6, animations: {
        self.button1.center.y += 200
    }, completion: nil)
}
Okay, so I could not genuinely find the reason behind this.
However, this was an app that had been used and reused to learn animations, so perhaps the outlet references or some sort of hidden state got messed up, making the app behave oddly.
I therefore created a new app with virtually the same code, and it behaved as expected. Thanks anyway for your help.

Tap events not getting detected on CAShapeLayer lineWidth?

I have created a circle using a Bezier path, and I added a tap handler to check whether the layer has been tapped. It works fine until I increase the lineWidth of the CAShapeLayer: taps on the line are sometimes detected and sometimes not.
I searched Stack Overflow for this problem but was unable to solve it. The problem remains the same: I cannot detect certain taps on my layer's lineWidth area.
I just want to detect taps on the lineWidth of a CAShapeLayer. I have been searching everywhere but couldn't find a proper solution; most of the answers are in outdated languages. I would really appreciate an example that solves this. Swift 4.
https://i.imgur.com/194Aljn.png
let tapRecognizer = UITapGestureRecognizer(target: self, action: #selector(tapDetected(tapRecognizer:)))
self.addGestureRecognizer(tapRecognizer)

@objc public func tapDetected(tapRecognizer: UITapGestureRecognizer) {
    let tapLocation: CGPoint = tapRecognizer.location(in: self)
    self.hitTest(tapLocation: CGPoint(x: tapLocation.x, y: tapLocation.y))
}

private func hitTest(tapLocation: CGPoint) {
    if layer.path?.contains(tapLocation) == true {
        print("Do something")
    } else {
        print("Nothing Found")
    }
}
The problem is that the stroke is not really part of the path - it is just part of how the path is displayed. You can convert the path into a larger path that contains the stroke by using some CGPath methods:
let pathWithLineStroke = UIBezierPath(cgPath: path.cgPath.copy(strokingWithWidth: 2.0, lineCap: .butt, lineJoin: .bevel, miterLimit: 1.0))
Of course, replace the width, lineCap, lineJoin, and miterLimit with your actual values.
I'd recommend doing this earlier in your code, and then just drawing the path that already has the stroke built in, instead of setting those properties on the CALayer.
Hope that helps. Good luck.
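For illustration, the hit test from the question could be adapted like this (a sketch; the width of 20 is a placeholder that should match the CAShapeLayer's actual lineWidth):

```swift
private func hitTest(tapLocation: CGPoint) {
    guard let path = layer.path else { return }
    // Build a fattened path covering the stroked area, then test against it.
    // The stroking width here must match the CAShapeLayer's lineWidth.
    let strokedPath = path.copy(strokingWithWidth: 20.0,
                                lineCap: .butt,
                                lineJoin: .bevel,
                                miterLimit: 1.0)
    if strokedPath.contains(tapLocation) {
        print("Do something")
    } else {
        print("Nothing Found")
    }
}
```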

Callout for MKPolyline

I have a short line (MKPolyline) and a custom annotation class (MKPointAnnotation). Right now I have the point annotation located at the midpoint of the polyline. However, I would like the callout to be displayed whenever any point on the polyline is touched, similar to how routing behaves in Maps. Since polylines don't seem to have a touchable attribute like annotations, is there a way to have the annotation encompass the entire line?
I do not care about the annotations marker (it is actually a nuisance and would prefer to hide it), only the callout and associated class info.
If at all possible could someone provide a brief example code with the answer?
EDIT: Other posts seem to be from 5+ years ago with links that do not work anymore
A couple of thoughts:
I think your “add annotation” approach for where you want this callout to start from is going to be the easiest approach. If you start contemplating making an annotation view subclass that shows the path, I think that’s going to get pretty hairy pretty quickly (handling scaling changes, rotations, 3D, etc.). Overlays give you a bunch of behaviors for free, so I think you’ll want to stick with that. And annotations give you callout behaviors for free, too, so I think you’ll want to stay with that too.
If you don’t want your annotation view to show a pin/marker, just don’t subclass from MKPinAnnotationView or MKMarkerAnnotationView, but rather just use MKAnnotationView.
The tricky step is how to detect taps on the MKPolylineRenderer. One idea is to create a path that traces the outline of the path of the MKPolyline.
extension MKPolyline {
    func contains(point: CGPoint, strokeWidth: CGFloat = 44, in mapView: MKMapView) -> Bool {
        guard pointCount > 1 else { return false }

        let path = UIBezierPath()
        path.move(to: mapView.convert(from: points()[0]))
        for i in 1 ..< pointCount {
            path.addLine(to: mapView.convert(from: points()[i]))
        }
        let outline = path.cgPath.copy(strokingWithWidth: strokeWidth, lineCap: .round, lineJoin: .round, miterLimit: 0)
        return outline.contains(point)
    }
}
where
extension MKMapView {
    func convert(from mapPoint: MKMapPoint) -> CGPoint {
        return convert(mapPoint.coordinate, toPointTo: self)
    }
}
You can then create a gesture recognizer that detects a tap, checks to see if it’s within this path that outlines the MKPolyline, or whatever. But these are the basic pieces of the solution.
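A sketch of that wiring, assuming mapView, polyline, and midpointAnnotation properties on your controller (handleTap is a hypothetical name):

```swift
// Hypothetical setup, e.g. in viewDidLoad: attach a tap recognizer
// to the map view.
let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
mapView.addGestureRecognizer(tap)

@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: mapView)
    // Test the tap against the fattened outline of the polyline.
    if polyline.contains(point: point, in: mapView) {
        // e.g. select the midpoint annotation so its callout appears
        mapView.selectAnnotation(midpointAnnotation, animated: true)
    }
}
```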
Obviously, the other answers here outline different approaches, apparently looking at the distance from the tap to the path. That conceivably would work, too.