SwiftUI: Synchronise 2 Animation Functions (CAMediaTimingFunction and Animation.timingCurve)

I was using CAMediaTimingFunction (for a window) and Animation.timingCurve (for the content) with the same Bézier curve and duration to handle the resizing. However, there can be a very small timing difference between the two, which causes the window to flicker irregularly, like this (GitHub repository: http://github.com/toto-minai/DVD-Player-Components ; original design by 7ahang):
The red background indicates the whole window, while the green background indicates the content. As you can see, the red background is visible at certain points during the animation, which means the content is sometimes smaller than the window; this leaves small gaps and causes unwanted shaking.
In AppDelegate.swift, I used a custom NSWindow class for the main window and overrode setContentSize():
class AnimatableWindow: NSWindow {
    var savedSize: CGSize = .zero

    override func setContentSize(_ size: NSSize) {
        if size == savedSize { return }
        savedSize = size

        NSAnimationContext.runAnimationGroup { context in
            context.timingFunction = CAMediaTimingFunction(controlPoints: 0, 0, 0.58, 1.00) // Custom Bézier curve
            context.duration = animationTime // Custom animation time
            animator().setFrame(NSRect(origin: frame.origin, size: size), display: true, animate: true)
        }
    }
}
In ContentView.swift, the main structure (simplified):
struct ContentView: View {
    @State private var showsDrawer: Bool = false

    var body: some View {
        HStack {
            SecondaryAdjusters() // Left panel

            HStack {
                Drawer() // Size-changing panel
                    .frame(width: showsDrawer ? 175 : 0, alignment: .trailing)

                DrawerBar() // Right bar
                    .onTapGesture {
                        withAnimation(Animation.timingCurve(0, 0, 0.58, 1.00, duration: animationTime)) { // The same Bézier curve and animation time
                            showsDrawer.toggle()
                        }
                    }
            }
        }
    }
}
The animated object is the size-changing Drawer() in the middle. I suppose the flicker happens because the two methods run at different frame rates. I'm wondering how to synchronise the two functions, and whether there's another way to implement a drawer-like view in SwiftUI. You can clone the repository here. Thanks for your patience.

Related

SwiftUI center align a view except if another view "pushes" it up

Here is a breakdown.
I have a ZStack that contains 2 VStacks.
The first VStack has a Spacer and an Image.
The second has a Text and a Button.
ZStack {
    VStack {
        Spacer()
        Image("some image")
    }
    VStack {
        Text("press the button")
        Button("ok") {
            print("you pressed the button")
        }
    }
}
Now this setup easily gives me an image at the bottom of the ZStack, and a centered title and button.
However, if for example the device has a small screen or an iPad rotates to landscape, then depending on the image size (which is dynamic), the title and button will overlap the image instead of the button being "pushed" up.
In UIKit this is as simple as centering the button in the superview with a high-priority constraint and adding a required greaterThanOrEqualTo constraint against image.topAnchor.
The button would be centered on screen, but if the image's top rose too high, the required constraint against the image's top anchor would win over the centering constraint and push the button up.
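For reference, a hedged sketch of that UIKit constraint setup (the function, the view names and the container are illustrative assumptions, not code from the question; the required constraint is written from the button's side, which is equivalent to the greaterThanOrEqualTo form described above):
import UIKit

// Sketch only: `imageView` and `buttonStack` are assumed to already be subviews of
// `container` with translatesAutoresizingMaskIntoConstraints = false.
func constrainButtonAboveImage(_ buttonStack: UIView, _ imageView: UIView, in container: UIView) {
    // Centre the buttons vertically when there is room (high priority, breakable).
    let centerY = buttonStack.centerYAnchor.constraint(equalTo: container.centerYAnchor)
    centerY.priority = .defaultHigh

    // Required: the buttons' bottom edge must never pass the image's top edge.
    let stayAboveImage = buttonStack.bottomAnchor.constraint(lessThanOrEqualTo: imageView.topAnchor)

    NSLayoutConstraint.activate([centerY, stayAboveImage])
}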
I have looked into custom alignments and can easily get "always above the image" or "always centered", but I'm missing some insight into getting both depending on the layout. The image size is dynamic, so no hardcoded sizes.
What am I missing here? How would you solve this simple yet tricky task?
There might be an easier way using .alignmentGuide, but I took this answer as a chance to practice with the Layout protocol.
I created a custom ImageAndButtonLayout that should do what you want: it takes two views, assuming the first is the image and the second is the button (or anything else).
They are put into separate computed views just for clarity; you can also put them directly into ImageAndButtonLayout. For testing, you can change the height of the image via a slider.
The Layout always uses the full available height and pushes the first view (the image) to the bottom, so you don't need an extra Spacer() with the image. The position of the second view (the button) is calculated from the height of the first view and the available height.
struct ContentView: View {
    @State private var imageHeight = 200.0 // for testing

    var body: some View {
        VStack {
            ImageAndButtonLayout {
                imageView
                buttonView
            }

            // changing "image" height for testing
            Slider(value: $imageHeight, in: 50...1000)
                .padding()
        }
    }

    var imageView: some View {
        Color.teal // Image placeholder
            .frame(height: imageHeight)
    }

    var buttonView: some View {
        VStack {
            Text("press the button")
            Button("ok") {
                print("you pressed the button")
            }
        }
    }
}
struct ImageAndButtonLayout: Layout {
    func sizeThatFits(proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) -> CGSize {
        let maxSizes = subviews.map { $0.sizeThatFits(.infinity) }
        var totalWidth = maxSizes.max { $0.width < $1.width }?.width ?? 0
        totalWidth = min(totalWidth, proposal.width ?? .infinity)
        let totalHeight = proposal.height ?? .infinity // always return the maximum height
        return CGSize(width: totalWidth, height: totalHeight)
    }

    func placeSubviews(in bounds: CGRect, proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) {
        let heightImage = subviews.first?.sizeThatFits(.unspecified).height ?? 0
        let heightButton = subviews.last?.sizeThatFits(.unspecified).height ?? 0
        let maxHeightContent = bounds.height

        // place the image at the bottom, growing upwards
        let ptBottom = CGPoint(x: bounds.midX, y: bounds.maxY) // bottom of screen
        if let first = subviews.first {
            var totalWidth = first.sizeThatFits(.infinity).width
            totalWidth = min(totalWidth, proposal.width ?? .infinity)
            first.place(at: ptBottom, anchor: .bottom, proposal: .init(width: totalWidth, height: maxHeightContent))
        }

        // place the button at the center – or above the image
        var centerY = bounds.midY
        if heightImage > maxHeightContent / 2 - heightButton {
            centerY = maxHeightContent - heightImage
            centerY = max(heightButton * 2, centerY) // stop at the top of the screen
        }
        let ptCenter = CGPoint(x: bounds.midX, y: centerY)
        if let last = subviews.last {
            last.place(at: ptCenter, anchor: .center, proposal: .unspecified)
        }
    }
}

SwiftUI: What exactly happens when using the .offset and .position modifiers simultaneously on a View, and which decides the final location?

I ran into this question when I tried to give a View an initial position and then let the user drag the View anywhere. Although I already solved the issue by using only .position(x:y:) on the View, my initial idea was to use .position(x:y:) for the initial position and .offset(offset:) to move the View with the gesture, simultaneously. Now I really just want to understand, in more detail, what exactly happens when I use both of them at the same time (the code below), so I can explain the behavior of the View below.
What I cannot explain in the View below is this: when I simply drag the VStack box, it works as expected and the VStack moves with my finger. However, once the gesture ends and I start a new drag gesture on the VStack, the VStack box suddenly jumps back to its original position (as when the code first loads) and then starts moving with the gesture. Note that the gesture itself behaves normally, but since the VStack has already jumped to a different position, it starts moving from there. As a result, my fingertip is no longer on top of the VStack box but offset by some distance, even though the VStack follows the same trajectory as the drag gesture.
My question is: why does the .position(x:y:) modifier seem to take effect only at the very beginning of each new drag gesture, while during the drag .offset(offset:) dominates the movement and the VStack stops wherever it was dragged to, yet as soon as a new drag gesture starts, the VStack suddenly jumps back to the original position? I just cannot wrap my head around how this behavior unfolds over time. Can somebody provide some insight?
Note that I have already solved the issue and achieved what I need; right now I just want to understand what exactly is going on when .position(x:y:) and .offset(offset:) are used at the same time, so please avoid advice like "don't use them simultaneously", thank you. The code below should be runnable after copy and paste; if not, pardon any mistakes, as I deleted a few lines to make it cleaner to reproduce the issue.
import SwiftUI

struct ContentView: View {
    var body: some View {
        ButtonsViewOffset()
    }
}

struct ButtonsViewOffset: View {
    let location: CGPoint = CGPoint(x: 50, y: 50)
    @State private var offset = CGSize.zero
    @State private var color = Color.purple

    var dragGesture: some Gesture {
        DragGesture()
            .onChanged { value in
                self.offset = value.translation
                print("offset onChange: \(offset)")
            }
            .onEnded { _ in
                if self.color == Color.purple {
                    self.color = Color.blue
                } else {
                    self.color = Color.purple
                }
            }
    }

    var body: some View {
        VStack {
            Text("Watch 3-1")
            Text("x: \(self.location.x), y: \(self.location.y)")
        }
        .background(Color.gray)
        .foregroundColor(self.color)
        .offset(self.offset)
        .position(x: self.location.x, y: self.location.y)
        .gesture(dragGesture)
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        Group {
            ContentView()
        }
    }
}
Your issue has nothing to do with using position and offset together; they actually both work simultaneously. Position sets the absolute position of the view, whereas offset moves it relative to that absolute position. Therefore, you will notice that your view starts at position (50, 50) on the screen, and then you can drag it all around. Once you let go, it stops wherever it was. So far, so good. You then want to move it around again, and it pops back to the original position. The reason it does that is the way you set up location as a let constant. It needs to be state.
The problem stems from the fact that you are adding, without realizing it, the values of offset to position. When you finish your drag, offset retains its last value. However, when you start your next drag, the gesture's translation starts at (0, 0) again, so the offset is reset to (0, 0) and the view moves back to the original position. The key is that you need to either use just the position or update the offset in .onEnded; don't use both. Here you have a fixed position and are not saving the offset. How you handle it depends upon the purpose for which you are moving the view.
First, just use .position():
struct OffsetAndPositionView: View {
    @State private var position = CGPoint(x: 50, y: 50)
    @State private var color = Color.purple

    var dragGesture: some Gesture {
        DragGesture()
            .onChanged { value in
                position = value.location
                print("position onChange: \(position)")
            }
            .onEnded { value in
                if color == Color.purple {
                    color = Color.blue
                } else {
                    color = Color.purple
                }
            }
    }

    var body: some View {
        Rectangle()
            .fill(color)
            .frame(width: 30, height: 30)
            .position(position)
            .gesture(dragGesture)
    }
}
Second, just use .offset():
struct ButtonsViewOffset: View {
    @State private var savedOffset = CGSize.zero
    @State private var dragValue = CGSize.zero
    @State private var color = Color.purple

    var offset: CGSize {
        savedOffset + dragValue
    }

    var dragGesture: some Gesture {
        DragGesture()
            .onChanged { value in
                dragValue = value.translation
                print("dragValue onChange: \(dragValue)")
            }
            .onEnded { value in
                savedOffset = savedOffset + value.translation
                dragValue = CGSize.zero
                if color == Color.purple {
                    color = Color.blue
                } else {
                    color = Color.purple
                }
            }
    }

    var body: some View {
        Rectangle()
            .fill(color)
            .frame(width: 30, height: 30)
            .offset(offset)
            .gesture(dragGesture)
    }
}

// Convenience operator overload so two CGSize values can be added together
func + (lhs: CGSize, rhs: CGSize) -> CGSize {
    return CGSize(width: lhs.width + rhs.width, height: lhs.height + rhs.height)
}

How to get .onHover(...) events for custom Shapes

Say you have a Circle. How can you change its color when you hover inside the circle?
I tried
struct ContentView: View {
    @State var hovered = false

    var body: some View {
        Circle()
            .foregroundColor(hovered ? .purple : .blue)
            .onHover { self.hovered = $0 }
    }
}
But this causes hovered to be true even when the mouse is outside of the circle (but still inside its bounding box).
I noticed that the .onTapGesture(...) uses the hit testing of the actual shape and not the hit testing of its bounding box.
So how can we have similar hit testing behavior as the tap gesture but for hovering?
The answer depends on the precision you need. Hover in SwiftUI currently just monitors mouse-entered and mouse-exited events, so the working region for a view is its frame, which is a rectangle.
You may build a background view in a ZStack that composes such rectangles with a customized algorithm. For a circular shape, it would look like a matrix of small rectangles. One oversimplified example could be the following.
ZStack {
    VStack {
        HStack {
            Rectangle().frame(width: 100, height: 100).offset(x: -50, y: 0)
            Rectangle().frame(width: 100, height: 100).offset(x: 50, y: 0)
        }
    }
    .onHover { print($0) }

    Rectangle().foregroundColor(hovered ? .purple : .blue).clipShape(Circle())
}

SwiftUI .rotationEffect() framing and offsetting

When applying .rotationEffect() to a Text, it rotates the text as expected, but its frame remains unchanged. This becomes an issue when stacking rotated views with non-rotated views, such as in a VStack or HStack, causing them to overlap.
I initially thought the rotationEffect would simply update the frame of the Text to be vertical, but this is not the case.
I've tried manually setting the frame size of (and, if needed, offsetting) the Text, which sort of works, but I don't like this solution because it requires some guessing and checking of where the Text will appear, how big to make the frame, etc.
Is this just how rotated text is done, or is there a more elegant solution to this?
struct TextAloneView: View {
    var body: some View {
        VStack {
            Text("Horizontal text")
            Text("Vertical text").rotationEffect(.degrees(-90))
        }
    }
}
Overlapping Text
You need to adjust the frame yourself in this case. That requires capturing what the frame is, and then applying the adjustment.
First, to capture the existing frame, create a preference, which is a system for passing data from child views to their parents:
private struct SizeKey: PreferenceKey {
    static let defaultValue: CGSize = .zero
    static func reduce(value: inout CGSize, nextValue: () -> CGSize) {
        value = nextValue()
    }
}

extension View {
    func captureSize(in binding: Binding<CGSize>) -> some View {
        overlay(GeometryReader { proxy in
            Color.clear.preference(key: SizeKey.self, value: proxy.size)
        })
        .onPreferenceChange(SizeKey.self) { size in binding.wrappedValue = size }
    }
}
This creates a new .captureSize(in: $binding) method on Views.
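As an aside, here is a minimal usage sketch of captureSize on its own (the Text and the size readout are purely illustrative):
struct SizeReadout: View {
    @State private var measured: CGSize = .zero

    var body: some View {
        VStack {
            Text("Hello, world")
                .captureSize(in: $measured)   // writes the rendered size into `measured`
            Text("measures \(Int(measured.width)) x \(Int(measured.height))")
                .font(.caption)
        }
    }
}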
Using that, we can create a new kind of View that rotates its frame:
struct Rotated<Rotated: View>: View {
    var view: Rotated
    var angle: Angle

    init(_ view: Rotated, angle: Angle = .degrees(-90)) {
        self.view = view
        self.angle = angle
    }

    @State private var size: CGSize = .zero

    var body: some View {
        // Rotate the frame, and compute the smallest integral frame that contains it
        let newFrame = CGRect(origin: .zero, size: size)
            .offsetBy(dx: -size.width / 2, dy: -size.height / 2)
            .applying(.init(rotationAngle: CGFloat(angle.radians)))
            .integral

        return view
            .fixedSize()                    // Don't change the view's ideal frame
            .captureSize(in: $size)         // Capture the size of the view's ideal frame
            .rotationEffect(angle)          // Rotate the view
            .frame(width: newFrame.width,   // And apply the new frame
                   height: newFrame.height)
    }
}
And for convenience, an extension to apply it:
extension View {
    func rotated(_ angle: Angle = .degrees(-90)) -> some View {
        Rotated(self, angle: angle)
    }
}
And now your code should work as you expect:
struct TextAloneView: View {
    var body: some View {
        VStack {
            Text("Horizontal text")
            Text("Vertical text").rotated()
        }
    }
}
rotationEffect takes a second argument, which is the anchor point; if you omit it, the default is .center.
Try this instead:
.rotationEffect(.degrees(-90), anchor: .bottomTrailing)
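Applied to the example from the question, that would look something like the sketch below; note that the text's layout frame is still the unrotated one, so some manual adjustment may still be needed.
struct TextAloneView: View {
    var body: some View {
        VStack {
            Text("Horizontal text")
            Text("Vertical text")
                .rotationEffect(.degrees(-90), anchor: .bottomTrailing) // rotate around the bottom-trailing corner
        }
    }
}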

onLongPressGesture frame in SwiftUI

I am trying to replicate a long press button effect like the Apple default one, but with a custom style.
I have a custom view where I call onLongPressGesture. The issue is that the pressing variable gets set to false, even though my finger is still pressing, as soon as I move my finger outside the view's onLongPressGesture frame.
I don't want the pressing variable to be set to false when I move my finger outside the frame area.
How can I achieve that?
Here's my code:
.onLongPressGesture(minimumDuration: 1000000000, maximumDistance: 100, pressing: { pressing in
    if !pressing {
        self.action?()
        self.showNextScreen = true
    } else {
        withAnimation(.spring()) {
            self.showGrayBackgound = true
        }
    }
}) { }
Use the maximumDistance parameter to set how far outside the view the gesture applies:
struct LongPressView: View {
    @State var isPressing = false
    let action: () -> ()

    var body: some View {
        Rectangle()
            .fill(isPressing ? Color.orange : .gray)
            .frame(width: 50, height: 30)
            .onLongPressGesture(minimumDuration: 1000000, maximumDistance: 1000, pressing: { pressing in
                self.isPressing = pressing
                if !pressing { self.action() }
            }, perform: {})
    }
}
A sufficiently large maximumDistance will extend beyond the boundary of the screen, and the long press will remain active until it's released. However, the macOS button behavior where you click and hold, drag outside the frame and back, and the pressed state turns off and back on isn't possible with a LongPressGesture: once the pointer moves outside the maximumDistance, the gesture is complete.