I want to recognize a single tap and a double tap on the Capsule below. This code works fine:
Capsule()
    .frame(width: 100, height: 42)
    .onTapGesture(count: 1) {
        print("Single Tap recognized instantly")
    }
But when I add .onTapGesture(count: 2) to it, the single tap is only called after a delay of about 0.25 seconds.
Capsule()
    .frame(width: 100, height: 42)
    .onTapGesture(count: 2) {
        print("Double tap recognized instantly")
    }
    .onTapGesture(count: 1) {
        print("Single Tap recognized after 0.25 s")
    }
How can I fix this?
Just to make one thing clear: you cannot have a single-tap gesture recognized with no delay alongside a double-tap gesture when your goal is to detect both exclusively (i.e. the system should differentiate between a single tap and a double tap), since a single tap could always turn out to be the first tap of a double tap. This means the system always has to wait a certain period (about 0.25 seconds, as it seems in your case) to rule out that the user intended to double-tap.
However, when your goal is to have a single-tap gesture together (in other words, simultaneously) with the double-tap gesture, this is perfectly feasible.
Gesture is, like so many things in SwiftUI, a protocol, and custom gestures can be composed declaratively, much like the body of a view. In your case, we want a double-tap gesture that runs simultaneously with a single-tap gesture. The simplest way to do this is to declare a computed property with an opaque return type (some Gesture) and describe your gesture in that property:
var tapGesture: some Gesture {
    TapGesture(count: 2)
        .onEnded {
            print("Double tap")
        }
        .simultaneously(with: TapGesture(count: 1)
            .onEnded {
                print("Single Tap")
            })
}
You can then add this gesture to whatever view you like, just as you would with any other gesture:
Capsule()
    .gesture(tapGesture)
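If, on the other hand, you accept the delay and want the two taps to be mutually exclusive, the same idea can be written with an exclusive composition. This is only a sketch of that variant, not part of the original answer:

// Sketch: exclusive composition, assuming the ~0.25 s single-tap delay is acceptable.
// The double tap takes precedence; the single tap only fires once a second tap is ruled out.
var exclusiveTapGesture: some Gesture {
    TapGesture(count: 2)
        .onEnded {
            print("Double tap")
        }
        .exclusively(before: TapGesture(count: 1)
            .onEnded {
                print("Single tap (after the double-tap timeout)")
            })
}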
I want to create the "one finger zoom gesture" from the iOS Maps App. So a tap (touch down, then touch up), then long press (only touch down), and finally a drag up/down.
I was expecting something like this:
let tapGesture = TapGesture().onEnded { _ in
    print("tap")
}

let longPressGesture = LongPressGesture(minimumDuration: 0.00001)
    .onChanged { _ in
        print("long press update")
    }
    .onEnded { _ in
        print("long press")
    }

let dragGesture = DragGesture(minimumDistance: 1)
    .onChanged { gesture in
        print("drag")
    }

let combined = tapGesture.sequenced(before: longPressGesture).sequenced(before: dragGesture)

SomeView()
    .gesture(combined)
The problem is that the LongPressGesture after the initial tap never gets called. I've tried a LongPressGesture instead of the initial TapGesture, but then the two sequential long presses are triggered instantly one after the other (so the "touch up" event is not respected).
An initial DragGesture(minimumDistance: 0) with onEnded and a corresponding @State to track the initial tap might work, but I have a ScrollView beneath (for the map-like panning and zooming), so I'm guessing a DragGesture cannot be the first one.
I also tried setting a gestureEnabled flag after a first tap via onTapGesture() and then using .gesture(combined, including: gestureEnabled ? .all : .none). With a few more edge cases handled I got it working, only to find out that double taps no longer worked anywhere else on the view.
I should clarify that there's a ScrollView with text underneath the view, so long press, double tap, and scrolling (horizontal and vertical) should all keep working.
Any ideas?
Currently, my best alternative is just using long press + drag. It also works if the user taps first (since the tap simply isn't required in that case). The only downside is that it also triggers if the user holds down briefly before attempting a scroll.
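For reference, a minimal sketch of that long press + drag fallback; the view, the 0.2 s minimum duration, and the zoom handling are placeholders and assumptions, not taken from the question:

import SwiftUI

// Sketch of the fallback: a long press sequenced before a drag.
struct OneFingerZoomFallback: View {
    var body: some View {
        Color.clear // stands in for SomeView() / the map content
            .contentShape(Rectangle())
            .gesture(
                LongPressGesture(minimumDuration: 0.2)
                    .sequenced(before: DragGesture(minimumDistance: 0))
                    .onChanged { value in
                        if case .second(true, let drag?) = value {
                            // Vertical translation drives the one-finger zoom.
                            print("zoom by \(drag.translation.height)")
                        }
                    }
            )
    }
}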
I'm trying to implement the drag and drop feature in my LazyHGrid view. When I try to drop a view onto another view, a "plus" icon inside a green circle is displayed at the top right corner of the view.
import SwiftUI
import UniformTypeIdentifiers

struct TestView: View {
    var d: GridData
    @Binding var list: [GridData]
    @State private var dragOver = false

    var body: some View {
        VStack {
            Text(String(d.id))
                .font(.headline)
                .foregroundColor(.white)
        }
        .frame(width: 160, height: 240)
        .background(colorPalette[d.id])
        .onDrag {
            let item = NSItemProvider(object: String(d.id) as NSString)
            item.suggestedName = String(d.id)
            return item
        }
        .onDrop(of: [UTType.text], isTargeted: $dragOver) { providers in
            return true
        }
        .border(Color(UIColor.label), width: dragOver ? 8 : 0)
    }
}
This is controlled by the DropProposal's drop operation, which by default (if you do not provide an explicit drop delegate) is .copy, as documented, and that is what adds the '+' sign. It informs the user that something will be duplicated.
/// Tells the delegate that a validated drop moved inside the modified view.
///
/// Use this method to return a drop proposal containing the operation the
/// delegate intends to perform at the drop ``DropInfo/location``. The
/// default implementation of this method returns `nil`, which tells the
/// drop to use the last valid returned value or else
/// ``DropOperation/copy``.
public func dropUpdated(info: DropInfo) -> DropProposal?
If you want to manage it explicitly, provide a DropDelegate with dropUpdated implemented, as in the demo below:
func dropUpdated(info: DropInfo) -> DropProposal? {
    // By this you inform the user that the item will just be relocated
    return DropProposal(operation: .move)
}
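For completeness, here is a minimal sketch of how such a delegate could be attached; MyDropDelegate and the empty performDrop are placeholder names and implementations, not from the original answer:

import SwiftUI

// Sketch: a delegate that proposes .move so no '+' badge is shown.
struct MyDropDelegate: DropDelegate {
    func dropUpdated(info: DropInfo) -> DropProposal? {
        DropProposal(operation: .move)
    }

    func performDrop(info: DropInfo) -> Bool {
        // Handle the dropped item providers here.
        true
    }
}

// Attach it instead of the closure-based onDrop:
// .onDrop(of: [UTType.text], delegate: MyDropDelegate())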
That is the normal user interface indication that, at the point where your finger currently is, it is okay to drop the view you are dragging.
From a UX point, the goal is to commit the input when people click anywhere outside the TextField.
For example, if the input is for renaming an item, then when clicked outside, we store the input as the new item name, and replace the TextField with a Text to show the new item name.
I assume it is an expected standard behavior, but please let me know if it is against Apple's macOS standards.
Is there a standard / conventional way to achieve it with SwiftUI, or with some AppKit workaround?
As far as I can see, onCommit is only triggered when we hit the Return key, not when we click outside the TextField. So what I think I need to figure out is how to detect a click outside.
What I considered:
1. Some built-in view modifier on the TextField that activates this behavior, but I couldn't find any.
2. Detecting focus loss or editingChanged of the TextField, but by default, when we click on the background / a button / a text, the TextField doesn't lose focus, nor is the editingChanged event triggered.
3. If the TextField is focused, adding an overlay on the ContentView with an onTapGesture modifier, but then it would take two taps to trigger a button while a TextField is focused.
4. Adding an onTapGesture modifier on the ContentView which calls a focusOut method, but it doesn't receive the event when people tap on a child view that also has an onTapGesture on it.
5. Improving on 4: also call focusOut from the onTapGesture callback of all child views. So far this is the only viable option I see, but I'm not sure it is a good pattern to put extra code in every onTapGesture just to customize TextField behavior (a sketch of this option follows the example code below).
Example code:
import SwiftUI

@main
struct app: App {
    @State private var inputText: String = ""

    var body: some Scene {
        WindowGroup {
            ZStack {
                Color.secondary.onTapGesture { print("Background tapped") }
                VStack {
                    Text("Some label")
                    Button("Some button") { print("Button clicked") }
                    TextField(
                        "Item rename input",
                        text: $inputText,
                        onCommit: { print("Item rename commit") }
                    )
                }
            }
        }
    }
}
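For what it's worth, here is a minimal sketch of option 5 from the list above, assuming macOS 12 or later so that @FocusState and onSubmit are available; commitRename() is a hypothetical helper standing in for "store the input as the new item name":

import SwiftUI

// Sketch: every tap handler also commits the rename and drops focus.
struct RenameExample: View {
    @State private var inputText = ""
    @FocusState private var fieldFocused: Bool

    func commitRename() {
        // Hypothetical: persist inputText as the new item name here.
        fieldFocused = false
    }

    var body: some View {
        ZStack {
            Color.secondary.onTapGesture { commitRename() }
            VStack {
                Text("Some label")
                Button("Some button") {
                    commitRename()
                    print("Button clicked")
                }
                TextField("Item rename input", text: $inputText)
                    .focused($fieldFocused)
                    .onSubmit { commitRename() }
            }
        }
    }
}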
I am using TapGesture in SwiftUI for macOS. A TapGesture is only recognized on the touch-up-inside event, i.e. when the press is released again. I want to call one action on touch down and another when the gesture ends.
There is an onEnded modifier available for TapGesture, but no onStart.
MyView()
    .onTapGesture {
        // Action here is only called after the tap gesture is released
        NSLog("Test")
    }
Is there any chance to detect touch down and touch release?
I tried using LongPressGesture as well, but couldn't figure it out.
If you want pure SwiftUI, then for now it is only possible indirectly.
Here is an approach. Tested with Xcode 11.4.
Note: minimumDistance: 0.0 below is important!
MyView()
    .gesture(DragGesture(minimumDistance: 0.0, coordinateSpace: .global)
        .onChanged { _ in
            print(">> touch down") // additional conditions might be here
        }
        .onEnded { _ in
            print("<< touch up")
        }
    )
The selected answer's onChanged generates a continuous stream of events while the finger is down, not just one event on touch down.
This works as requested:
MyView()
    .onLongPressGesture(minimumDuration: 0) {
        print("Touch Down")
    }
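Combining the two answers above, here is a sketch that reports touch-down only once per press and still reports touch-up; the Text is just a stand-in for MyView() from the question:

import SwiftUI

// Sketch: touch-down reported once, touch-up reported on release.
struct PressReportingView: View {
    @State private var isPressed = false

    var body: some View {
        Text("Press me")
            .gesture(
                DragGesture(minimumDistance: 0, coordinateSpace: .global)
                    .onChanged { _ in
                        // onChanged fires continuously; the flag limits the
                        // touch-down action to a single call per press.
                        if !isPressed {
                            isPressed = true
                            print(">> touch down")
                        }
                    }
                    .onEnded { _ in
                        isPressed = false
                        print("<< touch up")
                    }
            )
    }
}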
I'm writing an app for iPhone using SwiftUI in Xcode.
In one of the views, there is a Text label that changes its text whenever a button is pressed.
The entire view is spring animated, so whenever the text is changed via the button, it is changed with an animation.
The animation works well, except that during the animation the Text label shows an unnecessary ellipsis at the end of the text.
I've tried to remove the ellipsis using:
Text("text")
.truncationMode(nil)
However, this gives an error.
Is there any way to turn off the "..." in the Text label?
If not, is there a way to turn off animations for just that Text label without affecting the others, since the entire view is animated?
You can use minimumScaleFactor(_ factor: CGFloat). The text will shrink according to the factor value.
For example, if your font size is 10 and your factor is 0.4, the text font size will be able to decrease down to 4 if needed.
Text("text")
.minimumScaleFactor(0.1)
You can use Text("text").animation(nil) to turn off animation.
or you can choose other animations to prevent the ...
Text("text").animation(.spring(response: 0.0, dampingFraction:0.2))
Try this:
struct UnAnimatedText: View {
private let text: String
init(_ text: String) {
self.text = text
}
var body: some View {
Button(action: {
}) {
Text(text)
.frame(maxWidth: .infinity)
.animation(nil)
}
.disabled(true)
}
}
The text will change without animation, but the frame of UnAnimatedText will still animate.
.frame(maxWidth: .infinity) is optional; the main idea is to wrap the Text in a Button.