"Hold and drag" would be holding down with one finger and dragging horizontally (or vertically) with another. GestureDetector recognizes this as a scale gesture, but I want to differentiate between these two cases:
Two fingers moving closer together or further apart (a normal scale)
One finger holding still while another finger moves closer or further away (the case I want to detect)
There's a built-in widget for that called LongPressDraggable; you can use it much like the Draggable widget. The main difference is that Draggable starts dragging immediately, while LongPressDraggable requires a hold before the drag begins.
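As a minimal sketch of the hold-then-drag behavior (widget contents and the `data` value are illustrative), LongPressDraggable is wired up almost exactly like Draggable:

```dart
import 'package:flutter/material.dart';

/// Sketch: a chip that must be long-pressed before it can be dragged.
class HoldAndDragChip extends StatelessWidget {
  const HoldAndDragChip({super.key});

  @override
  Widget build(BuildContext context) {
    return LongPressDraggable<String>(
      data: 'chip-1', // illustrative payload
      // What follows the finger once the long press has been recognized.
      feedback: Material(
        color: Colors.transparent,
        child: Chip(label: Text('Dragging')),
      ),
      // What stays behind at the original position during the drag.
      childWhenDragging: const Opacity(
        opacity: 0.3,
        child: Chip(label: Text('Chip')),
      ),
      child: const Chip(label: Text('Chip')),
    );
  }
}
```

The feedback is wrapped in a transparent Material because it is shown in an overlay, outside the usual Material ancestor.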
I'm creating some kind of image editor (kinda like Canva). For zooming in, out, and panning, I use InteractiveViewer. But the thing is, once I start using InteractiveViewer, I noticed that my other gestures for the elements inside the canvas (e.g. tap and drag on an element should drag the element around) are not working. Tapping and dragging now become panning the whole canvas.
This is still the case even if I turn off panning and scaling (panEnabled: false, scaleEnabled: false). Thankfully single tap is still working because InteractiveViewer does not use that, but I need the entire gesture input when the user makes input that's not intended for the InteractiveViewer. For example, if the user taps on an element so it becomes selected, then all the gesture input should go to the tapped element instead of to InteractiveViewer.
I once thought of a solution: simply not using InteractiveViewer while an element is selected, but that makes the user immediately lose the zoom and pan they had set before. I need a solution where the InteractiveViewer preserves its zoom and pan settings while letting all the gesture input go through to the elements inside. How can I do that?
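One direction worth sketching (not a definitive answer): InteractiveViewer accepts a TransformationController, so the zoom/pan matrix can live outside the widget. That makes the "remove the InteractiveViewer while an element is selected" idea viable, because the saved matrix can be re-applied manually and nothing is lost. A minimal sketch, assuming a hypothetical `selected` flag supplied by the surrounding editor state:

```dart
import 'package:flutter/material.dart';

/// Sketch: keep zoom/pan in a controller that outlives the InteractiveViewer,
/// so swapping the viewer out doesn't reset the view.
class CanvasView extends StatefulWidget {
  const CanvasView({super.key, required this.selected, required this.child});

  final bool selected; // hypothetical: true while an element is selected
  final Widget child;

  @override
  State<CanvasView> createState() => _CanvasViewState();
}

class _CanvasViewState extends State<CanvasView> {
  // Holds the current matrix; survives rebuilds and viewer removal.
  final _controller = TransformationController();

  @override
  Widget build(BuildContext context) {
    if (widget.selected) {
      // No InteractiveViewer: all gestures reach the elements. Re-apply
      // the saved matrix manually so the view doesn't jump.
      return Transform(
        transform: _controller.value,
        child: widget.child,
      );
    }
    return InteractiveViewer(
      transformationController: _controller,
      child: widget.child,
    );
  }

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }
}
```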
I am building a chess board component in which pieces are moved by a simple drag and drop. To ease the drag and drop, I also try to draw a feedback widget: a "cross of cells" centered around the pointer.
So, I am trying this way :
I draw a container that starts at the pointer location
Inside that container, I draw the entire cross, also drawing the dragged piece at the center of this container
So, I need a way to shift that feedback by half its size to the left and to the top; that is, the feedback should be positioned above and to the left of the pointer location.
Is that possible?
I've tried playing with the feedback and dragAnchor properties of Draggable, but without success.
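The older dragAnchor property has since been superseded by dragAnchorStrategy, which lets you return the exact point inside the feedback that should stay under the pointer. A sketch, assuming the cross feedback has a fixed size `kCrossSize` (a name introduced here for illustration):

```dart
import 'package:flutter/material.dart';

// Hypothetical size of the "cross of cells" feedback, in logical pixels.
const double kCrossSize = 150;

/// Sketch: center a fixed-size feedback on the pointer by telling Draggable
/// that the drag anchor sits in the middle of the feedback.
Widget buildPieceDraggable(Widget piece, Widget crossFeedback) {
  return Draggable<Object>(
    // The returned offset is the point inside the feedback that stays
    // under the pointer; half the size in each axis centers it.
    dragAnchorStrategy: (draggable, context, position) =>
        const Offset(kCrossSize / 2, kCrossSize / 2),
    feedback: SizedBox(
      width: kCrossSize,
      height: kCrossSize,
      child: crossFeedback,
    ),
    child: piece,
  );
}
```

Returning half the feedback's width and height shifts it up and to the left by exactly half, which is the positioning described above.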
I wonder what's the best way to detect if a finger moves into or moves out of a widget, without lifting the finger?
By that I mean, for example: I have a bunch of containers living inside a parent container; after touching down, without lifting the finger, I move across the screen and want to get notified about the containers my finger leaves and enters.
I think it could be done by adding a Listener to the parent container, listening for the onPointerMove event, and finding the container that contains the finger position. But I have no idea how to compare the finger position with the container positions.
The type of thing you are asking for can be done using RenderBox and GestureDetector.
I've prepared a demo for you; you can refer to it in this gist.
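The core of the position comparison can be sketched as follows (the GlobalKey bookkeeping is an assumption of this sketch, not the only way to reach the children's RenderBoxes): convert the global pointer position into each child's local coordinates with RenderBox.globalToLocal, then test it against the child's bounds.

```dart
import 'package:flutter/material.dart';

/// Sketch: track which child the finger is over during a single drag.
/// Assumes each child container was built with one of [childKeys].
class HoverTracker extends StatefulWidget {
  const HoverTracker({super.key, required this.childKeys, required this.child});

  final List<GlobalKey> childKeys; // hypothetical: one key per container
  final Widget child;

  @override
  State<HoverTracker> createState() => _HoverTrackerState();
}

class _HoverTrackerState extends State<HoverTracker> {
  int? _current; // index of the container currently under the finger

  void _handleMove(PointerMoveEvent event) {
    int? hit;
    for (var i = 0; i < widget.childKeys.length; i++) {
      final box =
          widget.childKeys[i].currentContext?.findRenderObject() as RenderBox?;
      if (box == null) continue;
      // Convert the global finger position into the child's local space,
      // then check whether it falls inside the child's bounds.
      final local = box.globalToLocal(event.position);
      if ((Offset.zero & box.size).contains(local)) {
        hit = i;
        break;
      }
    }
    if (hit != _current) {
      debugPrint('left container $_current, entered container $hit');
      _current = hit;
    }
  }

  @override
  Widget build(BuildContext context) {
    return Listener(
      onPointerMove: _handleMove,
      child: widget.child,
    );
  }
}
```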
I'm experimenting using the google maps for flutter plugin but I'm running into a problem managing the gesture priority.
My map, which needs to handle gestures, is rendered inside a PageView, which also listens to some gestures. The thing is, the gesture handler of the PageView appears to take priority over the map's: if I drag vertically on the map, the map moves, but if I drag horizontally on the map, the page switches instead of the map moving.
Is there a way to manage the priority between widgets' gesture handlers to fix this sort of problem?
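One common approach (a sketch, assuming the google_maps_flutter package): the GoogleMap widget takes a gestureRecognizers set, and registering an EagerGestureRecognizer there makes the map claim every gesture sequence it participates in, beating the enclosing PageView in the gesture arena. The camera position below is purely illustrative.

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/gestures.dart';
import 'package:flutter/material.dart';
import 'package:google_maps_flutter/google_maps_flutter.dart';

/// Sketch: let the map win every gesture it participates in, so a
/// horizontal drag pans the map instead of switching the page.
Widget buildMapPage() {
  return GoogleMap(
    initialCameraPosition:
        const CameraPosition(target: LatLng(48.8566, 2.3522)), // illustrative
    // An EagerGestureRecognizer claims the gesture arena immediately,
    // taking priority over the PageView's horizontal drag recognizer.
    gestureRecognizers: <Factory<OneSequenceGestureRecognizer>>{
      Factory<OneSequenceGestureRecognizer>(
        () => EagerGestureRecognizer(),
      ),
    },
  );
}
```

The trade-off is that the PageView can then no longer be swiped from on top of the map; the user has to swipe from outside the map's bounds.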
My tvOS app generates a game board using SKNodes that looks like the following:
Each shape, separated by lines, is an SKNode that is focusable (e.g. each colored wedge is composed of 5 SKNodes that gradually diminish in size closer to the center).
My problem is that the focus engine doesn't move focus to the SKNode that would feel like the logical, most natural next node. This is because the focus engine's logic is rectangular while my SKNodes are curved. As you can see below, there are inherent problems in determining the next focusable item when swiping down from the outermost yellow SKNode:
In the example above, the focus engine deduces that the currently focused area is the area within the red-shaded rectangle, based on the node's edges. Because of this, the focused rectangle overlaps areas that are not part of the currently focused node, including the entire width of the second yellow SKNode. Therefore, when swiping downward, the focus engine skips focus to the third (middle) yellow SKNode.
How would one go about solving this focus issue so that focus moves naturally both vertically and horizontally around my circular game board of SKNodes, without seeming so erratic? Is this possible? Perhaps with UIFocusGuides?
You have to handle the focus manually. Use the method below to check for the next focused view from the context:
func shouldUpdateFocus(in context: UIFocusUpdateContext) -> Bool
In this method you receive the focus heading direction (UIFocusHeading). Intercept the required direction and save your intended next focused view in a property, then manually update the focus by calling the methods below:
setNeedsFocusUpdate()
updateFocusIfNeeded()
This will trigger the property below:
var preferredFocusEnvironments: [UIFocusEnvironment] { get }
In this property, check for the saved instance and return it. This will let you handle the focus manually as per your requirements.