I am wondering what the equivalent of the web API DocumentOrShadowRoot.elementFromPoint() is in Flutter.
Specifically, I am wondering how I could figure out which leaf element/widget instance in a widget hierarchy lies under a given Offset.
For example, consider the following structure:
For the first Offset marked with a dark circle, I would expect to get some sort of data that helps me figure out the offset is over the Container.
For the second Offset marked with a dark circle, I would expect the Stack.
For the last one, it would be the Positioned element.
A bit of context
I'm exploring the implementation of a visual editor similar to Figma in Flutter. I have experience implementing such a rendering system with web technologies.
I want to render a selection indicator or outline when a tap/click happens on an element. These elements are nested, and adding multiple nested event handlers triggers all of them. For example, mouse enter and mouse leave while moving the mouse over the Stack or Positioned element would also trigger all the parent event handlers.
Any help or guidance would be appreciated.
Simple answer to your exact question: No direct equivalent. Possible to implement but not advisable.
You could theoretically implement your own version of elementFromPoint() by looking at how GestureBinding works in Flutter. That would be a deep dive for sure, and you might learn from it, but there is a simpler solution. Even if you implemented your own method, you would still need to resolve conflicts when more than one element is found, and this is something Flutter solves out of the box with the gesture arena.
I see that you expect the top-most or deepest child to be reported, something you can obtain with the GestureDetector widget. What you're looking for is making your gesture detectors opaque. A GestureDetector has a property called behavior of type HitTestBehavior, which defaults to deferToChild when a child is provided. Here are the possible values (a short sketch of the opaque setup follows the enum):
/// How to behave during hit tests.
enum HitTestBehavior {
  /// Targets that defer to their children receive events within their bounds
  /// only if one of their children is hit by the hit test.
  deferToChild,

  /// Opaque targets can be hit by hit tests, causing them to both receive
  /// events within their bounds and prevent targets visually behind them from
  /// also receiving events.
  opaque,

  /// Translucent targets both receive events within their bounds and permit
  /// targets visually behind them to also receive events.
  translucent,
}
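A minimal sketch, assuming each design element gets wrapped like this (SelectableElement is just an illustrative name, not a Flutter widget):
import 'package:flutter/material.dart';

class SelectableElement extends StatelessWidget {
  const SelectableElement({super.key, required this.child, required this.onSelected});

  final Widget child;
  final VoidCallback onSelected;

  @override
  Widget build(BuildContext context) {
    return GestureDetector(
      // Opaque: hits within this element's bounds are claimed here and are
      // not delivered to targets visually behind it.
      behavior: HitTestBehavior.opaque,
      onTap: onSelected,
      child: child,
    );
  }
}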
What follows is only slightly related, so consider it a deep dive into your use case.
Since you're going down this path: I also built a WYSIWYG design system, with selection indicators, handles for rotating, resizing, etc., and have one piece of advice: completely separate your design rendering from your gesture detectors and selection indicators.
I initially put the gesture detectors "around" the design elements - in your example, the gesture detectors would sit between yellow / blue / green / red. This is a bad idea because it complicates a few things. In some cases I needed touch areas larger than the design elements themselves, which meant adding padding and repositioning the GestureDetector parents. In other cases a design element would become fixed or locked and lose its GestureDetector parent, and Flutter would completely rebuild the contents of the layer because the element tree comparison got confused. It gets messy fast. So, stack these layers:
Design on bottom, no interactivity.
Selection indicators, resize / rotate handles. Still no interactivity.
Gesture detectors for all design elements. If you're lucky, you know the exact size, position and rotation of each design element and can simply use Positioned. If you have groups of design elements, then your gesture detectors also get grouped and transformed together. If you also have self-sizing design elements (like images), it gets a bit more complicated, but I got around my issues by adding the design element as an invisible child. The way I would do this now is by loading metadata about the images and knowing their size at build time (as opposed to waiting for images to load and produce layout changes).
Selection indicators + resize / rotate handles gesture detectors. They are top-most and also opaque, so they catch everything that hits them.
This setup then allows you to experiment more in the gesture department, lets you use colored boxes to debug, and in general will make your life easier. A rough sketch of the layering follows.
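A minimal sketch, assuming a single hard-coded design element; the ColoredBox stands in for the design layer and the Positioned detector for the per-element gesture layer:
import 'package:flutter/material.dart';

class EditorCanvas extends StatelessWidget {
  const EditorCanvas({super.key});

  @override
  Widget build(BuildContext context) {
    return Stack(
      children: [
        // 1. Design rendering only - never receives pointer events.
        const IgnorePointer(
          child: ColoredBox(color: Colors.yellow, child: SizedBox.expand()),
        ),
        // 2. Selection indicators / handles would be painted here, still passive.
        const IgnorePointer(child: SizedBox.expand()),
        // 3. One opaque gesture detector per design element, positioned to match it.
        Positioned(
          left: 40,
          top: 40,
          width: 120,
          height: 80,
          child: GestureDetector(
            behavior: HitTestBehavior.opaque,
            onTap: () => debugPrint('element selected'),
          ),
        ),
        // 4. Gesture detectors for the handles go here, top-most and also opaque.
      ],
    );
  }
}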
TLDR: Use opaque gesture detectors.
Related
I have a grid view which shows the health status of many different services, coloring them and/or auto-opening a webpage when a service goes down. The problem is that the elements which are off the screen are not being checked - more efficient, but not what is desired in this case.
I guess it's behaving similarly to the RecyclerView in Android?
I want the widgets that check service health to keep being built even when they are not visible on the screen.
Currently the services don't start being checked until the moment I scroll them onto the screen.
Assuming you are currently using the GridView.builder constructor, I recommend using the "normal" GridView constructor (with a children property) instead. Since GridView.builder only builds the elements currently visible, for efficiency reasons, the elements that are not rendered on screen won't run your back-end logic.
For more information, see the official docs:
[GridView.builder] constructor is appropriate for grid views with a large (or infinite) number of children because the builder is called only for those children that are actually visible.
Here you'll find alternatives:
The most commonly used grid layouts are GridView.count, which creates a layout with a fixed number of tiles in the cross axis, and GridView.extent, which creates a layout with tiles that have a maximum cross-axis extent.
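A minimal sketch of that switch, assuming a plain list of service URLs; the Text tiles are placeholders for your own stateful health-check widgets:
import 'package:flutter/material.dart';

Widget buildServiceGrid(List<String> serviceUrls) {
  // GridView.count constructs the whole children list up front,
  // unlike GridView.builder, which builds tiles on demand.
  return GridView.count(
    crossAxisCount: 4,
    children: [
      for (final url in serviceUrls) Text(url), // your health-check tile here
    ],
  );
}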
My tvOS app generates a game board using SKNodes that looks like the following:
Each shape, separated by lines, is an SKNode that is focusable (e.g. each colored wedge is composed of 5 SKNodes that gradually diminish in size closer to the center).
My problem is that the focus engine doesn't move focus to the SKNode that would feel like the logical, most natural next node. This is because the focus engine's logic is rectangular while my SKNodes are curved. As you can see below, there are inherent problems when trying to figure out the next focusable item when swiping down from the outermost yellow SKNode:
In the example above, the focus engine deduces that the currently focused area is the area within the red-shaded rectangle, based on the node's edges. Because of this, the focused rectangle overlaps areas that are not part of the currently focused node, including the entire width of the second yellow SKNode. Therefore, when swiping downward, focus skips to the third (middle) yellow SKNode.
How would one go about solving this focus issue so that focus moves naturally, both vertically and horizontally, around my circular game board of SKNodes without seeming so erratic? Is this possible? Perhaps with UIFocusGuides?
You have to handle the focus manually. Use the method below to inspect the proposed focus update and its context:
func shouldUpdateFocus(in context: UIFocusUpdateContext) -> Bool
In this method you get the focus heading direction (UIFocusHeading). Intercept the direction you care about and save your desired next focused view in a property. Then request a focus update manually by calling the methods below:
setNeedsFocusUpdate()
updateFocusIfNeeded()
This will cause the focus engine to re-query the following property:
var preferredFocusEnvironments: [UIFocusEnvironment] { get }
In it, check for the saved instance and return it. This lets you direct focus manually according to your requirements. A rough sketch of the whole flow follows.
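A rough sketch under these assumptions: the board lives in a view controller, and focusTarget(below:) is a hypothetical helper that walks your own board model rather than a real API.
import UIKit

class BoardViewController: UIViewController {

    private var pendingFocusTarget: UIFocusEnvironment?

    override var preferredFocusEnvironments: [UIFocusEnvironment] {
        // Return the target chosen in shouldUpdateFocus(in:), if any.
        if let target = pendingFocusTarget {
            return [target]
        }
        return super.preferredFocusEnvironments
    }

    override func shouldUpdateFocus(in context: UIFocusUpdateContext) -> Bool {
        // Intercept downward moves and pick the destination ourselves.
        if context.focusHeading.contains(.down),
           let current = context.previouslyFocusedItem {
            pendingFocusTarget = focusTarget(below: current)
            // Ask the focus engine to re-query preferredFocusEnvironments.
            setNeedsFocusUpdate()
            updateFocusIfNeeded()
            return false
        }
        return true
    }

    private func focusTarget(below item: UIFocusItem) -> UIFocusEnvironment? {
        // Placeholder: resolve the next wedge node from your game board here.
        return nil
    }
}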
I am working on a game using GTK3 as a rendering technique (terrible idea, but it's a school project).
My game objects are made of Image widgets and are placed in a Fixed container. It's working pretty well; however, when I move widgets beyond the right or bottom border, the window automatically grows along with them.
I want the window to stay at the same size, even if a widget leaves its area and becomes invisible. It already works when I move a widget past the upper or left border.
I tried using gtk_widget_set_vexpand and gtk_widget_set_hexpand. My window is set as not resizable (gtk_window_set_resizable).
Is there any way I can achieve this?
This isn't the right way to use GTK+. GTK+ is intended for laying out widgets in a GUI program.
There are better options for animating 2D elements. One that works with GTK+ natively is the Clutter library. You can also integrate SDL or OpenGL or something like that if you so choose.
That being said, you can also use GtkLayout instead of GtkFixed, or put the GtkFixed in a GtkScrolledWindow, hide the scrollbars and set the scroll policy to prevent scrolling. It's still technically misuse, and in fact GtkFixed (and possibly GtkLayout too, though the docs don't say) is really not supposed to be used anymore unless absolutely necessary, because it doesn't give you automatic support for tricky UI layout problems - but it doesn't pull in extra dependencies. A sketch of the scrolled-window workaround follows.
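This is a minimal GTK3 sketch of that workaround; the icon image is just a stand-in for one of your game objects:
#include <gtk/gtk.h>

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);

    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_resizable(GTK_WINDOW(window), FALSE);
    gtk_window_set_default_size(GTK_WINDOW(window), 640, 480);
    g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

    /* The GtkFixed can grow as much as it likes inside the scrolled window,
     * while the window itself keeps its size and never shows scrollbars. */
    GtkWidget *scrolled = gtk_scrolled_window_new(NULL, NULL);
    /* GTK_POLICY_EXTERNAL (3.16+): no scrollbars, and the scrolled window
     * does not grow to fit its child. */
    gtk_scrolled_window_set_policy(GTK_SCROLLED_WINDOW(scrolled),
                                   GTK_POLICY_EXTERNAL, GTK_POLICY_EXTERNAL);

    GtkWidget *fixed = gtk_fixed_new();
    GtkWidget *sprite = gtk_image_new_from_icon_name("face-smile",
                                                     GTK_ICON_SIZE_DIALOG);
    /* Placing the child partly past the right edge no longer resizes the window. */
    gtk_fixed_put(GTK_FIXED(fixed), sprite, 600, 100);

    gtk_container_add(GTK_CONTAINER(scrolled), fixed);
    gtk_container_add(GTK_CONTAINER(window), scrolled);

    gtk_widget_show_all(window);
    gtk_main();
    return 0;
}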
I've implemented a musical keyboard as a subclass of Fixed, where each individual key is a subclass of DrawingArea, and so far it works great: custom drawing code in expose, press and release functionality working... kind of. See, here's the problem: I want the user to be able to drag the mouse across the keyboard with the button down to play it. I currently capture the button press and release signals, as well as enter and leave notify. Unfortunately, this doesn't quite work, because the widget seems to grab the mouse as soon as the button is pressed over it. This makes sense for normal buttons, but not for a musical keyboard. Is there any good way to remedy this, other than rewriting the entire keyboard as one massive DrawingArea?
Also, it shouldn't matter, but in case it does I'm using GTK#.
You might consider using GooCanvas: you can represent each of the keys as a CanvasPolyline and fill them with the colors you need. Each canvas item emits its own events (enter, leave, button-press, etc.), so you can act on them per key.
This approach seems to make more sense (to me) than separate DrawingAreas. Since each drawn element is still accessible, you can change colors, size and other properties dynamically. Also, Polyline lets you make more complex shapes. A small sketch follows.
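A small sketch in the plain C API (the GTK# binding mirrors these calls); a rectangle item is used for brevity, and goo_canvas_polyline_new would let you trace more complex key outlines:
#include <goocanvas.h>

static gboolean
on_key_enter (GooCanvasItem *item, GooCanvasItem *target,
              GdkEventCrossing *event, gpointer data)
{
    /* Each key gets its own enter handler; hook your note logic here. */
    g_print ("entered key %d\n", GPOINTER_TO_INT (data));
    return FALSE;
}

int
main (int argc, char *argv[])
{
    gtk_init (&argc, &argv);

    GtkWidget *window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
    GtkWidget *canvas = goo_canvas_new ();
    gtk_container_add (GTK_CONTAINER (window), canvas);
    g_signal_connect (window, "destroy", G_CALLBACK (gtk_main_quit), NULL);

    GooCanvasItem *root = goo_canvas_get_root_item (GOO_CANVAS (canvas));

    /* One canvas item per white key, each with its own event handler. */
    for (int i = 0; i < 7; i++) {
        GooCanvasItem *key = goo_canvas_rect_new (root, i * 40.0, 0.0, 38.0, 200.0,
                                                  "fill-color", "white",
                                                  "stroke-color", "black",
                                                  NULL);
        g_signal_connect (key, "enter-notify-event",
                          G_CALLBACK (on_key_enter), GINT_TO_POINTER (i));
    }

    gtk_widget_show_all (window);
    gtk_main ();
    return 0;
}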
I've been Googling like crazy for a while now, and I simply can't find any answer to the question: is it possible to implement Android's list scrolling without using an actual list UI?
I'm trying to make a grid of rectangles, such as the kind you would find in a typical game app, respond to finger movement the same way an Android list does (bounce at the bounds, the 'flick' effect, etc.), but all of the approaches I've found are over-complicated solutions that involve extending the list, defining XML layouts, etc.
Would it not be possible to simply give an object variables for 'document' height, 'viewable' height and y-offset? I'm happy to give the delta (ms since last update) to the object on every update. It would also be good if the actual interactive region were definable.
Additionally; are there strong advantages to using the ListView instead that I'm missing? I assume responsiveness comes into play, but I'm quite happily managing that manually at the moment.
Just use ScrollView directly, assuming you only need vertical scrolling.
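A minimal sketch, assuming the whole board can be drawn by one custom View that simply reports its full 'document' height; GridBoardView and the hard-coded sizes are illustrative only:
import android.app.Activity
import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.os.Bundle
import android.view.View
import android.widget.ScrollView

class GridBoardView(context: Context) : View(context) {
    private val paint = Paint().apply { color = Color.DKGRAY }

    override fun onMeasure(widthMeasureSpec: Int, heightMeasureSpec: Int) {
        // Report the full board height so the ScrollView has something to scroll.
        setMeasuredDimension(View.MeasureSpec.getSize(widthMeasureSpec), 3000)
    }

    override fun onDraw(canvas: Canvas) {
        // Draw a simple grid of rectangles.
        val cell = 200f
        for (row in 0 until 15) {
            for (col in 0 until 5) {
                canvas.drawRect(
                    col * cell + 8, row * cell + 8,
                    (col + 1) * cell - 8, (row + 1) * cell - 8, paint
                )
            }
        }
    }
}

class BoardActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Wrapping the board in a ScrollView gives the stock fling and
        // overscroll behaviour without any list adapter.
        val scroll = ScrollView(this)
        scroll.addView(GridBoardView(this))
        setContentView(scroll)
    }
}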