Making a GWT widget transparent to events

I'm trying to draw a map, with some text annotations on top.
I've got this all working, but the text annotations (which are Label widgets floating over the top of the GWTCanvas the map is drawn on) are soaking up events that I'd like to have passed through to the canvas underneath. This is particularly obvious when the user drags the map: moving the mouse pointer over one of the widgets causes a mouse-out event to be sent to the canvas, thus ending the drag.
Is there any way I can tell the Label not to respond to any events, and instead pass them on to the widget underneath?
Belated edit: it turns out that pointer-events: none does what I want, but it's pretty new and may not work in some browsers...
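For anyone after the same thing, setting that style from GWT code looks roughly like this (a minimal sketch; note the camelCased property name):

// Make an annotation label invisible to mouse events so the browser
// delivers them to whatever sits underneath (here, the map canvas).
Label annotation = new Label("annotation text");
annotation.getElement().getStyle().setProperty("pointerEvents", "none");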

No. The only thing you can do is catch the events on the Label and forward them to the widget underneath from your event handler.
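A minimal sketch of that forwarding, assuming label and canvas are the Label and GWTCanvas from the question; DomEvent.fireNativeEvent re-dispatches a native event on another widget's handlers:

// Forward mouse moves from the label to the canvas so a drag in
// progress keeps receiving events while the pointer crosses the label.
label.addMouseMoveHandler(new MouseMoveHandler() {
    public void onMouseMove(MouseMoveEvent event) {
        DomEvent.fireNativeEvent(event.getNativeEvent(), canvas);
    }
});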

Related

Flutter - how to have a widget respond to touch and flick?

Currently in the design phase of an app. One of the goals we have is to be able to touch a certain widget, and on tap and hold, have the widget then follow the user's finger where they drag.
Then, if the user releases gently, the widget snaps back to the original location.
However, if the user flicks the widget, we want the widget to fly across the screen, reacting correctly to the user's flick.
Is there anything built-in that can handle this? Also, if this needs to be explained more to make sense, happy to elaborate.
Thanks!
I think for such cases you can use the GestureDetector widget. It provides several useful callbacks. See here:
https://api.flutter.dev/flutter/widgets/GestureDetector-class.html
You can use onPanStart and onPanUpdate of GestureDetector to get the offset and move the widget across the screen; onPanEnd then reports the release velocity, which tells you whether the widget was flicked.
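A minimal sketch under those assumptions (the 700 px/s flick threshold and the fly-away distance are arbitrary illustrative values):

import 'package:flutter/material.dart';

// Drag follows the finger; a gentle release snaps back to the original
// position, while a fast release (a flick) sends the widget onward.
class FlickableBox extends StatefulWidget {
  const FlickableBox({super.key});
  @override
  State<FlickableBox> createState() => _FlickableBoxState();
}

class _FlickableBoxState extends State<FlickableBox> {
  static const Offset _home = Offset(100, 100); // original position
  Offset _pos = _home;

  @override
  Widget build(BuildContext context) {
    return Stack(children: [
      Positioned(
        left: _pos.dx,
        top: _pos.dy,
        child: GestureDetector(
          onPanUpdate: (d) => setState(() => _pos += d.delta),
          onPanEnd: (d) {
            final speed = d.velocity.pixelsPerSecond.distance;
            setState(() {
              _pos = speed > 700
                  ? _pos + d.velocity.pixelsPerSecond * 0.3 // flick: fly on
                  : _home; // gentle release: snap back
            });
          },
          child: Container(width: 80, height: 80, color: Colors.blue),
        ),
      ),
    ]);
  }
}

For smooth snap-back and deceleration you would drive the position from an AnimationController instead of jumping it in setState, but the sketch shows the gesture plumbing.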

flutter equivalent to DocumentOrShadowRoot.elementFromPoint()

I am wondering what is the equivalent of the web api DocumentOrShadowRoot.elementFromPoint() in flutter.
Specifically, I am wondering how I could figure out what is the leaf element/widget instance in a widget hierarchy, given an Offset.
For example, consider the following structure [diagram of nested Container, Stack, and Positioned widgets omitted]:
For the first Offset, marked with a dark circle, I would expect to get some data telling me that the offset is over the Container.
For the second Offset, marked with a dark circle, I would expect the Stack.
For the last one, it would be the Positioned element.
A bit of context
I'm exploring the implementation of a visual editor similar to FIGMA in Flutter. I have experience in implementing such a rendering system with web technologies.
I want to render a selection indicator or outline when a tap/click happens on each element. These elements are nested. Adding multiple nested event handlers triggers all of them. For example, mouse enter and mouse leave when moving the mouse over the Stack or Positioned element would trigger all the parent event handlers as well.
Any help or guidance would be appreciated.
Simple answer to your exact question: No direct equivalent. Possible to implement but not advisable.
You could theoretically implement your own version of elementFromPoint() by looking at how GestureBinding works in Flutter. That would be a deep dive for sure, and you might learn from it, but there is a simpler solution. Even if you implement your own method, then you would still need to resolve conflicts when more than 1 element is found - and this is something Flutter solves out of the box with the gesture arena.
I see that you expect the top-most or deepest child to be reported, something you can obtain by using the GestureDetector widget. What you're looking for is making your gesture detectors opaque. A GestureDetector has a property called behavior of type HitTestBehavior. The default is deferToChild. Here are the possible values:
/// How to behave during hit tests.
enum HitTestBehavior {
  /// Targets that defer to their children receive events within their bounds
  /// only if one of their children is hit by the hit test.
  deferToChild,

  /// Opaque targets can be hit by hit tests, causing them to both receive
  /// events within their bounds and prevent targets visually behind them from
  /// also receiving events.
  opaque,

  /// Translucent targets both receive events within their bounds and permit
  /// targets visually behind them to also receive events.
  translucent,
}
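Wrapping each design element in an opaque detector then looks something like this (a sketch; onSelect, element, and elementWidget are placeholder names):

// An opaque detector claims the hit for the deepest element under the
// pointer and stops widgets visually behind it from receiving the tap.
GestureDetector(
  behavior: HitTestBehavior.opaque,
  onTap: () => onSelect(element),
  child: elementWidget,
)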
What follows is only slightly related, so consider it a deep dive into your use case.
Since you're going down this path: I also built a WYSIWYG design system, with selection indicators, handles for rotating, resizing, etc., and I have one piece of advice: completely separate your design rendering from your gesture detectors and selection indicators.
I initially put the gesture detectors "around" the design elements - in your example, the gesture detectors would sit in between yellow / blue / green / red. This is a bad idea because it complicates a few things. In some cases I needed touch areas larger than the design elements themselves, so I had to add padding and reposition the GestureDetector parents. In other cases the design elements would become fixed or locked and lose their GestureDetector parent, and Flutter would completely rebuild the contents of the layer because the widget-tree comparison got confused. It gets messy fast. So, stack these layers:
Design on bottom, no interactivity.
Selection indicators, resize / rotate handles. Still no interactivity
Gesture detectors for all design elements. If you're lucky and know the exact size, position, and rotation of each design element, you can simply use Positioned. If you have groups of design elements, then your gesture detectors also get grouped and transformed together. If you also have self-sizing design elements (like images), it gets a bit more complicated; I got around my issues by adding the design element as an invisible child. The way I would do this now is by loading metadata about the images so their sizes are known at build time (as opposed to waiting for images to load and produce layout changes).
Gesture detectors for the selection indicators and resize / rotate handles. These are top-most and also opaque, so they catch everything that hits them.
This setup then allows you to experiment more in the gesture department, lets you use colored boxes for debugging, and in general makes your life easier.
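A skeletal sketch of that stacking order (DesignLayer, SelectionLayer, HandlesLayer, elements, selection, and select are hypothetical names; IgnorePointer keeps the lower layers out of hit testing):

Stack(children: [
  // Layers 1 and 2: rendered but never hit-testable.
  IgnorePointer(child: DesignLayer(elements)),
  IgnorePointer(child: SelectionLayer(selection)),
  // Layer 3: one opaque gesture detector per design element.
  for (final e in elements)
    Positioned.fromRect(
      rect: e.rect,
      child: GestureDetector(
        behavior: HitTestBehavior.opaque,
        onTap: () => select(e),
      ),
    ),
  // Layer 4: detectors for the handles sit top-most, also opaque.
  HandlesLayer(selection),
])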
TLDR: Use opaque gesture detectors.

GTK prevent custom widget from grabbing focus

I've implemented a musical keyboard as a subclass of Fixed and where each individual key is a subclass of DrawingArea, and so far, it works great: custom drawing code in expose, press+release functionality working... kind of. See, here's the problem: I want the user to be able to drag the mouse across the keyboard with the mouse down to play it. I currently capture the button press and release signals, as well as enter and leave notify. Unfortunately, this doesn't quite work because the widget seems to grab focus of the mouse as soon as the mouse is pressed over it. This makes sense for normal buttons, but not for a musical keyboard. Is there any good way to remedy this other than rewriting the entire keyboard to be one massive DrawingArea?
Also, it shouldn't matter, but in case it does I'm using GTK#.
You might consider using GooCanvas: you can represent each of the keys as a CanvasPolyline and fill it with the colors you need. Each canvas item receives events such as enter-notify, leave-notify, and button-press, so you can act on them individually.
This method seems to make more sense (to me) than separate DrawingAreas. As each drawn element is still accessible, you can even change colors, sizes, and other properties dynamically. Also, Polyline lets you make more complex shapes.
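In the C API the idea looks roughly like this (a sketch; the GTK# bindings mirror the same model, and the coordinates and colors are illustrative):

#include <goocanvas.h>

/* Called when the pointer enters a key; sound the note here if a
 * button press is already in progress. */
static gboolean
on_key_enter (GooCanvasItem *item, GooCanvasItem *target,
              GdkEventCrossing *event, gpointer data)
{
  return FALSE; /* let other handlers run too */
}

static void
add_key (GooCanvasItem *root)
{
  /* One white key as a closed polyline; black keys just use other points. */
  GooCanvasItem *key = goo_canvas_polyline_new (root, TRUE, 4,
      0.0, 0.0, 20.0, 0.0, 20.0, 100.0, 0.0, 100.0,
      "fill-color", "white", "stroke-color", "black", NULL);
  g_signal_connect (key, "enter-notify-event",
                    G_CALLBACK (on_key_enter), NULL);
}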

controlling iPhone zoom programmatically with javascript

I would like to follow my finger movement on an iPhone screen. However, this results in rubber-banding and scrolling, so I have to turn off the default behaviours.
As explained on this website
https://developer.apple.com/library/archive/documentation/AppleApplications/Reference/SafariWebContent/HandlingEvents/HandlingEvents.html
I've added event listeners, like so
document.addEventListener('touchmove', touchMove, false);
document.addEventListener('gesturechange', gestureChange, false);
and disabled the default behaviour, like so
function touchMove(event) {
    event.preventDefault();
    // other code here
}
function gestureChange(event) {
    event.preventDefault();
    // other code here
}
Now I can do what I intended, but I can no longer scale the page. I am still able to retrieve the touchstart coordinates and a zoom factor from gesturechange, so logically I would like to use those to programmatically change the page zoom. How do I do that with JavaScript?
So far I have had some success applying the event listener to a div (instead of the document) and turning off the touchmove call with a boolean once gesturestart is detected. Actually, this works pretty well: I can zoom, pan, and double-tap on the whole document, and zoom and double-tap on the div. But a pan on the div executes a function that passes the coordinates (and does not pan).
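One way to approximate the zoom yourself is to scale a wrapper element with a CSS transform driven by the gesture's scale factor (a sketch; the #content wrapper is an assumed element, not from the question):

// Emulate pinch zoom: scale a wrapper element by the gesture's scale
// factor, and remember the accumulated zoom between gestures.
var baseScale = 1;
var currentScale = 1;
var content = document.getElementById('content');

function gestureChange(event) {
    event.preventDefault();
    // event.scale is relative to the start of the current gesture.
    currentScale = baseScale * event.scale;
    content.style.webkitTransform = 'scale(' + currentScale + ')';
}

function gestureEnd(event) {
    baseScale = currentScale; // keep the zoom for the next gesture
}

document.addEventListener('gesturechange', gestureChange, false);
document.addEventListener('gestureend', gestureEnd, false);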

How do you return draggable content to their original positions in iPhone dev?

I want to create a button in my iPhone app that, when touched, will return other draggable elements to their original positions. I have looked at the Apple "MoveMe" example, but that returns the button to the center of the screen. I want to be able to position draggable objects around the screen, drag the objects within the app, and then return them to their original starting positions by pressing a designated button.
Any help appreciated!
Cache the initial positions of your draggable objects, and use an event handler on the button to reset their positions. I'm a little confused. The MoveMe example code is exactly what you need to answer your question -- what more do you want? You won't find a perfect code example for any arbitrary problem you can dream up.
Play around with saving the original positions and using MoveMe to reset the objects, and I bet you'll have what you're looking for in no time at all.
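A sketch of the caching approach in Swift (BoardViewController and the method names are illustrative; the MoveMe sample itself is Objective-C, but the idea is identical):

import UIKit

// Remember each draggable view's starting center and restore all of
// them when the reset button is tapped.
class BoardViewController: UIViewController {
    private var homeCenters: [UIView: CGPoint] = [:]

    func registerDraggable(_ view: UIView) {
        homeCenters[view] = view.center // cache the original position
    }

    @objc func resetTapped(_ sender: UIButton) {
        for (view, home) in homeCenters {
            UIView.animate(withDuration: 0.3) {
                view.center = home // snap back to where it started
            }
        }
    }
}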