GTK: prevent custom widget from grabbing focus

I've implemented a musical keyboard as a subclass of Fixed, where each individual key is a subclass of DrawingArea, and so far it works great: custom drawing code in expose, press and release functionality working... kind of. Here's the problem: I want the user to be able to drag the mouse across the keyboard with the button held down to play it. I currently capture the button press and release signals, as well as enter and leave notify. Unfortunately, this doesn't quite work, because the widget seems to grab the pointer as soon as a button is pressed over it. This makes sense for normal buttons, but not for a musical keyboard. Is there any good way to remedy this other than rewriting the entire keyboard as one massive DrawingArea?
Also, it shouldn't matter, but in case it does I'm using GTK#.

You might consider using GooCanvas: you can represent each of the keys as a CanvasPolyline and fill them with the colors you need. Each canvas item emits its own events (enter-notify, leave-notify, button-press, etc.), so you can act on each key individually.
This method seems (to me) to make more sense than separate DrawingAreas. As each drawn element remains accessible, you can even change colors, sizes, and other properties dynamically. Also, Polyline lets you make more complex shapes.

Related

flutter equivalent to DocumentOrShadowRoot.elementFromPoint()

I am wondering what the equivalent of the web API DocumentOrShadowRoot.elementFromPoint() is in Flutter.
Specifically, I am wondering how I could figure out which leaf element/widget instance in a widget hierarchy sits under a given Offset.
For example, consider the following structure:
For the first Offset marked with a dark circle, I would expect to get some sort of data that tells me the offset is over the Container.
For the second Offset marked with a dark circle, I would expect the Stack.
For the last one, it would be the Positioned element.
A bit of context
I'm exploring the implementation of a visual editor similar to FIGMA in Flutter. I have experience in implementing such a rendering system with web technologies.
I want to render a selection indicator or outline when a tap/click happens on each element. These elements are nested. Adding multiple nested event handlers triggers all of them. For example, mouse enter and mouse leave when moving the mouse over the Stack or Positioned element would trigger all the parent event handlers as well.
Any help or guidance would be appreciated.
Simple answer to your exact question: No direct equivalent. Possible to implement but not advisable.
You could theoretically implement your own version of elementFromPoint() by looking at how GestureBinding works in Flutter. That would be a deep dive for sure, and you might learn from it, but there is a simpler solution. Even if you implemented your own method, you would still need to resolve conflicts when more than one element is found, and that is something Flutter solves out of the box with the gesture arena.
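If you do want to experiment anyway, Flutter's RenderBox exposes hit testing directly. Below is a rough, hedged sketch, not a definitive implementation; canvasKey and renderObjectsAt are hypothetical names, and it assumes you can put a GlobalKey on the root of the subtree you care about:

import 'package:flutter/rendering.dart';
import 'package:flutter/widgets.dart';

// Hypothetical key, placed on the root widget of your design subtree.
final GlobalKey canvasKey = GlobalKey();

// Returns the render objects under a global position, deepest first -
// roughly the information elementFromPoint() would give you.
List<RenderObject> renderObjectsAt(Offset globalPosition) {
  final box = canvasKey.currentContext?.findRenderObject() as RenderBox?;
  if (box == null) return const [];
  final result = BoxHitTestResult();
  // Hit testing works in the box's local coordinate space.
  box.hitTest(result, position: box.globalToLocal(globalPosition));
  // result.path is ordered from the deepest (top-most) target outward.
  return result.path.map((e) => e.target).whereType<RenderObject>().toList();
}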
I see that you expect the top-most or deepest child to be reported, something you can obtain by using the GestureDetector widget. What you're looking for is making your gesture detectors opaque. A GestureDetector has a property called behavior of type HitTestBehavior. The default is deferToChild. Here are the possible values:
/// How to behave during hit tests.
enum HitTestBehavior {
  /// Targets that defer to their children receive events within their bounds
  /// only if one of their children is hit by the hit test.
  deferToChild,

  /// Opaque targets can be hit by hit tests, causing them to both receive
  /// events within their bounds and prevent targets visually behind them from
  /// also receiving events.
  opaque,

  /// Translucent targets both receive events within their bounds and permit
  /// targets visually behind them to also receive events.
  translucent,
}
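As a minimal sketch of opaque in a Stack (the colors and sizes are just for illustration, not from your layout): tapping inside the positioned box only ever prints 'top element tapped', because the opaque detector blocks the layer visually behind it.

import 'package:flutter/material.dart';

void main() => runApp(const MaterialApp(home: Scaffold(body: Demo())));

class Demo extends StatelessWidget {
  const Demo({super.key});

  @override
  Widget build(BuildContext context) {
    return Stack(
      children: [
        // Bottom layer: receives taps anywhere the top layer doesn't cover.
        GestureDetector(
          behavior: HitTestBehavior.opaque,
          onTap: () => debugPrint('background tapped'),
          child: Container(color: Colors.blueGrey),
        ),
        // Top layer: opaque, so it claims hits anywhere within its bounds
        // (even transparent parts) and stops them reaching the background.
        Positioned(
          left: 40,
          top: 40,
          width: 120,
          height: 120,
          child: GestureDetector(
            behavior: HitTestBehavior.opaque,
            onTap: () => debugPrint('top element tapped'),
          ),
        ),
      ],
    );
  }
}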
What follows is only slightly related, so consider it a deep dive into your use case.
Since you're going down this path: I also built a WYSIWYG design system, with selection indicators, handles for rotating, resizing, etc., and have one piece of advice: completely separate your design rendering from your gesture detectors and selection indicators.
I initially put the gesture detectors "around" the design elements - in your example, the gesture detectors would sit in between yellow / blue / green / red. The reason this is a bad idea is that it complicates a few things. In some cases I needed touch areas larger than the design elements themselves, which meant adding padding and repositioning the GestureDetector parents. In other cases design elements would become fixed or locked and lose their GestureDetector parent, and Flutter would completely rebuild the contents of the layer because the widget-tree diffing got confused. It gets messy fast. So, stack these layers:
Design on the bottom, no interactivity.
Selection indicators, resize / rotate handles. Still no interactivity.
Gesture detectors for all design elements. If you're lucky, you know the exact size, position, and rotation of each design element and can simply use Positioned. If you have groups of design elements, then your gesture detectors also get grouped and transformed together. If you also have self-sizing design elements (like images), it gets a bit more complicated, but I got around my issues by adding the design element as an invisible child. The way I would do this now is by loading meta-data about the images so their sizes are known at build time (as opposed to waiting for images to load and produce layout changes).
Gesture detectors for the selection indicators and resize / rotate handles. They are top-most and also opaque, so they catch everything that hits them.
This setup then allows you to experiment more in the gesture department, lets you use colored boxes to debug, and in general will make your life easier. A minimal sketch of the layering follows.
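Here is a minimal sketch of that layering, under the assumption that every element's rect is known up front; EditorElement, EditorCanvas, and onSelect are hypothetical names, not from the question:

import 'package:flutter/material.dart';

// Hypothetical model: a design element with a known rect and fill color.
class EditorElement {
  const EditorElement({required this.rect, required this.color});
  final Rect rect;
  final Color color;
}

class EditorCanvas extends StatelessWidget {
  const EditorCanvas({
    super.key,
    required this.elements,
    required this.selected,
    required this.onSelect,
  });

  final List<EditorElement> elements; // bottom-to-top z-order
  final Set<int> selected;
  final void Function(int index) onSelect;

  @override
  Widget build(BuildContext context) {
    return Stack(
      children: [
        // 1. Design layer: pure rendering, no interactivity.
        for (final e in elements)
          Positioned.fromRect(
            rect: e.rect,
            child: IgnorePointer(child: ColoredBox(color: e.color)),
          ),
        // 2. Selection indicators: still no interactivity.
        for (final i in selected)
          Positioned.fromRect(
            rect: elements[i].rect,
            child: IgnorePointer(
              child: DecoratedBox(
                decoration: BoxDecoration(
                  border: Border.all(color: Colors.blue, width: 2),
                ),
              ),
            ),
          ),
        // 3. Gesture layer: one opaque detector per element, same rects.
        //    Because Stack hit-tests children front-to-back, iterating in
        //    bottom-to-top order means the top-most element wins the tap.
        for (var i = 0; i < elements.length; i++)
          Positioned.fromRect(
            rect: elements[i].rect,
            child: GestureDetector(
              behavior: HitTestBehavior.opaque,
              onTap: () => onSelect(i),
            ),
          ),
      ],
    );
  }
}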
TLDR: Use opaque gesture detectors.

How can I make a custom tooltip programmatically in Cocoa (OS X)?

I need to make a custom tooltip view for all views of my project. This tooltip view has a specific shape (a pentagon), font, font color, and background color. It should also have a typical delay, like the system tooltip, when the mouse enters and exits a view. What is the best way to do this?
Thanks for the answers.
I need to make a custom tooltip view for all views of my project.
For all views? Most applications have a lot of views that the user isn't even aware of - views used to contain groups of controls and such. So it'd be strange to offer tool tips for every view. Tool tips are usually attached to interface components that actually do something, and their purpose is to tell the user what that something is - which is why you normally see them on controls, even though the tool-tip API itself (toolTip, addToolTipRect:owner:userData:) lives on NSView.
What is the best way to do this?
First, decide whether you really mean that you want tool tips for every view, or whether you actually just want the same kind of tool tips Cocoa already offers, but drawn differently. If the latter, you could subclass each type of control you use and override draw(withExpansionFrame:in:) to draw the kind of tool tips you want.
If you really want tool tips for every view, you might do better to implement your own system. One approach: have some object in your app monitor mouse-moved events and start a timer after each one, with each new event invalidating the old timer and starting a new one. If the timer expires, it can add your pentagonal "tool tip" view to the window near the mouse.
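The timer-reset pattern is language-agnostic; here is a small sketch of the bookkeeping it involves, written in Dart to match the other snippets in this thread (in Cocoa you would feed it from mouse-moved events, e.g. via an NSTrackingArea, and HoverTooltipController is a hypothetical name):

import 'dart:async';

// Hypothetical helper: every mouse-moved event resets the timer; if the
// pointer rests in place for the full delay, the tooltip is shown there.
class HoverTooltipController {
  HoverTooltipController({
    required this.showTooltip,
    this.delay = const Duration(milliseconds: 800),
  });

  final void Function(double x, double y) showTooltip;
  final Duration delay;
  Timer? _timer;

  void onMouseMoved(double x, double y) {
    _timer?.cancel(); // each new move invalidates the pending tooltip
    _timer = Timer(delay, () => showTooltip(x, y));
  }

  void onMouseExited() {
    _timer?.cancel(); // leaving the view cancels the tooltip entirely
  }
}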

How to avoid using instanceof in this case? (allowing clicking on only some objects in a quadtree)

I have a bunch of Tank objects inserted into a quadtree. Some of these tank objects can be clicked on if they implement a Clickable interface. The problem is that in order to know what is being clicked, I need to query the same quadtree, but the quadtree has both clickable and non-clickable objects in it.
Potential solutions:
I could use instanceof to see which objects in a specified region are clickable when the user clicks the screen, but I hear that using instanceof is bad practice.
I could maintain TWO quadtrees: one for just tanks, and one for clickable objects. But then I would have to update tanks that implement the Clickable interface TWICE, since they would be in two separate quadtrees. That would be slow, considering I have to update every step and people only click the screen every so often.
I could insert only clickable objects into the quadtree, and simply make non-clickable objects define dummy click methods. This would solve the problem, but it doesn't feel right, because if something is non-clickable, it shouldn't be implementing a click method to begin with, even an empty one. Or is this OK?
I'm thinking #3 is probably the best way to do it, but I'm not sure. Any pointers?
EDIT: Now that I think about it, #3 would have some problems dealing with overlapped clicks. If a non-clickable tank overlaps a clickable one and I click in the overlap, then unless I call the click methods of everything at that point, I could end up calling the click method of the wrong tank. To avoid doing THAT, I'd have to use instanceof to find the clickable object and only call ITS click method.
So maybe #2 is better, because I could easily cycle through all clickable objects at the clicked region, choose the one that is either on top or closest to the center of the region, and only call that tank's click method.
It might also be that they don't overlap but are just close together. If I were to make this game touch-based, your finger isn't a point but a region, so it might be better to cycle through objects in a region and find the closest clickable object - which would be hard to do with #3 without sifting through a bunch of non-clickable objects.
#3 sounds pretty good. You could maybe enhance it by creating another interface that ALL your objects implement, for example "GameObject", which has a method "isClickable". Then you can have your clickable objects return true from isClickable. A sketch follows.
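For concreteness, here is a sketch of #3 plus that isClickable flag. It is written in Dart to match the other snippets in this thread, but the same shape translates directly to Java interfaces; Wall, Tank, and clickTarget are illustrative names, not from the question:

// One interface for ALL objects in the quadtree.
abstract class GameObject {
  bool get isClickable;
  void onClick(); // a no-op for non-clickable objects
}

class Wall implements GameObject {
  @override
  bool get isClickable => false;
  @override
  void onClick() {} // dummy; never invoked when isClickable is checked first
}

class Tank implements GameObject {
  @override
  bool get isClickable => true;
  @override
  void onClick() => print('tank selected');
}

// On a click/touch, query the quadtree region once and filter by the flag,
// with no instanceof needed; pick by z-order or distance as appropriate.
GameObject? clickTarget(Iterable<GameObject> inRegion) {
  for (final obj in inRegion) {
    if (obj.isClickable) return obj;
  }
  return null;
}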

2 buttons having the same functionality in iphone application

I am creating an application with call functionality. I want the phone image on the right-hand side and the name of the contact on the left-hand side. Currently this is implemented by placing two buttons side by side, so if the user taps either button, the same thing (calling the contact) happens.
Is this allowed according to Apple HIG? Please let me know.
Thank You,
Ashvin
I don't think anything in the HIG specifically disallows this, but it seems like a bad idea, for a couple of reasons:
If there are two buttons side-by-side, users will assume the buttons do different things, and will be confused when they don't.
It wastes screen real estate that could probably be put to better use.
From your description it is hard to tell exactly what this looks like, but it sounds like something that needs to be re-designed.
Overlapping buttons?
It might not give you an intuitive UI, as the content of the overlapped button might get truncated by the overlapping button.
As Kristopher has explained, it will result in a bad user experience.
It is possible that your app might get rejected because of the truncation, as the user will not be readily able to perceive the content.
If the look of the button is what concerns you, you can just set one image (odd-shaped or not) on a single button, as long as the user is able to perceive that it is a button.
With an odd-shaped image, you should also ensure that the functionality is not triggered when the user touches the transparent area of the image; otherwise it might confuse the user.
Hope these points help!
Is there any mention of overlapping buttons? Suppose that, for better design / look and feel, I have to place two buttons that partially overlap each other - is that disallowed? The screen still looks good and there is no confusion about the buttons; they are simply two different buttons that provide the same functionality and overlap.
Adding some more information on the above query:
The buttons overlap because one is just a graphic and the other is a rounded-rectangle button whose title changes dynamically.
If they cannot be overlapped, can we have an odd-shaped button on the screen, as long as the user can tell it is a button?
Thanks.

Things to consider when writing for touch screen?

I'm starting a new project which involves developing an interface for a machine that measures wedge and roundness of lenses and stores the information in a database and reports on it. There's a decent chance we're going to be putting a touch screen on this machine so that it doesn't need to have a mouse or keyboard...
I don't have any experience developing for full size touch screens, so I'm looking for advice/tips/info from you guys...
I can imagine you want to make the elements a little larger than normal... space buttons out a bit more.... things like that... anyone have anything else to add?
A few things to consider:
You need to account for parallax error when touching controls. Basically, the user may touch the screen above or below your actual control and therefore miss it. This is a combination of the size of the control (e.g. you can make the active area larger than the visual control, so the user can miss slightly and still activate it - see the sketch after this list), the viewing angle of the user (which you may or may not be able to predict/control) and the type of touch screen you're using. If you know where the user will be placed relative to the screen when using it, you can usually accommodate this with appropriate calibration.
Depending on the type of touch screen, you may need to ensure that your users aren't wearing gloves or using an implement other than their fingers (eg the end of a pen) to touch the screen. Some screens (eg those depending on conductance) don't respond well to anything other than flesh and blood.
Avoid using double clicks because it can be very hard for users to reliably double click a control. This can be partly mitigated if you've got experienced/trained users working in a fairly controlled environment where they're used to the screens.
Linked to the above, if you are using double clicks, you may find the double click activated when the user only wants to single click. This is because it's very easy for the user's finger to bounce slightly on touching the screen and, depending on how sensitive the double click settings are, trigger a double rather than a single click. For this and the previous reason, we always disable double clicks and only use single clicks (or similar single activation controls).
However big you think you need to make the controls to allow for touch activation, they almost certainly need to be bigger still. Make sure you test the interface with real users in the real deployment environment (or as close to it as you can get). For example, we deployed some screens with nice big buttons you couldn't miss only to find that the control room was unheated and that the users were wearing thick gloves in the middle of winter, making their fingers way bigger than we had allowed for.
Don't put any controls near the edges of the screen - it's very hard to get your finger into the edges (particularly if the screen has a deep bezel) and a slight calibration problem can easily shift the control too close to the edge to use. Standard menus and scroll bars are a good example of controls that can be very tricky to use on a touch screen and you should either avoid them (which is preferable - they're not good for touch screens) or replicate them with jumbo equivalents.
Remember that the user's hand will be over the screen, obscuring some of the screen and controls (typically those below where the user is touching, but it depends on the position of the user relative to the screen). Don't put instructions or indicators where the user's hand or arm will obscure them when trying to use the control they relate to (eg typically put them above rather than below the control).
Depending on the environment, make sure your touch screen is suitably proofed against dust, damp, grease etc and make sure it's easy to clean without damaging it. You wouldn't believe the slime that can quickly accumulate on a touch screen in an industrial or public setting.
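To make the enlarged-active-area point above concrete, here is a small sketch, written in Flutter/Dart to match the other snippets in this thread (the widget name and sizes are illustrative, not from the question): the visual button is 48px tall, but the tappable region extends 16px beyond it on every side, so a slightly-missed touch still activates it.

import 'package:flutter/material.dart';

class BigHitButton extends StatelessWidget {
  const BigHitButton({super.key, required this.label, required this.onTap});

  final String label;
  final VoidCallback onTap;

  @override
  Widget build(BuildContext context) {
    return GestureDetector(
      // opaque: taps on the invisible padding still count as hits.
      behavior: HitTestBehavior.opaque,
      onTap: onTap,
      child: Padding(
        padding: const EdgeInsets.all(16), // extra activation area
        child: Container(
          height: 48, // the part the user actually sees
          alignment: Alignment.center,
          color: Colors.blueGrey,
          child: Text(label, style: const TextStyle(color: Colors.white)),
        ),
      ),
    );
  }
}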
The other obvious one is that there's no equivalent of pointer 'hover'. Not that that affects many apps though.
If you decide to put in analog controls (scrollbars, rotation widgets, etc) be sure to put in a digital control also. Some companies think that a touch screen means perfect control over something with your fingers. In real life, this translates to minutes of frustration trying to fix a number that's just a little off.
The most obvious thing is that everything on the GUI needs to be big enough for a fingertip to hit, which is sometimes bigger than you think.
As has been mentioned, there's really no way for a right-click action to happen. Also, double-clicking can be tricky with a fingertip on a touch screen.
The other major thing is that you'll want to create an on-screen keyboard that pops up for text entry, and an on-screen numpad for number-only fields.
I wrote my own set of controls for a POS application designed specifically to be touchscreen friendly.
Remember to allow enough real estate for stubby fingers and talons. In our application, users can have manicures that force them to use the pad of their finger instead of the tip. This means you need to allow more space for activation areas than you would normally consider in any other type of application.
I would also recommend that you accommodate yourself as a programmer, from a testing standpoint and because things change: there may need to be a keyboard and mouse attached to a non-touch workstation. I cannot tell you how many times I went to touch my flat-panel LCD expecting something to happen, before remembering that I had to use the mouse.
Make sure to read up on basic UI principles like Fitts's law (the time to acquire a target is a function of the distance to and size of the target).
Also consider whether or not the device is stationary when it is in use (e.g. handheld, like a PalmPilot or iPhone); research shows that you must accommodate that in your design.
Larger GUI elements are the major thing. And it applies to all elements: scroll bars, tabs, and even text fields.
The other major thing I can think of is that it's hard for the user to right-click. So things that require a right click should be avoided; context menus are the only example that comes to mind at the moment.
The other responses are pretty good, but are you totally sure that a touch screen would actually be easier to use? There are a lot of devices where a touch screen actually makes them much harder to use, not easier. The main problem is that you can't use the device when you're not looking at it. If users are going to be doing a lot of repetitive actions, a keyboard could be a lot more efficient.
Also, a touch screen might be a lot harder to use by someone with a disability, if you think there's even a small chance that could happen.
Even though this is quite old now, I found it to still be useful, as a starting point for design considerations.
http://www.sapdesignguild.org/resources/tsdesigngl/index.htm
If you've not already done so, have a look at some of the documentation available for developers on mobile platforms, eg Windows Mobile, iPhone.