Any way to move widgets beyond container without resizing it? - gtk

I am working on a game using GTK3 as a rendering technique (terrible idea, but it's a school project).
My gameobjects are made of Image widgets and are placed in a Fixed container. It's working pretty well; however, when I move widgets beyond the right or bottom border, the window automatically grows along with them.
I want the window to stay the same size, even if a widget leaves its area and becomes invisible. It works when I move a widget past the top or left border.
I tried using gtk_widget_set_vexpand and gtk_widget_set_hexpand. My window is set as not resizable (gtk_window_set_resizable).
Is there any way I can achieve this?

This isn't the right way to use GTK+. GTK+ is intended for laying out widgets in a GUI program.
There are better options for animating 2D elements. One that works with GTK+ natively is the Clutter library. You can also integrate SDL or OpenGL or something like that if you so choose.
That being said, you can also use GtkLayout instead of GtkFixed, or put the GtkFixed in a GtkScrolledWindow, hide the scrollbars, and set the scroll policy so the user can't scroll. It's still technically misuse; GtkFixed (and possibly GtkLayout too, though the docs don't say) really isn't supposed to be used anymore unless absolutely necessary, because it gives you no automatic help with tricky UI layout problems. On the other hand, it doesn't bring in any extra dependencies.
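If you try the GtkLayout route, a minimal sketch (untested; the 640x480 size, window, and player_image are placeholders for your own widgets) might look like this. Since GtkLayout is a scrollable container, its size request comes from the size you give it rather than from where its children sit, so moving a child past the right or bottom edge should just clip it instead of growing the window:

/* Hypothetical sketch: a fixed-size GtkLayout used as a game viewport */
GtkWidget *viewport = gtk_layout_new(NULL, NULL);
gtk_layout_set_size(GTK_LAYOUT(viewport), 640, 480);   /* logical size */
gtk_widget_set_size_request(viewport, 640, 480);       /* on-screen size */
gtk_container_add(GTK_CONTAINER(window), viewport);

gtk_layout_put(GTK_LAYOUT(viewport), player_image, 0, 0);

/* later, each frame: coordinates past 640x480 should not resize the window */
gtk_layout_move(GTK_LAYOUT(viewport), player_image, x, y);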

Related

Difference between .NET MAUI Border and Frame

What's the functional difference between a Border and a Frame in .NET MAUI?
The summary of a Border according to the documentation is
The .NET Multi-platform App UI (.NET MAUI) Border is a container control that draws a border, background, or both, around another control. A Border can only contain one child object. If you want to put a border around multiple objects, wrap them in a container object such as a layout.
And the summary of a Frame is as follows
The .NET Multi-platform App UI (.NET MAUI) Frame class is used to wrap a view or layout with a border that can be configured with color, shadow, and other options. Frames can be used to create borders around controls but can also be used to create more complex UI.
Sounds like they both do the same thing to me: drawing a border around another view (whether that's a layout or a single control doesn't matter). So why are there 2 different views? How do I decide which one to use?
I think this is due to the history of .NET MAUI. The Frame is a control that comes from Xamarin.Forms. I'm not sure if it was ever intended to be the control for putting a border around something, but since it was the only control that could do a shadow and a border for a long time, a lot of people wrapped their controls in a Frame.
However, now with .NET MAUI there is the opportunity to fix some historical tech debt. That is why Border was introduced, which is much more flexible. With Border you can give each corner an individual corner radius, for instance, and instead of just a solid color you can give the Border a gradient.
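For example, something along these lines (an untested XAML sketch, values picked just for illustration) gives you two rounded corners and a gradient stroke, neither of which Frame can do:

<Border StrokeThickness="3"
        StrokeShape="RoundRectangle 20,0,0,20"
        Padding="12">
    <Border.Stroke>
        <LinearGradientBrush EndPoint="0,1">
            <GradientStop Color="Orange" Offset="0.1" />
            <GradientStop Color="Brown" Offset="1.0" />
        </LinearGradientBrush>
    </Border.Stroke>
    <Label Text="Bordered content" />
</Border>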
So from a functional perspective the Border has more options and will probably outlive the Frame, although there is no indication that Frame is going anywhere anytime soon.
There are probably more little differences here and there. Based on nothing more than a gut feeling, I would think Border performs better, but I have no data to back that up.
Hope this makes it a bit more clear.

flutter equivalent to DocumentOrShadowRoot.elementFromPoint()

I am wondering what the equivalent of the web API DocumentOrShadowRoot.elementFromPoint() is in Flutter.
Specifically, I am wondering how I could figure out which leaf element/widget instance in a widget hierarchy sits under a given Offset.
For example, consider the following structure:
For the first Offset, marked with a dark circle, I would expect to get some sort of data that helps me figure out the offset is over the Container.
For the second Offset, marked with a dark circle, I would expect the Stack.
For the last one, it would be the Positioned element.
A bit of context
I'm exploring the implementation of a visual editor similar to Figma in Flutter. I have experience implementing such a rendering system with web technologies.
I want to render a selection indicator or outline when a tap/click happens on each element. These elements are nested. Adding multiple nested event handlers triggers all of them. For example, mouse enter and mouse leave when moving the mouse over the Stack or Positioned element would trigger all the parent event handlers as well.
Any help or guidance would be appreciated.
Simple answer to your exact question: No direct equivalent. Possible to implement but not advisable.
You could theoretically implement your own version of elementFromPoint() by looking at how GestureBinding works in Flutter. That would be a deep dive for sure, and you might learn from it, but there is a simpler solution. Even if you implemented your own method, you would still need to resolve conflicts when more than one element is found - and that is something Flutter solves out of the box with the gesture arena.
I see that you expect the top-most or deepest child to be reported, which you can obtain by using the GestureDetector widget. What you're looking for is making your gesture detectors opaque. A GestureDetector has a property called behavior of type HitTestBehavior, and its default is deferToChild. Here are the possible values:
/// How to behave during hit tests.
enum HitTestBehavior {
  /// Targets that defer to their children receive events within their bounds
  /// only if one of their children is hit by the hit test.
  deferToChild,

  /// Opaque targets can be hit by hit tests, causing them to both receive
  /// events within their bounds and prevent targets visually behind them from
  /// also receiving events.
  opaque,

  /// Translucent targets both receive events within their bounds and permit
  /// targets visually behind them to also receive events.
  translucent,
}
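A rough sketch for your example (sizes and colors are made up): wrap each selectable design element in an opaque GestureDetector. Because the Stack hit-tests its top-most child first and that detector is opaque, a tap on the overlapping red box is claimed by its own detector, and the yellow element visually behind it does not also receive the tap:

Stack(
  children: [
    GestureDetector(
      behavior: HitTestBehavior.opaque,
      onTap: () => debugPrint('container selected'),
      child: Container(width: 200, height: 200, color: Colors.yellow),
    ),
    Positioned(
      left: 40,
      top: 40,
      child: GestureDetector(
        behavior: HitTestBehavior.opaque,
        onTap: () => debugPrint('positioned child selected'),
        child: Container(width: 80, height: 80, color: Colors.red),
      ),
    ),
  ],
)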
What follows is only slightly related, so consider it a deep dive into your use case.
Since you're going down this path: I also built a WYSIWYG design system, with selection indicators, handles for rotating and resizing, etc., and I have one piece of advice: completely separate your design rendering from your gesture detectors and selection indicators.
I initially put the gesture detectors "around" the design elements - in your example, the gesture detectors would sit in between yellow / blue / green / red. The reason this is a bad idea is that it complicates a few things. In some cases I needed touch areas larger than the design elements themselves, so I had to add padding and reposition the GestureDetector parents. In other cases the design elements would become fixed or locked and no longer have a GestureDetector parent, and Flutter would completely rebuild the contents of the layer because the widget-tree diffing got confused. It gets messy fast. So, stack these layers:
Design on the bottom, no interactivity.
Selection indicators and resize / rotate handles. Still no interactivity.
Gesture detectors for all design elements. If you're lucky, you know the exact size, position, and rotation of the design elements and can simply use Positioned. If you have groups of design elements, then your gesture detectors also get grouped and transformed together. If you also have self-sizing design elements (like images), it gets a bit more complicated, but I got around my issues by adding the design element as an invisible child. The way I would do this now is by loading metadata about the images and knowing their dimensions at build time (as opposed to waiting for images to load and produce layout changes).
Selection indicator and resize / rotate handle gesture detectors. These are top-most and also opaque, so they catch everything that hits them.
This setup then allows you to experiment more in the gesture department, it allows you to use colored boxes to debug and in general will make your life easier.
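In widget terms the layering might look roughly like this (all four layer widgets are hypothetical names standing in for your own code; IgnorePointer keeps the passive layers out of hit testing entirely):

Stack(
  children: [
    IgnorePointer(child: DesignCanvas()),         // design at the bottom, no interactivity
    IgnorePointer(child: SelectionIndicators()),  // outlines + handles, still passive
    ElementGestureLayer(),                        // opaque GestureDetectors per design element
    HandleGestureLayer(),                         // top-most: opaque detectors for the handles
  ],
)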
TLDR: Use opaque gesture detectors.

How to 9-slice a sprite while keeping the center not scaled?

I wonder, is there any way to slice this sprite (a dialog pop-up) that keeps the bottom center (the upside-down triangle) unscaled? I'm using nGUI, if it matters.
Nope
Sorry, but that's how 9-slice scaling works. You would need 25-slice scaling to do what you're looking for, and that's overkill for most things, so I've never seen an implementation.
What to do instead...
Break up your sprite into two pieces: the 9-slice portion and the "notch" portion. Then just position the notch to be in the right place.
I haven't used nGUI (only iGUI and the native Unity UI, both old and new), so I'm not sure precisely how nGUI will let you do that, but you'd still need two sprites, one of which is scaled and one of which isn't, positioned either manually or through a parent-child relationship. If your dialog is always the same width, it'll be pretty straightforward. If not, it might be more challenging.
A few other things:
You'll probably want the notch sprite and the bubble sprite to be the same native image size, but it's not necessary (it might make things easier, might not).
The notch will want some "overbleed" so that when the two stack, the underlying rendering code doesn't go all squinty-eyed, decide "there's a gap here...", and draw through in some cases.
Depending on the bubble portion's drawn edge, you might want the notch in front or behind. In your precise case, I don't think it'll make a difference. It's a little hard to tell due to the colors, but when I did a selectable tab (which is built similarly), the tab sits on top of the container window so that the shaded edge flows nicely. The unselected version then has no overbleed, so it looks like it sits "behind" (accurate pixel placement in a 2D game at a fixed size ensures that no "gap" is rendered).
It's a little tedious but pretty straightforward to implement this for UI images. I recently did it in order to make a slice stretch the left/right borders of a 9-slice instead of the center.
The trick is to subclass Image and override OnPopulateMesh, where you do the calculations you need and set positions/uvs to whatever you require.
Here's a helpful how-to article: https://www.hallgrimgames.com/blog/2018/11/25/custom-unity-ui-meshes
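As a rough illustration of the shape of that subclass (this is not the implementation from the article, and the actual vertex math is left as a stub):

// Sketch only: a custom Image that post-processes the generated 9-sliced mesh.
using UnityEngine;
using UnityEngine.UI;

public class NotchedBubbleImage : Image
{
    protected override void OnPopulateMesh(VertexHelper vh)
    {
        base.OnPopulateMesh(vh); // start from the normal sliced geometry

        UIVertex v = new UIVertex();
        for (int i = 0; i < vh.currentVertCount; i++)
        {
            vh.PopulateUIVertex(ref v, i);
            // ...adjust v.position / v.uv0 for the region you want left unscaled...
            vh.SetUIVertex(v, i);
        }
    }
}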
Things for a non-UI sprite will be harder. I think you'll have to create all your geometry in a script, and the calculations might be a little complicated because you're using an atlas.

webkit overflow scrolling touch conflicts with webkit transform

It seems like applying a -webkit-transform property to an element (or its parent) that has -webkit-overflow-scrolling: touch completely breaks scrolling, in that scrolling doesn't work at all.
Has anyone experienced this bug and know of a solution?
My current (hacky) solution looks like this:
$container.one 'webkitAnimationEnd', ->
  # detach the scrollable contents and re-attach them to force a re-layout
  $contents = $container.find('.contents').detach()
  $container.append($contents)
Basically I'm removing and then re-adding the contents of the scrollable div after the animation ends. Hopefully someone has a better solution for this.
I'm having the exact same problem, and it only goes away if I ditch -webkit-transform and switch to absolute positioning.
This would be fine, except that absolute positioning leads to lousy performance and choppy animation on iOS, which in iOS 6+ cannot be remedied with the previously popular translateZ and translate3d forced hardware acceleration hacks.
I figured out a hack, but it is so hideous and actually ugly that you might not want to read any further:
Take the element we want to apply -webkit-overflow-scrolling: touch to and separate it completely from the element we are applying -webkit-transform to. Use z-index manipulation so the scrolling element appears in the same place it would have originally, while keeping -webkit-transform so the original container (now empty) still animates into place naturally. In my case this hack falls short, though, because the scrollable content appears suddenly on top of the animated container instead of sliding in along with it.
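In CSS terms the idea is roughly this (class names are made up, and as noted the result still falls short visually):

/* The container keeps the -webkit-transform animation but is now empty. */
.animated-panel {
  -webkit-transform: translate3d(0, 0, 0);
}

/* The scrollable element lives outside the transformed subtree and is
   layered over the spot the panel animates into. */
.scroll-area {
  position: absolute;
  z-index: 2;
  overflow-y: scroll;
  -webkit-overflow-scrolling: touch;
}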

Scrollviews and Cocos2D

I'm trying to develop a scrollable tile map in Cocos2D which uses a UIPanGestureRecognizer to do the dirty work, but while developing it I stumbled upon some problems I would like to ask for advice on.
The basic scrolling management works fine: it's precise and accurate, and it works by adding the translation recognized by the pan gesture recognizer to the tiles of the map. The problem is that the map is large and I only draw a small viewport of it, while I want it to behave as if the whole map were scrollable without any problem.
What I was thinking is that, as soon as a whole row or column goes out of the visible screen, it is moved to the opposite side and the corresponding texture rects are updated (I'm working entirely with a CCSpriteBatchNode), so that the viewport is continuously updated and the whole thing works. This seems fine, but I've found many problems in deciding when to move the row/column, keeping track of it (e.g. when the pan changes direction from forward to backward), and many little details which make me think I should find a better approach.
Is there a common solution to my problem? That is: managing a scrollable viewport of a tile map which moves over the whole map, so that to the end user it seems as if the map is infinite.
Thanks in advance
I solved my issue by developing a viewport in which rows and columns are effectively moved from the left side to the right side and from the top side to the bottom side.
This is done automatically when a new column or row enters the viewport, and it's achieved by expanding the drawn viewport beyond the visible one by an amount that is enough to avoid any graphical glitches for the user.
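A very rough Objective-C sketch of the column-recycling idea (visibleRows, visibleColumns, tileWidth, incomingColumn, the leftmostColumn array, and textureRectForColumn:row: are all hypothetical names standing in for your own bookkeeping):

// When the leftmost column has scrolled fully off-screen, recycle its
// sprites to the right edge and point them at the tiles about to appear.
for (NSUInteger row = 0; row < visibleRows; row++) {
    CCSprite *tile = leftmostColumn[row];
    tile.position = ccp(tile.position.x + visibleColumns * tileWidth, tile.position.y);
    [tile setTextureRect:[self textureRectForColumn:incomingColumn row:row]];
}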