I have two worlds A and B.
A has tappable components but B doesn't have tappable components.
First my single CameraComponent targets world A and I can detect gesture on the tappable components in A.
Next, my camera switches from targeting world A to world B, then I switch the view to world B.
Finally, when I tap the displayed world B, the gestures are still detected as if it were world A.
See example here:
https://i.stack.imgur.com/jcpkb.gif
flame: ^1.6.0
I expected that the gestures wouldn't be detected once I swapped to world B. Is this not correct?
You also have to remove world A from the component tree if you don't want the gestures to trigger there.
Just do something like this when you swap the camera to the world B:
gameRef.remove(worldA);
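A minimal sketch of the full swap, assuming you hold references to cameraComponent, worldA and worldB, and that this runs somewhere with access to gameRef (the method name switchToWorldB is just a placeholder):

void switchToWorldB() {
  // Point the camera at world B.
  cameraComponent.world = worldB;
  // Detach world A so its tappable components stop receiving gestures.
  gameRef.remove(worldA);
  // Mount world B if it isn't in the tree yet.
  if (!worldB.isMounted) {
    gameRef.add(worldB);
  }
}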
I am wondering what the equivalent of the web API DocumentOrShadowRoot.elementFromPoint() is in Flutter.
Specifically, I am wondering how I could figure out which leaf element/widget instance in a widget hierarchy sits at a given Offset.
For example, consider the following structure:
For the first Offset, marked with a dark circle, I would expect to get some sort of data that helps me figure out the offset is over the Container.
For the second Offset, marked with a dark circle, I would expect the Stack.
For the last one, it would be the Positioned element.
A bit of context
I'm exploring the implementation of a visual editor similar to Figma in Flutter. I have experience implementing such a rendering system with web technologies.
I want to render a selection indicator or outline when a tap/click happens on an element. These elements are nested, and adding multiple nested event handlers triggers all of them. For example, mouse enter and mouse leave while moving the mouse over the Stack or Positioned element would trigger all the parent event handlers as well.
Any help or guidance would be appreciated.
Simple answer to your exact question: No direct equivalent. Possible to implement but not advisable.
You could theoretically implement your own version of elementFromPoint() by looking at how GestureBinding works in Flutter. That would be a deep dive for sure, and you might learn from it, but there is a simpler solution. Even if you implemented your own method, you would still need to resolve conflicts when more than one element is found - and that is something Flutter solves out of the box with the gesture arena.
I see that you expect the top-most or deepest child to be reported, something you can obtain by using the GestureDetector widget. What you're looking for is making your gesture detectors opaque. A GestureDetector has a property called behavior of type HitTestBehavior; the default is deferToChild. Here are the possible values:
/// How to behave during hit tests.
enum HitTestBehavior {
  /// Targets that defer to their children receive events within their bounds
  /// only if one of their children is hit by the hit test.
  deferToChild,

  /// Opaque targets can be hit by hit tests, causing them to both receive
  /// events within their bounds and prevent targets visually behind them from
  /// also receiving events.
  opaque,

  /// Translucent targets both receive events within their bounds and permit
  /// targets visually behind them to also receive events.
  translucent,
}
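For example, a minimal sketch of such an opaque wrapper (the selectable helper and the onSelected callback are placeholders, not Flutter API):

import 'package:flutter/material.dart';

// Wraps a design element so a tap reports only this element. The opaque
// behavior makes the detector hittable across its full bounds and stops
// targets visually behind it from also receiving the hit; among nested
// detectors, the deepest one wins the tap in the gesture arena.
Widget selectable(String id, Widget child, void Function(String) onSelected) {
  return GestureDetector(
    behavior: HitTestBehavior.opaque,
    onTap: () => onSelected(id),
    child: child,
  );
}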
What follows is only slightly related, so consider it a deep dive into your use case.
Since you're going down this path: I also built a WYSIWYG design system, with selection indicators, handles for rotating, resizing, etc., and have one piece of advice: completely separate your design rendering from your gesture detectors and selection indicators.
I initially put the gesture detectors "around" the design elements - in your example, the gesture detectors would sit in between yellow / blue / green / red. The reason this is a bad idea is that it complicates a few things. In some cases I needed touch areas larger than the design elements themselves, so I had to add padding and reposition the GestureDetector parents. In other cases the design elements would become fixed or locked and would no longer have a GestureDetector parent, and Flutter would completely rebuild the contents of the layer because the tree diffing got confused. It gets messy fast. So, stack these layers (there's a rough sketch after this list):
Design on bottom, no interactivity.
Selection indicators and resize / rotate handles. Still no interactivity.
Gesture detectors for all design elements. If you're lucky, you know the exact size, position, and rotation of the design elements and can simply use Positioned. If you have groups of design elements, then your gesture detectors also get grouped and transformed together. If you also have self-sizing design elements (like images), it gets a bit more complicated, but I got around my issues by adding the design element as an invisible child. The way I would do this now is by loading metadata about the images so their sizes are known at build time (as opposed to waiting for images to load and produce layout changes).
Gesture detectors for the selection indicators and resize / rotate handles. These are top-most and also opaque, so they catch everything that hits them.
This setup then allows you to experiment more in the gesture department, lets you use colored boxes to debug, and in general will make your life easier.
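A rough sketch of that stacking; every name below is a placeholder for your own widgets:

// All four layers are hypothetical widgets; substitute your own.
Widget buildEditor() {
  return Stack(
    fit: StackFit.expand,
    children: [
      designLayer(),      // 1. design rendering, no interactivity
      indicatorLayer(),   // 2. selection indicators + handles, no interactivity
      elementHitLayer(),  // 3. opaque gesture detectors for the design elements
      handleHitLayer(),   // 4. top-most opaque detectors for the handles
    ],
  );
}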
TLDR: Use opaque gesture detectors.
I need to move an image down through the canvas so that its central point ends up where its top edge is now. That's about 50 points, but if I decrease y by 50, it moves to a different part of the screen on devices with different screen sizes. I guess it's because my main canvas is set to scale with the screen size. So I suppose I need to manually divide 50 by my design height and then multiply by Screen.height in code? Isn't there a more convenient way to move UI objects?
Allow me a second question: do you think it is even wise to make a game purely on canvas? My game is simple 2D, only slightly animated, and contains many layout elements, so I decided to go for it, but I'm having a hard time grasping the UI positioning rules.
You may have an anchoring problem.
Unity UI depends entirely on anchoring; if you have the right anchoring, there is no issue.
For example, if you anchor something at the center, then changing its position values moves it relative to the center anchor.
For clearer visualization, you could paste a screenshot of the behavior.
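As for the original question of moving the image down by half its own height: offsets expressed through RectTransform.anchoredPosition are in canvas units, which the CanvasScaler scales with the screen for you, so they stay proportionally correct on every device. A minimal sketch, assuming the image is a UI element under a scaled canvas (the component name is just an example):

using UnityEngine;

public class CenterOnTopEdge : MonoBehaviour
{
    void Start()
    {
        var rt = GetComponent<RectTransform>();
        // Move down by half the element's height, in canvas units.
        // Because the CanvasScaler scales these units with screen size,
        // the offset is resolution-independent.
        rt.anchoredPosition -= new Vector2(0f, rt.rect.height / 2f);
    }
}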
My tvOS app generates a game board using SKNodes that looks like the following:
Each shape, separated by lines, is an SKNode that is focusable (e.g. each colored wedge is composed of 5 SKNodes that gradually diminish in size closer to the center).
My problem is that the focus engine doesn't move focus to the SKNode that feels like the logical, most natural next node. This happens because the focus engine's logic is rectangular while my SKNodes are curved. As you can see below, there are inherent problems in figuring out the next focusable item when swiping down from the outermost yellow SKNode:
In the example above, the focus engine deduces that the currently focused area is the area within the red-shaded rectangle, based on the node's edges. Because of this, the focused rectangle overlaps areas that are not part of the currently focused node, including the entire width of the second yellow SKNode. Therefore, when swiping downward, the focus engine skips focus to the third (middle) yellow SKNode.
How would one go about solving this focus issue so that focus moves naturally both vertically and horizontally around my circular game board of SKNodes, without seeming so erratic? Is this possible? Perhaps with UIFocusGuides?
You have to handle the focus manually. Use the method below to check for the next focused view from the context:
func shouldUpdateFocus(in context: UIFocusUpdateContext) -> Bool
In this method you get the focus heading direction (UIFocusHeading). Intercept the required direction, save your desired next focused view in some property, and then manually update the focus by calling the methods below:
setNeedsFocusUpdate()
updateFocusIfNeeded()
This will trigger the following property:
var preferredFocusEnvironments: [UIFocusEnvironment] { get }
In it, check for the saved instance and return it. This lets you handle the focus manually to fit your requirements.
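Putting it together, a rough sketch; nextWedge(below:) is a hypothetical helper you would implement from your own board geometry, and the saved value is typed as UIFocusEnvironment so you can return whatever your focusable nodes expose:

import UIKit

class BoardViewController: UIViewController {

    // The environment we want focused next; set before forcing an update.
    private var pendingFocus: UIFocusEnvironment?

    override func shouldUpdateFocus(in context: UIFocusUpdateContext) -> Bool {
        // Intercept only the direction that misbehaves; let the rest through.
        if context.focusHeading.contains(.down),
           let next = nextWedge(below: context.previouslyFocusedItem) {
            pendingFocus = next
            // Re-run the focus update so preferredFocusEnvironments is consulted.
            DispatchQueue.main.async {
                self.setNeedsFocusUpdate()
                self.updateFocusIfNeeded()
            }
            return false // cancel the engine's default (rectangular) move
        }
        return super.shouldUpdateFocus(in: context)
    }

    override var preferredFocusEnvironments: [UIFocusEnvironment] {
        // Return the saved target first, falling back to the default chain.
        if let next = pendingFocus {
            pendingFocus = nil
            return [next]
        }
        return super.preferredFocusEnvironments
    }

    // Hypothetical: map the currently focused item to the wedge that sits
    // visually below it on the circular board.
    private func nextWedge(below item: UIFocusItem?) -> UIFocusEnvironment? {
        // ... your board geometry goes here ...
        return nil
    }
}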
These are my current settings; let me know if you need more information to help me solve this issue.
Camera : Perspective
Canvas Render Mode : World Space
In the first picture below, the user wants to select a ship of their choice: they click where "Select V" is and see the "Fighter ship" image. The problem is that not only is the green circle in the way, but the dropdown itself is just not big enough.
In the second picture below, you can see that scaling has been done to make the dropdown more visible, but it is now hidden behind another ship select box (ship2 in the Hierarchy).
I have tried making the Z coordinate larger/smaller, and even when I bring it closer to the camera, it is still rendered behind the ship2 game object. I am at a total loss for ideas on how to approach this, so if anyone could shed some light on this, that would be awesome!
Here are two more screenshots in case the first two images were not enough information to go on.
If I understood your question correctly, your UI is behind the ship but you want it to be above the ship. If that's the case, read below; otherwise, leave a comment.
Objects under a Canvas are rendered based on their order in the hierarchy, not their depth. Unity UI is rendered from top to bottom in the hierarchy. Don't go changing the z-axis to change the display order; it doesn't work like NGUI.
If you want an object to be displayed on top, it has to be put below the other object in the hierarchy, NOT above.
So if object A is on top of object B in the scene, the problem is in the hierarchy: go to the hierarchy and put object B below object A if you want object B to be on top of object A.
Also, don't scale the UI the way you did in picture #2. Change the scale of shipRow1 back to 1,1,1, then use the Width and Height properties to change its size.
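If you ever need to change the order at runtime, here's a small sketch using Unity's sibling-index API; attach it to the object that should render on top (the component name is just an example):

using UnityEngine;

public class BringToFront : MonoBehaviour
{
    // Unity UI draws siblings in hierarchy order, so moving this object
    // to the last sibling slot renders it on top of the other children
    // of the same parent.
    void OnEnable()
    {
        transform.SetAsLastSibling();
    }
}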
In the GameSalad framework I am creating a game in which I have an actor that moves only along the x-axis while a touch is pressed. But when I move it along the x-axis, the actor goes out of the range of the screen. Let me clarify that I am new to the GameSalad framework.
Please help me solve the problem.
I'm reading this in two different ways:
1 - when you press your touch controls, the actor disappears from the screen
2 - when you press the touch control, the actor moves along the x-axis and out of the screen
For the sake of argument I'm going to assume that we're talking about number two.
What you'll need to do is restrict the actor's movement to the boundaries of the current screen. You can do this in two ways with GameSalad:
1 - create an invisible barrier for your actor to collide against
2 - use a behaviour to prevent your actor from going beyond the screen boundary
I'll explain both:
1, Invisible barriers
What you'll do is create a new actor and set it to collide with the actor that you're controlling. You'll create a few instances of this actor to build a walled area around your actor. Although this works, it's a little kludgy, and using additional actors in a scene takes a little performance away from your application.
2, using a behaviour
In my opinion, the better way is to use the behaviours provided by GameSalad itself.
To stop the player actor from moving off screen you can use a combination of Rules and the Constrain Attribute behaviour to achieve this.
The first thing to know is your screen size; for an iPad I believe it's 1024 along the x-axis.
So to stop the actor moving off either side of the screen you'll need to do the following:
Open up the player actor
then click on the "Create Rule" button at the top right.
A new rule window will appear; by default the first dropdown will say "Actor receives event" - change this to "Attribute".
Next, select the attribute to use the rule against; since we're interested in the x-axis, we'll want to query the player attribute, which will be:
(also known as self) > Position > X
Select the greater-than symbol (">") and then enter the maximum width of the screen minus whatever border value you want; I'll use 1014 (1024 - 10).
Find and drag the Constrain Attribute behaviour into your rule and set the actor's X position to 1014.
This will stop the actor from going beyond one side of the screen. Now copy the rule and amend the settings so that if the actor's X goes below, say, 10, it constrains the actor's X position to 10.
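If your version of GameSalad supports min and max in the expression editor (an assumption worth verifying), you can clamp both sides with a single Constrain Attribute instead of two rules:
Constrain Attribute: self.Position.X To: min(max(self.Position.X, 10), 1014)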
I'd post an image, but alas my Karma isn't large enough right now! Hence the large explanation!
Hope this is what you're looking for!
It is way easier than that.
At the top of the GameSalad Creator there is a play button which, as you probably know, previews your progress. To the left of this button there is a button that looks like a little video camera; it changes the camera settings. So the first thing to do is click the camera button, and the rectangular camera screen will show itself as marked. In the center of each side of the highlighted rectangle (the camera screen) sits a little grey rectangle. Each of these needs to be pulled to the center of the camera view so that you get a little grey "cross" in the center, and from the center to the border there will now be this highlighted color.
The second and last step is easy: go to your character and type or drag in a Control Camera behavior block.
Since GameSalad can only have one camera at a time, and your character is the only one with the Control Camera behavior applied, the camera will follow him and only him. Wherever your character starts on the screen, the camera will follow him once he passes through the center of the screen. You may know this from Super Mario Bros., where you start off a little to the left, walk right, and the moment Mario enters the center of the screen, the camera follows him from then on.
Hope this helps!