I've recently created a 2D app for the HoloLens. It is a UI panel with several buttons on it. To let the user drag the panel and position it wherever they want, I implemented the HandDraggable.cs functionality (from HoloToolkit). However, whenever I try to move the panel, it also rotates.
To change that, I switched the Rotation Mode from "Default" to "Orient Towards User" and "Orient Towards User and Keep Upright". But then it works even worse: in those modes, whenever I select the panel and try to drag it somewhere, the panel flies out of my field of view and suddenly disappears.
I wanted to ask if somebody has already tried to use the HandDraggable option in a HoloLens UI app and knows how to fix this unwanted rotation issue.
I'm currently working on a HoloLens UI for one of my projects, and to manipulate the UI I used the TwoHandManipulatable script that is built into the MixedRealityToolkit. In that script's Manipulation Mode you can set "Move" as the only option, which lets you move a menu with two hands as well as one. (I wanted a menu that can also be rotated and scaled, which works perfectly with this script; you can lock the axes around which rotation is enabled to avoid unwanted manipulation.)
For your HandDraggable script, did you try setting Rotation Mode to Lock Object Rotation? That sounds like it could solve the problem.
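If you prefer to set it from code rather than the inspector, here is a minimal sketch. It assumes the HoloToolkit HandDraggable component and its nested RotationModeEnum (namespace and enum names recalled from the HoloToolkit sources; double-check them against your toolkit version):

using UnityEngine;
using HoloToolkit.Unity.InputModule; // assumed namespace of HandDraggable in HoloToolkit

// Attach this to the panel that already has HandDraggable on it.
public class LockPanelRotation : MonoBehaviour
{
    void Start()
    {
        var draggable = GetComponent<HandDraggable>();
        // Keep the panel's orientation fixed while it is being dragged.
        draggable.RotationMode = HandDraggable.RotationModeEnum.LockObjectRotation;
    }
}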
Related
I have a question regarding Unity and MRTK. I need to create an object that can be deleted via a button push, and this button should be attached to the object. There is the app bar, which can be attached to the object and is very convenient because its buttons are only displayed on the side of the object you are currently looking at. However, the app bar does not seem to work properly with the new bounding box and after deactivating its adjust button. So my question basically is: how do I make a button that is attached to the object, hovers over it, and is only displayed on the side of the object I am currently looking at? The app bar script is very poorly commented, so I cannot figure out which part is responsible for making the buttons appear on the correct side, and correspondingly how to write a script that displays the delete button only on the correct side (following the direction I am currently looking at).
To solve this problem you need a free-standing Canvas attached to your object. This Canvas should be adjusted to the dimensions of the object and should always look at the camera. To do this, first create a Canvas and set its Render Mode to World Space. Remember to assign the Event Camera reference:
After completing the Canvas, create a button like the one below and place it under the Canvas. Adjust its dimensions so that it appears near your main object.
Finally, I suggest using a Look At Constraint to keep the Canvas facing the camera. Add the camera as the source and configure the constraint settings as shown below.
Example Result
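If you would rather wire this up from code than in the inspector, here is a minimal sketch that mirrors the manual setup above using Unity's built-in LookAtConstraint (the class name is just illustrative):

using UnityEngine;
using UnityEngine.Animations;

// Attach to the world-space Canvas: adds a LookAtConstraint at runtime
// with the main camera as its only source, so the Canvas keeps facing the camera.
public class FaceCameraConstraint : MonoBehaviour
{
    void Start()
    {
        var constraint = gameObject.AddComponent<LookAtConstraint>();
        constraint.AddSource(new ConstraintSource
        {
            sourceTransform = Camera.main.transform,
            weight = 1f
        });
        constraint.constraintActive = true;
    }
}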
I have created a simple Unity AR Foundation app which places objects on a plane whenever the screen is touched. I would like to add some UI so the user can press a button rather than anywhere on the screen.
I have followed several different tutorials which seem to be doing mostly the same thing. I right-click the Hierarchy -> UI -> Button. I have scaled it so it should fit my mobile screen and anchored it to the center so it should be easy enough to find.
These are the canvas settings:
Might the UI somehow be hidden behind the camera feed from the AR Session Origin -> AR Camera? Am I missing any steps to anchor the UI to the screen?
As you can probably tell, I am very new to Unity but I feel like I have followed the tutorials for creating a UI, but it simply won't show. If you need more information, please just ask and I will provide.
Not sure, but it sounds like you might need the Canvas Scaler to scale with the screen size. Change the UI Scale Mode to Scale With Screen Size.
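If you want to set this from code instead of the inspector, here is a minimal sketch; the reference resolution below is an assumed portrait value, so adjust it to your target devices:

using UnityEngine;
using UnityEngine.UI;

// Attach to the Canvas: configures its CanvasScaler to scale with the device resolution.
public class ScaleWithScreen : MonoBehaviour
{
    void Awake()
    {
        var scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(1080f, 1920f); // assumed portrait reference
        scaler.matchWidthOrHeight = 0.5f;                       // balance width/height matching
    }
}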
I was building the wrong scene. I had two very similar scenes, so when I built the app I didn't realize there were no changes because I was looking at the entirely wrong scene.
Once I changed to the correct scene the setup above worked as expected.
A bit of background: I recently added drag-and-drop behavior to my app, so I can drag items from e.g. the Finder into my NSTableView. Now I want to write a few UI tests for this new functionality.
The general idea was to move the Finder window to the left side of the screen and my application window to the right side, and then execute the drag and drop. The drag and drop itself is not the problem; the problem is setting up the window layout just described. I cannot find a convenient way to resize and move the two windows. Coming from .NET, I expected something like app.window.setSize(..) or app.window.moveTo(...).
What I tried so far:
As I have Magnet installed on my Mac, I tried the easy way out and sent key events (Control + Option + Arrow) to the window. This did not work; sending the keystrokes just results in an error beep. Doing the same thing manually during the test works, so I don't know what exactly stops Magnet from rearranging the windows, but I guess it has something to do with the testing framework. I did not dig deeper into this, as it would have been a cheap solution anyway.
Dragging the app window corners based on the screen dimensions, e.g. for the window on the left I drag the corners to the top left, bottom left, top middle and bottom middle of the screen. This requires that all four corners are visible on screen, but that's a problem for another day. This approach would normally work, but the y-coordinates I get from my app window's frame are not what I expected. I retrieve the window's location with app.windows.firstMatch.frame.origin; the x-coordinates look alright, but the y-coordinates are totally off (from what I expected).
I can't find many resources about the origin or frame members. Any idea on how to approach this problem, or where to find documentation about the XCUITest framework and the basic concepts behind it? The official documentation doesn't help in this case. I only found this short explanation in the Apple documentation archive about the coordinate system of macOS (or OS X back then) applications.
I'm quite new to Unity, so I'm sorry if this is a basic question. I've been trying to set up the UI for a mobile game, but I'm not quite sure how to make the UI keep its position no matter the screen size. I've tried using anchors (though I don't fully understand how to use them properly), I've tried using a Canvas Scaler, and I've looked through the Unity documentation, but I just can't seem to find an answer. The buttons end up off screen or half off the screen when I build the game to my device or switch screen sizes in the Game view. Does anyone know how to fix this?
You can set your anchor point by selecting the UI object (such as a button), opening the anchor presets in its Rect Transform, and choosing the right anchor point. You can also hold Shift to set the pivot and/or Alt to move the object to that point at the same time. The object will now be anchored to that point and keep its position even if the resolution changes. You can set a precise position from the inspector, too: simply adjust the Pos X and Pos Y values; the object will still adhere to the anchor point.
Note that you might have to play around with the Canvas object's UI Scale mode and its settings to get the right setup.
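If it helps to see the same thing in code, here is a minimal sketch that anchors a button to the bottom-center of the screen; the class name and the 40-pixel offset are just example values:

using UnityEngine;

// Attach to a UI element (e.g. a Button): anchors it to the bottom-center of its Canvas
// so it keeps that position regardless of screen size.
public class AnchorToBottomCenter : MonoBehaviour
{
    void Awake()
    {
        var rect = GetComponent<RectTransform>();
        rect.anchorMin = new Vector2(0.5f, 0f);       // anchor to the Canvas's bottom-center
        rect.anchorMax = new Vector2(0.5f, 0f);
        rect.pivot = new Vector2(0.5f, 0f);           // pivot at the element's own bottom-center
        rect.anchoredPosition = new Vector2(0f, 40f); // 40 px above the bottom edge (example)
    }
}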
I've run through an entire fault tree trying to diagnose this, with no joy.
I'm writing a 2D card game in Unity/C#. I have four panels (one per player) that hold the cards, name, discard pile, etc. for each player. I need a pop-up dialog panel to come up over the player panels when the user wants to change options. For some reason, I cannot get the pop-up to appear over the card sprites (it does appear over the other elements: interior panels, images, text boxes, etc.). I've tried adjusting the Z position of the dialog box panel, but nothing changes. That's problem one, but it leads to a more worrisome issue.
The bigger issue is this. Since the options panel won't display in front of the players' cards, I thought I'd just deactivate the player panels, display the dialog, then deactivate it and re-activate the player panels when it's closed. That works fine for three of the panels. The fourth panel comes back on in its previous state, but the graphics on it no longer update.
I've debugged and discovered that the new cards are being handled correctly (sprite names changing, etc.), the discard pile is being updated, and the player's name is being highlighted/de-highlighted as the game progresses, but none of it is appearing! The panel is visually stuck in the state it was in when I deactivated it.
Investigating further, I've determined the error crops up any time I deactivate and then re-activate that player's panel, whether I do it via the inspector (attaching those events to a button click) or in-line in a script. I don't even have to open the options dialog box: I put SetActive(false/true) statements in my game code and it immediately kills the graphics updates for that panel. The sprites, text, etc. remain as they were when I deactivated it and will not update.
player3Obj.gameObject.SetActive(false);
player3Obj.gameObject.SetActive(true);
Doing the same to the other three panels causes no problems; they work fine. I see nothing different about panel 4. In fact, I can deactivate just one of its card sprites, and when I turn it back on it is "stuck" and won't update, even though all the other cards in that player's hand will. Same if I deactivate/re-activate one of the text fields: it will no longer update, but everything else does.
I get no exceptions or errors, but this looks to me like some kind of memory problem, though I can't imagine what. It also shows up in my Android build, so it's not specific to my machine. I'm throwing this question out there hoping someone has seen something similar.
If nothing else, maybe someone can tell me how to get my options panel to display over the card sprites. But I hate to leave a problem undiagnosed: they have a way of coming back and biting.
Update
Here's the code whose result isn't getting displayed. The cardBackSprite value is updating correctly, as is gameObjectSprite, but the image on screen isn't changing:
void DrawCardBitmap2(int Player, int cardSpot, int cardIndex)
{
    // Build the name of the sprite object for this player's card spot, e.g. "Sprite_Player3_2".
    string spriteObjectName = "Sprite_Player" + Player + "_" + cardSpot;

    // Look the object up in the scene and swap in the card-back sprite.
    gameObjectSprite = GameObject.Find(spriteObjectName);
    gameObjectSprite.GetComponent<SpriteRenderer>().sprite = cardBackSprite;
}
There's a lot to unpack here. Let's break down your post into a series of questions:
1. I cannot get the pop-up to appear over the card sprites
It sounds like you're using a UI Canvas in Unity to handle the info panels for your players, but plain game objects for other elements. That's fine, but the Canvas's sorting order works a bit differently from standard game objects.
UI elements in the Canvas are drawn in the same order they appear in the Hierarchy. The first child is drawn first, the second child next, and so on. If two UI elements overlap, the later one will appear on top of the earlier one.
In order for your pop-up to appear above other elements in your Canvas, it needs to come after them (lower down) in the Canvas hierarchy, so that it is drawn later.
Important to note: Canvases set to any Screen Space render mode will render over other game objects in the scene. Canvases set to World Space will render in their world position in the scene. The only render mode that uses Z Position to choose sorting order is World Space, but this is not my recommended solution to your problem.
My recommended solution:
Break your UI into multiple canvases. Specifically, move your pop-up to its own Canvas and make sure it is drawn after your card sprites (give its Canvas a higher Sort Order, or place it later in the scene hierarchy). When you enable/disable or move the pop-up, it will then appear over the card sprites.
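As a quick fix for the draw order inside a single Canvas, you can also push the pop-up to the end of its parent's child list when showing it. A minimal sketch (optionsPanel is a placeholder name for your dialog object):

using UnityEngine;

public class OptionsPopup : MonoBehaviour
{
    [SerializeField] private GameObject optionsPanel; // placeholder: your pop-up dialog panel

    public void ShowOptions()
    {
        // Move the pop-up to the end of its siblings so it is drawn last, i.e. on top.
        optionsPanel.transform.SetAsLastSibling();
        optionsPanel.SetActive(true);
    }
}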
2. Four panels (one per player) that hold the cards
From the context and some of your code, it sounds like you have SpriteRenderers mixed in with your UI Canvas. Sorting these against each other is a notoriously fiddly rendering problem. Common advice involves using two cameras and using camera depth to control which content renders on top. However, redesigning your UI canvas is probably simpler.
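For completeness, here is a rough sketch of that two-camera idea. It assumes the Canvas is set to Screen Space - Camera and rendered by uiCamera, and that the UI objects sit on the built-in "UI" layer; the camera references and class name are placeholders, not part of your project:

using UnityEngine;

public class LayeredCameraSetup : MonoBehaviour
{
    [SerializeField] private Camera worldCamera; // renders the scene and the card sprites
    [SerializeField] private Camera uiCamera;    // renders only the UI layer

    void Awake()
    {
        // The world camera draws everything except UI.
        worldCamera.cullingMask = ~LayerMask.GetMask("UI");

        // The UI camera draws only the UI layer, on top of the world camera's image.
        uiCamera.cullingMask = LayerMask.GetMask("UI");
        uiCamera.clearFlags = CameraClearFlags.Depth; // keep the world camera's image
        uiCamera.depth = worldCamera.depth + 1f;      // higher depth renders later (on top)
    }
}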
3. Using GameObject.Find and complex strings at runtime
GameObject.Find is not performant, and it's not robust. It looks through all elements in the scene and returns the first object it finds with that name.
This poses a few problems:
You cannot have game objects with the same name anywhere in your hierarchy, even if they are nested in different places.
CPU cycles are wasted searching through all objects.
Hidden dependencies on object names that only show up during runtime.
Here's a great blog post on some better practices. I recommend using the [SerializeField] attribute and assigning the reference in the inspector, as sketched below.
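A minimal sketch of that approach, with hypothetical class, field, and method names; the references are wired up once in the inspector instead of being looked up by name at runtime:

using UnityEngine;

public class PlayerHandView : MonoBehaviour
{
    [SerializeField] private SpriteRenderer[] cardSpots; // assigned once in the inspector
    [SerializeField] private Sprite cardBackSprite;      // assigned once in the inspector

    public void ShowCardBack(int cardSpot)
    {
        // No scene-wide search: the reference is already wired up.
        cardSpots[cardSpot].sprite = cardBackSprite;
    }
}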
4. Canvas isn't updating when objects inside of it change
You could try invoking Canvas.ForceUpdateCanvases() in LateUpdate(). This is more of a hack than an actual solution, but if it makes the panel update again, the issue is likely with canvas rendering. If it does not, the problem is probably elsewhere, in code you haven't posted.
A canvas performs its layout and content generation calculations at the end of a frame, just before rendering, in order to ensure that it's based on all the latest changes that may have happened during that frame. This means that in the Start callback and the first Update callback, the layout and content under the canvas may not be up-to-date.
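A minimal sketch of that workaround, purely as a diagnostic:

using UnityEngine;

public class ForceCanvasRebuild : MonoBehaviour
{
    void LateUpdate()
    {
        // Force every canvas to run its layout and content generation now,
        // instead of waiting for the end-of-frame rebuild.
        Canvas.ForceUpdateCanvases();
    }
}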