HoloLens: how to render an element visible only in AR, but not in mixed reality capture - unity3d

I'm making a presentation where someone using the HoloLens is duplicated on a big screen. For duplication it uses the device portal's mixed reality capture option (live stream).
I need to render a tooltip that is visible only to the person wearing the HoloLens, but invisible to the people watching on the big screen.
From what I've seen, the only rendering difference between the two is that I can render black on the live stream (if I omit rendering the alpha channel) while it stays invisible on the HoloLens due to the way its screen works. This is unfortunately useless to me, as I need to show something to the HoloLens viewer, not to the big-screen viewers.
Any ideas on how I can make part of the content visible only to the HoloLens user?
I can't use Spectator View due to other constraints (I need a first-person view).

Found a solution; not the best one possible, but usable.
I render the tooltip objects only to the right eye, since only the contents of the left eye are included in the live view.
For anyone wondering: in a shader there is a built-in value, unity_StereoEyeIndex, that is 1 or 0 depending on the eye. To use this value, it first needs to be set up, roughly as in the sketch below.
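Here is a minimal sketch of that set-up using Unity's stereo rendering macros (the shader name and output color are placeholders, and it assumes single-pass instanced rendering):

Shader "Custom/RightEyeOnlyTooltip"
{
    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                UNITY_VERTEX_INPUT_INSTANCE_ID
            };

            struct v2f
            {
                float4 pos : SV_POSITION;
                UNITY_VERTEX_OUTPUT_STEREO
            };

            v2f vert(appdata v)
            {
                v2f o;
                UNITY_SETUP_INSTANCE_ID(v);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
                o.pos = UnityObjectToClipPos(v.vertex);
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);
                // Eye 0 is the left eye, which is what the live stream
                // records; draw nothing there so only the right eye
                // (and therefore only the wearer) sees the tooltip.
                if (unity_StereoEyeIndex == 0) discard;
                return fixed4(1, 1, 1, 1);
            }
            ENDCG
        }
    }
}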
If anyone has an idea how I can do this without sacrificing stereoscopy, I'll be happy to hear it.

Related

Unity AR UI not showing up

I have created a simple Unity AR Foundation app which places objects on a plane whenever the screen is touched. I would like to add some UI so the user can press a button rather than anywhere on the screen.
I have followed several different tutorials which seem to be doing mostly the same thing. I right-click the Hierarchy -> UI -> Button. I have scaled it so it should fit my mobile screen and anchored it to the center so it should be easy enough to find.
These are the canvas settings:
Might the UI somehow be hidden behind the camera feed from the AR Session Origin -> AR Camera? Am I missing any steps to anchor the UI to the screen?
As you can probably tell, I am very new to Unity. I feel like I have followed the tutorials for creating a UI, but it simply won't show. If you need more information, please just ask and I will provide it.
Not sure, but it sounds like you might need your Canvas Scaler to scale with the screen size. Change the UI Scale Mode to Scale With Screen Size.
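If you prefer to set this from code rather than in the Inspector, here's a minimal sketch (the reference resolution and match value are placeholders, not values from your project):

using UnityEngine;
using UnityEngine.UI;

public class CanvasSetup : MonoBehaviour
{
    void Awake()
    {
        // Scale the UI relative to a reference resolution instead of a
        // constant pixel size, so it fits different phone screens.
        var scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(1080, 1920); // assumed portrait target
        scaler.matchWidthOrHeight = 0.5f; // blend between matching width and height
    }
}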
I was compiling the wrong scene. I had two very similar scenes, so when I compiled I didn't realize nothing had changed and that I was inspecting the wrong scene entirely.
Once I changed to the correct scene, the setup above worked as expected.

Part of unity text field jittery/smudging in Hololens 2

My team has made an application in Unity3D with MRTK for the HoloLens 2. Our main menu inside the application does not use a Canvas, but consists of Quads to display pictures and Text Mesh Pro 3D text fields. I have found that, while this menu is open, several elements like the top-left corner picture and parts of the text fields are jittery when you hold your head steady. When you nod your head, the affected parts of the text seem to lag behind, so that they end up lower or higher than the text that remains steady.
The cutoff point between stable and unstable text is always the same. There is a central area that is stable; text that is too high, or too far to the left or right, is unstable. The division runs through the middle of the letters (for example, the top-most part of the capital letter S is unstable, while the smaller letter m is stable). It does not matter whether the viewport is centered on the middle or the side of the menu. Other objects in the menu, such as buttons, that are further from the center, are still stable.
I'm aware that there can be problems with hologram stability, but I do not understand why only parts of the same text field are affected. I can't include screenshots or videos because the effect doesn't show up in screen captures of the HoloLens.
Does anyone know what could be causing part of an object to be unstable in the HoloLens, and what might be done about it?
Edit: I made an edited screenshot to try to recreate the visual effect seen in the HoloLens:
It seems to be related to depth reprojection. Text doesn't write to the depth buffer by default, which can lead to instability. MRTK has some tips, including some specifically for TextMeshPro: Depth buffer sharing in Unity.
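As a hedged sketch of the usual workaround: make the text material write to the depth buffer so reprojection has depth data to stabilize against. This assumes the text's shader exposes a _ZWrite property (MRTK's TextMeshPro shader does; the stock TMP shader may not):

using TMPro;
using UnityEngine;

public class StabilizeText : MonoBehaviour
{
    void Start()
    {
        // Assumption: the font material's shader has a _ZWrite toggle.
        // Writing depth lets the HoloLens reproject the text correctly.
        var text = GetComponent<TextMeshPro>();
        text.fontMaterial.SetFloat("_ZWrite", 1f);
    }
}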

Panel/Sprites won't update after being set inactive then reactivated

I've run through an entire fault tree trying to diagnose this, with no joy.
I'm writing a 2D card game in Unity/C#. I have four panels (one per player) that hold the cards, name, discard pile, etc. for each player. I need a pop-up dialog panel to come up over the player panels when the user wants to change options. For some reason, I cannot get the pop-up to appear over the card sprites (it does appear over the other elements: interior panels, images, text boxes, etc.). I've tried adjusting the Z position of the dialog box panel, but nothing changes. That's problem one, but it leads to a more worrisome issue.
The bigger issue is this: since the options panel won't display in front of the players' cards, I thought I'd just deactivate the player panels, display the dialog, then deactivate it and re-activate the player panels when it's closed. That works fine for three of the panels. The fourth panel comes back on in its previous state, but the graphics on it will no longer update.
I've debugged and discovered the new cards are being handled correctly (sprite names changing, etc.), the discard pile is being updated, and the player's name is being highlighted/de-highlighted as the game progresses, but none of it is appearing! It's visually stuck in the state it was in when I deactivated it.
Investigating further, I've determined the error crops up any time I deactivate and then re-activate that player's panel, whether I do it via the Inspector (attaching those events to a button click) or inline in script. I don't even have to open the options dialog box: I put SetActive(false/true) statements in my game code and it immediately kills the graphics updating for that panel. The sprites, text, etc. remain as they were when I deactivated it and will not update.
player3Obj.gameObject.SetActive(false);
player3Obj.gameObject.SetActive(true);
Doing that to the other three panels causes no such problem; they work fine. I see nothing different about panel 4. In fact, I can deactivate just one of its card sprites, and when I turn it back on, it is now "stuck" and won't update, even though all the other cards in that player's hand will. Same if I deactivate/re-activate one of the text fields: it will no longer update, but everything else does.
I've got no exceptions or errors, but this looks to me like some kind of memory problem, though I can't imagine what. It shows up in my Android build too, so it's not specific to my machine. I'm throwing this question out there hoping someone has seen something similar.
If nothing else, maybe someone can tell me how to get my options panel to display over the card sprites. But I hate to leave a problem undiagnosed: they have a way of coming back and biting.
Update
Here's the code whose result isn't getting displayed. The cardBackSprite values are updating correctly, as is the gameObjectSprite, but the image onscreen isn't changing:
void DrawCardBitmap2(int Player, int cardSpot, int cardIndex)
{
    // Look up this player's card-slot sprite object by name and swap its sprite.
    string spriteObjectName = "Sprite_Player" + Player + "_" + cardSpot;
    gameObjectSprite = GameObject.Find(spriteObjectName);
    gameObjectSprite.GetComponent<SpriteRenderer>().sprite = cardBackSprite;
}
There's a lot to unpack here. Let's break down your post into a series of questions:
1. I cannot get the pop-up to appear over the card sprites
It sounds like you're using the UI Canvas in Unity to handle your info panels for your players, but plain GameObjects for other elements. This is good, but the Canvas's sorting order works a bit differently from standard GameObjects'.
UI elements in the Canvas are drawn in the same order they appear in the Hierarchy. The first child is drawn first, the second child next, and so on. If two UI elements overlap, the later one will appear on top of the earlier one.
In order for your pop-up to appear above other elements in your canvas, it needs to come later (lower down) in the Canvas hierarchy, so that it is drawn last.
Important to note: Canvases set to any Screen Space render mode will render over other game objects in the scene. Canvases set to World Space will render in their world position in the scene. The only render mode that uses Z Position to choose sorting order is World Space, but this is not my recommended solution to your problem.
My recommended solution:
Break your UI into multiple canvases. Specifically, move your pop-up to its own canvas and give that canvas a higher sorting order than (or place it later in the scene hierarchy than) the canvas holding your player panels. When you enable/disable or move the pop-up, it will then appear over the card sprites.
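A minimal sketch of that approach (class name and order value are illustrative; attach to the pop-up's own Canvas):

using UnityEngine;

public class PopupSorting : MonoBehaviour
{
    void Awake()
    {
        // Sort the pop-up's canvas above the player panels' canvas.
        var canvas = GetComponent<Canvas>();
        canvas.overrideSorting = true; // only needed if this canvas is nested
        canvas.sortingOrder = 100;     // anything higher than the panels' order
    }
}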
2. Four panels (one per player) that hold the cards
From context and some of your code, it sounds like you have SpriteRenderers mixed in with your UI Canvas. Sorting sprites against UI is a notoriously fiddly rendering problem. Common advice involves rendering with two cameras and using camera depth to raise the sprites over the UI elements. However, redesigning your UI canvas is probably simpler.
3. Using GameObject.Find and complex strings at runtime
GameObject.Find is not performant, and it's not robust. It looks through all elements in the scene and returns the first object it finds with that name.
This poses a few problems:
You can't safely have game objects with the same name anywhere in your hierarchy, even nested in different places, because Find simply returns the first match.
CPU cycles are wasted searching through all objects.
Hidden dependencies on object names that only show up during runtime.
Here's a great blog post on some better practices. I recommend using the [SerializeField] attribute and wiring the references up in the Inspector.
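For example, a minimal sketch of the Inspector-wired approach (the class and field names are illustrative, not from your project):

using UnityEngine;

public class PlayerPanel : MonoBehaviour
{
    // Drag each card's SpriteRenderer onto this array in the Inspector.
    [SerializeField] private SpriteRenderer[] cardSpots;

    public void SetCardSprite(int cardSpot, Sprite sprite)
    {
        // Direct reference: no scene-wide search, no name strings to break.
        cardSpots[cardSpot].sprite = sprite;
    }
}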
4. Canvas isn't updating when objects inside of it change
Consider invoking Canvas.ForceUpdateCanvases() in LateUpdate() (see the sketch below). This is more of a hack than a proper solution, but if it fixes things, the issue is likely with canvas rendering; if it doesn't, the problem probably lies elsewhere, in code you haven't shown.
A canvas performs its layout and content generation calculations at the end of a frame, just before rendering, in order to ensure that it's based on all the latest changes that may have happened during that frame. This means that in the Start callback and the first Update callback, the layout and content under the canvas may not be up-to-date.
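A sketch of that diagnostic (attach to any object in the affected scene):

using UnityEngine;

public class CanvasRefreshHack : MonoBehaviour
{
    void LateUpdate()
    {
        // Force the canvas layout/content pass to run now rather than
        // waiting for the end of the frame.
        Canvas.ForceUpdateCanvases();
    }
}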

Resolution issue when building the game

I am using the ViewportHandler script for Unity (https://github.com/dfsp-spirit/way2close/blob/master/Way2Close/Assets/Scripts/ViewportHandler.cs) to make my UI appear the same at different resolutions. I'm pretty sure it was looking just fine, and pretty much the same at all resolutions (with different graphics quality due to stretching, but that is fine).
I have opened up my project after a while and am now noticing that, while the game scene looks fine inside the editor, the UI elements change position at all resolutions when I build the game.
I am attaching two screenshots to show the difference. The Editor one is the proper one, where elements are aligned correctly. The other one is from building the game and running it full screen.
The weird thing is that when I build the game, every resolution displays the wrong way (as in picture 1). So the elements are actually resizing properly; they are just in the wrong place, and I really can't see why. Any ideas?
(My Canvas is Screen Space - Overlay, Scale With Screen Size, reference resolution 2560x1440, Match Width Or Height, and reference pixels per unit 100.)
You don't need third-party scripts to achieve a constant size across different resolutions. Use the Canvas Scaler component on your canvas and set it to Constant Physical Size. Unity should handle all the rest.
If images/sprites change position, try changing their anchor points to fit your needs.
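The same setting from code, as a sketch (the physical unit chosen here is an assumption):

using UnityEngine;
using UnityEngine.UI;

public class PhysicalSizeCanvas : MonoBehaviour
{
    void Awake()
    {
        // Size UI by physical units so elements keep the same apparent
        // size regardless of resolution and DPI.
        var scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ConstantPhysicalSize;
        scaler.physicalUnit = CanvasScaler.Unit.Points;
    }
}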

Things to consider when writing for touch screen?

I'm starting a new project which involves developing an interface for a machine that measures wedge and roundness of lenses and stores the information in a database and reports on it. There's a decent chance we're going to be putting a touch screen on this machine so that it doesn't need to have a mouse or keyboard...
I don't have any experience developing for full size touch screens, so I'm looking for advice/tips/info from you guys...
I can imagine you want to make the elements a little larger than normal... space buttons out a bit more.... things like that... anyone have anything else to add?
A few things to consider:
You need to account for parallax error when touching controls. Basically, the user may touch the screen above or below your actual control and therefore miss it. This is a combination of the size of the control (eg you can make the active area larger than the visible control to allow the user to miss and still activate it), the viewing angle of the user (which you may or may not be able to predict/control), and the type of touch screen you're using. If you know where the user will be placed relative to the screen when using it, you can usually accommodate this with appropriate calibration.
Depending on the type of touch screen, you may need to ensure that your users aren't wearing gloves or using an implement other than their fingers (eg the end of a pen) to touch the screen. Some screens (eg capacitive ones) don't respond well to anything other than flesh and blood.
Avoid using double clicks because it can be very hard for users to reliably double click a control. This can be partly mitigated if you've got experienced/trained users working in a fairly controlled environment where they're used to the screens.
Linked to the above, if you are using double clicks, you may find the double click activated when the user only wants to single click. This is because it's very easy for the user's finger to bounce slightly on touching the screen and, depending on how sensitive the double click settings are, trigger a double rather than a single click. For this and the previous reason, we always disable double clicks and only use single clicks (or similar single activation controls).
However big you think you need to make the controls to allow for touch activation, they almost certainly need to be bigger still. Make sure you test the interface with real users in the real deployment environment (or as close to it as you can get). For example, we deployed some screens with nice big buttons you couldn't miss only to find that the control room was unheated and that the users were wearing thick gloves in the middle of winter, making their fingers way bigger than we had allowed for.
Don't put any controls near the edges of the screen - it's very hard to get your finger into the edges (particularly if the screen has a deep bezel) and a slight calibration problem can easily shift the control too close to the edge to use. Standard menus and scroll bars are a good example of controls that can be very tricky to use on a touch screen and you should either avoid them (which is preferable - they're not good for touch screens) or replicate them with jumbo equivalents.
Remember that the user's hand will be over the screen, obscuring some of the screen and controls (typically those below where the user is touching, but it depends on the position of the user relative to the screen). Don't put instructions or indicators where the user's hand or arm will obscure them when trying to use the control they relate to (eg typically put them above rather than below the control).
Depending on the environment, make sure your touch screen is suitably proofed against dust, damp, grease etc and make sure it's easy to clean without damaging it. You wouldn't believe the slime that can quickly accumulate on a touch screen in an industrial or public setting.
The other obvious one is that there's no equivalent of pointer 'hover'. Not that that affects many apps though.
If you decide to put in analog controls (scrollbars, rotation widgets, etc) be sure to put in a digital control also. Some companies think that a touch screen means perfect control over something with your fingers. In real life, this translates to minutes of frustration trying to fix a number that's just a little off.
The most obvious thing is that everything on the GUI needs to be big enough for a fingertip to hit, which is sometimes bigger than you think.
As has been mentioned, there's really no way for a right-click action to happen. Also, double-clicking can be tricky with a fingertip on a touch screen.
The other major thing is that you'll want to provide an on-screen keyboard that pops up for text entry and an on-screen numpad for number-only fields.
I wrote my own set of controls for a POS application designed specifically to be touchscreen friendly.
Remember to allow enough real estate for stubby fingers and talons. In our application, users can have manicures that force them to use the pad of the finger instead of the tip. This means you need to allow more space for activation areas than you would normally consider in any other type of application.
I would also recommend accommodating yourself as a programmer: for testing, and because requirements change, the application may need to work with a keyboard/mouse on a non-touch workstation. I can't tell you how many times I went to touch my flat-panel LCD expecting something to happen, before remembering that I had to use the mouse.
Make sure to read up on basic UI principles like Fitts's law: the time to acquire a target is a function of the distance to and size of the target, commonly modeled as T = a + b log2(1 + D/W).
Also consider whether the device is stationary or handheld when in use (e.g., a PalmPilot or iPhone); research shows that you must accommodate that in your design.
Larger GUI elements are the major thing, and it applies to all elements: scroll bars, tabs, even text fields.
The other major thing I can think of is that it's hard for the user to right-click, so things that require a right click should be avoided; context menus are the only example that comes to mind at the moment.
The other responses are pretty good, but are you totally sure that a touch screen would actually be easier to use? There are a lot of devices where a touch screen actually makes them much harder to use, not easier. The main problem is that you can't use the device when you're not looking at it. If users are going to be doing a lot of repetitive actions, a keyboard could be a lot more efficient.
Also, a touch screen might be a lot harder to use by someone with a disability, if you think there's even a small chance that could happen.
Even though this is quite old now, I found it to still be useful, as a starting point for design considerations.
http://www.sapdesignguild.org/resources/tsdesigngl/index.htm
If you've not already done so, have a look at some of the documentation available for developers on mobile platforms, eg Windows Mobile, iPhone.