New UI Builder/Toolkit & VR World Space - unity3d

Is it possible to use the new UI Builder and UI Toolkit for world-space UI in virtual reality?
I have seen ways of doing it with render textures, but not only is the result not the same as a World Space Canvas (which I did expect, but it's not even close), I also can't find a way of interacting with it using the VR raycast method.

This isn't officially supported yet but is certainly possible for those willing to implement it themselves.
As the OP mentioned, rendering the UI is straightforward enough: the main idea is to set panelSettings.targetTexture to a RenderTexture, which you can then apply to a quad like any other texture.
Note that if you have multiple UIs you will need multiple instances of PanelSettings.
For raycasting there is a method, panelSettings.SetScreenToPanelSpaceFunction, which can be used to translate 2D screen coordinates to panel coordinates; there is an official Unity sample demonstrating how it can be implemented in camera space. The function is called every update, so it can be hijacked to use a raycast from a controller instead of screen coordinates, although I've had mixed results with this approach.
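As an illustration, here is a minimal sketch of that idea, assuming the panel renders into a RenderTexture that is applied to a quad with a MeshCollider; the field names and ray length are placeholders rather than anything from the sample:

    using UnityEngine;
    using UnityEngine.UIElements;

    // Minimal sketch: redirect UI Toolkit picking to a controller ray.
    // Assumes panelSettings.targetTexture is the RenderTexture shown on a quad
    // that has a MeshCollider; field names here are placeholders.
    public class ControllerPanelRaycaster : MonoBehaviour
    {
        [SerializeField] PanelSettings panelSettings; // renders to a RenderTexture
        [SerializeField] Transform rayOrigin;         // e.g. the XR controller transform
        [SerializeField] float maxDistance = 10f;

        void OnEnable()
        {
            panelSettings.SetScreenToPanelSpaceFunction(ScreenToPanel);
        }

        // Called by UI Toolkit whenever it needs a pointer position; the incoming
        // screen position is ignored and a controller raycast is used instead.
        Vector2 ScreenToPanel(Vector2 _)
        {
            var noHit = new Vector2(float.NaN, float.NaN); // NaN means "no pointer over this panel"

            if (!Physics.Raycast(rayOrigin.position, rayOrigin.forward, out RaycastHit hit, maxDistance))
                return noHit;
            if (!(hit.collider is MeshCollider))
                return noHit; // textureCoord is only valid for MeshColliders

            // Convert the hit UV into panel pixel coordinates (panel Y points down).
            Vector2 uv = hit.textureCoord;
            uv.y = 1f - uv.y;
            uv.x *= panelSettings.targetTexture.width;
            uv.y *= panelSettings.targetTexture.height;
            return uv;
        }
    }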
Check out this repo for an example implementation in XR; it is a more sophisticated solution that makes extensive use of the Input System.

Show the Unity particle on the Canvas

I am using Unity version 2021.3.15f
I want to show the particles on the UI canvas.
I'd like to know two things: how to show a particle on the canvas when the canvas render mode is Screen Space - Overlay, and when it is Screen Space - Camera.
Do I need to convert the particle's transform into a RectTransform?
Or should I use methods like Camera.ScreenToWorldPoint?
You could always move the particle to a position computed with Camera.ScreenToWorldPoint and it will work, but keep in mind this is just a band-aid fix and won't be robust or maintainable. Usually anything UI-related should be compatible with Unity's UI rendering pipeline.
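For the Screen Space - Camera case, that band-aid approach could look roughly like the sketch below; the field names are placeholders, and it simply keeps a world-space particle system lined up with a UI element:

    using UnityEngine;

    // Sketch of the "band-aid" approach for a Screen Space - Camera canvas:
    // keep a world-space ParticleSystem positioned over a UI element.
    // Field names are placeholders, not part of any package.
    public class ParticleOverUI : MonoBehaviour
    {
        [SerializeField] Camera uiCamera;          // the camera assigned to the canvas
        [SerializeField] RectTransform uiTarget;   // UI element the particles should follow
        [SerializeField] float distanceFromCamera = 1f;

        void LateUpdate()
        {
            // Project the UI element to screen space, then back into world space
            // at a fixed distance in front of the UI camera.
            Vector3 screenPos = RectTransformUtility.WorldToScreenPoint(uiCamera, uiTarget.position);
            screenPos.z = distanceFromCamera;
            transform.position = uiCamera.ScreenToWorldPoint(screenPos);
        }
    }

Note that Screen Space - Overlay has no canvas camera at all, which is one reason the package linked below is the more robust route.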
There is a great resource for adding particle effects to UGUI in a GitHub repository.
Use: https://github.com/mob-sakai/ParticleEffectForUGUI
Take a look at the sample scenes; they have everything you need.

Unity XR Single Pass Instanced rendering and UI

I was wondering if anyone has recommendations regarding the use of the Unity Canvas-based UI system (UGUI) together with the Single Pass Instanced rendering mode for XR applications?
My concern is whether the UI elements will render as Single Pass Instanced or whether they are actually just rendered twice, potentially causing performance issues.
As far as I can see, the default UI shader (Unity 2019.4.21 built-in shaders for the Built-in Render Pipeline) doesn't appear to support GPU instancing (correct me if I am wrong). I can of course create my own shader with support for GPU instancing in accordance with the guidelines here, but I don't know whether the UI rendering system will actually respect that; I suspect there might be a reason why it is not implemented in the default UI shader...
And if UI rendering does indeed not support GPU instancing, does it have some other optimized way of rendering that makes up for the lack of it?
I am sorry for these slightly fuzzy questions. I am just trying to figure out which path to take with my project - whether to go the UI (UGUI) way or not.
Best regards, Jakob
I am migrating a big VR project to Unity 2021 and Single Pass Instanced rendering.
I have had no issues with UGUI. I did have issues with some of our own shaders and some third-party shaders; in those cases the rendering was visible in only one eye, or differed in some way between the two eyes.
I did not check the draw calls specifically for UGUI, but in my experience, if the output is identical in both eyes, it is rendered once.
I have both screen-space and world-space GUI.
Alex

How do I use different Post-Processing effects on different cameras in Unity 2017.4.2f2?

Before I explain my situation, it's important that I mention I'm using an older version of Unity, 2017.4.2f2 (for PS Vita support). For this reason, I'm also using the old Post-Processing Stack.
I'm making a game in which I want a lot of bloom on the UI, but not as much on the rest of the game. I used one camera to render the UI and gave it post-processing (the canvas is set to Screen Space - Camera), and another camera to render the rest of the game.
Each is given a different profile so they apply different effects.
My expectation was that the UI camera would only apply its effects to the canvas. Instead, it also applies them to the output of the camera beneath it, the game camera.
I used Don't Clear as the UI camera's clear flags. I also tried Depth Only, to see if it would make a difference.
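For reference, the two-camera setup described above corresponds roughly to the sketch below; the layer name, depths and clear flags are assumptions about my own scene, and each camera additionally gets its own Post-Processing Behaviour and profile in the Inspector:

    using UnityEngine;

    // Sketch of the described camera stacking; this only configures how the two
    // cameras draw on top of each other (the post-processing profiles are
    // assigned per camera in the Inspector).
    public class UiCameraStack : MonoBehaviour
    {
        [SerializeField] Camera gameCamera; // renders everything except the UI layer
        [SerializeField] Camera uiCamera;   // renders only the UI layer, on top

        void Awake()
        {
            // Game camera draws first and clears the frame.
            gameCamera.depth = 0;
            gameCamera.clearFlags = CameraClearFlags.Skybox;
            gameCamera.cullingMask = ~(1 << LayerMask.NameToLayer("UI"));

            // UI camera draws afterwards, on top of the game camera's image.
            uiCamera.depth = 1;
            uiCamera.clearFlags = CameraClearFlags.Depth; // "Depth only"
            uiCamera.cullingMask = 1 << LayerMask.NameToLayer("UI");
        }
    }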
I'm lost as to what I could do. Grain and bloom get applied to everything, yet the profile containing those effects is only assigned to the UI camera's Post-Processing Behaviour script.
Does anyone have any suggestions or ideas? I'm lost.

How to grab the 2D views/textures from a 3D Object in Unity

I am working on a projection mapping project and I am prototyping in Unity 3D. I have a cube-like object with a 3D terrain and characters in it.
To recreate the 3D perspective and feel, I am using two projectors that will project onto a real-world object exactly like the Unity object. In order to do this I need to extract 2D views from the shape in Unity.
Is there an easy way to achieve this?
Interesting project. It sounds like you would need multiple displays, one for each projector, each using a separate virtual camera in Unity, as described in Unity's multi-display documentation.
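As a rough sketch of that multi-display approach (the camera names are placeholders; display 0 is always active, and additional displays have to be activated at runtime):

    using UnityEngine;

    // Sketch: drive two projectors as two Unity displays, one camera each.
    // The camera references are placeholders to be assigned in the Inspector.
    public class TwoProjectorOutput : MonoBehaviour
    {
        [SerializeField] Camera projectorCameraA;
        [SerializeField] Camera projectorCameraB;

        void Start()
        {
            // Display 0 (the main display) is always active; activate the rest.
            for (int i = 1; i < Display.displays.Length; i++)
                Display.displays[i].Activate();

            // Route each camera's output to its own display/projector.
            projectorCameraA.targetDisplay = 0;
            projectorCameraB.targetDisplay = 1;
        }
    }

Keep in mind that additional displays can only be activated in a standalone player, not in the Editor's Game view.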
Not sure if I understood your concept correctly from the description above. If the spectator should be able to walk around the cube, onto which the rendered virtual scene should be projected, it would also be necessary to track a spectator's head/eyes to realize a convincing 3D effect. The virtual scene would need to be rendered from the matching point of view in virtual space (works for only one spectator). Otherwise the perspective would only be "right" from one single point in real space.
The effect would also only be convincing with stereo viewing, for example using shutter glasses or something similar. Shadows are another problem when projecting onto the cube from outside the scene. With only two projectors, you would also need to correct the perspective distortion when projecting onto multiple sides of the cube at the same time.
As an inspiration: there's also this fantastic experiment by Johnny Chung Lee demonstrating a head-tracking technique using the Wii Remote, which might be useful in a projection mapping project like yours.
(In order to really solve this problem, it might be best to use AR glasses instead of conventional projectors, with the projector built into the glasses, combined with special projection surfaces that allow for multiple spectators at the same time (like CastAR). But I have no idea whether these devices are already on the market... However, I see the appeal of simple projection mapping without special equipment. In that case it might be possible to move away from a realistic 3D scene and use more experimental/abstract graphics projected onto the cube...)

3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, by letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images as Vuforia does.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. This definitely sounds worth testing.
Option B) Tango
Tango doesn't natively support markers other than AR tags and QR codes.
It also doesn't support the area-learned scene moving (much). If your 3D-printed objects stay stationary, you could scan an ADF (Area Description File) and should get good-quality tracking. If all the objects stay still, you should see a little drift, but not too much.
However, if you are moving those 3D-printed objects, it will definitely throw that tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with AR markers using Tango's AR marker detection (unsure: is that what you already tried?). If that approach doesn't work, I think your only Tango option is to add more features, lighting, etc. to the space to make the tracking more solid.
Overall, natural feature tracking with Vuforia (or marker tracking for robustness) sounds more suited to what I think your project is doing, as users will mostly be looking at the AR tag / NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.