Unity XR Single Pass Instanced rendering and UI - unity3d

I was wondering if anyone has recommendations regarding the use of the Unity Canvas-based UI system (UGUI) together with the Single Pass Instanced rendering mode for XR applications.
My concern is whether the UI elements will render as Single Pass Instanced or whether they are actually rendered twice, potentially causing performance issues.
As far as I can see in the default UI shader (Unity 2019.4.21 built-in shaders for the Built-in Render Pipeline), it does not appear to support GPU instancing (correct me if I am wrong). I could of course create my own shader with GPU instancing support following the guidelines here, but I don't know whether the UI rendering system will actually respect that; I suspect there may be a reason it is not implemented in the default UI shader...
And if the UI rendering does indeed not support GPU Instancing, does it then have some other optimized way of rendering that makes up for the lack of GPU Instancing?
I am sorry for these slightly fuzzy questions. I am just trying to figure out which path to take with my project - whether to go the UI (UGUI) way or not.
Best regards, Jakob

I am migrating a big VR project to Unity 2021 and Single Pass Instanced.
I have had no issues with UGUI. I did have issues with some of our own shaders and with third-party shaders; in those cases the rendering was either visible in only one eye or differed in some way between the two eyes.
I did not check the draw calls specifically for UGUI, but in my experience, if it looks the same in both eyes it is rendered once.
I have both screen-space and world-space GUI.
Alex

Related

New UI Builder/Toolkit & VR World Space

Is it possible to use the new UI Builder and UI Toolkit for world-space UI in virtual reality?
I have seen ways of doing it with render textures, but not only does it not seem to be the same as a World Space Canvas (which I did expect, but it is not even close), I also can't find a way of interacting with it using the VR raycast method.
This isn't officially supported yet but is certainly possible for those willing to implement it themselves.
As the OP mentioned, rendering the UI is straightforward enough: the main idea is to set panelSettings.targetTexture to a RenderTexture, which you can then apply to a quad like any other texture.
Note that if you have multiple UIs, you will need multiple PanelSettings instances.
For raycasting there is a method, panelSettings.SetScreenToPanelSpaceFunction, which can be used to translate 2D coordinates into panel coordinates; here is an Official Unity Sample demonstrating how it can be implemented in camera space. The function is called every update, so it can be hijacked to use a raycast from a controller instead of screen coordinates, although I've had mixed results with this approach.
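Here is a minimal sketch of that idea, assuming a PanelSettings asset whose targetTexture is a RenderTexture shown on a quad with a MeshCollider; the field names and the NaN "no hit" convention come from Unity's sample rather than from any API requirement:

using System;
using UnityEngine;
using UnityEngine.UIElements;

public class WorldSpacePanel : MonoBehaviour
{
    [SerializeField] PanelSettings panelSettings; // its targetTexture is a RenderTexture
    [SerializeField] MeshCollider panelCollider;  // collider on the quad that shows that texture

    void OnEnable()
    {
        // Translate the 2D position UI Toolkit asks about into panel space by
        // raycasting into the world and converting the hit UV to panel pixels.
        panelSettings.SetScreenToPanelSpaceFunction(screenPosition =>
        {
            var invalid = new Vector2(float.NaN, float.NaN); // "no hit", as in Unity's sample
            var ray = Camera.main.ScreenPointToRay(screenPosition);
            if (!panelCollider.Raycast(ray, out var hit, 100f))
                return invalid;

            // Panel space has its origin at the top left, so flip V.
            var uv = hit.textureCoord;
            var rt = panelSettings.targetTexture;
            return new Vector2(uv.x * rt.width, (1f - uv.y) * rt.height);
        });
    }
}

To drive it from a controller instead, replace Camera.main.ScreenPointToRay with a ray built from the controller's pose; that is the "hijack" mentioned above.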
Check out this repo for an example implementation in XR; it is a more sophisticated solution that makes extensive use of the Input System.

How do I use different Post-Processing effects on different cameras in Unity 2017.4.2f2?

Before I explain my situation, it's important to mention that I'm using an older version of Unity, 2017.4.2f2 (for PS Vita support). For this reason, I'm also using the old Post-Processing Stack.
I'm making a game in which I want a lot of bloom on the UI, but not as much for the rest of the game. I used one camera to render the UI and gave it post-processing (the canvas is Screen Space - Camera), and another camera to render the rest of the game.
Each camera is given a different profile so they can use different effects.
My expectation was that the UI camera would only apply its effects to the canvas. Instead, it also applies them to the camera beneath it, the game camera.
I set the UI camera's clear flags to Don't Clear, and I also tried Depth Only to see if it would make a difference.
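Roughly, the setup is equivalent to this sketch (it assumes the old stack's PostProcessingBehaviour component and a "UI" layer for the canvas; the names are just placeholders for my actual objects):

using UnityEngine;
using UnityEngine.PostProcessing; // old Post-Processing Stack (v1)

public class SplitPostProcessing : MonoBehaviour
{
    [SerializeField] Camera gameCamera;                 // renders everything except UI
    [SerializeField] Camera uiCamera;                   // renders only the UI canvas
    [SerializeField] PostProcessingProfile gameProfile; // mild bloom
    [SerializeField] PostProcessingProfile uiProfile;   // heavy bloom + grain

    void Awake()
    {
        gameCamera.clearFlags = CameraClearFlags.Skybox;
        gameCamera.cullingMask = ~LayerMask.GetMask("UI");
        gameCamera.depth = 0;
        gameCamera.GetComponent<PostProcessingBehaviour>().profile = gameProfile;

        uiCamera.clearFlags = CameraClearFlags.Nothing; // "Don't Clear" (Depth Only was also tried)
        uiCamera.cullingMask = LayerMask.GetMask("UI");
        uiCamera.depth = 1;                             // draws on top of the game camera
        uiCamera.GetComponent<PostProcessingBehaviour>().profile = uiProfile;
    }
}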
I'm lost as to what I could do: grain and bloom get applied to everything, yet the profile for those effects is only assigned to the UI camera's Post Processing Behaviour script.
Does anyone have any suggestions or ideas?

Oculus Quest Single-Pass and Multi-Pass not working?

I am using the following configuration:
Unity: 2019.2.13f1
Device: Oculus Quest (using LWRP)
Issues:
(a) When I change the "Stereo Rendering Mode" to "Single-Pass", the rendered image appears too small and too far away.
(b) When I change the "Stereo Rendering Mode" to "Multi-Pass", the rendering is only visible in the left eye.
(c) The only mode that works is "Multi-View". Unfortunately, there is also jittery motion when it is used: objects close to the user start to jitter, and it is very noticeable.
Issue (c) is why I would like to use Single-Pass or Multi-Pass rendering, since that would avoid the problem.
Has anyone faced similar issues?
This is a recurring problem with LWRP/URP because it uses post-processing effects, and single-pass stereo rendering requires you to tweak shaders to support it.
So, unless you have a compelling reason otherwise, it is best to stick with the standard (built-in) render pipeline.

as3 starling rendertexture vs meshbatch [how to make a choice]

I'm new to Starling and to game development in general. As far as I understand so far, the two optimized rendering techniques on mobile are "RenderTexture" and "MeshBatch".
- At an architectural level, how should we choose between the two?
- Is it also possible to use both simultaneously (e.g. drawing a MeshBatch inside a RenderTexture)?
Those are two orthogonal concepts. You can use both simultaneously in your projects. Furthermore, this applies to any platform, not just mobile.
You don't have to make a choice, use both.
Render textures
This is not a direct optimization; it is an object used to implement particular effects or rendering techniques.
However, it can be used to optimize away some draw calls if you draw a complex object to the render texture once and then draw that texture directly to the screen in subsequent frames.
Starling implements this optimization for filters.
Mesh batching
This is an actual optimization technique. Draw-call overhead can be high, so combining several meshes into one can give some performance benefit if implemented correctly.
Starling does this automatically for its display objects.

How to access target texture for rendering from native world in Unity (VR)?

I want to render VR from a native plugin. Is this possible?
So far, what I have is this:
void OnRenderObject()
{
    // getRenderCallback() is a [DllImport]-ed function exposed by the native plugin;
    // the second argument (1) is the event id passed to the plugin's render callback.
    GL.IssuePluginEvent(getRenderCallback(), 1);
}
This calls my native plugin and renders something, but the result is a mess.
It seems that I am rendering to the wrong texture. How do I get the correct texture/framebuffer (or whatever else is necessary) to render VR from a native OpenGL plugin? For example, I need to know which eye's camera (left or right) is currently being rendered in order to render properly.
Any hints? I am trying with both Gear VR and Google VR; hints about either are welcome.
My interpretation is that, instead of Unity's default rendering from a MeshRenderer, you want to override it with your own custom OpenGL calls.
Without knowing what you are rendering and what the correct behavior should be, I can't help much more.
I can imagine that if you set your own state, such as turning particular depth tests or blends on or off, it will overwrite whatever state Unity set up for you before your custom function runs. If you don't restore that state, Unity is not aware of the change and produces incorrect results.
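For the per-eye information specifically, one option is to read Camera.current.stereoActiveEye inside OnRenderObject and hand it to the plugin through the event id. This is only a sketch, and "MyNativePlugin" plus the getRenderCallback export are assumptions based on your snippet:

using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class NativeVrRenderer : MonoBehaviour
{
    [DllImport("MyNativePlugin")]
    static extern IntPtr getRenderCallback();

    void OnRenderObject()
    {
        // With multi-pass stereo this runs once per eye, and Camera.current
        // reports which eye (Left, Right or Mono) is being rendered.
        var eye = Camera.current.stereoActiveEye;

        // Encode the eye in the event id so the render-thread callback
        // receives it without any extra synchronization.
        GL.IssuePluginEvent(getRenderCallback(), (int)eye);
    }
}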
As a side note, you may want to check out
https://github.com/Samsung/GearVRf/
It's Samsung's open-source framework engine for Gear VR. Being similar to Unity but open source, it lets you do something similar to what you posted while knowing what happens underneath.