Oculus Quest Single-Pass and Multi-Pass not working?

I am using the following configuration:
Unity: 2019.2.13f1
Device: Oculus Quest (using LWRP)
Issues:
(a) When I change the "Stereo Rendering Mode" to "Single-Pass", the rendered image appears too small and too far away.
(b) When I change the "Stereo Rendering Mode" to "Multi-Pass", the rendering is only visible in the left eye.
(c) The only mode that works is "Multi-View". Unfortunately, there is jittery motion when this mode is used: objects near the user start to jitter, and it is very noticeable.
Issue (c) is the reason I would like to use Single-Pass or Multi-Pass rendering, since either would avoid the problem.
Has anyone faced similar issues?

This is a recurring problem with LWRP/URP: because it relies on post-processing effects, single-pass stereo rendering requires you to tweak your shaders to support it.
So, unless you have no other choice, it is safest to stick with the standard (built-in) render pipeline.
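For reference, the stereo rendering mode can also be switched from an editor script. This is a minimal sketch, assuming the built-in XR settings that Unity 2019.2 uses (pre XR Plugin Management); the menu paths and class name are made up for illustration:

using UnityEditor;

// Hedged sketch: switches the "Stereo Rendering Mode" (Player Settings)
// programmatically via the built-in XR settings API.
public static class StereoModeSwitcher
{
    [MenuItem("Tools/XR/Use Multi-Pass Stereo")]
    static void UseMultiPass()
    {
        PlayerSettings.stereoRenderingPath = StereoRenderingPath.MultiPass;
    }

    [MenuItem("Tools/XR/Use Single-Pass Stereo")]
    static void UseSinglePass()
    {
        PlayerSettings.stereoRenderingPath = StereoRenderingPath.SinglePass;
    }
}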

Related

Show Unity particles on the Canvas

I am using Unity version 2021.3.15f
I want to show particles on the UI canvas.
I'd like to know two things: how to show particles on the canvas when the canvas render mode is Screen Space - Overlay, and when it is Screen Space - Camera.
Do I need to convert the particle's transform into a RectTransform?
Or should I use methods like Camera.ScreenToWorldPoint?
You could always position things using Camera.ScreenToWorldPoint and it will work, but keep in mind this is just a band-aid fix and won't be robust or maintainable. Usually, anything UI-related should be compatible with Unity's UI rendering pipeline.
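If you do go the ScreenToWorldPoint route, a minimal sketch looks like this, assuming a Screen Space - Camera canvas (the field and class names here are made up for illustration):

using UnityEngine;

// Hedged sketch: keeps a world-space particle system aligned with a UI
// element on a Screen Space - Camera canvas.
public class ParticleOverUI : MonoBehaviour
{
    public RectTransform uiTarget;  // the UI element to follow
    public Camera uiCamera;         // the camera assigned to the canvas
    public float distance = 1f;     // distance in front of the camera

    void LateUpdate()
    {
        // Project the UI element into screen space, then back into the
        // world at a fixed distance from the camera.
        Vector3 screenPos = RectTransformUtility.WorldToScreenPoint(uiCamera, uiTarget.position);
        screenPos.z = distance;
        transform.position = uiCamera.ScreenToWorldPoint(screenPos);
    }
}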
There is a great resource for adding particle effects to UGUI in a GitHub repository.
Use: https://github.com/mob-sakai/ParticleEffectForUGUI
Take a look at the sample scenes; they have everything you need.

Why do Unity post-processing effects not work in WebGL?

Everything is perfect in the editor: bloom, motion blur, and so on. But when I build the project for WebGL and play it in Chrome, there is no bloom effect at all.
I'm using the built-in render pipeline and linear color space. I even tried manually setting the graphics API to WebGL 2.0, but no luck.
Please help me with this.
Here are my settings and the results (screenshots): player settings, quality settings, the effect in the editor, and the effect in Chrome.
Seems like you need to enable HDR for your quality level, as described here.
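As a quick sanity check, you can verify at runtime whether HDR is actually available and enabled on the camera. This is a hedged sketch, and the class name is made up:

using UnityEngine;

// Hedged sketch: logs whether HDR rendering is available, which bloom in
// the built-in pipeline depends on.
public class HdrCheck : MonoBehaviour
{
    void Start()
    {
        // DefaultHDR is the render texture format used when HDR is enabled.
        bool hdrSupported = SystemInfo.SupportsRenderTextureFormat(RenderTextureFormat.DefaultHDR);
        Debug.Log("HDR render texture supported: " + hdrSupported);

        // Even with HDR enabled in the quality settings, the camera itself
        // must also allow HDR for effects like bloom to work.
        Debug.Log("Camera allows HDR: " + Camera.main.allowHDR);
    }
}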

Unity XR Single Pass Instanced rendering and UI

I was wondering if anyone has recommendations regarding the use of the Unity Canvas-based UI system (UGUI) along with the Single Pass Instanced rendering mode for XR applications.
My concern is whether the UI elements will render as Single Pass Instanced, or whether they are actually just rendered twice, potentially causing performance issues.
As far as I can see in the default UI shader (Unity 2019.4.21 built-in shaders for the built-in render pipeline), it doesn't appear to support GPU Instancing (correct me if I am wrong). I can of course create my own shader with support for GPU Instancing in accordance with the guidelines here, but I don't know whether the UI rendering system will actually respect that; I suspect there might be a reason why it is not implemented in the default UI shader...
And if the UI rendering does indeed not support GPU Instancing, does it have some other optimized way of rendering that makes up for that?
I am sorry for these slightly fuzzy questions. I am just trying to figure out which path to take with my project - whether to go the UI (UGUI) way or not.
Best regards, Jakob
I am migrating a big VR project to Unity 2021 and Single Pass Instanced.
I have had no issues with UGUI. I did have issues with some of our own shaders and some third-party shaders; in those cases, the rendering was visible in only one eye, or it differed in some way between the two eyes.
I did not check the draw calls specifically for UGUI, but in my experience, if it looks the same in both eyes, it is rendered once.
I have both screen-space and world-space GUI.
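If you want to confirm which stereo mode is actually active on the device at runtime, a minimal sketch (the class name is made up) is:

using UnityEngine;
using UnityEngine.XR;

// Hedged sketch: logs the active stereo rendering mode so you can confirm
// whether Single Pass Instanced is really in use.
public class StereoModeLogger : MonoBehaviour
{
    void Start()
    {
        // Reports MultiPass, SinglePass, SinglePassInstanced, or SinglePassMultiview.
        Debug.Log("Active stereo rendering mode: " + XRSettings.stereoRenderingMode);
    }
}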
Alex

How do I use different Post-Processing effects on different cameras in Unity 2017.4.2f2?

Before I explain my situation, it's important to mention that I'm using an older version of Unity, 2017.4.2f2 (for PS Vita support). For this reason, I'm also using the old Post-Processing Stack.
I'm making a game in which I want a lot of bloom on the UI, but not as much for the rest of the game. I used one camera to render the UI and gave it post-processing (Screen Space - Camera on the Canvas),
and another camera to render the rest of the game.
Each is given a different profile so it can use different effects.
My expectation was that the UI renderer camera would only apply its effects to the Canvas. Instead, it also applies them to the camera beneath it: the game renderer camera.
As you can see, I used the Don't Clear clear flags. I also tried Depth Only to see if it would make a difference.
I'm lost as to what I could do. Grain and bloom get applied to everything,
yet the profile for those effects is only assigned to the UI renderer camera's Post Processing Behaviour script.
Does anyone have any suggestions or ideas? I'm lost.
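For reference, this is a sketch of the two-camera layering described above (the field and class names are made up). With the old Post-Processing Stack, each camera's image effects run on everything already in that camera's render target, which is why a UI camera that does not clear color also post-processes the game image beneath it:

using UnityEngine;

// Hedged sketch of the two-camera setup described in the question.
public class CameraLayering : MonoBehaviour
{
    public Camera gameCamera;  // renders the game world
    public Camera uiCamera;    // renders the Screen Space - Camera canvas

    void Start()
    {
        // Render the game first, then the UI on top of it.
        gameCamera.depth = 0;
        uiCamera.depth = 1;

        // Clearing only depth composites the UI over the game image, so the
        // UI camera's post-processing pass sees the game pixels as well.
        uiCamera.clearFlags = CameraClearFlags.Depth;

        // Restrict each camera to its own layers (assumes a "UI" layer exists).
        uiCamera.cullingMask = LayerMask.GetMask("UI");
        gameCamera.cullingMask = ~LayerMask.GetMask("UI");
    }
}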

How to access target texture for rendering from native world in Unity (VR)?

I want to render VR from a native plugin. Is this possible?
So far, what I have is this:
void OnRenderObject()
{
    // Hand off to the native plugin's render callback (event ID 1).
    GL.IssuePluginEvent(getRenderCallback(), 1);
}
This calls my native plugin and renders something, but it makes the mess shown here.
It seems that I am rendering to the wrong texture. How do I get the correct texture/framebuffer, or whatever trick is necessary, to render VR from a native OpenGL plugin? For example, I need information about which eye is being rendered (left or right) in order to render properly.
Any hints? I am trying with both Gear VR and Google VR. A hint about either of them is welcome.
My interpretation is that, instead of Unity's default rendering via the MeshRenderer, you want to overwrite it with your own custom OpenGL calls.
Without knowing what you are rendering and what the correct behavior should be, I can't help more.
I can imagine that if you set up your own state, such as turning particular depth tests or blends on and off, it would overwrite whatever state Unity set up for you before your custom function ran. And if you don't restore that state, Unity is not aware of the change and produces incorrect results.
As a side note, you may want to check out
https://github.com/Samsung/GearVRf/
It's Samsung's open-source framework engine for Gear VR. Being similar to Unity but open source, it lets you do something similar to what you posted while knowing what happens underneath.
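On the left/right eye question: from the C# side you can at least tell the plugin which eye is currently being rendered before issuing the plugin event. This is a hedged sketch; "SetEye" stands for a function your plugin would export, and "NativePlugin" is a made-up library name:

using System;
using System.Runtime.InteropServices;
using UnityEngine;

// Hedged sketch: forwards the active eye to the native plugin each time
// the render callback is issued.
public class NativeStereoRender : MonoBehaviour
{
    [DllImport("NativePlugin")] static extern IntPtr getRenderCallback();
    [DllImport("NativePlugin")] static extern void SetEye(int eye);  // hypothetical export

    void OnRenderObject()
    {
        // Camera.current.stereoActiveEye reports Left, Right, or Mono.
        SetEye((int)Camera.current.stereoActiveEye);
        GL.IssuePluginEvent(getRenderCallback(), 1);
    }
}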