Reducing eye strain when displaying video in VR? - unity3d

I'm working on a VR project where at some points we will display video in front of the user. I'm looking for recommendations on how to keep the video comfortable to focus on without causing eye strain for users. I want to know how others have tackled the issue of showing video inside VR-headset projects/games.
What I have tried so far is increasing/decreasing the size of the video and its distance from the player's eyes/head. Since you can't use the Screen Space - Overlay or Screen Space - Camera canvas modes in VR, you have to resort to plain world-space placement. Making the video too small and too close causes a splitting (double-image) effect and also blurs the video, while increasing the size has produced no noticeable improvement.
Headset: HTC Vive (Regular and Pro versions)
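For illustration, a minimal sketch of one common approach: keep the video on a world-space quad at a fixed comfortable distance (roughly 1.5-3 m) and derive its size from a target angular width, instead of tuning scale and distance independently. The names here (ComfortableVideoScreen, PlaceInFrontOfUser, head) are hypothetical, not from this project.

using UnityEngine;

// Sketch: place a world-space video quad at a comfortable distance and
// size it so it always subtends the same angular width in the user's view.
public class ComfortableVideoScreen : MonoBehaviour
{
    public Transform head;                 // the VR camera transform (assumption)
    public float distanceMeters = 2f;      // ~1.5-3 m is usually comfortable
    public float horizontalAngleDeg = 40f; // angular width the video should cover
    public float aspect = 16f / 9f;        // video aspect ratio

    // Call once when the video starts; avoid re-orienting every frame,
    // since a head-locked screen tends to be more fatiguing than a stationary one.
    public void PlaceInFrontOfUser()
    {
        Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        transform.position = head.position + forward * distanceMeters;
        transform.rotation = Quaternion.LookRotation(forward, Vector3.up);

        // Width that subtends horizontalAngleDeg at distanceMeters.
        float width = 2f * distanceMeters * Mathf.Tan(0.5f * horizontalAngleDeg * Mathf.Deg2Rad);
        transform.localScale = new Vector3(width, width / aspect, 1f);
        // Depending on which side of your quad/canvas renders, you may need
        // an extra 180-degree rotation around Y here.
    }
}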

Related

Multiple cameras cause poor performance on mobile

When I add a new base camera, even though it doesn't display anything, my FPS on mobile decreases by approximately 10. Is a new camera in the scene really that expensive on mobile? I use URP. How can I increase my FPS if I need six base cameras at a time for different render textures?
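One common mitigation, sketched below as an illustration (the class name CameraRefreshThrottle and the refresh interval are my own assumptions): rather than letting all six render-texture cameras render every frame, enable each one only on the frames when its texture actually needs a refresh. This only helps if the textures don't need to update at full frame rate.

using UnityEngine;

// Sketch: keep render-texture cameras disabled by default and round-robin
// one of them per refresh tick, so at most one extra camera renders per frame.
public class CameraRefreshThrottle : MonoBehaviour
{
    public Camera[] rtCameras;           // the cameras writing to render textures (not the main camera)
    public float refreshInterval = 0.1f; // update a texture ~10 times per second

    float timer;
    int next;

    void LateUpdate()
    {
        // Keep all render-texture cameras off by default...
        foreach (var cam in rtCameras)
            cam.enabled = false;

        // ...and enable exactly one when its refresh is due.
        timer += Time.deltaTime;
        if (timer >= refreshInterval)
        {
            timer = 0f;
            rtCameras[next].enabled = true;       // renders this frame, disabled again next frame
            next = (next + 1) % rtCameras.Length;
        }
    }
}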

Pupil Labs Eye Tracking camera setup, is my video feed inverted?

I just got a Pupil Labs eye tracking headset (just the eye tracking cameras, no world-view camera; a regular headset, not VR). I am using the Pupil Labs Capture software to get a video feed of my eyes and track my pupils for use in a Unity app. I noticed that my right-eye video feed is inverted (upside down) by default, and I wanted to ask whether that is intended or whether my headset is somehow defective or incorrectly set up. I ask because I saw a video of someone setting it up and both of their feeds were upright with the correct orientation. Thank you for your input!
According to Pupil Labs' release documentation:
The eye 0 (right eye) camera sensor is physically flipped, which results in an upside-down eye image for the right eye. This is by design and does not negatively affect pupil detection or gaze estimation. However, the upside-down eye 0 image repeatedly led users to believe that something was broken or incorrect with Pupil Core headsets. We flipped the eye 0 image now by default to better match user expectations with a right-side-up eye image.
That is, the hardware intentionally has one camera physically flipped relative to the other. If this is visible to you in the video output, then you should either upgrade the software (which now defaults to flipping the image as required), or apply that setting manually.
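If you stay on the older software, here is a minimal sketch of applying the flip on the Unity side instead, assuming the eye 0 feed is shown in a UI RawImage (the field name rightEyeImage is hypothetical): invert the vertical UVs of that image.

using UnityEngine;
using UnityEngine.UI;

// Sketch: undo the physically flipped right-eye sensor image by sampling
// the texture upside down in the RawImage that displays it.
public class FlipEyeFeed : MonoBehaviour
{
    public RawImage rightEyeImage;   // RawImage showing the eye 0 texture (assumption)

    void Start()
    {
        // A uvRect starting at y = 1 with height -1 samples the texture
        // from top to bottom, i.e. vertically flipped.
        rightEyeImage.uvRect = new Rect(0f, 1f, 1f, -1f);
    }
}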

Unity Mobile depth buffer precision

I'm a mobile game developer using Unity Engine.
Here is my problem:
I render the static scene objects into a render target with a color buffer and a depth buffer, and I reuse that target in the following frames, before the dynamic objects are rendered, as long as the main player's viewpoint stays the same. My goal is to reduce draw calls and save power on mobile devices. FYI, this strategy saves up to 20% of power consumption in our MMO mobile game on Android devices.
The following pics are screenshots from my test project. The sphere, cube, and terrain are static objects, and the red cylinder is moving.
You can see that the depth test result is wrong on Android.
The iOS device works fine: the depth test is right, and the render result is almost the same as when the optimization is off. Notice that the shadow is not right, but we ignore that for now.
However, the result on Android is not good. The moving cylinder is partly occluded by the cube, and the occlusion is not stable between frames.
It looks as if the depth buffer precision is not enough. Any ideas about this problem?
I Googled this problem but found no straight answers. Some say we can't read the depth buffer on GLES.
https://forum.unity.com/threads/poor-performance-of-updatedepthtexture-why-is-it-even-needed.197455/
And then there are cases where platforms don't support reading from the Z buffer at all (GLES when no GL_OES_depth_texture is exposed; or Direct3D9 when no INTZ hack is present in the drivers; or desktop GL on Mac with some buggy Radeon drivers etc.).
Is this true?
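As an illustration of the first things worth double-checking (my own sketch, not a confirmed fix): request an explicit 24-bit depth buffer on the render target, and log whether the device actually supports a depth render-texture format at all. As the quote above says, GLES devices without GL_OES_depth_texture can't expose the depth buffer, and a silent fallback to 16-bit depth produces exactly this kind of unstable occlusion. The class name StaticSceneTarget is hypothetical.

using UnityEngine;

// Sketch: create the static-scene render target with explicit depth bits
// and report depth-texture support on the current device.
public class StaticSceneTarget : MonoBehaviour
{
    public Camera staticCamera;      // camera that renders the static scene once
    RenderTexture colorRT;

    void Start()
    {
        Debug.Log("Depth render texture supported: " +
            SystemInfo.SupportsRenderTextureFormat(RenderTextureFormat.Depth));

        // 24 depth bits; passing 16 here is a common source of precision
        // artifacts like the flickering occlusion described above.
        colorRT = new RenderTexture(Screen.width, Screen.height, 24, RenderTextureFormat.ARGB32);
        colorRT.Create();
        staticCamera.targetTexture = colorRT;
    }
}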

How to handle mismatch between VR headset FOV and video stream FOV?

This question is based on my previous question about the difference between ViewPort and FOV.
I'm writing an application which receives 360° video and renders it on screen.
Assume my video stream contains information about the FOV (the part of the video frame to be displayed at a given instant) for each eye.
Do I need to render that FOV within the ViewPort of each eye?
I saw that some VR headsets advertise their FOV, so I think the device FOV may sometimes differ from the video stream FOV. In those cases, filling the entire ViewPort with the stream FOV might degrade the viewing experience. I think the application could choose either a different FOV or a different ViewPort (filling black around the new ViewPort if required) to handle such cases. What is the recommended solution to this problem?
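For illustration, a small sketch of the letterboxing option mentioned above (my own example, not an established API): if the stream FOV is narrower than the headset FOV, shrink the region the video occupies instead of stretching it, leaving the surround black; if it is wider, the headset's own projection simply crops it. The tangent ratio is what maps an angular extent onto the image plane.

using UnityEngine;

// Sketch: compute what fraction of each eye's image plane the video should
// cover so that angles are preserved rather than stretched.
public static class FovFit
{
    public static Vector2 CoverageFraction(
        float streamHorizFovDeg, float streamVertFovDeg,
        float headsetHorizFovDeg, float headsetVertFovDeg)
    {
        float x = Mathf.Tan(0.5f * streamHorizFovDeg * Mathf.Deg2Rad) /
                  Mathf.Tan(0.5f * headsetHorizFovDeg * Mathf.Deg2Rad);
        float y = Mathf.Tan(0.5f * streamVertFovDeg * Mathf.Deg2Rad) /
                  Mathf.Tan(0.5f * headsetVertFovDeg * Mathf.Deg2Rad);

        // Values > 1 mean the stream is wider than the headset can show,
        // so it will be cropped by the headset's projection anyway.
        return new Vector2(Mathf.Min(x, 1f), Mathf.Min(y, 1f));
    }
}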

Smooth kinetic scrolling in unity3D

I'm developing a game for mobile platforms. I have a menu with levels. There are a lot of levels, so there should be kinetic scrolling. What I do: every frame I read touches[0].position and, based on the difference from the previous position, I move the camera.
But because of the inaccurate touch positions (I suppose), the camera doesn't move smoothly. I'm thinking about calculating the average speed over three frames, for example, and moving the camera according to that speed. Can you give me any advice on how to smooth the movement?
Also, touches[0].deltaPosition seems to work incorrectly.
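A minimal sketch of the smoothing idea, assuming a vertically scrolling menu viewed by an orthographic camera (the class name KineticScroll and the tuning values are my own): filter the raw per-frame drag velocity exponentially instead of averaging exactly three frames, and keep the filtered velocity as inertia with damping after the finger lifts.

using UnityEngine;

// Sketch: exponentially smoothed drag velocity plus damped inertia,
// computing the delta from touch positions directly rather than deltaPosition.
public class KineticScroll : MonoBehaviour
{
    public Camera cam;              // orthographic camera showing the level menu (assumption)
    public float smoothing = 10f;   // higher = follows the finger more tightly
    public float damping = 4f;      // how quickly inertia dies out after release

    float velocity;                 // smoothed scroll speed, world units per second
    Vector2 lastTouchPos;

    void Update()
    {
        if (Input.touchCount > 0)
        {
            Touch t = Input.GetTouch(0);
            if (t.phase == TouchPhase.Began)
            {
                lastTouchPos = t.position;
                velocity = 0f;
            }
            else if (t.phase == TouchPhase.Moved || t.phase == TouchPhase.Stationary)
            {
                // Compute the delta ourselves instead of trusting deltaPosition.
                float deltaPixels = t.position.y - lastTouchPos.y;
                lastTouchPos = t.position;

                // Convert pixels to world units for an orthographic camera.
                float deltaWorld = deltaPixels * (2f * cam.orthographicSize / Screen.height);
                float rawVelocity = deltaWorld / Mathf.Max(Time.deltaTime, 0.0001f);

                // Exponential smoothing removes jitter from noisy touch samples.
                velocity = Mathf.Lerp(velocity, rawVelocity, smoothing * Time.deltaTime);
            }
        }
        else
        {
            // No finger down: let the remaining inertia decay.
            velocity = Mathf.Lerp(velocity, 0f, damping * Time.deltaTime);
        }

        // Move the camera opposite to the drag so the content follows the finger.
        cam.transform.position -= Vector3.up * velocity * Time.deltaTime;
    }
}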