This question is based on my previous question about the difference between ViewPort and FOV.
I'm writing an application which receives 360° video and renders it on screen.
Assume my video stream carries information about the FOV dimensions (the part of the video frame to be displayed at a given instant) for each eye.
Do I need to render that FOV within the ViewPort of each eye?
I saw that some VR headsets advertise their FOV, so I think the device FOV may sometimes differ from the video stream FOV. In those cases, filling the entire ViewPort with the stream FOV might degrade the viewing experience. I think the application could choose either a different FOV or a different ViewPort (filling the area around the new ViewPort with black if required) to handle such cases. What is the recommended solution to this problem?
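For what it's worth, here is a minimal sketch of the letterboxing idea described above: shrink each eye's viewport so the stream FOV is shown without stretching, leaving black around it. The field names and the FOV values are my own assumptions rather than part of any particular SDK, and whether a per-eye Camera.rect is honoured depends on the VR integration in use.

```csharp
using UnityEngine;

// Hypothetical sketch: show the stream FOV 1:1 inside the (wider) device FOV
// by shrinking the eye camera's viewport rect, leaving black borders instead
// of stretching the video.
public class FovLetterbox : MonoBehaviour
{
    public Camera eyeCamera;          // camera rendering one eye
    public float deviceFovDeg = 110f; // advertised headset FOV (assumed value)
    public float streamFovDeg = 90f;  // FOV signalled in the video stream (assumed value)

    void LateUpdate()
    {
        // Compare angular extents on a flat projection via tangents, not raw degrees.
        float f = Mathf.Clamp01(
            Mathf.Tan(streamFovDeg * 0.5f * Mathf.Deg2Rad) /
            Mathf.Tan(deviceFovDeg * 0.5f * Mathf.Deg2Rad));

        // Centered sub-rect of the full viewport; everything outside stays black.
        eyeCamera.rect = new Rect((1f - f) * 0.5f, (1f - f) * 0.5f, f, f);
    }
}
```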
I have an issue with different resolutions.
Everything works perfectly at 1920x1080, but when I set it to a tablet-style aspect ratio like 10:10, the 'Player' isn't resizing.
My platforms are created under a Canvas for scaling and correct positioning. However, my player is created outside of the Canvas.
Should I create my character under the Canvas, or should I create my platforms outside of it? I'm currently not sure how to solve the issue.
Since the player is created outside the Canvas, there's no way for the Canvas to affect it (the player is also probably using a SpriteRenderer, not an Image component).
One way would be to put the player as an Image inside a Canvas, but to be honest, the Canvas is made for UI, not gameplay. Putting all gameplay into the UI might (and probably will) create a lot of issues. I'm already surprised that the player and platforms interact well in your game, as they use different systems.
What you probably want to do is put all gameplay elements (character, platforms, projectiles, etc.) outside the Canvas as sprite renderers and leave the Canvas for what it's meant for (UI, maybe backgrounds).
Then you might come across a problem where, on different resolutions, you have a smaller or larger gameplay area. Your options will be to live with it, to create a system that restricts the gameplay area and fills the empty space with background or black bars, or something in between (for example, letting the vertical gameplay area vary while keeping the horizontal area the same).
Here's an idea of how you could achieve it:
https://forum.unity.com/threads/maintain-the-game-content-area-on-different-types-of-screen-sizes.905384/
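As a rough illustration of the fixed-gameplay-area idea (a sketch assuming an orthographic 2D camera; the width value is a placeholder you would replace with whatever your level design assumes):

```csharp
using UnityEngine;

// Sketch: keep a fixed-width gameplay area visible on every aspect ratio by
// adjusting the orthographic size; extra vertical space simply shows more
// background (or black bars, if you add them).
[RequireComponent(typeof(Camera))]
public class FixedWidthCamera : MonoBehaviour
{
    // Horizontal extent of the gameplay area in world units (placeholder).
    public float targetWorldWidth = 20f;

    Camera cam;

    void Awake()
    {
        cam = GetComponent<Camera>();
    }

    void Update()
    {
        // Visible width = 2 * orthographicSize * aspect, so solve for the
        // size that keeps the horizontal extent constant.
        cam.orthographicSize = targetWorldWidth / (2f * cam.aspect);
    }
}
```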
I just got a Pupil Labs eye-tracking headset (just the eye-tracking cameras, no world-view camera; a regular headset, not VR). I am using the Pupil Labs Capture software to get a video feed of my eyes and track my pupils for use in a Unity app. I noticed that my right-eye video feed is inverted (upside down) by default, and I wanted to ask whether that is intended or whether my headset is somehow defective or incorrectly set up. I ask because I saw a video of someone setting it up and both of their feeds were upright, with the correct orientation. Thank you for your input!
According to Pupil Labs' release documentation:
The eye 0 (right eye) camera sensor is physically flipped, which results in an upside-down eye image for the right eye. This is by design and does not negatively affect pupil detection or gaze estimation. However, the upside-down eye 0 image repeatedly led users to believe that something was broken or incorrect with Pupil Core headsets. We flipped the eye 0 image now by default to better match user expectations with a right-side-up eye image.
That is, the hardware intentionally has one camera physically flipped relative to the other. If this is visible to you in the video output, then you should either upgrade the software (which now defaults to flipping the image as required), or apply that setting manually.
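If you are displaying the feed yourself in Unity and cannot update the software, one workaround is to flip that eye's image at display time. A minimal sketch, assuming the feed is shown on a UI RawImage (the component reference is hypothetical):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: mirror the eye 0 (right eye) feed vertically by inverting the
// RawImage's UV rect instead of modifying the texture data itself.
public class FlipEyeFeed : MonoBehaviour
{
    public RawImage eye0Image; // RawImage showing the eye 0 video texture (hypothetical reference)

    void Start()
    {
        // A uvRect starting at y = 1 with a negative height samples the
        // texture upside down, undoing the flipped sensor orientation.
        eye0Image.uvRect = new Rect(0f, 1f, 1f, -1f);
    }
}
```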
I'm working on a VR project where, at some points, we will display video in front of the user. I'm looking for recommendations on how to maintain focus on the video without causing eye strain for the user. I want to know how others have tackled the issue of showing video on VR headsets within their projects/games.
What I have tried so far is increasing/decreasing the size of the video and its distance from the player's eyes/head. I have found that since you can't use Screen Space - Overlay or Screen Space - Camera with a VR canvas, you have to resort to general world-space placement. Making the video too small and too close causes a splitting (double-image) effect and also blurs the video, while increasing the size has produced no significantly noticeable change.
Headset: HTC Vive (Regular and Pro versions)
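One approach I can offer only as a hedged sketch (not a tested recommendation) is to park the video quad at a fixed, comfortable distance and derive its size from a target viewing angle, rather than tuning size and distance independently; the distance and angle below are assumptions:

```csharp
using UnityEngine;

// Sketch: size a world-space video quad so it always subtends the same
// horizontal viewing angle at a chosen distance from the head.
public class ComfortableVideoQuad : MonoBehaviour
{
    public Transform head;                 // e.g. the VR camera transform
    public float distanceMeters = 2.0f;    // assumed comfortable distance
    public float horizontalAngleDeg = 40f; // assumed angular width of the video
    public float aspect = 16f / 9f;        // video aspect ratio

    void LateUpdate()
    {
        // Place the quad straight ahead of the head at the chosen distance
        // and orient its front face (a default Unity Quad) back toward the head.
        transform.position = head.position + head.forward * distanceMeters;
        transform.rotation = Quaternion.LookRotation(transform.position - head.position);

        // Width that subtends horizontalAngleDeg at that distance.
        float width = 2f * distanceMeters *
                      Mathf.Tan(horizontalAngleDeg * 0.5f * Mathf.Deg2Rad);
        transform.localScale = new Vector3(width, width / aspect, 1f);
    }
}
```

Whether you re-center the quad every frame (head-locked) or only when the user looks well away from it is a comfort trade-off you would want to test on the device.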
I'm a mobile game developer using Unity Engine.
Here is my problem:
I render the static scene geometry into a render target with a color buffer and a depth buffer, and reuse it in the following frames (before the dynamic objects are rendered) as long as the main player's viewpoint stays the same. My goal is to reduce draw calls as well as to save some power on mobile devices. For reference, this strategy saves up to 20% of our MMO mobile game's power consumption on Android devices.
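For reference, a minimal sketch of the kind of setup I mean (Unity built-in pipeline; names and layer arrangement are placeholders, and the tricky part, reusing the cached depth in the later dynamic pass, is exactly where the platform differences below show up):

```csharp
using UnityEngine;

// Sketch of the cached-static-scene idea: render the static geometry once
// into a RenderTexture that carries both a color and a depth buffer, then
// reuse it while the viewpoint is unchanged.
public class StaticSceneCache : MonoBehaviour
{
    public Camera staticCamera; // culls only the layers holding static objects
    RenderTexture cache;

    void Start()
    {
        // 24-bit depth is requested here; the precision actually granted
        // depends on the device.
        cache = new RenderTexture(Screen.width, Screen.height, 24,
                                  RenderTextureFormat.ARGB32);
        cache.Create();
        staticCamera.targetTexture = cache;
        staticCamera.enabled = false; // render on demand only
    }

    // Call this whenever the viewpoint has actually changed.
    public void Refresh()
    {
        staticCamera.Render();
    }
}
```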
The following pictures are screenshots from my test project. The sphere, cube, and terrain are static objects, and the red cylinder is moving.
You can see that the depth test result is wrong on Android.
The iOS device works fine: the depth test is correct, and the render result is almost the same as with the optimization off. Notice that the shadow is not right, but we can ignore that for now.
However, the result on Android is not good. The moving cylinder is partly occluded by the cube, and the occlusion is not stable between frames.
It looks as though the depth buffer precision is not sufficient. Any ideas about this problem?
I Googled this problem but found no straight answers. Some said we can't read the depth buffer on GLES.
https://forum.unity.com/threads/poor-performance-of-updatedepthtexture-why-is-it-even-needed.197455/
And then there are cases where platforms don't support reading from the Z buffer at all (GLES when no GL_OES_depth_texture is exposed; or Direct3D9 when no INTZ hack is present in the drivers; or desktop GL on Mac with some buggy Radeon drivers etc.).
Is this true?
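Independent of the answer, it may help to query depth-texture support at runtime so the optimization can be disabled on devices that can't sample the depth buffer; a small sketch using Unity's SystemInfo API:

```csharp
using UnityEngine;

// Sketch: log whether the platform supports depth render textures so the
// cached-depth path can fall back gracefully on unsupported GPUs/drivers.
public class DepthSupportCheck : MonoBehaviour
{
    void Awake()
    {
        bool depthTexOk =
            SystemInfo.SupportsRenderTextureFormat(RenderTextureFormat.Depth);

        Debug.Log("Depth render texture supported: " + depthTexOk +
                  " (" + SystemInfo.graphicsDeviceType + ", " +
                  SystemInfo.graphicsDeviceVersion + ")");
    }
}
```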
I am developing an augmented reality app for Project Tango using Unity3d.
Since I want virtual objects to interact with the real world, I used the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside the Tango Delta Camera (at the relative position (0,0,0)).
I found out that I have to rotate the AR Camera up by about 17 degrees so the dynamic mesh matches the room; however, there is still a significant offset from the live camera preview.
I was wondering if anyone who has dealt with this before could share their solution for aligning the dynamic mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously-answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera offset from the device origin, which requires you to get the color camera pose relative to the device. You can't do this directly, but you can get the device in the IMU frame, and also the color camera in the IMU frame, to work out the color camera in the device frame. The links above show example code.
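As a rough sketch of that frame composition (plain matrix algebra in Unity types; imuT_device and imuT_color stand for the poses queried from the Tango API, which I'm treating as given):

```csharp
using UnityEngine;

// Sketch: derive the color camera pose in the device frame from the two
// poses the API does expose (device in IMU frame, color camera in IMU frame).
public static class TangoFrameMath
{
    // imuT_device : transform of the device frame expressed in the IMU frame
    // imuT_color  : transform of the color camera frame expressed in the IMU frame
    // Returns deviceT_color, i.e. the color camera pose relative to the device.
    public static Matrix4x4 ColorCameraInDeviceFrame(Matrix4x4 imuT_device,
                                                     Matrix4x4 imuT_color)
    {
        // deviceT_color = (imuT_device)^-1 * imuT_color
        return imuT_device.inverse * imuT_color;
    }
}
```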
You should be looking at something like (in Unity coordinates) a (0.061, 0.004, -0.001) offset and a 13-degree rotation up around the x-axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around y and z, which don't match what I'd expect.