I am following this tutorial to view stereo images in Unity3D. Unfortunately it only covers Oculus Rift and Google Cardboard. Both of their SDKs use two separate cameras, one for the left eye and one for the right eye. Here is a summary of how to do it:
Create 2 spheres for both eyes and place them at origin.
Put them in different layers (left and right).
Set the culling mask of each camera (left eye and right eye) to the left layer and right layer respectively (a script sketch of this step is shown below).
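This is only an illustration of how those culling masks could be assigned from a script; the layer names "Left" and "Right" and the camera fields are assumptions, not part of the tutorial.

```csharp
using UnityEngine;

// Sketch: hide the opposite eye's sphere from each eye camera.
// The "Left" and "Right" layers are assumptions and must exist under
// Edit > Project Settings > Tags and Layers.
public class EyeLayerSetup : MonoBehaviour
{
    public Camera leftEyeCamera;   // e.g. the SDK's left-eye camera
    public Camera rightEyeCamera;  // e.g. the SDK's right-eye camera

    void Start()
    {
        // Exclude the other eye's layer from each camera's culling mask
        // (equivalently, restrict each mask to that eye's layer plus any
        // shared layers).
        leftEyeCamera.cullingMask  &= ~(1 << LayerMask.NameToLayer("Right"));
        rightEyeCamera.cullingMask &= ~(1 << LayerMask.NameToLayer("Left"));
    }
}
```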
PROBLEM:
In the Gear VR camera setup, the Oculus SDK uses only one camera component, which is on the CenterEyeAnchor child of OVRCameraRig:
I don't know how to apply the above procedure in this case. I know there are 2 transforms, LeftEyeAnchor and RightEyeAnchor, which are used for the stereo view, but I don't know if a camera component is attached to them at runtime in an Android build. Is there a way to achieve stereo rendering for this setup?
Thanks in advance.
This is what I have:
With LeftEyeAnchor and RightEyeAnchor each on their own layer (Left, Right).
Then I have an empty GameObject, Stereo, containing 2 cameras.
This is the setup for the Left camera:
I have multiple layers on the culling mask because I'm displaying some stuff on each eye, but that is where you set the layers seen by the camera.
It's the same for the other camera, swapping every Left for Right.
And at the end there are the 2 spheres, one on one layer and the other on the other layer (currently disabled because I enable both via script).
The CenterEyeAnchor has Both as its Target Eye, and the Left and Right layers are in its culling mask too.
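For completeness, the "enable both via script" part could look roughly like this. The class and field names are illustrative, not from the original project.

```csharp
using UnityEngine;

// Illustrative sketch: the two eye spheres start disabled in the scene
// and are switched on together when the stereo content should be shown.
public class StereoSpheres : MonoBehaviour
{
    public GameObject leftSphere;   // sphere on the Left layer
    public GameObject rightSphere;  // sphere on the Right layer

    public void ShowStereo(bool show)
    {
        leftSphere.SetActive(show);
        rightSphere.SetActive(show);
    }
}
```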
Hope it helps!
New to Unity, so hoping this is a dumb / quick fix.
I'm using XR Interaction Toolkit's XR Origin camera (device-based if it matters) and want to add a second camera to overlay my UI elements so the canvases appear on top of walls / elements instead of intersecting and being hidden by them.
I've duplicated my main camera and removed all scripts except the main camera one, set it as a child to the previous main camera, and adjusted the culling masks to be UI-only and Everything-but-UI. I've also set the depth of the ui camera to be higher than the main camera, and set Clear Flags to Depth only. My UI elements are in the correct layer and the camera of the canvas is the child UI camera.
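For reference, the setup described above corresponds roughly to the following in script form. This is only a sketch of the described configuration; the "UI" layer name and the camera fields are assumptions.

```csharp
using UnityEngine;

// Sketch of the described overlay setup: a second camera, child of the
// main camera, that renders only the UI layer on top of everything else.
public class UiOverlayCameraSetup : MonoBehaviour
{
    public Camera mainCamera;
    public Camera uiCamera;   // duplicated child camera, extra scripts removed

    void Start()
    {
        int uiLayer = 1 << LayerMask.NameToLayer("UI");

        mainCamera.cullingMask &= ~uiLayer;           // everything but UI
        uiCamera.cullingMask = uiLayer;               // UI only
        uiCamera.depth = mainCamera.depth + 1;        // render after the main camera
        uiCamera.clearFlags = CameraClearFlags.Depth; // "Depth only"
    }
}
```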
The result is a disaster. When I click the button to bring up my UI overlay, my view becomes a 360 pixelated block of tiny mirrored rectangles.
Any help would be appreciated, or workarounds not using 2 cameras.
Thanks all
I have a VR project for which I am trying to render a canvas to be "always on top" of everything using a separate UI camera as noted here.
I made my UICamera object a child of the main camera, which is the CenterEyeAnchor object in the OVRCameraRig.
And I set the culling mask for my UICamera to only show its layer and removed that layer from the main camera (CenterEyeAnchor).
But in fact the perspective is weird and it seems like the UICamera is offset a little bit, even though its transform is zeroed out in the Inspector, so I don't know why it's displaying so weird.
If I set the culling mask to "Everything" for both cameras it's still offset a little.
In general you don't need the UI camera to be a child of CenterEyeAnchor. Move it out to the top level and zero out the coordinates. The Oculus rig might be doing some magic with IPD or something else that screws up the pixel-perfectness of the UI.
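A minimal sketch of that suggestion, assuming the UI camera is referenced by a small setup script (the class and field names are illustrative):

```csharp
using UnityEngine;

// Sketch: detach the UI camera from the OVR rig and reset its transform
// so the rig's per-eye offsets no longer affect it.
public class DetachUiCamera : MonoBehaviour
{
    public Camera uiCamera;

    void Start()
    {
        uiCamera.transform.SetParent(null, false);        // move it to the top level
        uiCamera.transform.localPosition = Vector3.zero;  // zero out the coordinates
        uiCamera.transform.localRotation = Quaternion.identity;
    }
}
```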
Actually, I can play 360 mono videos with EasyMovieTexture, but now I need to know: is it possible to play stereoscopic videos? And if so, how can this be done?
Yes you can, and it is fairly easy.
You need to create two layers, one for the left eye, one for the right eye.
Then, you duplicate both your camera and your spherical screen.
One sphere should be on the Left-Eye layer, and the other on the Right-Eye layer.
Then, you configure your cameras like so:
This is the right camera. So the Culling Mask has the Left-Eye layer disabled and the Target Eye is set to Right. You need to do the opposite with the left camera.
Note that both spheres and both cameras should be at the exact same position. The Stereo Separation is done automatically and can be configured on your cameras. (You can just keep the default values)
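In script form, that camera configuration could look roughly like this. It is only a sketch; the Left-Eye/Right-Eye layer names match the layers created above, and the camera references are assumed to be assigned in the Inspector.

```csharp
using UnityEngine;

// Sketch: each camera hides the opposite eye's layer and renders to a
// single eye via its Target Eye setting.
public class StereoVideoCameras : MonoBehaviour
{
    public Camera leftCamera;
    public Camera rightCamera;

    void Start()
    {
        leftCamera.cullingMask  = ~(1 << LayerMask.NameToLayer("Right-Eye"));
        rightCamera.cullingMask = ~(1 << LayerMask.NameToLayer("Left-Eye"));

        leftCamera.stereoTargetEye  = StereoTargetEyeMask.Left;
        rightCamera.stereoTargetEye = StereoTargetEyeMask.Right;
    }
}
```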
Alright, so just one last thing: you need to configure the material on each sphere to show only one side of the video.
Here is an example for side-by-side stereoscopy. You can easily adapt that to handle top-bottom stereoscopy.
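One common way to do this is with the material's tiling and offset, so that each sphere samples only half of the video texture. A minimal sketch, assuming the video is the material's main texture and the left eye is on the left half of the frame:

```csharp
using UnityEngine;

// Sketch for side-by-side stereo: each sphere's material samples one
// horizontal half of the video texture. For top-bottom stereo, use a
// Y scale of 0.5 and offset one eye's Y by 0.5 instead.
public class SideBySideMaterials : MonoBehaviour
{
    public Renderer leftSphere;
    public Renderer rightSphere;

    void Start()
    {
        leftSphere.material.mainTextureScale  = new Vector2(0.5f, 1f);
        leftSphere.material.mainTextureOffset = new Vector2(0f, 0f);    // left half

        rightSphere.material.mainTextureScale  = new Vector2(0.5f, 1f);
        rightSphere.material.mainTextureOffset = new Vector2(0.5f, 0f); // right half
    }
}
```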
I have 2 textures to create a stereoscopic panorama in VR and I want to make a 360º experience. In order to achieve this I need to show one texture to the left side (VR-LeftEye) and the other to the right side (VR-RightEye). Additionally, I have to show 3D models in front of the panorama to interact with them.
I'm using Cardboard GoogleVR v1.20 with Unity 5.6.0b7. I have no problem with changing either version.
After some research I got a few possible solutions, but I don't know how to implement them 100%:
2 spheres (with the faces pointing inwards) and 1 camera at the center of the spheres, culling the left sphere on the right eye and vice versa. I don't know how to cull differently per eye, because only one camera is needed to render stereo in 5.6.
2 textures in the same sphere material, where the shader selects the needed texture according to the eye being rendered. I don't know how to tell which eye is being rendered in the shader code.
2 spheres, 2 cameras. This is the most manual way; I have some issues displaying the 3D objects and I get double rotation speed.
Any tips or solutions are welcome.
EDIT:
I'm looking for a solution in Unity 5.6.0 because it just implemented a feature that makes 2 projections with a distance between them, simulating both eyes.
I'm not familiar with VR in Unity, but the 3rd option sounds better because of the additional 3D models in front of the panorama.
Furthermore, since the eyes are in the center of the spheres in this implementation, moving 3D objects in front of the cameras might be tricky.
This is a follow-up to this question regarding how to display objects on one camera only in google-vr and Unity.
In the current demo project of Unity and Google-vr, I can only access Main Camera Left and Main Camera Right while running the game. During runtime, I am able to disable a layer with the culling mask of one camera.
But I am not able to save those changes while running the game. If I stop, the two cameras Main Camera Left/Right disappear and I only see Main Camera with GvrReticle as a child.
I suspect the cameras are created or imported from a prefab during runtime.
What would be the right way to have the left / right cameras accessible when not running the scene?
It's mentioned in the guide:
Often you will wish to add the stereo rig to your scene in the Editor rather than at runtime. This allows you to tweak the default parameters of the various scripts provided in this plugin, and provides a framework on which to add additional scripts and objects, such as 3D user interface elements.
To turn your Camera into a stereo camera, select the Camera (or Cameras) in the Hierarchy panel, then execute the main menu command Component > Google VR > Update Stereo Cameras.