Actually, I can play 360 mono videos with EasyMovieTexture, but now I need to know: is it possible to play stereoscopic videos? And if it is, how can this be done?
Yes you can, and it is fairly easy.
You need to create two layers, one for the left eye, one for the right eye.
Then, you duplicate both your camera and your spherical screen.
One sphere should be on the Left-Eye layer, and the other on the Right-Eye layer.
Then, you configure your cameras like so:
This is the right camera. The Culling Mask has the Left-Eye layer disabled and the Target Eye is set to Right. You need to do the opposite for the left camera.
Note that both spheres and both cameras should be at the exact same position. The Stereo Separation is done automatically and can be configured on your cameras. (You can just keep the default values)
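If you prefer to do this from a script instead of the Inspector, here is a minimal sketch under the assumptions above (two layers named "Left-Eye" and "Right-Eye", and both cameras already in the scene):

    using UnityEngine;

    // Minimal sketch: configures the two eye cameras as described above.
    // Assumes layers named "Left-Eye" and "Right-Eye" exist in the project.
    public class StereoCameraSetup : MonoBehaviour
    {
        public Camera leftCamera;
        public Camera rightCamera;

        void Start()
        {
            // Each camera renders for one eye only.
            leftCamera.stereoTargetEye  = StereoTargetEyeMask.Left;
            rightCamera.stereoTargetEye = StereoTargetEyeMask.Right;

            // The left camera must not see the Right-Eye sphere, and vice versa.
            leftCamera.cullingMask  = ~(1 << LayerMask.NameToLayer("Right-Eye"));
            rightCamera.cullingMask = ~(1 << LayerMask.NameToLayer("Left-Eye"));

            // Both cameras share the exact same position; the stereo
            // separation itself is handled automatically by Unity.
            rightCamera.transform.position = leftCamera.transform.position;
        }
    }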
Alright, just one last thing: you need to configure the material on each sphere so it shows only that eye's half of the video.
Here is an example for side-by-side stereoscopy. You can easily adapt that to handle top-bottom stereoscopy.
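As a concrete sketch, the material side of this can be done with texture tiling and offset, assuming the video material exposes a standard main texture (which is the common EasyMovieTexture setup):

    using UnityEngine;

    // Sketch: show only the left half of a side-by-side video on the left
    // sphere and the right half on the right sphere, via tiling/offset.
    public class SideBySideStereoMaterial : MonoBehaviour
    {
        public Renderer leftSphere;   // sphere on the Left-Eye layer
        public Renderer rightSphere;  // sphere on the Right-Eye layer

        void Start()
        {
            leftSphere.material.mainTextureScale  = new Vector2(0.5f, 1f);
            leftSphere.material.mainTextureOffset = new Vector2(0f, 0f);

            rightSphere.material.mainTextureScale  = new Vector2(0.5f, 1f);
            rightSphere.material.mainTextureOffset = new Vector2(0.5f, 0f);

            // For top-bottom stereoscopy, halve the Y scale instead and
            // offset one sphere by 0.5 on the Y axis.
        }
    }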
How can I override the physical marker in Unity to make the physical marker invisible to the camera while still detecting the object?
Like in this video:
https://m.youtube.com/watch?v=R_F1LvK5gCk
And many other videos.
This may not be the correct answer, but I'll put my ideas here because sadly they don't fit in the comment section.
I just watched the video and some of the reels they show as their work.
First, yes, indeed they make the marker a sort of invisible marker. But if you look closely (at 0.25 speed, tapping the space bar quickly to step through it in slow motion), you can see a kind of "artifact" between the girl's fingers that makes me think there's no invisible marker, but rather a texture covering the marker. Maybe a cylinder that gets its texture from the video camera input.
Now, how would I do that?
There are several ways to get the pixels from the web camera; Unity even has a function for it. The trouble is, I don't want all the pixels, just a tiny part of the camera render, specifically the ones around my marker.
In my experience, and in their examples, they are using OpenCV, another Unity plugin, so they can track anything: faces, hands, or markers. So I can't be sure whether they are using Vuforia alone or in combination with it.
My idea is this: with OpenCV you can detect your marker and its contour, then ask for the pixels just outside the contour of your marker. Those pixels will be the person's skin tone, and you can then apply them as a texture over a plane or 3D model that covers your marker. You can take the pixels on the right and left sides of the marker and average them so it looks nice, or, if you like adventures, you can try some digital image processing method to get better results.
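To make the idea a bit more concrete, here is a very rough Unity sketch that skips OpenCV entirely; markerScreenRect and coverRenderer are hypothetical fields you would feed from your own tracking code:

    using UnityEngine;

    // Rough sketch of the idea above: sample the webcam pixels just left and
    // right of the marker's rectangle, average them, and tint the covering
    // object with that colour.
    public class MarkerCover : MonoBehaviour
    {
        public WebCamTexture webcam;      // live camera feed
        public Rect markerScreenRect;     // marker bounds in webcam pixel coordinates
        public Renderer coverRenderer;    // plane/cylinder that hides the marker

        void Update()
        {
            if (webcam == null || !webcam.isPlaying) return;

            int xLeft  = Mathf.Clamp((int)markerScreenRect.xMin - 2, 0, webcam.width - 1);
            int xRight = Mathf.Clamp((int)markerScreenRect.xMax + 2, 0, webcam.width - 1);
            int yMid   = Mathf.Clamp((int)markerScreenRect.center.y, 0, webcam.height - 1);

            // Average one pixel from each side of the marker (the skin tone, hopefully).
            Color left  = webcam.GetPixel(xLeft,  yMid);
            Color right = webcam.GetPixel(xRight, yMid);
            coverRenderer.material.color = (left + right) * 0.5f;
        }
    }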
I'm not sure if you can get the pixels around a marker using just Vuforia. Honestly, I've never tried it before.
Well, that's my idea.
If you find a better way, I'd like to hear about it.
I have 2 textures to create a stereoscopic panorama in VR and I want to make a 360° experience. In order to achieve this I need to show one texture to the left eye (VR-LeftEye) and the other to the right eye (VR-RightEye). Additionally, I have to show 3D models in front of the panorama to interact with them.
I'm using Cardboard GoogleVR v1.20 with Unity 5.6.0b7. I have no problem changing either version.
After some research I have a few possible solutions, but I don't know how to implement any of them 100%:
2 spheres (with the faces pointing inward) with 1 camera at the center of the spheres, culling the left sphere on the right eye and vice versa. I don't know how to cull differently per eye, because only one camera is needed to do stereo in 5.6.
2 textures in the same sphere material, where the shader selects the needed texture according to the eye being rendered. I don't know how to tell which eye is being rendered in the shader code.
2 spheres, 2 cameras. This is the most hand-made way; I have some issues displaying the 3D objects, and I get double rotation speed.
Any tips or solutions are welcome.
EDIT:
I'm looking for a solution on Unity 5.6.0 because it just implemented a feature that renders 2 projections with a distance between them, simulating both eyes.
I'm not familiar with VR in Unity, but the 3rd option sounds better because of the additional 3D models in front of the panorama.
That said, since the eyes are at the center of the spheres in this implementation, placing 3D objects in front of the cameras might be tricky.
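If you do go with the 3rd option, a minimal sketch of the two-camera setup might look like the following. The layer names ("LeftEye", "RightEye", "Interactive") are assumptions; the point is to keep the interactive models on a layer that both cameras render, so both eyes see them:

    using UnityEngine;

    // Sketch of option 3: each panorama sphere sits on its own eye layer,
    // and the interactive 3D models sit on a shared "Interactive" layer.
    public class StereoPanoramaRig : MonoBehaviour
    {
        public Camera leftEye;
        public Camera rightEye;

        void Start()
        {
            leftEye.stereoTargetEye  = StereoTargetEyeMask.Left;
            rightEye.stereoTargetEye = StereoTargetEyeMask.Right;

            int leftLayer   = 1 << LayerMask.NameToLayer("LeftEye");
            int rightLayer  = 1 << LayerMask.NameToLayer("RightEye");
            int interactive = 1 << LayerMask.NameToLayer("Interactive");

            // Each eye sees its own sphere plus the shared interactive models.
            leftEye.cullingMask  = leftLayer | interactive;
            rightEye.cullingMask = rightLayer | interactive;
        }
    }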
I am following this tutorial to view a stereo image in Unity3D. Unfortunately it only covers Oculus Rift and Google Cardboard. Both of those SDKs have two separate cameras, one for the left eye and one for the right eye. Here is a summary of how to do it:
Create 2 spheres, one for each eye, and place them at the origin.
Put them in different layers (left and right).
Set culling mask of each camera (left eye and right eye) to left layer and right layer respectively.
PROBLEM:
In the Gear VR camera setup, the Oculus SDK uses only one camera component, which is on the CenterEyeAnchor child of OVRCameraRig:
I don't know how to apply the above procedure in this case. I know there are 2 transforms, LeftEyeAnchor and RightEyeAnchor, which are used for the stereo view, but I don't know if a camera component is attached to them at runtime in the Android build. Is there a way to achieve stereo rendering for this setup?
Thanks in advance.
This is what I have:
With LeftEyeAnchor and RightEyeAnchor each on their own layer (Left, Right).
Then I have an empty GameObject, Stereo, containing 2 cameras.
This is the setup for the Left camera.
I have multiple layers in the culling mask because I'm displaying some extra stuff on each eye, but you just need to set the layers that should be seen by that camera.
It's the same for the other camera, changing every Left to Right.
And at the end, the 2 spheres (currently disabled, because I enable both via script), one on one layer and the other on the other layer.
The CenterEyeAnchor has Both as its target eye, and the Left and Right layers are in its culling mask too.
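For reference, the script that enables both spheres could be as simple as this sketch (the field names are just placeholders):

    using UnityEngine;

    // Sketch of the script mentioned above: the two panorama spheres start
    // disabled in the scene and are switched on at runtime.
    public class EnableStereoSpheres : MonoBehaviour
    {
        public GameObject leftSphere;   // on the "Left" layer
        public GameObject rightSphere;  // on the "Right" layer

        void Start()
        {
            leftSphere.SetActive(true);
            rightSphere.SetActive(true);
        }
    }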
Hope it helps!
First, I just want to introduce my problem, because it is really complex and you need this context to understand it properly.
I am trying to do something with Scene Kit and Swift: I want to reproduce what we can see in the TV show Doctor Who, where the Doctor's spaceship is bigger on the inside, as you can see in this video.
Of course the Scene Kit framework doesn't support that kind of unreal dimension, so we need to do some sort of hackery to achieve it.
Now let's talk about my idea in plain English.
In fact, what we want to do is display two completely different dimensions in the same place, so I was thinking of:
A first dimension for the inside of the spaceship.
A second dimension for the outside of the spaceship.
Now, let's say that you are outside the ship: you would be in the outside dimension, and in this outside dimension my goal would be to display a portion of the inside dimension at the level of the door, to give this effect where the camera is outside but we can clearly see that the inside is bigger:
We would use an equivalent principle from the inside.
Now let's talk about the game logic:
I think that a good way to represent these dimensions would be to use two scenes.
We will call outsideScene the scene for the outside, and insideScene the scene for the inside.
So if we take the picture again, this is what it would give at the scene level:
To make it look realistic, the view of the inside needs to follow the movements of the outside camera; that's why I think all the properties of these two cameras should be identical:
On the left is the outsideScene and on the right, the insideScene. I represent the camera field of view in orange.
If the outsideScene camera moves right, the insideScene camera will do exactly the same thing, if the outsideScene camera rotates, the insideScene camera will rotate in the same way... you get the principle.
So, my question is the following: what can I use to mask a certain portion of a certain scene (in this case the yellow zone in the outsideView) with what the camera of another view (the insideView) "sees"?
First, I thought that I could simply get an NSImage from the insideScene and then use it as the texture of a surface in the outsideScene, but the problem is that Scene Kit would compute its own perspective, lighting, etc., so it would just look like we were displaying something on a screen, and that's not what I want.
There is no super easy way to achieve this in SceneKit.
If your "inside scene" is static and can be baked into a cube map texture you can use shader modifiers and a technique called interior mapping (you can easily find examples on the web).
If you need a live, interactive "inside scene" you can use the same technique, but you will have to render your scene to a texture first (or render your inside scene and outer scene one after the other with stencils). This can be done by leveraging SCNTechnique (new in Yosemite and iOS 8). On older versions you will have to write some OpenGL code in SCNSceneRenderer delegate methods.
I don't know if it's 'difficult'. As we often find in iOS, a lot of the time the simplest answer... is the simplest answer.
Maybe consider this:
Map a texture onto a cylinder sector prescribed by the geometry of the Tardis's cube shape. Make sure the cylinder radius matches the focal distance of the camera, and make sure you track the camera to the focal point.
The texture will be distorted because it is a cylinder mapping onto a cube. The actors' nodes in the Tardis will react properly to the camera, but there should be two groups of light sources: one set inside the Tardis and one outside it.
We are trying to achieve the following in an iPhone game:
Using 2D PNG files, set up a scene that seems 3D. As the user moves the device, the individual PNG files would warp/distort accordingly to give the effect of depth.
Example of a scene: an empty room, 5 walls and a chair in the middle = 6 PNG files, layered.
We have successfully accomplished this using native functions like skew and scale: by applying transformations to the various walls and the chair as the device is tilted and moved, the walls skew/scale/translate. However, the problem is that since we are using 6 PNG files, the edges don't meet as we move the device. We need a new solution using a real engine.
Question:
We are thinking that instead of applying skew/scale transformations, if we were given the freedom to move the vertices of the rectangular images, we could precisely distort the images and keep all the edges 100% aligned.
What is the best framework to do this in the LEAST amount of time? Are we going about this the correct way?
You should be able to achieve this effect (at least in regards to the perspective being applied to the walls) using Core Animation layers and appropriate 3-D transforms.
A good example of constructing a scene like this can be found in the example John Blackburn provides here. He shows how to set up layers to represent the walls in a maze by applying the appropriate rotation and translation to them, then gives the scene perspective by using the trick of altering the m34 component of the CATransform3D for the scene.
I'm not sure how well your flat chair would look using something like this, but certainly you can get your walls to have a nice perspective to them. Using layers and Core Animation would let you pull off what you want using far less code than implementing this using OpenGL ES.
Altering the camera angle is as simple as rotating the scene in response to shifts in the orientation of the device.
If you're going to the effort of warping textures as they would be warped in a 3D scene, then why not let the graphics hardware do the hard work for you by mapping the textures to 3D polygons, then changing your projection or moving polygons around?
I doubt you could do it faster by restricting yourself to 2D transformations; the hardware is geared up to do 3x3 (well, 4x4 homogeneous) matrix multiplication.