This is the setup:
I have two cameras in the game scene. The game must be played in a room with screens on the front wall and on the floor. To be able to test it, I simply recreated the two screens in Unity. The goal is to make the game immersive, creating the correct illusion shown in the image on the left.
What I've tried so far (and it sort of works, as you can see from the screenshot) is:
Camera0: its output goes directly to the frontal display.
Camera1: I created a post-processing effect that deforms the output texture to create the correct perspective illusion.
The problem:
Because I'm basically working on a texture, I get some blurriness at the borders: the pixel density is not the same in the source and deformed images.
I think the best approach would be to apply the deforming transformation to the projection matrix of Camera1 instead, but I haven't managed to get it working. Do you have any idea how to approach this problem correctly?
You can let your perspective cameras do the work for you.
Set the fov of the floor camera so that it shows only as much as will fit on the screen.
Then, have the cameras at the same position.
Finally, have the floor camera rotated around the +X axis by half of the sum of the FOVs of both cameras. For example, if the wall camera's FOV is 80° and the floor camera's FOV is 40°, rotate the floor camera by 60° around the X axis.
This guarantees that the views of the cameras do not overlap, and they will have the correct projection onto their surfaces to create the desired illusion.
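If it helps, here is a minimal sketch of that setup as a Unity script; `wallCamera` and `floorCamera` are placeholder names for your Camera0 and Camera1:

```csharp
using UnityEngine;

// Sketch: keep both cameras at the same position and pitch the floor camera
// down by half the sum of both vertical FOVs, as described above.
public class FloorCameraAligner : MonoBehaviour
{
    public Camera wallCamera;   // renders to the frontal screen
    public Camera floorCamera;  // renders to the floor screen

    void LateUpdate()
    {
        // Same position for both cameras.
        floorCamera.transform.position = wallCamera.transform.position;

        // Half the sum of the vertical FOVs, e.g. (80 + 40) / 2 = 60 degrees.
        float halfSum = (wallCamera.fieldOfView + floorCamera.fieldOfView) * 0.5f;

        // Positive rotation around local X pitches the camera down toward the floor.
        floorCamera.transform.rotation =
            wallCamera.transform.rotation * Quaternion.Euler(halfSum, 0f, 0f);
    }
}
```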
I have a project that deals with AR, so I use ARFoundation in Unity. Everything works fine until I want to position my 3D object on the left of the screen. I tried many solutions but none of them work.
I tried taking the width and height of the screen, altering them, and then setting my object's position from those values, but it didn't work.
What should I be doing?
Understanding the space you are working in is crucial.
Screen positions live in a different space than the normal 3D coordinates you have in your scene. To convert between them you need to use https://docs.unity3d.com/ScriptReference/Camera.WorldToScreenPoint.html and https://docs.unity3d.com/ScriptReference/Camera.ScreenToWorldPoint.html.
For example, a box can have its transform at (0, 0, 0); depending on where you look at it from and at what angle, its transform stays the same but its screen position changes.
WorldToScreenPoint tells you where an object in the scene appears on your screen (2D), and ScreenToWorldPoint does the opposite (you give it a 2D screen point and it gives you that position in 3D).
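As a minimal sketch of how this could place an object at the left edge of the screen, assuming an `arCamera`, a `targetObject`, and a chosen `distanceFromCamera` (all placeholder names for this example):

```csharp
using UnityEngine;

// Sketch: place an object at the left edge of the screen, vertically centered,
// at a fixed distance in front of the camera.
public class PlaceAtScreenLeft : MonoBehaviour
{
    public Camera arCamera;             // e.g. the AR camera in the scene
    public Transform targetObject;      // the 3D object to position
    public float distanceFromCamera = 2f;

    void Update()
    {
        // Screen point: x = 0 is the left edge, y = half the screen height,
        // z = distance from the camera in world units.
        Vector3 screenPos = new Vector3(0f, Screen.height * 0.5f, distanceFromCamera);

        // Convert that 2D screen point (plus depth) into a 3D world position.
        targetObject.position = arCamera.ScreenToWorldPoint(screenPos);
    }
}
```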
I’ve read this article http://www.gamasutra.com/blogs/BrianKehrer/20160125/264161/VR_Distortion_Correction_using_Vertex_Displacement.php
about distortion correction with vertex displacement in VR. It also says a few words about other ways of doing distortion correction. I use Unity for my experiments (and am trying to modify the Fibrum SDK, but that doesn't matter for my question, because I only want to understand how these methods work in general).
As I mentioned, there are three ways of doing this correction:
Using a pixel-based shader.
Projecting the render target onto a warped mesh and rendering the final output to the screen.
The vertex displacement method.
I understand how the pixel-based shader works. However, I don’t understand the others.
The first question is about projecting the render target onto a warped mesh. As I understand it, I should first render the image from the game cameras to two tessellated quads (one for each eye), then apply a shader with the correction to these quads, and then draw the quads in front of the main camera. But I’m afraid I’m wrong.
The second one is about vertex displacement. Should I simply apply a shader to the camera that translates the vertex coordinates of every object from world space into inverse-lens-distorted screen space (lens space)?
p.s. Sorry for my terrible English, I just want to understand how it works.
For the third method (vertex displacement), yes, that's exactly what you do. However, you must be careful because this is a non-linear transformation, and it won't be properly interpolated between vertices. You need your meshes to be reasonably tessellated for this technique to work properly. Otherwise, long edges may be displayed as distorted curves, and you can potentially have z-fighting issues too. Here you can see a good description of the technique.
For the warped distortion mesh, this is how it goes. You render the scene, without distortion, to a render texture. Usually, this texture is bigger than your real screen resolution to compensate for the effect of the distortion on the apparent resolution. Then you create a tessellated quad, distort its vertices, and render it to the screen using that texture. Since the vertices are distorted, this distorts your image.
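Here is a rough C# sketch of that warped-mesh step, assuming the tessellated quad lies in its local XY plane and already has the undistorted render texture assigned as its material; the k1/k2 coefficients are made up for illustration, not values for any real lens:

```csharp
using UnityEngine;

// Sketch: push the vertices of an already tessellated quad outward with a
// simple radial (barrel) distortion term, so the render texture shown on the
// quad is pre-distorted before it reaches the lens.
[RequireComponent(typeof(MeshFilter))]
public class WarpQuad : MonoBehaviour
{
    public float k1 = 0.22f;   // placeholder distortion coefficient
    public float k2 = 0.24f;   // placeholder distortion coefficient

    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] verts = mesh.vertices;

        for (int i = 0; i < verts.Length; i++)
        {
            // Distance from the quad centre in its local XY plane.
            Vector2 p = new Vector2(verts[i].x, verts[i].y);
            float r2 = p.sqrMagnitude;

            // Classic radial distortion: scale each vertex by (1 + k1*r^2 + k2*r^4).
            float scale = 1f + k1 * r2 + k2 * r2 * r2;
            verts[i] = new Vector3(p.x * scale, p.y * scale, verts[i].z);
        }

        mesh.vertices = verts;
        mesh.RecalculateBounds();
    }
}
```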
I am trying to make everything that is rendered by my perspective "Camera A" appear 100 points higher on screen. This is because my app has an interface with an open space in the upper part.
My app uses face detection to mirror the user's face movement onto an in-game avatar. To do this I compute the model-view matrix and assign it to the camera's "worldToCameraMatrix".
So far this works well, but everything is rendered with the center as the origin; now I want to move this center origin a certain distance "up" so that it matches my interface.
Is there a way to tell Unity to offset the rendered camera result?
An alternative I thought about is to render into a texture and then just move the texture itself, but I thought there must be an easier way.
By the way, my main camera is orthographic, and I use it to render the camera texture. In that case, simply moving the quad game object that displays the texture up does the trick.
I found a property called "pixelRect"; its description says:
Where on the screen is the camera rendered in pixel coordinates.
However, moving the center up seems to scale down my objects.
You can set the viewport rect / orthographic size so that it's offset, or you can render to a render texture and render that as an overlay with an offset or a difference in scale.
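As a minimal sketch of the pixelRect variant, assuming you keep the full screen size (so the aspect is unchanged and nothing is rescaled) and only shift where the image is drawn; the script and field names are placeholders:

```csharp
using UnityEngine;

// Sketch: shift a camera's output upward by a fixed number of pixels by
// offsetting the origin of its pixel rect while keeping the full screen size.
public class OffsetCameraRect : MonoBehaviour
{
    public Camera targetCamera;
    public int offsetPixels = 100;

    void Start()
    {
        // The rect keeps the screen's width and height, so the aspect ratio
        // (and therefore the apparent object size) stays the same; the image
        // is simply drawn 100 px higher, leaving the bottom free for the UI.
        targetCamera.pixelRect = new Rect(0, offsetPixels, Screen.width, Screen.height);
    }
}
```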
Cheers
The scene
I was wondering about creating different layers with different Z-axis values to add more realism, like the picture above, which has:
1- The green background, then
2- The playground itself, then
3- The blurred black trees representing the camera's depth of field.
So I thought about creating the green background, then the black ground with a higher zPosition value, then the blurred stuff scaled up with even higher zPosition values. But the problem is that when the camera moves there is no realistic sense of each layer moving at its own speed, because they all move together, keeping the same relative positions.
I also thought about using SceneKit instead, as it contains full 3D tools, but the scene is 2D and does not seem to need SceneKit.
Thanks in advance, as the question may seem a bit complicated.
Okay, I've figured it out now.
The answer is to add a method that moves the background in the same direction as the camera, at roughly one half or one third of the camera's speed, so that relative to the view it drifts against the camera's movement more slowly than the other layers.
As an example:
If the camera moves 20 pixels to the right, all scene layers appear to move 20 pixels to the left. So I move the background about 8 pixels to the right.
Then all layers appear to move 20 pixels to the left, except the background, which only appears to move 12 pixels to the left, i.e. more slowly.
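As a minimal sketch of that idea (written in Unity-style C# for illustration, since the rest of this page uses Unity; the same arithmetic applies to a SpriteKit node's position), a parallax factor of 0.4 reproduces the 20 px / 8 px numbers above:

```csharp
using UnityEngine;

// Sketch: each frame, move the background by a fraction of the camera's
// movement so it appears to scroll more slowly than the foreground layers.
public class ParallaxBackground : MonoBehaviour
{
    public Transform cameraTransform;
    public Transform background;
    [Range(0f, 1f)] public float parallaxFactor = 0.4f; // 20 px * 0.4 = 8 px

    private Vector3 lastCameraPosition;

    void Start()
    {
        lastCameraPosition = cameraTransform.position;
    }

    void LateUpdate()
    {
        Vector3 delta = cameraTransform.position - lastCameraPosition;

        // Follow the camera partially: for 20 px of camera movement the
        // background moves 8 px, so it appears to drift only 12 px in the view.
        background.position += new Vector3(delta.x * parallaxFactor, 0f, 0f);

        lastCameraPosition = cameraTransform.position;
    }
}
```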
I am developing an augmented reality app for Project Tango using Unity3d.
Since I want virtual objects to interact with the real world, I used the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside the Tango Delta Camera (at the relative position (0, 0, 0)).
I found out that I have to rotate the AR Camera up by about 17° so that the Dynamic Mesh matches the room; however, there is still a significant offset from the live camera preview.
I was wondering if anyone who has had to deal with this before could share their solution for aligning the Dynamic Mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously-answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera offset from the device origin, which requires you to get the color camera pose relative to the device. You can't do this directly, but you can get the device in the IMU frame, and also the color camera in the IMU frame, to work out the color camera in the device frame. The links above show example code.
You should be looking at something like (in Unity coordinates) a (0.061, 0.004, -0.001) offset and a 13-degree rotation up around the X axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around Y and Z, which don't match what I'd expect.
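As a rough sketch only, this is one way those approximate extrinsics could be applied as a fixed local offset on the AR camera; in practice you would query the poses from the Tango service as described above rather than hard-coding them, and the numbers carry the same "pinch of salt" caveat:

```csharp
using UnityEngine;

// Sketch: apply an approximate device-to-colour-camera offset as a fixed local
// transform. Assumes this script sits on the AR camera, which is parented to
// the object that receives the device pose (e.g. the Tango Delta Camera).
public class ApplyColorCameraOffset : MonoBehaviour
{
    // Approximate values from the answer above, in Unity coordinates.
    public Vector3 positionOffset = new Vector3(0.061f, 0.004f, -0.001f);
    public float pitchUpDegrees = 13f;

    void Start()
    {
        transform.localPosition = positionOffset;

        // In Unity, a negative rotation around X pitches the camera up.
        transform.localRotation = Quaternion.Euler(-pitchUpDegrees, 0f, 0f);
    }
}
```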