VR distortion correction methods - unity3d

I’ve read this article http://www.gamasutra.com/blogs/BrianKehrer/20160125/264161/VR_Distortion_Correction_using_Vertex_Displacement.php
about distortion correction with vertex displacement in VR. It also mentions other ways of doing distortion correction. I use Unity for my experiments (and am trying to modify the Fibrum SDK, but that does not matter for my question, because I only want to understand how these methods work in general).
As mentioned there, there are three ways of doing this correction:
1. Using a pixel-based shader.
2. Projecting the render target onto a warped mesh and rendering the final output to the screen.
3. Vertex displacement.
I understand how the pixel-based shader works. However, I don’t understand others.
The first question is about projecting the render target onto a warped mesh. As I understand it, I should first render the image from the game cameras to two tessellated quads (one for each eye), then apply a correction shader to these quads, and then draw the quads in front of the main camera. But I'm afraid I'm wrong.
The second one is about vertex displacement. Should I simply apply a shader to every object the camera renders, one that translates each vertex from world space into inverse-lens-distorted screen space (lens space)?
p.s. Sorry for my terrible English, I just want to understand how it works.

For the third method (vertex displacement), yes, that's exactly what you do. However, you must be careful, because this is a non-linear transformation and it won't be properly interpolated between vertices. Your meshes need to be reasonably tessellated for this technique to work properly. Otherwise, long edges may be displayed as distorted curves, and you can potentially have z-fighting issues too. Here you can see a good description of the technique.
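The per-vertex math can be sketched outside of a shader. Here is a minimal Python illustration; the coefficients K1 and K2 are made-up values for demonstration, where a real SDK would supply lens-calibrated ones:

```python
# Hypothetical radial distortion coefficients; real values come from the
# headset's lens calibration, these are only illustrative.
K1, K2 = 0.22, 0.24

def distort_vertex(x, y):
    """Apply inverse-lens (barrel) distortion to a vertex already
    projected into normalized device coordinates (-1..1)."""
    r2 = x * x + y * y                        # squared distance from lens center
    scale = 1.0 + K1 * r2 + K2 * r2 * r2      # polynomial distortion factor
    return x * scale, y * scale

# A vertex near the edge is displaced much more than one near the center,
# which is why coarsely tessellated meshes show long edges as curves.
print(distort_vertex(0.1, 0.0))
print(distort_vertex(0.9, 0.0))
```

Because the factor grows with the squared radius, two endpoints of a long edge get very different displacements while the rasterizer still interpolates linearly between them; that mismatch is exactly the artifact described above.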
For the warped distortion mesh, this is how it goes: you render the scene, without distortion, to a render texture. Usually this texture is larger than your real screen resolution, to compensate for the effect of the distortion on the apparent resolution. Then you create a tessellated quad, distort its vertices, and render it to the screen using that texture. Because the quad's vertices are displaced, the image sampled from the texture comes out distorted.
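A rough Python sketch of building such a distortion mesh; the grid resolution and the coefficient K1 are illustrative, not taken from any particular SDK:

```python
# Sketch of building a distortion mesh: a tessellated quad whose vertex
# positions are warped by a radial polynomial while the UVs still sample
# the undistorted render texture.
K1 = 0.22  # made-up distortion coefficient
N = 16     # quads per side of the tessellated mesh

def build_distortion_mesh():
    verts, uvs = [], []
    for j in range(N + 1):
        for i in range(N + 1):
            u, v = i / N, j / N              # UV into the render texture
            x, y = 2 * u - 1, 2 * v - 1      # undistorted position in NDC
            r2 = x * x + y * y               # squared distance from center
            scale = 1 + K1 * r2              # warp the geometry, not the UVs
            verts.append((x * scale, y * scale))
            uvs.append((u, v))
    return verts, uvs

verts, uvs = build_distortion_mesh()
```

The key point is that the UVs stay regular while only the positions move, so the GPU's ordinary texture interpolation produces the warped image when this quad is drawn in front of the camera.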

Related

World to Cube projection Unity

That's the setting:
I have two cameras in the game scene. The game must be played in a room with screens on the frontal wall and on the floor. To be able to test it, I just recreated the two screens in Unity. The goal is to make the game immersive, creating the correct illusion as in the image on the left.
What I've tried so far (and it kind of worked, as you can see from the screenshot) is:
Camera0: renders directly to the frontal display.
Camera1: I created a post-processing effect that deforms the output texture to create the correct perspective illusion.
The problem:
The fact that I'm basically working on a texture creates some blurring on the borders, because the pixel density is not the same in the original and the deformed image.
I think the best approach would be to perform the deforming transformation on the projection matrix of Camera1 instead, but I haven't managed to. Do you have any idea how to approach this problem correctly?
You can let your perspective cameras do the work for you.
Set the fov of the floor camera so that it shows only as much as will fit on the screen.
Then, have the cameras at the same position.
Finally, rotate the floor camera around the +x axis by half of the sum of the FOVs of both cameras. For example, if the wall camera's FOV is 80º and the floor camera's FOV is 40º, rotate the floor camera by 60º around the x axis.
This guarantees that the views of the two cameras do not overlap, and that they have the correct projection along their surfaces to create the desired illusion.
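The arithmetic above can be sketched in a few lines of Python. The helper names are mine, and the screen-size formula assumes a camera looking straight at a flat screen:

```python
import math

def fov_for_screen(screen_size, camera_distance):
    """Vertical FOV (degrees) that exactly covers a flat screen of the
    given height seen head-on from the given distance:
    fov = 2 * atan(size / (2 * distance)). Both arguments in the same unit."""
    return math.degrees(2 * math.atan(screen_size / (2 * camera_distance)))

def floor_camera_pitch(wall_fov_deg, floor_fov_deg):
    """Rotation around +x for the floor camera: half the sum of both FOVs,
    so the two frustums meet exactly at the wall/floor seam."""
    return (wall_fov_deg + floor_fov_deg) / 2.0

# The example from the answer: an 80 deg wall camera and a 40 deg floor
# camera give a 60 deg pitch for the floor camera.
print(floor_camera_pitch(80, 40))  # 60.0
```

In Unity you would assign these values to `Camera.fieldOfView` and to the camera's local Euler rotation.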

Shader to warp or pinch specific areas of a texture in Unity3d

I've mocked up what I am trying to accomplish in the image below - trying to pinch the pixels in towards the center of an AR marker so when I overlay AR content the AR marker is less noticeable.
I am looking for some examples or tutorials that I can reference to start to learn how to create a shader to distort the texture but I am coming up with nothing.
What's the best way to accomplish this?
This can be achieved using GrabPass.
From the manual:
GrabPass is a special pass type - it grabs the contents of the screen where the object is about to be drawn into a texture. This texture can be used in subsequent passes to do advanced image based effects.
The way distortion effects work is basically that you render the contents of the GrabPass texture on top of your mesh, except with its UVs distorted. A common way of doing this (for effects such as heat distortion or shockwaves) is to render a billboarded plane with a normal map on it, where the normal map controls how much the UVs for the background sample are distorted. This works by transforming the normal from world space to screen space, multiplying it with a strength value, and applying it to the UV. There is a good example of such a shader here. You can also technically use any mesh and use its vertex normal for the displacement in a similar way.
Apart from normal mapped planes, another way of achieving this effect would be to pass in the screen-space position of the tracker into the shader using Shader.SetGlobalVector. Then, inside your shader, you can calculate the vector between your fragment and the object and use that to offset the UV, possibly using some remap function (like squaring the distance). For instance, you can use float2 uv_displace = normalize(delta) * saturate(1 - length(delta)).
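Here is a Python port of that UV-displacement idea, useful for checking the math outside a shader. The function names, the strength parameter, and the sign convention are my own choices for illustration:

```python
import math

def saturate(x):
    """HLSL-style clamp to [0, 1]."""
    return max(0.0, min(1.0, x))

def pinch_uv(frag_uv, center_uv, strength=0.1):
    """Python version of the HLSL snippet from the answer:
    uv_displace = normalize(delta) * saturate(1 - length(delta)).
    Fragments within one UV-unit of center_uv are displaced; far ones are not."""
    dx = frag_uv[0] - center_uv[0]
    dy = frag_uv[1] - center_uv[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:                      # normalize() is undefined at the center
        return frag_uv
    falloff = saturate(1.0 - dist)       # fades to zero one UV-unit away
    # Sample farther from the center so the content appears pinched inward;
    # flip the sign to get a bulge instead.
    return (frag_uv[0] + dx / dist * falloff * strength,
            frag_uv[1] + dy / dist * falloff * strength)
```

Whether you add or subtract the offset decides between a pinch and a bulge, so it is worth testing both signs against your marker.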
If you want to control exactly how and when this effect is applied, make it so that it has ZTest and ZWrite set to Off, and then set the render queue to be after the background but before your tracker.
For AR apps, it is likely possible to avoid the performance overhead of GrabPass by using the camera background texture instead of a GrabPass texture. You can look inside your camera background script to see how it passes the camera texture to the shader, and try to replicate that.
Here are two videos demonstrating how GrabPass works:
https://www.youtube.com/watch?v=OgsdGhY-TWM
https://www.youtube.com/watch?v=aX7wIp-r48c

How to create a non-perpendicular near clipping plane in Unity?

I have a need for setting up clipping planes that aren't perpendicular to the camera. Doing that for the far plane was easy: I just added a shader that clears the background.
I just can't figure out how to do the same for the near clipping plane. I've thought about solutions using multiple shaders and planes, a special cutting shader, multiple cameras, or somehow storing the view as a texture, but those ideas are mostly imperfect even where they are implementable. What I basically need is a shader that would say "don't render anything that's in front of me". Is that possible? Can I, e.g., make a shader that makes the passed pixels "final"?

Shader to bevel the edges of a cube?

This question relates to using shaders (probably in the Unity3D milieu, but Metal or OpenGL is fine) to achieve rounded edges on a mesh-minimal cube.
I wish to use only minimalist 12-triangle mesh cubes, and then, via the shader, make the edges (and corners) of each block slightly bevelled.
In fact, can this be done with a shader?
I recently finished creating such a shader. The only way I could make it work is by providing four normal vectors per vertex instead of one (smooth, sharp, and one for each edge of the triangle at the given vertex). You will also need one float3 to detect edges.
To add such data to a mesh, I made a custom mesh editor; it comes with the Playtime Painter asset from the Unity Asset Store. I will post the shader with the next update, and will also publish it on a public GitHub repository.
You can see some dark lines; that's because the shader starts to interpolate towards a normal vector facing away from the light source, but since there are no additional triangles, the result is visible on a triangle that is facing the camera.
Update (2/12/2018)
I realised that by clipping pixels that end up with a normal facing away from the camera, it is possible to smooth the outline shape. It wasn't tested for all possible scenarios, but it works great for simple shapes:
As per request added a comparison cube:
Currently, Playtime Painter includes a simplified version of that shader, which interpolates between two normal vectors and gives OK results on some edges.
I also wrote an article about it.
In general, relief mapping can modify the object silhouette, as in this picture. You'd need to prepare a heightmap that lowers at the borders, and that's it. However, I think that using such a shader might be overkill for such a simple effect, so maybe it's better to just build it into your geometry.

quartz 2d / openGl / cocos2d image distortion in iphone by moving vertices for 2.5d iphone game

We are trying to achieve the following in an iPhone game:
Using 2D PNG files, set up a scene that seems 3D. As the user moves the device, the individual PNG files would warp/distort accordingly to give the effect of depth.
Example of a scene: an empty room, five walls and a chair in the middle = six PNG files, layered.
We have successfully accomplished this using native functions like skew and scale. By applying transformations to the various walls and the chair as the device is tilted and moved, the walls skew/scale/translate. However, the problem is that since we are using six PNG files, the edges don't meet as we move the device. We need a new solution using a real engine.
Question:
We are thinking that, instead of applying skew/scale transformations, if given the freedom to move the vertices of the rectangular images, we could precisely distort the images and keep all the edges 100% aligned.
What is the best framework to do this in the LEAST amount of time? Are we going about this the correct way?
You should be able to achieve this effect (at least with regard to the perspective applied to the walls) using Core Animation layers and appropriate 3-D transforms.
A good example of constructing a scene like this can be found in the example John Blackburn provides here. He shows how to set up layers to represent the walls in a maze by applying the appropriate rotation and translation to them, then gives the scene perspective by using the trick of altering the m34 component of the CATransform3D for the scene.
I'm not sure how well your flat chair would look using something like this, but certainly you can get your walls to have a nice perspective to them. Using layers and Core Animation would let you pull off what you want using far less code than implementing this using OpenGL ES.
Altering the camera angle is as simple as rotating the scene in response to shifts in the orientation of the device.
If you're going to the effort of warping textures as they would be warped in a 3D scene, then why not let the graphics hardware do the hard work for you by mapping the textures onto 3D polygons, then changing your projection or moving the polygons around?
I doubt you could do it faster by restricting yourself to 2D transformations; the hardware is geared up to do 3x3 (well, 4x4 homogeneous) matrix multiplication.
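To illustrate that last point, here is a small Python sketch of a standard OpenGL-style perspective matrix applied to wall-corner points. The FOV, aspect ratio, and clip planes are arbitrary example values:

```python
import math

def perspective(fov_deg, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix (row-major)."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ]

def project(m, v):
    """Multiply the 4x4 matrix by (x, y, z, 1), then divide by w to get
    the on-screen position. This is what the GPU does per vertex."""
    x, y, z = v
    out = [sum(m[r][c] * p for c, p in enumerate((x, y, z, 1.0)))
           for r in range(4)]
    w = out[3]
    return (out[0] / w, out[1] / w)

m = perspective(60, 16 / 9, 0.1, 100)
# The same wall corner, nearer and farther from the camera: the farther
# point lands closer to the screen center, giving the perspective illusion.
print(project(m, (1.0, 1.0, -2.0)))
print(project(m, (1.0, 1.0, -4.0)))
```

Getting this convergence right with hand-built 2D skews is exactly the seam-alignment problem described in the question, which is why letting the hardware do the homogeneous divide is the simpler route.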