Projecting Orthographic Matrix Through A Perspective Camera - unity3d

I need to render a mesh in screenspace, essentially like Unity's IMGUI. I say a mesh specifically as it is a constructed mesh of many UI tris, so I am not looking to use GL quads/tris, for example.
To avoid two cameras, I need to take the scene's perspective camera and draw that mesh with an orthographic projection, so that it is drawn orthographically to the screen. I've tried overriding Camera.projectionMatrix, with no success, along with GL.PushMatrix/GL.PopMatrix and GL.LoadProjectionMatrix calls.
Is there a way to draw a single mesh (e.g. with Graphics.DrawMeshNow, or a related call) with an orthographic projection matrix, using a perspective camera (or a way to force the camera into ortho just for rendering that one object)?
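Roughly the call sequence I have been attempting, as a minimal sketch (uiMesh, uiMaterial and running this from OnPostRender on the camera are placeholders and assumptions, not a confirmed working setup):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class OrthoOverlay : MonoBehaviour
{
    public Mesh uiMesh;
    public Material uiMaterial;

    void OnPostRender()
    {
        GL.PushMatrix();                                      // saves projection and modelview
        GL.LoadProjectionMatrix(Matrix4x4.Ortho(
            0f, Screen.width, 0f, Screen.height, -1f, 100f)); // pixel-space orthographic projection
        GL.modelview = Matrix4x4.identity;

        uiMaterial.SetPass(0);                                // bind the material before the immediate draw
        Graphics.DrawMeshNow(uiMesh, Matrix4x4.identity);     // mesh vertices interpreted in pixel space

        GL.PopMatrix();                                       // restore the camera's perspective matrices
    }
}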

Related

Can I make a unity camera mask inside a shape?

Is there a way to render a camera only inside a sphere (or any 3D object)?
I want to render two cameras, each with different post-processing effects, but one of them renders its effects only in a spherical area around the player (with feathered edges if possible), and the other renders the rest of the screen.

Draw free 2d shapes dynamically in unity

I want to be able to draw any 2D shape dynamically in a 2D world. My game has moving elements that look like circles when alone, but can fuse into complex shapes when they get close together. The elements can still move, and thus separate or modify the shape.
How can I draw that with Unity? I don't think the usual sprites can do the trick.
I believe you can do that with shaders (sounds complex to me, but it utilises the GPU).
Or...
Modify the sprite's vertices to introduce stretching of the texture, as in the sketch below.
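A very rough sketch of the vertex approach, assuming a plain rectangular sprite: Sprite.OverrideGeometry replaces the sprite's default quad, and the corner offsets here are only placeholder values to show the texture stretching.

using UnityEngine;

[RequireComponent(typeof(SpriteRenderer))]
public class StretchedSprite : MonoBehaviour
{
    void Start()
    {
        Sprite sprite = GetComponent<SpriteRenderer>().sprite;
        Rect r = sprite.rect;

        // Vertex positions are given in the sprite's rect space and are clamped to the rect.
        Vector2[] vertices =
        {
            new Vector2(0f, 0f),                          // bottom-left
            new Vector2(r.width, 0f),                     // bottom-right
            new Vector2(0f, r.height),                    // top-left
            new Vector2(r.width * 0.6f, r.height * 0.6f), // top-right pulled inwards
        };
        ushort[] triangles = { 0, 1, 2, 2, 1, 3 };

        sprite.OverrideGeometry(vertices, triangles);     // replaces the default sprite geometry
    }
}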

Shader to warp or pinch specific areas of a texture in Unity3d

I've mocked up what I am trying to accomplish in the image below - trying to pinch the pixels in towards the center of an AR marker, so that when I overlay AR content the marker is less noticeable.
I am looking for some examples or tutorials that I can reference to start to learn how to create a shader to distort the texture but I am coming up with nothing.
What's the best way to accomplish this?
This can be achieved using GrabPass.
From the manual:
GrabPass is a special pass type - it grabs the contents of the screen where the object is about to be drawn into a texture. This texture can be used in subsequent passes to do advanced image based effects.
The way distortion effects work is basically that you render the contents of the GrabPass texture on top of your mesh, except with its UVs distorted. A common way of doing this (for effects such as heat distortion or shockwaves) is to render a billboarded plane with a normal map on it, where the normal map controls how much the UVs for the background sample are distorted. This works by transforming the normal from world space to screen space, multiplying it with a strength value, and applying it to the UV. There is a good example of such a shader here. You can also technically use any mesh and use its vertex normal for the displacement in a similar way.
Apart from normal mapped planes, another way of achieving this effect would be to pass in the screen-space position of the tracker into the shader using Shader.SetGlobalVector. Then, inside your shader, you can calculate the vector between your fragment and the object and use that to offset the UV, possibly using some remap function (like squaring the distance). For instance, you can use float2 uv_displace = normalize(delta) * saturate(1 - length(delta)).
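As a minimal sketch of the C# side of that idea (the _TrackerScreenPos property name and the tracker/targetCamera fields are placeholders, not an existing API):

using UnityEngine;

public class TrackerScreenPosition : MonoBehaviour
{
    public Transform tracker;   // the AR marker / tracked object
    public Camera targetCamera; // the camera rendering the AR view

    void Update()
    {
        // Viewport space: (0,0) is the bottom-left and (1,1) the top-right of the screen.
        Vector3 vp = targetCamera.WorldToViewportPoint(tracker.position);
        Shader.SetGlobalVector("_TrackerScreenPos", new Vector4(vp.x, vp.y, vp.z, 0f));
    }
}

Inside the shader you would then compute the delta between the fragment's screen position and _TrackerScreenPos.xy and apply the remapped offset to the GrabPass UV.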
If you want to control exactly how and when this effect is applied, make sure the shader has ZTest and ZWrite set to Off, and then set the render queue to be after the background but before your tracker.
For AR apps, it is likely possible to avoid the performance overhead of GrabPass by using the camera background texture instead of a GrabPass texture. You can try looking inside your camera background script to see how it passes the camera texture to the shader and try to replicate that.
Here are two videos demonstrating how GrabPass works:
https://www.youtube.com/watch?v=OgsdGhY-TWM
https://www.youtube.com/watch?v=aX7wIp-r48c

Using distance or colliders to determine what to render

I have some objects I only want to render within a certain distance of the player. The objects don't have any colliders originally, so I am wondering what would be best for performance. Do I:
Check the distance between each object and the player and render within a certain distance (a minimal sketch of this check follows after this list), or
Add a trigger around my player, add colliders to all the objects (using Project Settings > Physics to make sure the layers only interact with each other), and use OnTriggerEnter/Exit to determine when to render the object?
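For reference, a minimal sketch of the distance check (the first option), assuming each object's renderer can simply be toggled; player and renderDistance are placeholders:

using UnityEngine;

[RequireComponent(typeof(Renderer))]
public class DistanceActivatedRenderer : MonoBehaviour
{
    public Transform player;
    public float renderDistance = 30f;

    Renderer rend;

    void Awake()
    {
        rend = GetComponent<Renderer>();
    }

    void Update()
    {
        // Compare squared distances to avoid a square root per object per frame.
        float sqrDistance = (transform.position - player.position).sqrMagnitude;
        rend.enabled = sqrDistance <= renderDistance * renderDistance;
    }
}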
You can use the camera's near/far clipping planes (frustum culling) to adjust the rendering distance.
If you are looking for something more advanced, use occlusion culling, or a combination of both:
Occlusion Culling is a feature that disables rendering of objects when they are not currently seen by the camera because they are obscured (occluded) by other objects. This does not happen automatically in 3D computer graphics since most of the time objects farthest away from the camera are drawn first and closer objects are drawn over the top of them (this is called “overdraw”). Occlusion Culling is different from Frustum Culling. Frustum Culling only disables the renderers for objects that are outside the camera’s viewing area but does not disable anything hidden from view by overdraw. Note that when you use Occlusion Culling you will still benefit from Frustum Culling.
You can also use Camera.layerCullDistances if you want different camera cull distances for different objects:
Normally Camera skips rendering of objects that are further away than farClipPlane. You can set up some Layers to use smaller culling distances using layerCullDistances. This is very useful to cull small objects early on, if you put them into appropriate layers.
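A minimal sketch of the layerCullDistances approach, assuming the small objects have been put on layer 8 (the layer index and distance are placeholder values):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class PerLayerCulling : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();

        float[] distances = new float[32]; // one entry per layer; 0 means "use farClipPlane"
        distances[8] = 30f;                // cull objects on layer 8 beyond 30 units
        cam.layerCullDistances = distances;
    }
}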

VR distortion correction methods

I’ve read this article http://www.gamasutra.com/blogs/BrianKehrer/20160125/264161/VR_Distortion_Correction_using_Vertex_Displacement.php
about distortion correction with vertex displacement in VR. It also mentions some other ways of doing distortion correction. I use Unity for my experiments (and try to modify the Fibrum SDK, but that does not matter for my question, because I only want to understand how these methods work in general).
As I mentioned, there are three ways of doing this correction:
Using a pixel-based shader.
Projecting the render target onto a warped mesh and rendering the final output to the screen.
The vertex displacement method.
I understand how the pixel-based shader works. However, I don't understand the others.
The first question is about projecting the render target onto a warped mesh. As I understand it, I should first render the image from the game cameras to two tessellated quads (one for each eye), then apply a correction shader to these quads, and then draw the quads in front of the main camera. But I'm afraid I'm wrong.
The second one is about vertex displacement. Should I simply apply a shader (one that transforms the vertex coordinates of every object from world space into inverse-lens-distorted screen space, i.e. lens space) to the camera?
p.s. Sorry for my terrible English, I just want to understand how it works.
For the third method (vertex displacement), yes. That's exactly what you do. However, you must be careful because this is a non-linear transformation, and it won't be properly interpolated between vertices. You need your meshes to be reasonably tessellated for this technique to work properly. Otherwise, long edges may be displayed as distorted curves, and you can potentially have z-fighting issues too. Here you can see a good description of the technique.
For the warped distortion mesh, this is how it goes. You render the screen, without distortion, to a render texture. Usually this texture is bigger than your real screen resolution to compensate for the effects of distortion on the apparent resolution. Then you create a tessellated quad, distort its vertices, and render it to the screen using that texture. Since the vertices are distorted, it will distort your image.
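As a rough sketch for a single eye (eyeCamera, distortedQuad and outputCamera are placeholder fields, and the doubled render texture size is just an example value):

using UnityEngine;

public class WarpedMeshDistortion : MonoBehaviour
{
    public Camera eyeCamera;        // renders the undistorted scene
    public Renderer distortedQuad;  // tessellated quad whose vertices are pre-distorted by the inverse lens function
    public Camera outputCamera;     // camera that only sees the quad and outputs to the screen

    RenderTexture sceneTexture;

    void Start()
    {
        // Render larger than the screen to compensate for the apparent resolution lost to distortion.
        sceneTexture = new RenderTexture(Screen.width * 2, Screen.height * 2, 24);
        eyeCamera.targetTexture = sceneTexture;
        distortedQuad.material.mainTexture = sceneTexture;
        // outputCamera then renders the textured, pre-distorted quad directly to the screen.
    }
}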