This question is about using shaders (probably in Unity3D, but Metal or OpenGL is fine) to achieve rounded edges on a minimal-mesh cube.
I wish to use only minimal 12-triangle cube meshes, and then, via the shader, have the edges (and corners) of each block appear slightly bevelled.
In fact, can this be done with a shader?
I recently finished creating such a shader. The only way I could make it work is by providing four normal vectors per vertex instead of one (a smooth normal, a sharp normal, and one for each of the two triangle edges that meet at that vertex). You will also need one float3 to detect edges.
To add such data to a mesh I made a custom mesh editor; it comes with the Playtime Painter asset from the Unity Asset Store. I will post the shader with the next update, and will also publish it to a public GitHub repository.
You can see some dark lines; this happens because the shader starts interpolating towards a normal vector that faces away from the light source, but since there are no additional triangles, the result is visible on a triangle that is facing the camera.
Update (2/12/2018)
I realised that by clipping pixels that end up with a normal facing away from the camera, it is possible to smooth the outline shape. It hasn't been tested for all possible scenarios, but it works great for simple shapes:
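In Cg/HLSL terms, that clipping step is essentially a couple of lines in the fragment shader. A minimal sketch, assuming the fragment input already carries a world-space position and normal (i.worldPos, i.worldNormal):

    // Sketch only: discard pixels whose interpolated/bent normal ends up
    // facing away from the camera, which trims the jagged silhouette.
    float3 viewDir = normalize(_WorldSpaceCameraPos - i.worldPos);
    float  facing  = dot(normalize(i.worldNormal), viewDir);
    clip(facing);   // clip() discards the fragment when its argument is negative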
As per request added a comparison cube:
Currently, Playtime Painter has a simplified version of that shader, which interpolates between two normal vectors and gives OK results on some edges.
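Purely as an illustration of that simplified two-normal interpolation (all names here are assumptions, not the actual Playtime Painter code), the blend could look roughly like this:

    // Illustrative sketch: blend a flat (sharp) normal towards a smooth normal
    // near a bevelled edge. edgeWeight is the interpolated edge-detection value:
    // it approaches 0 near an edge and 1 inside the face.
    float3 BevelNormal(float3 sharpNormal, float3 smoothNormal,
                       float edgeWeight, float bevelWidth)
    {
        // Inside the face keep the flat normal; near the edge bend it towards
        // the smooth (averaged) normal so lighting wraps around the edge.
        float t = saturate(edgeWeight / bevelWidth);
        return normalize(lerp(smoothNormal, sharpNormal, t));
    }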
I have also written an article about it.
In general, relief mapping can modify the object silhouette, as in this picture. You'd need to prepare a heightmap that lowers towards the borders, and that's it. However, I think using such a shader might be overkill for such a simple effect, so it may be better to just build it into your geometry.
Related
My game uses very low-poly models for most of its geometry, and my current outline shader, which inverts normals and "scales up" the material, doesn't really cut it for this. Admittedly, I have very little experience with Shader Graph, but I'm trying my best here. These outlines are part of the Render Objects renderer feature in URP.
Here is an example of the issue in question, as well as the shader graph itself.
The shader you made requires smoothed normals for all vertices. If one of your outlined objects needs a hard edge, you will get the following result, with gaps, because the cube doesn't have smooth normals.
And your models being very low-poly makes it a lot more visible.
I'd suggest just using a Fresnel node and attaching it to a Step node to get harsh outlines, which you can then connect to emission or something similar; it would probably only take 4-5 nodes.
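For reference, the Fresnel-plus-Step combination those nodes produce boils down to something like this in plain HLSL (a sketch; _FresnelPower, _OutlineThreshold and _OutlineColor are made-up property names):

    // Sketch of the Fresnel -> Step outline. normalWS and viewDirWS are the
    // normalized world-space normal and view direction.
    float fresnel = pow(1.0 - saturate(dot(normalWS, viewDirWS)), _FresnelPower);
    float outline = step(_OutlineThreshold, fresnel);                // hard 0/1 edge mask
    float3 color  = lerp(baseColor.rgb, _OutlineColor.rgb, outline); // e.g. feed into emission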
I've mocked up what I am trying to accomplish in the image below - trying to pinch the pixels in towards the center of an AR marker so when I overlay AR content the AR marker is less noticeable.
I am looking for some examples or tutorials that I can reference to start to learn how to create a shader to distort the texture but I am coming up with nothing.
What's the best way to accomplish this?
This can be achieved using GrabPass.
From the manual:
GrabPass is a special pass type - it grabs the contents of the screen where the object is about to be drawn into a texture. This texture can be used in subsequent passes to do advanced image based effects.
The way distortion effects work is basically that you render the contents of the GrabPass texture on top of your mesh, except with its UVs distorted. A common way of doing this (for effects such as heat distortion or shockwaves) is to render a billboarded plane with a normal map on it, where the normal map controls how much the UVs for the background sample are distorted. This works by transforming the normal from world space to screen space, multiplying it with a strength value, and applying it to the UV. There is a good example of such a shader here. You can also technically use any mesh and use its vertex normal for the displacement in a similar way.
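A stripped-down version of such a GrabPass distortion shader could look roughly like this (a sketch, not the linked example; _NormalMap and _Strength are assumed property names):

    Shader "Custom/GrabDistortionSketch"
    {
        Properties
        {
            _NormalMap ("Distortion Normal Map", 2D) = "bump" {}
            _Strength  ("Distortion Strength", Float) = 0.05
        }
        SubShader
        {
            Tags { "Queue" = "Transparent" }

            // Grab what has already been rendered behind this object into _GrabTexture.
            GrabPass { }

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _GrabTexture;
                sampler2D _NormalMap;
                float _Strength;

                struct v2f
                {
                    float4 pos    : SV_POSITION;
                    float4 grabUV : TEXCOORD0;
                    float2 uv     : TEXCOORD1;
                };

                v2f vert (appdata_full v)
                {
                    v2f o;
                    o.pos    = UnityObjectToClipPos(v.vertex);
                    o.grabUV = ComputeGrabScreenPos(o.pos);  // screen UVs for the grabbed texture
                    o.uv     = v.texcoord.xy;
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // Use the normal map as a 2D offset for the background sample.
                    float2 offset = UnpackNormal(tex2D(_NormalMap, i.uv)).xy * _Strength;
                    i.grabUV.xy += offset * i.grabUV.w;   // keep the perspective divide consistent
                    return tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(i.grabUV));
                }
                ENDCG
            }
        }
    }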
Apart from normal mapped planes, another way of achieving this effect would be to pass in the screen-space position of the tracker into the shader using Shader.SetGlobalVector. Then, inside your shader, you can calculate the vector between your fragment and the object and use that to offset the UV, possibly using some remap function (like squaring the distance). For instance, you can use float2 uv_displace = normalize(delta) * saturate(1 - length(delta)).
If you want to control exactly how and when this effect is applied, make sure the shader has ZTest and ZWrite set to Off, and then set the render queue so that it draws after the background but before your tracker.
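A rough sketch of that variant, reusing the grab UV setup from the shader above; _TrackerScreenPos is an assumed global set from C# via Shader.SetGlobalVector, and the pass carries the render state just described:

    // Render state for the pass (set in the ShaderLab block of the sketch above):
    //   Tags { "Queue" = "Geometry+1" }   // after the background, before the tracker content
    //   ZTest Off
    //   ZWrite Off

    float4 _TrackerScreenPos;   // xy = tracker position in screen-UV space (assumed global)
    float  _Strength;

    fixed4 frag (v2f i) : SV_Target
    {
        float2 screenUV = i.grabUV.xy / i.grabUV.w;        // UV into the grabbed texture
        float2 delta    = screenUV - _TrackerScreenPos.xy;

        // Pinch: sample further away from the tracker so the surrounding
        // pixels get pulled in towards it, fading out with distance.
        float2 uv_displace = normalize(delta) * saturate(1 - length(delta));
        return tex2D(_GrabTexture, screenUV + uv_displace * _Strength);
    }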
For AR apps, it is likely possible to avoid the performance overhead of GrabPass by using the camera background texture instead of a GrabPass texture. You can look inside your camera background script to see how it passes the camera texture to the shader and try to replicate that.
Here are two videos demonstrating how GrabPass works:
https://www.youtube.com/watch?v=OgsdGhY-TWM
https://www.youtube.com/watch?v=aX7wIp-r48c
I'm looking for ways to clip an entire unity scene to a set of 4 planes. This is for an AR game, where I want to be able to zoom into a terrain, yet still have it only take up a given amount of space on a table (i.e: not extend over the edges of the table).
Thus far I've got clipping working as I want for the terrain and a water effect:
The above shows a much larger terrain being clipped to the size of the table. The other scene objects aren't clipped, since they use unmodified standard shaders.
Here's a pic showing the terrain clipping in the editor.
You can see the clipping planes around the visible part of the terrain, and that other objects (trees etc) are not clipped and appear off the edge of the table.
The way I've done it involves adding parameters to each shader to define the clipping planes. This means customizing every shader I want to clip, which was fine when I was considering just terrain.
While this works, I'm not sure it's a great approach for hundreds of scene objects. I would need to modify whatever shaders I'm using, and then I'd have to be setting additional shader parameters every update for every object.
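For context, the per-shader customisation amounts to something like the following in each fragment program (a sketch; _ClipPlanes is an assumed global array set from C# with Shader.SetGlobalVectorArray):

    // Sketch of per-shader plane clipping. Each plane is a float4 holding
    // (normal.xyz, d), with the normal pointing towards the side to keep;
    // the array is set once per frame from C# via
    // Shader.SetGlobalVectorArray("_ClipPlanes", planes).
    float4 _ClipPlanes[4];

    void ClipToTable(float3 worldPos)
    {
        for (int p = 0; p < 4; p++)
        {
            // Signed distance to the plane; negative means outside the table.
            float d = dot(float4(worldPos, 1.0), _ClipPlanes[p]);
            clip(d);   // discards the fragment when d < 0
        }
    }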
Not being an expert in Unity, I'm wondering whether there are other approaches, not based on modifying each shader, that I might investigate.
The end goal is to render a scene within the bounds of some plane.
One easy way would be to use Box Colliders as triggers on each side of your plane. You could then turn off Renderers on objects staying in the trigger with OnTriggerEnter/OnTriggerStay and turn them on with OnTriggerExit.
You can also use Bounds.Contains.
I’ve read this article http://www.gamasutra.com/blogs/BrianKehrer/20160125/264161/VR_Distortion_Correction_using_Vertex_Displacement.php
about distortion correction with vertex displacement in VR. It also mentions some other ways of doing distortion correction. I use Unity for my experiments (and am trying to modify the Fibrum SDK, but that doesn't matter for my question, because I only want to understand how these methods work in general).
As I mentioned there are three ways of doing this correction:
Using a pixel-based shader.
Projecting the render target onto a warped mesh and rendering the final output to the screen.
Vertex displacement.
I understand how the pixel-based shader works. However, I don't understand the others.
The first question is about projecting the render target onto a warped mesh. As I understand it, I should first render the image from the game cameras to two tessellated quads (one for each eye), then apply the correction shader to these quads, and then draw the quads in front of the main camera. But I'm afraid I'm wrong.
The second one is about vertex displacement. Should I simply apply a shader (one that translates the vertex coordinates of every object from world space into inverse-lens-distorted screen space, i.e. lens space) to the camera?
p.s. Sorry for my terrible English, I just want to understand how it works.
For the third method (vertex displacement), yes, that's exactly what you do. However, you must be careful, because this is a non-linear transformation and it won't be properly interpolated between vertices. You need your meshes to be reasonably tessellated for this technique to work properly. Otherwise, long edges may be displayed as distorted curves, and you can potentially have z-fighting issues too. Here you can see a good description of the technique.
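As a rough sketch (in Unity Cg/HLSL terms; _K1 and _K2 stand in for your lens's distortion coefficients), the vertex shader projects the vertex as usual and then distorts the projected position:

    // Sketch of vertex-displacement distortion correction. _K1/_K2 are
    // assumed radial-distortion coefficients for the lens.
    float _K1;
    float _K2;

    float4 DistortVertex(float4 vertexOS)
    {
        float4 clipPos = UnityObjectToClipPos(vertexOS);

        // Work in normalized device coordinates (i.e. after the perspective divide).
        float2 ndc = clipPos.xy / clipPos.w;

        // Radial distortion: scale each point according to its squared
        // distance from the lens centre.
        float r2    = dot(ndc, ndc);
        float scale = 1.0 + _K1 * r2 + _K2 * r2 * r2;
        ndc *= scale;

        // Re-apply w so the hardware's divide reproduces the distorted position.
        clipPos.xy = ndc * clipPos.w;
        return clipPos;
    }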
For the warped distortion mesh, this is how it goes: you render the screen, without distortion, to a render texture. Usually this texture is bigger than your real screen resolution, to compensate for the effect of the distortion on the apparent resolution. Then you create a tessellated quad, distort its vertices, and render it to the screen using that texture. Since the vertices are distorted, the image is distorted with them.
I'm currently experimenting with OpenGL ES 1.1 on the iPhone and trying to get my head around some of the basics. So far I've managed to draw a grid of objects which are lit with one GL_LIGHT. Here is a screenshot of the current output (question to follow)...
So you can see that my test consists of a grid of about 140 cubes, some slightly elevated so I can see how the shaded areas work. Each cube uses this model (from Blender) and has normals and texture coordinates...
What's puzzling me is why I don't get 'uniform' lighting across the entire surface. Each cube seems to be lit individually, and I can kind of understand why that would be... but is it not possible to have the light transition 'normally', like it would if you arranged this model out of blocks and shone a light across it? I'd expect not to see a dark edge on each individual cube, but rather a smooth transition across the whole area.
(I'm still inwardly chuffed that I managed to get this far!)
Any help or explanations would be awesome.
Thanks,
Simon
The reason you don't get 'uniform' lighting is, I presume, because you are using per-vertex lighting. That is, the lighting is calculated per vertex and interpolated over each triangle making up the model. Since your cube has a pretty low polygon count, the transition of light across the model won't look smooth.
With OpenGL ES 1.1 there are two solutions to this: you can use higher-polygon-count models, or implement per-pixel (DOT3) lighting. I've not implemented the latter myself, but I have come across this problem before (my solution was to switch to OpenGL ES 2.0 and use shaders to perform per-pixel lighting).
Here is a link, which may be of use: What is DOT3 lighting?
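For reference, per-pixel diffuse is the same N·L maths as the fixed-function path, just evaluated per fragment instead of per vertex; in shader terms (Cg/HLSL syntax here, though the GLSL for ES 2.0 has the same structure) it is roughly:

    // Sketch: per-pixel (Lambert / DOT3-style) diffuse term for one
    // directional light. lightDir and lightColor are assumed uniforms.
    float3 PerPixelDiffuse(float3 normalWS, float3 albedo,
                           float3 lightDir, float3 lightColor)
    {
        float3 n     = normalize(normalWS);          // re-normalize after interpolation
        float  ndotl = saturate(dot(n, -lightDir));  // lightDir points from light to surface
        return albedo * lightColor * ndotl;
    }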
All the best!