Cull off parts above the mesh - unity3d

So, I want to make a scene similar to this Sphere Scene.
Now I have a randomly generated mesh as the ground, and a sphere. But I don't know how to cull off the sphere's geometry above the mesh. I tried using the stencil buffer and a height map. With the stencil, the ground renders in front, but the part of the sphere above the ground is still rendered. Using a height map to decide whether a fragment should be rendered (I compared the height map against the world position) is problematic, because the texture is mapped over the whole sphere rather than projected onto it. Can you help? Is there a shader function to cull off everything above the mesh?

I did something similar for an Asteroids demo a few years ago. Whenever an asteroid was hit, I used a height map - really, just a noise map - to offset half of the vertices on the asteroid model to give it a broken-in-half look. For the other half, I just duplicated the asteroid model and offset the other half using the same noise map. The effect is that the two "halves" matched perfectly.
Here's what I'd try:
Your sphere model should be a complete sphere.
You'll need a height map for the terrain.
In your sphere's vertex shader, for any vertex north of the equator:
Sample the height map.
Set the vertex's Y coordinate to the height from the height map. This will effectively flatten the top of the sphere and then offset it based on your height map. You will likely have to scale the height value here to get something reasonable.
Transform the new x,y,z as usual.
Note that you are not texturing the sphere. You're modifying the geometry. This needs to happen in the geometry part of the pipeline, not in the fragment shader.
The other thing you'll need to consider is how to add the debris - rocks, etc. - so that it matches the geometry offset on the sphere. Since you've got a height map, that should be straightforward.
To start with, I'd just get your vertex shader to flatten the top half of the sphere. Once that works, add in the height map.
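Here is a minimal sketch of that first pass as a Built-in pipeline unlit shader; the property names (_HeightMap, _HeightScale, _TerrainSize) and the XZ projection used to build the height-map UV are my own assumptions, not anything from the original scene:

Shader "Hypothetical/FlattenedSphere"
{
    Properties
    {
        _HeightMap   ("Height Map", 2D) = "black" {}
        _HeightScale ("Height Scale", Float) = 1
        _TerrainSize ("Terrain Size", Float) = 10
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma target 3.0
            #include "UnityCG.cginc"

            sampler2D _HeightMap;
            float _HeightScale;
            float _TerrainSize;

            struct appdata { float4 vertex : POSITION; };
            struct v2f     { float4 pos : SV_POSITION; };

            v2f vert (appdata v)
            {
                v2f o;
                // Only vertices "north of the equator" (object-space y > 0) are moved.
                if (v.vertex.y > 0)
                {
                    // Project the vertex onto the XZ plane and use that as the height-map UV.
                    float2 uv = v.vertex.xz / _TerrainSize + 0.5;
                    // Vertex stage, so sample with tex2Dlod (no derivatives available here).
                    float h = tex2Dlod(_HeightMap, float4(uv, 0, 0)).r;
                    // Flatten the top half, then offset it by the scaled height value.
                    v.vertex.y = h * _HeightScale;
                }
                o.pos = UnityObjectToClipPos(v.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return fixed4(0.5, 0.5, 0.5, 1);   // flat grey; lighting omitted for brevity
            }
            ENDCG
        }
    }
}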
For this to look convincing, you'll need a fairly high-resolution sphere and height map. To cut down on geometry, you could use a plane for the terrain and a hemisphere for the bottom part. Just discard any fragment for the plane that is not within the spherical volume you're interested in. (You could also use a circular "plane" rather than a rectangular plane, but getting the vertices to line up with the sphere and filling in holes at the border can be tricky.)

As I realised, there's no standard way to cull it without artifacts. The only way it can really be done is with raymarched rendering.

Related

Evaluate depth for orthographic camera

I have a post-processing shader. For simplicity, my post-processing shader only shows the _CameraDepthTexture at the given uv. This shader is written in code (not Shader Graph).
I'm moving to Shader Graph and I want to have a material for all of my objects that achieves the exact same effect (shows the same depth color), although I can't use the Scene Depth node. How can I generate the exact same color for my objects in Shader Graph?
As the depth is related to the distance between the camera and the objects, I'm trying to set the depth like this:
I take the vector (vertex world position - camera world position).
I project this vector onto the camera's forward direction vector.
I remap the length of this projection from (near plane, far plane) to (1, 0).
It looks like my depth is the same as _CameraDepthTexture, but when objects are too close to the camera, they are different (my version is darker).
How can I write a shader without Scene Depth node that generates the exact same color as _CameraDepthTexture? My camera is orthographic with orthographic size 10.4, near = -50 and far = 50.
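For what it's worth, here is a minimal sketch of the relationship I'd expect for an orthographic camera, written as a Shader Graph Custom Function body. It assumes a reversed-Z platform (where the depth texture stores 1 at the near plane and 0 at the far plane), and that the camera's near/far planes come from _ProjectionParams; the function name is made up:

// Hypothetical Custom Function body (HLSL) reconstructing the raw depth an
// orthographic camera writes for a fragment at WorldPos.
void OrthoDepth_float(float3 WorldPos, out float RawDepth)
{
    // Eye-space depth: distance along the camera's forward axis.
    // The view matrix looks down -Z, so forward is the negated third row.
    float3 camForward = -UNITY_MATRIX_V[2].xyz;
    float eyeDepth = dot(WorldPos - _WorldSpaceCameraPos, camForward);

    // _ProjectionParams: y = near plane, z = far plane (here -50 and 50).
    float nearPlane = _ProjectionParams.y;
    float farPlane  = _ProjectionParams.z;

    // For an orthographic projection the depth buffer is linear in eye depth.
    float d01 = (eyeDepth - nearPlane) / (farPlane - nearPlane);

    // Flip for reversed-Z platforms; skip this flip on OpenGL-style depth.
    RawDepth = 1.0 - d01;
}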

Any way to get a URP shader to tile at a constant size, regardless of face orientation?

I need a shader which I can apply to a surface and have it tile a texture at a constant size. Think 'stretchable brick wall'. I looked at this world-space shader hoping I could adapt it: https://www.youtube.com/watch?v=vIh_6xtBwsI&ab_channel=JustinFoley
The problem with world UVs is that they only project along the major axes. I need the projection to follow the rotation of the object, just not be affected by its scale.
This is what I was trying:
But as you can imagine, it is still affected by scale, and appears to align with the Y plane.
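One way I'd sketch this (Built-in/CG style; the _TilesPerMeter property and the helper's name are my own): triplanar-map in object space so the mapping follows the object's rotation, then multiply the object-space position by the per-axis scale pulled from unity_ObjectToWorld so the tiling stays at a constant physical size regardless of scale.

// Hypothetical fragment-shader helper: triplanar tiling in object space,
// re-expanded by the object's per-axis scale so the texture keeps a constant
// world-space size however the object is scaled or rotated.
sampler2D _MainTex;
float _TilesPerMeter;   // hypothetical tiling control

fixed4 SampleConstantTile(float3 objectPos, float3 objectNormal)
{
    // Per-axis scale of the object, read from the object-to-world matrix columns.
    float3 scale = float3(
        length(unity_ObjectToWorld._m00_m10_m20),
        length(unity_ObjectToWorld._m01_m11_m21),
        length(unity_ObjectToWorld._m02_m12_m22));

    // Object-space position in world-sized units: follows the object's rotation,
    // but tiles at a fixed physical size.
    float3 p = objectPos * scale * _TilesPerMeter;

    // Standard triplanar blend using the object-space normal.
    float3 w = abs(objectNormal);
    w /= (w.x + w.y + w.z);

    fixed4 cx = tex2D(_MainTex, p.yz);
    fixed4 cy = tex2D(_MainTex, p.xz);
    fixed4 cz = tex2D(_MainTex, p.xy);
    return cx * w.x + cy * w.y + cz * w.z;
}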

HLSL lighting based on texture pixels instead of screen

In HLSL, how can I calculate lighting based on pixels of a texture, instead of pixels that make up the object?
In other words, if I have a 64x64px texture being rendered on a 1024x768px screen, I want to calculate the lighting as it affects the 64x64px space, resulting in jagged pixels instead of a smooth line.
I've researched dozens of answers, but I'm not sure how I can determine, at all times, whether a fragment belongs to a texel that should be fully lit or not. Maybe this is the wrong approach?
The current implementation uses a diffuse texture and a normal map. It results in what appear to be artifacts (diagonal lines) in the output:
Note: The reason it almost looks correct is because of the normal map, which causes some adjacent pixels to have normals that are angled just enough to light some pixels and not others.
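One common trick for this per-texel look is to snap the interpolated UV to the centre of the nearest texel before sampling anything used in the lighting math, so every screen fragment covering the same texel gets identical inputs. A minimal sketch, assuming the Built-in pipeline (UnityCG.cginc for UnpackNormal), a tangent-space light direction passed in from the vertex stage, and _MainTex_TexelSize filled in by Unity:

// Hypothetical fragment stage: quantise the UV to texel centres so lighting
// is evaluated per texel of the 64x64 texture, not per screen pixel.
sampler2D _MainTex;
sampler2D _NormalMap;
float4 _MainTex_TexelSize;   // set by Unity: (1/width, 1/height, width, height)

struct v2f
{
    float4 pos        : SV_POSITION;
    float2 uv         : TEXCOORD0;
    float3 lightDirTS : TEXCOORD1;   // tangent-space light dir from the vertex stage
};

fixed4 frag (v2f i) : SV_Target
{
    // Snap to the centre of the texel this fragment falls in.
    float2 texel = _MainTex_TexelSize.zw;              // e.g. (64, 64)
    float2 uv = (floor(i.uv * texel) + 0.5) / texel;

    fixed4 albedo = tex2D(_MainTex, uv);
    float3 n = UnpackNormal(tex2D(_NormalMap, uv));

    // Every fragment inside a texel now sees the same UV and normal, so the
    // lit/unlit boundary follows texel edges and looks jagged, as intended.
    float ndotl = saturate(dot(normalize(n), normalize(i.lightDirTS)));
    return albedo * ndotl;
}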

Find the angle of the face under the mouse pointer in Unity 3D

I have a projector component, and I need to find the angle the projected texture falls at so I can exclude projection onto vertical faces.
My projector is under the mouse pointer and works fine when it is over a horizontal face:
I would like the projector to switch off on vertical faces to avoid this bad effect:
If possible, I would like to do it in the shader code, to avoid the vertical projected image even when the cursor is located on the corner of a horizontal face and part of the projection spills over onto a vertical face.
I found this solution in C#:
Ray MouseRay = Camera.main.ScreenPointToRay(Input.mousePosition); // ray from the camera through the mouse position
RaycastHit hitInfo;
if (Physics.Raycast(MouseRay, out hitInfo)) {
    if (hitInfo.normal.y > 0) {
        // draw
    } else {
        // don't draw
    }
}
But it only works on curved surfaces and not, for example, on the faces of cubes.
How can I do this properly?
Normally you would use an image on a quad with TGA transparency, which rotates itself to match the face that the middle of the object is aligned to, using a ray to find the hit point and aligning to its normal.
Other ways of doing it would be quite tricky, perhaps using decals... If you did it using a shader, it would take a lot of time; it's a case of problem solving not being ordered by importance for fast development. Technically you can project a volumetric texture on top of whatever object you are using... that way you can add your barred circle, projected from a point in space towards the object, as a mathematical formula. It takes a while to do; check out volumetric textures. I have written some, and in your case it needs the mouse position sent to the texture, plus maths to add the transparent zone and the red zone to the texture. It takes all day.
It's fine to have a flat circle that flips around when you move the pointer onto a different face; it will just look like a physical card, and it's much easier to code: 10 minutes instead of many hours.
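If you do want to try the shader route anyway, here is a minimal sketch of the normal-threshold idea from the question, applied inside the projector's own material; the _MaxSlope property and the v2f layout are my own assumptions:

// Hypothetical fragment of a Unity Projector shader: fade out the projected
// texture on steep faces by testing the receiving surface's world-space normal.
sampler2D _ShadowTex;
float _MaxSlope;   // 0 = only perfectly horizontal faces receive the projection

struct v2f
{
    float4 pos         : SV_POSITION;
    float4 uvShadow    : TEXCOORD0;   // projector UVs (from unity_Projector)
    float3 worldNormal : TEXCOORD1;   // receiver normal, passed from the vertex stage
};

fixed4 frag (v2f i) : SV_Target
{
    fixed4 col = tex2Dproj(_ShadowTex, UNITY_PROJ_COORD(i.uvShadow));

    // How "upward" the receiving face is: 1 = horizontal, 0 = vertical.
    float up = normalize(i.worldNormal).y;

    // Discard the projected texel wherever the face is steeper than allowed.
    clip(up - (1.0 - _MaxSlope));

    return col;
}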

Problem with glTranslatef

I use the glTranslate command to shift the position of a sprite which I load from a texture in my iPhone OpenGL app. My problem is that after I apply glTranslatef, the image appears a little blurred. When I comment out that line of code, the image is crystal clear. How can I resolve this issue?
You're probably not hitting the screen pixel grid exactly, which causes texture filtering to blur the result. The issue is a bit subtle: instead of seeing the screen and the texture as arrays of points, see them as sheets of grid-ruled paper (the texture sheet can be stretched, sheared, and scaled). To make things look crisp, the grids must align perfectly. The texture coordinates (0,0) and (1,1) don't hit the centers of the corner texels but the outer edges of the texture sheet, so you need a small offset and scale to address the texel centers. The same goes for placing the target quads on the screen: the vertex positions must be aligned with the pixel edges, not the pixel centers. If your projection and modelview matrices are not set up so that one unit in modelview space is one pixel wide and the projection fills the whole screen (or window viewport), it's difficult to get this right.
One normally starts with
glViewport(0,0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// modelview XY range 0..width x 0..height now covers the whole viewport
// (0,0) doesn't address the lower left pixel but the lower left corner of the viewport
// (width,height) similarly addresses the upper right corner
// drawing a 0..width x 0..height quad with texture coordinates 0..1 x 0..1
// will cover it perfectly
This will work as long as the quad has exactly the same dimensions as the texture (i.e. its vertex positions match the texel grid) and the vertex positions are integers.
Now the interesting part is: what if they don't meet those conditions? Then aliasing occurs. In GL_NEAREST filtering mode things still look crisp, but some lines/rows are simply missing. In GL_LINEAR filtering mode neighbouring pixels are interpolated, with the interpolation factor determined by how far off the grid they are (in layman's terms; the actual implementation looks slightly different).
So how to solve your issue: draw sprites with a projection/modelview that matches the viewport, use only integer coordinates for the vertex positions, and make your texture cover the whole quad. If you're using only a part of the texture coordinate range, things get even more interesting, since texture coordinates address the texture grid, not the texel centers.
I would recommend looking at your modelview matrix setup and making sure that glLoadIdentity() is being called, to ensure that the matrix stack is clean before applying the transform.