Unity 3D: Pixelate individual 3D objects but retain resolution over distance

I've been doing research into how to create a pixelated effect in Unity for 3D objects. I came across the following tutorials, which were useful to me:
https://www.youtube.com/watch?v=dpNhymnBDQw
https://www.youtube.com/watch?v=Z8xB7i3W4CE
For the most part, it is what I am looking for, where I can pixelate certain objects in my scene (I don't want to render all objects as pixelated).
The effect, however, works based on the screen resolution, so objects become less pixelated as you get closer to them and vice versa:
Notice how the cube on the right consists of fewer pixels because it is further away.
Do you perhaps have an idea as to how you could keep the resolution of a pixelated object consistent regardless of the distance to it, as below? (This effect was done in Photoshop; I am unaware as to how to actually implement it.)
I'm not sure this is even possible with the approach most pixel-art tutorials use.
I was thinking that if you could use a per-object shader that renders the pixelated object, you could do some fancy shader math to keep the resolution consistent per object. However, I have no idea how you would even render a pixel effect with just a shader. (The only method I can think of is the one described in the videos, where you render the objects into a lower-resolution render texture and then upscale it to screen resolution, which you can't really do with a shader assigned to a material.)
Another thought I had was to render each object separately, using a dedicated camera for each object I want pixelated. I could set each camera a fixed distance from its object and blit the renders together onto the main camera. Since each pixelated object would be rendered by its own camera at a fixed distance, it would retain a fixed pixel resolution regardless of its distance from the main camera. (Essentially, you can think of it as converting each object into a sprite and rendering that sprite in the scene, thus keeping the resolution of each object consistent despite distance; there is a rough sketch of this idea at the end of this question.) But this obviously has its own set of problems, from performance to handling different orientations, etc.
Any ideas?
Ideally, I would be able to specify the resolution I want a specific 3D object in my scene to be pixelated to, and have it retain that resolution at any distance. This way I have the flexibility to render different objects at different resolutions.
I should mention that I am currently using the Universal Render Pipeline with a custom render feature that achieves my pixelated effect by downscaling a render texture and upscaling it back to screen resolution; I can change the resolution of the downscaled texture.
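
To make the second idea concrete, here is a rough, untested sketch of the one-camera-per-pixelated-object approach. The component name, the fixed 5-unit camera offset, and the billboard quad used for display are all illustrative assumptions, not a known-good setup:

    // Renders one object with its own camera into a small, point-filtered
    // RenderTexture, so its pixel density stays fixed regardless of how far
    // away the main camera is.
    using UnityEngine;

    public class PixelatedObject : MonoBehaviour
    {
        public Camera objectCamera;   // dedicated camera framing just this object
        public int resolution = 64;   // the per-object pixel resolution you want
        public Renderer billboard;    // quad/sprite in the scene showing the result

        RenderTexture lowRes;

        void Start()
        {
            lowRes = new RenderTexture(resolution, resolution, 16);
            lowRes.filterMode = FilterMode.Point;   // crisp pixels when upscaled
            objectCamera.targetTexture = lowRes;
            billboard.material.mainTexture = lowRes;
        }

        void LateUpdate()
        {
            // Match the main camera's viewing direction, but keep a fixed
            // distance so the pixel density never changes with distance.
            objectCamera.transform.rotation = Camera.main.transform.rotation;
            objectCamera.transform.position =
                transform.position - objectCamera.transform.forward * 5f;
        }
    }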

Related

In Unity, is there any way to get information about the object a given screen pixel is rendering?

I'm trying to figure out how, in a shader, to get information about the object that a given screen pixel is rendering.
I'm trying to make a 3D pixelation shader, which works by:
1. getting the render texture from the camera, and
2. pixelating it using Shader Graph.
This works fine.
I've also managed to make a pixel outline.
But the problem is that when two objects overlap, the outline just gets drawn as if they were the same object.
I'm not exactly sure how to get around this, but my idea is to:
1. somehow get, in the shader, the object information for each pixel in the render texture, and
2. draw the outlines separately based on that info.
But even after days of research, I couldn't get it working.
If you have any documentation or information about accessing object data in a shader, or just have another way of doing this, I would be glad to hear it. Thanks.
These are what I've tried and thought of so far:
1. Googling "Unity get access to object of pixel camera is rendering" (but I couldn't find anything useful).
2. Giving each object an outline before pixelating (it sort of works, but it is jittery).
3. Getting the object information from its depth value using the depth texture (it kind of works, but it's unstable, because if two objects are close together there is no way to distinguish them).
4. Getting the object information by casting a ray at every position a pixel will render (but that would mean 100k+ raycasts and 100k+ GetComponent calls every frame, which would be expensive).
First, read about deferred rendering; maybe it will help you find a nice solution.
Second, you can assign an ID to each object and render all objects into a render texture using this ID as a color. Then use that render texture in your shader to differentiate objects. Some bit logic and you'll get what you need.
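A minimal sketch of that ID trick, assuming the built-in render pipeline (URP would need a custom renderer feature instead of RenderWithShader) and a simple unlit shader that just outputs an _IdColor property; the names ObjectIdPass and _IdColor are illustrative:

    // Tags every renderer with a unique ID color, then renders the scene with
    // an ID-only replacement shader into a RenderTexture that the outline
    // shader can sample to tell objects apart.
    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class ObjectIdPass : MonoBehaviour
    {
        public Shader idShader;          // unlit shader that just outputs _IdColor
        public RenderTexture idTexture;  // sampled later by the outline shader

        void Start()
        {
            var renderers = FindObjectsOfType<Renderer>();
            for (int i = 0; i < renderers.Length; i++)
            {
                // Encode the index in the red/green channels (up to 65536 IDs);
                // property blocks apply per renderer, even under a replacement shader.
                var block = new MaterialPropertyBlock();
                block.SetColor("_IdColor",
                    new Color((i & 0xFF) / 255f, ((i >> 8) & 0xFF) / 255f, 0f, 1f));
                renderers[i].SetPropertyBlock(block);
            }
        }

        void LateUpdate()
        {
            // Render the whole scene with the ID shader into idTexture.
            var cam = GetComponent<Camera>();
            cam.targetTexture = idTexture;
            cam.RenderWithShader(idShader, "");
            cam.targetTexture = null;
        }
    }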

Recommendations for clipping an entire scene in Unity

I'm looking for ways to clip an entire Unity scene to a set of 4 planes. This is for an AR game, where I want to be able to zoom into a terrain yet still have it take up only a given amount of space on a table (i.e., not extend over the edges of the table).
Thus far I've got clipping working as I want for the terrain and a water effect:
The above shows a much larger terrain being clipped to the size of the table. The other scene objects aren't clipped, since they use unmodified standard shaders.
Here's a pic showing the terrain clipping in the editor.
You can see the clipping planes around the visible part of the terrain, and that other objects (trees etc) are not clipped and appear off the edge of the table.
The way I've done it involves adding parameters to each shader to define the clipping planes. This means customizing every shader I want to clip, which was fine when I was considering just terrain.
While this works, I'm not sure it's a great approach for hundreds of scene objects. I would need to modify whatever shaders I'm using, and then I'd have to set additional shader parameters every update for every object.
Not being an expert in Unity, I'm wondering whether there are other approaches, not "per shader" based, that I might investigate.
The end goal is to render a scene within the bounds of some plane.
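One thing worth noting about the "setting parameters every update for every object" concern: if the four planes are shared by the whole scene, they can be set once per frame as global shader properties, which every shader that declares them picks up automatically. A sketch, assuming the customized shaders read a float4 _ClipPlanes[4] uniform (the property name is illustrative):

    // Pushes the four clip planes to every shader in one call per frame,
    // with no per-object bookkeeping.
    using UnityEngine;

    public class GlobalClipPlanes : MonoBehaviour
    {
        // Each plane as (normal.xyz, distance), matching the assumed uniform.
        public Vector4[] planes = new Vector4[4];

        void Update()
        {
            Shader.SetGlobalVectorArray("_ClipPlanes", planes);
        }
    }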
One easy way would be to use Box Colliders as triggers on each side of your plane. You could then turn off the Renderers of objects inside a trigger with OnTriggerEnter/OnTriggerStay, and turn them back on with OnTriggerExit.
You can also use Bounds.Contains.
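A minimal sketch of the trigger variant, assuming a component like this sits on each edge trigger volume and the moving objects carry Rigidbodies (required for trigger events to fire); the Bounds.Contains alternative would instead check each renderer's position against a Bounds covering the table every frame:

    // Hides any object that wanders into an edge trigger volume and shows it
    // again when it leaves.
    using UnityEngine;

    public class HideOutsideTable : MonoBehaviour
    {
        void OnTriggerEnter(Collider other) { SetVisible(other, false); }
        void OnTriggerExit(Collider other)  { SetVisible(other, true); }

        static void SetVisible(Collider obj, bool visible)
        {
            foreach (var r in obj.GetComponentsInChildren<Renderer>())
                r.enabled = visible;
        }
    }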

How to get rid of "shadow teeth" in Unity?

I tried everything, but nothing affects this. The only thing is that when I change the shadow resolution to Low, the shadows become smoother (obviously), but still not the best. The shadows also look better when the viewing angle is less acute. Quality settings are at their highest, and the light source is a spotlight. The material on those objects uses the Standard shader. What am I doing wrong?
Image is enlarged.
You...can't. :(
The problem is that the shadows being cast are essentially just a texture, and texture points (aka "pixels") are square. This shadow texture is then "cast" from the light source (think of the light as a camera: every pixel it can see that is an object becomes a "dark" pixel in the lightmap; it's a bit more complicated than that, but not by much).
Your objects and light are definitely not squared up with each other, and in fact they can never be, as your cubes are rotated five to ten degrees from each other, forming a curve. That means some edge, somewhere, is going to get jaggy. This also explains why changing the light's position and orientation to a different angle affects the result: those edges align more closely (or less closely) with where the lightmap pixels are.
You can try various settings, such as Stable Fit or higher-quality shadows (this really just means "use a bigger texture", so the jaggies get smaller as the same volume is covered by more shadow pixels), but fundamentally you're not going to get a better result.
Unless...
You use baked lighting. Open the Lighting window (Window -> Lighting), set your lights to Baked rather than Realtime (this means they will not be realtime and may not move or otherwise change), and then, in the Lighting window, bake your lights.
This essentially creates a second texture that is wrapped around your objects to give them shadows, and its pixels line up differently, generally giving smoother shadow edges when an object's faces align with the shadow-casting edge (such as your stacked cubes). These textures can also be much larger than the runtime shadow textures, because they don't have to be recomputed every frame (realtime lights are restricted so they don't consume gigabytes of video RAM).
Baking will take a while, let it do its thing.
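If you prefer to script those steps rather than click through the Lighting window, a small editor-only sketch (it must live in an Editor folder; the menu path is made up):

    // Marks every light in the scene as Baked, then starts an async bake.
    using UnityEngine;
    using UnityEditor;

    public static class BakeAllLights
    {
        [MenuItem("Tools/Bake All Lights")]
        static void Bake()
        {
            foreach (var light in Object.FindObjectsOfType<Light>())
                light.lightmapBakeType = LightmapBakeType.Baked; // no longer realtime

            Lightmapping.BakeAsync(); // this can take a while
        }
    }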
Have you tried with Stable Fit (under Quality settings)?

Purpose of mipmaps for 2D sprites?

In current Unity, for use in Unity UI as conventional UI: for any texture imported as "Sprite (2D and UI)", "Generate Mip Maps" in fact always defaults to ON. Every time you drop an image in, you have to turn that off and apply.
As noted in the comments, these days you can actually use world-space UI canvases, and indeed advanced users may have (say) buttons that float over the head of Zelda in the far distance. However, if you're an everyday Unity user adding a button, just turn it off :)
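If turning it off by hand gets tedious, the step can be automated with an AssetPostprocessor; a sketch (editor-only, class name made up):

    // Disables "Generate Mip Maps" automatically for every texture imported
    // as a Sprite. Place this in an Editor folder.
    using UnityEditor;

    public class DisableSpriteMipmaps : AssetPostprocessor
    {
        void OnPreprocessTexture()
        {
            var importer = (TextureImporter)assetImporter;
            if (importer.textureType == TextureImporterType.Sprite)
                importer.mipmapEnabled = false;
        }
    }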
In Unity, "sprites" can still be positioned in 3D space. For example, on a world space canvas. Furthermore, mipmaps are used when the sprite is scaled. This is because the mipmap sampling is determined by the texel size rather than the distance.
If a sprite is flat and perfectly scaled then there is no reason to use mipmaps. This would likely apply to your icon example.
I suspect it is enabled by default for 2D games, where sprites will often not be perfectly scaled. To clarify, a sprite does not need to be on a canvas: sprites can exist as their own GameObject with a Sprite Renderer. In that case, zooming the camera changes the sprite's size on screen, so the texel size changes and mipmapping kicks in; this makes it challenging to keep a sprite always perfectly scaled without a canvas.

Rotate an object or change the culled face?

In OpenGL, which one would result in better performance: changing the culled face or rotating my object?
The scenario is the following:
I compute the matrix to feed into my shaders; this draws texture A with a certain culling setting (front). When I look at the object from the front I can see it, but from behind I can't, which is my desired behavior. Now I would like to add "something" behind it, let's say texture B, so that when the object is seen from behind, this other texture appears in the same position and orientation as texture A, but now showing texture B.
I thought that, rather than building a cube with two sides, I could simply "redraw on top" of my previous object. If I were to rotate the object, I suppose I can assume that OpenGL will simply not overwrite texture A, since it will not pass the face-culling test; this requires one extra matrix multiplication to mirror my object on its own axis. But what if I simply change the culling property and draw it again? Wouldn't that have the same effect of failing the test from the front but passing it from behind?
Which of these two options is better?
(PS: I am doing this on the iPhone, so every single bit of performance is quite relevant for older devices.)
Thank you.
I'm not sure I understood you correctly, but it sounds like you are trying to draw a billboard here. In that case, you should disable backface culling altogether and check gl_FrontFacing in your fragment shader. Depending on its value, you can flip your texture coordinates and switch textures (just bind both textures and send them to the shader). This way you have only one draw call and don't have to do any strange tricks or change the OpenGL state.
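For concreteness, a sketch of that fragment shader in GLSL ES; the uniform and varying names are illustrative, and backface culling is assumed disabled via glDisable(GL_CULL_FACE):

    // Picks texture A on front faces and texture B (with mirrored UVs) on
    // back faces, in a single draw call.
    precision mediump float;

    uniform sampler2D uTextureA;  // seen from the front
    uniform sampler2D uTextureB;  // seen from behind
    varying vec2 vTexCoord;

    void main()
    {
        if (gl_FrontFacing) {
            gl_FragColor = texture2D(uTextureA, vTexCoord);
        } else {
            // Mirror horizontally so texture B doesn't appear flipped.
            gl_FragColor = texture2D(uTextureB, vec2(1.0 - vTexCoord.x, vTexCoord.y));
        }
    }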