Unity shader to render objects with the same material to subsequent GrabPasses

Overview
I'm working on a shader in Unity 2017.1 to enable UnityEngine.UI.Image components to blur what is behind them.
Like some of the approaches in this Unity forum topic, I use GrabPasses, specifically a tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(<uv with offset>)) call, to look up the pixels that I use in my blur summations. I'm doing a basic two-pass box blur and am not looking to optimize performance right now.
This works as expected.
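In case it's useful for context, the core of the approach looks roughly like this (a trimmed, single-direction sketch, not my actual shader; the full source is linked below):

```
Shader "UI/BlurBehindSketch"
{
    SubShader
    {
        Tags { "Queue"="Transparent" }

        // Capture everything drawn so far into _GrabTexture
        GrabPass { }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _GrabTexture;
            float4 _GrabTexture_TexelSize;

            struct v2f
            {
                float4 vertex  : SV_POSITION;
                float4 grabPos : TEXCOORD0;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.vertex  = UnityObjectToClipPos(v.vertex);
                // Projective UV into the grab texture for this fragment
                o.grabPos = ComputeGrabScreenPos(o.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Horizontal box blur: average a row of grab-texture samples.
                // A second pass would repeat this vertically.
                fixed4 sum = 0;
                for (int k = -3; k <= 3; k++)
                {
                    float4 uv = i.grabPos;
                    // Offsets must be scaled by w because tex2Dproj divides by w
                    uv.x += k * _GrabTexture_TexelSize.x * uv.w;
                    sum += tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(uv));
                }
                return sum / 7;
            }
            ENDCG
        }
    }
}
```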
I also want to mask the blur effect based on the image alpha. I use tex2D(_MainTex, IN.uvmain) to look up the alpha color of the sprite on the pixel I am calculating the blur for, which I then combine with the alpha of the blur.
This works fine when working with just a single UI.Image object.
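The masking step itself amounts to only a few lines at the end of the fragment shader. Continuing the sketch above (uvmain is the sprite UV passed down from the vertex shader):

```
// Fade the blur effect by the sprite's alpha.
fixed4 spriteCol = tex2D(_MainTex, IN.uvmain); // the UI.Image sprite
fixed4 blurred   = sum / 7;                    // result of the box blur above
blurred.a *= spriteCol.a;                      // no sprite alpha, no blur
return blurred;
```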
The Problem
However when I have multiple UI.Image objects that share the same Material created from this shader, images layered above will cut into the images below:
I believe this is because objects with the same material may be drawn simultaneously and so don't appear in each other's GrabPasses, or at least something to that effect.
That at least would explain why, if I duplicate the material and use each material on its own object, I don't have this problem.
Here is the source code for the shader: https://gist.github.com/JohannesMP/8d0f531b815dfad07823d44bc12b8112
The Question
Is there a way to force objects of the same material to draw consecutively and not in parallel? Basically, I would like the result of a lower object's render passes to be visible to the grab pass of subsequent objects.
I could imagine creating a component that dynamically instantiates materials to force this, or using render textures, but I would really like a solution that doesn't require adding components or creating multiple materials to swap out.
I would love a solution that is entirely self-contained in one shader/one material, but I'm unsure if this is possible. I'm still only starting to get familiar with shaders, so I'm positive there are features I don't know about yet.

It turns out that it was my re-drawing of what I grabbed from the _GrabTexture that was causing the issue. By correctly handling the alpha logic there, I was able to get exactly the desired behavior.
Here is the updated source code: https://gist.github.com/JohannesMP/7d62f282705169a2855a0aac315ff381
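The crux of the change, paraphrased (BoxBlur here is a hypothetical stand-in for the summation above; see the gist for the real code):

```
fixed4 frag (v2f IN) : SV_Target
{
    fixed4 blurred = BoxBlur(IN.grabPos);          // hypothetical helper for the blur summation
    fixed  mask    = tex2D(_MainTex, IN.uvmain).a; // sprite coverage
    // Let the sprite's alpha drive the output alpha, so re-drawing the
    // grabbed background doesn't punch holes into images underneath.
    return fixed4(blurred.rgb, mask);
}
```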
As mentioned before, optimizing the convolution step was not my priority.

Related

Why does unity material not render semi-transparency properly?

I have a Unity material whose albedo is based on a spritesheet. The sprite has semi-transparency and is formatted as RGBA 32-bit.
The transparency renders in the sprite, but not in the material.
How do I get the transparency to render without also affecting the parts of the albedo that are supposed to be opaque?
I have tried setting render mode to transparent, fade, and unlit/transparent. The result looks like this:
I tried opaque, but it ruins the texture. I tried cutout, but the semi-transparent parts either drop out or become fully opaque, depending on the cutoff value.
There is no code to this.
I expect the output to make the semi-transparent parts of the material semi-transparent, and the opaque parts opaque. The actual output is either fully opaque or fully "semi-transparent", which is super annoying.
Edit
So I delayed work on this and added a submesh. That gets really close to solving the problem.
It's still doing that glitch.
Okay, good news and bad news. The good news is, this problem is not uncommon. It's not even unique to Unity. The bad news is, the reason it's not uncommon or unique to Unity is because it's a universal issue with no perfect solution. But we may be able to find you a workaround, so let's go through this together.
There's a fundamental issue in 3D Graphics: In what order do you draw things? If you're drawing a regular picture in real life, the obvious answer is you draw the things based on how far away from the viewer they are. This works fine for a while, but what do you do with objects that aren't cleanly "in front" of other things? Consider the following image:
Is the fruit in that basket in front of the bowl, or behind it? It's kind of neither, right? And even if you can split objects up into front and back, how do you deal with intersecting objects? Enter the Z-Buffer:
The Z-Buffer is a simple idea: When drawing the pixels of an object, you also draw the depth of those pixels. That is, how far away from the camera they are. When you draw a new object into the scene, you check the depth of the underlying pixel and compare it with the depth of the new one. If the new pixel is closer, you overwrite the old one. If the old pixel is closer, you don't do anything. The Z Buffer is generally a single channel (read: greyscale) image that never gets shown directly. As well as depth sorting, it can also be used for various post processing effects such as fog or ambient occlusion.
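Conceptually, the test the GPU performs for every candidate pixel is just this (pseudocode, assuming the default ZTest LEqual with ZWrite On):

```
// Per-fragment depth test, in pseudocode:
if (newDepth <= depthBuffer[x][y])   // is the incoming fragment at least as close?
{
    colorBuffer[x][y] = newColor;    // overwrite the old pixel...
    depthBuffer[x][y] = newDepth;    // ...and remember the new depth
}
// otherwise the fragment is discarded and nothing changes
```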
Now, one key property of the depth buffer is that it can only store one value per pixel. Most of the time this is fine; after all, if you're just trying to sort a bunch of objects, the only depth you really care about is the depth of the front-most one. Anything behind that front-most object will be rendered invisible, and that's all you need to know or care about.
That is, unless your front-most object is transparent.
The issue here is that the renderer doesn't know how to deal with drawing an object behind a transparent one. To avoid this, a smart renderer (including Unity) goes through the following steps:
Draw all opaque objects, in any order.
Sort all transparent objects by distance from the camera.
Draw all transparent objects, from furthest to closest.
This way, the chances of running into weird depth-sorting issues are minimized. But this will still fall apart in a couple of places. When you make your object use a transparent material, the fact that 99% of the object is actually solid is completely irrelevant. As far as Unity is concerned, your entire object is transparent, and so it gets drawn according to its depth relative to other transparent objects in the scene. If you've got lots of transparent objects, you're going to have problems the moment you have intersecting meshes.
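For reference, which bucket an object falls into is declared by its shader's render queue tag. A minimal transparent shader looks something like this (a sketch to show the relevant render states, not a fix for the sorting problem itself):

```
Shader "Example/SortedTransparent"
{
    SubShader
    {
        // "Queue"="Transparent" puts this material in the pass that Unity
        // sorts back-to-front; opaque geometry ("Geometry" queue) draws first.
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        ZWrite Off                        // don't write depth, so objects behind can still draw
        Blend SrcAlpha OneMinusSrcAlpha   // standard alpha blending

        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 frag (v2f_img i) : SV_Target
            {
                return fixed4(1, 1, 1, 0.5); // constant semi-transparent white
            }
            ENDCG
        }
    }
}
```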
So, how do you deal with these problems? You have a few options.
The first and most important thing you can do is limit transparent materials to areas that are explicitly transparent. I believe the rendering order is based on materials above all else, so having a mesh with several opaque materials and a single transparent one will probably work fine, with the opaque parts being rendered before the single transparent part, but don't quote me on that.
Secondly, if you have alternatives, use them. The reason "cutout" mode seems to be a binary mask rather than real transparency is because it is. Because it's not really transparent, you don't run into any of the depth sorting issues that you typically would. It's a trade-off.
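In shader terms, cutout mode is alpha testing: each fragment either survives fully opaque or is discarded entirely. A sketch of the idea (_Cutoff is the standard cutout threshold property name):

```
fixed4 frag (v2f i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.uv);
    clip(col.a - _Cutoff); // discard the fragment entirely if alpha < _Cutoff
    return col;            // survivors are drawn fully opaque and depth-sort normally
}
```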
Third, try to avoid large intersecting objects with transparent materials. Large bodies of water are notorious for causing problems in this regard. Think carefully about what you have to work with.
Finally, if you absolutely must have multiple large intersecting transparent objects, consider breaking them up into multiple pieces.
I appreciate that none of these answers are truly satisfying, but the key takeaway from all this is that this is not a bug so much as a limitation of the medium. If you're really keen, you could try digging into custom render pipelines that solve your problem explicitly, but keep in mind you'll be paying a premium in performance if you do.
Good luck.
You said you tried the Transparent shader mode, but did you try changing the alpha channel value in your material's color afterwards?
In the second image it looks like the alpha in RGBA is 0; try changing it.

Draw curved lines with texture and glow with Unity

I'm looking for an efficient way to draw curved lines and to make an object follow them in Unity.
I also need to draw them using a custom image and not a solid color.
And on top of that I would like to apply an outer glow to them, and not to the rest of the scene.
I'm not asking for a copy/paste solution for each of these elements; I list them all to give some context.
I did something similar in a web app using the HTML5 canvas to draw text progressively. Here's a gif showing the render:
I only used small line segments to draw what you see above. Here's a very big letter with thicker lines so the segments are more visible:
Of course it's not perfect, but the goal was to keep it simple and efficient. The gaps on the outer edges are not very visible at normal size.
This is used in an educational game running on mobile as a progressive web app. In real-world usage I attach a particle emitter to it for a better effect:
And it runs smoothly even on low end devices.
I don't want to recreate this exact effect on Unity but the core functionality is very close.
Because of how I did it the first time, I thought about manually building a big list of segments to draw, but Unity may have better tools for this kind of thing, maybe working directly with Bezier curves.
I'm a beginner in Unity, so I don't really know what the most efficient way to do it is.
I looked at the Line Renderer, which seemed (at first) to be a good choice, but I'm a little worried about performance with a list of 500+ points (considering mobiles are a target).
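To illustrate what I mean, here's the kind of approach I'm imagining (a minimal C# sketch, assuming a LineRenderer on the same GameObject and three control points assigned in the inspector):

```
using UnityEngine;

// Sample a quadratic Bezier curve into a LineRenderer instead of
// storing hundreds of hand-made segments.
[RequireComponent(typeof(LineRenderer))]
public class BezierLine : MonoBehaviour
{
    public Transform p0, p1, p2;   // control points
    public int samples = 32;       // far fewer than 500+ raw points

    void Start()
    {
        var line = GetComponent<LineRenderer>();
        line.positionCount = samples;
        for (int i = 0; i < samples; i++)
        {
            float t = i / (samples - 1f);
            line.SetPosition(i, Evaluate(t));
        }
    }

    // Standard quadratic Bezier: (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2
    Vector3 Evaluate(float t)
    {
        float u = 1f - t;
        return u * u * p0.position + 2f * u * t * p1.position + t * t * p2.position;
    }
}
```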
Also, the glow I would like to add may impact on the technique to choose.
Do you have any advice or direction to give me?
Thank you very much.

Unity: Filter to give slight color variations to whole scene?

In Unity, is there a way to give slight color variations to a scene (a hint of purple here, some yellow blur there) without adjusting every single texture? And for that to work in VR stereo images too (ideally in a semi-consistent way as one moves around, and perhaps also without having to use computation-heavy colored lights)? Many thanks!
A simple way to achieve this, if your color effect is fixed, would be to add a canvas that renders a semi-transparent image over the whole screen. But I suppose that you might prefer a dynamic effect.
To achieve that, look at Unity's Post Processing Stack. It allows you to add many post-process effects, such as chromatic aberration and color grading, that might allow you to do what you want.
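For example, with the Post Processing Stack v2 you could animate the color grading filter from a script. A sketch of the idea (assumes the package is installed and a PostProcessVolume with a Color Grading override already exists in the scene):

```
using UnityEngine;
using UnityEngine.Rendering.PostProcessing; // Post Processing Stack v2

// Slowly drift the scene's color filter for a subtle, dynamic tint.
// Because it is applied per-camera after rendering, it works in VR too.
public class ColorDrift : MonoBehaviour
{
    public PostProcessVolume volume; // assign in the inspector

    ColorGrading _grading;

    void Start()
    {
        volume.profile.TryGetSettings(out _grading);
    }

    void Update()
    {
        if (_grading == null) return;
        // Oscillate gently between a faint purple and a faint yellow tint.
        float t = 0.5f + 0.5f * Mathf.Sin(Time.time * 0.1f);
        _grading.colorFilter.overrideState = true;
        _grading.colorFilter.value = Color.Lerp(
            new Color(0.95f, 0.9f, 1f),   // faint purple
            new Color(1f, 1f, 0.9f),      // faint yellow
            t);
    }
}
```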

Unity sprites don't render properly

I recently ran into a problem I can't solve: my sprites don't draw properly. I have tried a lot of different things and couldn't find a solution.
Here is how the image should look in Unity:
And here is how it actually looks:
If someone could tell me how to fix this, I would be very grateful.
Presumably the top image is a screenshot from your image manipulation program; many such programs use the chequerboard pattern to mean transparency. As such, the image you have exported is a gradient going from almost solid white at the bottom to transparent at the top. This is why the image appears as it does in Unity.
Also, if you're wondering why the image appears to have bands of different colours, this is due to a problem called colour banding. It can be fixed by using a technique called dithering (which adds some noise to the image), but how you do so will depend on which image manipulation program you are using.

How can I draw 3D model outlines on the iPhone? (OpenGL ES)

I've got a pretty simple situation that calls for something I don't know how to do without a stencil buffer (which is not supported on the iPhone).
Basically, I've got a 3D model that gets drawn behind an image. I want an outline of that model to be drawn on top of it at all times. So when it's behind the image, you can see its outline, and when it's not behind the image, you can see the model with an outline.
An option to simply get an outline working would be to draw a wireframe of the model with thick lines and a z offset, then draw the regular model on top of it. The problem with this is obviously that I need the outline to be drawn after the model.
This method needs to be fast, as I'm already pushing a lot of polygons around - full-on drawing of the model again in one way or another is not really desired.
Also, is there any way to find out whether my model can be seen at the moment? That is, whether the image on top has an opaque or a transparent section at the position of the model. If I can figure this out (again, very quickly), then I can just draw a wireframe instead of a textured model, depending on whether it's visible.
Any ideas here? Thanks.
Most of the time you can re-create stencil effects using the alpha channel and render-to-texture, if you think about it...
http://research.microsoft.com/en-us/um/people/hoppe/proj/silmap/ is a technical paper on the matter. Hopefully there's an easier way for you to accomplish this ;)
Here is a general option that might produce the effect you want (I have experience with OGL, but not iPhone):
Method 1
Render the object to a texture as pure white, separately from the scene. This will produce a white mask where the object would be rendered.
Either draw this directly to the screen with an alpha fade for a "full object" effect, or, if you're intent on your outlines, you could try rendering THIS texture to another texture, slightly enlarged, then rendering the original "full object" shading over the enlarged texture as pure black. This will create a sort of outline texture that you could render over the top of the scene.
Method 2
Edited out. I just read the "no stencil buffer" stipulation.
Does that help?