Usage of "Don't Clear" in "Clear Flags" property of Camera - unity3d

In Unity's Camera component, there is a Clear Flags property that lets you choose from four options: Skybox, Solid Color, Depth Only, and Don't Clear.
As the documentation says:
Don’t clear
This mode does not clear either the color or the depth buffer. The
result is that each frame is drawn over the next, resulting in a
smear-looking effect. This isn’t typically used in games, and would
more likely be used with a custom shader.
Note that on some GPUs (mostly mobile GPUs), not clearing the screen
might result in the contents of it being undefined in the next frame.
On some systems, the screen may contain the previous frame image, a
solid black screen, or random colored pixels.
"This isn't typically used in games and would more likely be used with a custom shader"
So my question is: how do you use it with a custom shader, and what effects can be achieved by using it?
Has anyone ever used it, or does anyone have a good explanation of the basic concept?
Thanks

An idea would be those enemy encounter effects in Final Fantasy games. Look at the top edge of this gif to see the smearing effects of previous frames. This is probably combined with blur/rotation.

The question is a bit old; however, I had this problem and solved it.
I've made a screen image effect that reproduces it; you can see it here:
https://github.com/falconmick/ClearFlagsMobile
Hope this helps!
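For anyone landing here later, here's a minimal sketch of the idea from script (my own example, not code from the linked repo; the shader name and fade value are assumptions to tune for your own trail effect):

```csharp
using UnityEngine;

// Minimal sketch: put the camera in "Don't Clear" mode and fade old
// frames out by drawing a translucent black quad after each render.
// Attach to the camera (built-in render pipeline).
[RequireComponent(typeof(Camera))]
public class SmearEffect : MonoBehaviour
{
    [Range(0f, 1f)]
    public float fadeAmount = 0.05f; // how quickly old frames fade to black

    Material fadeMaterial;

    void Start()
    {
        GetComponent<Camera>().clearFlags = CameraClearFlags.Nothing; // "Don't Clear"

        // Any alpha-blended shader works for the fade quad; Sprites/Default
        // is just a convenient built-in choice.
        fadeMaterial = new Material(Shader.Find("Sprites/Default"));
    }

    void OnPostRender()
    {
        // Draw a fullscreen quad with low alpha so previous frames
        // gradually fade out instead of smearing forever.
        GL.PushMatrix();
        GL.LoadOrtho();
        fadeMaterial.SetPass(0);
        GL.Begin(GL.QUADS);
        GL.Color(new Color(0f, 0f, 0f, fadeAmount));
        GL.Vertex3(0f, 0f, 0f);
        GL.Vertex3(1f, 0f, 0f);
        GL.Vertex3(1f, 1f, 0f);
        GL.Vertex3(0f, 1f, 0f);
        GL.End();
        GL.PopMatrix();
    }
}
```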

Related

UNITY: Everything looks Low Resolution. Even SVG vectors

I'm new to Unity, so hopefully this is an easy fix.
So everything looks super low res for me. I wish my images looked high res.
Even the SVGs look low res, even though they're vector nodes. I don't get that at all, but I assume Unity doesn't play with SVG yet? The black outline graphic is SVG; the rest are PNGs with alpha.
Take a peek at my three different windows. Let me know your suggestions (remember, I don't know anything, so the easiest thing can be overlooked).
There is a "Scale" slider on top of the game view. Right now it is set to 2.8x. When you do that Unity just zooms in but it doesn't set the the resolution or actually change anything at all. It's like moving the screen really close to your face :D Nothing else besides that particulat window is affected by this setting. So my advice would be to always keep it at 1. Unless you want to see something specific at the screen of course

Why does a Unity material not render semi-transparency properly?

I have a Unity material whose albedo is based on a spritesheet. The sprite has semi-transparency, and is formatted to RGBA 32bit.
Now, the transparency renders in the sprite, but not in the material.
How do I do this without also making the supposedly opaque parts of the albedo transparent?
I have tried setting render mode to transparent, fade, and unlit/transparent. The result looks like this:
I tried Opaque, but it ruins the texture. I tried Cutout, but the semi-transparent parts either get cut out or become fully opaque (depending on the cutoff).
There is no code to this.
I expect the output to make the semi-transparent parts of the material semi-transparent, and the opaque parts opaque. The actual output is either fully opaque or fully "semi-transparent", which is super annoying.
Edit
So I delayed work and added a submesh. It is really close to solving the problem, but it's still doing that glitch.
Okay, good news and bad news. The good news is, this problem is not uncommon. It's not even unique to Unity. The bad news is, the reason it's not uncommon or unique to Unity is because it's a universal issue with no perfect solution. But we may be able to find you a workaround, so let's go through this together.
There's a fundamental issue in 3D Graphics: In what order do you draw things? If you're drawing a regular picture in real life, the obvious answer is you draw the things based on how far away from the viewer they are. This works fine for a while, but what do you do with objects that aren't cleanly "in front" of other things? Consider the following image:
Is the fruit in that basket in front of the bowl, or behind it? It's kind of neither, right? And even if you can split objects up into front and back, how do you deal with intersecting objects? Enter the Z-Buffer:
The Z-Buffer is a simple idea: When drawing the pixels of an object, you also draw the depth of those pixels. That is, how far away from the camera they are. When you draw a new object into the scene, you check the depth of the underlying pixel and compare it with the depth of the new one. If the new pixel is closer, you overwrite the old one. If the old pixel is closer, you don't do anything. The Z Buffer is generally a single channel (read: greyscale) image that never gets shown directly. As well as depth sorting, it can also be used for various post processing effects such as fog or ambient occlusion.
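To make that concrete, here's a tiny sketch of the per-pixel test described above. This is not Unity or GPU API, just the comparison expressed in plain C#:

```csharp
using UnityEngine;

// Conceptual sketch of the Z-buffer test: for each fragment, keep the
// color only if it is closer than what is already stored at that pixel.
public class ZBufferSketch
{
    float[,] zBuffer;     // one depth value per pixel (smaller = closer)
    Color[,] colorBuffer; // the image being drawn

    public ZBufferSketch(int width, int height)
    {
        zBuffer = new float[width, height];
        colorBuffer = new Color[width, height];
        // Start every pixel at "infinitely far away".
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
                zBuffer[x, y] = float.MaxValue;
    }

    public void WritePixel(int x, int y, float depth, Color color)
    {
        if (depth < zBuffer[x, y]) // new fragment is closer to the camera
        {
            zBuffer[x, y] = depth;
            colorBuffer[x, y] = color;
        }
        // Otherwise the existing, closer pixel wins and nothing changes.
    }
}
```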
Now, one key component of the depth buffer is that it can only store one value per pixel. Most of the time, this is fine; after all, if you're just trying to sort a bunch of objects, the only depth you really care about is the depth of the front-most one. Anything behind that front-most object will be rendered invisible, and that's all you need to know or care about.
That is, unless your front-most object is transparent.
The issue here is that the renderer doesn't know how to deal with drawing an object behind a transparent one. To avoid this, a smart renderer (including Unity) goes through the following steps:
1. Draw all opaque objects, in any order.
2. Sort all transparent objects by distance from the camera.
3. Draw all transparent objects, from furthest to closest.
This way, the chances of running into weird depth sorting issues are minimized. But this will still fall apart in a couple of places. When you make your object use a transparent material, the fact that 99% of the object is actually solid is completely irrelevant. As far as Unity is concerned, your entire object is transparent, and so it gets drawn according to its depth relative to other transparent objects in the scene. If you've got lots of transparent objects, you're going to have problems the moment you have intersecting meshes. (You can see how Unity exposes this ordering in the sketch below.)
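In Unity, this ordering surfaces as a material's render queue, which you can inspect and override from script. A small illustrative sketch, assuming two Standard-shader materials set to Opaque and Transparent in the Inspector:

```csharp
using UnityEngine;

// Illustrates the draw-order rules above via Unity's render queues.
// Queues up to 2500 are treated as opaque (the Z-buffer handles their
// ordering); the Transparent queue (3000) is drawn back-to-front.
// The two material fields are assumptions: assign them in the Inspector.
public class RenderQueueDemo : MonoBehaviour
{
    public Material opaqueMat;      // e.g. Standard shader, Rendering Mode = Opaque
    public Material transparentMat; // e.g. Standard shader, Rendering Mode = Transparent

    void Start()
    {
        Debug.Log(opaqueMat.renderQueue);      // typically 2000 ("Geometry")
        Debug.Log(transparentMat.renderQueue); // typically 3000 ("Transparent")

        // Forcing a transparent material to draw earlier or later than its
        // peers is one blunt way to hand-tune a sorting problem:
        // transparentMat.renderQueue = 3001;
    }
}
```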
So, how do you deal with these problems? You have a few options.
The first and most important thing you can do is limit transparent materials to areas that are explicitly transparent. I believe the rendering order is based on materials above all else, so having a mesh with several opaque materials and a single transparent one will probably work fine, with the opaque parts being rendered before the single transparent part, but don't quote me on that.
Secondly, if you have alternatives, use them. The reason "cutout" mode seems to be a binary mask rather than real transparency is because it is. Because it's not really transparent, you don't run into any of the depth sorting issues that you typically would. It's a trade-off.
Third, try to avoid large intersecting objects with transparent materials. Large bodies of water are notorious for causing problems in this regard. Think carefully about what you have to work with.
Finally, if you absolutely must have multiple large intersecting transparent objects, consider breaking them up into multiple pieces.
I appreciate that none of these answers are truly satisfying, but the key takeaway from all this is that this is not a bug so much as a limitation of the medium. If you're really keen, you could try digging into custom render pipelines that solve your problem explicitly, but keep in mind you'll be paying a premium in performance if you do.
Good luck.
You said you tried the Transparent shader mode, but did you try changing the alpha channel value in your material color after that?
The second image looks like the alpha in RGBA is 0; try changing it.
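If you'd rather check that from script, here's a tiny sketch (the component is hypothetical, just for inspecting and resetting the value):

```csharp
using UnityEngine;

// If the material tint's alpha is 0, everything drawn in Transparent
// mode becomes invisible regardless of the texture's own alpha.
public class AlphaCheck : MonoBehaviour
{
    void Start()
    {
        Material mat = GetComponent<Renderer>().material;
        Debug.Log("Tint alpha: " + mat.color.a);

        Color c = mat.color;
        c.a = 1f;      // restore full opacity of the tint...
        mat.color = c; // ...so the texture's alpha alone controls transparency
    }
}
```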

Unity: Filter to give slight color variations to whole scene?

In Unity, is there a way to give slight color variations to a scene (a tinge of purple here, some yellow blur there) without adjusting every single texture? And for that to work in VR stereo images too (ideally in a semi-consistent way as one moves around, and perhaps also without having to use computing-heavy colored lights)? Many thanks!
A simple way to achieve this, if your color effect is fixed, would be to add a canvas that renders a half-transparent image over the whole screen. But I suppose that you might prefer some dynamic effect.
To achieve that, look at Unity's post-processing stack. It allows you to add many post-process effects, such as chromatic aberration and color grading, which might let you do what you want.
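As a concrete example of the first suggestion (the fixed canvas tint), here is a minimal sketch. The object names and color are assumptions; note also that for VR you would likely need a Screen Space - Camera canvas, since overlay canvases don't render in headsets:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch of a fixed fullscreen tint: a translucent UI Image
// stretched over the whole screen.
public class ScreenTint : MonoBehaviour
{
    void Start()
    {
        var canvasGO = new GameObject("TintCanvas");
        var canvas = canvasGO.AddComponent<Canvas>();
        canvas.renderMode = RenderMode.ScreenSpaceOverlay; // use ScreenSpaceCamera for VR

        var imageGO = new GameObject("TintImage");
        imageGO.transform.SetParent(canvasGO.transform, false);
        var image = imageGO.AddComponent<Image>();
        image.color = new Color(0.5f, 0f, 0.5f, 0.15f); // faint purple tint

        // Stretch the image over the entire canvas.
        var rt = image.rectTransform;
        rt.anchorMin = Vector2.zero;
        rt.anchorMax = Vector2.one;
        rt.offsetMin = Vector2.zero;
        rt.offsetMax = Vector2.zero;
    }
}
```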

How to create a distorted screen

I want to show a distorted image as an error page for my application. If possible, this could be a screenshot of the home screen with some graphical distortion. Is this possible?
Thank you.
As Daniel A. White's comment mentions, this will probably cause your application to be rejected from the App Store, but it can be accomplished in many ways. I think this technique would be acceptable if your own interface appeared broken, but not acceptable if you made any iOS-supplied looks appear broken.
You could just use your favorite image editor (e.g. Photoshop) to distort a screenshot, and display it in a separate UIView. The image would be static; it couldn't react to the contents of your program's interface.
If your interface is drawn with OpenGL ES 2.0, you could draw your regular interface to a texture, then use that texture as input to another GLSL program that applied the distortion.

How can I draw 3D model outlines on the iPhone? (OpenGL ES)

I've got a pretty simple situation that calls for something I don't know how to do without a stencil buffer (which is not supported on the iPhone).
Basically, I've got a 3D model that gets drawn behind an image. I want an outline of that model to be drawn on top of it at all times. So when it's behind the image, you can see its outline, and when it's not behind the image, you can see the model with an outline.
An option to simply get an outline working would be to draw a wireframe of the model with thick lines and a z offset, then draw the regular model on top of it. The problem with this is obviously that I need the outline to be drawn after the model.
This method needs to be fast, as I'm already pushing a lot of polygons around - full-on drawing of the model again in one way or another is not really desired.
Also, is there any way to find out whether my model can be seen at the moment? That is, whether or not the image over top has an opaque section at the position of the model, or if it has a transparent section. If I can figure this out (again, very quickly), then I can just draw a wireframe instead of a textured model, depending on if it's visible.
Any ideas here? Thanks.
Most of the time you can re-create stencil effects using the alpha channel and render-to-texture, if you think about it...
http://research.microsoft.com/en-us/um/people/hoppe/proj/silmap/ is a technical paper on the matter. Hopefully there's an easier way for you to accomplish this ;)
Here is a general option that might produce the effect you want (I have experience with OGL, but not iPhone):
Method 1
Render object to texture as pure white, separate from scene. This will produce a white mask where the object would be rendered.
Either draw this directly to the screen with alpha fade for a "full object", or if you're intent on your outlines, you could try rendering THIS texture to another texture, slightly enlarged, then render the original "full object" shading over this enlarged texture as pure black. This will create a sort of outline texture that you could render over the top of the scene.
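For concreteness, here is a sketch of Method 1's masking pass translated to Unity, since the rest of this page is Unity-focused (the question targets raw OpenGL ES, where the equivalent is rendering to an FBO; the camera, material, and layer setup here are assumptions):

```csharp
using UnityEngine;

// Sketch of the silhouette-mask pass: a second camera renders only the
// model, with every shader replaced by flat white, into a RenderTexture.
// That mask can then be drawn slightly enlarged behind the normal render
// to fake an outline (compositing step not shown).
public class MaskOutline : MonoBehaviour
{
    public Camera maskCamera;      // configured to see only the model's layer
    public Material whiteMaterial; // an unlit, pure-white material
    RenderTexture maskTexture;

    void Start()
    {
        maskTexture = new RenderTexture(Screen.width, Screen.height, 16);
        maskCamera.targetTexture = maskTexture;
        maskCamera.clearFlags = CameraClearFlags.SolidColor;
        maskCamera.backgroundColor = Color.clear;

        // Replace every shader this camera renders with the flat white one,
        // producing a white silhouette of the model on a clear background.
        maskCamera.SetReplacementShader(whiteMaterial.shader, "");
    }
}
```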
Method 2
Edited out. I just read the "no stencil buffer" stipulation.
Does that help?