Why does a Unity material not render semi-transparency properly?

I have a Unity material whose albedo is based on a spritesheet. The sprite has semi-transparency, and is formatted to RGBA 32bit.
Now, the transparency renders in the sprite, but not in the material.
How do I do this without also making the supposedly opaque parts of the albedo transparent?
I have tried setting render mode to transparent, fade, and unlit/transparent. The result looks like this:
I tried opaque, but it ruins the texture. I tried cutout, but the semi-transparent parts either get cut out or become fully opaque, depending on the cutoff value.
There is no code to this.
I expect the semi-transparent parts of the material to render as semi-transparent and the opaque parts as opaque. The actual output is either fully opaque or fully "semi-transparent", which is super annoying.
Edit
So I put the work off for a while and then added a submesh, which gets really close to solving the problem.
It's still doing that glitch, though.

Okay, good news and bad news. The good news is, this problem is not uncommon. It's not even unique to Unity. The bad news is, the reason it's not uncommon or unique to Unity is that it's a universal issue with no perfect solution. But we may be able to find you a workaround, so let's go through this together.
There's a fundamental issue in 3D Graphics: In what order do you draw things? If you're drawing a regular picture in real life, the obvious answer is you draw the things based on how far away from the viewer they are. This works fine for a while, but what do you do with objects that aren't cleanly "in front" of other things? Consider the following image:
Is the fruit in that basket in front of the bowl, or behind it? It's kind of neither, right? And even if you can split objects up into front and back, how do you deal with intersecting objects? Enter the Z-Buffer:
The Z-Buffer is a simple idea: When drawing the pixels of an object, you also draw the depth of those pixels. That is, how far away from the camera they are. When you draw a new object into the scene, you check the depth of the underlying pixel and compare it with the depth of the new one. If the new pixel is closer, you overwrite the old one. If the old pixel is closer, you don't do anything. The Z Buffer is generally a single channel (read: greyscale) image that never gets shown directly. As well as depth sorting, it can also be used for various post processing effects such as fog or ambient occlusion.
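If it helps to see that comparison in code, here is a minimal sketch of the per-pixel test just described. The WritePixel method and the buffer arrays are purely illustrative stand-ins, not anything you would actually write in Unity:

    using UnityEngine;

    public static class DepthTestSketch
    {
        // Illustrative only: roughly the per-pixel test the GPU performs.
        // depthBuffer/colorBuffer are hypothetical arrays standing in for the
        // real hardware buffers.
        public static void WritePixel(int x, int y, float newDepth, Color newColor,
                                      float[,] depthBuffer, Color[,] colorBuffer)
        {
            if (newDepth < depthBuffer[x, y])   // the new pixel is closer to the camera
            {
                colorBuffer[x, y] = newColor;   // overwrite the old colour
                depthBuffer[x, y] = newDepth;   // remember the new, closer depth
            }
            // otherwise the old pixel is closer, so nothing happens
        }
    }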
Now, one key property of the depth buffer is that it can only store one value per pixel. Most of the time, this is fine; after all, if you're just trying to sort a bunch of objects, the only depth you really care about is the depth of the front-most one. Anything behind that front-most object is hidden by it, and that's all you need to know or care about.
That is, unless your front-most object is transparent.
The issue here is that the renderer doesn't know how to deal with drawing an object behind a transparent one. To avoid this, a smart renderer (including Unity) goes through the following steps:
Draw all opaque objects, in any order.
Sort all transparent objects by distance from the camera.
Draw all transparent objects, from furthest to closest.
This way, the chances of running into weird depth sorting issues are minimized. But this will still fall apart in a couple of places. When you make your object use a transparent material, the fact that 99% of the object is actually solid is completely irrelevant. As far as Unity is concerned, your entire object is transparent, and so it gets drawn according to its depth relative to other transparent objects in the scene. If you've got lots of transparent objects, you're going to have problems the moment you have intersecting meshes.
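Unity does this sorting for you, but as a rough illustration of the sort-and-draw steps above, the idea boils down to something like the following sketch. SortBackToFront is an invented helper name, and the renderer and camera references are assumed to come from your own scene code:

    using System.Linq;
    using UnityEngine;

    public static class TransparencySortingSketch
    {
        // Rough sketch of the sorting step: order transparent renderers so the
        // one furthest from the camera comes first. Unity's renderer does the
        // real work internally; this only illustrates the idea.
        public static Renderer[] SortBackToFront(Renderer[] transparentRenderers, Camera cam)
        {
            return transparentRenderers
                .OrderByDescending(r =>
                    (r.transform.position - cam.transform.position).sqrMagnitude)
                .ToArray();
        }
    }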
So, how do you deal with these problems? You have a few options.
The first and most important thing you can do is limit transparent materials to areas that are explicitly transparent. I believe the rendering order is based on materials above all else, so having a mesh with several opaque materials and a single transparent one will probably work fine, with the opaque parts being rendered before the single transparent part, but don't quote me on that.
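For example, if your mesh is split into an opaque submesh and a transparent submesh, you can give the renderer one material per submesh. The material and field names below are made up for illustration:

    using UnityEngine;

    public class SplitMaterialExample : MonoBehaviour
    {
        // Hypothetical materials: one opaque, one using a transparent render mode.
        public Material opaqueBodyMaterial;
        public Material transparentWindowMaterial;

        void Start()
        {
            // The order of this array matches the order of the mesh's submeshes,
            // so only the submesh that really needs blending gets the transparent material.
            GetComponent<MeshRenderer>().materials =
                new Material[] { opaqueBodyMaterial, transparentWindowMaterial };
        }
    }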
Secondly, if you have alternatives, use them. The reason "cutout" mode seems to be a binary mask rather than real transparency is because it is. Because it's not really transparent, you don't run into any of the depth sorting issues that you typically would. It's a trade-off.
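If you go the cutout route with the built-in Standard shader, the threshold is exposed as the _Cutoff property. A hedged sketch of switching to cutout from script might look like this (note that the Inspector's Rendering Mode dropdown sets a few more blend-related properties than shown here):

    using UnityEngine;
    using UnityEngine.Rendering;

    public class CutoutModeSketch : MonoBehaviour
    {
        void Start()
        {
            Material mat = GetComponent<Renderer>().material;

            // Switch the Standard shader towards its Cutout mode and pick where
            // the alpha threshold sits. Pixels with alpha below _Cutoff are
            // discarded; everything else is drawn fully opaque, so no real
            // blending (and no depth sorting trouble) happens.
            mat.SetOverrideTag("RenderType", "TransparentCutout");
            mat.EnableKeyword("_ALPHATEST_ON");
            mat.renderQueue = (int)RenderQueue.AlphaTest;
            mat.SetFloat("_Cutoff", 0.5f);
        }
    }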
Third, try to avoid large intersecting objects with transparent materials. Large bodies of water are notorious for causing problems in this regard. Think carefully about what you have to work with.
Finally, if you absolutely must have multiple large intersecting transparent objects, consider breaking them up into multiple pieces.
I appreciate that none of these answers are truly satisfying, but the key takeaway from all this is that this is not a bug so much as a limitation of the medium. If you're really keen you could try digging into custom render pipelines that solve your problem explicitly, but keep in mind you'll be paying a premium in performance if you do.
Good luck.

You said you tried the Transparent shader mode, but did you try changing the alpha value of your material's color after that?
In the second image it looks like the alpha in RGBA is 0; try changing it.
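If you'd rather check or fix that from script than in the Inspector, something like this sketch should do it, assuming the material exposes the usual _Color property the way the Standard shader does:

    using UnityEngine;

    public class FixMaterialAlpha : MonoBehaviour
    {
        void Start()
        {
            Material mat = GetComponent<Renderer>().material;

            // material.color maps to the shader's _Color property. An alpha of 0
            // here fades out the whole material, regardless of the texture's own
            // alpha channel.
            Color c = mat.color;
            c.a = 1f;
            mat.color = c;
        }
    }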

Related

Unity shader to render objects with same material to subsequent GrabPasses

Overview
I'm working on a shader in Unity 2017.1 to enable UnityEngine.UI.Image components to blur what is behind them.
Like some of the approaches in this Unity forum topic, I use GrabPasses, specifically a tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(<uv with offset>)) call to look up the pixels that I use in my blur summations. I'm doing a basic 2-pass box blur and am not looking to optimize performance right now.
This works as expected:
I also want to mask the blur effect based on the image alpha. I use tex2D(_MainTex, IN.uvmain) to look up the alpha color of the sprite on the pixel I am calculating the blur for, which I then combine with the alpha of the blur.
This works fine when working with just a single UI.Image object:
The Problem
However when I have multiple UI.Image objects that share the same Material created from this shader, images layered above will cut into the images below:
I believe this is because objects with the same material may be drawn simultaneously and so don't appear in each other's GrabPasses, or at least something to that effect.
That at least would explain why, if I duplicate the material and use each material on its own object, I don't have this problem.
Here is the source code for the shader: https://gist.github.com/JohannesMP/8d0f531b815dfad07823d44bc12b8112
The Question
Is there a way to force objects of the same material to draw consecutively and not in parallel? Basically, I would like the result of a lower object's render passes to be visible to the grab pass of subsequent objects.
I could imagine creating a component that dynamically instantiates materials to force this, or using render textures, but I would really like a solution that doesn't require adding components or creating multiple materials to swap out.
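(For what it's worth, that component-based workaround would be roughly the following sketch; it clones the material per Image, which is exactly the kind of duplication I'd prefer to avoid.)

    using UnityEngine;
    using UnityEngine.UI;

    // Sketch of the per-object material workaround mentioned above: give every
    // Image its own copy of the shared blur material so each one gets its own
    // draw call and therefore its own GrabPass.
    [RequireComponent(typeof(Image))]
    public class InstancedBlurMaterial : MonoBehaviour
    {
        void Awake()
        {
            Image image = GetComponent<Image>();
            // Instantiate() clones the material asset; remember to destroy the
            // clone when it is no longer needed so it doesn't leak.
            image.material = Instantiate(image.material);
        }
    }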
I would love a solution that is entirely self-contained within one shader/one material, but I'm unsure if this is possible. I'm still only starting to get familiar with shaders, so I'm positive there are features I'm not familiar with.
It turns out that it was me re-drawing what I grabbed from the _GrabTexture that was causing the issue. By correctly handling the alpha logic there I was able to get exactly the desired behavior:
Here is the updated source code: https://gist.github.com/JohannesMP/7d62f282705169a2855a0aac315ff381
As mentioned before, optimizing the convolution step was not my priority.

How to 9-slice a sprite while keeping the center not scaled?

I wonder, is there any way to slice this sprite (the dialog pop-up thing) while keeping the bottom center (the upside-down triangle) unscaled? I'm using nGUI, if it matters.
Nope
Sorry, but that's how 9-slice scaling works. You would need 25-slice scaling to do what you're looking for, and that's overkill for most things, so I've never seen an implementation.
What to do instead...
Break up your sprite into two pieces: the 9-slice portion and the "notch" portion. Then just position the notch to be in the right place.
I haven't used nGUI (only iGUI and the native Unity UI, both old and new), so I'm not sure precisely how nGUI will let you do that, but you'd still need two sprites, one of which is scaled and the other of which isn't, positioned either manually or through a parent-child relationship. If your dialog is always the same width, it'll be pretty straightforward. If not, it might be more challenging.
A few other things:
You'll probably want the notch sprite and the bubble sprite to be the same native image size, but it's not necessary (it might make things easier, it might not).
The notch will want to have some "overbleed" so that when the two stack, the underlying rendering code doesn't go all squinty-eyed, decide "there's a gap here...", and draw through in some cases.
Depending on the bubble portion's drawn edge, you might want the notch to be in front or behind. In your precise case, I don't think it'll make a difference. It's a little hard to tell due to the colors, but when I did a selectable tab (which is built similarly), the tab sits on top of the container window so that the shaded edge flows nicely. The unselected version then has no overbleed, so it looks like it sits "behind" (accurate pixel placement in a 2D game at a fixed size ensures that no "gap" is rendered).
It's a little tedious but pretty straightforward to implement this for UI images. I recently did it in order to make a slice stretch the left/right borders of a 9-slice instead of the center.
The trick is to subclass Image and override OnPopulateMesh, where you do the calculations you need and set positions/uvs to whatever you require.
Here's a helpful how-to article: https://www.hallgrimgames.com/blog/2018/11/25/custom-unity-ui-meshes
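As a starting point, a stripped-down sketch of that subclass might look like the following; the single placeholder quad just marks where your own slice geometry and UVs would go:

    using UnityEngine;
    using UnityEngine.UI;

    // Hypothetical example: take over mesh generation for a UI Image.
    public class CustomSlicedImage : Image
    {
        protected override void OnPopulateMesh(VertexHelper vh)
        {
            vh.Clear();

            Rect r = GetPixelAdjustedRect();

            // One plain quad as a placeholder; replace this with the 9-slice
            // (or custom-slice) vertex positions and UVs you actually need.
            vh.AddVert(new Vector3(r.xMin, r.yMin), color, new Vector2(0f, 0f));
            vh.AddVert(new Vector3(r.xMin, r.yMax), color, new Vector2(0f, 1f));
            vh.AddVert(new Vector3(r.xMax, r.yMax), color, new Vector2(1f, 1f));
            vh.AddVert(new Vector3(r.xMax, r.yMin), color, new Vector2(1f, 0f));

            vh.AddTriangle(0, 1, 2);
            vh.AddTriangle(2, 3, 0);
        }
    }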
Things for a non-UI sprite will be harder. I think you'll have to create all your geometry in a script, and the calculations might be a little complicated because you're using an atlas.

OpenGL ES for iPhone blending not working

I'm a beginner to 3D graphics in general and I'm trying to make a 3D game for the iPhone, and more specifically, to use textures that contain transparency. I am able to load a texture (an 8 bit .png file) into OpenGL and map it to a square (made from a triangle strip) but the transparent parts of the image are not transparent when I run the app in the simulator - they take on the background colour, whatever it is set to, but obscure images that are further away. I am unable to post a screenshot as I am a new user, so my apologies for that. I will try to upload and link it some other way.
Even more annoying is that when I load the image into Apple's GLSprite example code, it works exactly as I want it to. I have copied the code from GLSprite's setupView into my project and it still doesn't work properly.
I am using the blend function:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I was under the impression that this is correct for what I want to do.
Is there something very basic I am missing here? Any help would be much appreciated as I am submitting this as a coursework project in a few weeks and would very much like it to work.
Let me break this down:
First of all your transparent object is drawn.
At this point two things happen:
The pixels are drawn correctly to the back buffer
The depth buffer pixels are set in the depth buffer. Note that the depth buffer will write values all across your object, and transparency does not affect it.
You then draw other objects behind the transparent object.
But none of these objects' pixels will be drawn, because the depth test rejects them: the depth buffer already records something closer to the camera at those pixels.
The solution to this problem is to draw your scene back to front (start with the things that are furthest away).
Hope that helps.
Edit: I'm assuming you are using the depth buffer here. If this isn't correct I'll consider writing another answer.

Most effective "architecture" for layered 2D app using OpenGL on iPhone?

I'm working on an iPhone OS app whose primary view is a 2-D OpenGL view (this is a subclass of Apple's EAGLView class, basically setting up an ortho-projected 2D environment) that the user interacts with directly.
Sometimes (not at all times) I'd like to render some controls on top of this baseline GL view-- think like a Heads-Up Display. Note that the baseline view underneath may be scrolling/animating while controls should appear to be fixed on the screen above.
I'm good with Cocoa views in general, and I'm pretty good with CoreGraphics, but I'm green with OpenGL, and the EAGLView's operations (and its relationship to CALayers) are fairly opaque to me. I'm not sure how to mix in other elements most effectively (read: best performance, least hassle, etc.). I know that in a pinch, I can create and keep around geometry for all the other controls, and render those on top of my baseline geometry every time I paint/swap, and thus just keep everything the user sees on one single view. But I'm less certain about other techniques, such as having another view on top (UIKit/CG or GL?) or somehow creating other layers in my single view, etc.
If people would be so kind to write up some brief observations if they've travelled these roads before, or at least point me to documentation or existing discussion on this issue, I'd greatly appreciate it.
Thanks.
Create your animated view as normal. Render it to a render target. What does this mean? Well, usually, when you 'draw' the polygons to the screen, you're actually doing it to a normal surface (the primary surface), that just so happens to be the one that eventually goes to the screen. Instead of rendering to the screen surface, you can render to any old surface.
Now, your HUD. Will this be exactly the same all the time or will it change? Will only bits of it change?
If all of it changes, you'll need to keep all the HUD geometry and textures in memory, and will have to render them onto your 'scrolling' surface as normal. You can then apply this final, composite render to the screen. I wouldn't worry too much about hassle and performance here; the HUD can hardly be as complex as the background. You'll have a few textured quads at most?
If all of the HUD is static, then you can render it to a separate surface when your app starts, then each frame render from that surface onto the animated surface you're drawing. This way you can unload all the HUD geometry and textures right at the start. Of course, it might be the case that the surface takes up more memory; it depends on what resources your app needs most.
If your HUD half changes and half doesn't, then technically you can pre-render the static parts and then render the other parts as you go along, but this is more hassle than the other two options.
Your two main options depend on how dynamic the HUD is. If it moves, you will need to redraw it onto your scene every frame. That sucks, but I can hardly imagine its geometry is complex compared to the rest of the scene. If it's static, you can pre-render it and just alpha-blend one surface onto the other before sending it to the screen.
As I said, it all depends on what resources your app will have spare.

OpenGL ES. Scrolling 3 layer starfield textures gets me from 60 -> 40 FPS

I need to draw the background for a 2D space scrolling shooter. I need to implement 3 layers of stars: one distant nebula (moving really slow) in the background, one layer of far away stars (moving slow) and one layer of close stars (moving normal) on top of the other two.
The way I first tried this was using three 320 x 480 textures that were transparent PNGs of the stars. I used GL_BLEND with SRC_ALPHA, ONE_MINUS_SRC_ALPHA.
The results were not great even on the 3GS. On the first-generation devices the FPS dropped to 40-50, so I think I'm doing this the wrong way.
When I disable GL_BLEND everything works great even on the 1st-gen devices and the FPS is back to 60 again... so it must be the fact that I'm trying to blend large transparent textures.
The problem is I don't know how to do it any other way...
Should I draw only the first nebula as an opaque texture and then try to emulate the middle and top star layers with small points moving around the screen?
Is there any other approach to the blending issue? How can I speed up the rendering process? Is one big texture (tileset) the answer?
Please help me, because I'm stuck here and I can't get out.
I don't know what you want your stars to look like, but you might want to try moving them from a texture to geometry by using GL_POINTS in your DrawElements or DrawArrays calls; maybe just replace the top two layers with layers of geometry. You can manipulate the points using PointSize, PointSizePointerOES and PointParameter to modify how they are rendered.
You might want to use multi-texturing to see if that speeds it up. Each multi-texture stage can be assigned a unique transformation matrix, so you should be able to translate each layer at different speeds.
I believe all iPhone models support two texture stages, so you should be able to combine two of your layers into a single draw call. You might still need to resort to blending for the third layer.
Also note that alpha testing could be faster than alpha blending.
Good luck!
The back nebula should definitely be opaque; everything else is getting drawn on top of it, and I assume the only thing behind it is black. Also, prideout has a point: assuming your star layers can have effectively 1-bit alpha, that's definitely something you can try. Failing that, the GL_POINTS technique Harald mentions would work as well.