I have a particle system for creating "nebulas" in a 2D space game. The results are looking pretty good, except there's a rendering/shader issue with what I'm doing. In game I see a rather ugly tiered blending, almost like I'm looking at 6-bit color rendering (exaggerating for clarity).
Looking at it in Unity's scene view, it's nice and smooth (the few "artifacts" are part of the source image used for the particle); it looks exactly as I would hope:
Is there a way to fix this issue? And for curiosity and posterity's sake, why does this happen?
A few specifics:
It's Unity 2018.1.0f2, Windows 10, GTX 1050 Ti.
There are multiple particles overlapping in that image (I'd guess around 10?), all from one particle system.
The shader is Unity's provided Particles/Additive, and I get the same results with Particles/Additive (Soft), Particles/Alpha Blended, and a number of other built-in transparent shaders I've tried.
The material has alpha = 77 and the particle system adds in alpha = 16 (I believe that's cumulative).
The base image is white and transparent; the alpha channel is what defines the actual shape. Color is added by the particle system (resulting in each particle having a different color, even though they all use the same base image).
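For reference, roughly what that setup looks like if driven from a script; a minimal sketch, and the component lookups plus the legacy particle shader's _TintColor tint property are assumptions rather than anything shown above:

    using UnityEngine;

    // Minimal sketch of the alpha setup described above: ~16/255 on the particle
    // start color and ~77/255 on the material tint (the legacy additive particle
    // shader multiplies the two together).
    public class NebulaAlphaSetup : MonoBehaviour
    {
        void Start()
        {
            var ps = GetComponent<ParticleSystem>();
            var main = ps.main;
            main.startColor = new Color(1f, 1f, 1f, 16f / 255f);

            var psRenderer = GetComponent<ParticleSystemRenderer>();
            psRenderer.material.SetColor("_TintColor", new Color(1f, 1f, 1f, 77f / 255f));
        }
    }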
Edit:
Did some more experimentation: the problem occurs in-game (in a build or in the editor) and also in the scene view when not play-testing. I tried reducing the particle count; even a single particle (no other overlapping transparent objects/particles) still has the issue. I also experimented with putting all of the transparency control in either the particle system or the material, and tried putting both at 255 alpha and using brightness to control opacity (it's an additive shader), and still no improvement.
I did figure out why the camera and scene view rendered differently: the camera had "Allow HDR" checked. After unchecking that, the scene view looks identical to the camera's rendering (too bad it didn't go the other way). Perhaps looking into HDR further will yield answers...
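If it helps with comparing, that checkbox can also be flipped at runtime through Camera.allowHDR; a minimal sketch (assuming the camera is tagged MainCamera):

    using UnityEngine;

    // Minimal sketch: toggle the "Allow HDR" checkbox from script so the banded (LDR)
    // and smooth (HDR) renders can be compared in play mode with a key press.
    public class HdrToggle : MonoBehaviour
    {
        void Update()
        {
            if (Input.GetKeyDown(KeyCode.H))
            {
                Camera cam = Camera.main;        // assumes the camera is tagged MainCamera
                cam.allowHDR = !cam.allowHDR;    // same setting as the "Allow HDR" checkbox
                Debug.Log("Allow HDR: " + cam.allowHDR);
            }
        }
    }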
Alright, I've done some experimentation, and I'm beginning to think this is actually a problem with computer displays/graphics cards/color profiles. I took the image into GIMP, zoomed in on a part where there was visible stair-stepping in color, and took some color samples at the edges of different color gradients.
Here are the color samples I took (RGB), from lighter to darker gradient bands:
G1: 43, 48, 17 (next to G2)
G2: 43, 47, 17 (next to G1)
G2: 39, 46, 16 (next to G3)
G3: 38, 45, 16 (next to G2)
G3: 36, 44, 16 (next to G4)
What you'll note is that there is a substantial difference in RGB value within a gradient band, yet even zoomed in far enough that only four pixels fit on the screen, I couldn't see any pixel outlines within a band; the transition was only obvious at the edge between bands. Either there's something funky with my eyes, or the screen is doing a poor job of reproducing the colors. I've tested this on three screens (a cheap HP laptop, a 27" Acer monitor, and a MacBook Pro), and all show the same effect.
Also interesting to note: the effect is present even on the brighter image, it's just less noticeable. I'm thinking that Unity has absolutely zero to do with this effect, and it's more of a hardware issue (or, say, gamma settings in the video drivers).
I'd still be curious if anyone else knows anything more on this topic.
I'm trying to figure out why my object's textures keep turning white once I scale the object down to 1% (or less) of its normal size.
I can manipulate the objects in real time with my fingers, and there is a threshold where all of the textures (except a few) turn completely ghost white, as shown below:
https://imgur.com/wMykeFw
Any input to fix is appreciated!
One potential cause of this issue is that certain shaders can miscalculate how to render textures when an object's scale is set to a very low value.
To render this asset at such a small size while using the same shader, re-import the mesh with a smaller Scale Factor (in the mesh import settings); that may fix it.
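If it's easier, that re-import can also be done from a small editor script; a minimal sketch (the menu name, asset path, and 0.01 scale are placeholders, not values from the question):

    using UnityEditor;
    using UnityEngine;

    // Editor-only sketch: lower a model's import Scale Factor and reimport it.
    // The asset path and the 0.01 value are placeholders for your own model and scale.
    public static class ShrinkImportScale
    {
        [MenuItem("Tools/Shrink Import Scale")]
        static void Shrink()
        {
            var importer = (ModelImporter)AssetImporter.GetAtPath("Assets/Models/MyModel.fbx");
            importer.globalScale = 0.01f;   // the Scale Factor field in the mesh import settings
            importer.SaveAndReimport();
            Debug.Log("Reimported with Scale Factor " + importer.globalScale);
        }
    }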
Select ARCamera, then Camera; in the Inspector, find the camera's Clipping Planes and increase the far plane (you want to find the minimum clipping distance that still works, to save on memory, so start at 20000 and work your way backwards until it stops working, then back up a notch).
Next (still in the camera's Inspector), set Rendering Path to Legacy Vertex Lit.
This should clear it up for you.
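If you'd rather make those two changes from code than through the Inspector, a minimal sketch (attach it to the camera; 20000 is just the starting point suggested above):

    using UnityEngine;

    // Minimal sketch: push the far clipping plane out and switch to vertex-lit rendering,
    // mirroring the Inspector changes described above.
    [RequireComponent(typeof(Camera))]
    public class CameraRenderFix : MonoBehaviour
    {
        void Awake()
        {
            Camera cam = GetComponent<Camera>();
            cam.farClipPlane = 20000f;                    // then work backwards to the minimum that still works
            cam.renderingPath = RenderingPath.VertexLit;  // "Legacy Vertex Lit"
        }
    }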
I'm currently working on voxel terrain generation in Unity, and I've run into something annoying:
From certain camera angles, you can see seams between the edges of chunk meshes, as pictured below:
What I know:
This only occurs on the edge between two meshes.
This is not being caused by texture bleeding (The textures are solid colors, so I'm using a very large amount of padding when setting up the UVs).
The positions of all vertices and meshes are showing up as exact integers.
Disabling anti-aliasing almost entirely fixes this (You can still see the occasional speck along the edge).
I'm using Unity's default Standard shader.
Can someone explain what's causing this, and whether there's a way to solve this other than disabling AA?
Almost certainly the side faces are z-fighting with the top faces: depth precision is imperfect, so along the seam of your geometry, rounding errors make the very top of the brown face of one cube appear closer to the camera than the very top of the green face of the next.
Ideally, don't draw the brown faces that definitely aren't visible: if a cube has a neighbour on face X, then don't draw either its face X or its neighbour's adjoining face (see the sketch below).
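A minimal sketch of that neighbour check, assuming the chunk is stored as a bool[,,] occupancy grid and that AddFace is your own helper that appends one quad to the mesh data (both assumptions, since that code isn't shown in the question):

    using UnityEngine;

    // Minimal sketch: only emit a cube face when the neighbouring cell is empty,
    // so hidden faces never exist to z-fight with the visible ones.
    public class ChunkMesher : MonoBehaviour
    {
        static readonly Vector3Int[] Directions =
        {
            new Vector3Int( 1, 0, 0), new Vector3Int(-1,  0,  0),
            new Vector3Int( 0, 1, 0), new Vector3Int( 0, -1,  0),
            new Vector3Int( 0, 0, 1), new Vector3Int( 0,  0, -1),
        };

        public void BuildChunk(bool[,,] solid)
        {
            for (int x = 0; x < solid.GetLength(0); x++)
            for (int y = 0; y < solid.GetLength(1); y++)
            for (int z = 0; z < solid.GetLength(2); z++)
            {
                var cell = new Vector3Int(x, y, z);
                if (!IsSolid(solid, cell))
                    continue;

                foreach (var dir in Directions)
                {
                    // Skip the face entirely when a solid neighbour covers it.
                    if (!IsSolid(solid, cell + dir))
                        AddFace(cell, dir);
                }
            }
        }

        static bool IsSolid(bool[,,] solid, Vector3Int p)
        {
            // Cells outside this chunk are treated as empty here; a real implementation
            // would sample the neighbouring chunk instead.
            if (p.x < 0 || p.y < 0 || p.z < 0 ||
                p.x >= solid.GetLength(0) || p.y >= solid.GetLength(1) || p.z >= solid.GetLength(2))
                return false;
            return solid[p.x, p.y, p.z];
        }

        void AddFace(Vector3Int cell, Vector3Int dir)
        {
            // Placeholder: append this quad's four vertices / two triangles
            // to your mesh buffers here.
        }
    }

As a side benefit, skipping the covered faces also removes the duplicated interior geometry, which cuts the vertex count of each chunk.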
I'm struggling with performance in Unity. I created a very simple game scene: very low poly, 2 light sources (1 Directional, 1 Point), and the Standard Shader with Albedo and Occlusion textures set (this applies to all of the few 3D objects in the scene).
The issue is, I was expecting the frame rate to be around 60 FPS, but it is around 29.
What do I have to consider regarding performance in this scenario? It is very frustrating, since it is a very, very simple scene.
See images:
In your Quality settings, as mentioned by Hamza, set Shadow Resolution to Medium and set V Sync Count to "Don't Sync".
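The same two settings can also be applied from script at startup; a minimal sketch (vSyncCount = 0 corresponds to "Don't Sync"):

    using UnityEngine;

    // Minimal sketch: apply the two Quality settings mentioned above at startup.
    public class QualityTweaks : MonoBehaviour
    {
        void Awake()
        {
            QualitySettings.vSyncCount = 0;                              // "Don't Sync"
            QualitySettings.shadowResolution = ShadowResolution.Medium;  // Medium shadow resolution
        }
    }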
I am using Unity 5 to develop a game. I'm still learning, so this may be a dumb question. I have read about Depth Buffer and Depth Texture, but I cannot seem to understand if that applies here or not.
My setup is simple: I create a grid out of several quads (40x40), to which I snap buildings. Those buildings also have a base made with quads. Every time I put one on the map, the quads overlap and they look like the picture.
As you can see, the red quad is "merging" with the floor (white quads).
How can I make sure Unity renders the red one first, and the white ones are background? Of course, I can change the red quad Y position, but that seems like the wrong way of solving this.
This is a common issue, called Z-Fighting.
Usually you can reduce it by reducing the range of “Clipping Planes” of the camera, but in your case the quads are at the same Y position, so you can’t avoid it without changing the Y position.
I don't know if it is an option for you, but if you use a SpriteRenderer (Unity 2D) you don't have that problem, and you can just set “Sorting Layer” or “Order in Layer” if you want to modify the rendering order.
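A minimal sketch of that SpriteRenderer route (the field reference and sorting values are illustrative, not taken from your scene):

    using UnityEngine;

    // Minimal sketch: with sprites, draw order comes from the sorting settings,
    // so overlapping objects at the same Y position no longer z-fight.
    public class BuildingBaseSorting : MonoBehaviour
    {
        [SerializeField] SpriteRenderer baseRenderer;   // the building's base sprite

        void Start()
        {
            baseRenderer.sortingLayerName = "Default";
            baseRenderer.sortingOrder = 1;              // floor sprites can stay at 0
        }
    }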
I am working on a MATLAB project which enables the user to detect faces and blur them out.
Built-in functions used:
vision.CascadeObjectDetector
The problem with this function: It only detects frontal faces.
The method I tried: use the imrotate function in a while loop to rotate the image while the angle is less than 360 degrees, incrementing the rotation by 23 degrees each time. I thought that would work.
Cons: it doesn't work, and it changes the spatial resolution of the image.
I have done some experiments in the past, and I have learned that the vision.CascadeObjectDetector using the default frontal face model can tolerate about 15 degrees of in-plane rotation. So I would advise rotating the image by 15 or even 10 degrees at a time, rather than 23.
The problem with training your own detector in this case is the fact that the underlying features (Haar, LBP, and HOG) are not invariant to in-plane rotation. You would have to train multiple detectors, one for each orientation, every 15 degrees or so.
Also, are you detecting faces in still images or in video? If you are looking at a video, then you may want to try tracking the faces. This way, even if you miss a face because somebody's head is tilted, you'll have a chance to detect it later. And once you detect a face, you can track it even if it tilts. Take a look at this example.