I have a ScrollView with a working mask that blocks images, text, etc. when they are not in the viewport (visible area).
The problem I have is that ALL particle systems are ALWAYS rendering and visible on screen, whether they are inside the viewport or not.
I would like to know:
1) if masking is possible on Particle Systems
2) and if it is, what have I overlooked or missed that makes the particles visible?
FYI, I have tried layers, adding a specific mask to the object with the particle system, adding a mask to that object's parent, and randomly altering renderer settings, and I'm ready to cry.
The problem is not with the particle systems themselves, but with the shader the particles use.
The way Unity's Mask Stencil system works is through the stencil buffer, which only works if your shader plays nice with it. If you want to try to modify your shader for this, here is the relevant documentation. Otherwise, try changing to a different shader or using a different method to hide your particles, such as modifying Camera.rect, for which the documentation is here.
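If you go the Camera.rect route, here is a minimal sketch, assuming a setup the answer above doesn't spell out: a dedicated second camera that renders only the particle layer, and a Screen Space - Overlay canvas. The component and field names are mine, not built-in API.

```csharp
using UnityEngine;

// Minimal sketch: shrink a dedicated particle camera's normalized viewport
// rect to the ScrollView's visible area, so particles outside it are never
// drawn. Assumes "particleCamera" renders only the particle layer and the
// canvas is Screen Space - Overlay (world corners == screen pixels).
public class ClampParticleCamera : MonoBehaviour
{
    public Camera particleCamera;            // hypothetical second camera
    public RectTransform scrollViewViewport; // the ScrollView's Viewport object

    void LateUpdate()
    {
        Vector3[] corners = new Vector3[4];
        scrollViewViewport.GetWorldCorners(corners); // 0 = bottom-left, 2 = top-right

        // Camera.rect is normalized (0..1); anything outside it is not rendered.
        particleCamera.rect = Rect.MinMaxRect(
            corners[0].x / Screen.width,
            corners[0].y / Screen.height,
            corners[2].x / Screen.width,
            corners[2].y / Screen.height);

        // Caveat: Camera.rect squeezes the camera's full view into the rect;
        // to keep the particles aligned with the rest of the screen you would
        // also need an off-centre projection matrix, which this sketch omits.
    }
}
```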
By the way, if we're being sticklers for terminology here, "viewport" doesn't mean what you think it means (within the context of computer graphics).
I have implemented a blast particle effect in my game, but when it spawns, it gets cut off against the nearest environment object.
This is the problem I was getting:
Overall, there are multiple particle systems running to achieve this effect, but I am attaching the Inspector panel details for one particle system:
A similar renderer setup exists for almost all of the particle systems. Please guide me in solving this problem of the effect being cut off by the wall.
EDIT-1:
I have added a VFX rendering camera and created a separate layer for the effects too, but there is no change in the result.
EDIT-2:
Here is a screenshot of the Main Camera settings for the game:
You could add collisions to the particles so that they either bounce away from the world object or are just destroyed.
Example Collision Setup
It will add a little overhead, but depending on how many particles the effect uses it shouldn't be an issue; if it is, you could test it with a lower collision quality.
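In script form, these settings live on the particle system's Collision module. A minimal sketch; the behaviour choice here (killing particles on first impact) is an assumption, raise `bounce` instead if you want them to deflect:

```csharp
using UnityEngine;

// Minimal sketch: enable world collisions so particles die when they hit
// the environment instead of drawing through walls. Attach to the
// GameObject holding the ParticleSystem.
[RequireComponent(typeof(ParticleSystem))]
public class KillParticlesOnImpact : MonoBehaviour
{
    void Awake()
    {
        var collision = GetComponent<ParticleSystem>().collision;
        collision.enabled = true;
        collision.type = ParticleSystemCollisionType.World;  // collide with scene geometry
        collision.lifetimeLoss = 1f;   // 1 = the particle dies on its first hit
        collision.bounce = 0f;         // set > 0 to bounce away instead of dying
        collision.quality = ParticleSystemCollisionQuality.Medium; // lower = cheaper
    }
}
```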
I found this advice on the polycount forums written by user AlecMoody:
[1.] Create a second camera parented to the primary camera...
[2.] Create a render layer for your [particle effect] (or assign it to an existing layer).
[3.] Set the main camera culling mask to everything except your explosion layer.
[4.] Set the child camera culling mask to only be your explosion layer. The child camera should have the clear flag set to "don't clear"
[5.] and put a positive value into the [child] camera depth.
Since, in your case, a higher depth on the child camera doesn't make it render above the lower-depth cameras, setting the clear flag to "Depth only" may help you.
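Wired up in code, the whole setup looks roughly like this. A minimal sketch: the "Explosion" layer name and the component itself are my assumptions, not part of the quoted advice.

```csharp
using UnityEngine;

// Minimal sketch of the two-camera setup described above, assuming a layer
// named "Explosion" exists. Attach to the main camera; "childCamera" is the
// camera parented under it.
public class ExplosionCameraSetup : MonoBehaviour
{
    public Camera childCamera; // parented to the main camera (assumed)

    void Start()
    {
        Camera mainCam = GetComponent<Camera>();
        int explosionLayer = LayerMask.NameToLayer("Explosion");

        // Main camera renders everything except the explosion layer.
        mainCam.cullingMask = ~(1 << explosionLayer);

        // Child camera renders only the explosion layer, on top of the main view.
        childCamera.cullingMask = 1 << explosionLayer;
        childCamera.clearFlags = CameraClearFlags.Depth; // "Depth only", per the note above
        childCamera.depth = mainCam.depth + 1;           // positive offset so it renders after
    }
}
```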
I'm looking for ways to clip an entire Unity scene to a set of 4 planes. This is for an AR game, where I want to be able to zoom into a terrain, yet still have it only take up a given amount of space on a table (i.e., not extend over the edges of the table).
Thus far I've got clipping working as I want for the terrain and a water effect:
The above shows a much larger terrain being clipped to the size of the table. The other scene objects aren't clipped, since they use unmodified standard shaders.
Here's a pic showing the terrain clipping in the editor.
You can see the clipping planes around the visible part of the terrain, and that other objects (trees etc) are not clipped and appear off the edge of the table.
The way I've done it involves adding parameters to each shader to define the clipping planes. This means customizing every shader I want to clip, which was fine when I was considering just terrain.
While this works, I'm not sure it's a great approach for hundreds of scene objects. I would need to modify whatever shaders I'm using, and then I'd have to be setting additional shader parameters every update for every object.
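For what it's worth, if the four clip planes are shared by the whole scene, global shader properties can at least remove the per-object bookkeeping: a value set with Shader.SetGlobalVector is visible to every material whose shader declares that property. A minimal sketch; the _ClipPlane0.._ClipPlane3 names are hypothetical and must match whatever your customized shaders declare:

```csharp
using UnityEngine;

// Minimal sketch: publish the clip planes once per frame as global shader
// properties instead of setting them per object. Property names are
// hypothetical (xyz = plane normal, w = distance from origin).
public class GlobalClipPlanes : MonoBehaviour
{
    public Vector4[] clipPlanes = new Vector4[4];

    void Update()
    {
        // Globals are picked up by every material whose shader declares the
        // property, so nothing needs to be set per object.
        for (int i = 0; i < 4; i++)
            Shader.SetGlobalVector("_ClipPlane" + i, clipPlanes[i]);
    }
}
```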
Not being an expert in Unity, I'm wondering if there are other approaches that are not "per shader" based that I might investigate?
The end goal is to render a scene within the bounds of some plane.
One easy way would be to use Box Colliders as triggers on each side of your plane. You could then turn off Renderers on objects staying in the trigger with OnTriggerEnter/OnTriggerStay and turn them on with OnTriggerExit.
You can also use Bounds.Contains.
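Here is a minimal sketch of the trigger version; the script sits on each trigger Box Collider placed just outside an edge of the table. One caveat worth knowing: trigger events only fire if at least one of the two objects involved has a Rigidbody.

```csharp
using UnityEngine;

// Minimal sketch: hide an object's Renderer while it overlaps this trigger
// volume (one volume per table edge), and show it again when it leaves.
public class HideOutsideBounds : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        // Object has crossed the table's edge: stop drawing it.
        Renderer r = other.GetComponent<Renderer>();
        if (r != null) r.enabled = false;
    }

    void OnTriggerExit(Collider other)
    {
        // Object is back over the table: draw it again.
        Renderer r = other.GetComponent<Renderer>();
        if (r != null) r.enabled = true;
    }
}
```

The Bounds.Contains variant would instead test tableBounds.Contains(obj.transform.position) each frame and toggle the Renderer the same way.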
I've been trying to fix this issue for a while. It's about the render order of transparent particle sprites: regardless of the shader used, some of the sprites from the background nebula are rendered above the foreground ones. The image should clarify the situation. The sprites are Quads with materials on them, which happen to use the Legacy Shaders/Particles/Alpha Blended shader.
I've even tried setting the renderQueue of the foreground quads' materials to a value higher than that of the background quads, but even that didn't help.
It seems that whatever I do, the render order of the transparent sprites is messed up. The shader currently used is Particles/Additive Blend, but using similar shaders didn't really help.
Particle system geometry is batched, so the render order is determined by the particle system itself. In the settings for the particle system, go to the last category "Rendering". In there you should find a field called "Sort Mode" that determines which particles are put in front of others. It sounds like you want the "By Distance" option.
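If you'd rather set it from script than in the Inspector, the property lives on the ParticleSystemRenderer. A minimal sketch:

```csharp
using UnityEngine;

// Minimal sketch: select the "By Distance" sort mode from code.
[RequireComponent(typeof(ParticleSystem))]
public class SortParticlesByDistance : MonoBehaviour
{
    void Awake()
    {
        var psRenderer = GetComponent<ParticleSystemRenderer>();
        // Particles farther from the camera are drawn first, so nearer
        // transparent sprites composite correctly on top of them.
        psRenderer.sortMode = ParticleSystemSortMode.Distance;
    }
}
```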
In current Unity, for use in Unity.UI as conventional UI, any texture imported as "Sprite (2D and UI)" in fact defaults to having "Generate Mip Maps" turned ON. Every time you drop an image in, you have to turn that off and apply.
As noted in the comments, these days you can actually use world space UI canvases, and advanced users may indeed have (say) buttons that float over the head of Zelda in the far distance. However, if you're an everyday Unity user adding a button, just turn it off :)
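If you get tired of flipping the checkbox by hand, an editor script can do it at import time. A minimal sketch, assuming you want mipmaps off for every texture imported as a Sprite; place it in a folder named Editor:

```csharp
using UnityEditor;

// Minimal sketch: automatically disable "Generate Mip Maps" for every
// texture imported with the Sprite (2D and UI) texture type. Runs whenever
// a texture is (re)imported.
public class DisableSpriteMipMaps : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        TextureImporter importer = (TextureImporter)assetImporter;
        if (importer.textureType == TextureImporterType.Sprite)
            importer.mipmapEnabled = false;
    }
}
```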
In Unity, "sprites" can still be positioned in 3D space, for example on a world space canvas. Furthermore, mipmaps are used whenever the sprite is scaled, because mipmap sampling is determined by the texel size rather than the distance.
If a sprite is flat and perfectly scaled then there is no reason to use mipmaps. This would likely apply to your icon example.
I suspect it is enabled by default because, in 2D games, sprites will often not be perfectly scaled. To clarify, a sprite does not need to be on a canvas: sprites can exist as their own GameObject with a Sprite Renderer. When that is the case, scaling the camera view changes the sprite's size on screen, and the changing texel size triggers mipmapping. This makes keeping the sprite perfectly scaled challenging without a canvas.
A similar question was asked in the past, but the new version of Unity does not solve my problem. When I set up multiple cameras with different depths, each handles a different layer. But when I add the blur shader to the second camera, which handles only one layer, the blur is applied to all elements. Why is this, and how do I fix it?
When the effect is applied, the cameras' depth and clear-flag behavior matters. For example, the shader (blur) camera should be in the background, with its clear flags set to Skybox, while the main camera, which sees everything else, keeps its clear flags set to Depth Only. Sorry for my bad English.
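In code, that arrangement could look roughly like this. A minimal sketch, assuming a layer named "Blurred" for the objects the blur should affect, and assuming the blur image effect component is attached only to the blur camera:

```csharp
using UnityEngine;

// Minimal sketch of the two-camera blur setup described above.
public class BlurCameraSetup : MonoBehaviour
{
    public Camera mainCamera;
    public Camera blurCamera; // has the blur image effect attached (assumed)

    void Start()
    {
        int blurLayer = LayerMask.NameToLayer("Blurred");

        // Blur camera renders first (lower depth) and clears with the skybox;
        // its image effect therefore only ever sees the blurred layer.
        blurCamera.cullingMask = 1 << blurLayer;
        blurCamera.clearFlags = CameraClearFlags.Skybox;
        blurCamera.depth = 0;

        // Main camera draws everything else on top, clearing depth only so
        // the blurred background stays visible.
        mainCamera.cullingMask = ~(1 << blurLayer);
        mainCamera.clearFlags = CameraClearFlags.Depth;
        mainCamera.depth = 1;
    }
}
```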