I have a death transformation for one of my GameObjects that goes from a spherical ball to a bunch of small individual blocks. I want each of these blocks to fade out at a different time, but since they all use the same shader I can't figure out how to keep them from all fading out at once.
The first picture shows the spherical ball in its first step, when it turns from a sphere into a Minecraft'ish looking block ball; to the right of it, marked by the red arrow, is one of the blocks that make up that block ball.
Now this is my Inspector for one of the little blocks that make up the Minecraft'ish looking ball.
I have an arrow pointing to what makes the object fade, but that applies globally across all of the blocks since they share the same shader. Is it possible to have each block fade separately, or am I stuck and need to find a new disappearing act for the little block dudes?
You need to modify the material property from a script at runtime, and you need to do it through the Renderer.material property. When you access Renderer.material, Unity automatically creates a copy of the material that is handled separately -- including getting its own draw call, if you care about performance. You can tell this has happened because the material name in the renderer changes to "Materialname (Instance)".
Set the material's fade property using Renderer.material.SetFloat() (or whatever the appropriate Set... function is). Unfortunately, the property's internal name isn't the "Fade Factor" label you see in the Inspector. You can find the real property name by looking at the shader source, or by switching the Inspector to Debug mode and digging through the Saved Properties array for one that looks right.
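As an illustration, here is a minimal sketch of how that could look, assuming the blocks use a shader whose colour and alpha live in a "_Color" property (like the Standard Shader in Fade mode) and that a random delay per block is what you want; those details are my assumptions, not part of your setup:

    using System.Collections;
    using UnityEngine;

    // Attach to each block: fades this block's own material instance after a random delay.
    public class BlockFader : MonoBehaviour
    {
        public float fadeDuration = 1f;   // seconds from fully opaque to invisible
        Renderer rend;

        void Start()
        {
            // Accessing .material (not .sharedMaterial) creates a per-renderer copy,
            // so each block fades independently of the others.
            rend = GetComponent<Renderer>();
            StartCoroutine(FadeOut(Random.Range(0f, 2f)));
        }

        IEnumerator FadeOut(float delay)
        {
            yield return new WaitForSeconds(delay);

            // "_Color" is the Standard Shader's main colour property; substitute the
            // internal name your shader actually uses.
            Color c = rend.material.GetColor("_Color");
            for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
            {
                c.a = Mathf.Lerp(1f, 0f, t / fadeDuration);
                rend.material.SetColor("_Color", c);
                yield return null;
            }
        }
    }

Note that fading the alpha only has a visible effect if the material's rendering mode supports transparency.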
It’s like the hip is stuck in place. In the pictures (taken at different times in the animation, since I can’t upload a gif), the red wireframe is the drawing of the raw output triangles, and everything else is default Unity, i.e. the correct output.
What am I doing wrong? What am I missing? This has been driving me nuts for 2-3 days now; any help is appreciated.
Since you did not post any code, I will have to do some guessing about what is going on. Disclaimer: I actually happened to run into a similar problem myself.
First, you must know that the Update calls of different MonoBehaviours happen in no guaranteed order (not necessarily random, but you see the point). If you bake the mesh in one component's Update, it can still be one frame behind the Animator component.
There are two solutions to this: the first is to specify the Script Execution Order in Project Settings, and the second is to use LateUpdate instead of Update when updating the mesh after skinning.
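For illustration, a minimal sketch of the LateUpdate approach, assuming you are baking the skinned pose into a MeshCollider with SkinnedMeshRenderer.BakeMesh; the component and field names here are hypothetical:

    using UnityEngine;

    // Bakes the current skinned pose after the Animator has been evaluated this frame.
    // The MeshCollider that should follow the animated mesh is a hypothetical part of your setup.
    public class BakeAfterSkinning : MonoBehaviour
    {
        public SkinnedMeshRenderer skinnedRenderer;
        public MeshCollider meshCollider;
        Mesh baked;

        void Awake()
        {
            baked = new Mesh();
        }

        // LateUpdate runs after all Updates and after the Animator has posed the rig,
        // so the baked mesh is never one frame behind the animation.
        void LateUpdate()
        {
            skinnedRenderer.BakeMesh(baked);
            meshCollider.sharedMesh = null;   // force the collider to refresh
            meshCollider.sharedMesh = baked;
        }
    }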
The second problem you might have run into is the scale/position of the skinned mesh. Even if it does not contribute to the animation movement at all, it can still wreck the mesh baking and collision later on.
To fix that, make sure all of your SkinnedMeshRenderer objects have Local Position, Local Rotation and Local Scale in the Transform component set to identity (0,0,0 for position and rotation, 1,1,1 for scale). If not, reset those values in the editor. It will not change the animations, but it will change the generated mesh.
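If you ever want to enforce that from code instead of clicking through the editor, a tiny hypothetical helper could look like this:

    using UnityEngine;

    // Hypothetical helper: right-click the component in the Inspector and run
    // "Reset Skinned Transforms" to zero out every SkinnedMeshRenderer's local transform.
    public class ResetSkinnedTransforms : MonoBehaviour
    {
        [ContextMenu("Reset Skinned Transforms")]
        void ResetToIdentity()
        {
            foreach (var smr in GetComponentsInChildren<SkinnedMeshRenderer>())
            {
                smr.transform.localPosition = Vector3.zero;
                smr.transform.localRotation = Quaternion.identity;
                smr.transform.localScale = Vector3.one;
            }
        }
    }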
If neither solution works, please describe the problem more thoroughly.
I am using a single surface shader with a custom vertex function, and I tried to use macros like UNITY_PASS_SHADOWCASTER to add pass-specific code to the shadow processing, for example moving the vertices away from the light source to fix self-shadowing. However, I discovered that doing so has weird effects on how the shadows are rendered on the object, and even on when some of its pixels are displayed.
Eventually, I managed to find out that the ShadowCaster pass must be called at least twice even if there is a single light source: once with the virtual camera matching the light source, but also a second time when the shadow is to be applied to it. This is the call that controls the visibility of the shadows behind the object.
Now I have two questions:
What is this mode of execution called?
How do I make the code branch depending on which of these modes is executing? In other words, I want to move the vertices to a different position when casting the shadow, but keep them in place when the shadows are applied to the object. At the moment, I am checking whether ObjSpaceLightDir matches ObjSpaceViewDir, but that doesn't sound like the best idea. Considering the shader pass is probably compiled only once, I suppose I would have to look for a runtime variable, but I am not sure whether there even is one...
I managed to find mentions of a ShadowCollector pass for older versions of Unity. Is this the same thing?
I am using Unity 2020.3.32f1 with the built-in render pipeline.
I am trying to understand how Unity decides what to draw first in a 2D game. I could just give everything an Order in Layer, but I have so many objects that it would be much easier if it just drew them in the order of the hierarchy. I could write a script that gives every object its index, but I also want to see it in the editor.
So the question is, is there an option that I can check so that it uses the order in the hierarchy window as the default sorting order?
From your last screenshot I can see you are using a SpriteRenderer.
The short answer to the question "is there an option that I can check so that it uses the order in the hierarchy window as the default sorting order?" would be no, there isn't by default*.
Sprite renderers calculate which object is drawn in front of the others in one of two ways:
By using the distance to the camera: this draws the objects closest to the camera on top of the other objects within that same order in layer, as per the docs:
Sprite Sort Point
This property is only available when the Sprite Renderer’s Draw Mode is set to Simple.
In a 2D project, the Main Camera is set to Orthographic Projection mode by default. In this mode, Unity renders Sprites in the order of their distance to the camera, along the direction of the Camera’s view.
If you want to keep everything on the same sorting layer / order in layer, you can change the order in which the objects appear by moving one of the two objects further away from the camera (usually further down the z axis). For example, if your cashew is on z = 0 and you place the walnut on z = 1, the cashew will be drawn on top of the walnut. If the cashew is on z = 0 and the walnut on z = -1, the walnut will be drawn on top (since negative is closer to the camera). If both objects are on z = 0, they are equally far from the camera, so it becomes a coin toss which object gets drawn in front; the hierarchy is not taken into account.
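A tiny sketch of that idea, with a hypothetical component name and depth value:

    using UnityEngine;

    // Minimal sketch: pushes this sprite further from the default orthographic 2D camera
    // so other sprites on the same sorting layer / order in layer are drawn on top of it.
    public class PushBehind : MonoBehaviour
    {
        public float depth = 1f;   // larger z = further from the default 2D camera

        void Start()
        {
            Vector3 p = transform.position;
            transform.position = new Vector3(p.x, p.y, depth);
        }
    }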
The second way the order can be changed is by creating different sorting layers, and adjusting the order in layer in the sprite renderer component. But you already figured that out.
*However, that doesn't mean it cannot be done, technically...
If you feel adventurous, there is nothing stopping you from making an editor script that automates setting the Order in Layer for you based on the position in the hierarchy. This script would loop through all the objects in your hierarchy, grab each object's index in the hierarchy, and assign that index to the Order in Layer.
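A rough sketch of what such an editor script could look like; the menu name and the choice to walk the hierarchy depth-first are my assumptions:

    using UnityEditor;
    using UnityEngine;
    using UnityEngine.SceneManagement;

    // Editor-only sketch (put it in an Editor folder): walks the hierarchy depth-first and
    // assigns each SpriteRenderer an Order in Layer matching its position in the hierarchy.
    public static class HierarchySortingOrder
    {
        [MenuItem("Tools/Apply Hierarchy Sorting Order")]
        static void Apply()
        {
            int order = 0;
            foreach (GameObject root in SceneManager.GetActiveScene().GetRootGameObjects())
                ApplyRecursive(root.transform, ref order);
        }

        static void ApplyRecursive(Transform t, ref int order)
        {
            var sr = t.GetComponent<SpriteRenderer>();
            if (sr != null)
            {
                Undo.RecordObject(sr, "Set sorting order");
                sr.sortingOrder = order++;
            }
            foreach (Transform child in t)   // children come back in sibling order
                ApplyRecursive(child, ref order);
        }
    }

You would have to rerun it (or hook it into a hierarchy-changed callback) whenever you reorder the hierarchy.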
I don't think Unity has such a feature (https://docs.unity3d.com/Manual/2DSorting.html).
Usually you would define some Sorting Layers:
far background
background
foreground
and assign the Sprite Renderer of each sprite to one of the Sorting Layers.
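The assignment is normally done in the Sprite Renderer's Inspector, but it can also be set from a script if needed; a minimal sketch, assuming a layer named "background" already exists in Project Settings > Tags and Layers:

    using UnityEngine;

    // Minimal sketch: puts this sprite on the "background" Sorting Layer at runtime.
    // The layer name is an example and must match one defined in Project Settings.
    public class AssignSortingLayer : MonoBehaviour
    {
        void Start()
        {
            GetComponent<SpriteRenderer>().sortingLayerName = "background";
        }
    }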
I want to mix SpriteKit and Metal, and I found that there is a special class for that - SKRenderer. But, as I see it, it can only draw the whole SKScene as a single layer on top of everything. So, can it use the depth buffer and the zPosition property for proper rendering? For example, if my scene contains two SKNodes and I want to draw a "Metal object" between them?
I can see in a GPU debugger that every SKNode is rendered with a separate draw call (even more than one, actually), so, in theory, it's possible to use SKNode.zPosition not only for sorting inside the SKScene. For example, it could simply be translated into the viewport's z position as is (and, of course, keep the depth test).
The reason I think it's possible is this sentence I found in the documentation:
For example, you might write the environmental effects layer of your app that does fog, clouds, and rain, with custom Metal shaders, and continue to layer content below and above that with SpriteKit.
and I just can't believe that by "continue to layer content below and above that with SpriteKit" they mean "OK, you can create two different SKScenes".
I have a brown sprite which contains a hole with a triangular shape.
I've added a trail renderer (and set its order in layer to appear behind the sprite), so the user can paint the sprite's hole without painting the sprite itself.
My question is: how can I detect when the hole is fully painted?
I thought about using a shader to check if there are any black pixels left on the screen, but I don't know if that's possible, because the shader won't know at what percentage of the image it is.
One way would be to take a screenshot with the ScreenCapture.CaptureScreenshotAsTexture method and then loop through an array of pixel colors from Texture2D.GetPixels32. You could then check if the array contains 'black' pixels.
I would do it in a coroutine for better performance, as doing it every frame may slow down your application. Also, importantly, the Unity docs say this about CaptureScreenshotAsTexture:
To get a reliable output from this method you must make sure it is called once the frame rendering has ended, and not during the rendering process. A simple way of ensuring this is to call it from a coroutine that yields on WaitForEndOfFrame. If you call this method during the rendering process you will get unpredictable and undefined results.
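A rough sketch of how such a coroutine could look; the check interval, the "black enough" threshold and the log message are my assumptions:

    using System.Collections;
    using UnityEngine;

    // Rough sketch: periodically grabs the screen after rendering has finished and
    // checks whether any (nearly) black pixels are left.
    public class HoleProgressChecker : MonoBehaviour
    {
        public float checkInterval = 0.5f;   // seconds between checks

        IEnumerator Start()
        {
            while (true)
            {
                yield return new WaitForSeconds(checkInterval);
                // CaptureScreenshotAsTexture must run once frame rendering has ended.
                yield return new WaitForEndOfFrame();

                Texture2D shot = ScreenCapture.CaptureScreenshotAsTexture();
                bool anyBlackLeft = false;
                foreach (Color32 p in shot.GetPixels32())
                {
                    if (p.r < 10 && p.g < 10 && p.b < 10)   // "black enough" threshold
                    {
                        anyBlackLeft = true;
                        break;
                    }
                }
                Destroy(shot);   // the captured texture is not released automatically

                if (!anyBlackLeft)
                {
                    Debug.Log("Hole fully painted!");
                    yield break;
                }
            }
        }
    }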