Attachment points - unity3d

I use models designed in Blender, and I need to add attachment points to them for special effects. For example, marking a point on a model's hand (driven by the hand animations, of course) that I can apply a glow to when needed. I know how to apply glow to a 3D point, I just need a way to get that point.
How do I do that?

There are a couple of ways to do this sort of thing, but I like this approach the best because it's easy for tech artists to interface with (all it needs is a special name on an object). You can have your top-level character script scan its children and look for objects with a naming convention you specify.
// Scan the character's children for the specially named attachment object
// and parent the effect to it so it follows the animated bone.
foreach (Transform child in gameObject.GetComponentsInChildren<Transform>()) {
    if (child.name == "AttachmentPointOrWhatever") {
        myEffectsObject.transform.parent = child;
        myEffectsObject.transform.localPosition = Vector3.zero;
    }
}
This works because Unity will update the bones' positions based on your imported animation, so the effects object would follow along with the point that you imported with your animation.
As far as creating the animation, I'm coming from Maya and 3ds Max, but the idea should be the same for blender: Add extra bones for your attachment points and make sure they're bound to your model (or added to the skin weights or whatever the term is in Blender). They shouldn't have any weights on any vertices, but they need to be in the bind set so that Unity recognizes them as part of your animation and properly animates the points.


Render order according to hierarchy in Unity

I am trying to understand how Unity decides what to draw first in a 2D game. I could just give everything an Order in Layer, but I have so many objects that it would be much easier if it just drew in the order of the hierarchy. I could write a script that gives every object its index, but I also want to see it in the editor.
So the question is, is there an option that I can check so that it uses the order in the hierarchy window as the default sorting order?
From your last screenshot I could see you are using SpriteRenderer.
The short answer to the question "is there an option that I can check so that it uses the order in the hierarchy window as the default sorting order?" would be no, there isn't by default*.
A Sprite Renderer calculates which object is in front of the others in one of two ways:
By using the distance to the camera: this draws the objects closest to the camera on top of the rest of the objects with that same order in layer, as per the docs:
Sprite Sort Point
This property is only available when the Sprite Renderer’s Draw Mode is set to Simple.
In a 2D project, the Main Camera is set to Orthographic Projection mode by default. In this mode, Unity renders Sprites in the order of their distance to the camera, along the direction of the Camera’s view.
If you want to keep everything on the same sorting layer/order in layer, you can change the order in which the objects appear by moving one of the two objects further away from the camera (usually further down the z axis). For example, if your cashew is at z = 0 and you place the walnut at z = 1, the cashew will be drawn on top of the walnut. If the cashew is at z = 0 and the walnut is at z = -1, the walnut will be drawn on top (since negative is closer to the camera). If both objects are at z = 0 they are equally far from the camera, so it becomes a coin toss which object gets drawn in front, as the hierarchy is not taken into account.
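As a quick illustrative fragment (the walnut reference is hypothetical, and this assumes the default 2D setup with an orthographic camera looking down the negative z axis), pushing the walnut one unit further from the camera so the cashew at z = 0 draws on top:
// Hypothetical reference to the walnut's transform; z = 1 places it behind
// anything still at z = 0, such as the cashew.
Vector3 p = walnut.transform.position;
walnut.transform.position = new Vector3(p.x, p.y, 1f);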
The second way the order can be changed is by creating different sorting layers, and adjusting the order in layer in the sprite renderer component. But you already figured that out.
*However, that doesn't mean it cannot be done, technically...
If you feel adventurous there is nothing stopping you from making an editor script that automates setting the order in layer for you based on the position in the hierarchy. This script would loop through all the objects in your hierarchy, grab the index of the object in the hierarchy, and assign the index to the Order in Layer.
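A rough sketch of what such an editor script could look like (untested; the menu path, class name and depth-first traversal are my own choices, not an established Unity feature). It walks the open scene in hierarchy order and writes each SpriteRenderer's position in that traversal into its Order in Layer:
// Place this file in an Editor folder.
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEditor;

public static class HierarchySortingOrder
{
    [MenuItem("Tools/Apply Hierarchy Order To Sorting Order")]
    static void Apply()
    {
        int order = 0;
        foreach (GameObject root in SceneManager.GetActiveScene().GetRootGameObjects())
            Visit(root.transform, ref order);
    }

    // Depth-first traversal in the same top-to-bottom order as the Hierarchy window.
    static void Visit(Transform t, ref int order)
    {
        SpriteRenderer sr = t.GetComponent<SpriteRenderer>();
        if (sr != null)
        {
            Undo.RecordObject(sr, "Set Sorting Order");
            sr.sortingOrder = order++;   // later in the hierarchy = drawn on top
        }
        for (int i = 0; i < t.childCount; i++)
            Visit(t.GetChild(i), ref order);
    }
}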
I don't think Unity has such a feature (https://docs.unity3d.com/Manual/2DSorting.html).
Usually you would define some Sorting Layers:
far background
background
foreground
and assign each sprite's Sprite Renderer to one of those Sorting Layers.
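If you'd rather assign the layer from code than in the Inspector, the SpriteRenderer exposes it directly. A small fragment, assuming the layer names above have been created under the project's Tags & Layers settings:
SpriteRenderer sr = GetComponent<SpriteRenderer>();
sr.sortingLayerName = "background"; // must match an existing Sorting Layer
sr.sortingOrder = 2;                // ordering within that layer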

Is inverse kinematics possible without additional reference objects?

I was looking at this tutorial: https://docs.unity3d.com/Manual/InverseKinematics.html
In that tutorial, they change the body position of hands, head etc, by setting a target object to hold or look at.
In this project: https://hackaday.com/2016/01/23/amazing-imu-based-motion-capture-suit-turns-you-into-a-cartoon/
The guy accesses the Blender API and directly sets the transforms of several bones.
Is it possible to do the same in Unity? I do not need any assistance with getting data from sensors etc. I'm just looking for information on what the equivalent API in Unity is for directly setting the orientation of specific body parts of a skeleton at runtime.
You are probably looking for SkinnedMeshRenderer.
When you import a model from a 3D package such as Blender, it will have a SkinnedMeshRenderer component.
What you want to check out is SkinnedMeshRenderer.bones, which gets you the bones used to control the pose (as an array of Transform). You can modify its elements, thus affecting the pose. So you can do stuff like this:
// Grab the skeleton's bone transforms from the skinned mesh...
var bones = this.GetComponent<SkinnedMeshRenderer>().bones;
// ...and rotate the first bone by 45 degrees around its local Y axis.
bones[0].localRotation = bones[0].localRotation * Quaternion.Euler(0f, 45f, 0f);
Just play around with it, it is the best way to see.
For more advanced manipulation, you can also drive blend shapes (morph targets, separate from the bone array) with SetBlendShapeWeight / GetBlendShapeWeight, but this is probably more than what you need.
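To tie it together, here is a minimal sketch of driving one bone directly every frame; the bone name "Spine" and the idea of feeding the rotation from sensor data are placeholders, not something prescribed by Unity. Writing the pose in LateUpdate means it happens after the Animator (if any) has evaluated for the frame, so your value is the one that sticks:
using UnityEngine;

public class DirectBonePose : MonoBehaviour
{
    public Quaternion spineRotation = Quaternion.identity; // e.g. fed from your sensors
    Transform spine;

    void Start()
    {
        // Find the bone by name in the skinned mesh's bone array.
        foreach (Transform bone in GetComponent<SkinnedMeshRenderer>().bones)
        {
            if (bone.name == "Spine")
                spine = bone;
        }
    }

    void LateUpdate()
    {
        if (spine != null)
            spine.localRotation = spineRotation;
    }
}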

In Unity3D, what would "setting" the bounds of a mesh do or achieve?

In a Unity code base, I saw this:
// the game object currently has no mesh attached
MeshFilter mFilter = gameObject.AddComponent<MeshFilter>();
gameObject.AddComponent<MeshRenderer>();
// no problem so far
mFilter.mesh = MakeASmallQuadMesh(0.1f);
// great stuff
mFilter.mesh.bounds = SomeSpecificBounds();
// what ?
The function "MakeASmallQuadMesh" has the usual completely normal code for making a mesh, so
Mesh mesh = new Mesh();
mesh.SetVertices(verts);
mesh.SetIndices(indices, MeshTopology.Triangles, 0);
mesh.SetUVs(0, uvs);
mesh.RecalculateNormals();
mesh.RecalculateBounds();
return mesh;
No worries there. It makes a quad 10cm across.
But what about the line of code
mFilter.mesh.bounds = SomeSpecificBounds();
I was amazed to learn you can set mesh.bounds; I assumed it would be read-only.
What possible "meaning" is there to "setting" the bounds? It would be like: there's a written measurement in a doctor's office stating that Jane is 6'. You change the record to 5'10". Of course, Jane's height does not change at all. You've just, bizarrely made the record wrong.
Could it be that: so, the bounds of a mesh are used by various, indeed many, Unity systems. (Culling, etc etc.) Could the pattern be that by "forcing" the bounds like this (the bounds are now "totally wrong" for the object - they're just some value you forced in there) the programmer wanted (for some reason) Unity's system (say, culling) to use those forced, nonsensical (to the actual object) values?
Wild guess, maybe there's a pattern (I have never heard of) where you "force" the bounds of object A to be the same as object B - for some reason I cannot guess at?
What could possibly be the pattern / reason here?
I would just assume it's a basic mistake, but assumptions kill.
I was actually curious about this myself, and I happened to have a program that explicitly generated large numbers of custom meshes, so I decided to test a few things.
The first thing I wanted to confirm was that the bounds were in fact set automatically. A rudimentary inspection revealed that this was indeed the case: specifically, a mesh's bounds are automatically recalculated any time you set the mesh.vertices property. This probably explains why the property is a fixed-length array rather than a list. (Fun side note: if you try to assign secondary properties like UV coords or normals to the mesh before assigning vertex positions, Unity gently nags you about mismatched array lengths before promptly exploding. So Don't Do That.)
As for what this actually impacts, I had a hunch: I set the extents of my mesh bounds to be 0. Immediately, meshes at the corner of my viewport started getting visually culled. This tells us a few things:
Setting bounds explicitly does have an effect.
Unity does actually make use of custom bounds data.
Unity uses mesh bounds to perform frustum culling.
According to Unity's manual, there are three cases where the Bounds class is used: Mesh.bounds, Renderer.bounds, and Collider.bounds. Of those three, Mesh.bounds is the only property that isn't read only.
As for the question of why anybody would want to set mesh bounds explicitly, it's not impossible that you could perform some clever culling optimization like looking at a complex mesh through a window or some such, but if I had to guess, whoever wrote that code didn't trust Unity to calculate the mesh bounds accurately, and set them explicitly instead.
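For completeness, a minimal sketch of the "forcing" pattern, consistent with the culling behaviour observed above: deliberately assigning oversized bounds so the renderer is not frustum-culled even if the visible result extends past the authored vertices. The 10-unit size is an arbitrary example value:
using UnityEngine;

public class ForceLargeBounds : MonoBehaviour
{
    void Start()
    {
        // .mesh (not .sharedMesh), so only this instance's copy is modified.
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        // Overwrite whatever RecalculateBounds() produced with a deliberately
        // oversized box centred on the mesh's local origin.
        mesh.bounds = new Bounds(Vector3.zero, Vector3.one * 10f);
    }
}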

Making multiple objects with the same shader fade at different times

I have a death transformation for one of my GameObjects which goes from a spherical ball to a bunch of small individual blocks. I want each of these blocks to fade at a different time, but since they all use the same shader I cannot figure out how to stop them all fading out at the same time.
This first picture is the Spherical Ball in its first step for when it turns from a spherical ball to a Minecraft'ish looking block ball and to the right of it is one of the blocks that make up the Minecraft'ish looking ball shown by the red arrow.
Now this is my Inspector for one of the little blocks that make up the Minecraft'ish looking ball.
I have an arrow pointing to what makes the object fade, but that applies globally across all of the blocks since they use the same shader. Is it possible to have each block fade separately, or am I stuck and need to find a new disappearing act for the little block dudes?
You need to modify the material property by script at runtime, and you need to do it through the Renderer.material property. When you access Renderer.material, Unity will automatically create a copy of the material for you that is handled separately -- including getting its own draw call, if you care about performance. You can tell this has happened because the material name in the renderer will change to "Materialname (Instance)".
Set the material's fade property using Renderer.material.SetFloat() (or whatever the appropriate Set... function is). Unfortunately the property's internal name isn't the "Fade Factor" label you see in the Inspector. You can find the real property name by looking at the shader source, or by switching the Inspector to Debug mode and digging through the Saved Properties array for one that looks right.
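As a rough sketch of how that could look per block ("_FadeFactor" and the component name are placeholders; substitute the real property name you dig up as described above):
using System.Collections;
using UnityEngine;

public class BlockFade : MonoBehaviour
{
    public float duration = 1f;

    // Start with: StartCoroutine(GetComponent<BlockFade>().FadeOut());
    public IEnumerator FadeOut()
    {
        // Accessing .material (not .sharedMaterial) clones the material,
        // so only this block's copy is animated.
        Material mat = GetComponent<Renderer>().material;
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            mat.SetFloat("_FadeFactor", 1f - t / duration);
            yield return null;
        }
        mat.SetFloat("_FadeFactor", 0f);
    }
}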

How can I make dynamically generated terrain segments fit together Unity

I'm creating my game with dynamically generated terrain. It is a very simple idea: there are always three parts of terrain, the segment the player stands on and the two next to it. When the player moves (always forward) onto the next segment, a new one is generated and the last one is cut off. It works with flat planes, but I don't know how to do it with more complex terrain. Should I just make each piece have the same edge on both sides (I'm using Blender to create assets)? Or is there any other option? Please note that I'm just starting to make games with Unity.
It depends on what you would like your terrain to look like. If you want to create the terrain pieces in something external, like Blender, then yes all those pieces will have to fit together seamlessly. But that is a lot of work as you will have to create a lot of pieces that fit together for the landscape to remain interesting.
I would suggest that you rather generate the terrain dynamically in Unity. You can create your own mesh using code. You start by creating an object (in code), and then generating vertex and triangle arrays to assign to the object, for it to have a visible and sensible mesh. You first create vertices at specific positions and then add triangles that consist of 3 vertices at a time. If you want a smooth look instead of a low poly look, you will reuse some vertices for the next triangle, which is a little trickier.
Once you have created your block's mesh, you can begin to change your code to specify how the height of the vertices could be changed, to give you interesting terrain. As long as the first vertices on your new block are at the same height (say y position) as the last vertices on your current block (assuming they have the same x and z positions), they will line up. That said, you could make it even simpler by not using separate blocks, but by rather updating your object mesh to add new vertices and triangles, so that you are creating a terrain that is just one part that changes, rather than have separate blocks.
There are many ways to create interesting terrain. One of the most often used functions to generate semi-random and interesting terrain is Perlin noise. Another is Ken Perlin's more recent Simplex noise. Like most random generator functions, it has a seed value, which you can keep track of so that you can create interesting terrain AND get your block edges to line up, should you still want to use separate blocks rather than a single mesh which dynamically expands.
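To make that concrete, here is a minimal sketch in the spirit of the grid approach above: one segment whose vertex heights come from Mathf.PerlinNoise sampled at world-space positions, so the segment that starts where this one ends produces identical heights along the shared edge. The class name, fields and constants are illustrative choices, not a prescribed solution:
using UnityEngine;

[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class TerrainSegment : MonoBehaviour
{
    public int size = 16;          // quads per side
    public float noiseScale = 0.1f;
    public float heightScale = 3f;

    // worldOffsetX is where this segment starts along the direction of travel;
    // the next segment would be built with worldOffsetX + size.
    public void Build(float worldOffsetX)
    {
        var vertices = new Vector3[(size + 1) * (size + 1)];
        var triangles = new int[size * size * 6];

        for (int z = 0, i = 0; z <= size; z++)
        {
            for (int x = 0; x <= size; x++, i++)
            {
                // Sample the noise at the vertex's world position, not its local
                // index, so neighbouring segments agree along their shared edge.
                float worldX = worldOffsetX + x;
                float y = Mathf.PerlinNoise(worldX * noiseScale, z * noiseScale) * heightScale;
                vertices[i] = new Vector3(x, y, z);
            }
        }

        for (int z = 0, t = 0; z < size; z++)
        {
            for (int x = 0; x < size; x++, t += 6)
            {
                int i = z * (size + 1) + x;
                triangles[t] = i;
                triangles[t + 1] = i + size + 1;
                triangles[t + 2] = i + 1;
                triangles[t + 3] = i + 1;
                triangles[t + 4] = i + size + 1;
                triangles[t + 5] = i + size + 2;
            }
        }

        Mesh mesh = new Mesh { vertices = vertices, triangles = triangles };
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}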
I am sure there are many tutorials online about noise functions for procedural landscape generation. Amit Patel's tutorials are good visual and interactive explanations; here is one of his tutorials about noise-based landscapes. Take a look at his other great tutorials as well. There will be many tutorials on dynamic mesh generation as well, just do a Google search -- a quick look tells me that CatLikeCoding's Procedural Grid tutorial will probably be all you need.