Is inverse kinematics possible without additional reference objects? - unity3d

I was looking at this tutorial: https://docs.unity3d.com/Manual/InverseKinematics.html
In that tutorial, they change the position of the hands, head, etc. by setting a target object to hold or look at.
In this project: https://hackaday.com/2016/01/23/amazing-imu-based-motion-capture-suit-turns-you-into-a-cartoon/
The guy accesses the Blender API and directly sets the transforms of several bones.
Is it possible to do the same in Unity? I do not need any assistance with getting data from sensors etc. I'm just looking for information on what the equivalent API in Unity is for directly setting the orientation of specific body parts of a skeleton at runtime.

You are probably looking for SkinnedMeshRenderer.
When you import a model from a 3D package such as Blender, it will have a SkinnedMeshRenderer component.
What you want to check out is SkinnedMeshRenderer.bones, which gets you the array of bones (as an array of Transform) used to control the pose. You can modify its elements, thus affecting the pose. So you can do stuff like this:
// Grab the bone transforms that drive this skinned mesh...
var bones = GetComponent<SkinnedMeshRenderer>().bones;
// ...and rotate the first one 45 degrees around its local Y axis.
bones[0].localRotation = bones[0].localRotation * Quaternion.Euler(0f, 45f, 0f);
Just play around with it; that is the best way to see what happens.
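If you want to drive a bone from sensor data every frame, a minimal sketch could look like the script below. The field names are made up; assign your own bone Transform in the Inspector and feed sensorRotation from your IMU code.

using UnityEngine;

// Minimal sketch: drive one bone of a skinned character directly from code.
// leftForearm and sensorRotation are hypothetical; wire them up to your own rig and sensor data.
public class BoneDriver : MonoBehaviour
{
    public Transform leftForearm;                            // assign the bone Transform in the Inspector
    public Quaternion sensorRotation = Quaternion.identity;  // fill this from your IMU each frame

    void LateUpdate()
    {
        // LateUpdate runs after the Animator has evaluated, so this overrides any animation on the bone.
        if (leftForearm != null)
            leftForearm.localRotation = sensorRotation;
    }
}

Doing this in LateUpdate matters if an Animator is also running on the character; otherwise the animation will overwrite your values every frame.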
For more advanced manipulations you can also assign your own array of bones, or drive blend shapes with SetBlendShapeWeight / GetBlendShapeWeight, but this is probably more than what you need.

Related

How to collect a list of all contact points handled by the unity physics engine that frame

I need a 3D equivalent to Collider2D.GetContacts for ground detection in my platformer, but I can't see how to do this neatly. In theory the physics engine should be keeping track of these points anyway, so this should be possible without any extra processing, but I can't figure out how. A 3D equivalent to this function simply doesn't seem to exist, so what is the best alternative?
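There is no single built-in call that returns every contact point for a 3D collider, but one common workaround (a sketch only, not the only option) is to accumulate them yourself from the collision callbacks on the object you care about:

using System.Collections.Generic;
using UnityEngine;

// Sketch: collect this physics step's contact points from the collision callbacks.
// Attach to the object (e.g. the player) whose contacts you want to inspect.
public class ContactCollector : MonoBehaviour
{
    public readonly List<ContactPoint> frameContacts = new List<ContactPoint>();

    void FixedUpdate()
    {
        // Clear before the physics step reports this step's contacts.
        frameContacts.Clear();
    }

    void OnCollisionEnter(Collision collision) { frameContacts.AddRange(collision.contacts); }
    void OnCollisionStay(Collision collision)  { frameContacts.AddRange(collision.contacts); }
}

You can then scan frameContacts for any contact whose normal points sufficiently upward to decide whether the character is grounded.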

Why in 3D game we need to separate a material into so many textures for a static object?

Perhaps the question is not phrased quite right; the textures could be described as a kind of channel, although I know they are ultimately mixed together in the shader.
I know that understanding the various texture types is very important, but it is also a bit hard to understand completely.
From my understanding:
diffuse - the 'real' color of an object without lighting involved.
light - for static objects; lighting is baked into a texture beforehand.
specular - the areas that have direct (shiny) reflections.
ao - ambient occlusion; how much indirect light is blocked in each area of the object.
alpha - to 'shape' the object (transparency / cut-out).
emissive - self-illumination.
normal - per-pixel normal vectors used when shading against the light direction.
bump - (I don't know the exact difference from a normal map).
height - stores height (Z) values, used to generate terrain, displace vertices, etc.
And the items below should be related to PBR materials, which I'm not familiar with:
translucency / cavity / metalness / roughness etc...
Please correct me if there are any misunderstandings there.
But whatever the case, my question is: why do we need to separate these textures for a material, instead of just rendering them all together into the diffuse map for a static object?
Some examples (especially for PBR) would be appreciated. Thank you very much.
I can bake everything into the diffuse map beforehand and apply it to my mesh, so why do I need to apply so many different textures?
Re-usability:
Most games re-use textures to reduce the size of the game, which you can't do if you combine them. For example, when you have two similar objects but want to randomize their looks (an aging effect), you can make them share the same color (albedo) map but use different AO maps. This becomes important when there are hundreds of objects: you can use different combinations of texture maps on similar objects to create unique objects. If you had combined everything into one texture, it would be impossible to share it with other similar objects that you want to look slightly different.
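As a rough illustration of that sharing (the Standard shader property names _MainTex and _OcclusionMap are assumed here; the textures and renderers are hypothetical assets you would assign in the Inspector):

using UnityEngine;

// Sketch: two props share one albedo texture but use different AO maps,
// so they read as distinct objects without a second colour texture.
public class PropVariants : MonoBehaviour
{
    public Texture sharedAlbedo;
    public Texture cleanAO;
    public Texture agedAO;
    public Renderer cleanProp;
    public Renderer agedProp;

    void Start()
    {
        cleanProp.material.SetTexture("_MainTex", sharedAlbedo);
        cleanProp.material.SetTexture("_OcclusionMap", cleanAO);

        agedProp.material.SetTexture("_MainTex", sharedAlbedo);
        agedProp.material.SetTexture("_OcclusionMap", agedAO);
    }
}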
Customizability:
If you separate them, you'll be able to change how strongly each texture affects the object. For example, there is a slider on the metallic slot of the Standard shader. There are more of these sliders on other map slots, but they only appear once you plug a texture into the slot. You can't do this when you combine the textures into one.
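For example, a small sketch of adjusting those per-map strengths from code (assuming the built-in Standard shader; other shaders use different property names):

using UnityEngine;

// Sketch: because the maps are separate, each one has its own strength parameter.
public class MaterialTweaker : MonoBehaviour
{
    void Start()
    {
        Material m = GetComponent<Renderer>().material;

        m.SetFloat("_BumpScale", 1.5f);          // normal map intensity
        m.SetFloat("_OcclusionStrength", 0.5f);  // how strongly the AO map darkens
        m.SetFloat("_Glossiness", 0.8f);         // smoothness slider
    }
}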
Shader:
The Standard shader can't read a single combined image, so if you merge the maps you would have to write a custom shader that knows how to unpack the information for each map from the combined texture.
This seems like a reasonable place to start:
https://en.wikipedia.org/wiki/Texture_mapping
A texture map is an image applied (mapped) to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. They may be stored in common image file formats, referenced by 3d model formats or material definitions, and assembled into resource bundles.
I would add to this that the shape or polygon doesn't have to belong to a 3D object as one might imagine it. If you render two triangles as a rectangle, you can run all sorts of computations and store the result in a "live" texture.
Texture mapping is a method for defining high frequency detail, surface texture, or color information on a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in 1974.
What this detail represents is either some agreed-upon format for some property (say, "roughness" within some BRDF model), which is what you would encounter if you are using an existing engine.
Or whatever you decide that detail to be, if you are writing your own engine. You can decide to store whatever you want, however you want it.
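For instance, a sketch of stuffing arbitrary per-pixel data into a texture at runtime (using the red channel for a made-up "roughness" value is purely illustrative):

using UnityEngine;

// Sketch: a texture is just a grid of numbers, so you can pack any per-pixel data into its channels.
public class CustomDataTexture : MonoBehaviour
{
    void Start()
    {
        var tex = new Texture2D(256, 256, TextureFormat.RGBA32, false);
        for (int y = 0; y < tex.height; y++)
            for (int x = 0; x < tex.width; x++)
            {
                float roughness = Mathf.PerlinNoise(x * 0.05f, y * 0.05f);
                tex.SetPixel(x, y, new Color(roughness, 0f, 0f, 1f));
            }
        tex.Apply();

        // Hand it to whatever material/shader knows how to interpret it.
        GetComponent<Renderer>().material.SetTexture("_MainTex", tex);
    }
}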
You'll notice on the link that different "mapping" techniques are mentioned, each with their own page. This is the result of someone doing research and publishing a paper detailing the technique. Other people adopt it, and that's how these techniques find their way into engines.
There is no rule saying these can't be combined.

Tile Grid Data storage for 3D Space in Unity

This question is (mostly) game engine independent but I have been unable to find a good answer.
I'm creating a turn-based tile game in 3D space using Unity. The levels will have slopes, occasional non-planar geometry, depressions, tunnels, stairs, etc. Each level is static/handcrafted, so tiles should never move. I need a good way to keep track of tile-specific variables for static levels, and I'd like to verify that my approaches make sense.
My ideas are:
Create two meshes - one is the complex game world; the second is a reference mesh overlay with minimal geometry that will not be rendered and will only be used for the tiles. I would then overlay the two and use the second mesh as a grid reference.
Hard-code the tiles for each level. While tedious, it will work as a brute-force approach. I would, however, like to avoid this since it's not very easy to deal with visually.
Workaround approach - convert the 3D level to 2D textures and only use one mesh.
"Project" a plane down onto the level and record height/slope to minimize complexity. Also not ideal.
Create individual, non-rendered tile objects for each tile manually. This is the easiest solution I could think of (a sketch of this follows below).
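A minimal sketch of that last idea, or of skipping GameObjects entirely and keeping the data in a plain lookup keyed by grid coordinate (TileData and the 1-unit cell size are assumptions, not Unity API):

using System.Collections.Generic;
using UnityEngine;

// Sketch: store per-tile variables in a dictionary keyed by grid cell instead of attaching them to mesh geometry.
public class TileData
{
    public bool walkable;
    public float height;
}

public class TileGrid : MonoBehaviour
{
    readonly Dictionary<Vector3Int, TileData> tiles = new Dictionary<Vector3Int, TileData>();

    public void SetTile(Vector3Int cell, TileData data)
    {
        tiles[cell] = data;
    }

    public TileData GetTileAt(Vector3 worldPosition)
    {
        // Snap a world position to the nearest 1-unit grid cell.
        Vector3Int cell = Vector3Int.RoundToInt(worldPosition);
        tiles.TryGetValue(cell, out TileData tile);
        return tile;
    }
}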
Now for the Unity3D specific question:
Does Unity allow selecting individual verts/triangles/squares of a mesh and attaching components, scripts, or variables to those selections? For example, selecting one square of the default 10x10 Unity plane and telling Unity that square now has a new boolean attached to it? This question mostly refers to idea #1 above, where I would use a reference mesh for positional information and variables assigned directly to the mesh. I have a feeling that if I do choose to have a reference mesh, I'd need to have the tiles be individual objects, snap them into place using the reference, then attach the relevant scripts to those tiles.
I have found a ton of excellent resources (like http://www-cs-students.stanford.edu/~amitp/gameprog.html) on tile generation (mostly procedural), but I'm a bit stuck on the basics because I'm new to Unity, and I'm not looking for procedural design.

Unity3d Transform issues when trying to re-parent a Gameobject with sub-Gameobjects

I am trying to grab a GameObject that's placed in the correct spot and re-parent it to different GameObject transforms using code. I manage to do this with ...transform.parent = parent.transform, but the rotation and position get messed up. How can I keep its rotation and position when re-parenting?
Thanks!
Always use gameObject.transform.SetParent(anotherTransform), or you risk exactly the sort of phenomenon you've described. See the SetParent docs; there is a second parameter to consider.
Assigning transform.parent directly, as in the question, still works in many cases, but SetParent makes the behaviour explicit and lets you control whether the world position is preserved.
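A quick sketch of the difference the second parameter makes (child and newParent are hypothetical references you would assign yourself):

using UnityEngine;

public class ReparentExample : MonoBehaviour
{
    public Transform child;
    public Transform newParent;

    void Start()
    {
        // Keep the object's current world position/rotation (this is also what assigning .parent does).
        child.SetParent(newParent, worldPositionStays: true);

        // Or keep its local values instead, so it snaps relative to the new parent:
        // child.SetParent(newParent, worldPositionStays: false);
    }
}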
Place all the game objects that you want to parent under an empty game object (for example: "EMPTY\PARENT\EMPTY\CHILD") with scale 1:1:1 (in the editor), or re-scale your parent game object to 1:1:1.

Attachment points

I use models designed in Blender, and I need to add attachment points to them for special effects. For example, marking a point on a model's hand (driven by the hand animations, of course) that I can apply a glow to when needed. I know how to apply glow to a 3D point; I just need a way to get that point.
How do I do that?
There are a couple of ways to do this sort of thing, but I like this approach the best because it's easy for tech artists to interface with (all it needs is a special name on an object). You can have your top-level character script scan its children and look for objects with some naming convention you specify.
// Find every child transform whose name marks it as an attachment point,
// parent the effect to it, and zero the local offset so it sits exactly on that bone.
foreach (Transform child in gameObject.GetComponentsInChildren<Transform>()) {
    if (child.name == "AttachmentPointOrWhatever") {
        myEffectsObject.transform.parent = child;
        myEffectsObject.transform.localPosition = Vector3.zero;
    }
}
This works because Unity will update the bones' positions based on your imported animation, so the effects object would follow along with the point that you imported with your animation.
As far as creating the animation goes, I'm coming from Maya and 3ds Max, but the idea should be the same for Blender: add extra bones for your attachment points and make sure they're bound to your model (or added to the skin weights, or whatever the term is in Blender). They shouldn't have any weights on any vertices, but they need to be in the bind set so that Unity recognizes them as part of your animation and properly animates the points.