Why is Transform a separate entity from GameObject in Unity3D?

Every GameObject has exactly one Transform, and a Transform can only exist attached to a GameObject. The scene hierarchy is realized through Transform, yet GameObject relies heavily on it.
Is there a reason why they are separate entities rather than a single thing?

Components and GameObjects are both types derived from a base class called Object. A Component can only exist attached to a GameObject.
A Transform is a type derived from Component, and every GameObject must have a Transform component attached to it. The reason, from Unity's documentation:
Every object in a scene has a Transform. It's used to store and manipulate the position, rotation and scale of the object. Every Transform can have a parent, which allows you to apply position, rotation and scale hierarchically. This is the hierarchy seen in the Hierarchy pane.
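A short illustration of that relationship (the names here are arbitrary): a GameObject gets its Transform at creation, and parenting goes through the Transform, not the GameObject:
using UnityEngine;

public class HierarchyExample : MonoBehaviour
{
    void Start()
    {
        // A new GameObject always comes with a Transform already attached.
        GameObject parent = new GameObject("Parent");
        GameObject child = new GameObject("Child");

        // The hierarchy is built through Transform, not GameObject.
        child.transform.SetParent(parent.transform);

        // Position, rotation and scale are now relative to the parent.
        child.transform.localPosition = new Vector3(0f, 1f, 0f);
    }
}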
If you're asking why such an architecture is in place, you'll find this sort of design debated in more than a few articles and books. I once asked this question myself, and I found the Stack Overflow answer linked HERE to answer it best.

Related

In Unity3D, what would "setting" the bounds of a mesh do or achieve?

In a Unity code base, I saw this:
// the game object currently has no mesh attached
MeshFilter mFilter = gameObject.AddComponent<MeshFilter>();
gameObject.AddComponent<MeshRenderer>();
// no problem so far
mFilter.mesh = MakeASmallQuadMesh(0.1f);
// great stuff
mFilter.mesh.bounds = SomeSpecificBounds();
// what ?
The function "MakeASmallQuadMesh" has the usual completely normal code for making a mesh, so
// verts, indices and uvs are built earlier in the function (omitted here)
Mesh mesh = new Mesh();
mesh.SetVertices(verts);
mesh.SetIndices(indices, MeshTopology.Triangles, 0); // SetIndices also takes a topology and submesh index
mesh.SetUVs(0, uvs);
mesh.RecalculateNormals();
mesh.RecalculateBounds();
return mesh;
No worries there. It makes a quad 10cm across.
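For reference, here is a self-contained sketch of what such a function might look like; the exact vertex layout and winding are my assumptions, not the original code:
using System.Collections.Generic;
using UnityEngine;

public static class QuadMaker
{
    public static Mesh MakeASmallQuadMesh(float size)
    {
        float h = size * 0.5f;
        // Four corners of a quad centered on the origin in the XY plane.
        var verts = new List<Vector3>
        {
            new Vector3(-h, -h, 0f), new Vector3(h, -h, 0f),
            new Vector3(-h,  h, 0f), new Vector3(h,  h, 0f)
        };
        var indices = new int[] { 0, 2, 1, 2, 3, 1 }; // two triangles
        var uvs = new List<Vector2>
        {
            new Vector2(0, 0), new Vector2(1, 0),
            new Vector2(0, 1), new Vector2(1, 1)
        };

        Mesh mesh = new Mesh();
        mesh.SetVertices(verts);
        mesh.SetIndices(indices, MeshTopology.Triangles, 0);
        mesh.SetUVs(0, uvs);
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
        return mesh;
    }
}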
But what about the line of code
mFilter.mesh.bounds = SomeSpecificBounds();
I was amazed to learn you can set mesh.bounds; I had assumed it would be read-only.
What possible "meaning" is there to "setting" the bounds? It would be like this: there's a written measurement in a doctor's office stating that Jane is 6'. You change the record to 5'10". Of course, Jane's height does not change at all. You've just, bizarrely, made the record wrong.
Could it be this: the bounds of a mesh are used by various, indeed many, Unity systems (culling, etc.). Could the pattern be that by "forcing" the bounds like this (the bounds are now "totally wrong" for the object; they're just some value you forced in there), the programmer wanted Unity's systems (say, culling) to use those forced, nonsensical values for some reason?
Wild guess: maybe there's a pattern (one I have never heard of) where you "force" the bounds of object A to be the same as object B's, for some reason I cannot guess at?
What could possibly be the pattern / reason here?
I would just assume it's a basic mistake, but assumptions kill.
I was actually curious about this myself, and I happened to have a program that explicitly generated large numbers of custom meshes, so I decided to test a few things.
The first thing I wanted to confirm was that the bounds were in fact set automatically. A rudimentary inspection revealed that this was indeed the case: specifically, a mesh's bounds are automatically recalculated any time you set the mesh.vertices property. This probably explains why the property is a fixed-length array rather than a list. (Fun side note: if you try to assign secondary properties like UV coords or normals to the mesh before assigning vertex positions, Unity gently nags you about mismatched array lengths before promptly exploding. So Don't Do That.)
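A minimal version of that inspection (the vertex values are arbitrary; this just logs the bounds right after assigning the vertex and index data):
using UnityEngine;

public class BoundsInspection : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = new Mesh();
        // Per the observation above, assigning vertices should update the bounds.
        mesh.vertices = new Vector3[]
        {
            new Vector3(0f, 0f, 0f),
            new Vector3(1f, 0f, 0f),
            new Vector3(0f, 1f, 0f)
        };
        mesh.triangles = new int[] { 0, 1, 2 };
        Debug.Log(mesh.bounds); // should now enclose the three vertices
    }
}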
As for what this actually impacts, I had a hunch: I set the extents of my mesh bounds to 0 (see the sketch after this list). Immediately, meshes at the corner of my viewport started getting visually culled. This tells us a few things:
Setting bounds explicitly does have an effect.
Unity does actually make use of custom bounds data.
Unity uses mesh bounds to perform frustum culling.
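A hedged sketch of that experiment (attach to any object with a MeshFilter; the names are mine):
using UnityEngine;

public class CollapseBounds : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        // Collapse the bounds to a single point at the mesh's center.
        // The geometry is unchanged, but the camera now frustum-culls the
        // renderer as soon as that single point leaves the view frustum.
        mesh.bounds = new Bounds(mesh.bounds.center, Vector3.zero);
    }
}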
According to Unity's manual, there are three places where the Bounds structure is used: Mesh.bounds, Renderer.bounds, and Collider.bounds. Of those three, Mesh.bounds is the only property that isn't read-only.
As for the question of why anybody would want to set mesh bounds explicitly: it's not impossible that you could perform some clever culling optimization, like looking at a complex mesh through a window or some such, but if I had to guess, whoever wrote that code didn't trust Unity to set the mesh bounds accurately and forced them explicitly instead.
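One concrete variant of that "clever culling" idea, offered only as a guess about the original author's intent: when a vertex shader displaces geometry far outside the authored mesh, the automatic bounds are too tight and the renderer can get culled while still visible. Inflating the bounds is a common workaround; a sketch:
using UnityEngine;

public class InflateBounds : MonoBehaviour
{
    // Maximum distance the vertex shader may push geometry (assumed value).
    public float maxDisplacement = 2f;

    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Bounds b = mesh.bounds;
        // Grow the bounds so frustum culling accounts for the displacement.
        b.Expand(maxDisplacement * 2f);
        mesh.bounds = b;
    }
}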

Swift SKSpriteNode: Complex Sprite Textures?

This is a question about best practice for implementation. As an example, I will reference a simple game called Pixel Claw.
Suppose I had an SKSpriteNode akin to the claw from Pixel Claw, in that what the SKSpriteNode might be "holding" is variable (but finite in its possibilities, e.g. four different objects).
Whatever the SKSpriteNode is holding has no agency of its own.
Thus my question is: is it better to use different textures (a claw, a claw holding object a, a claw holding object b, etc.), or to have two SKSpriteNodes and position the one with the claw texture next to the one with an object texture?
I am not making a claw game; it was just the first example that popped into my head where both could be plausible solutions. The former is simpler (just switch the texture), the latter more generalizable.
If the latter is the best solution all around, how can one "pair" the movements of the two sprites?
I would use the second approach, especially if objects a and b can fall out of the claw at some point.
You'd have one texture for the claw, one for object a, and one for object b.
To make the objects move in sync with the claw, simply add the object as a child of the claw node. This way, moving the claw will cause the object to move as well!
Note that before adding the object as a child, you need to make sure that the object does not have a parent. If it does, you must remove it from its parent before adding it as a child of the claw. The same thing applies when you "release" the object from the claw: you need to first remove the object from its parent (the claw) and then add it as a child of the desired parent.
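A minimal sketch of that grab/release dance (node names are illustrative; convert(_:to:) keeps the object from visually jumping across the reparent):
import SpriteKit

func grab(_ object: SKNode, with claw: SKNode) {
    guard let oldParent = object.parent else { return }
    // Re-express the object's position in the claw's coordinate space.
    let newPosition = oldParent.convert(object.position, to: claw)
    object.removeFromParent()
    object.position = newPosition
    claw.addChild(object)  // the object now moves with the claw
}

func release(_ object: SKNode, into newParent: SKNode) {
    guard let claw = object.parent else { return }
    let newPosition = claw.convert(object.position, to: newParent)
    object.removeFromParent()
    object.position = newPosition
    newParent.addChild(object)
}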

Unity3D Transform issues when trying to re-parent a GameObject with sub-GameObjects

I am trying to grab a GameObject that's placed in the correct spot and re-parent it to various other GameObjects' transforms in code. I can manage this with transform.parent = parent.transform, but the rotation and position get messed up. How can I keep its rotation and position when re-parenting?
Thanks!
Always use gameObject.transform.SetParent(anotherTransform) (note that SetParent takes a Transform, not a GameObject), or you risk exactly the sort of phenomena you've described. See the SetParent docs; there is a second parameter to consider.
The form mentioned in the question still works in many cases, but SetParent has been the recommended approach for a while.
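A short sketch of the recommended call; the second parameter (worldPositionStays) is the one that preserves world position and rotation through the reparent:
using UnityEngine;

public class Reparent : MonoBehaviour
{
    public Transform newParent; // assigned in the Inspector (illustrative)

    public void Attach()
    {
        // true = keep the current world position/rotation/scale;
        // the local values are recomputed relative to the new parent.
        transform.SetParent(newParent, true);
    }
}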
Another tip: place all the GameObjects you want to parent under an empty GameObject (a hierarchy like Empty/Parent/Empty/Child), with the parent's scale at 1:1:1 in the editor, or rescale your parent GameObject to 1:1:1. A parent with non-uniform scale will distort its children's transforms when you re-parent.

Drag and match objects

I want to make a game in which the player drags an object (a UIView) until it intersects another object (a UIButton); at that moment an event fires and checks whether the match is right or wrong.
If it's wrong, the object moves back to its original position.
I have the touch-and-drag working, but the elastic snap-back of the object's position is not working properly. Any example or help, please?
If you're using standard (non-GL) programming, you can use the objects' frames to check whether they intersect; there's a call for exactly that. You can also take the relevant points of one object and check whether they lie inside the other object's polygon; if they do, you have a collision. The Quartz 2D drawing tools have lots of intersection methods for these types of objects.
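A small sketch of that frame test (handleMatch and the other names are placeholders for your game logic; the snap-back is a basic UIView animation):
import UIKit

func endDrag(of piece: UIView, over target: UIButton, homeCenter: CGPoint) {
    // CGRect.intersects(_:) is the frame-based intersection call mentioned above.
    if piece.frame.intersects(target.frame) {
        handleMatch(piece: piece, target: target)
    } else {
        // Wrong match: animate the piece back to where it started.
        UIView.animate(withDuration: 0.3) {
            piece.center = homeCenter
        }
    }
}

func handleMatch(piece: UIView, target: UIButton) {
    // Game-specific check/scoring goes here (placeholder).
}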

Attachment points

I use models designed in Blender, and I need to add attachment points to them for special effects, e.g. marking a point on a model's hand (driven by the hand's animations, of course) that I can apply a glow to when needed. I know how to apply glow to a 3D point; I just need a way to get that point.
How do I do that?
There are a couple of ways to do this sort of thing, but I like this approach best because it's easy for tech artists to interface with (all it needs is a special name on an object). You can have your top-level character script scan its children and look for objects with some naming convention you specify:
foreach (Transform child in gameObject.GetComponentsInChildren<Transform>())
{
    if (child.name == "AttachmentPointOrWhatever")
    {
        // Parent the effect to the attachment point and snap it onto the bone.
        myEffectsObject.transform.parent = child;
        myEffectsObject.transform.localPosition = Vector3.zero;
    }
}
This works because Unity will update the bones' positions based on your imported animation, so the effects object would follow along with the point that you imported with your animation.
As far as creating the animation goes, I'm coming from Maya and 3ds Max, but the idea should be the same for Blender: add extra bones for your attachment points and make sure they're bound to your model (added to the skin, or whatever the term is in Blender). They shouldn't carry any weight on any vertices, but they need to be in the bind set so that Unity recognizes them as part of your animation and properly animates the points.