Say you want to let your users create a 2D airship (side view) made up of components. One component could be a floating balloon, another could be a storage room, etc.
And say you want physics applied to this airship as a whole, with each component playing a part. For example, balloons would reduce its downward force, while other compartments would add weight.
At the same time, the airship moves as one unit (the only physical distinction being between floater components and the rest, which add weight), so physics should be applied to it as if it were a single body.
Now, how can this be managed? How can I customize a game object via a script, giving it different components with different weights, and have it behave as if it were a single premade object?
I'm sorry if the question is simple, but I'm only getting started with Unity. Thank you very much!
TL;DR: A customizable airship game object made of components with different weights. How do I make it behave as a single physics entity while managing its different components?
I'm not sure I completely understood the question :D
You mean having a game object with some child game objects, each with its own Collider and Rigidbody, while the physics is applied to the parent game object? You can use joint components for this purpose, such as HingeJoint2D, setting the parent's Rigidbody2D as the joint's connectedBody.
(And instead of the word "component", use "child", "part", or something else to prevent ambiguity, because in Unity the components are things like scripts, Rigidbodies, Colliders, etc. that are attached to game objects.)
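If the goal is for the ship to behave as one body, another common approach is a single Rigidbody2D on the parent with plain (non-Rigidbody) colliders on the children, which Unity treats as one compound collider. A minimal sketch, assuming each part exposes a hypothetical lift value (positive for balloons, negative for heavy compartments):

```csharp
using UnityEngine;

// Illustrative part descriptor: balloons have positive lift,
// storage rooms have negative lift (i.e. weight).
public class AirshipPart : MonoBehaviour
{
    public float lift; // assumption: set per part in the Inspector
}

// Goes on the parent, which holds the single Rigidbody2D. Child colliders
// without their own Rigidbody2D act as one compound collider, so the
// ship already behaves as a single physics body.
[RequireComponent(typeof(Rigidbody2D))]
public class Airship : MonoBehaviour
{
    Rigidbody2D body;

    void Awake() => body = GetComponent<Rigidbody2D>();

    void FixedUpdate()
    {
        // Sum every part's contribution and apply it as one force.
        // (In practice you'd cache the parts instead of querying each step.)
        float totalLift = 0f;
        foreach (var part in GetComponentsInChildren<AirshipPart>())
            totalLift += part.lift;
        body.AddForce(Vector2.up * totalLift);
    }
}
```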
I'm working on a small project where the goal is to defend yourself from massive enemy hordes in a 2D environment. The game is meant to be physics based, with enemies pushing against each other to create a feeling of chaos.
I now have a setup with enemies, a player, and multiple obstacles, all with a Rigidbody2D and 2D colliders (mixed types, including PolygonCollider2D), and I want the enemies to chase the player. The obstacles, however, are polygon-shaped rigidbodies and are therefore able to move.
So I want a navigation mesh of some sort that updates on the fly and can calculate a navigation path for many entities.
Note: I don't know how many entities yet, but let's say ~100.
The things I tried so far:
Using the built-in NavMesh package from Unity. This package does not properly support Unity's 2D configuration.
Using the NavMeshPlus package. This adds a NavMeshSurface2D component to work with the 2D environment. I got this to work, but the NavMeshAgent component that steers the enemy does not use the Rigidbody's forces; it just sets the speed or position of the transform directly. If someone knows how to use the NavMeshAgent or the NavMesh API in a way that yields the target direction/speed via script, that would also be a solution.
The A* package. This also works, but it creates a very jagged path because it can only move in 8 directions (up, down, left, right, and diagonals). It also cuts the corners of obstacles and does not take the size of the enemy itself into account, so it gets stuck on corners.
If anyone knows a good package/extension, or where to find documentation for this part of NavMesh, that would be great!
Thank you in advance.
So I got it to work and will share the solution I took:
Using the NavMeshPlus package, I created a navmesh.
Instead of using a NavMeshAgent, I configured an agent type in the navigation window with the right physical dimensions.
Next I added a script on the enemy to function as a custom NavMeshAgent.
Add a NavMeshSurface2D reference to the script (serialized).
Using UnityEngine.AI, call NavMesh.CalculatePath(), passing an AI.NavMeshQueryFilter. The filter has two settings, the agent type ID and the area mask, and I get both from the NavMeshSurface2D reference.
Now you have the path the agent has to follow!
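For anyone who wants to see those steps in code, here's a minimal sketch of such a custom agent. The class and field names are mine, I'm assuming the NavMeshPlus surface type (written here as NavMeshSurface2D, matching the post) exposes agentTypeID the way Unity's NavMeshSurface does, and I use NavMesh.AllAreas rather than deriving the mask from the surface. It steers with Rigidbody2D forces, which was the original goal:

```csharp
using UnityEngine;
using UnityEngine.AI;

// Hypothetical custom agent: recalculates a path with the surface's
// agent settings and steers the Rigidbody2D with forces, so the
// physics simulation (pushing, crowding) stays in charge of movement.
[RequireComponent(typeof(Rigidbody2D))]
public class CustomNavAgent2D : MonoBehaviour
{
    [SerializeField] NavMeshSurface2D surface; // the baked NavMeshPlus surface
    [SerializeField] Transform target;
    [SerializeField] float moveForce = 10f;

    Rigidbody2D body;
    NavMeshPath path;

    void Awake()
    {
        body = GetComponent<Rigidbody2D>();
        path = new NavMeshPath();
    }

    void FixedUpdate()
    {
        var filter = new NavMeshQueryFilter
        {
            agentTypeID = surface.agentTypeID, // agent type set up in the Navigation window
            areaMask = NavMesh.AllAreas        // or build a mask from the surface's areas
        };

        // In a real game you'd recalculate less often than every physics step.
        if (NavMesh.CalculatePath(transform.position, target.position, filter, path)
            && path.corners.Length > 1)
        {
            // corners[0] is our own position; steer toward the next corner.
            Vector2 dir = ((Vector2)path.corners[1] - body.position).normalized;
            body.AddForce(dir * moveForce);
        }
    }
}
```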
By the way, I'm aware this could also be done with a custom agent class inheriting from NavMeshAgent; I'm looking into that as well. Time will tell which one performs better.
I have been building a 2D sprite-based game in which I want the player to be able to customize their equipment. I'm fine with drawing the content, but I need the animations to run in sync on top of each other. For this, I have prepared a game object with several children to account for the equipment.
Each of the children runs a single animation and has to follow the player, which I accomplish with transform.localPosition = Vector2.zero; in the Update of each child's script, so they hook to the parent's pivot and follow the player. While this has worked for the most part, there are moments when the objects fall out of sync, and the parent object (the body) shows where it shouldn't, since the other game objects should render on top of it.
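In code, that pinning approach is essentially the following. The LateUpdate timing is my assumption rather than what the post describes; running after the animators have evaluated can reduce the one-frame desync:

```csharp
using UnityEngine;

// Minimal sketch of the "pin to parent pivot" approach described above.
public class FollowParentPivot : MonoBehaviour
{
    void LateUpdate()
    {
        // LateUpdate runs after animation, so the child stays pinned
        // even if an animation clip moved it earlier this frame.
        transform.localPosition = Vector2.zero;
    }
}
```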
Aside from that, to make it easy for the children to follow the parent position, I made all the sprites the same size, which means loading a lot of transparent space per sprite.
Another problem I noticed while trying to address the wasted pixels involves positioning other objects, such as the Sword game object, which doesn't fully follow the player if I use sprites that are not perfect squares (for details see this question, How to align sprites of smaller sizes to a moving gameobject sprite?, and this one, Sibling sprite doesn't appear to follow main GameObject sprite even when transform.position updates).
I tried to fix this by making the Sword a child of the Hero, but even then, changing the position through a function that adds values onto the sword's transform only moves it relative to its initial value. I also tried changing the pivot of the sword sprites to a custom value, guessing where its center would align with the main game object, but even that doesn't seem to work.
I'm getting tired of my current process: I have to rely on several animations for each game object, parent and children alike, spread across different layers of a single animator (or, in the sword's case, a separate animator), all to get a synchronization that doesn't always happen.
I don't really mind the web of states this is turning into, but repeating it across multiple layers, with no real guarantee that the objects will line up because multiple animations are playing, plus loading sprites full of empty space, is becoming more of a chore than a joy.
So I think I came up with a possible solution: make a single animation for the whole set of equipment worn at any given point (whether only pants or full gear). A single animation would guarantee synchronization across parent and children, with no need for animator layers, position-updating functions, pivot tweaks, or square sprites, since I could position non-square sprites inside the animation itself. The downside is that I would need an animation for every possible equipment variation (even with just 3 of each of sword, pants, boots, etc., that's 3^6 animations) and a more complex web of animator states. My only worry would be performance: whether having that many animations for a player affects how fast they load. But given that it eliminates the other problems, my question boils down to this:
Is it better to have a single game object whose animations change multiple sprites across child game objects, with a single animator choosing states based on multiple variables, or game objects each with animations that change a single sprite, and one or more animators with multiple layers choosing states based on multiple variables?
There isn't really a set answer for something like this. It really just depends on how good your/the player's computer is when playing the game. Sorry if this isn't what you wanted.
I want to create a game with tons of different enemies, items, accessories, heroes, attacks and more.
To make it easy to expand and maintain, I want to build the game in the most modular way possible, so nothing in the game is hard-coded and every aspect is assembled and configured.
But I'm undecided as to how the elements should be modeled:
As ScriptableObjects - (for example) every enemy is configured and represented by a ScriptableObject, and this asset contains the enemy's stats and behaviours, but also the enemy's prefab, so the asset tells us how to create the enemy object in the game.
So if I have some place where I want to put an enemy, I would (for example) assign the enemy's ScriptableObject in the Inspector.
As Prefabs - every enemy is represented by a prefab, and the prefab's root object holds a ScriptableObject whose purpose is to store that specific enemy's stats and behaviours, which the enemy then acts according to.
So if I have some place where I want to put an enemy, I would assign the enemy's prefab in the Inspector.
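For illustration, a minimal sketch of the first option, where the ScriptableObject is the source of truth and knows which prefab to spawn. All names here are hypothetical, and in a real project each class would live in its own file:

```csharp
using UnityEngine;

// Authoring asset: one instance per enemy type, created via the asset menu.
[CreateAssetMenu(menuName = "Game/Enemy Definition")]
public class EnemyDefinition : ScriptableObject
{
    public string displayName;
    public int maxHealth;
    public float moveSpeed;
    public GameObject prefab; // the visual/physical representation

    public GameObject Spawn(Vector3 position)
    {
        var instance = Instantiate(prefab, position, Quaternion.identity);
        // Hand the instance its definition so behaviour scripts can read the stats.
        instance.GetComponent<Enemy>().Init(this);
        return instance;
    }
}

// Attached to the prefab root; acts according to the definition's stats.
public class Enemy : MonoBehaviour
{
    EnemyDefinition definition;
    int currentHealth;

    public void Init(EnemyDefinition def)
    {
        definition = def;
        currentHealth = def.maxHealth;
    }
}
```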
Is there a best practice for this? A way that will make my life easier when the game becomes particularly large and complex?
I have a Unity project in which there is a 2D game world that consists of static colliders to make the geometry solid to the characters that inhabit it. The player is a dynamic collider (with a non-kinematic rigidbody). There's also an enemy character that is also a dynamic collider. Both characters walk over the floor and bump into walls like I'd expect them to.
What I want to achieve is that the player and enemy are not solid to each other, so they can move through each other. I achieved this by putting the enemy and the player on separate layers and setting the collision matrix so that these layers do not collide. The problem I'm having now, however, is that I do want to detect whether the enemy and the player ran into each other. I added a trigger collider to the enemy character, but it's on the enemy layer, which means it doesn't detect collisions with the player.
I thought of making a sub-GameObject for the enemy, putting it on the player's layer, and adding a rigidbody and trigger collider to it to detect collisions between the player and the enemy, but it feels so convoluted that it leaves me wondering whether there isn't a more elegant solution.
Edit (2020-05-26): Nowadays you can tell the engine to ignore collisions between two given objects. Kudos to Deepscorn for the comment. I'll leave the rest of the answer unchanged; read it with that in mind.
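For reference, the per-pair API is Physics2D.IgnoreCollision; a minimal sketch:

```csharp
using UnityEngine;

// Disables collision between two specific colliders without touching
// layers, e.g. between this enemy and the player.
public class IgnorePlayerCollision : MonoBehaviour
{
    [SerializeField] Collider2D playerCollider; // assign in the Inspector

    void Start()
    {
        Physics2D.IgnoreCollision(GetComponent<Collider2D>(), playerCollider);
    }
}
```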
Yes, you need to create a child GameObject, with a trigger collider, and put it in a layer that interacts with the player layer.
No, you don't need to add a Rigidbody to the new GameObject, the parent's rigidbody already makes it a dynamic collider, so you will get OnTrigger events.
As a side note, just to keep things organized: if you create a child of the enemy, don't put it in the player layer. For example, in the future you might need to disable the player layer's collision with itself. Furthermore, if your player interacts this way with many objects, I'd put a single trigger on the player instead of on the enemies, on a separate PlayerTrigger layer, just to keep things simple. A sketch of the detection script follows.
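A minimal sketch of the detection script on that child trigger object, assuming the player object is tagged "Player" (the tag is my assumption):

```csharp
using UnityEngine;

// Goes on the child trigger object (e.g. on a layer that collides with
// the player layer in the collision matrix). The parent's Rigidbody2D
// is what makes the trigger events fire.
public class PlayerContactDetector : MonoBehaviour
{
    void OnTriggerEnter2D(Collider2D other)
    {
        if (other.CompareTag("Player"))
        {
            // The enemy and player overlapped; react here.
            Debug.Log("Enemy touched the player");
        }
    }
}
```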
Isn't there a simpler way? Not really. You definitely need non-interaction between the player and enemy colliders, but also some kind of interaction between them, so one of them has to span two layers; otherwise the whole interaction would be described by a single bool. The physics engine processes lots of information in one go: you can set up all the layers and collisions you want, but during the physics loop you have no further control over what happens, and you can't tell the engine to ignore collisions between just two objects. Having only 32 layers, and having them behave in the rigid way they do, are actually heavy optimizations. If you are concerned about the performance cost of another layer, disable interaction between layers that don't need it, like the trigger layer and the floor and walls, or layers that never even touch.
Your alternative is doing it all in code, which is even less elegant. A single child capsule on the player doesn't sound that bad now, does it?
I am looking at a tutorial where things are defined like this:
planes are sprites with dynamic physics bodies
plane movement is done with actions, by following a path
there is contact detection between the bullet and the plane
the bullet is a sprite whose physics body is set to be static (which is a little unusual in my opinion)
Here is the link to the tutorial for more information.
Questions:
When we use actions to move physics bodies, does it make a difference how we set the body's dynamic property? The bullet is static, yet it moves without any problem.
In a situation like this, where we don't need collision detection, just contact detection, and we can't move the sprites (enemies) by applying forces or impulses, what is a good approach? Is this approach correct?
I think this is a nice way, but I would like to be fully aware of what is really happening when we use this method, and whether there are any drawbacks or possible problems.
There are posts on SO suggesting we shouldn't use actions to move dynamic physics bodies. I am aware that we can't use this approach in every case. For example, it would not work for a moving platform with another object on it, because there would be no correct physics simulation between the body on the platform and the platform moved by an action. But in cases like this tutorial, where we only need contact detection for objects that can only be moved by actions (moved along a path), I suppose it's not a problem?
Static means that the body isn't affected by physics. That doesn't mean it can't be manually repositioned or moved. So if something is set to static, it participates in the physics simulation, but it isn't affected by it. Think of a plane hitting a mountain: the plane is dynamic, while the mountain just sits there even though it's participating in the physics. But you could still move that mountain around manually using its position or an action.
Not totally understanding your question, but I'll give it a shot.
You can move physics bodies manually (using the position property or actions), but you need to ask yourself why you're doing that. You typically don't want to, because you'd be bypassing the physics simulation. If you're forcing it to move around, what's the point of using a physics body in the first place, right?
But if your physics body is something like a powerup that is totally static, and you just want it to move around in a circle with an action, then that's fine. You probably just want contact detection between the bullet, the powerup, etc. without actually moving them around using physics. Nothing wrong with that.