I'd like to add some sort of "properties" to different areas of the level so they can affect NPC behavior.
Level layout:
What NPC behavior I want to achieve:
NPC can freely walk around Area #1
NPC isn't allowed to go to Area #4 in general, but it may decide to go there in some exceptional cases
If the NPC wants to interact with the player, it has to go to Area #2 and check whether the player is in Area #3. If the player is not there, the NPC has to wait until the player enters Area #3
I know that I can affect the path NPC chooses by using navigation modifier volumes and changing navigation cost in some areas.
But, based on the goals above, I also need at least the following:
I need to get random point within specific area (for goals #1 and #3)
I need to check if some actor is in specific area (for goal #3)
And I guess I don't really need the "navigation cost" feature: if I could get a random point in a specific area, I would be able to control where the NPC goes anyway
The questions are:
What actor should I use to mark some areas as "NPC can go here if they want to walk around", "NPC can go here if they want to interact with player", etc?
If a volume is the best option, which volume should I use? My concern about the Nav Modifier volume is that I don't really need to modify the navigation process by blocking it completely or adjusting its cost.
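The two primitives you're asking for (random point inside an area, actor-in-area test) don't require touching navigation at all; any box-shaped trigger volume that exposes its bounds is enough. As an engine-agnostic sketch (the `Area` class and its method names are my invention, not an Unreal API):

```python
import random

class Area:
    """Axis-aligned box area -- a simplified stand-in for a trigger volume."""
    def __init__(self, x_min, y_min, x_max, y_max):
        self.x_min, self.y_min = x_min, y_min
        self.x_max, self.y_max = x_max, y_max

    def contains(self, x, y):
        # "Is some actor in this area?" check (goal #3)
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

    def random_point(self):
        # "Get a random point within this area" (goals #1 and #3)
        return (random.uniform(self.x_min, self.x_max),
                random.uniform(self.y_min, self.y_max))

area3 = Area(0, 0, 10, 5)
px, py = area3.random_point()
assert area3.contains(px, py)
```

In Unreal you'd get the same effect from any volume's bounding box; the NPC then asks navigation only to path to the chosen point, not to decide where the point is.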
I have used the NavMeshBoundsVolume to define the area that my AI character is allowed to be in.
Then I used blackboards and set points the AI could walk to, adding variables that enable or disable a point. In your case you could have a variable that is true when the player is in Area #3.
I've included a picture of my Behavior Tree, so that you can get an idea of how the flow might look. This is for an "enemy," but you could simply have the AI follow the character to a specific area instead of playing the attack animation and applying damage.
Here is a really good series on AI in Unreal Engine. It's UE4 but if you are familiar with the engine you shouldn't have a problem applying it to UE5.
I'd like to show congestion areas on a conveyor network using the density map included in the Material Handling Library, but so far I haven't found a way to do so: material agents' movement cannot be tracked by the density map, which only accepts transporters or pedestrians (both in free-space movement mode).
So I thought I could create a "parallel" agent (for instance, a pedestrian) that could get attached to my material and move along with it. Then I could set the pedestrian visible property to "no" so that it does not show in the animation, or make it really small as an alternative approach.
The problem when doing the pickup/dropoff logic is that the pedestrian disappears from the scene when it gets picked up (although it's internally batched with the material) so the density map shows nothing.
Same thing happens if I try to seize/release a transporter, as they do not travel along the conveyor with the material agent.
Any idea on how to achieve this?
Thanks a lot in advance!
You won't be able to drag pedestrians; they actively move via PedMoveTo blocks. This is a creative idea, but it will not work.
You will have to code your own heatmap using dynamically colored rectangles that you animate on top of conveyors. Doable, but might be more work than is really required for your purpose ;)
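The core of such a homemade heatmap is just mapping a per-segment agent count to a fill color for the rectangle drawn over that conveyor piece. A minimal sketch of that mapping (the function name and green-to-red scale are my own choice; in AnyLogic you'd do this in Java against the shape's fill-color property):

```python
def congestion_color(count, max_count):
    """Map an agent count on a conveyor segment to an RGB color:
    green (empty) -> red (fully congested)."""
    t = min(count / max_count, 1.0) if max_count else 0.0
    r = int(255 * t)          # red grows with congestion
    g = int(255 * (1.0 - t))  # green fades with congestion
    return (r, g, 0)

print(congestion_color(0, 10))   # (0, 255, 0)  -- empty segment
print(congestion_color(10, 10))  # (255, 0, 0)  -- saturated segment
```

On each animation refresh you'd count the material agents currently on each segment and recolor its rectangle; that reproduces the density-map look without needing pedestrians or transporters at all.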
I'm trying to figure out a good way to script the NPCs in my RPG. A sample NPC interaction could go something like this:
NPC starts dialog #1 with player.
When the dialog is finished, the NPC moves to a waypoint on the map.
Once the NPC arrives at the waypoint and the player talks to him again, he starts dialog #2.
At the end of the dialog, the NPC asks a question.
If the player gives response A, the dialog ends. In this case, talking to the NPC again starts dialog #2 again.
If the player gives response B, the NPC gives an item to the player, and disappears. From now on, that same NPC will be present in a different Unity scene.
I've found plenty of examples of making a dialog tree, but I can't find a good way to handle complex situations like that. One of the most challenging problems is determining which scene the NPC is in -- and where in that scene. Depending on how far along the player is in the game, that NPC could be in any one of many different scenes, and will have different dialog and behavior.
Since Unity makes it easy to attach a script to my NPC's game object, I could of course do this all through a C# script. However, that script will get pretty big and messy for important NPCs.
The path that I've gone down so far is to create an XML file. Something like this:
<AgentAi>
<ActionGroup>
<Dialog>
<Statement>Hi!</Statement>
<Statement>Follow me.</Statement>
</Dialog>
<MoveTo>Waypoint_1</MoveTo>
<SetNpcState>NpcGreetedPlayer</SetNpcState>
</ActionGroup>
<ActionGroup>
<Conditions>
<State>NpcGreetedPlayer</State>
</Conditions>
<Dialog>
<Statement>Here, take this.</Statement>
</Dialog>
<AddItem>Dagger</AddItem>
<MoveTo>Waypoint_2</MoveTo>
</ActionGroup>
</AgentAi>
That sample would cause the NPC to greet the player and move to another spot. Then when the player talks to him again, the NPC will give the player a dagger and move to another waypoint.
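Executing that XML boils down to: on each interaction, pick the most specific `ActionGroup` whose `Conditions` are all satisfied by the NPC's current state flags, then run its actions in order. A sketch of the selection step, using the sample file above (the `pick_action_group` helper and the "most conditions wins" tie-break are my own assumptions, not an established Unity pattern):

```python
import xml.etree.ElementTree as ET

SAMPLE = """
<AgentAi>
  <ActionGroup>
    <Dialog><Statement>Hi!</Statement><Statement>Follow me.</Statement></Dialog>
    <MoveTo>Waypoint_1</MoveTo>
    <SetNpcState>NpcGreetedPlayer</SetNpcState>
  </ActionGroup>
  <ActionGroup>
    <Conditions><State>NpcGreetedPlayer</State></Conditions>
    <Dialog><Statement>Here, take this.</Statement></Dialog>
    <AddItem>Dagger</AddItem>
    <MoveTo>Waypoint_2</MoveTo>
  </ActionGroup>
</AgentAi>
"""

def pick_action_group(root, npc_states):
    """Return the ActionGroup with the most satisfied conditions,
    skipping any group whose required states are not all set."""
    best, best_count = None, -1
    for group in root.findall("ActionGroup"):
        conds = group.find("Conditions")
        required = [s.text for s in conds.findall("State")] if conds is not None else []
        if all(s in npc_states for s in required) and len(required) > best_count:
            best, best_count = group, len(required)
    return best

root = ET.fromstring(SAMPLE)
states = set()
group = pick_action_group(root, states)
print(group.find("MoveTo").text)            # Waypoint_1 -- no states set yet
states.add(group.find("SetNpcState").text)  # SetNpcState action fires
group = pick_action_group(root, states)
print(group.find("AddItem").text)           # Dagger -- condition now met
```

The nice property is that the NPC's whole history collapses into a set of state flags, which is also what you'd persist across scene loads to decide where the NPC currently is.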
The problem with the XML is that I'm worried about it growing very large for important NPCs that can be in a lot of different places depending on where the player is in the game. I'd have to keep dynamically determining which NPCs are in a scene each time I load a new scene. I'm not totally against doing it with XML like this, but I don't want to waste a bunch of time heading down this road if there's a better way of doing it.
Since this type of behavior is common in a lot of games, is there a good way of doing it in Unity without having to homebrew my own complex system?
Normal software systems would use a database once the level of complexity gets too high.
I'd set up the storyline with a numeric reference, like the pages of a book.
Then you can set up each interaction as a separate thing, with a start and finish number (not available before and not available after).
If the player goes to a higher number without interacting, the interaction is still available.
Maybe you could do this by making the XML files separate, but I'd think you still need to tie them into the storyline.
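The start/finish idea above can be sketched in a few lines (the `Interaction` class and `available` method are illustrative names I've chosen, not an existing Unity API):

```python
class Interaction:
    """An NPC interaction available only within a window of story progress,
    where the storyline is a numeric 'page' counter."""
    def __init__(self, start, finish, dialog):
        self.start, self.finish, self.dialog = start, finish, dialog

    def available(self, story_page):
        # not available before `start` and not available after `finish`
        return self.start <= story_page <= self.finish

quest_hint = Interaction(10, 25, "Meet me at the bridge.")
assert quest_hint.available(12)       # player is inside the window
assert not quest_hint.available(30)   # player progressed past it
```

Each scene load then just filters the NPC's interaction list by the current story page, which keeps the per-NPC data flat instead of deeply nested.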
Long-winded question out of the way, I'll provide a diagram of what I am going for:
The red square represents the character, the blue rectangle represents the camera, the green dot represents the center of the "stage", and the black circle is the stage itself.
What I desire is to essentially lock the player's movement around the "center" of the stage, so that anytime you move left or right you are more or less rotating around said center. However, I also want the player to be able to move forwards and backwards to/from the center as well. Keep in mind I want the camera to always stay directly behind the player. I have tried many different methods, and the latest is the following:
I took a default actor, attached a spring arm, attached a child actor to that (which gets possessed to become the playable character), attached another spring arm, and finally the camera to that. I then added the Blueprint code to the first spring arm so that it was the one being controlled by the left/right controls. However, upon hitting Play, the only thing that moves is the camera, and it can only move forwards and backwards.
I'm admittedly pretty new to Unreal Blueprints, so any help would be appreciated.
Alright, I figured it out.
Here's the setup needed if anyone else wants something similar.
For the player themselves, you'll need something like this:
The important thing is to center the root mesh on the point you want to rotate around. The spring arm's target arm length is what drives the player mesh movement, giving the illusion that you are physically moving the character. The second spring arm isn't necessary unless you want more control over the camera-to-player distance.
For the rotation Blueprint, you'll need this:
The target is whatever you named the root mesh. (Mine was called Center) Drag and drop it from the hierarchy.
For the forward/backward movement, you'll need this:
The target is what you named the spring arm. (I left mine as the default "SpringArm") Again, just drag and drop it from the hierarchy.
Adjustments in Project Settings:
Yes, my inputs are backwards from what you'd expect. I felt it was quicker to reverse the inputs instead of fixing whatever was causing the movement to be backwards in the first place. (It's probably just the sphere orientation.) Also, you'll notice I have the W and S inputs set to 5 or -5 instead of 1 or -1; the movement was too slow otherwise. I'm sure there's a fix that doesn't involve changing the input axis scale, but honestly I won't really have a reason to alter the values at any point in my project. If I ever do, I'm sure the values can be changed from within Blueprints anyway.
End result:
End Result Video
If I remember correctly, child actor components are a bit different from other components in that they are transformation-independent; that is, they do not update their transformation when their parent component moves around.
I find it a bit strange that you would separate your player actor and the camera component. Normally, the player "pawn" contains the mesh and camera components for one player.
I would suggest you do the following:
Create a player actor (e.g. a "pawn" or "character" class)
Create the following component hierarchy:
Root Scene -> Spring arm -> Skeletal or static mesh -> spring arm -> camera
Your root scene is the green center in your drawing. You can then basically use the blueprint you already have to rotate and move your player.
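Stripped of the component hierarchy, this movement scheme is just polar coordinates around the stage center: left/right input changes the angle, forward/back changes the radius (which is exactly what the spring arm's target arm length represents). A sketch of that math (the `OrbitPlayer` class is a hypothetical illustration, not Blueprint code):

```python
import math

class OrbitPlayer:
    """Player constrained to orbit a stage center, in polar coordinates.
    Left/right input changes the angle; forward/back changes the radius."""
    def __init__(self, radius, angle=0.0):
        self.radius = radius
        self.angle = angle  # radians

    def move_right(self, delta_angle):
        self.angle += delta_angle

    def move_forward(self, delta_radius):
        # moving "forward" shortens the distance to the center,
        # clamped so the player can't pass through it
        self.radius = max(0.0, self.radius - delta_radius)

    def position(self, cx=0.0, cy=0.0):
        # convert back to world coordinates around the center (cx, cy)
        return (cx + self.radius * math.cos(self.angle),
                cy + self.radius * math.sin(self.angle))

p = OrbitPlayer(radius=5.0)
p.move_right(math.pi / 2)  # quarter turn around the center
x, y = p.position()        # now roughly at (0, 5) instead of (5, 0)
```

Keeping the camera "directly behind the player" then falls out for free: the camera sits further along the same radius, which is what the second spring arm gives you.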
Youtube video link
So in this simulation, agents (represented by dots) choose one direction to move according to surrounding circumstances. Theoretically all agents make the decision and move simultaneously, but what if two dots decided to move to the same position and collide with each other? How in practice does the program solve this?
My guess is that it actually executes the calculations one by one, in order, but this seems to violate the presumed requirement of simultaneous interaction: any dot whose move is computed earlier might change another dot's surroundings, and thus alter the intended move of a later dot. On the other hand, this seems to be the only way to avoid collision problems.
Any help thanks!
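The standard resolution of the dilemma in the question is the synchronous-update (double-buffering) pattern: every agent decides from the same frozen snapshot of the world, and only then are all moves applied at once, with an explicit rule for conflicts. A minimal sketch (the `step` function and the "neither moves on conflict" rule are one common convention among several, not the only one):

```python
from collections import Counter

def step(positions, decide):
    """Synchronous update: every agent decides from the SAME snapshot,
    then all moves are applied at once. If two agents target the same
    cell, neither moves (one simple conflict rule among several)."""
    snapshot = dict(positions)                       # frozen world state
    intents = {a: decide(a, snapshot) for a in positions}
    demand = Counter(intents.values())               # agents per target cell
    return {a: (target if demand[target] == 1 else positions[a])
            for a, target in intents.items()}

# two agents both want cell (1, 0): the conflict rule keeps both in place
pos = {"A": (0, 0), "B": (2, 0)}
new = step(pos, lambda agent, world: (1, 0))
assert new == pos
```

Because decisions read only the snapshot, the order in which the program loops over agents no longer matters, which restores the "simultaneous" semantics without any actual parallelism.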
I am wondering how developers are able to create games where the player can actually see the character's hands as they are casting spells, shooting, etc. A good example of this would be Overwatch. How is this done? Are there two separate views -- one that the enemy sees, and another that the player sees, where it's just arms and hidden from the other player? Or is the camera positioned in such a way that it is actually just the character model? Thanks!
"depends", basically. But the most common way (e.g. what most FPS games do) is having a detailed model for the player (your local avatar) with properly placed camera(s - see later) so you can see your hands/feet/etc and everyone remote to you is rendered "as is" (the model and it's locomotion / animations for jumping, etc - including how the model holds e.g. a rifle while jumping, etc).
The tricky part comes when
a) your game wants to be unique/complex on this e.g. like in your example: you want to see hands while you 'cast a spell' (or blood spilling in the eyes or anything)
and/or
b) you realize it's annoying to see particular body parts (e.g. your feet, as it might block view and/or makes jumping look silly), or gun sinks into walls/doors or such
In both cases, using multiple cameras for so-called "layering" is the solution. Long story short: there's a camera that sees the rifle in (kinda) topmost Z-order, or a camera that cannot see the character model, or a camera that can see "floating hands" casting spells. The trick with these is that nothing but that particular camera can see those effects, i.e. only the player does (everyone else, as I mentioned above, sees the model doing the "standard something" associated with that particular action -- e.g. casting any spell ==> waving hands above the head or such). I hope this helps give you an idea how it works. Cheers!
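Under the hood, this layering is usually a per-camera bitmask test: each object lives on a layer, each camera has a culling mask, and the object is drawn only if the masks overlap (Unity's `Camera.cullingMask` works this way; the layer names below are my own examples):

```python
# Bitmask layers, as render engines commonly implement camera culling.
WORLD       = 1 << 0
LOCAL_ARMS  = 1 << 1  # first-person "floating hands", visible only locally
REMOTE_BODY = 1 << 2  # full character model, what everyone else sees

def visible(camera_mask, object_layer):
    """An object is drawn by a camera iff their bitmasks overlap."""
    return bool(camera_mask & object_layer)

local_camera  = WORLD | LOCAL_ARMS   # your view: world + hands, no own body
remote_camera = WORLD | REMOTE_BODY  # others' view: world + your full model

assert visible(local_camera, LOCAL_ARMS)
assert not visible(local_camera, REMOTE_BODY)
```

The same mechanism covers the "gun sinking into walls" fix: the weapon goes on its own layer, rendered by a dedicated camera drawn on top of the world pass.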