How to set up HUD widgets correctly in UE4

I am following a tutorial on Udemy on Unreal Engine-based game development. I have posted the same question on Udemy as well. I will try my best to explain the issue I am facing.
What works
I have set up:
- a GameMode named FPS_GameMode,
- a GameInstance named FPS_GameInstance,
- a HUD named FPS_HUD.
I have set the HUD in FPS_GameMode to FPS_HUD.
I have assigned the FPS_GameMode as GameMode in project settings.
Now I have created two Blueprints, MainHUD and UI_DynamicCrosshair. UI_DynamicCrosshair contains the functionality for resizing the crosshair when the character is moving (it spreads out while the character moves).
I have used this crosshair in the MainHUD (used UI_DynamicCrosshair in the MainHUD's viewport).
In my FPS_Character blueprint, I am creating the MainHUD widget and adding it to the viewport.
The crosshair widget shows up when I play the game but it does not update when my character moves.
What doesn't work
I need to call the functionality defined in my UI_DynamicCrosshair from FPS_Character, so that I can trigger it when the character moves.
For that, I tried using the MainHUD reference, which is accessible in the FPS_Character blueprint, assuming I would be able to reach UI_DynamicCrosshair through it. But this doesn't work: UI_DynamicCrosshair is not accessible via the MainHUD reference.
Can you share a checklist of steps so that I can cross-check everything I have done and figure out what I have missed?
Please let me know if you need more details.

I am assuming it's all in Blueprints, because you're not mentioning any C++ code.
Seems to me that your UI_DynamicCrosshair instance is just set as private, as you would normally be able to access it via a MainHUD reference if it were set as public.
Make sure that the Private checkbox is unchecked in your MainHUD class when your UI_DynamicCrosshair variable is selected.
A better (or at least nicer) way to achieve this, though, would be to create a public function, say SetCharacterMovement(Boolean bIsMoving), in your MainHUD, and to implement the resizing logic in it.
You can then just call this function on your MainHUD reference from your FPS_Character when you need to update the crosshair. That way your PlayerCharacter does not have to be aware of the inner logic of your HUD, which takes care of itself (and its 'children').
This principle is called separation of concerns, and it will help you design things cleanly.
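Blueprint graphs have no text form, but the shape of the idea can be sketched in plain C#. Everything below is illustrative: the class and function names are hypothetical, and in UE4 you would wire this up in Blueprints (or C++) rather than C#.

```csharp
// Illustrative only: the HUD owns its crosshair and exposes one public
// entry point, so the character never touches the crosshair directly.
public class DynamicCrosshair
{
    public void SetSpread(bool isMoving)
    {
        // resize the crosshair based on movement
    }
}

public class MainHUD
{
    private DynamicCrosshair crosshair = new DynamicCrosshair();

    // The only thing the character needs to know about.
    public void SetCharacterMovement(bool isMoving)
    {
        crosshair.SetSpread(isMoving);
    }
}

public class FPSCharacter
{
    private MainHUD hud = new MainHUD();

    private void OnMovementChanged(bool isMoving)
    {
        hud.SetCharacterMovement(isMoving);
    }
}
```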

Related

Unity: Guidelines to add script to Prefab

I am new to Unity and I'd like to know what's the best way to add a script to a Prefab.
I currently see two ways of doing so:
1) Use the Unity interface to add it to an existing prefab
2) Call AddComponent from the code that instantiates the prefab
I try to use 2) everywhere, as I use git to version control my code and I think conflicts may be easier to resolve in code (compared to .prefab files, for instance). But I may be wrong.
Is there any Unity good practice regarding this?
Those are indeed the only two ways to add a component to a GameObject.
The primary way you are expected to add components to GameObjects is the Unity interface. Being able to set up logic and data through the interface rather than code is one of the big benefits of using such a game engine. It gives you flexibility and it smooths the process for quite a number of operations.
AddComponent leans more toward changing the behavior of an existing GameObject, or creating a GameObject from scratch, both at runtime. Most people don't make much use of it.
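As a minimal Unity C# sketch of option 2) (the EnemySpawner class and its prefab field are hypothetical names for illustration):

```csharp
using UnityEngine;

// Hypothetical example of option 2): instantiate a prefab, then attach
// an extra component in code at runtime.
public class EnemySpawner : MonoBehaviour
{
    // The prefab reference itself is still wired up in the Inspector.
    public GameObject enemyPrefab;

    void Start()
    {
        GameObject enemy = Instantiate(enemyPrefab, Vector3.zero, Quaternion.identity);

        // AddComponent attaches a behaviour the prefab did not ship with.
        Rigidbody body = enemy.AddComponent<Rigidbody>();
        body.mass = 2f;
    }
}
```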
Git handles .prefab merging just fine. These are basically just text files with tags and structure, so they can be interpreted by the engine while staying readable for a user (think of XML files).

Unity Editor: ExecuteInEditMode vs Editor scripts

What are the use cases for ExecuteInEditMode, and what are those for Editor scripts? When should you use one instead of the other?
ExecuteInEditMode - This is an attribute for scripts, denoted as [ExecuteInEditMode]. By default, MonoBehaviours are only executed in play mode. By adding this attribute, any instance of the MonoBehaviour will have its callback functions executed while the Editor is not in play mode. Use cases for this include, but are not limited to:
Position constraining – your script may help you position your game objects by constraining their positions using a custom algorithm.
Connecting objects – if ComponentA of ObjectA requires an instance of ComponentB that is somewhere in the scene, and you are able to find it using your code, then you can connect them automatically instead of making the designer do it manually later.
In-Editor warnings and errors – Unity will generate a standard set of errors if something is wrong with your code or the initial Unity components setup. Still, you won't receive any warning if you create a room without doors or windows by mistake. A custom script can do this, so the designer is warned while editing.
Management scripts – These are scripts that keep things in order. Let's say your scene has 5 cameras, but all of them should share the same FOV parameters. Instead of changing those manually (yes, you can select all of them, but there's no guarantee that someone else will do the same), you can make your management script adjust all the values from one master value. As a result, the individual camera parameters are locked and can be changed only through the management script; a sketch of this idea follows below.
The Knights of Unity have released a tutorial on some sample functionality for ExecuteInEditMode that expands on this.
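Here is a minimal sketch of the management-script use case, assuming a hypothetical CameraFovManager component; the attribute is the only Unity-specific magic involved:

```csharp
using UnityEngine;

// Hypothetical management script for the last use case above: one master
// FOV value that keeps every camera in the scene in sync, in and out of
// play mode.
[ExecuteInEditMode]
public class CameraFovManager : MonoBehaviour
{
    [Range(1f, 179f)]
    public float masterFov = 60f;

    void Update()
    {
        // With [ExecuteInEditMode], Update also runs in the Editor
        // whenever something in the scene changes.
        foreach (Camera cam in FindObjectsOfType<Camera>())
        {
            cam.fieldOfView = masterFov;
        }
    }
}
```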
Editor Scripts - This is a collection of scripts that extend the Editor class, the base class from which custom Editors derive. This can be used to create your own custom inspector GUIs and editors for your objects. For more information, check out this video on Editor Scripting. Since Editor Scripts are programmed, you may also want to view the Scripting API.
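As a small hedged sketch of an Editor script (the RoomMarker component and its inspector are hypothetical, echoing the room-without-doors warning idea above):

```csharp
using UnityEngine;
using UnityEditor;

// Hypothetical component/inspector pair. The RoomMarker class would live
// in a runtime script file; the editor class must sit in an "Editor"
// folder so it is stripped from builds.
public class RoomMarker : MonoBehaviour
{
    public int doorCount;
}

[CustomEditor(typeof(RoomMarker))]
public class RoomMarkerEditor : Editor
{
    public override void OnInspectorGUI()
    {
        DrawDefaultInspector();

        // Surface a design-time warning directly in the Inspector.
        var room = (RoomMarker)target;
        if (room.doorCount == 0)
        {
            EditorGUILayout.HelpBox("This room has no doors!", MessageType.Warning);
        }
    }
}
```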
With an Editor script you customize how a MonoBehaviour appears in the Inspector.
Most ExecuteInEditMode use cases involve affecting the Scene view. Take Mesh Renderer as an example (I do not know whether Mesh Renderer actually uses ExecuteInEditMode, but it could): it renders the mesh in game, but it also renders that mesh in the Scene view.
Some other use cases: validation, communicating with other components, modifying other components or GameObjects; basically, most things you can do in-game you can also do in-editor.

Change a mecanim motion reference for a synced layer via Editor Script?

Layer1:
Four state machines, with X states in each. Each of these machines has states with motion references set, some of which are blend trees.
Layer2 (Synced with layer1):
In Unity's editor I can navigate the same tree as mentioned for Layer1, and I can change the references to the animation clips just fine.
The question is: how do I do the equivalent in a script? There seems to be no access to these references; in which case, how is the Unity editor storing this?
Screen shot of what I am referring to:
https://s3.amazonaws.com/uploads.hipchat.com/20686/98498/cEhYZ4owenI8ebJ/Test.png
Backend representation.
This is fairly common across most of Unity's classes, with a few notable exceptions, Unity's UI for example; its repo is open source if you're curious.
If you've ever seen any of Unity's decompiled code, you'll notice that a lot of methods delegate to the actual method, written in C++, on the backend. The same setup applies to a lot of their variables too.
In the animator controller's case, you can access the animation clips at runtime, as AnimatorController inherits from RuntimeAnimatorController. You are, however, unable to swap in new ones; for that you'd need to swap in another controller instead.
You can swap animations in the Editor though; note, ONLY in the Editor, not in a shipped build. The UnityEditor.Animations namespace gives you the ability to build up animator controllers from code. If you look at the decompiled version of these classes, you'll notice that a lot of these properties delegate to the C++ side to do the actual animator controller building. You can then use that built version in code.
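As a hedged sketch of what that can look like for a synced layer: AnimatorControllerLayer.SetOverrideMotion is the editor-side API for per-state motion overrides on synced layers; the asset paths, layer index, and state name below are placeholders for your own setup.

```csharp
using UnityEngine;
using UnityEditor;
using UnityEditor.Animations;

// Hedged sketch: paths, index, and state name are placeholders.
public static class SyncedLayerMotionSwapper
{
    [MenuItem("Tools/Swap Synced Layer Motion")]
    static void Swap()
    {
        var controller = AssetDatabase.LoadAssetAtPath<AnimatorController>(
            "Assets/MyController.controller");
        var newClip = AssetDatabase.LoadAssetAtPath<AnimationClip>(
            "Assets/NewClip.anim");

        // The layers property returns a copy, so edit it and assign it back.
        AnimatorControllerLayer[] layers = controller.layers;
        AnimatorControllerLayer synced = layers[1]; // the layer synced with layer 0

        // A synced layer stores per-state motion overrides; the states
        // themselves live on the source layer it is synced with. (Recurse
        // into stateMachine.stateMachines for nested state machines.)
        foreach (ChildAnimatorState child in
                 layers[synced.syncedLayerIndex].stateMachine.states)
        {
            if (child.state.name == "Run")
            {
                synced.SetOverrideMotion(child.state, newClip);
            }
        }

        controller.layers = layers;
        EditorUtility.SetDirty(controller);
    }
}
```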
There is a potential way to build your own animator controller at runtime. Note though, this is your own version; to the best of my knowledge Unity doesn't let you build its animator controllers at runtime. You'll be after the UnityEngine.Experimental.Director namespace. This gives you access to custom playables, which can be used to build up animation chains, or maybe even an animator controller, at runtime. Bear in mind though, this is experimental and may change in future Unity releases.
So your answer is: backend representation. The controller's runtime logic is written in C++, with a builder interface accessible in the Editor. The backend data is serialised when you hit the save button, with the frontend providing a way to reference the backend from C#. At runtime, you use the Animator to control certain facets of the backend.
Some links:
Animations: https://docs.unity3d.com/ScriptReference/Animations.AnimatorController.html
Director stuff:
https://docs.unity3d.com/ScriptReference/Experimental.Director.AnimatorControllerPlayable.html
Hopefully this answers your question.

How to change player states?

I have a "design pattern" problem. I want to enable for a player to change his state. Lets say I have three states or super powers if you will. Each of them have different abilities. If this abilities were just based on some attributes (lets say mass or speed) I could just change that on the player and everything would work fine.
But what if there are some other functionalities changed. Lets say if the player is in the state 2 and he jumps the animation is different and some other thing changes. Now I know I could make this with a lot of checking in update loop for states but I want to make this elegant.
My idea until now is to make generalPlayer object and the each special player inherits from it and adds special abilities, and when player change state I would kind of change instance of player to that instance.
Is there any better way? I am using c# as scripting language
The problem I have with that approach is that you are using multiple different objects for one player. There could be some mess involved with passing data every time the player changes states, which would be better avoided. Since C# has delegates, which, for our purposes, behave much like first-class functions, it is possible to change the behavior of your player by swapping out certain routines and field values on every change of state. This allows you to keep your data in one object and change behavior on the fly without relying solely on conditionals. There is a pithy phrase I have heard many times: an object encapsulates state and behavior. In C#, you can change state by manipulating field values, and change behavior by relying on delegates. That should cover your problem.
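A minimal sketch of that delegate idea, with a hypothetical Player class (the states and key bindings are made up for illustration):

```csharp
using System;
using UnityEngine;

// One object holds all the data; state changes swap out behaviour by
// reassigning delegates and field values.
public class Player : MonoBehaviour
{
    Action jump;   // behaviour slot: what "jump" currently means
    float speed;   // plain data that changes with the power

    void Start()
    {
        EnterNormalState();
    }

    void EnterNormalState()
    {
        speed = 5f;
        jump = () => Debug.Log("normal jump animation");
    }

    void EnterPowerState()
    {
        speed = 8f;
        jump = () => Debug.Log("power jump animation and extra effects");
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Alpha1)) EnterNormalState();
        if (Input.GetKeyDown(KeyCode.Alpha2)) EnterPowerState();
        if (Input.GetKeyDown(KeyCode.Space)) jump();

        transform.Translate(Vector3.right * speed * Time.deltaTime);
    }
}
```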
I have found the best-suited solution thanks to a friend. What I used was a Strategy pattern: I put different instances behind the interface I use to control the player. It works like a charm. Thanks all for the help.
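For completeness, a hypothetical sketch of that Strategy-pattern setup (all names below are illustrative):

```csharp
using UnityEngine;

// The controller keeps one reference to an interface and swaps
// implementations when the state changes.
public interface IPlayerState
{
    void Jump(PlayerController player);
}

public class NormalState : IPlayerState
{
    public void Jump(PlayerController player)
    {
        player.PlayAnimation("Jump");
    }
}

public class PowerState : IPlayerState
{
    public void Jump(PlayerController player)
    {
        player.PlayAnimation("PowerJump");
        // extra state-specific effects go here
    }
}

public class PlayerController : MonoBehaviour
{
    IPlayerState state = new NormalState();

    public void SetState(IPlayerState next) { state = next; }

    // Stand-in for triggering the real animation system.
    public void PlayAnimation(string animationName) { Debug.Log(animationName); }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space)) state.Jump(this);
    }
}
```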

How to organize multiple XNA game components?

I am programming in XNA and need help on organizing my classes and code:
I have a button class and a button manager.
I also have a scene class and a scene manager.
The button manager handles the different buttons that would be drawn on different screens since almost all screens would have a button.
The scene manager does the same thing except instead of handling buttons it handles background scene objects that just need to be drawn.
Both managers depend on the current game state to determine which buttons or scene objects to draw.
How should I organize my code so that both managers know what the current game state is? (Both managers are instantiated inside of the main game class and both managers are game components)
The keyword you are looking for that describes your problem is "game state management". The XNA website has a few good articles on it; be sure to read this one: http://create.msdn.com/en-US/education/catalog/sample/game_state_management
Now, to answer your question more directly. Say you have two different states:
-Menu
-Game
First, create a class called State with methods to set up, draw, and update the correct UI elements.
Now create a class MenuState deriving from State, and override the setup, draw, and update methods. In the setup method, put all the code to generate the correct menu (like Scene.MenuItems.Clear(); Scene.MenuItems.Add(new Label(..)); etc.). Do the same for the update and draw methods (update and draw each control, capture events from clicks on buttons, etc.).
Do the same for GameState.
Now, in your Scene code, add a field "State state". When the user presses Escape, set state to (a new) MenuState; when the user returns to the game, set state to (a new) GameState. In the Scene's update and draw methods, call state.Update(..) and state.Draw(..). Because you've overridden these methods in GameState and MenuState, the correct actions will be performed. A minimal sketch follows below.
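The class and method names in this sketch follow the description above, not the official XNA sample:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Each State knows how to set itself up, update itself, and draw itself.
public abstract class State
{
    public abstract void Setup();
    public abstract void Update(GameTime gameTime);
    public abstract void Draw(SpriteBatch spriteBatch);
}

public class MenuState : State
{
    public override void Setup() { /* build the menu controls */ }
    public override void Update(GameTime gameTime) { /* handle menu input */ }
    public override void Draw(SpriteBatch spriteBatch) { /* draw the menu */ }
}

public class GameState : State
{
    public override void Setup() { /* load the level */ }
    public override void Update(GameTime gameTime) { /* run gameplay */ }
    public override void Draw(SpriteBatch spriteBatch) { /* draw the world */ }
}

public class Scene
{
    State state;

    public Scene() { SwitchTo(new MenuState()); }

    // Called on Escape (to the menu) or on resume (back to the game).
    public void SwitchTo(State next) { state = next; state.Setup(); }

    // The scene just delegates; the active State decides what happens.
    public void Update(GameTime gameTime) { state.Update(gameTime); }
    public void Draw(SpriteBatch spriteBatch) { state.Draw(spriteBatch); }
}
```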
This approach solves the issue of having a gazillion controls that do checks like "if(scene.StateEnum == StateEnum.SomeState){DoThis();}". You will find this way easier to manage.
Also think about building other conceptual classes: the MenuState could have a substate, like an options menu; maybe think up a Form class, etc.
Since GameComponent has a Game property, you can use it to cast to your Game class, or alternatively to get a Service exposing the game state.
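A short hypothetical sketch of both options (MyGame and IGameStateService are made-up names):

```csharp
using Microsoft.Xna.Framework;

// A game that exposes its state both directly and through a registered service.
public interface IGameStateService
{
    string CurrentState { get; }
}

public class MyGame : Game, IGameStateService
{
    public string CurrentState { get; set; }

    public MyGame()
    {
        CurrentState = "Menu";
        // Register once so components can resolve the state by interface.
        Services.AddService(typeof(IGameStateService), this);
        Components.Add(new ButtonManager(this));
    }
}

public class ButtonManager : GameComponent
{
    public ButtonManager(Game game) : base(game) { }

    public override void Update(GameTime gameTime)
    {
        // Option 1: cast the Game property to the concrete game class.
        string viaCast = ((MyGame)Game).CurrentState;

        // Option 2: resolve the service registered in the constructor.
        var service = (IGameStateService)Game.Services.GetService(typeof(IGameStateService));
        string viaService = service.CurrentState;

        base.Update(gameTime);
    }
}
```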