What are the use cases for ExecuteInEditMode, and what are the use cases for Editor scripts? When should you use one instead of the other?
ExecuteInEditMode - This is an attribute for scripts, denoted as [ExecuteInEditMode]. By default, MonoBehaviours are only executed in Play mode. By adding this attribute, any instance of the MonoBehaviour will have its callback functions executed while the Editor is not in Play mode. Use cases for this include, but are not limited to:
Position constraining – your script may help you position your game objects by constraining object positions using a custom algorithm.
Connecting objects – if ComponentA on ObjectA requires an instance of ComponentB that is somewhere in the scene, and you are able to find it in code, then you can connect them automatically instead of making the designer do it manually later.
In-Editor warnings and errors – Unity will generate a standard set of errors if something is wrong with your code or the initial Unity component setup. Still, you won't receive any warning if you create a room without doors or windows by mistake. A custom script can check for this, so the designer is warned while editing.
Management scripts – These are scripts that keep things in order. Let's say that your scene has 5 cameras, but all of these cameras should have the same FOV parameters. Instead of changing them manually (yes, you can select all of them, but there's no guarantee that someone else will do the same) you can make your management script adjust all of the values from one master value. As a result, the individual camera parameters are locked and can only be changed through the management script.
The Knights of Unity have released a tutorial on some sample functionality for ExecuteInEditMode that expands on this.
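For illustration, here is a minimal sketch of such a script, assuming a made-up PositionConstrainer component that keeps an object above a minimum height (the class name and values are placeholders, not something from the question):

```csharp
using UnityEngine;

// Runs its callbacks in Edit mode as well as in Play mode.
[ExecuteInEditMode]
public class PositionConstrainer : MonoBehaviour
{
    // Hypothetical constraint: keep this object at or above a minimum height.
    public float minY = 0f;

    private void Update()
    {
        // In Edit mode, Update runs when something in the scene changes,
        // which is enough to enforce the constraint while a designer moves the object.
        if (transform.position.y < minY)
        {
            transform.position = new Vector3(transform.position.x, minY, transform.position.z);
        }
    }
}
```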
Editor Scripts - This is a collection of scripts that extend the Editor class, the base class to derive custom Editors from. This can be used to create your own custom inspector GUIs and editors for your objects. For more information, check out this video on Editor Scripting. Since Editor scripts are written in code, you may also want to consult the Scripting API.
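As a rough sketch of what a custom inspector looks like (MyComponent and its speed field are placeholders; in a real project the two classes go in separate files, with the editor class under an Editor folder):

```csharp
using UnityEngine;
using UnityEditor;

// Placeholder runtime component whose inspector we want to customize.
public class MyComponent : MonoBehaviour
{
    public float speed = 1f;
}

// Editor scripts like this one are Editor-only and are stripped from builds.
[CustomEditor(typeof(MyComponent))]
public class MyComponentEditor : Editor
{
    public override void OnInspectorGUI()
    {
        // Draw the default fields first, then add custom controls.
        DrawDefaultInspector();

        MyComponent component = (MyComponent)target;
        if (GUILayout.Button("Reset speed"))
        {
            component.speed = 1f;
        }
    }
}
```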
With Editor you customize how a MonoBehaviour appears in the Inspector.
For ExecuteInEditMode, most use cases involve affecting the Scene view. Take Mesh Renderer as an example (I do not know whether Mesh Renderer actually uses ExecuteInEditMode, but it could): it renders the mesh in the game, but it also renders that mesh in the Scene view.
Some other use cases: validation, communication with other components, modification of other components, modification of GameObjects; basically, you can do most of the things in the Editor that you can do in-game.
I am following a tutorial on Udemy on Unreal Engine based game development. I have posted the same question on Udemy as well. I will try my best to explain the issue I am facing.
What works
I have set up a
GameMode named FPS_GameMode,
GameInstance named FPS_GameInstance,
HUD named FPS_HUD
I have set the HUD in FPS_GameMode to FPS_HUD.
I have assigned the FPS_GameMode as GameMode in project settings.
Now I have created 2 blueprints, MainHUD and UI_DynamicCrosshair. UI_DynamicCrosshair contains the functionality for resizing the crosshair when the character is moving (it spreads out if the character is moving).
I have used this crosshair in the MainHUD (used UI_DynamicCrosshair in the MainHUD's viewport).
In my FPS_Character blueprint, I am creating the MainHUD widget and adding it to the viewport.
The crosshair widget shows up when I play the game but it does not update when my character moves.
What doesn't work
I need to call the functionality defined in my UI_DynamicCrosshair, in FPS_Character so that I can trigger it when the character moves.
For that, I tried using the MainHUD reference, as it is accessible in the FPS_Character blueprint, assuming that I would be able to access the UI_DynamicCrosshair via the MainHUD reference. But this doesn't work: the UI_DynamicCrosshair is not accessible via the MainHUD reference.
Can you share a checklist/list-of-steps so that I can crosscheck everything I have done and figure out what I have missed?
Please let me know if you need more details.
I am assuming it's all in Blueprints, because you don't mention any C++ code.
It seems to me that your UI_DynamicCrosshair instance is just set as private; you would normally be able to access it via a MainHUD reference if it were set as public.
Make sure that the Private checkbox is unchecked in your MainHUD class when your UI_DynamicCrosshair variable is selected.
A better (or nicer) way to achieve this, though, would be to create a public function, let's say SetCharacterMovement(Boolean bIsMoving), in your MainHUD, and to implement the logic that does the resizing in it.
You can then just call this function on your MainHUD reference from your FPS_Character when you need to update the crosshair. That way your PlayerCharacter does not have to be aware of the inner logic of your HUD, which takes care of itself (and its 'children').
This is a principle called separation of concerns, and it will help you design things in a clean way.
I am making a strategy game, and I need a tool that places objects on top of the terrain while I am dragging them in the Unity Editor as I work on level design.
Basically I want to get a result like the one shown here:
https://www.youtube.com/watch?v=YI6F1x4pzpg
but I need it to work before I hit the Play button in Unity Editor.
Here is a tutorial
https://www.youtube.com/watch?v=gLtjPxQxJPk
where the author made a tool that snaps the object to the terrain height when a key is pressed. I need the same thing to happen automatically whenever I place an object over my terrain, and I want my tool to adjust the Y position of the object automatically even while I am dragging it inside the editor.
Also just to clarify: I don't need grid snapping and I don't need this functionality during the gameplay. I just need to have a tool for my level design work.
Please give me a clue where to start with it.
Thanks!
There is an attribute you can apply to classes so that they already call their regular events in Edit mode: https://docs.unity3d.com/ScriptReference/ExecuteInEditMode.html
A trivial way, then, would be to apply this to a special class/object that regularly "finds" all objects in the GameObject hierarchy, filters that list down to the ones you want to snap, and enforces their Y position.
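A minimal sketch of that idea, assuming the objects to snap are marked with a hypothetical "Snappable" tag and the scene contains a Terrain:

```csharp
using UnityEngine;

// Runs in Edit mode so objects snap while you drag them in the Scene view.
[ExecuteInEditMode]
public class TerrainSnapper : MonoBehaviour
{
    // Tag used to mark the objects that should follow the terrain height (assumption).
    public string snapTag = "Snappable";

    private void Update()
    {
        // Skip during gameplay; this is purely a level-design aid.
        if (Application.isPlaying)
        {
            return;
        }

        Terrain terrain = Terrain.activeTerrain;
        if (terrain == null)
        {
            return;
        }

        foreach (GameObject go in GameObject.FindGameObjectsWithTag(snapTag))
        {
            Vector3 position = go.transform.position;
            // SampleHeight gives the terrain height at a world position, relative to the terrain.
            position.y = terrain.SampleHeight(position) + terrain.transform.position.y;
            go.transform.position = position;
        }
    }
}
```

In Edit mode, Update only runs when something in the scene changes, which includes dragging an object, so this is enough for level-design work.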
I am new to Unity and I'd like to know what's the best way to add a script to a Prefab.
I currently see two ways of doing so:
1) Use the Unity interface to add it to an existing prefab
2) Use AddComponent in the code that instantiates the prefab
I try to use option 2) everywhere, as I am using git to version control my code and I think conflicts may be easier to resolve in code (compared to inside .prefab files, for instance). But I may be wrong.
Is there any Unity good practice regarding this?
Those are indeed the only two ways to add a component to a GameObject.
The primary way you are expected to add components to GameObjects is the Unity interface. Being able to set up logic and data through the interface rather than code is one of the big benefits of using such a game engine. It gives you flexibility and it smooths the process for quite a number of operations.
AddComponent leans more toward adding a component to change the behavior of an existing GameObject, or toward creating a GameObject from scratch, both at runtime. Most people don't make much use of it.
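For completeness, a small sketch of that runtime route (the prefab field and the chosen component are placeholders):

```csharp
using UnityEngine;

public class Spawner : MonoBehaviour
{
    // Prefab assigned through the Unity interface (placeholder reference).
    public GameObject enemyPrefab;

    private void Start()
    {
        // Instantiate the prefab, then attach an extra component from code at runtime.
        GameObject enemy = Instantiate(enemyPrefab);
        Rigidbody body = enemy.AddComponent<Rigidbody>();
        body.mass = 2f;
    }
}
```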
Git handles .prefab merging just fine. These are basically just text files with tags and structure, so they can be interpreted by the engine and remain readable to a user (think of XML files).
I am a bit of a novice with the Unity Engine and Mixed Reality App development so please bear with me.
I have been working with the Microsoft Mixed Reality Toolkit for Unity to try and animate a game object and move it to the side. A simple action, very similar to an example scene provided by Microsoft with the toolkit called "InteractableObject" (Information links provided below)
Interactable Object - Mixed Reality (Microsoft Docs)
Mixed Reality Toolkit-Unity Interactable Objects and Receivers (Github)
This example scene in Unity has multiple objects to be used as "buttons". With the Mixed Reality Toolkit, any object that you want the user to interact with to perform some action when selected is considered a button, at least according to the documentation I have actually been able to find on the subject. Below is a series of screenshots depicting the inspector panels for my GameObject and the container for my object:
GameObject Inspector Panel
GameObject Container Inspector Panel (Part 1)
GameObject Container Inspector Panel (Part 2)
I am trying to make a single game object move to the side when I place the standard cursor on it. The same action is done with a balloon object in the example scene I mentioned. I have created the animator and the state machine the same way they did in their example, and set up my game object in an almost identical format. The only real difference is that they created a balloon object themselves, whereas I am using a different set of custom models from my company.
When I attempt to play the app in the Unity Editor, the state does not change when I place the cursor on the object. I can force the state to change using the editor and the required animation plays, but it will not change the state on its own. I configured my state machine the same as the Microsoft example and set up my state variable the same way as well. It should move from an "Observation" state to a "Targeted" or "ObservationTargeted" state when the cursor moves onto the object. A screenshot of the GameObject state machine and the inspector panel of the specific transition in question are provided below:
GameObject Animator State Machine Setup
Observation to ObservationTargeted Transition Inspector Panel
I went through and verified that all components added by the Mixed Reality Toolkit are the same, and they are. This includes the DefaultCursor, InputManager, MixedRealityCameraParent and Directional Light. I also checked that all the scripts are coded the same, and they are. I am running out of places to look. I attached the Visual Studio debugger to the project in Unity and verified that it just isn't changing the state on its own, but I cannot figure out why. I believe the problem has something to do with the setup of the transition, but I haven't been able to find the issue. All of the other mentioned components are provided by Microsoft and are not changed by me, nor are they changed in the sample scene.
If anyone else has had a similar problem or may know where I can look to find the problem, please let me know. I haven't even built the project into a UWP application yet.
I know it's been a few months, but are you still looking for the solution?
With the newest version of the Mixed Reality Toolkit you can make any GameObject act as a button. Simply read this documentation. I have some cubes as buttons in my Unity project, and the only extra component I added to make them work was Interactable, which comes from the Mixed Reality Toolkit.
If you want to trigger some animation when you place the cursor on the object (or look at it, if you're going to use it with HoloLens), then you can add that in the Interactable object by adding a new event (for example, an OnFocus() event).
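If you prefer to drive your own Animator from focus events instead, a rough sketch using MRTK's focus handler interface could look like the following; the "Targeted" parameter name is an assumption based on your state names, and the object also needs a collider so the focus provider can hit it:

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Attach next to the Animator on the object the cursor focuses.
public class FocusAnimationTrigger : MonoBehaviour, IMixedRealityFocusHandler
{
    private Animator animator;

    private void Awake()
    {
        animator = GetComponent<Animator>();
    }

    public void OnFocusEnter(FocusEventData eventData)
    {
        // "Targeted" is a placeholder for whatever parameter drives your transition.
        animator.SetBool("Targeted", true);
    }

    public void OnFocusExit(FocusEventData eventData)
    {
        animator.SetBool("Targeted", false);
    }
}
```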
Hope this helps in any way.
Layer1:
4 state machines, each containing X states. Each of these state machines has states with motion references set; some are blend trees.
Layer2 (Synced with layer1):
In Unity's Editor I can navigate the same tree as described for Layer1, and I can change the references to the animation clips just fine.
The question is: how do I do the equivalent in a script? There seems to be no access to these references; in which case, how is the Unity Editor storing them?
Screen shot of what I am referring to:
https://s3.amazonaws.com/uploads.hipchat.com/20686/98498/cEhYZ4owenI8ebJ/Test.png
Backend representation.
This is fairly common across most of Unity's classes, with a few notable exceptions; Unity's UI, for example, is open source, and the repo is available if you're curious.
If you've ever seen any of Unity's decompiled code, you'll notice that a lot of methods delegate to the actual method, written in C++, on the backend. The same setup applies to a lot of their variables too.
In the animator controller's case, you can access the animation clips at runtime, as AnimatorController inherits from RuntimeAnimatorController. You are, however, unable to swap in new clips; for that you'd need another controller to swap in instead.
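One concrete form of "another controller to swap in" is an AnimatorOverrideController, which wraps the existing controller and lets you replace clips by their original name. A minimal sketch, with placeholder clip names:

```csharp
using UnityEngine;

public class ClipSwapper : MonoBehaviour
{
    // Replacement clip assigned in the Inspector (placeholder).
    public AnimationClip newIdleClip;

    private void Start()
    {
        Animator animator = GetComponent<Animator>();

        // Wrap the current controller and override a clip by its original name.
        var overrides = new AnimatorOverrideController(animator.runtimeAnimatorController);
        overrides["Idle"] = newIdleClip; // "Idle" is a placeholder clip name.
        animator.runtimeAnimatorController = overrides;
    }
}
```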
You can swap animations in the Editor, though; note this works ONLY in the Editor, not in a built and shipped player. The UnityEditor.Animations namespace gives you the ability to build up animator controllers from code. If you look at the decompiled version of these classes, you'll notice that a lot of these properties delegate to the C++ side to do the actual animator controller building. You can then use that built-up controller in code.
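As a sketch of that Editor-only route (the asset paths and the menu item are placeholders), the motion references the question asks about are reachable through UnityEditor.Animations.AnimatorController:

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEditor.Animations;
using UnityEngine;

public static class AnimatorClipReplacer
{
    [MenuItem("Tools/Replace State Motions")]
    private static void ReplaceStateMotions()
    {
        // Placeholder asset paths.
        var controller = AssetDatabase.LoadAssetAtPath<AnimatorController>("Assets/MyController.controller");
        var newClip = AssetDatabase.LoadAssetAtPath<AnimationClip>("Assets/MyClip.anim");

        foreach (AnimatorControllerLayer layer in controller.layers)
        {
            ReplaceMotions(layer.stateMachine, newClip);
        }

        EditorUtility.SetDirty(controller);
        AssetDatabase.SaveAssets();
    }

    // Walk a state machine and its nested state machines, swapping each state's motion.
    private static void ReplaceMotions(AnimatorStateMachine stateMachine, Motion newMotion)
    {
        // Guard against layers without their own state machine (e.g. synced layers).
        if (stateMachine == null)
        {
            return;
        }

        foreach (ChildAnimatorState childState in stateMachine.states)
        {
            childState.state.motion = newMotion;
        }
        foreach (ChildAnimatorStateMachine child in stateMachine.stateMachines)
        {
            ReplaceMotions(child.stateMachine, newMotion);
        }
    }
}
#endif
```

For the synced layer, AnimatorControllerLayer also exposes SetOverrideMotion, which, if I recall correctly, is how the per-layer clip overrides you edit in the Animator window are set.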
There is a potential way to build your own animator controller at runtime. Note, though, that this is your own version; to the best of my knowledge, Unity doesn't let you build animator controllers at runtime. You'll be after the UnityEngine.Experimental.Director namespace. This gives you access to custom playables, which can be used to build up animation chains, or maybe even an animator controller, at runtime. Bear in mind, though, that this is experimental and may change in future Unity releases.
So your answer is: backend representation. The controller's runtime logic is written in C++, with a builder interface accessible in the Editor. The backend data is serialised when you hit the save button, with the frontend providing a way to reference the backend from C#. At runtime, you use the Animator to control certain facets of the backend.
Some links:
Animations: https://docs.unity3d.com/ScriptReference/Animations.AnimatorController.html
Director stuff:
https://docs.unity3d.com/ScriptReference/Experimental.Director.AnimatorControllerPlayable.html
Hopefully this answers your question.