I have an object in Unity, and this object has a destructor as well as an Awake method:
...
private void Awake()
{
    Debug.Log("AWAKE");
}

~UnityHumanObject()
{
    Debug.Log("DESTRUCTOR");
    if (stream != null)
    {
        stream_release(stream);
    }
}
...
When I click the Run button, according to the log the destructor prints its message 3 times, and only after that do I see the Awake log message... Next, if I click stop (the Run button again), I don't see the destructor get called at all.
So my question is twofold: why do I get 3 calls to the destructor first of all when I click the Run button, and why don't I actually get a destructor call when I stop Unity?
You may wish to implement a receiver for OnDestroy() instead.
See https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnDestroy.html
I don't know the full ins and outs of how the Unity engine works, but I can tell you that things deriving from UnityEngine.Object (which includes MonoBehaviours) are a hybrid type that is part managed code and part unmanaged code. The Editor is also an arcane beast in that it runs your scene as you're editing it, but restricts certain messages/calls until you're in "play" mode (or you opt in with the ExecuteInEditMode attribute).
With all of that, it becomes practically impossible to manage your code using low-level constructors and destructors if you're extending any of the Unity classes. Simply put: don't do it, unless the Unity-specific functionality doesn't support what you need. In your case, closing a stream, an OnDestroy() function should be perfectly sufficient.
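For example, a minimal sketch of the OnDestroy() approach, assuming the stream field and stream_release function from your snippet (their actual types are a guess here):

using UnityEngine;

public class UnityHumanObject : MonoBehaviour
{
    private System.IntPtr stream; // native handle; actual type assumed

    private void Awake()
    {
        Debug.Log("AWAKE");
    }

    // Unity calls OnDestroy on the main thread when the object is destroyed
    // or when play mode stops, so it is a reliable place to release
    // unmanaged resources, unlike a finalizer.
    private void OnDestroy()
    {
        Debug.Log("DESTROY");
        if (stream != System.IntPtr.Zero)
        {
            stream_release(stream);
        }
    }

    // Stand-in for the native plugin call from the question.
    private static void stream_release(System.IntPtr s) { /* native release */ }
}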
Unity has a component called Button as part of its UI system, to which you can subscribe on-click events through the Inspector, which is incredibly useful.
However, as projects get larger, I run into trouble in several situations:
events subscribed this way in the inspector are not rearrangeable, which makes buttons that have lots of events difficult to manage
changing the contents of the scripts used for events can cause the button to no longer recognize a function that was used for an event, which means you have to re-reference it
if anything happens to the GameObject or prefab that stores the Button component, such as it getting corrupted, then all the events serialized onto the button are wiped and you need to re-reference all of them
the above points make debugging very difficult
What are some ways I can work around the problems I've listed above?
Add the event listeners in code instead:
public Button playButton; // set button in inspector

public void Start()
{
    playButton.onClick.AddListener(() =>
    {
        transform.position = point1;
        // do something..
    });
}
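If you use a named method instead of a lambda, you can also unsubscribe when the object is destroyed, which avoids stale listeners. A small sketch extending the answer above (the class name is illustrative; point1 is kept from the original snippet):

using UnityEngine;
using UnityEngine.UI;

public class PlayButtonHandler : MonoBehaviour
{
    public Button playButton; // set button in inspector
    public Vector3 point1;

    private void Start()
    {
        playButton.onClick.AddListener(OnPlayClicked);
    }

    private void OnDestroy()
    {
        // A lambda cannot be removed individually, so use a named
        // method if you ever need to unsubscribe.
        playButton.onClick.RemoveListener(OnPlayClicked);
    }

    private void OnPlayClicked()
    {
        transform.position = point1;
        // do something..
    }
}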
As the title says, I have an issue loading a scene while in play mode in the editor.
The workflow of my game is as follows:
Initializing (works fine)
There is an empty scene that creates some global game objects that will exist during the entire runtime.
MainMenu (works fine)
After the Initializing is done it loads the MainMenu scene. I can interact with the scene and everything is nice.
Connect to game server (works fine)
In the main menu I have an option to connect to a game server (a dedicated server application I created myself).
Establishing the connection and sending the login works fine.
Character selection (not working)
After logging in on the server, I get the response to select a character. (This works as expected.)
Then I handle this response by opening the character selection scene.
And here is the issue: in play mode inside the editor, the handler method is executed (verified by debug logs) but the scene is not actually loaded.
When I build the game, run the created .exe, and follow the exact same steps, the character selection scene is loaded and shown as expected.
I searched the documentation and the web but did not find any similar issues (maybe I still missed something).
So my question is as follows:
How do I get the scene to also load in editor play mode? My approach doesn't seem to be totally wrong, as it works in the built game.
Here is the code snippet that should load the scene:
private void MessageRecived(object sender, GNL.ResponseMessageEventArgs e)
{
    GameEventMessage message = this.eventManager.MessageHandler.ParseMessage(e.Message);
    Debug.Log($"Recived message with type {message.Type}");

    switch (message.Type)
    {
        case GameEvent.CharacterSelectionRequired:
            Debug.Log($"Handle character creation 01");
            this.HandleCharacterSelectionRequried(message);
            break;
        default:
            break;
    }
}

private void HandleCharacterSelectionRequried(GameEventMessage eventMessage)
{
    Debug.Log($"Handle character creation 02");
    SceneManager.LoadScene("CharacterCreation");
}
All three Debug.Log statements are executed. Only the LoadScene call isn't working in editor play mode.
IMPORTANT ADDITION
After further testing, I have to mention that the network communication is done in a separate thread. When a new message arrives, an event handler is called from this thread.
This is where the method is subscribed to the event:
this.client = new GNL.GameClient(System.Net.IPAddress.Parse(host), port);
this.client.AnnounceRecivedMessage += this.MessageRecived;
this.clientNetwork = new Thread(this.client.Start);
clientNetwork.Start();
And this is the definition of the eventHandler:
public event EventHandler<ResponseMessageEventArgs> AnnounceRecivedMessage;
IMPORTANT ADDITION - Part 2
I just discovered that it works in a normal build but not when selecting "Development Build" in the build settings.
This is really annoying, as I have to build the game every time I make a change in order to test it.
I'm thankful for any help and suggestions.
After much more debugging and testing, I figured out that this is indeed a threading issue: the scene API is being called from the network thread rather than Unity's main thread.
Deep in the debugger I found the exception I would have expected for a threading issue, so I will have to change my implementation here.
Still, there is an inconsistency in how threads are handled in Unity when running the game with debugging tools enabled (play mode in the editor and development builds) versus disabled (normal builds).
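A common fix, sketched below under the assumption that only the main thread may touch Unity APIs (this is not the asker's final code), is to queue incoming work on the network thread and run it from Update(), which always executes on the main thread:

using System;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical helper: other threads enqueue work, Update() runs it
// on Unity's main thread, where SceneManager.LoadScene is safe.
public class MainThreadDispatcher : MonoBehaviour
{
    private readonly Queue<Action> actions = new Queue<Action>();

    // Safe to call from any thread (e.g. from the MessageRecived handler).
    public void Enqueue(Action action)
    {
        lock (actions)
        {
            actions.Enqueue(action);
        }
    }

    private void Update()
    {
        while (true)
        {
            Action action;
            lock (actions)
            {
                if (actions.Count == 0)
                {
                    break;
                }
                action = actions.Dequeue();
            }
            action();
        }
    }
}

The handler would then enqueue the scene load instead of calling it directly, e.g. dispatcher.Enqueue(() => SceneManager.LoadScene("CharacterCreation"));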
AFAIK, you can't switch between scenes in the editor. The Unity Editor only works to edit and play the currently opened scene.
If you want to test some parts of the workflow, create editor-only functions (you can use #if UNITY_EDITOR) to test your scenes without going through the character selection first.
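For instance, a hypothetical editor-only shortcut along those lines (the class name and key binding are just examples; the scene name is from the question):

using UnityEngine;
using UnityEngine.SceneManagement;

public class DebugShortcuts : MonoBehaviour
{
#if UNITY_EDITOR
    private void Update()
    {
        // Editor-only: jump straight to the scene under test,
        // skipping the login and character selection flow.
        if (Input.GetKeyDown(KeyCode.F9))
        {
            SceneManager.LoadScene("CharacterCreation");
        }
    }
#endif
}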
I have a game that will have several massive levels that flow right into each other (it's a Metroidvania game), so I need to unload levels when the character leaves those areas so that the game doesn't crash from using too much memory.
I've already tried:
void OnTriggerExit2D(Collider2D coll)
{
    SceneManager.UnloadScene(sceneIndex);
}
However, I read that you can't call UnloadScene from physics triggers for some reason: https://docs.unity3d.com/ScriptReference/SceneManagement.SceneManager.UnloadScene.html
The documentation says to use UnloadSceneAsync instead, but that doesn't seem to exist: the link in the documentation is broken, and my program won't compile when I try to use it.
How to go about this? How does one unload a scene after the character leaves it?
EDIT: I've also tried this, but it won't compile:
void OnTriggerExit2D(Collider2D coll)
{
    SceneManager.UnloadSceneAsync(sceneIndex);
}
UnloadSceneAsync was added in Unity 5.5, which is still in beta. The only way to have that function available to you is to download the beta version. This is the documentation, and you can get the latest Unity 5.5 (v5.5.0b10 as of writing this) from here.
You can verify this by going to the Release Notes here.
Press Ctrl+F and search for 'UnloadSceneAsync'. It says:
SceneManager: Added UnloadSceneAsync API which can be called anytime unlike UnloadScene.
SceneManager: UnloadScene has now been marked deprecated and will throw an exception if called at illegal times. UnloadSceneAsync should be used instead (762371)
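Once on 5.5, the trigger version from the question should compile as-is. If you want to know when the unload has finished, a minimal sketch (the component and field names here are illustrative):

using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

public class AreaExitUnloader : MonoBehaviour
{
    public int sceneIndex; // build index of the level behind the player

    void OnTriggerExit2D(Collider2D coll)
    {
        // Unlike the deprecated UnloadScene, UnloadSceneAsync
        // may be called from physics callbacks.
        StartCoroutine(UnloadLevel());
    }

    IEnumerator UnloadLevel()
    {
        AsyncOperation op = SceneManager.UnloadSceneAsync(sceneIndex);
        yield return op; // wait until the scene has actually been unloaded
        Debug.Log("Level " + sceneIndex + " unloaded");
    }
}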
We've created a small Unity3D demo with the AirConsole plugin, which works in the Unity debugger. (If I press play, the browser opens; sometimes it works, sometimes it doesn't. If it doesn't, restarting Unity makes it work again.)
If we create a release or development build, it no longer works. It loads the image correctly, but the controllers (virtual + phone) stay on 'loading' most of the time. Sometimes they reach the first correct HTML page, but then the messages they send don't seem to arrive on the screen side.
When I click 'Open Exported Port' after the build, it doesn't work either, except for one time.
One error message I got once:
"Uncaught TypeError: Cannot read property 'postQueue' of undefined"
This error message always appears:
"pre-main prep time: 176 ms UnityLoader.js:1
Module.printErr # UnityLoader.js:1"
Do you know what these error messages mean?
I've tried a lot, but this seems to solve the problem: be sure to attach the event listeners in the Awake method, as they do in the basic example application.
public class AirConsoleService : MonoBehaviour
{
    void Awake()
    {
        // register events
        AirConsole.instance.onReady += OnReady;
        AirConsole.instance.onMessage += OnMessage;
        AirConsole.instance.onConnect += OnConnect;
        AirConsole.instance.onDisconnect += OnDisconnect;
        // etc. ...
    }

    // etc. ...
}
My problem was that my AirConsoleService was static rather than a MonoBehaviour, to ensure that there is only one instance of AirConsoleService. That works perfectly in the debug 'play' mode, but in the release build AirConsole somehow does not know the deviceID which sent the message (meaning we get -1 from the ConvertDeviceIdToPlayerNumber method), and this explains why the controllers don't get any signal from the screen.
My solution: I attached it as a component to the AirConsole object.
Further notes:
Anti-Virus may block content.
Your PC/server has to be fast.
During development, log the deviceID, the playerID, and the currently active deviceIDs.
Restart Unity or the plugin's integrated web server often.
I'm still pretty new to scripting in Unity3D, and I'm following along with a tutorial that uses GUI.Button() to draw a button on the screen.
I am intrigued by how this function works. Looking through the documentation, the proper use of GUI.Button is to invoke the function in an if statement and put the code to be called when the button is pushed within the if statement's block.
What I want to know is, how does Unity3D "magically" delay the code in the if statement until after the button is clicked? If it was being passed in as a callback function or something, then I could understand what was going on. Perhaps Unity is using continuations under the hood to delay the execution of the code, but then I feel like it would cause code after the if statement to be executed multiple times. I just like to understand how my code is working, and this particular function continues to remain "magical" to me.
I don't know if it's the official term, but I usually refer to such a system as an immediate mode GUI.
how does Unity3D "magically" delay the code in the if statement until after the button is clicked?
GUI.Button simply returns true if a click event happened inside the button's bounds during the last frame. By calling that function you are essentially polling: every frame, for every button, you ask the engine whether an event concerning that button's screen area has happened.
If it was being passed in as a callback function or something, then I could understand what was going on
You are probably used to an MVC-like pattern, where you pass a controller delegate that's called when a UI event is raised from the view. This is something really different.
Perhaps Unity is using continuations under the hood to delay the execution of the code, but then I feel like it would cause code after the if statement to be executed multiple times.
No. The function simply returns immediately, returning true only if an event happened. If it returns false, the code inside the if statement isn't executed at all.
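A minimal sketch of what this polling pattern looks like in practice (the class name is illustrative):

using UnityEngine;

public class MenuExample : MonoBehaviour
{
    void OnGUI()
    {
        // OnGUI runs several times per frame (layout, repaint, input events).
        // GUI.Button draws the button and returns true only during the
        // event in which the user clicked inside its Rect.
        if (GUI.Button(new Rect(10, 10, 150, 30), "Play"))
        {
            Debug.Log("Button was clicked this frame");
        }
        // Code here runs on every OnGUI call, clicked or not.
    }
}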
Side notes:
That kind of system is hard to maintain, especially for a complex, structured GUI.
It has serious performance implications (memory allocation, one draw call per UI element).
Unless you are writing an editor extension or custom inspector code, I'd stay away from it. If you want to build a menu, implement your own system or use an external plugin (there are several good ones: NGUI, EZGUI, ...).
Unity has already announced a new integrated UI system; it should be released soon.
Good question. The Unity3D GUI goes through several event phases; in the words of the documentation:
Events correspond to user input (key presses, mouse actions), or are UnityGUI layout or rendering events.
For each event, OnGUI is called in the scripts, so OnGUI is potentially called multiple times per frame. Event.current corresponds to the "current" event inside the OnGUI call.
In OnGUI you can find out which event is currently happening with Event.current (see the sketch after the list below).
The following events are processed (link):
Types of UnityGUI input and processing events.
- MouseDown: a mouse button was pressed.
- MouseUp: a mouse button was released.
- MouseMove: the mouse was moved (editor views only).
- MouseDrag: the mouse was dragged.
- KeyDown: a keyboard key was pressed.
- KeyUp: a keyboard key was released.
- ScrollWheel: the scroll wheel was moved.
- Repaint: a repaint event; one is sent every frame.
- Layout: a layout event.
- DragUpdated: editor only; drag & drop operation updated.
- DragPerform: editor only; drag & drop operation performed.
- DragExited: editor only; drag & drop operation exited.
- Ignore: the event should be ignored.
- Used: an already processed event.
- ValidateCommand: validates a special command (e.g. copy & paste).
- ExecuteCommand: executes a special command (e.g. copy & paste).
- ContextClick: the user has right-clicked (or Control-clicked on the Mac).
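For example, a small probe (the class name is illustrative) that checks for one of these event types via Event.current:

using UnityEngine;

public class EventProbe : MonoBehaviour
{
    void OnGUI()
    {
        // Event.current is the event this particular OnGUI call is processing.
        Event e = Event.current;
        if (e.type == EventType.MouseDown)
        {
            Debug.Log("Mouse down at " + e.mousePosition);
        }
    }
}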
Unity GUI has improved a lot lately and is quite useful if you want to handle things programmatically. If you want to handle things visually, I recommend looking at the plugins heisenbug refers to.
If you decide to use Unity GUI, I recommend using only one object with OnGUI and letting this object handle all your GUI.