So I was reading Odin's docs and noticed mentions that the serialization system is needed in the compiled build. AFAIK, serialization is used for scene saving and Inspector stuff - none of which is needed in the game-ready compiled build.
Then what's the purpose of a serialization system in a compiled build?
In general, serialization means nothing more than storing certain values in serialized form in a file, e.g. a Unity scene or any asset like a Prefab or ScriptableObject. You can basically open all of them in a text editor and see their serialized version.
Anything serialized can be configured in the Inspector, and at runtime/in a build the instances are created using the serialized (stored) values for their fields instead of the defaults.
Basically, any serialized field like
[SerializeField] private float value;
or
public string someString;
works like this, and of course that requires at least some kind of deserialization system at runtime and in a compiled build.
It is also possible to use serialization and deserialization at runtime/in a build, e.g. for converting classes to and from JSON, XML, or other formats, or for loading and creating assets or AssetBundles.
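For example, Unity's built-in JsonUtility covers exactly this kind of runtime (de)serialization. A minimal sketch, assuming a made-up SaveData class (the class and its fields are placeholders, not part of any API):

using UnityEngine;

[System.Serializable]
public class SaveData
{
    public int score;
    public string playerName;
}

public class SaveExample : MonoBehaviour
{
    void Start()
    {
        // Serialize the object's public fields to a JSON string at runtime...
        string json = JsonUtility.ToJson(new SaveData { score = 42, playerName = "Alice" });

        // ...and deserialize it back, e.g. after reading the string from a save file.
        SaveData restored = JsonUtility.FromJson<SaveData>(json);
        Debug.Log(json + " -> score " + restored.score);
    }
}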
Therefore it makes complete sense to have (de)serialization systems in a build, and it is very unlikely to find a system where serialization is isolated from deserialization.
Specific to the Odin Inspector: it uses its serialization system to - as they say - serialize everything. This even allows you to store scene references in prefabs, so that the prefab, once instantiated, can restore the corresponding references from the scene.
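As a rough illustration of what that extra layer buys you (a sketch assuming Odin's SerializedMonoBehaviour base class; LootTable and dropChances are made-up names):

using System.Collections.Generic;
using Sirenix.OdinInspector;

// Deriving from SerializedMonoBehaviour hands the serialization of this
// component to Odin, which also handles types Unity's serializer cannot,
// e.g. a Dictionary. Odin's deserialization code must then ship in the
// compiled build so these values can be restored at runtime.
public class LootTable : SerializedMonoBehaviour
{
    public Dictionary<string, float> dropChances = new Dictionary<string, float>();
}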
I'm trying to write a tool that scans through our Unity (2017.4.22) prefabs and scene files to look for MonoBehaviour properties that don't exist in a release build. I've created a C# console project (incorporating YamlDotNet 6.1.2), and from this project I reference Unity's 'Assembly-CSharp-firstpass.dll' and 'Assembly-CSharp.dll'.
I was wondering if anyone has been able to configure YamlDotNet to parse any given prefab/scene file into a generic key/value data structure (with an arbitrary number of nested levels) in memory, so I can iterate it and use reflection to determine if fields exist.
If you're wondering why I need to do this manually, it's because I'm scanning for fields that don't exist in release builds. The only way to do that is to recompile 'Assembly-CSharp-firstpass' and 'Assembly-CSharp' with UNITY_EDITOR removed (and with all code from files within an 'Editor' subfolder culled). I can't do this in-editor (obviously), so that's why this has to be a standalone tool.
Everything I've tried has resulted in crashes. Here's what I tried:
I downloaded the YamlDotNet 6.1.2 source
Tried to deserialize a prefab using Deserialize(...). Received this error: "Encountered an unresolved tag 'tag:unity3d.com,2011:1'"
I then found this custom Type resolver, which I integrated: https://gist.github.com/derFunk/795d7a366627d59e0dbd
I then started receiving this exception: "Exception during deserialization ---> System.InvalidOperationException: Failed to create an instance of type 'UnityEngine.GameObject'"
I'm guessing this is happening because I'm trying to instantiate a GameObject in an environment that doesn't fully support GameObjects (I am in a standalone C# project, after all).
But I don't actually need to instantiate any GameObjects. I just want to parse the values. Does this make sense to anyone? I found a few other questions here, but they don't seem to handle YAML files of the same complexity as Unity's prefabs.
Thanks in advance,
Jeff
Your guess is correct. Because you are resolving the tag:unity3d.com,2011:* tags to UnityEngine.GameObject and related types, the deserializer attempts to create an instance of the corresponding type when it encounters those tags.
You could opt to always resolve the tags to Dictionary<string, object> (or Dictionary<object, object> if the keys are not always strings). Then you will always get dictionaries back.
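One way that could look with YamlDotNet's DeserializerBuilder (a sketch under the assumption that you register one WithTagMapping call per Unity class-ID tag occurring in your files):

using System.Collections.Generic;
using YamlDotNet.Serialization;

var deserializer = new DeserializerBuilder()
    // Map Unity's tags to plain dictionaries instead of real Unity types,
    // so no UnityEngine objects ever get instantiated.
    .WithTagMapping("tag:unity3d.com,2011:1", typeof(Dictionary<string, object>))   // 1 = GameObject
    .WithTagMapping("tag:unity3d.com,2011:114", typeof(Dictionary<string, object>)) // 114 = MonoBehaviour
    .Build();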
In your case it is probably simpler to load the YAML stream using YamlDotNet.RepresentationModel.YamlStream. This will give you a representation of the YAML document that will be more suitable for your use case.
Here's the official example on how to do it: https://github.com/aaubry/YamlDotNet/wiki/Samples.LoadingAYamlStream
using System.IO;
using YamlDotNet.RepresentationModel;

// Load the stream (e.g. from a prefab or scene file)
var input = new StreamReader("Example.prefab");
var yaml = new YamlStream();
yaml.Load(input);

// Examine the stream
var mapping = (YamlMappingNode)yaml.Documents[0].RootNode;
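From there you can walk the node tree without instantiating anything. A rough sketch of listing each document's top-level keys (in Unity's format, each object is its own YAML document whose root is a mapping):

using System;
using YamlDotNet.RepresentationModel;

foreach (var document in yaml.Documents)
{
    // e.g. a root mapping with the single key "MonoBehaviour",
    // whose value is a mapping of the serialized fields.
    var root = (YamlMappingNode)document.RootNode;
    foreach (var entry in root.Children)
    {
        var typeName = ((YamlScalarNode)entry.Key).Value;  // e.g. "MonoBehaviour"
        var fields = (YamlMappingNode)entry.Value;         // field name -> value nodes
        Console.WriteLine(typeName + ": " + fields.Children.Count + " fields");
    }
}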
I am using the PostSharp solution for INotifyPropertyChanged by decorating my business classes with the [NotifyPropertyChanged] attribute.
All works fine.
Now I have written a custom aspect that handles property changes so that some custom flags get set when certain special properties change. This aspect is named [HandlePropertyChanged] and works when used alone.
Now I am trying to use both aspects in combination. As I read on the PostSharp page, I can order them manually to ensure a fixed order by using
[NotifyPropertyChanged(AspectPriority = 0)]
[HandlePropertyChanged(AspectPriority = 1)]
In this case, I can build my solution, but because "NotifyPropertyChanged" runs before "HandlePropertyChanged", the changes to my properties are already done and the custom logic does not run correctly.
If I try this
[HandlePropertyChanged(AspectPriority = 0)]
[NotifyPropertyChanged(AspectPriority = 1)]
my build fails with the error at the bottom of the text (see below).
Best would be to simply do what NotifyPropertyChanged does inside my custom aspect and forget about the PostSharp aspect.
Is this possible?
0: Error C:\Source\WAVE\WAVE.Data.Contracts\Entities\Base\EntityBase.cs (17,16) PS0115: Conflicting aspects on "TopMotive.WAVE.Data.Contracts.Entities.Base.EntityBase`1": according to aspect dependencies, transformation "Instantiation of aspect PostSharp.Patterns.Model.NotifyPropertyChangedAttribute" should be located both before and after transformation "Instantiates binding collection for field "PostSharp.Patterns.Model.NotifyPropertyChangedAttribute/LocationBindings".".
This bug is fixed in PostSharp 5.0.52 and PostSharp 6.0.16 RC.
Try a superior and free alternative: Stephen Cleary's Calculated Properties.
https://github.com/StephenCleary/CalculatedProperties/blob/master/README.md
I have used both in production and found Calculated Properties to be far better than PostSharp's aspect.
Also from PostSharp docs:
"If a property getter calls a virtual method from its class or a delegate, or references a property of another object (without using canonical form this.field.Property), PostSharp will generate an error because it cannot resolve such a dependency at build time. The same limitations apply when your property getter contains complex data flows, such as loops, or calls to methods (except property getters) of other classes.
When this happens, you can either refactor your code so that it can be automatically analyzed by PostSharp, or you can take over the responsibility for analyzing the code"
None of those limitations apply to Calculated Properties. It can handle loops, virtual methods, LINQ to Objects - basically any runtime dependency you can imagine, no matter how indirect. The dependency graph rewires itself at runtime and just works without any ceremony. It is also fast.
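For reference, a dependent property looks roughly like this with Calculated Properties (adapted from the README linked above; double-check the exact API there):

using System;
using System.ComponentModel;
using CalculatedProperties;

public class ViewModel : INotifyPropertyChanged
{
    private readonly PropertyHelper Properties;

    public ViewModel()
    {
        Properties = new PropertyHelper(RaisePropertyChanged);
    }

    // A plain source property backed by the helper.
    public int Radius
    {
        get { return Properties.Get(0); }
        set { Properties.Set(value); }
    }

    // Recalculated (and PropertyChanged raised) whenever Radius changes.
    public double Area
    {
        get { return Properties.Calculated(() => Math.PI * Radius * Radius); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void RaisePropertyChanged(PropertyChangedEventArgs args)
    {
        PropertyChanged?.Invoke(this, args);
    }
}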
I'm trying to save/serialize a GameObject and its components (with their current values), but I don't think it's possible. Any ideas?
Keep in mind that I don't want to use an asset from the Asset Store - I want to code it myself.
Thanks in advance :)
Sorry, I don't have the rep to make a comment. I use two methods to achieve this save state.
The first is what programmer seems to suggest: make a class that holds the properties needed to recreate the object. Component names can be saved as strings and the components later fetched using
gameObject.AddComponent(Type.GetType(componentName));
If you are doing this a lot, it may be worth checking out this forum thread first: Here
You can serialize these classes using the Unity serializer (the string names of the types will serialize). You may have to make a subclass to handle the values in each object of a different type.
This can be a fairly complicated approach, so I prefer serializing a file path to a prefab in the Resources folder. Any values that need saving can be stored in a serializable class, as in the first approach, which is referenced by a script on the saved prefab. This script has a method that applies all the details saved in the deserialized class to the prefab when it is created. The details are captured by the script, added to the class, and serialized whenever the object's state should be recorded.
The exact way this is managed may vary, but I have done this myself with the ability to store any values I could need. Even if you use a third-party serializer from the Asset Store you will still have trouble saving entire objects, but you will be able to get enough info to recreate them.
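A minimal sketch of that second approach (SavedState, SavableObject, and their fields are placeholder names, not any existing API):

using System;
using UnityEngine;

// The serializable "save state": the prefab's Resources path plus
// whatever values are needed to restore the instance.
[Serializable]
public class SavedState
{
    public string resourcePath; // e.g. "Prefabs/Enemy"
    public Vector3 position;
    public int health;
}

public class SavableObject : MonoBehaviour
{
    public string resourcePath;
    public int health;

    // Capture the current state so it can be serialized (e.g. to JSON).
    public SavedState Capture()
    {
        return new SavedState
        {
            resourcePath = resourcePath,
            position = transform.position,
            health = health
        };
    }

    // Recreate the prefab and apply the saved values to the new instance.
    public static SavableObject Restore(SavedState state)
    {
        var prefab = Resources.Load<GameObject>(state.resourcePath);
        var instance = Instantiate(prefab, state.position, Quaternion.identity);
        var savable = instance.GetComponent<SavableObject>();
        savable.health = state.health;
        return savable;
    }
}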
I've managed to save the components I want (custom scripts) by saving them to a JSON file, then serializing that with binary so it's more secure.
Thanks to everyone who tried to help tho...
I'd like to know if it is possible to use GWT's serializer directly. When using GWT's RPC mechanism, GWT serializes the objects on the client and deserializes them on the server. For this mechanism you have to use GWT's special servlets (RemoteServiceServlet). But I want to use normal HttpServlets, and therefore I have to serialize and deserialize the objects myself.
All the code you need to look at is in RemoteServiceServlet.java. Focus on the processCall method.
RPC.decodeRequest(payload, ...) will give you an RPCRequest object, which includes the method to be called and the deserialized parameters.
To encode the response, look at the RPC.invokeAndEncodeResponse() and RPC.encodeResponseForSuccess() methods.
[EDITED]
On the client side it's worth taking a look at the proxy classes generated by the RPC generator, specifically the YourService_Proxy.java file. Generated files are left somewhere in your project's folder structure after compiling the project (you can specify this folder with the -gen flag, though).
The interesting code is in RemoteServiceProxy; looking at the createStreamWriter method, you can see how to serialize your objects. In createStreamReader you can see how to deserialize a message from the server.
See gwt-byte-serializer
// Write values into the serializer's buffer
SerializerInt ser = new Serializer();
ser.writeValue("test");
ser.writeValue(new int[]{5, 1, 6});
String buffer = ser.getBuffer();

// Read them back in the same order they were written
SerializerInt des = new Serializer(buffer);
des.readString();
des.readIntegerArr();
GWT compiles the Java source into JavaScript, and names the files according to a hash of their contents. I'm getting a new set of files every compile, because the JavaScript contents are changing, even when I don't change the source at all.
The files are different for OBF and PRETTY output, but if I set it to DETAILED, they're no longer different on every compile. In PRETTY, I can see that all/most of the differences between compiles are in the value parameters for typeId. For example, a function called initValues() is called with different values for its typeId parameter.
In PRETTY mode, the differences you see are in the allocation of Java classes to typeIds. This is how GWT manages runtime type checking. You'll notice a table at the bottom of each script essentially mapping each typeId to all compatible superclasses. This is how GWT can still throw ClassCastException in JavaScript (though you should run into this very rarely!).
In OBF mode, the differences are due to the allocation of minified function names.
In both cases, it's due to the order in which the compiler processes the code. Some internal symbol tables might be using a non-ordered collection to store symbols for processing. This can happen for lots of reasons.
As far as I know, GWT will compile a new version every time you compile it; this is a feature ;)
You can use ant to control it though, so that it only builds the GWT section of your application if it's actually changed:
http://wiki.shiftyjelly.com/index.php/GWT#Use_The_Power_of_Ant_to_Build_Changes_Only