I am trying to understand how to use the profiler to inspect C# objects that are not related to the scene. In my application, after parsing a bunch of XML and creating a bunch of objects that persist via a static dictionary, I can see the mono memory value jump up in the profiler. However, I cannot seem to see the breakdown of what memory is where, how many instances of objects exist, etc. It would seem the profiler only knows about GameObjects and MonoBehaviours. Is this accurate?
If your main concern is to see the memory size of your static dictionary, you can do so by using the Profiler class:
public static int GetRuntimeMemorySize(Object o);
http://docs.unity3d.com/ScriptReference/Profiler.GetRuntimeMemorySize.html
You can also use the Profiler class functions:
public static void BeginSample(string name, Object targetObject);
public static void EndSample();
to profile a piece of code with a custom label.
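For example, a minimal sketch of wrapping the XML parsing in a custom sample (the parser call is hypothetical, and depending on the Unity version the Profiler class lives in UnityEngine or UnityEngine.Profiling):
using UnityEngine;
// newer Unity versions also need: using UnityEngine.Profiling;

public class XmlLoadProfiling : MonoBehaviour
{
    void Start()
    {
        // Wrap the suspect allocation in a custom sample so it shows up
        // under its own label in the profiler.
        Profiler.BeginSample("ParseXmlAndBuildDictionary", this);
        MyXmlDatabase.LoadAll();   // hypothetical call that fills the static dictionary
        Profiler.EndSample();
    }
}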
I am planning to code a card game in Unity/C#. In this game, every card will be a different class derived from a main Card class, and they will override the Card virtual functions. My problem is: if there are 1000 cards or more, will every card class's methods and size information be loaded into RAM when the game executes? If so, how can I manage this? I just want to load the class data for the cards in the player's deck into RAM. What are your suggestions? Thanks in advance.
If you think it might be a problem, create a quick test. It should be easy enough to create your main class with one or two virtual methods, and then create 100 derived classes, each one of which overrides the main class methods in the same way. Something like:
public class MainCard
{
    // properties

    // two virtual methods, e.g.:
    public virtual void Play() { }
    public virtual int Score() { return 0; }
}
Load and run your program (something simple that just instantiates a MainCard object), and determine how much memory it's taking.
Then, create 100 derived classes that override the methods in MainCard. The implementations of the overridden virtual methods can be identical.
public class Card01 : MainCard
{
    // override both MainCard methods
    public override void Play() { }
    public override int Score() { return 1; }
}
public class Card02 : MainCard
{
    // copy the Card01 implementation here
    public override void Play() { }
    public override int Score() { return 1; }
}
Do that for 100 classes. See how much memory it takes.
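For the measurement itself, a rough managed-heap number can be taken with GC.GetTotalMemory around the instantiation (just a sketch; the card types are the ones from the test above):
using System;

public static class MemoryTest
{
    public static void Main()
    {
        long before = GC.GetTotalMemory(true);      // force a collection for a stable baseline

        // instantiate one of each card type used in the test
        MainCard[] cards = { new Card01(), new Card02() /* ..., new Card100() */ };

        long after = GC.GetTotalMemory(true);
        Console.WriteLine("Approximate bytes used: " + (after - before));
        GC.KeepAlive(cards);                        // keep the cards alive until after the measurement
    }
}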
Remember that in C#, the code for a class isn't necessarily even loaded into memory until an instance of the class is created, and even then all you get is the metadata; native code is generated only for the functions that are actually called. The just-in-time compiler handles loading and compiling to native code on an as-needed basis, unless you pre-compile on installation, of course.
Also remember, though, that once compiled the code will remain in memory throughout the life of the program. So if it's possible for every possible card to be used within a single game, then you do have to make sure that the compiled code for all cards will fit into memory.
So, I have bound the CombatController to an object called "godObject". In the Start() method, I call init() functions on other classes. I did this so I can control the order in which objects are initialized since, for example, the character controller relies on the grid controller being initialized.
Quick diagram:
--------------------
| CombatController |
--------------------
        | calls
        |----> CameraController.init();
        |----> GridController.init();
        |----> CharacterController.init();
So, now I have a slight problem. I have multiple properties that I need in every controller. At the moment, I have bound everything to the combat controller itself. That means that, in every controller, I have to get an instance of the CombatController via GameObject.Find("godObject").GetComponent<CombatController>(). To be honest, I don't think this is good design.
My idea now was to create a BaseCombatController that extends MonoBehaviour, and then have all other classes like GridController, CharacterController etc. extend the BaseCombatController. It might look like this:
public class BaseCombatController : MonoBehaviour
{
    public GameObject activePlayer;

    public void setActivePlayer(GameObject player) {
        this.activePlayer = player;
    }

    // ... more stuff to come ...
}
This way, I could access activePlayer everywhere without the need to fetch an instance of the CombatController. However, I'm not sure whether this has possible side effects.
So, lots of text for a simple question, is that safe to do?
I use inheritance in Unity all the time. The trick, as you have in the question, is to have your base class inherit from MonoBehaviour. For example:
public class BaseItem : MonoBehaviour
{
    public string ItemName;
    public int Price;

    public virtual void PickUp() { /* pickup logic */ }

    // Additional functions, Update etc. Make them virtual.
}
This class sets up what an item should do. Then in a derived class you can change and extend this behavior.
public class Coin : BaseItem
{
    // properties set in the inspector

    public override void PickUp() { /* override pickup logic */ }
}
I have used this design pattern a lot over the past year, and am currently using it in a retail product. I would say go for it! Unity seems to favor components over inheritance, but you could easily use them in conjunction with each other.
Hope this helps!
As far as I can see this should be safe. If you look into Unity's internal scripts, or even Microsoft's, they all extend/inherit (from) each other.
Another thing you could try would be the use of interfaces; here is the Unity documentation on them if you want to check it out: https://unity3d.com/learn/tutorials/topics/scripting/interfaces
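For example, a minimal sketch of the interface idea (the interface name and method are made up):
using UnityEngine;

public interface ICombatParticipant
{
    void Init();
}

public class GridController : MonoBehaviour, ICombatParticipant
{
    public void Init()
    {
        // grid setup
    }
}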
You are right that GameObject.Find is pure code smell.
You can do it via the inheritance tree (as discussed earlier) or even better via interfaces (as mentioned by Assasin Bot), or (I am surprised no one mentioned it earlier) via static fields (aka the Singleton pattern).
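For completeness, a minimal singleton sketch (assuming there is exactly one CombatController in the scene; the activePlayer field comes from the question):
using UnityEngine;

public class CombatController : MonoBehaviour
{
    public static CombatController Instance { get; private set; }
    public GameObject activePlayer;

    void Awake()
    {
        Instance = this;   // register the single instance
    }
}

// Other controllers can then write CombatController.Instance.activePlayer
// instead of GameObject.Find("godObject").GetComponent<CombatController>().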
One thing to add from experience: having to call Init()s in a specific order is a yellow flag for your design. I've been there myself and found myself drowning in init-order management.
As general advice: Unity gives you two useful callbacks, Awake() and Start(). If you find yourself needing Init(), you are probably not using those two as they were designed.
All Awake() calls are guaranteed (for active objects) to run before the first Start(), so do all the internal object initialisation in Awake(), and the binding to external objects in Start(). If you find yourself needing finer control, you should probably simplify the design a bit.
As a rule of thumb: all objects should have their internal state (GetComponent<> calls, list inits, etc.) in order by the end of Awake(), but they should not make any calls that depend on other objects being ready before Start(). Splitting it this way usually helps a lot.
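A small sketch of that split (the class and field names are illustrative):
using UnityEngine;

public class UnitController : MonoBehaviour
{
    private Animator animator;      // internal component
    private GridController grid;    // external dependency

    void Awake()
    {
        // internal state only: cache our own components, build lists, etc.
        animator = GetComponent<Animator>();
    }

    void Start()
    {
        // binding to external objects: every active object's Awake() has already run
        grid = FindObjectOfType<GridController>();
    }
}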
I am attempting to call out to parallel.js via JSNI. Parallel.js provides a nice API around web workers, and I wrote some lightweight wrapper code which provides a more convenient interface to workers from GWT than Elemental does. However, I'm getting an error which has me stumped:
com.google.gwt.core.client.JavaScriptException: (DataCloneError) @io.mywrapper.workers.Parallel::runParallel([Ljava/lang/String;Lcom/google/gwt/core/client/JavaScriptObject;Lcom/google/gwt/core/client/JavaScriptObject;)([Java object: [Ljava.lang.String;@1922352522, JavaScript object(3006), JavaScript object(3008)]): An object could not be cloned.
This comes from, in hosted mode:
at com.google.gwt.dev.shell.BrowserChannelServer.invokeJavascript(BrowserChannelServer.java:249)
at com.google.gwt.dev.shell.ModuleSpaceOOPHM.doInvoke(ModuleSpaceOOPHM.java:136)
at com.google.gwt.dev.shell.ModuleSpace.invokeNative(ModuleSpace.java:571)
at com.google.gwt.dev.shell.ModuleSpace.invokeNativeVoid(ModuleSpace.java:299)
at com.google.gwt.dev.shell.JavaScriptHost.invokeNativeVoid(JavaScriptHost.java:107)
at io.mywrapper.workers.Parallel.runParallel(Parallel.java)
Here's my code:
Example client call to create a worker:
Workers.spawnWorker(new String[]{"hello"}, new Worker() {
    @Override
    public String[] work(String[] data) {
        return data;
    }

    @Override
    public void done(String[] data) {
        int i = data.length;
    }
});
The API that provides a general interface:
public class Workers {

    public static void spawnWorker(String[] data, Worker worker) {
        Parallel.runParallel(data, workFunction(worker), callbackFunction(worker));
    }

    /**
     * Create a reference to the work function.
     */
    public static native JavaScriptObject workFunction(Worker worker) /*-{
        return worker == null ? null : $entry(function(x) {
            worker.@io.mywrapper.workers.Worker::work([Ljava/lang/String;)(x);
        });
    }-*/;

    /**
     * Create a reference to the done function.
     */
    public static native JavaScriptObject callbackFunction(Worker worker) /*-{
        return worker == null ? null : $entry(function(x) {
            worker.@io.mywrapper.workers.Worker::done([Ljava/lang/String;)(x);
        });
    }-*/;
}
Worker:
public interface Worker extends Serializable {

    /**
     * Called to perform the work.
     * @param data
     * @return
     */
    public String[] work(String[] data);

    /**
     * Called with the result of the work.
     * @param data
     */
    public void done(String[] data);
}
And finally the Parallel wrapper:
public class Parallel {

    /**
     * @param data Data to be passed to the function
     * @param work Function to perform the work, given the data
     * @param callback Function to be called with the result
     * @return
     */
    public static native void runParallel(String[] data, JavaScriptObject work, JavaScriptObject callback) /*-{
        var p = new $wnd.Parallel(data);
        p.spawn(work).then(callback);
    }-*/;
}
What's causing this?
The JSNI docs say, regarding arrays:
opaque value that can only be passed back into Java code
This is quite terse, but ultimately my arrays are passed back into Java code, so I assume these are OK.
EDIT - ok, bad assumption. The arrays, despite only ostensibly being passed back to Java code, are causing the error (which is strange, because there's very little googleability on DataCloneError.) Changing them to String works; however, String isn't sufficient for my needs here. Looks like objects face the same kinds of issues as arrays do; I saw Thomas' reference to JSArrayUtils in another StackOverflow thread, but I can't figure out how to call it with an array of strings (it wants an array of JavaScriptObjects as input for non-primitive types, which does me no good.) Is there a neat way out of this?
EDIT 2 - Changed to use JSArrayString wherever I was using String[]. New issue; no stacktrace this time, but in the console I get the error: Uncaught ReferenceError: __gwt_makeJavaInvoke is not defined. When I click on the url to the generated script in developer tools, I get this snippet:
self.onmessage = function(e) {self.postMessage((function (){
    try {
        return __gwt_makeJavaInvoke(3)(null, 65626, jsFunction, this, arguments);
    }
    catch (e) {
        throw e;
    }
})(e.data))}
I see that __gwt_makeJavaInvoke is part of the JSNI class; so why would it not be found?
You can find a working example of GWT and WebWorkers here: https://github.com/tomekziel/gwtwwlinker/
This is preliminary work, but using this pattern I was able to pass GWT objects to and from a webworker using serialization provided by AutoBeanFactory.
If you never use dev mode it is currently safe to pretend that a Java String[] is a JS array with strings in it. This will break in dev mode since arrays have to be usable in Java and Strings are treated specially, and may break in the future if the compiler optimizes arrays differently.
Cases where this could go wrong in the future:
The semantics of Java arrays and JavaScript arrays are different: Java arrays cannot be resized, and are initialized with specific values based on the component type (the data in the array). Since you are writing Java code, the compiler could conceivably make assumptions based on details about how you create and use that array that could be broken by JS code that doesn't know to never modify the array.
Some arrays of primitive types could be optimized into TypedArrays in JavaScript, more closely following Java semantics in terms of resizing and Java behavior in terms of allocation. This would be a performance boost as well, but could break any use of int[], double[], etc.
Instead, you should copy your data into a JsArrayString, or just use the js array to hold the data rather than going back and forth, depending on your use case. The various JsArray types can be resized and already exist as JavaScript objects that outside JS can understand and work with.
Reply to EDIT 2:
At a guess, the parallel.js script is trying to run your code from another scope, such as in the webworker (that's the point of the code, right?), where your GWT code isn't present. As such, it can't call __gwt_makeJavaInvoke, which is the bridge back into dev mode (it would be a different failure with compiled JS). According to http://adambom.github.io/parallel.js/ there are specific requirements that a passed callback must meet to be passed to spawn and perhaps then; your anonymous functions definitely do not meet them, and it may not be possible to maintain Java semantics.
Before I get much deeper, check out this answer I did a while ago addressing the basic issues with webworkers and gwt/java: https://stackoverflow.com/a/11376059/860630
As noted there, WebWorkers are effectively new processes, with no shared code or shared state with the original process. The Parallel.js code attempts to paper over this with a little bit of trickery: shared state is only available in the form of the contents passed in to the original Parallel constructor, but you are attempting to pass in instances of 'Java' objects and to call methods on them. Those Java instances come with their own state, and can potentially link back to the rest of the Java app through fields in the Worker instance. If I were implementing Worker and doing something that referenced data other than what was passed in, then I would be seeing further bizarre failures.
So the functions you pass in must be completely standalone: they must not refer to external code in any way, since otherwise the function can't be passed off to the webworker, or to several webworkers, each unaware of the others' existence. See https://github.com/adambom/parallel.js/issues/32 for example:
That's not possible since it would
- require a shared state across workers
- require us to transmit all scope variables (I don't think there's even a possibility to read the available scopes)
The only thing which might be possible would be cache variables, but these can already be defined in the function itself with spawn() and don't make any sense in map (because there's no shared state).
Without being actually familiar with how parallel.js is implemented (all of this answer so far is from reading the docs and a quick google search for "parallel.js shared state", plus having experimented with WebWorkers for a day or so and deciding that my present problem wasn't yet worth the bother), I would guess that then is unrestricted and you can pass it whatever you like, but spawn, map, and reduce must be written in such a way that their JS can be passed off to the new JS process and stand completely alone there.
This may be possible from your normal Java code when compiled, provided you have just one implementation of Worker and that impl never uses state other than what is directly passed in. In that case the compiler should rewrite your methods to be static so that they are safe to use in this context. However, that doesn't make for a very useful library, as it seems you are trying to achieve. With that in mind, you could keep your worker code in JSNI to ensure that you follow the parallel.js rules.
Finally, and against the normal GWT rules, avoid $entry for calls you expect to happen in other contexts, since those workers have no access to the normal exception handling and scheduling that $entry enables.
(and finally finally, this is probably still possible if you are very careful at writing Worker implementations and write a Generator that invokes each worker implementation in very specific ways to make sure that com.google.gwt.dev.jjs.impl.MakeCallsStatic and com.google.gwt.dev.jjs.impl.Pruner can correctly act to knock out the this in those instance methods once they've been rewritten as JS functions. I think the cleanest way to do this is to emit the JSNI in the generator itself, call a static method written in real Java, and from that static method call the specific instance method that does the heavy lifting for spawn, etc.)
I've been creating a prototype for a modern MUD engine. A MUD is a simple form of simulation and provides a good testbed for a concept I'm working on. This has led me to a couple of places in my code where things are a bit unclear and the design is coming into question (probably because it is flawed). I'm using model first (I may need to change this) and I've designed a top-down architecture of game objects. I may be doing this completely wrong.
What I've done is create a MUDObject entity. This entity is effectively a base for all of my other logical constructs, such as characters, their items, race, etc. I've also created a set of three meta classes which are used for logical purposes as well: Attributes, Events, and Flags. They are fairly straightforward, and all inherit from MUDObject.
The MUDObject class is designed to provide default data behavior for all of the objects; this includes deletion of dead objects, automatic clearing of floors, etc. It is also designed so this logic can be overridden virtually if needed, for example checking a room to see if an effect has ended and deleting the effect (removing the flag).
public partial class MUDObject
{
    public virtual void Update()
    {
        if (this.LifeTime.Value.CompareTo(DateTime.Now) > 0)
        {
            using (var context = new ReduxDataContext())
            {
                context.MUDObjects.DeleteObject(this);
            }
        }
    }

    public virtual void Pause()
    {
    }

    public virtual void Resume()
    {
    }

    public virtual void Stop()
    {
    }
}
I've also got a class World. It is derived from MUDObject and contains the areas and rooms (which in turn contain the game's objects), and it handles the timer that drives the updates. (The timer will probably be moved; it is here because, if it works, it would limit updates to only the objects in the world at the time.)
public partial class World
{
    private Timer ticker;

    public void Start()
    {
        this.ticker = new Timer(3000.0);
        this.ticker.Elapsed += ticker_Elapsed;
        this.ticker.Start();
    }

    private void ticker_Elapsed(object sender, ElapsedEventArgs e)
    {
        this.Update();
    }

    public override void Update()
    {
        this.CurrentTime += 3;
        // update contents
        base.Update();
    }

    public override void Pause()
    {
        this.ticker.Enabled = false;
        // update contents
        base.Pause();
    }

    public override void Resume()
    {
        this.ticker.Enabled = true;
        // update contents
        base.Resume();
    }

    public override void Stop()
    {
        this.ticker.Stop();
        // update contents
        base.Stop();
    }
}
I'm curious about a few things.
1. Is there a way to recode the context so that it has separate ObjectSets for each type derived from MUDObject (i.e. context.MUDObjects.Flags or context.Flags)? If not, how can I query a child type specifically?
2. Does the Update/Pause/Resume/Stop architecture I'm using work properly when placed into the EF entities directly, given that it's for data purposes only?
3. Will locking be an issue?
4. Does the partial class automatically commit changes when they are made?
5. Would I be better off using a flat repository and doing this in the game engine directly?
1) Is there a way to recode the context so that it has separate ObjectSets for each type derived from MUDObject?
Yes, there is. If you decide that you want to define a base class for all your entities, it is common to have an abstract base class that is not part of the Entity Framework model. The model only contains the derived types, and the context contains DbSets of the derived types (if it is a DbContext), like
public DbSet<Flag> Flags { get; set; }
If appropriate you can implement inheritance between classes, but that would be to express polymorphism, not to implement common persistence-related behaviour.
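A minimal sketch of that layout, reusing the type names from the question (the context name and key properties are assumptions):
using System.Data.Entity;

// Base class kept out of the EF model; it only carries shared behaviour.
public abstract class MUDObject
{
    public virtual void Update() { }
    public virtual void Pause() { }
    public virtual void Resume() { }
    public virtual void Stop() { }
}

public class Flag : MUDObject
{
    public int Id { get; set; }
}

public class World : MUDObject
{
    public int Id { get; set; }
}

public class ReduxDataContext : DbContext
{
    // one set per derived type, so you can write context.Flags directly
    public DbSet<Flag> Flags { get; set; }
    public DbSet<World> Worlds { get; set; }
}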
2) Does the Update/Pause/Resume/Stop architecture I'm using work properly when placed into the EF entities directly?
No. Entities are not supposed to know anything about persistence. The context is responsible for creating them, tracking their changes and updating/deleting them. I think that also answers your question about automatically committing changes: no.
Elaboration:
I think here it's good to bring up the single responsibility principle. A general pattern (sketched in code after this list) would be to:
- let a context populate objects from a store
- let the objects act according to their responsibilities (the simulation)
- let a context store their state whenever necessary
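A rough sketch of that split (the set and method names follow the question, and it assumes a context like the one sketched above):
using System.Linq;

class GameLoop
{
    static void Tick()
    {
        using (var context = new ReduxDataContext())
        {
            // 1. the context populates objects from the store
            World world = context.Worlds.First();

            // 2. the objects act according to their responsibilities (the simulation)
            world.Update();

            // 3. the context stores their state whenever necessary
            context.SaveChanges();
        }
    }
}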
I think Pause/Resume/Stop could be responsibilities of MUD objects. Update is an altogether different kind of action and responsibility.
Now I have to speculate, but take your World class. You should be able to express its responsibility in a short phrase, maybe something like "harbour other objects" or "define boundaries". I don't think it should do the timing. I think the timing should be the responsibility of some core utility which signals that a time interval has elapsed. Other objects know how to respond to that (e.g. do some state change, or, in the case of the context or repository, save changes).
Well, this is only an example of how to think about it, probably far from correct.
One other thing is that I think saving changes should be done far less often than the state changes of the objects that carry out the simulation; saving on every change would probably slow down the process dramatically. Maybe it should be done at longer intervals or on a user action.
First thing to say: if you are using EF 4.1 (as it is tagged), you should really consider moving to version 5.0 (you will need to make a .NET 4.5 project for this).
Along with several performance improvements, you can benefit from other features as well. The code I will show you works for 5.0 (I don't know if it will work for version 4.1).
Now, let's go through your several questions:
Is there a way to recode the context so that it has separate ObjectSets for each type derived from MUDObject? If not, how can I query a child type specifically? (i.e. context.MUDObjects.Flags or context.Flags)
Yes, you can. But the call is a little different: you will not have Context.Worlds; you will only have the base set. If you want to get the set of Worlds (which inherit from MUDObject), you call:
var worlds = context.MUDObjects.OfType<World>();
Or you can do it the direct way by using generics:
var worlds = context.Set<World>();
If you define your inheritance the right way, you should have an abstract class called MUDObject, and all others should inherit from that class. EF can work perfectly with this; you just need to set it up right.
Does the Update/Pause/Resume/Stop architecture I'm using work properly when placed into the EF entities directly, given that it's for data purposes only?
In this case I think you should consider using a design pattern called the Strategy pattern; do some research, it will fit your objects.
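As a rough idea of how the Strategy pattern could look here (the interface, class, and method names are invented for the example):
// The varying behaviour is pulled out behind an interface...
public interface IUpdateStrategy
{
    void Update(MUDObject target);
}

public class ExpireDeadObjects : IUpdateStrategy
{
    public void Update(MUDObject target)
    {
        // decide whether the object's lifetime has passed and remove it via the context
    }
}

// ...and the entity just delegates to whichever strategy it was given.
public partial class MUDObject
{
    public IUpdateStrategy UpdateStrategy { get; set; }

    public void RunUpdate()
    {
        if (UpdateStrategy != null)
        {
            UpdateStrategy.Update(this);
        }
    }
}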
Will locking be an issue?
Depends on how you develop the system....
Does the partial class automatically commit changes when they are made?
Did not understand that question... Partial classes are just like regular classes; they are just split across different files, but when compiled (or even at design time, because of the vshost.exe) they are in fact just one class.
Would I be better off using a flat repository and doing this in the game engine directly?
Hard to answer; it all depends on the requirements of the game, the deployment strategy...
I am using the following code in my ASP.NET app. According to this code, for all users of the app there will be only a single instance of DbProviderFactory. Will this create a problem in a multi-user environment? All users would use the same DbProviderFactory object to create connections. I am not sure if this will create some type of hidden problem in a multi-user environment.
The reason why I am using a static instance for DbProviderFactory is so that the GetFactory method is not called every time a connection needs to be instantiated. This, I think, would make it quicker to get a connection object. Any flaw in my reasoning?
public class DatabaseAccess
{
    private static readonly DbProviderFactory _dbProviderFactory =
        DbProviderFactories.GetFactory(System.Configuration.ConfigurationManager.ConnectionStrings["DB"].ProviderName);

    public static DbConnection GetDbConnection()
    {
        DbConnection con = _dbProviderFactory.CreateConnection();
        con.ConnectionString = System.Web.Configuration.WebConfigurationManager.ConnectionStrings["DB"].ConnectionString;
        return con;
    }
}
It looks fine, but probably will not create interesting efficiencies.
Object creation in .NET is quick. So creating the factory doesn't take a lot of time. Acquiring the connection from a remote database does, but with connection pooling, this normally isn't an issue.
The factory doesn't appear to implement any state of its own and looks like it's probably immutable, so access from different threads is probably okay.
Static objects aren't garbage collected. I doubt the factory will grow in size, so this shouldn't be a problem.
So you avoid a bunch of cheap object creation and a bunch of cheap background garbage collections, and take on a minor risk of a derived class actually having state and not being thread-safe, depending on the exact implementation returned by GetFactory.
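For illustration, typical usage of the helper above (a sketch; the repository class and table name are made up). The factory is created once, but each request still gets its own connection, which the pool manages:
using System;
using System.Data.Common;

public class UserRepository
{
    public int CountUsers()
    {
        using (DbConnection con = DatabaseAccess.GetDbConnection())
        using (DbCommand cmd = con.CreateCommand())
        {
            cmd.CommandText = "SELECT COUNT(*) FROM Users";   // hypothetical table
            con.Open();
            return Convert.ToInt32(cmd.ExecuteScalar());
        }
    }
}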