Mono.Cecil and Unity not playing nice together - unity3d

I'm trying to use Mono.Cecil to patch my custom user scripts in Unity. I have code I want to inject into those scripts to avoid writing the same lines in every MonoBehaviour in the project.
However, when I do:
using (AssemblyDefinition assemblyDefinition = AssemblyDefinition.ReadAssembly(assembly.Location, new ReaderParameters() { ReadWrite = true }))
{
    // Do some patching here
    assemblyDefinition.Write();
}
Then I get an exception saying:
IOException: Win32 IO returned 1224
Win32 error 1224 is ERROR_USER_MAPPED_FILE, which apparently means the file is memory-mapped (i.e. currently loaded) and therefore locked against being overwritten.
If I instead try to use:
File.Delete(sourceAssemblyPath);
File.Move(targetAssemblyPath, sourceAssemblyPath);
Then the DLL gets patched correctly, but when I try to play the application, the scripts in my scene lose their references, as if replacing the file makes Unity think the scripts on the scene objects no longer exist in the project (which I guess makes sense, since I DID delete the DLL they were in to replace it with the new one).
Does anyone have any idea how to patch the user's project assembly in Unity while keeping the current project usable?
Or should I resort to only patching during builds, or something like that?
Suggestions?
Thanks

Last time I tried something with Cecil, I was able to use a single var stream = new FileStream(path, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite); to both read and write the file, without a delete/move step.
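Roughly like this (a minimal sketch, assuming path points at the compiled script assembly; ReadingMode.Immediate makes Cecil pull the whole image into memory up front, so writing back over the same stream afterwards is safe):

using System.IO;
using Mono.Cecil;

// One stream, opened for both reading and writing, with permissive
// sharing - the file is never deleted or replaced on disk.
using (var stream = new FileStream(path, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite))
{
    var assemblyDefinition = AssemblyDefinition.ReadAssembly(
        stream, new ReaderParameters { ReadingMode = ReadingMode.Immediate });

    // Do some patching here

    // Rewind and truncate, then write the patched image back in place.
    stream.Position = 0;
    stream.SetLength(0);
    assemblyDefinition.Write(stream);
}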
If you do it using [InitializeOnLoad], the assemblies have obviously already been loaded, so modifying them at that point isn't going to help; you'd need to load the assemblies once to trigger Cecil, then reload again to pick up the modifications, every time you would normally reload only once. You'll want to use UnityEditor.AssemblyReloadEvents.beforeAssemblyReload instead.
beforeAssemblyReload gets called after the new assemblies are recompiled but before they are loaded. So you'd use [InitializeOnLoad] to register a callback ([DidReloadScripts] seems identical in every case I've tried), which should ensure that all newly compiled assemblies get processed going forward. In some cases this might not happen (such as when scripts need to be compiled as you first open the editor, before your event has been registered), so you'll probably also need to run your processing code immediately on initialisation, and force an assembly reload if anything changed, using UnityEditorInternal.InternalEditorUtility.RequestScriptReload() or UnityEditor.AssetDatabase.Refresh().
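Wiring that up could look something like this (a sketch; PatchAssemblies is a hypothetical method that runs Cecil over the freshly compiled assemblies and returns true if it changed anything):

using UnityEditor;

[InitializeOnLoad]
public static class AssemblyPatcher
{
    static AssemblyPatcher()
    {
        // Handles future compiles: fires after recompilation,
        // before the new assemblies are loaded.
        AssemblyReloadEvents.beforeAssemblyReload += () => PatchAssemblies();

        // Handles assemblies compiled before this callback existed
        // (e.g. scripts compiled while the editor was opening):
        // patch now and force a reload so the changes get loaded.
        if (PatchAssemblies())
            UnityEditorInternal.InternalEditorUtility.RequestScriptReload();
    }

    static bool PatchAssemblies()
    {
        // ... run Mono.Cecil over the script assemblies here ...
        return false; // return true if any assembly was modified
    }
}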
The best way I've found to mark an assembly as processed is to inject an attribute definition into it and add an instance of that attribute to the assembly, then check for it by name and skip the assembly if it's present. Without a way to do this, you'd be processing every assembly in the project every time scripts recompile, rather than only the ones that have actually been modified.
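A sketch of what that marking could look like with Cecil (ProcessedByPatcherAttribute is a made-up name; anything unlikely to collide works):

using System;
using System.Linq;
using System.Reflection;
using Mono.Cecil;
using Mono.Cecil.Cil;

static class ProcessedMarker
{
    const string MarkerName = "ProcessedByPatcherAttribute"; // hypothetical

    public static bool IsProcessed(AssemblyDefinition assembly)
    {
        return assembly.CustomAttributes.Any(a => a.AttributeType.Name == MarkerName);
    }

    public static void MarkProcessed(AssemblyDefinition assembly)
    {
        ModuleDefinition module = assembly.MainModule;

        // Define the marker attribute type inside the patched assembly
        // itself, so it adds no external dependency.
        var attrType = new TypeDefinition(
            "", MarkerName,
            Mono.Cecil.TypeAttributes.NotPublic | Mono.Cecil.TypeAttributes.Class,
            module.ImportReference(typeof(Attribute)));

        var baseCtor = module.ImportReference(typeof(Attribute).GetConstructor(
            BindingFlags.Instance | BindingFlags.NonPublic, null, Type.EmptyTypes, null));

        var ctor = new MethodDefinition(
            ".ctor",
            Mono.Cecil.MethodAttributes.Public | Mono.Cecil.MethodAttributes.HideBySig |
            Mono.Cecil.MethodAttributes.SpecialName | Mono.Cecil.MethodAttributes.RTSpecialName,
            module.TypeSystem.Void);

        ILProcessor il = ctor.Body.GetILProcessor();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Call, baseCtor); // call the base Attribute ctor
        il.Emit(OpCodes.Ret);

        attrType.Methods.Add(ctor);
        module.Types.Add(attrType);

        // Stamp the assembly itself with an instance of the marker.
        assembly.CustomAttributes.Add(new CustomAttribute(ctor));
    }
}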
Edit: to process the assemblies of a build, try this:
private static bool _HasProcessed;

[PostProcessScene]
private static void OnPostProcessScene()
{
    if (_HasProcessed || !BuildPipeline.isBuildingPlayer)
        return;

    ProcessAssemblies(@"Library\PlayerDataCache");
    _HasProcessed = true;
}

[PostProcessBuild]
private static void OnPostProcessBuild(BuildTarget target, string pathToBuiltProject)
{
    _HasProcessed = false;
}

Related

Execute a script's Start() after another script's Start() Unity C#

I would like to run the Start() of one script just before the Start() of another script. Is that possible? Can you choose the order in which scripts execute?
I am not totally sure about Start(), but you can configure the Script Execution Order of Awake, OnEnable and Update. Go to Edit / Project Settings and set your preferences as described in the manual section. You might want to verify whether Start is affected too; I believe it is, as it is closely related to Update.
In general I would recommend using this feature carefully. If you run into a situation with too many scripts in this list, it indicates design issues.
If you have one script (A) meant to run after another (B), I guess it means A depends on B. In that case, you should have B call A, passing the needed data:
public class A : MonoBehaviour
{
    public void Init(State state) { }
}

public class B : MonoBehaviour
{
    private State state;

    void Start()
    {
        // B decides the order: it builds its own state first,
        // then explicitly initialises A with it.
        this.state = SetState();
        this.gameObject.GetComponent<A>().Init(this.state);
    }
}
In the long run, this might be the only way to prevent long debugging hours. Relying on the script execution order is fine until the project has a lot of classes and you've been working on it for six months or more; worse, you hand the project over to another coder. Then you have "invisible" dependencies with new bugs you can hardly debug, since they are not visible in the code.
What you may be able to do is write the script you want to run first just as you normally would. Then, if you want another script to run at the end of the first one, you can call one of its functions like this (replacing SecondScriptName with the script you want to run after the first one, and FunctionFromSecondScript with the function from that script):
GetComponent<SecondScriptName>().FunctionFromSecondScript();
You can then call all of the functions in turn, in whatever order you wish.
If I've made a mistake, please forgive me, as this is my first attempt at helping another programmer and I am currently a budding one myself.
Hope this helps :)

Entity Framework SaveChanges "hangs" the program

My code is pretty simple:
Context.AddObject("EntitiesSetName", newObjectName);
Context.SaveChanges();
It worked fine, but only once – the first time. That time, I stopped my program with Shift+F5 after SaveChanges() had been traced. It was a debugging session, so I manually removed the newly created record from the DB and ran the program again in debug mode. But it doesn't work anymore – it "hangs" whenever SaveChanges() is called.
Another strange thing that I see:
If, before AddObject() and SaveChanges() are called, I write something like:
var tempResult = (from mydbRecord in Context.EntitiesSetName
                  where mydbRecord.myKey == 123
                  select mydbRecord.myKey).Count();
// 123 is the key value of the record that should be created
// before the program hangs.
then tempResult will be 1.
So it seems the record was created (when the program hung) and now exists; but when I check the DB manually using other tools, it does not!
What am I doing wrong? Is it some kind of caching issue or something else?
EDIT:
I've found the source of the problem.
It was not an EF problem at all, but a problem with the tool I use to inspect the database manually (Benthic).
My program falls into some kind of deadlock with the tool when I call SaveChanges() while the tool is connected to the same DB.
So the problem is in the synchronization area, imho, and my question can be marked as solved.
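For anyone who hits a similar hang: one way to make it visible instead of waiting forever is to set a command timeout on the context, so a blocked SaveChanges() throws. A minimal sketch (ObjectContext API; the timeout value is arbitrary):

Context.CommandTimeout = 30; // seconds

try
{
    Context.AddObject("EntitiesSetName", newObjectName);
    Context.SaveChanges();
}
catch (System.Data.UpdateException ex)
{
    // A timeout here usually means the INSERT is blocked by a lock,
    // e.g. an uncommitted transaction held open by another tool.
    Console.WriteLine("SaveChanges blocked or failed: " + ex.Message);
}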

Easy clock simulation for testing a project

Consider testing the project you've just implemented. If it uses the system clock in any way, testing it becomes an issue. The first solution that comes to mind is simulation: manually manipulate the system clock to fool all the components of your software into believing the time is ticking the way you want it to. How do you implement such a solution?
My solution is:
Using a virtual environment (e.g. VMware Player), installing a Linux (I leave the distribution to you), and manipulating the virtual system's clock to create the illusion of time passing. The only problem is that the clock keeps ticking while your code is running. I, myself, am looking for a solution where time actually stops, and won't change unless I tell it to.
Constraints:
You can't constrain the list of components used in the project, as they might be anything. For instance, I used MySQL date/time functions, and I want to fool them without amending MySQL's code in any way (that would be too costly, since you might end up recompiling every single component of your project).
Write a small program that changes the system clock when you want it to, and by how much you want. For example, each second, advance the clock an extra 59 seconds.
The small program should either:
- keep track of what it did, so it can undo it, or
- use the Network Time Protocol to get the clock back to its old value (take a reference before, remember the difference, ask again afterwards, apply the difference).
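On Windows, such a program could be a small loop around the Win32 SetSystemTime call. A rough sketch (not hardened; it needs administrator rights, and the 59-second step is just the example above):

using System;
using System.Runtime.InteropServices;
using System.Threading;

class ClockWarper
{
    [StructLayout(LayoutKind.Sequential)]
    struct SYSTEMTIME
    {
        public ushort Year, Month, DayOfWeek, Day,
                      Hour, Minute, Second, Milliseconds;
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetSystemTime(ref SYSTEMTIME st);

    static void Warp(TimeSpan offset)
    {
        DateTime t = DateTime.UtcNow + offset; // SetSystemTime expects UTC
        var st = new SYSTEMTIME
        {
            Year = (ushort)t.Year, Month = (ushort)t.Month, Day = (ushort)t.Day,
            Hour = (ushort)t.Hour, Minute = (ushort)t.Minute,
            Second = (ushort)t.Second, Milliseconds = (ushort)t.Millisecond
        };
        SetSystemTime(ref st);
    }

    static void Main()
    {
        while (true)
        {
            Thread.Sleep(1000);             // each real second...
            Warp(TimeSpan.FromSeconds(59)); // ...jump an extra 59 seconds
        }
    }
}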
From your additional explanation in the comments (maybe you could add them to your question?), my thoughts are:
You may already have solved 1 & 2, but they relate to the problem, if not the question.
1) This is a web application, so you only need to concern yourself with your server's clock. Don't trust any clock that is controlled by the client.
2) You only seem to need elapsed time as opposed to absolute time. Therefore why not keep track of the time at which the server request starts and ends, then add the elapsed server time back on to the remaining 'time-bank' (or whatever the constraint is)?
3) As far as testing goes, you don't need to concern yourself with any actual 'clock' at all. As Gilbert Le Blanc suggests, write a wrapper around your system calls that you can then use to return dummy test data. So if you had a method getTime() which returned the current system time, you could wrap it in another method or overload it with a parameter that returns an arbitrary offset.
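In C#, such a wrapper can be as small as this (the names are hypothetical); the fake version also gives you a clock that stands still until the test advances it:

using System;

// Production code asks IClock for the time; tests substitute FakeClock.
public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now => DateTime.Now;
}

public class FakeClock : IClock
{
    public DateTime Now { get; private set; }

    public FakeClock(DateTime start) { Now = start; }

    // Time stands still until the test advances it explicitly.
    public void Advance(TimeSpan delta) => Now = Now.Add(delta);
}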
Encapsulate your system calls in their own methods, and you can replace the system calls with simulation calls for testing.
Edited to show an example.
I write Java games. Here's a simple Java Font class that puts the font for the game in one place, in case I decide to change the font later.
package xxx.xxx.minesweeper.view;

import java.awt.Font;

public class MinesweeperFont {

    protected static final String FONT_NAME = "Comic Sans MS";

    public static Font getBoldFont(int pointSize) {
        return new Font(FONT_NAME, Font.BOLD, pointSize);
    }
}
Again, using Java, here's a simple method of encapsulating a System call.
public static void printConsole(String text) {
    System.out.println(text);
}
Replace every instance of System.out.println in your code with printConsole, and your system call exists in only one place.
By overriding or modifying the encapsulated methods, you can test them.
Another solution would be to debug and manipulate the values returned by the time functions, setting them to anything you want.

mvc-mini-profiler slows down Entity Framework

I've set up mvc-mini-profiler against my Entity Framework-powered MVC 3 site. Everything is duly configured: profiling starts in Application_Start, ends in Application_End, and so on. The profiling part works just fine.
However, when I swap my data model object generation over to providing profilable versions, performance slows to a crawl. Not every SQL query, but some queries take about 5x as long as the entire page load. (The very first page load after firing up IIS Express takes a bit longer, but this is sustained.)
Negligible time (~2ms tops) is spent querying, executing and "data reading" the SQL, while this line:
var person = dataContext.People.FirstOrDefault(p => p.PersonID == id);
...when wrapped in using(profiler.Step()) is recorded as taking 300-400 ms. I profiled with dotTrace, which confirmed that the time is actually spent in EF as usual (the profilable components do make very brief appearances), only it is taking much longer.
This leads me to believe that the connection or some of its constituent parts are missing sufficient data, making EF perform far worse.
This is what I'm using to make the context object (my edmx model's class is called DataContext):
var conn = ProfiledDbConnection.Get(
    /* returns an SqlConnection */ CreateConnection());
return CreateObjectContext<DataContext>(conn);
I originally used the ObjectContextUtils.CreateObjectContext method provided by mvc-mini-profiler. I dove into it and noticed that it sets a wildcard metadata workspace path string. Since I have the database layer isolated in one project, with several MVC sites as other projects using the code, those paths have changed, and I'd rather be more specific. I also thought this was the cause of the performance issue. So I duplicated the CreateObjectContext functionality into my own project to provide this, as such:
public static T CreateObjectContext<T>(DbConnection connection) where T : System.Data.Objects.ObjectContext
{
    var workspace = new System.Data.Metadata.Edm.MetadataWorkspace(
        GetMetadataPathsString().Split('|'),
        // ^-- returns
        // "res://*/Redacted.csdl|res://*/Redacted.ssdl|res://*/Redacted.msl"
        new Assembly[] { typeof(T).Assembly });

    // The remainder of the method is copied straight from the original,
    // and I carried over a duplicate CtorCache too to make this work.
    var factory = DbProviderServices.GetProviderFactory(connection);
    var itemCollection = workspace.GetItemCollection(System.Data.Metadata.Edm.DataSpace.SSpace);
    itemCollection.GetType().GetField("_providerFactory", // <==== big fat ugly hack
        BindingFlags.NonPublic | BindingFlags.Instance).SetValue(itemCollection, factory);
    var ec = new System.Data.EntityClient.EntityConnection(workspace, connection);
    return CtorCache<T, System.Data.EntityClient.EntityConnection>.Ctor(ec);
}
...but it doesn't seem to make much of a difference. The problem still exists whether I use the above hacked version that's more specific with metadata workspace paths or the mvc-mini-profiler provided version. I just thought I'd mention that I've tried this too.
Having exhausted all this, I'm at my wits' end. Once again: when I provide my data context as usual, no performance is lost; when I provide a "profilable" data context, performance plummets for certain queries (and I don't know what determines which ones). What could mvc-mini-profiler be doing wrong? Am I still feeding it the wrong data?
I think this is the same problem as this person ran into.
I just resolved this issue today.
see: http://code.google.com/p/mvc-mini-profiler/issues/detail?id=43
It happened because some of our fancy hacks were not cached well enough. In particular:
var workspace = new System.Data.Metadata.Edm.MetadataWorkspace(
    new string[] { "res://*/" },
    new Assembly[] { typeof(T).Assembly });
This is a very expensive call, so we need to cache the workspace.
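A sketch of that kind of caching (the real fix is in the mvc-mini-profiler source; the names here are illustrative):

using System;
using System.Collections.Concurrent;
using System.Reflection;

static class WorkspaceCacheExample
{
    // Build one MetadataWorkspace per context type and reuse it,
    // instead of rebuilding it on every request.
    static readonly ConcurrentDictionary<Type, System.Data.Metadata.Edm.MetadataWorkspace> Cache =
        new ConcurrentDictionary<Type, System.Data.Metadata.Edm.MetadataWorkspace>();

    public static System.Data.Metadata.Edm.MetadataWorkspace GetWorkspace<T>()
    {
        return Cache.GetOrAdd(typeof(T), _ =>
            new System.Data.Metadata.Edm.MetadataWorkspace(
                new string[] { "res://*/" },
                new Assembly[] { typeof(T).Assembly }));
    }
}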
Profiling, by definition, will affect the performance of the application being profiled. The profiler needs to insert its own method calls throughout the application, intercept low-level system calls, and record all that data somewhere (meaning writes to disk). All of those tasks eat up precious CPU cycles, memory, and disk access.

Rhino Mocks Calling instead of Recording in NUnit

I am trying to write unit tests for a bit of code involving events. Since I need to raise an event at will, I've decided to rely on Rhino Mocks to do it for me, and then make sure that the results of the event being raised are as expected (when the user clicks a button, values should change in a predictable manner; in this example, the height of the object should decrease).
So, I do a bit of research and realize I need an Event Raiser for the event in question. Then it's as simple as calling eventraiser.Raise(); and we're good.
The code I've written for obtaining the event raiser is as follows (in C#, more or less copied straight off the net):
using (mocks.Record())
{
    MyControl testing = mocks.DynamicMock<MyControl>();
    testing.Controls.Find("MainLabel", false)[0].Click += null;
    LastCall.IgnoreArguments();
    LastCall.Constraints(Rhino.Mocks.Constraints.Is.NotNull());
    Raiser1 = LastCall.GetEventRaiser();
}
I then test it in playback mode:
using (mocks.Playback())
{
    MyControl thingy = new MyControl();
    int temp = thingy.Size.Height;
    Raiser1.Raise();
    Assert.Greater(temp, thingy.Size.Height);
}
The problem is that when I run these tests through NUnit, they fail. An exception is thrown at the line testing.Controls.Find("MainLabel", false)[0].Click += null;, which complains about trying to add null to the event listener. Specifically: "System.NullReferenceException: Object reference not set to an instance of an object".
Now, I was under the impression that any code under the mocks.Record heading wouldn't actually be called; it would instead create expectations for code calls during playback. However, this is the second instance where I've had a problem like this (the first involved classes/cases that were a lot more complicated) where it appears in NUnit that the code is actually being called normally instead of creating expectations. I'm curious if anyone can point out what I am doing wrong, or suggest an alternative way to solve the core issue.
I'm not sure, but you might get that behaviour if you haven't made the event virtual in MyControl. If methods, events, or properties aren't virtual, then I don't think DynamicMock can replace their behaviour with recording and playback versions.
Personally, I like to define interfaces for the classes I'm going to mock out and then mock the interface. That way, I'm sure to avoid this kind of problem.
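For example, a sketch of the interface route (the interface and names are made up; the point is that interface members don't need to be virtual for Rhino Mocks to intercept them):

using System;
using Rhino.Mocks;
using Rhino.Mocks.Interfaces;

// Hypothetical seam: the code under test talks to this instead of the
// concrete control.
public interface IMainLabel
{
    event EventHandler Click;
}

// In the test:
var mocks = new MockRepository();
IMainLabel label = mocks.DynamicMock<IMainLabel>();
IEventRaiser raiser;

using (mocks.Record())
{
    label.Click += null;                // record the subscription...
    LastCall.IgnoreArguments();
    raiser = LastCall.GetEventRaiser(); // ...and capture its raiser
}

using (mocks.Playback())
{
    raiser.Raise(label, EventArgs.Empty); // fire Click on demand
}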