How to assign/opt from multiple delegates for a 'moled' method? (moles)

I am currently examining Moles from the outside while I wait for my VS 2010 license, and I wonder whether Moles allows me to:
assign multiple mole delegates for a method being moled, perhaps at test-fixture setup level?
switch at runtime, within my test case, which of my mole delegates is invoked for the upcoming call(s) to the moled method being isolated?
Any hints?

Best Answer:
It is much easier, and makes far more sense, to include gating logic in the detour method than to use two moles for the same method! For example, say MyMethod reads data from three different files on disk, each requiring different mock data to be returned. We may detour System.IO.File.OpenRead and gate the return value by analyzing the input parameter of OpenRead:
TEST METHOD:
[TestMethod]
[HostType("Moles")]
public void Test()
{
    System.IO.Moles.MFile.OpenReadString = filePath =>
    {
        // Gate the returned stream on the requested path.
        byte[] buffer;
        switch (filePath)
        {
            case @"C:\DataFile.dat":
                buffer = new byte[] { 0x01 }; // mock contents for DataFile.dat
                break;
            case @"C:\TextFile.txt":
                buffer = new byte[] { 0x02 }; // mock contents for TextFile.txt
                break;
            case @"C:\LogFile.log":
                buffer = new byte[] { 0x03 }; // mock contents for LogFile.log
                break;
            default:
                buffer = new byte[0];
                break;
        }
        // FileStream has no parameterless constructor, so back the mock
        // stream with a temp file (any FileStream-producing approach works).
        var tempFile = System.IO.Path.GetTempFileName();
        System.IO.File.WriteAllBytes(tempFile, buffer);
        return new System.IO.FileStream(tempFile, System.IO.FileMode.Open);
    };
    var target = new MyClass();
    target.MyMethod();
}
TARGET TYPE:
using System.IO;

public class MyClass
{
    public void MyMethod()
    {
        var fileAData = File.OpenRead(@"C:\DataFile.dat");
        var fileBData = File.OpenRead(@"C:\TextFile.txt");
        var fileCData = File.OpenRead(@"C:\LogFile.log");
    }
}
Direct Answer to Your Questions:
Yes to #1: instantiate one mole type for each detour, and then set each up with the desired behavior. And yes to #2: act upon one instance of the mole type or the other. This requires adding method input parameters or using class constructor injection.
For example, MyMethod reads three data files from disk, and you need to pass back three different data mocks. MyMethod now requires three parameters, an admittedly intrusive solution. (Note the input parameters are of type FileInfo, because System.IO.File is a static class and cannot be instantiated.) For example:
TEST METHOD:
[TestMethod]
[HostType("Moles")]
public void Test()
{
    // Each detour returns a placeholder stream; substitute your mock data as needed.
    var fileInfoMoleA = new System.IO.Moles.MFileInfo();
    fileInfoMoleA.OpenRead = () => new FileStream(Path.GetTempFileName(), FileMode.Open);
    var fileInfoMoleB = new System.IO.Moles.MFileInfo();
    fileInfoMoleB.OpenRead = () => new FileStream(Path.GetTempFileName(), FileMode.Open);
    var fileInfoMoleC = new System.IO.Moles.MFileInfo();
    fileInfoMoleC.OpenRead = () => new FileStream(Path.GetTempFileName(), FileMode.Open);
    var target = new MyClass();
    target.MyMethod(fileInfoMoleA, fileInfoMoleB, fileInfoMoleC);
}
TARGET TYPE:
using System.IO;

public class MyClass
{
    // Input parameters are of type FileInfo, because System.IO.File
    // is a static class and cannot be instantiated.
    public void MyMethod(FileInfo fileInfoA, FileInfo fileInfoB, FileInfo fileInfoC)
    {
        var fileAData = fileInfoA.OpenRead();
        var fileBData = fileInfoB.OpenRead();
        var fileCData = fileInfoC.OpenRead();
    }
}
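For completeness, the constructor-injection alternative mentioned above might look like this (a sketch, not part of the original answer):
public class MyClass
{
    // Dependencies are injected once at construction time instead of
    // being passed to every call of MyMethod.
    private readonly FileInfo _fileInfoA;
    private readonly FileInfo _fileInfoB;
    private readonly FileInfo _fileInfoC;

    public MyClass(FileInfo fileInfoA, FileInfo fileInfoB, FileInfo fileInfoC)
    {
        _fileInfoA = fileInfoA;
        _fileInfoB = fileInfoB;
        _fileInfoC = fileInfoC;
    }

    public void MyMethod()
    {
        var fileAData = _fileInfoA.OpenRead();
        var fileBData = _fileInfoB.OpenRead();
        var fileCData = _fileInfoC.OpenRead();
    }
}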
UPDATE:
In response to @Chai's comment: it is possible to create common methods, within the test project, that may be referenced as the mole detour delegate. For example, you may wish to write a common method, referenced by any unit test, that sets up a variety of pre-configured scenarios. The following example shows how a parameterized method could be used. Get creative -- they're just method calls!
TARGET TYPES:
namespace PexMoleDemo
{
    public class MyClass
    {
        private MyMath _math;

        public MyClass()
        {
            _math = new MyMath() { left = 1m, right = 2m };
        }

        public decimal GetResults()
        {
            return _math.Divide();
        }
    }

    public class MyOtherClass
    {
        private MyMath _math;

        public MyOtherClass()
        {
            _math = new MyMath() { left = 100m, right = 200m };
        }

        public decimal Divide()
        {
            return _math.Divide();
        }
    }

    public class MyMath
    {
        public decimal left { get; set; }
        public decimal right { get; set; }

        public decimal Divide()
        {
            return left / right;
        }
    }
}
TEST METHODS:
ArrangeScenario() sets up mole detours by switching on the enumeration parameter. This allows the same scenarios to be set up, in a DRY manner, across many tests.
using System;
using Microsoft.Moles.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using PexMoleDemo;

[assembly: MoledAssembly("PexMoleDemo")]

namespace TestProject1
{
    [TestClass()]
    public class ProgramTest
    {
        public enum Scenarios
        {
            DivideByZero,
            MultiplyInsteadOfDivide
        }

        private void ArrangeScenario(Scenarios scenario)
        {
            switch (scenario)
            {
                case Scenarios.DivideByZero:
                    PexMoleDemo.Moles.MMyMath.AllInstances.rightGet =
                        instance => { return 0m; };
                    break;
                case Scenarios.MultiplyInsteadOfDivide:
                    PexMoleDemo.Moles.MMyMath.AllInstances.Divide =
                        instance => { return instance.left * instance.right; };
                    break;
                default:
                    throw new NotImplementedException("Invalid scenario.");
            }
        }

        [TestMethod]
        [HostType("Moles")]
        [ExpectedException(typeof(DivideByZeroException))]
        public void Test1()
        {
            ArrangeScenario(Scenarios.DivideByZero);
            var target = new PexMoleDemo.MyClass();
            var math = new PexMoleDemo.MyMath() { left = 1, right = 2 };
            var left = math.left;
            var right = math.right;
            var actual = target.GetResults();
        }

        [TestMethod]
        [HostType("Moles")]
        public void Test2()
        {
            ArrangeScenario(Scenarios.MultiplyInsteadOfDivide);
            // Perform some sort of test that determines if code breaks
            // when values are multiplied instead of divided.
        }

        [TestMethod]
        [HostType("Moles")]
        [ExpectedException(typeof(DivideByZeroException))]
        public void Test3()
        {
            ArrangeScenario(Scenarios.DivideByZero);
            var target = new PexMoleDemo.MyOtherClass();
            var math = new PexMoleDemo.MyMath() { left = 1, right = 2 };
            var left = math.left;
            var right = math.right;
            var actual = target.Divide();
        }

        [TestMethod]
        [HostType("Moles")]
        public void Test4()
        {
            ArrangeScenario(Scenarios.MultiplyInsteadOfDivide);
            // Perform some sort of test that determines if code breaks
            // when values are multiplied instead of divided.
        }
    }
}

Related

Can Kaitai Struct be used to describe TLV data without creating new types for each field?

I'm reverse engineering a file format that stores each field as TLV blocks (type, length, value).
The fields do not have to be in order, or even be present at all. Their presence is denoted by a sentinel, which is a 16-bit type identifier followed by a 32-bit end offset. There are hundreds of unique identifiers, but a decent chunk of those are just single primitive values. Aside from denoting the type, they can also identify which field the data should be stored in.
It is also worth noting that there will never be a duplicate id on a parent structure. The only time it can occur is when there are multiple of the same object type in an array/list.
I have successfully written a Kaitai definition for one of them:
meta:
  id: struct_02ea
  endian: le
seq:
  - id: unk_00
    type: s4
  - id: fields
    type: field_block
    repeat: eos
types:
  sentinel:
    seq:
      - id: id
        type: u2
      - id: end_offset
        type: u4
  field_block:
    seq:
      - id: sentinel
        type: sentinel
      - id: value
        type:
          switch-on: sentinel.id
          cases:
            0xF0: u1
            0xF1: u1
            0xF2: u1
            0xF3: u1
            0xF4: u4
            0xF5: u4
        size: sentinel.end_offset - _root._io.pos
Handling things this way does work, and I could likely map out the entire format like this. However, when it comes time to compile this definition into a target language, things get nasty.
Since I am wrapping each field in a field_block, the generated code stores these values in that type of object. This is incredibly inefficient when half of the generated field_block objects store a single integer. It would also require the consuming code to iterate through a list of each field block in order to get the actual field's value.
Ideally, I would like to define this structure so that the sentinels are only parsed while Kaitai is reading the data, and each value would be mapped to a field on the parent structure.
Is this possible? This technology is really cool, and I'd love to use it in my project, but I feel like the overhead that this is generating is a lot more trouble than it's worth.
Here's an example of the definition when compiled into C#:
using System.Collections.Generic;

namespace Kaitai
{
    public partial class Struct02ea : KaitaiStruct
    {
        public static Struct02ea FromFile(string fileName)
        {
            return new Struct02ea(new KaitaiStream(fileName));
        }

        public Struct02ea(KaitaiStream p__io, KaitaiStruct p__parent = null, Struct02ea p__root = null) : base(p__io)
        {
            m_parent = p__parent;
            m_root = p__root ?? this;
            _read();
        }

        private void _read()
        {
            _unk00 = m_io.ReadS4le();
            _fields = new List<FieldBlock>();
            {
                var i = 0;
                while (!m_io.IsEof) {
                    _fields.Add(new FieldBlock(m_io, this, m_root));
                    i++;
                }
            }
        }

        public partial class Sentinel : KaitaiStruct
        {
            public static Sentinel FromFile(string fileName)
            {
                return new Sentinel(new KaitaiStream(fileName));
            }

            public Sentinel(KaitaiStream p__io, Struct02ea.FieldBlock p__parent = null, Struct02ea p__root = null) : base(p__io)
            {
                m_parent = p__parent;
                m_root = p__root;
                _read();
            }

            private void _read()
            {
                _id = m_io.ReadU2le();
                _endOffset = m_io.ReadU4le();
            }

            private ushort _id;
            private uint _endOffset;
            private Struct02ea m_root;
            private Struct02ea.FieldBlock m_parent;
            public ushort Id { get { return _id; } }
            public uint EndOffset { get { return _endOffset; } }
            public Struct02ea M_Root { get { return m_root; } }
            public Struct02ea.FieldBlock M_Parent { get { return m_parent; } }
        }

        public partial class FieldBlock : KaitaiStruct
        {
            public static FieldBlock FromFile(string fileName)
            {
                return new FieldBlock(new KaitaiStream(fileName));
            }

            public FieldBlock(KaitaiStream p__io, Struct02ea p__parent = null, Struct02ea p__root = null) : base(p__io)
            {
                m_parent = p__parent;
                m_root = p__root;
                _read();
            }

            private void _read()
            {
                _sentinel = new Sentinel(m_io, this, m_root);
                switch (Sentinel.Id) {
                case 243: {
                    _value = m_io.ReadU1();
                    break;
                }
                case 244: {
                    _value = m_io.ReadU4le();
                    break;
                }
                case 245: {
                    _value = m_io.ReadU4le();
                    break;
                }
                case 241: {
                    _value = m_io.ReadU1();
                    break;
                }
                case 240: {
                    _value = m_io.ReadU1();
                    break;
                }
                case 242: {
                    _value = m_io.ReadU1();
                    break;
                }
                default: {
                    _value = m_io.ReadBytes((Sentinel.EndOffset - M_Root.M_Io.Pos));
                    break;
                }
                }
            }

            private Sentinel _sentinel;
            private object _value;
            private Struct02ea m_root;
            private Struct02ea m_parent;
            public Sentinel Sentinel { get { return _sentinel; } }
            public object Value { get { return _value; } }
            public Struct02ea M_Root { get { return m_root; } }
            public Struct02ea M_Parent { get { return m_parent; } }
        }

        private int _unk00;
        private List<FieldBlock> _fields;
        private Struct02ea m_root;
        private KaitaiStruct m_parent;
        public int Unk00 { get { return _unk00; } }
        public List<FieldBlock> Fields { get { return _fields; } }
        public Struct02ea M_Root { get { return m_root; } }
        public KaitaiStruct M_Parent { get { return m_parent; } }
    }
}
Disclaimer: I'm a Kaitai Struct maintainer (see my GitHub profile).
Since I am wrapping each field in a field_block, the generated code stores these values in that type of object. This is incredibly inefficient when half of the generated field_block objects store a single integer. It would also require the consuming code to iterate through a list of each field block in order to get the actual field's value.
I think that rather than trying to describe the entire format with an ultimate Kaitai Struct specification, it's better not to let the generated code parse all the fields automatically. Move the parsing control into your application code: use the type Struct02ea.FieldBlock, which represents an individual field, and basically replicate the "repeat until end of stream" loop that the generated code you posted was doing:
_fields = new List<FieldBlock>();
{
    var i = 0;
    while (!m_io.IsEof) {
        _fields.Add(new FieldBlock(m_io, this, m_root));
        i++;
    }
}
The advantage of doing so is that you can adjust the loop to fit your needs. To avoid the overhead you describe, you'll probably want to keep the Struct02ea.FieldBlock object in a local variable inside the loop body, pull out only the values you care about (saving them in your compact, consumer-friendly output structures) and let it go out of scope when the loop iteration ends. This allows each original FieldBlock object to be garbage-collected once you've processed it, so its overhead is limited to a single instance at a time and is not multiplied by the number of fields in the file.
The most straightforward and seamless way to prevent the Kaitai Struct-generated code from parsing the fields (but otherwise keep everything the same) is to add if: false in the KSY specification, as @webbnh suggested in a GitHub issue:
seq:
  - id: unk_00
    type: s4
  - id: fields
    type: field_block
    repeat: eos
    if: false # add this
The if: false works better than omitting fields from seq entirely, because kaitai-struct-compiler has occasional trouble with unused types (when compiling a KSY spec with unused types, you may get an error "Unable to derive _parent type in ..." due to a compiler bug). With the if: false trick you can't run into that, because the field_block type is no longer unused.
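Putting both pieces together, the application-side loop might look roughly like this sketch (MyCompactRecord and the field mapping are hypothetical, and it assumes the spec was recompiled with the if: false change):
var io = new KaitaiStream("data.bin");
// With "if: false" on the fields attribute, this reads only unk_00
// and leaves the stream positioned at the first field block.
var root = new Kaitai.Struct02ea(io);
var output = new MyCompactRecord(); // hypothetical compact output type
while (!io.IsEof)
{
    // The wrapper lives only for this iteration, so each FieldBlock can
    // be garbage-collected as soon as its value has been copied out.
    var block = new Kaitai.Struct02ea.FieldBlock(io, root, root);
    switch (block.Sentinel.Id)
    {
        case 0xF0: output.SomeByteField = (byte)block.Value; break; // hypothetical mapping
        case 0xF4: output.SomeUintField = (uint)block.Value; break; // hypothetical mapping
        default: break; // skip fields we don't care about
    }
}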

EF: How to enclose context object in a using statement?

Let's say I have the following classes: Customer.cs, a context OfficeContext.cs, and a repository OfficeRepository.cs. Knowing that the context uses a connection object, it's advised to enclose it in a using statement:
public List<Customer> GetAllCustomersWithOrders()
{
    using(var oContext = new OfficeContext())
    {
        //Code here....
    }
}
My question is: what if I want to re-use some of the code already in the repository? For instance, if I want to display all the customers that ordered products but haven't received them yet, do I need to duplicate the code?
public List<Customer> GetCustomersNotReceiveProducts()
{
    using(var oContext = new OfficeContext())
    {
        //Re-use GetAllCustomersWithOrders() here???...
    }
}
But as you can see, each time I call a method, I also instantiate a new context object. Is there any way to deal with that?
What I do is have my repositories implement IDisposable.
Then I have two constructors: a default one that instantiates a new context and holds it as a class-level variable, and another that takes an existing context and uses that internally.
On dispose of the class, the context is disposed (if the current repository instantiated it).
This moves the context from the method level to the class level. My functions keep everything as IQueryable, so one function can call another and perform additional refinements before the database is hit.
Example:
public class MemberRepository : IDisposable
{
    OfficeContext db;
    bool isExternalDb = false;

    public MemberRepository()
    {
        db = new OfficeContext();
        isExternalDb = false;
    }

    public MemberRepository(OfficeContext db)
    {
        this.db = db;
        isExternalDb = true;
    }

    public IQueryable<Member> GetAllMembers()
    {
        var members = db.Members;
        return members;
    }

    public IQueryable<Member> GetActiveMembers()
    {
        var members = GetAllMembers();
        var activeMembers = members.Where(m => m.isActive == true);
        return activeMembers;
    }

    public void Dispose()
    {
        if (isExternalDb == false)
        {
            db.Dispose();
        }
    }
}
Then where I use the repository, I do a using at that level:
using(var memberRepository = new MemberRepository())
{
    var members = memberRepository.GetActiveMembers();
}
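The context-accepting constructor is what lets several repositories share a single context; for example (OrderRepository and GetAllOrders are hypothetical, built the same way as MemberRepository):
using (var db = new OfficeContext())
using (var memberRepository = new MemberRepository(db))
using (var orderRepository = new OrderRepository(db)) // hypothetical second repository
{
    // Both repositories run their queries on the same context.
    var members = memberRepository.GetActiveMembers();
    var orders = orderRepository.GetAllOrders(); // hypothetical method
}
// Neither repository created the context, so neither disposes it;
// the outer using block does.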

How to pass values across test cases in NUnit 2.6.2?

I have two methods in a unit test case: the first inserts a record into the database and the second retrieves data back. I want the input parameter for the retrieve method to be the id generated in the first method.
private int savedrecordid = 0;
private object[] SavedRecordId { get { return new object[] { new object[] { savedrecordid } }; } }

[Test]
public void InsertInfo()
{
    Info oInfo = new Info();
    oInfo.Desc = "Some Description here !!!";
    savedrecordid = InsertInfoToDb(oInfo);
}

[Test]
[TestCaseSource("SavedRecordId")]
public void GetInfo(int savedId)
{
    Info oInfo = GetInfoFromDb(savedId);
}
I know each test case is executed separately, with a separate instance, so we can't share variables across test methods.
Please let me know if there is a way to share parameters across the test cases.
The situation you describe is one of the unit-testing antipatterns: unit tests should be independent and should not depend on the sequence in which they run. You can find more at the xUnit Patterns web site:
Unit tests should be implemented using a Fresh Fixture
Antipattern: Shared Fixture
Also, both your unit tests have no asserts, so they can't prove whether they pass or not.
They also depend on a database, i.e. an external resource, which makes them not unit tests but integration tests.
So my advice is to rewrite them:
Use a mock object to decouple from the database
InsertInfo should insert the info and verify, using the mock, that an appropriate insert call with the right arguments was performed
GetInfo should operate on a mock that returns a fake record and verify that it works correctly
Example
Notes:
* I had to separate the business logic from the database operations…
* … and make some assumptions about your solution
// Repository encapsulates work with the database
public abstract class Repository<T>
    where T : class
{
    public abstract void Save(T entity);
    public abstract IEnumerable<T> GetAll();
}

// Class under test
public class SomeRule
{
    private readonly Repository<Info> repository;

    public SomeRule(Repository<Info> repository)
    {
        this.repository = repository;
    }

    public int InsertInfoToDb(Info oInfo)
    {
        repository.Save(oInfo);
        return oInfo.Id;
    }

    public Info GetInfoFromDb(int id)
    {
        return repository.GetAll().Single(info => info.Id == id);
    }
}

// Actual unit tests
[Test]
public void SomeRule_InsertInfo_WasInserted() // ex. InsertInfo
{
    // Arrange
    Info oInfo = new Info();
    oInfo.Desc = "Some Description here !!!";
    var repositoryMock = MockRepository.GenerateStrictMock<Repository<Info>>();
    repositoryMock.Expect(m => m.Save(Arg<Info>.Is.NotNull));

    // Act
    var savedrecordid = new SomeRule(repositoryMock).InsertInfoToDb(oInfo);

    // Assert
    repositoryMock.VerifyAllExpectations();
}

[Test]
public void SomeRule_GetInfo_ReceivesCorrectInfo() // ex. GetInfo
{
    // Arrange
    var expectedId = 1;
    var expectedInfo = new Info { Id = expectedId, Desc = "Something" };
    var repositoryMock = MockRepository.GenerateStrictMock<Repository<Info>>();
    repositoryMock.Expect(m => m.GetAll()).Return(new[] { expectedInfo }.AsEnumerable());

    // Act
    Info receivedInfo = new SomeRule(repositoryMock).GetInfoFromDb(expectedId);

    // Assert
    repositoryMock.VerifyAllExpectations();
    Assert.That(receivedInfo, Is.Not.Null.And.SameAs(expectedInfo));
}
PS: a full example is available here.

Autofac wiring question - beginner

Beginner's question:
Given two classes, MyClass5 and MyClass6, how can one wire up the following factory method (returned as a Func)
such that
the MyClass5 and MyClass6 instances, and the IMyClass they depend on, are all retrieved via Autofac (assuming that these three are registered)?
public static MyClass4 FactoryMethod(int nu)
{
    if (nu == 1)
        return new MyClass5(....);
    if (nu == 4)
        return new MyClass6(....);
    throw new NotImplementedException();
}

public abstract class MyClass4
{
}

public class MyClass5 : MyClass4
{
    public MyClass5(int nu, IMyClass a)
    {
    }
}

public class MyClass6 : MyClass4
{
    public MyClass6(int nu, IMyClass a)
    {
    }
}
For FactoryMethod to be able to create the instances, it requires access to the container. I would suggest creating a delegate type for the factory method, which makes it easy to take a dependency on it.
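The delegate type itself isn't shown above; a minimal declaration would be:
// A delegate type the container can build and hand out as a factory.
public delegate MyClass4 FactoryMethod(int nu);
Registration goes like this: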
var cb = new ContainerBuilder();
cb.RegisterType<SomeClass>().As<IMyClass>();
cb.RegisterType<MyClass5>();
cb.RegisterType<MyClass6>();
cb.Register((c, p) =>
{
    var context = c.Resolve<IComponentContext>();
    return new FactoryMethod(nu =>
    {
        var nuParameter = TypedParameter.From(nu);
        switch (nu)
        {
            case 1:
                return context.Resolve<MyClass5>(nuParameter);
            case 4:
                return context.Resolve<MyClass6>(nuParameter);
            default:
                throw new NotImplementedException();
        }
    });
});
var container = cb.Build();
At resolve time, you can then take a dependency on the FactoryMethod delegate type and use it to resolve instances:
var factory = container.Resolve<FactoryMethod>();
var instance5 = factory(1);
var instance6 = factory(4);
Note: the delegate instance we're creating needs a context. We cannot use the c parameter directly, since that context is only temporary. Thus we must resolve an IComponentContext to "bake" into the lambda.
Update: if you would like to extract the factory implementation into a class that does not depend on the container, I would suggest the following:
public class FactoryMethodImpl
{
    readonly Func<int, MyClass5> _factory5;
    readonly Func<int, MyClass6> _factory6;

    public FactoryMethodImpl(Func<int, MyClass5> factory5, Func<int, MyClass6> factory6)
    {
        _factory5 = factory5;
        _factory6 = factory6;
    }

    public MyClass4 Create(int nu)
    {
        switch (nu)
        {
            case 1:
                return _factory5(nu);
            case 4:
                return _factory6(nu);
            default:
                throw new NotImplementedException();
        }
    }
}
Now, change the registration code to this:
var cb = new ContainerBuilder();
cb.RegisterType<SomeClass>().As<IMyClass>();
cb.RegisterType<MyClass5>();
cb.RegisterType<MyClass6>();
cb.RegisterType<FactoryMethodImpl>().SingleInstance();
cb.Register(c => new FactoryMethod(c.Resolve<FactoryMethodImpl>().Create));

Serializing Entity Framework problems

Like several other people, I'm having problems serializing Entity Framework objects so that I can send the data over AJAX in JSON format.
I've got the following server-side method, which I'm attempting to call using AJAX through jQuery:
[WebMethod]
public static IEnumerable<Message> GetAllMessages(int officerId)
{
    SIBSv2Entities db = new SIBSv2Entities();
    return (from m in db.MessageRecipients
            where m.OfficerId == officerId
            select m.Message).AsEnumerable<Message>();
}
Calling this via AJAX results in this error:
A circular reference was detected while serializing an object of type 'System.Data.Metadata.Edm.AssociationType'.
This is because of the way the Entity Framework creates circular references to keep all the objects related and accessible server side.
I came across the following code from http://hellowebapps.com/2010-09-26/producing-json-from-entity-framework-4-0-generated-classes/, which claims to get around this problem by capping the maximum depth for references. I've added the code below, because I had to tweak it slightly to get it to work (all angle brackets are missing from the code on the website).
using System.Web.Script.Serialization;
using System.Collections.Generic;
using System.Collections;
using System.Linq;
using System;

public class EFObjectConverter : JavaScriptConverter
{
    private int _currentDepth = 1;
    private readonly int _maxDepth = 2;
    private readonly List<int> _processedObjects = new List<int>();
    private readonly Type[] _builtInTypes = new[]
    {
        typeof(bool),
        typeof(byte),
        typeof(sbyte),
        typeof(char),
        typeof(decimal),
        typeof(double),
        typeof(float),
        typeof(int),
        typeof(uint),
        typeof(long),
        typeof(ulong),
        typeof(short),
        typeof(ushort),
        typeof(string),
        typeof(DateTime),
        typeof(Guid)
    };

    public EFObjectConverter(int maxDepth = 2,
                             EFObjectConverter parent = null)
    {
        _maxDepth = maxDepth;
        if (parent != null)
        {
            _currentDepth += parent._currentDepth;
        }
    }

    public override object Deserialize(IDictionary<string, object> dictionary, Type type, JavaScriptSerializer serializer)
    {
        return null;
    }

    public override IDictionary<string, object> Serialize(object obj, JavaScriptSerializer serializer)
    {
        _processedObjects.Add(obj.GetHashCode());
        Type type = obj.GetType();
        var properties = from p in type.GetProperties()
                         where p.CanRead &&
                               p.CanWrite &&
                               _builtInTypes.Contains(p.PropertyType)
                         select p;
        var result = properties.ToDictionary(
            property => property.Name,
            property => (Object)(property.GetValue(obj, null)
                            == null
                            ? ""
                            : property.GetValue(obj, null).ToString().Trim())
        );
        if (_maxDepth >= _currentDepth)
        {
            var complexProperties = from p in type.GetProperties()
                                    where p.CanWrite &&
                                          p.CanRead &&
                                          !_builtInTypes.Contains(p.PropertyType) &&
                                          !_processedObjects.Contains(p.GetValue(obj, null)
                                              == null
                                              ? 0
                                              : p.GetValue(obj, null).GetHashCode())
                                    select p;
            foreach (var property in complexProperties)
            {
                var js = new JavaScriptSerializer();
                js.RegisterConverters(new List<JavaScriptConverter> { new EFObjectConverter(_maxDepth - _currentDepth, this) });
                result.Add(property.Name, js.Serialize(property.GetValue(obj, null)));
            }
        }
        return result;
    }

    public override IEnumerable<System.Type> SupportedTypes
    {
        get
        {
            return GetType().Assembly.GetTypes();
        }
    }
}
However, even when using that code in the following way:
var js = new System.Web.Script.Serialization.JavaScriptSerializer();
js.RegisterConverters(new List<System.Web.Script.Serialization.JavaScriptConverter> { new EFObjectConverter(2) });
return js.Serialize(messages);
I'm still seeing the "A circular reference was detected..." exception being thrown!
I solved these issues with the following classes:
public class EFJavaScriptSerializer : JavaScriptSerializer
{
    public EFJavaScriptSerializer()
    {
        RegisterConverters(new List<JavaScriptConverter> { new EFJavaScriptConverter() });
    }
}
and
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Reflection;
using System.Web.Script.Serialization;

public class EFJavaScriptConverter : JavaScriptConverter
{
    private int _currentDepth = 1;
    private readonly int _maxDepth = 1;
    private readonly List<object> _processedObjects = new List<object>();
    private readonly Type[] _builtInTypes = new[]
    {
        typeof(int?),
        typeof(double?),
        typeof(bool?),
        typeof(bool),
        typeof(byte),
        typeof(sbyte),
        typeof(char),
        typeof(decimal),
        typeof(double),
        typeof(float),
        typeof(int),
        typeof(uint),
        typeof(long),
        typeof(ulong),
        typeof(short),
        typeof(ushort),
        typeof(string),
        typeof(DateTime),
        typeof(DateTime?),
        typeof(Guid)
    };

    public EFJavaScriptConverter() : this(1, null) { }

    public EFJavaScriptConverter(int maxDepth = 1, EFJavaScriptConverter parent = null)
    {
        _maxDepth = maxDepth;
        if (parent != null)
        {
            _currentDepth += parent._currentDepth;
        }
    }

    public override object Deserialize(IDictionary<string, object> dictionary, Type type, JavaScriptSerializer serializer)
    {
        return null;
    }

    public override IDictionary<string, object> Serialize(object obj, JavaScriptSerializer serializer)
    {
        _processedObjects.Add(obj.GetHashCode());
        var type = obj.GetType();
        var properties = from p in type.GetProperties()
                         where p.CanRead && p.GetIndexParameters().Count() == 0 &&
                               _builtInTypes.Contains(p.PropertyType)
                         select p;
        var result = properties.ToDictionary(
            p => p.Name,
            p => (Object)TryGetStringValue(p, obj));
        if (_maxDepth >= _currentDepth)
        {
            var complexProperties = from p in type.GetProperties()
                                    where p.CanRead &&
                                          p.GetIndexParameters().Count() == 0 &&
                                          !_builtInTypes.Contains(p.PropertyType) &&
                                          p.Name != "RelationshipManager" &&
                                          !AlreadyAdded(p, obj)
                                    select p;
            foreach (var property in complexProperties)
            {
                var complexValue = TryGetValue(property, obj);
                if (complexValue != null)
                {
                    var js = new EFJavaScriptConverter(_maxDepth - _currentDepth, this);
                    result.Add(property.Name, js.Serialize(complexValue, new EFJavaScriptSerializer()));
                }
            }
        }
        return result;
    }

    private bool AlreadyAdded(PropertyInfo p, object obj)
    {
        var val = TryGetValue(p, obj);
        return _processedObjects.Contains(val == null ? 0 : val.GetHashCode());
    }

    private static object TryGetValue(PropertyInfo p, object obj)
    {
        var parameters = p.GetIndexParameters();
        if (parameters.Length == 0)
        {
            return p.GetValue(obj, null);
        }
        else
        {
            //can't serialize these
            return null;
        }
    }

    private static object TryGetStringValue(PropertyInfo p, object obj)
    {
        if (p.GetIndexParameters().Length == 0)
        {
            var val = p.GetValue(obj, null);
            return val;
        }
        else
        {
            return string.Empty;
        }
    }

    public override IEnumerable<Type> SupportedTypes
    {
        get
        {
            var types = new List<Type>();
            //ef types
            types.AddRange(Assembly.GetAssembly(typeof(DbContext)).GetTypes());
            //model types
            types.AddRange(Assembly.GetAssembly(typeof(BaseViewModel)).GetTypes());
            return types;
        }
    }
}
You can now safely make a call like new EFJavaScriptSerializer().Serialize(obj).
Update: since Telerik v1.3+ you can override the GridActionAttribute.CreateActionResult method, so you can easily integrate this serializer into specific controller methods by applying your custom [GridAction] attribute:
[Grid]
public ActionResult _GetOrders(int id)
{
    return new GridModel(Service.GetOrders(id));
}
and
public class GridAttribute : GridActionAttribute, IActionFilter
{
    /// <summary>
    /// Determines the depth that the serializer will traverse.
    /// </summary>
    public int SerializationDepth { get; set; }

    /// <summary>
    /// Initializes a new instance of the <see cref="GridActionAttribute"/> class.
    /// </summary>
    public GridAttribute()
        : base()
    {
        ActionParameterName = "command";
        SerializationDepth = 1;
    }

    protected override ActionResult CreateActionResult(object model)
    {
        return new EFJsonResult
        {
            Data = model,
            JsonRequestBehavior = JsonRequestBehavior.AllowGet,
            MaxSerializationDepth = SerializationDepth
        };
    }
}
and finally..
public class EFJsonResult : JsonResult
{
    const string JsonRequest_GetNotAllowed = "This request has been blocked because sensitive information could be disclosed to third party web sites when this is used in a GET request. To allow GET requests, set JsonRequestBehavior to AllowGet.";

    public EFJsonResult()
    {
        MaxJsonLength = 1024000000;
        RecursionLimit = 10;
        MaxSerializationDepth = 1;
    }

    public int MaxJsonLength { get; set; }
    public int RecursionLimit { get; set; }
    public int MaxSerializationDepth { get; set; }

    public override void ExecuteResult(ControllerContext context)
    {
        if (context == null)
        {
            throw new ArgumentNullException("context");
        }

        if (JsonRequestBehavior == JsonRequestBehavior.DenyGet &&
            String.Equals(context.HttpContext.Request.HttpMethod, "GET", StringComparison.OrdinalIgnoreCase))
        {
            throw new InvalidOperationException(JsonRequest_GetNotAllowed);
        }

        var response = context.HttpContext.Response;
        if (!String.IsNullOrEmpty(ContentType))
        {
            response.ContentType = ContentType;
        }
        else
        {
            response.ContentType = "application/json";
        }

        if (ContentEncoding != null)
        {
            response.ContentEncoding = ContentEncoding;
        }

        if (Data != null)
        {
            var serializer = new JavaScriptSerializer
            {
                MaxJsonLength = MaxJsonLength,
                RecursionLimit = RecursionLimit
            };
            serializer.RegisterConverters(new List<JavaScriptConverter> { new EFJavaScriptConverter(MaxSerializationDepth) });
            response.Write(serializer.Serialize(Data));
        }
    }
}
You can also detach the object from the context, which removes the navigation properties so that it can be serialized. For my data-repository classes that are used with JSON, I use something like this:
public DataModel.Page GetPage(Guid idPage, bool detach = false)
{
    var results = from p in DataContext.Pages
                  where p.idPage == idPage
                  select p;
    if (results.Count() == 0)
        return null;
    else
    {
        var result = results.First();
        if (detach)
            DataContext.Detach(result);
        return result;
    }
}
By default, the returned object will have all of the complex/navigation properties, but setting detach = true will remove those properties and return the base object only. For a list of objects, the implementation looks like this:
public List<DataModel.Page> GetPageList(Guid idSite, bool detach = false)
{
    var results = from p in DataContext.Pages
                  where p.idSite == idSite
                  select p;
    if (results.Count() > 0)
    {
        if (detach)
        {
            List<DataModel.Page> retValue = new List<DataModel.Page>();
            foreach (var result in results)
            {
                DataContext.Detach(result);
                retValue.Add(result);
            }
            return retValue;
        }
        else
            return results.ToList();
    }
    else
        return new List<DataModel.Page>();
}
I have just successfully tested this code.
It may be that in your case your Message object is in a different assembly? The overridden SupportedTypes property returns types ONLY from its own assembly, so when Serialize is called the JavaScriptSerializer falls back to the standard JavaScriptConverter.
You should be able to verify this by debugging.
Your error occurred due to some "Reference" classes generated by EF for entities with 1:1 relations, which the JavaScriptSerializer failed to serialize.
I used a workaround by adding a new condition:
!p.Name.EndsWith("Reference")
The code to get the complex properties looks like this:
var complexProperties = from p in type.GetProperties()
                        where p.CanWrite &&
                              p.CanRead &&
                              !p.Name.EndsWith("Reference") &&
                              !_builtInTypes.Contains(p.PropertyType) &&
                              !_processedObjects.Contains(p.GetValue(obj, null)
                                  == null
                                  ? 0
                                  : p.GetValue(obj, null).GetHashCode())
                        select p;
Hope this helps you.
I had a similar problem when pushing my view data via AJAX to UI components.
I also found and tried to use that code sample you provided. Some problems I had with that code:
SupportedTypes wasn't grabbing the types I needed, so the converter wasn't being called
If the maximum depth was hit, the serialization would be truncated
It threw away any other converters on the existing serializer by creating its own new JavaScriptSerializer
Here are the fixes I implemented for those issues:
Reusing the same serializer
I simply reused the existing serializer that is passed into Serialize to solve this problem. This broke the depth hack though.
Truncating on already-visited, rather than on depth
Instead of truncating on depth, I created a HashSet<object> of already-seen instances (with a custom IEqualityComparer that checks reference equality). I simply didn't recurse if I found an instance I'd already seen. This is the same detection mechanism built into the JavaScriptSerializer itself, so it worked quite well.
The only problem with this solution is that the serialization output isn't very deterministic. The order of truncation depends strongly on the order in which reflection finds the properties. You could solve this (with a perf hit) by sorting before recursing.
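For reference, a minimal sketch of such an already-seen set, using only BCL types (not the original code):
// Tracks instances by reference identity rather than by Equals/GetHashCode,
// so two distinct-but-equal entities are not confused with one another.
sealed class ReferenceComparer : IEqualityComparer<object>
{
    public new bool Equals(object x, object y) { return ReferenceEquals(x, y); }

    public int GetHashCode(object obj)
    {
        // Identity-based hash that ignores any GetHashCode override.
        return System.Runtime.CompilerServices.RuntimeHelpers.GetHashCode(obj);
    }
}
// Usage inside the converter:
//   var seen = new HashSet<object>(new ReferenceComparer());
//   if (!seen.Add(obj)) { /* already visited: truncate here */ }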
SupportedTypes needed the right types
My JavaScriptConverter couldn't live in the same assembly as my model. If you plan to reuse this converter code, you'll probably run into the same problem.
To solve this I had to pre-traverse the object tree, keeping a HashSet<Type> of already-seen types (to avoid my own infinite recursion), and pass that to the JavaScriptConverter before registering it.
Looking back on my solution, I would now use code-generation templates to create a list of the entity types. This would be much more foolproof (it uses simple iteration) and have much better perf, since it would produce the list at compile time. I'd still pass it to the converter so it could be reused between models.
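Reconstructed as a sketch (the idea, not the original code), that pre-traversal could look like:
// Collects every type reachable from a root type through public properties,
// so SupportedTypes can return the full set the converter must handle.
static void CollectTypes(Type type, HashSet<Type> seen)
{
    if (type == null || !seen.Add(type))
        return; // null, or already visited: stop recursing
    if (type.IsGenericType)
        foreach (var arg in type.GetGenericArguments())
            CollectTypes(arg, seen); // e.g. the T in ICollection<T>
    foreach (var property in type.GetProperties())
        CollectTypes(property.PropertyType, seen);
}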
My final solution
I threw out that code and tried again :)
I simply wrote code to project onto new types ("ViewModel" types; in your case, service contract types) before doing my serialization. The intention of my code was made more explicit, it allowed me to serialize just the data I wanted, and it removed the risk of accidentally slipping in queries (e.g. serializing my whole DB).
My types were fairly simple, and I didn't need most of them for my view. I might look into AutoMapper to do some of this projection in the future.
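For illustration, applying that projection idea to the original GetAllMessages method might look like this (MessageViewModel and its properties are hypothetical):
[WebMethod]
public static List<MessageViewModel> GetAllMessages(int officerId)
{
    using (var db = new SIBSv2Entities())
    {
        // Project straight onto a flat DTO; with no navigation properties
        // there is nothing circular left for the serializer to choke on.
        return (from m in db.MessageRecipients
                where m.OfficerId == officerId
                select new MessageViewModel // hypothetical DTO
                {
                    Id = m.Message.Id,      // assumed Message properties
                    Body = m.Message.Body
                }).ToList();
    }
}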