Equals and GetHashCode (C# 3.0)

What do you think about this Person class? Is it a bad idea or best practice to override Equals and GetHashCode like this?
public class Person {
    public int PersonId { get; set; }
    public string Name { get; set; }

    public override bool Equals(object obj) {
        var person = obj as Person;
        return PersonId == person.PersonId;
    }

    public override int GetHashCode() {
        return PersonId;
    }
}
Usage:
static void Main(string[] args) {
    var list = new List<Person>();
    list.Add(new Person() { PersonId = 1, Name = "Mike" });
    list.Add(new Person() { PersonId = 2, Name = "Michael Sync" });
    list.Add(new Person() { PersonId = 1, Name = "Mike" });

    var list1 = new List<Person>();
    list1.Add(new Person() { PersonId = 1, Name = "Mike" });
    list1.Add(new Person() { PersonId = 3, Name = "Julia" });

    var except = list.Except(list1);
    foreach (var item in except) {
        Console.WriteLine(item.Name);
    }
    Console.ReadKey();
}
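For reference, Except uses the overridden Equals/GetHashCode, so the comparison above is by PersonId only. Assuming the code runs as written, both PersonId = 1 entries are filtered out (one matches list1, the other is a duplicate of it) and the loop prints a single line:
Michael Sync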

A few points:
It's not null safe or "different type" safe. Try this:
new Person().Equals(new Object());
or
new Person().Equals(null);
Bang.
Classes defining equality operations should usually be immutable IMO. Changing the contents of an object after using it as a dictionary key is a Bad Thing, for example.
Consider implementing IEquatable<Person>
A quick reimplementation, which still assumes you want equality based solely on ID.
public sealed class Person : IEquatable<Person> {
    private readonly int personId;
    public int PersonId { get { return personId; } }

    private readonly string name;
    public string Name { get { return name; } }

    public Person(int personId, string name) {
        // Is a null name valid? If not, throw here.
        this.personId = personId;
        this.name = name;
    }

    public override bool Equals(object obj) {
        return Equals(obj as Person);
    }

    public bool Equals(Person other) {
        return other != null && other.personId == personId;
    }

    public override int GetHashCode() {
        return personId;
    }
}

Yes, this is wrong. You should never use a mutable property as part of the calculation for GetHashCode. Doing so opens you up to numerous hard-to-track-down bugs.
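A minimal sketch of the failure mode, using the mutable Person from the question (hypothetical values): once the property that feeds GetHashCode changes, the dictionary looks in the wrong bucket and the entry is effectively lost.
var ages = new Dictionary<Person, int>();
var mike = new Person { PersonId = 1, Name = "Mike" };
ages[mike] = 30;

mike.PersonId = 42; // mutates the value GetHashCode is based on

Console.WriteLine(ages.ContainsKey(mike)); // False - the key now hashes to a different bucket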

One problem I can see is that you'll get lots of collisions for new (unsaved) records (lots of zeros) - unless you do something like assign consecutive negative ids to those... But do you really need to use Person as a key? Personally I don't think I'd bother...

Building GroupBy Expression Tree - IEnumerable parameter not defined error

I want to build an expression for IQueryable GroupBy. At the moment I'm just simplifying the problem to try and get it working, but the eventual final implementation will involve the creation of quite complex expression trees, so I want to build a complete expression that can then be integrated into other expressions.
I specifically want to build an expression of this overload:
public static System.Linq.IQueryable<TResult> GroupBy<TSource,TKey,TResult> (
    this System.Linq.IQueryable<TSource> source,
    System.Linq.Expressions.Expression<Func<TSource,TKey>> keySelector,
    System.Linq.Expressions.Expression<Func<TKey,System.Collections.Generic.IEnumerable<TSource>,TResult>> resultSelector);
... my problem is in the implementation of the resultSelector and the IEnumerable<TSource>.
I have a table of Customers (just dummy data for the purposes of working out this problem). This is stored in an SQL DB and I specifically want to use IQueryable to access the data.
public class Customer
{
    public int Id { get; set; }
    public string? FirstName { get; set; }
    public string? LastName { get; set; }
    public int Age { get; set; }
}
I also have a GroupResult class used to hold the results of the GroupBy (I have different constructors which I've been using in my testing to work out where my problem is occurring)
internal class GroupResult
{
public string? Name { get; set; }
public int NumRecords { get; set; }
public decimal AverageAge { get; set; }
public int TotalAge { get; set; }
public GroupResult() { }
public GroupResult(string name)
{
Name = name;
}
public GroupResult(IEnumerable<Customer> customers)
{
Name = Guid.NewGuid().ToString();
NumRecords = customers.Count();
}
public GroupResult(string name, IEnumerable<Customer> customers)
{
Name = name;
NumRecords = customers.Count();
}
}
The main static class displays a prompt to select the column to group on, creates the relevant expression tree and executes it:
internal static class SimpleGroupByCustomer
{
internal static DataContext db;
internal static void Execute()
{
using (db = new DataContext())
{
//get input
Console.WriteLine();
Console.WriteLine("Simple Customer GroupBy");
Console.WriteLine("=======================");
Console.WriteLine("Simple GroupBy on the Customer Table");
Console.WriteLine();
Console.WriteLine("Select the property that you want to group by.");
Console.WriteLine();
var dbSet = db.Set<Customer>();
var query = dbSet.AsQueryable();
//for this example we're just prompting for a column in the customer table
//GetColumnName is a helper function that lists the available columns and allows
//one to be selected
string colName = Wrapper.GetColumnName("Customer");
MethodInfo? method = typeof(SimpleGroupByCustomer).GetMethod("GetGroupBy",
BindingFlags.Static | BindingFlags.NonPublic);
if (method != null)
{
method = method.MakeGenericMethod(new Type[] { typeof(String), query.ElementType });
method.Invoke(null, new object[] { query, colName });
}
}
}
internal static void GetGroupBy<T, TTable>(IQueryable query, string colName)
{
Type TTmp = typeof(TTable);
var param = Expression.Parameter(TTmp, "c");
var prop = Expression.PropertyOrField(param, colName);
LambdaExpression keySelector = Expression.Lambda<Func<TTable, T>>(prop, param);
var param1 = Expression.Parameter(typeof(T), "Key");
var param2 = Expression.Parameter(typeof(IEnumerable<TTable>), "Customers");
var ci = typeof(GroupResult).GetConstructor(new[] { typeof(T), typeof(IEnumerable<TTable>) });
//var ci = typeof(GroupResult).GetConstructor(new[] { typeof(T) });
//var ci = typeof(GroupResult).GetConstructor(new[] { typeof(IEnumerable<TTable>) });
if (ci == null)
return;
var pExp = new ParameterExpression[] { param1, param2 };
var methodExpression = Expression.Lambda<Func<T, IEnumerable<TTable>, GroupResult>>(
Expression.New(ci, new Expression[] { param1, param2 }), //<--- ERROR HERE
pExp
);
Type[] typeArgs = new Type[] { typeof(TTable), typeof(T), typeof(GroupResult) };
Expression[] methodParams = new Expression[] { query.Expression, keySelector, methodExpression };
var resultExpression = Expression.Call(typeof(Queryable), "GroupBy", typeArgs, methodParams);
IQueryable dbQuery = query.Provider.CreateQuery(resultExpression);
if (dbQuery is IQueryable<GroupResult> results)
{
foreach (var result in results)
{
Console.WriteLine("{0,-15}\t{1}", result.Name, result.NumRecords.ToString());
}
}
}
}
When I run this and try to iterate through the results I get the following exception:
System.InvalidOperationException: 'variable 'Customers' of type 'System.Collections.Generic.IEnumerable`1[ExpressionTrees3.Data.Customer]' referenced from scope '', but it is not defined'
which is being caused by the param2 ParameterExpression marked above.
If I use the GroupResult constructor that just takes the key value
var ci = typeof(GroupResult).GetConstructor(new[] { typeof(T) });
and omit param2 from the lambda body, the code works as expected and I get a collection of GroupResult records containing the distinct key values in the Name field (but obviously no summary value).
I've tried everything I can think of and just can't get past this error - it's as though the GroupBy is not actually producing the IEnumerable grouping of Customers for each key.
I suspect I'm missing something really obvious here, but just can't see it. Any help would be very much appreciated.
Please note that I am after answers to this specific issue; I'm not looking for alternative ways of doing a GroupBy (unless there's a fundamental reason why this shouldn't work) - this will be rolled into a much larger solution for building queries and I want to use the same process throughout.
Thanks Svyatoslav - as I thought, it was me being especially dumb!
Your comments, as well as a discussion with a friend who has a lot of SQL knowledge, pointed me in the right direction.
I had been thinking that the GroupBy expression was going to return an Enumerable for each key value and was trying to pass that into a function ... it always felt wrong, but I just ignored that and kept going.
It's obvious now that I need to tell the GroupBy what to calculate and return (i.e. your comment about aggregation).
So for this easy example, the solution is very simple:
var pExp = new ParameterExpression[] { param1, param2 };

var countTypes = new Type[] { typeof(TTable) };
var countParams = new Expression[] { param2 };
var countExp = Expression.Call(typeof(Enumerable), "Count", countTypes, countParams);

var methodExpression = Expression.Lambda<Func<T, IEnumerable<TTable>, GroupResult>>(
    Expression.New(ci, new Expression[] { param1, countExp }),
    pExp
);
Just by adding the 'Count' expression to the result selector passed to GroupBy, it works!
... and adding a new constructor for GroupResult:
public GroupResult(string name, int count)
{
    Name = name;
    NumRecords = count;
}
(yep, I feel a bit stupid!)
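For comparison, the expression tree built above corresponds roughly to this hand-written query (a sketch that assumes grouping on LastName and the new (string, int) constructor):
var results = db.Set<Customer>()
    .GroupBy(c => c.LastName,
             (key, customers) => new GroupResult(key, customers.Count()));

foreach (var result in results)
{
    Console.WriteLine("{0,-15}\t{1}", result.Name, result.NumRecords);
}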

How to add namespace prefix for IXmlSerializable type

I have a following class definition
[XmlRoot(ElementName = "person",Namespace = "MyNamespace")]
public class Person : IXmlSerializable
{
public string FirstName { get; set; }
[XmlNamespaceDeclarations]
public XmlSerializerNamespaces Namespaces
{
get
{
var xmlSerializerNamespaces = new XmlSerializerNamespaces();
xmlSerializerNamespaces.Add("My", "MyNamespace");
return xmlSerializerNamespaces;
}
}
public string LastName { get; set; }
public XmlSchema GetSchema()
{
return null;
}
/// <exception cref="NotSupportedException"/>
public void ReadXml(XmlReader reader)
{
throw new NotSupportedException();
}
public void WriteXml(XmlWriter writer)
{
writer.WriteElementString("firstName",FirstName);
writer.WriteElementString("lastName", LastName);
}
}
and I want to serialize it with the My: prefix for MyNamespace, so that when I call this code
var xmlSerializer = new XmlSerializer(typeof(Person));
var person = new Person
{ FirstName = "John",LastName = "Doe"};
xmlSerializer.Serialize(Console.Out, person, person.Namespaces);
I expect the following output:
<?xml version="1.0" encoding="ibm850"?>
<My:person xmlns:My="MyNamespace">
<My:firstName>John</My:firstName>
<My:lastName>Doe</My:lastName>
</My:person>
But instead I am getting the following output:
<?xml version="1.0" encoding="ibm850"?>
<person xmlns="MyNamespace">
<firstName>John</firstName>
<lastName>Doe</lastName>
</person>
I know that writing prefixes works when I use the SerializableAttribute and do not implement IXmlSerializable, but my class in the project is much more complex and I can't use the default XmlSerializer.
The XmlSerializer type doesn't offer anything out-of-the-box to handle this.
If you really need to use XmlSerializer, you are going to end up with a custom XmlSerializer implementation, even though the class isn't really designed to be extended. For this reason the implementation below is more a proof of concept, just to give you an idea or a starting point.
For brevity I have omitted any error handling and focused only on the Person class in your question. There's still some work to do to handle any nested complex properties.
As the Serialize methods are not virtual we'll have to shadow them. The main idea is to direct all overloads to a single one having the custom implementation.
Because of the customization, we'll have to be more explicit in the Person class when writing the xml elements for its properties by specifying the xml namespace to be applied.
The code below
PrefixedXmlSerializer xmlSerializer = new PrefixedXmlSerializer(typeof(Person));
Person person = new Person {
FirstName = "John",
LastName = "Doe"
};
xmlSerializer.Serialize(Console.Out, person, person.Namespaces);
results in
<My:person xmlns:My="MyNamespace">
<My:firstName>John</My:firstName>
<My:lastName>Doe</My:lastName>
</My:person>
It's up to you to consider whether this all is acceptable.
In the end, <My:person xmlns:My="MyNamespace"> is equivalent to <person xmlns="MyNamespace">.
Person
[XmlRoot(ElementName = "person", Namespace = NAMESPACE)]
public class Person : IXmlSerializable
{
private const string NAMESPACE = "MyNamespace";
public string FirstName { get; set; }
[XmlNamespaceDeclarations]
public XmlSerializerNamespaces Namespaces
{
get
{
var xmlSerializerNamespaces = new XmlSerializerNamespaces();
xmlSerializerNamespaces.Add("My", NAMESPACE);
return xmlSerializerNamespaces;
}
}
public string LastName { get; set; }
public XmlSchema GetSchema()
{
return null;
}
/// <exception cref="NotSupportedException"/>
public void ReadXml(XmlReader reader)
{
throw new NotSupportedException();
}
public void WriteXml(XmlWriter writer)
{
// Specify the xml namespace.
writer.WriteElementString("firstName", NAMESPACE, FirstName);
writer.WriteElementString("lastName", NAMESPACE, LastName);
}
}
PrefixedXmlSerializer
public class PrefixedXmlSerializer : XmlSerializer
{
XmlRootAttribute _xmlRootAttribute;
public PrefixedXmlSerializer(Type type) : base(type)
{
this._xmlRootAttribute = type.GetCustomAttribute<XmlRootAttribute>();
}
public new void Serialize(TextWriter textWriter, Object o, XmlSerializerNamespaces namespaces)
{
// Out-of-the-box implementation.
XmlTextWriter xmlTextWriter = new XmlTextWriter(textWriter);
xmlTextWriter.Formatting = Formatting.Indented;
xmlTextWriter.Indentation = 2;
// Call the shadowed version.
this.Serialize(xmlTextWriter, o, namespaces, null, null);
}
public new void Serialize(XmlWriter xmlWriter, Object o, XmlSerializerNamespaces namespaces, String encodingStyle, String id)
{
// Lookup the xml namespace and prefix to apply.
XmlQualifiedName[] xmlNamespaces = namespaces.ToArray();
XmlQualifiedName xmlRootNamespace =
xmlNamespaces
.Where(ns => ns.Namespace == this._xmlRootAttribute.Namespace)
.FirstOrDefault();
// Write the prefixed root element with its xml namespace declaration.
xmlWriter.WriteStartElement(xmlRootNamespace.Name, this._xmlRootAttribute.ElementName, xmlRootNamespace.Namespace);
// Write the xml namespaces; duplicates will be taken care of automatically.
foreach (XmlQualifiedName xmlNamespace in xmlNamespaces)
{
xmlWriter.WriteAttributeString("xmlns", xmlNamespace.Name , null, xmlNamespace.Namespace);
}
// Write the actual object xml.
((IXmlSerializable)o).WriteXml(xmlWriter);
xmlWriter.WriteEndElement();
}
}
Try the following.
public void WriteXml(XmlWriter writer)
{
    writer.WriteAttributeString("xmlns", "my", null, "MyNamespace");
    writer.WriteElementString("firstName", FirstName);
    writer.WriteElementString("lastName", LastName);
}
Do you need to use XmlSerializer? If not, try the following code:
Person.cs
Add a new method:
public void Serialize(XmlWriter writer)
{
    writer.WriteStartDocument();
    writer.WriteStartElement("My", "Person", "MyNamespace");
    writer.WriteElementString("My", "FirstName", "MyNamespace", FirstName);
    writer.WriteElementString("My", "LastName", "MyNamespace", LastName);
    writer.WriteEndElement();
    writer.WriteEndDocument();
}
Usage
var person = new Person { FirstName = "John", LastName = "Doe" };
person.Serialize(new XmlTextWriter(Console.Out));
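Assuming the element names used in Serialize above, the output should look roughly like this (the writer also emits an XML declaration; shown indented here for readability):
<My:Person xmlns:My="MyNamespace">
  <My:FirstName>John</My:FirstName>
  <My:LastName>Doe</My:LastName>
</My:Person>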

Handling Related Data when using Entity Framework Code First

I have two classes: LicenseType and EntityType.
[Table("LicenseType")]
public class LicenseType : ComplianceBase, INotifyPropertyChanged
{
private List<Certification> _certifications = new List<Certification>();
private List<EntityType> _entityTypes = new List<EntityType>();
public List<EntityType> EntityTypes
{
get { return _entityTypes; }
set { _entityTypes = value; }
}
public List<Certification> Certifications
{
get { return _certifications; }
set { _certifications = value; }
}
}
and
[Table("EntityType")]
public class EntityType : ComplianceBase, INotifyPropertyChanged
{
private List<LicenseType> _licenseTypes = new List<LicenseType>();
public List<LicenseType> LicenseTypes
{
get { return _licenseTypes; }
set
{
_licenseTypes = value;
// OnPropertyChanged();
}
}
}
They both derive from ComplianceBase:
public class ComplianceBase
{
private int _id;
private string _name;
private string _description;
public string Description
{
get { return _description; }
set
{
if (_description == value) return;
_description = value;
OnPropertyChanged();
}
}
public event PropertyChangedEventHandler PropertyChanged;
public int Id
{
get { return _id; }
set
{
if (value == _id) return;
_id = value;
OnPropertyChanged();
}
}
public string Name
{
get { return _name; }
set
{
if (value == _name) return;
_name = value;
OnPropertyChanged();
}
}
protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
{
PropertyChangedEventHandler handler = PropertyChanged;
if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
}
}
What I want to be able to do is associate an EntityType with one or more LicenseTypes. So, for instance, an EntityType "Primary Lender" could be associated with, say, two LicenseTypes, "Lender License" and "Mortgage License". In this situation, I want one record in the EntityType table, "Primary Lender", and two records in my LicenseType table: "Lender License" and "Mortgage License".
Related LicenseTypes are added to my EntityType by calling:
_currentEntity.LicenseTypes.Add(licenseType);
and then calling _context.SaveChanges();
There is an additional table, "EntityTypeLicenseTypes" that serves as the lookup table to relate these two tables. There are two records to join the EntityType with the two related LicenseTypes.
And this works. However, my code also duplicates the LicenseType records, inserting them into the LicenseType table again for the records being associated.
How can I stop this from happening?
In order to avoid the duplication you must attach the licenseType to the context:
_context.LicenseTypes.Attach(licenseType);
_currentEntity.LicenseTypes.Add(licenseType);
_context.SaveChanges();
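For context, a sketch of how that fits into a save method (hypothetical method name): Attach tells the change tracker the LicenseType already exists, so SaveChanges only inserts the join-table row instead of a new LicenseType row.
public void AddExistingLicenseType(EntityType entityType, LicenseType licenseType)
{
    // Mark the LicenseType as existing (state Unchanged) rather than Added.
    _context.LicenseTypes.Attach(licenseType);

    entityType.LicenseTypes.Add(licenseType);
    _context.SaveChanges();
}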

How do I patch enumerables with System.Web.Http.OData.Delta?

I'm trying to make use of System.Web.Http.OData.Delta to implement PATCH methods in ASP.NET Web API services, but it seems unable to apply changes to properties of type IEnumerable<T>. I'm using the latest Git revision of Delta (2012.2-rc-76-g8a73abe). Has anyone been able to make this work?
Consider this data type, which it should be possible to update in a PATCH request to the Web API service:
public class Person
{
HashSet<int> _friends = new HashSet<int>();
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public IEnumerable<int> Friends
{
get { return _friends; }
set
{
_friends = value != null ? new HashSet<int>(value) : new HashSet<int>();
}
}
public Person(int id, string firstName, string lastName)
{
Id = id;
FirstName = firstName;
LastName = lastName;
}
public Person()
{
}
}
This Web API method implements patching of a Person through Delta<Person>:
public void Patch(int id, Delta<Person> delta)
{
    var person = _persons.Single(p => p.Id == id);
    delta.Patch(person);
}
If I send a PATCH request with the following JSON to the service, the person's Friends property should be updated, but alas it doesn't happen:
{"Friends": [1]}
The crux of the matter is really how to make Delta update Friends with this data. See also the discussion at CodePlex.
The problem is likely that Delta will try to assign JSON.NET's JArray to your HashSet<int>.
If you are using it against JsonMediaTypeFormatter and you have internalized the Delta code (meaning you can modify it), you'd have to do something like this (this is rough, but works):
Inside bool TrySetPropertyValue(string name, object value) of Delta<T>, where it returns false:
if (value != null && !cacheHit.Property.PropertyType.IsPrimitive && !isGuid && !cacheHit.Property.PropertyType.IsAssignableFrom(value.GetType()))
{
return false;
}
Change to:
var propertyType = cacheHit.Property.PropertyType;
if (value != null && !propertyType.IsPrimitive && !propertyType.IsAssignableFrom(value.GetType()))
{
    var array = value as JArray;
    if (array == null)
        return false;

    var underlyingType = propertyType.GetGenericArguments().FirstOrDefault() ??
                         propertyType.GetElementType();

    if (underlyingType == typeof(string))
    {
        var a = array.ToObject<IEnumerable<string>>();
        value = Activator.CreateInstance(propertyType, a);
    }
    else if (underlyingType == typeof(int))
    {
        var a = array.ToObject<IEnumerable<int>>();
        value = Activator.CreateInstance(propertyType, a);
    }
    else
    {
        return false;
    }
}
This will only work with collections of int or string, but hopefully it nudges you in a good direction.
For example, now your model can have:
public class Team {
    public HashSet<string> PlayerIds { get; set; }
    public List<int> CoachIds { get; set; }
}
And you'd be able to successfully update them.
You could override the TrySetPropertyValue method of the Delta class and make use of the JArray class:
public sealed class DeltaWithCollectionsSupport<T> : Delta<T> where T : class
{
    public override bool TrySetPropertyValue(string name, object value)
    {
        var propertyInfo = typeof(T).GetProperty(name);

        return propertyInfo != null && value is JArray array
            ? base.TrySetPropertyValue(name, array.ToObject(propertyInfo.PropertyType))
            : base.TrySetPropertyValue(name, value);
    }
}
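A possible usage sketch, assuming the formatter can bind the derived type in place of Delta<Person>:
public void Patch(int id, DeltaWithCollectionsSupport<Person> delta)
{
    var person = _persons.Single(p => p.Id == id);
    delta.Patch(person); // {"Friends": [1]} is converted from a JArray to the declared property type before being set
}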
If you are using the ODataMediaTypeFormatter, this should work. There are a couple of caveats to mention, though.
1) Your collections have to be settable.
2) The entire collection is replaced; you cannot add or remove individual elements.
Also, there is an issue tracking item 1: '670 - Delta should support non-settable collections.'

How to decorate a class item to be an index and get the same as using ensureIndex?

I'd like to define in the class declaration which properties are indexes, something like:
public class MyClass {
    public int SomeNum { get; set; }

    [THISISANINDEX]
    public string SomeProperty { get; set; }
}
so as to have the same effect as ensureIndex("SomeProperty").
Is this possible?
I think this is a nice idea, but you have to do this yourself; there's no built-in support for it. If you have an access layer you can do it in there. You'd need an attribute class, something like this:
public enum IndexConstraints
{
    Normal     = 0x00000001, // Ascending, non-indexed
    Descending = 0x00000010,
    Unique     = 0x00000100,
    Sparse     = 0x00001000, // allows nulls in the indexed fields
}

// Applied to a member
[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field)]
public class EnsureIndexAttribute : EnsureIndexesAttribute
{
    public EnsureIndexAttribute(IndexConstraints ic = IndexConstraints.Normal) : base(ic) { }
}

// Applied to a class
[AttributeUsage(AttributeTargets.Class)]
public class EnsureIndexesAttribute : Attribute
{
    public bool Descending { get; private set; }
    public bool Unique { get; private set; }
    public bool Sparse { get; private set; }
    public string[] Keys { get; private set; }

    public EnsureIndexesAttribute(params string[] keys) : this(IndexConstraints.Normal, keys) { }

    public EnsureIndexesAttribute(IndexConstraints ic, params string[] keys)
    {
        this.Descending = ((ic & IndexConstraints.Descending) != 0);
        this.Unique = ((ic & IndexConstraints.Unique) != 0);
        this.Sparse = ((ic & IndexConstraints.Sparse) != 0);
        this.Keys = keys;
    }
}//class EnsureIndexesAttribute
You could then apply attributes at either the class or member level as follows. I found that adding at the member level was less likely to get out of sync with the schema compared to adding at the class level. You need to make sure, of course, that you get the actual element name as opposed to the C# member name:
[CollectionName("People")]
//[EnsureIndexes("k")]// doing it here would allow for multi-key configs
public class Person
{
[BsonElement("k")] // name mapping in the DB schema
[BsonIgnoreIfNull]
[EnsureIndex(IndexConstraints.Unique|IndexConstraints.Sparse)] // name is implicit here
public string userId{ get; protected set; }
// other properties go here
}
and then in your DB access implementation (or repository), you need something like this:
private void AssureIndexesNotInlinable()
{
// We can only index a collection if there's at least one element, otherwise it does nothing
if (this.collection.Count() > 0)
{
// Check for EnsureIndex Attribute
var theClass = typeof(T);
// Walk the members of the class to see if there are any directly attached index directives
foreach (var m in theClass.GetProperties(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.FlattenHierarchy))
{
List<string> elementNameOverride = new List<string>(1);
EnsureIndexAttribute indexAttr = null;
// For each members attribs
foreach (Attribute attr in m.GetCustomAttributes())
{
if (attr.GetType() == typeof(EnsureIndexAttribute))
indexAttr = (EnsureIndexAttribute)attr;
if (attr.GetType() == typeof(RepoElementAttribute))
elementNameOverride.Add(((RepoElementAttribute)attr).ElementName);
if ((indexAttr != null) && (elementNameOverride.Count != 0))
break;
}
// Index
if (indexAttr != null)
{
if (elementNameOverride.Count() > 0)
EnsureIndexesAsDeclared(indexAttr, elementNameOverride);
else
EnsureIndexesAsDeclared(indexAttr);
}
}
// Walk the atributes on the class itself. WARNING: We don't validate the member names here, we just create the indexes
// so if you create a unique index and don't have a field to match you'll get an exception as you try to add the second
// item with a null value on that key
foreach (Attribute attr in theClass.GetCustomAttributes(true))
{
if (attr.GetType() == typeof(EnsureIndexesAttribute))
EnsureIndexesAsDeclared((EnsureIndexesAttribute)attr);
}//foreach
}//if this.collection.count
}//AssureIndexesNotInlinable()
EnsureIndexesAsDeclared then looks like this:
private void EnsureIndexesAsDeclared(EnsureIndexesAttribute attr, List<string> indexFields = null)
{
var eia = attr as EnsureIndexesAttribute;
if (indexFields == null)
indexFields = eia.Keys.ToList();
// use driver specific methods to actually create this index on the collection
var db = GetRepositoryManager(); // if you have a repository or some other method of your own
db.EnsureIndexes(indexFields, attr.Descending, attr.Unique, attr.Sparse);
}//EnsureIndexes()
Note that you'll place this after each and every update, because if you forget it somewhere your indexes may not get created. It's therefore important to optimise the call so that it returns quickly when there's no indexing to do, before going through all that reflection code. Ideally you'd do this just once, or at the very least once per application startup. One way is to use a static flag to track whether you've already done so; you'd need additional lock protection around that, but over-simplistically it looks something like this:
void AssureIndexes()
{
    if (_requiresIndexing)
        AssureIndexesNotInlinable();
}
So that's the method you'll want in each and every DB update you make, which, if you're lucky, will get inlined by the JIT optimizer as well.
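As a sketch of the lock protection mentioned above (hypothetical field names; double-checked so the reflection walk runs at most once per process):
private static volatile bool _requiresIndexing = true;
private static readonly object _indexingLock = new object();

void AssureIndexes()
{
    if (!_requiresIndexing)
        return; // fast path once the indexes have been created

    lock (_indexingLock)
    {
        if (_requiresIndexing)
        {
            AssureIndexesNotInlinable(); // the reflection-based walk shown earlier
            _requiresIndexing = false;
        }
    }
}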
See below for a naive implementation, which could do with some brains to take the indexing advice in the MongoDB documentation into consideration. Creating indexes based on the queries used within the application, instead of adding custom attributes to properties, might be another option.
using System;
using System.Reflection;
using MongoDB.Bson.Serialization.Attributes;
using MongoDB.Driver;
using NUnit.Framework;
using SharpTestsEx;
namespace Mongeek
{
[TestFixture]
class TestDecorateToEnsureIndex
{
[Test]
public void ShouldIndexPropertyWithEnsureIndexAttribute()
{
var server = MongoServer.Create("mongodb://localhost");
var db = server.GetDatabase("IndexTest");
var boatCollection = db.GetCollection<Boat>("Boats");
boatCollection.DropAllIndexes();
var indexer = new Indexer();
indexer.EnsureThat(boatCollection).HasIndexesNeededBy<Boat>();
boatCollection.IndexExists(new[] { "Name" }).Should().Be.True();
}
}
internal class Indexer
{
private MongoCollection _mongoCollection;
public Indexer EnsureThat(MongoCollection mongoCollection)
{
_mongoCollection = mongoCollection;
return this;
}
public Indexer HasIndexesNeededBy<T>()
{
Type t = typeof (T);
foreach(PropertyInfo prop in t.GetProperties() )
{
if (Attribute.IsDefined(prop, typeof (EnsureIndexAttribute)))
{
_mongoCollection.EnsureIndex(new[] {prop.Name});
}
}
return this;
}
}
internal class Boat
{
public Boat(Guid id)
{
Id = id;
}
[BsonId]
public Guid Id { get; private set; }
public int Length { get; set; }
[EnsureIndex]
public string Name { get; set; }
}
internal class EnsureIndexAttribute : Attribute
{
}
}