MongoDB C#: convert a normal property to a versioned property automatically

I have this implementation of a versioned property:
public class VersionedProperty<T> : Dictionary<int, T>, IParentEntityTracker //where T:IComparable<T>
{
    [BsonIgnore]
    public IVersionableEntity ParentEntity { get; set; }

    public VersionedProperty()
    {
    }

    public VersionedProperty(IVersionableEntity parentEntity)
    {
        ParentEntity = parentEntity;
    }

    public T Value
    {
        set
        {
            var curVal = Value;
            if (EqualityComparer<T>.Default.Equals(curVal, value))
                return;
            ParentEntity.AtLeastOneVersionedPropertyModified = true;
            this[ParentEntity.Version + 1] = value;
        }
        get
        {
            var key = ParentEntity.Version + (ParentEntity.AtLeastOneVersionedPropertyModified ? 1 : 0);
            var keys = Keys.Where(k => k <= key).ToList();
            if (!keys.Any())
                return default(T);
            var max = keys.Max();
            T res;
            TryGetValue(max, out res);
            return res;
        }
    }
}
Initially I created a property in the document as not versioned. Let's say
public class Product
{
public Decimal Price{get;set;}
}
After some time I realized that I had made a mistake and should have used a versioned property:
public class Product
{
public VersionedProperty<Decimal> Price{get;set;}
}
What I would like is for the old value in an existing document to be automatically deserialized into this versioned property, so I can avoid writing update queries on the collection.
Is it possible to somehow interfere with the deserialization process?
Maybe this helps: I use PostSharp to create an empty instance of the versioned property for auto-properties. My PostSharp aspect also assigns a reference to the parent entity to the ParentEntity property of that newly created empty instance.
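One direction that might work is registering a custom serializer for VersionedProperty<T> with the C# driver and branching on the BSON type that is actually stored. The sketch below is only an assumption of how that could look (the class name, the "version 0" choice and the scalar-type heuristic are mine, and it is untested); the PostSharp aspect would still have to assign ParentEntity afterwards.
// Hypothetical sketch: a serializer that accepts both the legacy plain value and the
// new versioned dictionary. It could be registered with, e.g.
// BsonSerializer.RegisterSerializer(typeof(VersionedProperty<decimal>), new VersionedPropertySerializer<decimal>());
public class VersionedPropertySerializer<T> : SerializerBase<VersionedProperty<T>>
{
    public override VersionedProperty<T> Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)
    {
        var result = new VersionedProperty<T>();
        var bsonType = context.Reader.GetCurrentBsonType();
        if (bsonType == BsonType.Document || bsonType == BsonType.Array)
        {
            // New format: the version -> value dictionary (whatever representation the driver already uses).
            var dict = BsonSerializer.LookupSerializer<Dictionary<int, T>>().Deserialize(context);
            foreach (var pair in dict)
                result[pair.Key] = pair.Value;
        }
        else
        {
            // Old format: a single plain value (assumes T is a scalar such as decimal); store it as version 0.
            result[0] = BsonSerializer.LookupSerializer<T>().Deserialize(context);
        }
        return result;
    }

    public override void Serialize(BsonSerializationContext context, BsonSerializationArgs args, VersionedProperty<T> value)
    {
        // Always write the new dictionary format.
        BsonSerializer.LookupSerializer<Dictionary<int, T>>().Serialize(context, value);
    }
}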

Related

EF Core Updating Entity Property with JSON Type

I have a one-to-many relationship between Parent and Child to store some data. I want to store this data in PostgreSQL using the Npgsql.EntityFrameworkCore.PostgreSQL package. Parent maps to a table, but Child is stored as a json column of the Parent table in the database.
public class Parent
{
public int Id { get; set; }
[Column(TypeName = "json")]
public ICollection<Child> Children { get; set; }
}
public class Child
{
public int Id { get; set; }
public string Name { get; set; }
}
When I try to add a Child entity to an existing Parent instance with the code below, SaveChangesAsync doesn't produce an update command on the database.
var child = new Child(){Id = 0, Name = "Name"};
var parent = await DataContext.Parent.SingleOrDefaultAsync(f => f.Id == 1);
parent.Children.Add(child);
await DataContext.SaveChangesAsync();
In order to trigger an update command, I have to set the parent entity's State to Modified before calling SaveChangesAsync.
var entry = DataContext.Entry<Parent>(parent);
entry.State = EntityState.Modified;
Is this the expected behavior or am I missing something?
Update:
As @SvyatoslavDanyliv suggested, instead of ICollection I use a class EqualityCollection derived from List<T> and override Object.Equals as follows:
public class EqualityCollection<T> : List<T>
{
public override bool Equals(object? obj)
{
if (obj != null && obj.GetType() == typeof(EqualityCollection<T>))
return this.Equals(obj as EqualityCollection<T>);
return false;
}
public bool Equals(EqualityCollection<T> obj)
{
return this.SequenceEqual(obj ?? throw new InvalidOperationException());
}
public override int GetHashCode()
{
return base.GetHashCode();
}
}
Change detection still doesn't detect the change in the property, and an update command is not triggered.
You have to define a ValueComparer when defining the conversion via HasConversion, as described in the documentation: Value Comparers.
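For illustration only, a minimal sketch of what that could look like in OnModelCreating, assuming EF Core 5+ and a System.Text.Json based conversion (the converter lambdas are assumptions; the relevant part is the ValueComparer passed as the third argument):
modelBuilder.Entity<Parent>()
    .Property(p => p.Children)
    .HasConversion(
        v => JsonSerializer.Serialize(v, (JsonSerializerOptions)null),
        v => JsonSerializer.Deserialize<List<Child>>(v, (JsonSerializerOptions)null),
        new ValueComparer<ICollection<Child>>(
            (a, b) => a.SequenceEqual(b),                                // how change detection compares snapshots
            c => c.Aggregate(0, (h, e) => HashCode.Combine(h, e.Id, e.Name)),
            c => c.ToList()));                                           // how a snapshot copy is taken
With a comparer that compares the collection contents, adding a Child should be detected and the update issued without manually setting the entry state to Modified.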

Dynamic way to Generate EntityTypeConfiguration : The type 'TResult' must be a non-nullable value type

I was thinking of generating EntityTypeConfiguration dynamically at run time, and I don't want any EF dependency in my models (that is why I avoid Data Annotations).
So I declared a custom attribute (which could later be replaced by a configuration file):
[AttributeUsage(AttributeTargets.Property, AllowMultiple=true )]
public class PersistableMemberAttribute : Attribute
{
public bool Iskey;
public bool IsRequired;
public bool IsIgnored;
public bool IsMany;
public string HasForeignKey;
public bool PropertyIsRequired;
public bool PropertyIsOptional;
}
And here is what one of my models looks like:
public class Blog
{
[PersistableMember(Iskey=true)]
public Guid BlogId { get; set; }
[PersistableMember(PropertyIsRequired = true)]
public string Name { get; set; }
public string Url { get; set; }
[PersistableMember(IsIgnored=true)]
public int Rating { get; set; }
[PersistableMember(IsMany =true)]
public ICollection<Post> Posts { get; set; }
}
Now I am going to write a generic EntityTypeConfiguration which will create the configuration dynamically at run time, based on the attribute values:
public class GenericEntityConfiguration<T> : EntityTypeConfiguration<T> where T : class
{
public GenericEntityConfiguration()
{
var members = typeof(T).GetProperties();
if (null != members)
{
foreach (var property in members)
{
var attrb= property.GetCustomAttributes(typeof( PersistableMemberAttribute ),false).OfType<PersistableMemberAttribute>();
if (attrb != null && attrb.Count() > 0)
{
foreach (var memberAttributute in attrb)
{
if (memberAttributute.Iskey || memberAttributute.IsIgnored)
{
var entityMethod = this.GetType().GetMethod("Setkey");
entityMethod.MakeGenericMethod(property.PropertyType)
.Invoke(this, new object[] { property, memberAttributute });
}
if (memberAttributute.IsRequired)
{
var entityMethod = this.GetType().GetMethod("SetRequired");
entityMethod.MakeGenericMethod(property.PropertyType)
.Invoke(this, new object[] { property, memberAttributute });
}
if (memberAttributute.PropertyIsRequired || memberAttributute.PropertyIsOptional)
{
var entityMethod = this.GetType().GetMethod("SetPropertyConfiguration");
entityMethod.MakeGenericMethod(property.PropertyType)
.Invoke(this, new object[] { property, memberAttributute });
}
}
}
}
}
}
public void SetPropertyConfiguration<TResult>(PropertyInfo propertyInfo, PersistableMemberAttribute attribute)
{
var functorParam = Expression.Parameter(typeof(T));
var lambda = Expression.Lambda(
Expression.Property(functorParam, propertyInfo)
, functorParam);
if (attribute.PropertyIsRequired)
{
this.Property<TResult>((Expression<Func<T, TResult>>)lambda).IsRequired();
}
if (attribute.PropertyIsOptional)
{
this.Property<TResult>((Expression<Func<T, TResult>>)lambda).IsOptional();
}
}
public void Setkey<TResult>(PropertyInfo propertyInfo, PersistableMemberAttribute attribute)
{
var functorParam = Expression.Parameter(typeof(T));
var lambda = Expression.Lambda(
Expression.Property(functorParam, propertyInfo)
, functorParam);
if (attribute.Iskey)
{
this.HasKey<TResult>((Expression<Func<T,TResult>>)lambda);
}
if (attribute.IsIgnored)
{
this.Ignore<TResult>((Expression<Func<T, TResult>>)lambda);
}
}
public void SetRequired<TResult>(PropertyInfo propertyInfo, PersistableMemberAttribute attribute) where TResult : class
{
var functorParam = Expression.Parameter(typeof(T));
var lambda = Expression.Lambda(
Expression.Property(functorParam, propertyInfo)
, functorParam);
if (attribute.IsRequired)
{
this.HasRequired<TResult>((Expression<Func<T, TResult>>)lambda);
}
}
}
But I got this compilation error:
Error 1 The type 'TResult' must be a non-nullable value type in order to use it as parameter 'T' in the generic type or method 'System.Data.Entity.ModelConfiguration.Configuration.StructuralTypeConfiguration.Property(System.Linq.Expressions.Expression>)' D:\R&D\UpdateStorePOC\UpdateStorePOC\Data\GenericEntityConfiguration.cs 63 17 UpdateStorePOC
which for these two statements:
this.Property<TResult>((Expression<Func<T, TResult>>)lambda).IsRequired();
this.Property<TResult>((Expression<Func<T, TResult>>)lambda).IsOptional();
That means I need to put a constraint on my method to restrict it to a value type. In C#, this is done with the 'struct' keyword:
public void SetPropertyConfiguration<TResult>(PropertyInfo propertyInfo, PersistableMemberAttribute attribute) where TResult : struct
But that's not the solution, since my property types can be classes (e.g. string) as well as value types (int, bool, double, etc.), so I can't restrict the method to value types this way. Please help me solve this issue, or suggest another way to do it.
I don't want any EF dependency in models.
With fluent mapping you're almost there and you won't come any closer. Your attributes, even though intended to be moved to a configuration file, don't make your model any more free of any EF footprint.1 Worse, they only add a second mapping layer (if you like) between your model and EF's mapping. I only see drawbacks:
You still have to maintain meta data for your model, probably not any less than regular fluent mapping and (probably) in awkward manually edited XML without compile-time checking.
You will keep expanding your code to cover cases that EF's mapping covers but yours doesn't yet.2 So it's a waste of energy: in the end you'll basically have rewritten EF's mapping methods.
You'll have to keep your fingers crossed when you want to upgrade EF.
With bugs/problems you're on your own: hard to get support from the community.
So my answer to your question help me to solve this issue would be: use fluent mapping out of the box. Keep it simple.
1 For example, you would still have to use the virtual modifier to enable proxies for lazy loading.
2 Like support for inheritance, unmapped foreign keys, max length, db data type, ... this could go on for a while.
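For comparison, the out-of-the-box fluent mapping for the Blog model above is short (EF6 syntax; this sketch is mine, not from the question):
public class BlogConfiguration : EntityTypeConfiguration<Blog>
{
    public BlogConfiguration()
    {
        HasKey(b => b.BlogId);
        Property(b => b.Name).IsRequired();
        Ignore(b => b.Rating);
        HasMany(b => b.Posts);
    }
}
// registered once in OnModelCreating:
// modelBuilder.Configurations.Add(new BlogConfiguration());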

EF5 Code First Enums and Lookup Tables

I'd like to define an enum for EF5 to use, and a corresponding lookup table. I know EF5 now supports enums, but out-of-the-box, it seems it only supports this at the object level, and does not by default add a table for these lookup values.
For example, I have a User entity:
public class User
{
int Id { get; set; }
string Name { get; set; }
UserType UserType { get; set; }
}
And a UserType enum:
public enum UserType
{
Member = 1,
Moderator = 2,
Administrator = 3
}
I would like for database generation to create a table, something like:
create table UserType
(
Id int,
Name nvarchar(max)
)
Is this possible?
Here's a nuget package I made earlier that generates lookup tables and applies foreign keys, and keeps the lookup table rows in sync with the enum:
https://www.nuget.org/packages/ef-enum-to-lookup
Add that to your project and call the Apply method.
Documentation on github: https://github.com/timabell/ef-enum-to-lookup
It is not directly possible. EF supports enums at the same level as .NET, so an enum value is just a named integer => an enum property in a class is always an integer column in the database. If you want to have a table as well, you need to create it manually in your own database initializer, together with a foreign key in User, and fill it with the enum values.
I made a proposal on UserVoice to allow more complex mappings. If you find it useful, you can vote for it.
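To illustrate the manual initializer approach from the answer above, here is a rough sketch (the context name, the Users table name and the raw SQL are assumptions, not from the answer):
public class MyContextInitializer : CreateDatabaseIfNotExists<MyContext>
{
    protected override void Seed(MyContext context)
    {
        // Create the lookup table and fill it from the enum.
        context.Database.ExecuteSqlCommand(
            "CREATE TABLE UserType (Id int NOT NULL PRIMARY KEY, Name nvarchar(max) NULL)");
        foreach (UserType value in Enum.GetValues(typeof(UserType)))
        {
            context.Database.ExecuteSqlCommand(
                "INSERT INTO UserType (Id, Name) VALUES (@p0, @p1)", (int)value, value.ToString());
        }
        // Point the enum column of the Users table at the lookup table.
        context.Database.ExecuteSqlCommand(
            "ALTER TABLE Users ADD CONSTRAINT FK_Users_UserType FOREIGN KEY (UserType) REFERENCES UserType (Id)");
        base.Seed(context);
    }
}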
I wrote a little helper class that creates a database table for the enums specified in the UserEntities class. It also creates a foreign key on the tables that reference the enum.
So here it is:
public class EntityHelper
{
public static void Seed(DbContext context)
{
var contextProperties = context.GetType().GetProperties();
List<PropertyInfo> enumSets = contextProperties.Where(p =>IsSubclassOfRawGeneric(typeof(EnumSet<>),p.PropertyType)).ToList();
foreach (var enumType in enumSets)
{
var referencingTypes = GetReferencingTypes(enumType, contextProperties);
CreateEnumTable(enumType, referencingTypes, context);
}
}
private static void CreateEnumTable(PropertyInfo enumProperty, List<PropertyInfo> referencingTypes, DbContext context)
{
var enumType = enumProperty.PropertyType.GetGenericArguments()[0];
//create table
var command = string.Format(
"CREATE TABLE {0} ([Id] [int] NOT NULL,[Value] [varchar](50) NOT NULL,CONSTRAINT pk_{0}_Id PRIMARY KEY (Id));", enumType.Name);
context.Database.ExecuteSqlCommand(command);
//insert value
foreach (var enumvalue in Enum.GetValues(enumType))
{
command = string.Format("INSERT INTO {0} VALUES({1},'{2}');", enumType.Name, (int)enumvalue,
enumvalue);
context.Database.ExecuteSqlCommand(command);
}
//foreign keys
foreach (var referencingType in referencingTypes)
{
var tableType = referencingType.PropertyType.GetGenericArguments()[0];
foreach (var propertyInfo in tableType.GetProperties())
{
if (propertyInfo.PropertyType == enumType)
{
var command2 = string.Format("ALTER TABLE {0} WITH CHECK ADD CONSTRAINT [FK_{0}_{1}] FOREIGN KEY({2}) REFERENCES {1}([Id])",
tableType.Name, enumProperty.Name, propertyInfo.Name
);
context.Database.ExecuteSqlCommand(command2);
}
}
}
}
private static List<PropertyInfo> GetReferencingTypes(PropertyInfo enumProperty, IEnumerable<PropertyInfo> contextProperties)
{
var result = new List<PropertyInfo>();
var enumType = enumProperty.PropertyType.GetGenericArguments()[0];
foreach (var contextProperty in contextProperties)
{
if (IsSubclassOfRawGeneric(typeof(DbSet<>), contextProperty.PropertyType))
{
var tableType = contextProperty.PropertyType.GetGenericArguments()[0];
foreach (var propertyInfo in tableType.GetProperties())
{
if (propertyInfo.PropertyType == enumType)
result.Add(contextProperty);
}
}
}
return result;
}
private static bool IsSubclassOfRawGeneric(Type generic, Type toCheck)
{
while (toCheck != null && toCheck != typeof(object))
{
var cur = toCheck.IsGenericType ? toCheck.GetGenericTypeDefinition() : toCheck;
if (generic == cur)
{
return true;
}
toCheck = toCheck.BaseType;
}
return false;
}
public class EnumSet<T>
{
}
}
using the code:
public partial class UserEntities : DbContext{
public DbSet<User> User { get; set; }
public EntityHelper.EnumSet<UserType> UserType { get; set; }
public static void CreateDatabase(){
using (var db = new UserEntities()){
db.Database.CreateIfNotExists();
db.Database.Initialize(true);
EntityHelper.Seed(db);
}
}
}
I have created a package for it
https://www.nuget.org/packages/SSW.Data.EF.Enums/1.0.0
Use
EnumTableGenerator.Run("your object context", "assembly that contains enums");
"your object context" - is your EntityFramework DbContext
"assembly that contains enums" - an assembly that contains your enums
Call EnumTableGenerator.Run as part of your seed function. This will create tables in sql server for each Enum and populate it with correct data.
I have included this answer as I've made some additional changes to @HerrKater's.
I made a small addition to Herr Kater's answer (also based on Tim Abell's comment). The update uses a method that gets the enum name from the Display attribute if it exists, otherwise it splits the PascalCase enum value.
private static string GetDisplayValue(object value)
{
var fieldInfo = value.GetType().GetField(value.ToString());
var descriptionAttributes = fieldInfo.GetCustomAttributes(
typeof(DisplayAttribute), false) as DisplayAttribute[];
if (descriptionAttributes == null) return string.Empty;
return (descriptionAttributes.Length > 0)
? descriptionAttributes[0].Name
: System.Text.RegularExpressions.Regex.Replace(value.ToString(), "([a-z](?=[A-Z])|[A-Z](?=[A-Z][a-z]))", "$1 ");
}
Update Herr Kater's example to call the method:
command = string.Format("INSERT INTO {0} VALUES({1},'{2}');", enumType.Name, (int)enumvalue,
GetDisplayValue(enumvalue));
Enum Example
public enum PaymentMethod
{
[Display(Name = "Credit Card")]
CreditCard = 1,
[Display(Name = "Direct Debit")]
DirectDebit = 2
}
You must customize your generation workflow:
1. Copy the default generation template TablePerTypeStrategy.
Location: \Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\Entity Framework Tools\DBGen.
2. Add a custom activity that implements your requirement (Workflow Foundation).
3. Modify the Database Generation Workflow section in your EF project.

How to correctly model loosely-typed properties in RavenDB

I am new to RavenDB and looking for guidance on the correct way to store loosely-typed data. I have a type with a list of key/value pairs. The type of the value property isn't known at design time.
public class DescriptiveValue
{
public string Key { get; set; }
public object Value { get; set; }
}
When I query a DescriptiveValue that was saved with a DateTime or Guid Value, the deserialized data type is string. Numeric values appear to retain their data types.
Is there an elegant solution to retain the data type, or should I simply store all values as strings? If I go the string route, will this limit me when I later want to sort and filter this data (likely via indexes)?
I'm hoping this is a common problem that is easily solved and I'm just thinking about the problem incorrectly. Your help is much appreciated!
UPDATE:
The output of this unit test is: Assert.AreEqual failed. Expected:<2/2/2012 10:00:01 AM (System.DateTime)>. Actual:<2012-02-02T10:00:01.9047999 (System.String)>.
[TestMethod]
public void Store_WithDateTime_IsPersistedCorrectly()
{
AssertValueIsPersisted<DateTime>(DateTime.Now);
}
private void AssertValueIsPersisted<T>(T value)
{
ObjectValuedAttribute expected = new ObjectValuedAttribute() { Value = value };
using (var session = this.NewSession())
{
session.Store(expected);
session.SaveChanges();
}
TestDataFactory.ResetRavenDbConnection();
using (var session = this.NewSession())
{
ObjectValuedAttribute actual = session.Query<ObjectValuedAttribute>().Single();
Assert.AreEqual(expected.Value, actual.Value);
}
}
I would expect actual to be a DateTime value.
Absolutely - that's one of the strengths of schema-less document databases. See here: http://ravendb.net/docs/client-api/advanced/dynamic-fields
The problem is that RavenDB server has no notion of the type of Value. When sending your object to the server, Value is persisted as a string, and when you later query that document, the deserializer does not know about the original type, so Value is deserialized as a string.
You can solve this by adding the original type information to ObjectValuedAttribute:
public class ObjectValuedAttribute {
private object _value;
public string Key { get; set; }
public object Value {
get {
// convert the value back to the original type
if (ValueType != null && _value.GetType() != ValueType) {
_value = TypeDescriptor
.GetConverter(ValueType).ConvertFrom(_value);
}
return _value;
}
set {
_value = value;
ValueType = value.GetType();
}
}
public Type ValueType { get; private set; }
}
In the setter of Value we also store its type. Later, when getting the value back, we convert it to its original type.
The following test passes:
public class CodeChef : LocalClientTest {
public class ObjectValuedAttribute {
private object _value;
public string Key { get; set; }
public object Value {
get {
// convert value back to the original type
if (ValueType != null && _value.GetType() != ValueType) {
_value = TypeDescriptor
.GetConverter(ValueType).ConvertFrom(_value);
}
return _value;
}
set {
_value = value;
ValueType = value.GetType();
}
}
public Type ValueType { get; private set; }
}
[Fact]
public void Store_WithDateTime_IsPersistedCorrectly() {
AssertValueIsPersisted(DateTime.Now);
}
private void AssertValueIsPersisted<T>(T value) {
using (var store = NewDocumentStore()) {
var expected = new ObjectValuedAttribute { Value = value };
using (var session = store.OpenSession()) {
session.Store(expected);
session.SaveChanges();
}
using (var session = store.OpenSession()) {
var actual = session
.Query<ObjectValuedAttribute>()
.Customize(x => x.WaitForNonStaleResults())
.Single();
Assert.Equal(expected.Value, actual.Value);
}
}
}
}

How to decorate a class property to be an index and get the same effect as using ensureIndex?

I'd like to define in the class declaration which properties are indexes, something like:
public class MyClass {
public int SomeNum { get; set; }
[THISISANINDEX]
public string SomeProperty { get; set; }
}
so to have the same effect as ensureIndex("SomeProperty")
Is this possible?
I think this is a nice idea, but you have to do this yourself, there's no built-in support for it. If you have an access layer you can do it in there. You'd need an attribute class, something like this;
public enum IndexConstraints
{
Normal = 0x00000001, // Ascending, non-indexed
Descending = 0x00000010,
Unique = 0x00000100,
Sparse = 0x00001000, // allows nulls in the indexed fields
}
// Applied to a member
[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field)]
public class EnsureIndexAttribute : EnsureIndexesAttribute
{
public EnsureIndexAttribute(IndexConstraints ic = IndexConstraints.Normal) : base(ic) { }
}
// Applied to a class
[AttributeUsage(AttributeTargets.Class)]
public class EnsureIndexesAttribute : Attribute
{
public bool Descending { get; private set; }
public bool Unique { get; private set; }
public bool Sparse { get; private set; }
public string[] Keys { get; private set; }
public EnsureIndexesAttribute(params string[] keys) : this(IndexConstraints.Normal, keys) {}
public EnsureIndexesAttribute(IndexConstraints ic, params string[] keys)
{
this.Descending = ((ic & IndexConstraints.Descending) != 0);
this.Unique = ((ic & IndexConstraints.Unique) != 0);
this.Sparse = ((ic & IndexConstraints.Sparse) != 0);
this.Keys = keys;
}
}//class EnsureIndexesAttribute
You could then apply attributes at either the class or member level as follows. I found that adding at member level was less likely to get out of sync with the schema compared to adding at the class level. You need to make sure of course that you get the actual element name as opposed to the C# member name;
[CollectionName("People")]
//[EnsureIndexes("k")]// doing it here would allow for multi-key configs
public class Person
{
[BsonElement("k")] // name mapping in the DB schema
[BsonIgnoreIfNull]
[EnsureIndex(IndexConstraints.Unique|IndexConstraints.Sparse)] // name is implicit here
public string userId{ get; protected set; }
// other properties go here
}
and then in your DB access implementation (or repository), you need something like this;
private void AssureIndexesNotInlinable()
{
// We can only index a collection if there's at least one element, otherwise it does nothing
if (this.collection.Count() > 0)
{
// Check for EnsureIndex Attribute
var theClass = typeof(T);
// Walk the members of the class to see if there are any directly attached index directives
foreach (var m in theClass.GetProperties(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.FlattenHierarchy))
{
List<string> elementNameOverride = new List<string>(1);
EnsureIndexAttribute indexAttr = null;
// For each members attribs
foreach (Attribute attr in m.GetCustomAttributes())
{
if (attr.GetType() == typeof(EnsureIndexAttribute))
indexAttr = (EnsureIndexAttribute)attr;
if (attr.GetType() == typeof(RepoElementAttribute))
elementNameOverride.Add(((RepoElementAttribute)attr).ElementName);
if ((indexAttr != null) && (elementNameOverride.Count != 0))
break;
}
// Index
if (indexAttr != null)
{
if (elementNameOverride.Count() > 0)
EnsureIndexesAsDeclared(indexAttr, elementNameOverride);
else
EnsureIndexesAsDeclared(indexAttr);
}
}
// Walk the atributes on the class itself. WARNING: We don't validate the member names here, we just create the indexes
// so if you create a unique index and don't have a field to match you'll get an exception as you try to add the second
// item with a null value on that key
foreach (Attribute attr in theClass.GetCustomAttributes(true))
{
if (attr.GetType() == typeof(EnsureIndexesAttribute))
EnsureIndexesAsDeclared((EnsureIndexesAttribute)attr);
}//foreach
}//if this.collection.count
}//AssureIndexesNotInlinable()
EnsureIndexes then looks like this;
private void EnsureIndexesAsDeclared(EnsureIndexesAttribute attr, List<string> indexFields = null)
{
var eia = attr as EnsureIndexesAttribute;
if (indexFields == null)
indexFields = eia.Keys.ToList();
// use driver specific methods to actually create this index on the collection
var db = GetRepositoryManager(); // if you have a repository or some other method of your own
db.EnsureIndexes(indexFields, attr.Descending, attr.Unique, attr.Sparse);
}//EnsureIndexes()
Note that you'll place this after each and every update because if you forget somewhere your indexes may not get created. It's important to ensure therefore that you optimise the call so that it returns quickly if there's no indexing to do before going through all that reflection code. Ideally, you'd do this just once, or at the very least, once per application startup. So one way would be to use a static flag to track whether you've already done so, and you'd need additional lock protection around that, but over-simplistically, it looks something like this;
void AssureIndexes()
{
if (_requiresIndexing)
AssureIndexesInit();
}
So that's the method you'll want in each and every DB update you make, which, if you're lucky would get inlined by the JIT optimizer as well.
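A slightly fuller sketch of that flag-plus-lock idea (the field and method names are mine) could look like this, so the reflection work runs at most once per process:
private static readonly object _indexLock = new object();
private static volatile bool _indexesCreated;

void AssureIndexes()
{
    if (_indexesCreated) return;      // fast path once indexing has been done
    lock (_indexLock)
    {
        if (_indexesCreated) return;  // double-checked so concurrent callers don't repeat the work
        AssureIndexesInit();          // the reflection-based method described above
        _indexesCreated = true;
    }
}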
See below for a naive implementation which could do with some brains to take the indexing advice from the MongoDb documentation into consideration. Creating indexes based on queries used within the application instead of adding custom attributes to properties might be another option.
using System;
using System.Reflection;
using MongoDB.Bson.Serialization.Attributes;
using MongoDB.Driver;
using NUnit.Framework;
using SharpTestsEx;
namespace Mongeek
{
[TestFixture]
class TestDecorateToEnsureIndex
{
[Test]
public void ShouldIndexPropertyWithEnsureIndexAttribute()
{
var server = MongoServer.Create("mongodb://localhost");
var db = server.GetDatabase("IndexTest");
var boatCollection = db.GetCollection<Boat>("Boats");
boatCollection.DropAllIndexes();
var indexer = new Indexer();
indexer.EnsureThat(boatCollection).HasIndexesNeededBy<Boat>();
boatCollection.IndexExists(new[] { "Name" }).Should().Be.True();
}
}
internal class Indexer
{
private MongoCollection _mongoCollection;
public Indexer EnsureThat(MongoCollection mongoCollection)
{
_mongoCollection = mongoCollection;
return this;
}
public Indexer HasIndexesNeededBy<T>()
{
Type t = typeof (T);
foreach(PropertyInfo prop in t.GetProperties() )
{
if (Attribute.IsDefined(prop, typeof (EnsureIndexAttribute)))
{
_mongoCollection.EnsureIndex(new[] {prop.Name});
}
}
return this;
}
}
internal class Boat
{
public Boat(Guid id)
{
Id = id;
}
[BsonId]
public Guid Id { get; private set; }
public int Length { get; set; }
[EnsureIndex]
public string Name { get; set; }
}
internal class EnsureIndexAttribute : Attribute
{
}
}
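For reference, with the newer 2.x driver the legacy EnsureIndex call is replaced by Indexes.CreateOne; here is a hedged, untested sketch of the same attribute-driven indexing against the Boat class above:
var client = new MongoClient("mongodb://localhost");
var boats = client.GetDatabase("IndexTest").GetCollection<Boat>("Boats");
foreach (var prop in typeof(Boat).GetProperties())
{
    if (Attribute.IsDefined(prop, typeof(EnsureIndexAttribute)))
    {
        // Note: this uses the C# property name; a BsonElement mapping would need to be resolved first.
        var keys = Builders<Boat>.IndexKeys.Ascending(prop.Name);
        boats.Indexes.CreateOne(new CreateIndexModel<Boat>(keys));
    }
}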