How do I patch enumerables with System.Web.Http.OData.Delta?

Trying to make use of System.Web.Http.OData.Delta to implement PATCH methods in ASP.NET Web API services, but it seems unable to apply changes to properties of type IEnumerable<T>. I'm using the latest Git revision of Delta (2012.2-rc-76-g8a73abe). Has anyone been able to make this work?
Consider this data type, which it should be possible to update in a PATCH request to the Web API service:
public class Person
{
    HashSet<int> _friends = new HashSet<int>();

    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public IEnumerable<int> Friends
    {
        get { return _friends; }
        set
        {
            _friends = value != null ? new HashSet<int>(value) : new HashSet<int>();
        }
    }

    public Person(int id, string firstName, string lastName)
    {
        Id = id;
        FirstName = firstName;
        LastName = lastName;
    }

    public Person()
    {
    }
}
This Web API method implements patching of a Person through Delta<Person>:
public void Patch(int id, Delta<Person> delta)
{
    var person = _persons.Single(p => p.Id == id);
    delta.Patch(person);
}
If I send a PATCH request with the following JSON to the service, the person's Friends property should be updated, but alas it doesn't happen:
{"Friends": [1]}
The crux of the matter is really how to make Delta update Friends with this data. See also the discussion at CodePlex.
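For completeness, this is roughly how the failing request can be reproduced from a test client (a sketch only; the base address and route are assumptions, not part of the service above):
// Client-side repro sketch; requires System.Net.Http and System.Text.
using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:8080/") })
{
    var request = new HttpRequestMessage(new HttpMethod("PATCH"), "api/persons/1")
    {
        Content = new StringContent("{\"Friends\": [1]}", Encoding.UTF8, "application/json")
    };
    var response = client.SendAsync(request).Result;
    // With stock Delta, Friends ends up unchanged (the problem described above).
}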

The problem is likely that Delta will try to assign JSON.NET's JArray to your HashSet<int>.
If you are using it against JsonMediaTypeFormatter and you have internalized the Delta code (meaning you can modify it), you'd have to do something like this (this is rough, but works):
Inside bool TrySetPropertyValue(string name, object value) of Delta<T>, where it returns false:
if (value != null && !cacheHit.Property.PropertyType.IsPrimitive && !isGuid && !cacheHit.Property.PropertyType.IsAssignableFrom(value.GetType()))
{
    return false;
}
Change to:
var propertyType = cacheHit.Property.PropertyType;
if (value != null && !propertyType.IsPrimitive && !propertyType.IsAssignableFrom(value.GetType()))
{
    var array = value as JArray;
    if (array == null)
        return false;
    var underlyingType = propertyType.GetGenericArguments().FirstOrDefault() ??
                         propertyType.GetElementType();
    if (underlyingType == typeof(string))
    {
        var a = array.ToObject<IEnumerable<string>>();
        value = Activator.CreateInstance(propertyType, a);
    }
    else if (underlyingType == typeof(int))
    {
        var a = array.ToObject<IEnumerable<int>>();
        value = Activator.CreateInstance(propertyType, a);
    }
    else
    {
        return false;
    }
}
This will only work with collections of int or string, but hopefully it nudges you in the right direction.
For example, now your model can have:
public class Team {
    public HashSet<string> PlayerIds { get; set; }
    public List<int> CoachIds { get; set; }
}
And you'd be able to successfully update them.
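For instance, with a hypothetical PATCH endpoint for Team, a request body like this would then replace both collections:
{"PlayerIds": ["anna", "ben"], "CoachIds": [3, 4]}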

You could override the TrySetPropertyValue method of the Delta class and make use of the JArray class:
public sealed class DeltaWithCollectionsSupport<T> : Delta<T> where T : class
{
    public override bool TrySetPropertyValue(string name, object value)
    {
        var propertyInfo = typeof(T).GetProperty(name);
        return propertyInfo != null && value is JArray array
            ? base.TrySetPropertyValue(name, array.ToObject(propertyInfo.PropertyType))
            : base.TrySetPropertyValue(name, value);
    }
}
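A usage sketch, assuming a Web API action shaped like the one in the question (only the parameter type changes; the JSON formatter then routes collection values through the override):
// Hypothetical controller action using the subclass instead of Delta<Person>.
public void Patch(int id, DeltaWithCollectionsSupport<Person> delta)
{
    var person = _persons.Single(p => p.Id == id);
    delta.Patch(person);
}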

If you are using the ODataMediaTypeFormatter, this should work. There are a couple of caveats to mention, though:
1) Your collections have to be settable.
2) The entire collection is replaced; you cannot remove or add individual elements.
Also, there is an issue tracking caveat 1: '670 - Delta should support non-settable collections.'

Related

How to write an audit log entry per changed property with Audit.NET EntityFramework.Core

I'm trying to get the Audit.NET EntityFramework.Core extension to write an AuditLog entry per changed property.
For this purpose I've overridden EntityFrameworkDataProvider.InsertEvent with a custom DataProvider.
The problem is that using DbContextHelper.Core.CreateAuditEvent to create a new EntityFrameworkEvent returns null.
The reason seems to be that, at this point in the code execution, DbContextHelper.GetModifiedEntries determines that all EF entries have State.Unmodified, even if they are clearly included in the EventEntry changes.
Circumventing CreateAuditEvent by manually creating the contents is impossible due to private/internal properties.
Maybe there is an alternative solution to this problem I'm not seeing; I'm open to all suggestions.
Audit entity class
public class AuditLog
{
    public int Id { get; set; }
    public string Description { get; set; }
    public string OldValue { get; set; }
    public string NewValue { get; set; }
    public string PropertyName { get; set; }
    public DateTime AuditDateTime { get; set; }
    public Guid? AuditIssuerUserId { get; set; }
    public string AuditAction { get; set; }
    public string TableName { get; set; }
    public int TablePK { get; set; }
}
Startup configuration
Audit.Core.Configuration.Setup()
    .UseCustomProvider(new CustomEntityFrameworkDataProvider(x => x
        .AuditEntityAction<AuditLog>((ev, ent, auditEntity) =>
        {
            auditEntity.AuditDateTime = DateTime.Now;
            auditEntity.AuditAction = ent.Action;
            foreach (var change in ent.Changes)
            {
                auditEntity.OldValue = change.OriginalValue.ToString();
                auditEntity.NewValue = change.NewValue.ToString();
                auditEntity.PropertyName = change.ColumnName;
            }
        })));
Custom data provider class
public class CustomEntityFrameworkDataProvider : EntityFrameworkDataProvider
{
    public override object InsertEvent(AuditEvent auditEvent)
    {
        var auditEventEf = auditEvent as AuditEventEntityFramework;
        if (auditEventEf == null)
            return null;
        object result = null;
        foreach (var entry in auditEventEf.EntityFrameworkEvent.Entries)
        {
            if (entry.Changes == null || entry.Changes.Count == 0)
                continue;
            foreach (var change in entry.Changes)
            {
                var contextHelper = new DbContextHelper();
                var newEfEvent = contextHelper.CreateAuditEvent((IAuditDbContext)auditEventEf.EntityFrameworkEvent.GetDbContext());
                if (newEfEvent == null)
                    continue;
                newEfEvent.Entries = new List<EventEntry>() { entry };
                entry.Changes = new List<EventEntryChange> { change };
                auditEventEf.EntityFrameworkEvent = newEfEvent;
                result = base.InsertEvent(auditEvent);
            }
        }
        return result;
    }
}
Check my answer here https://github.com/thepirat000/Audit.NET/issues/323#issuecomment-673007204
You don't need to call CreateAuditEvent(); you should be able to iterate over the Changes list on the original event and call base.InsertEvent() for each change, like this:
public override object InsertEvent(AuditEvent auditEvent)
{
    var auditEventEf = auditEvent as AuditEventEntityFramework;
    if (auditEventEf == null)
        return null;
    object result = null;
    foreach (var entry in auditEventEf.EntityFrameworkEvent.Entries)
    {
        if (entry.Changes == null || entry.Changes.Count == 0)
            continue;
        // Call base.InsertEvent for each change
        var originalChanges = entry.Changes;
        foreach (var change in originalChanges)
        {
            entry.Changes = new List<EventEntryChange>() { change };
            result = base.InsertEvent(auditEvent);
        }
        entry.Changes = originalChanges;
    }
    return result;
}
Notes:
This could impact performance, since it will trigger an insert to the database for each column change.
If you plan to use async calls to DbContext.SaveChangesAsync, you should also implement the InsertEventAsync method on your CustomDataProvider (see the sketch after these notes)
The Changes property is only available for Updates, so if you also want to audit Inserts and Deletes, you'll need to add the logic to get the column values from the ColumnValues property on the event
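A minimal sketch of that async override, assuming it mirrors the synchronous one (verify the exact InsertEventAsync signature against your Audit.NET version):
// Async counterpart sketch; requires System.Threading.Tasks.
// Same per-change splitting as the synchronous InsertEvent above.
public override async Task<object> InsertEventAsync(AuditEvent auditEvent)
{
    var auditEventEf = auditEvent as AuditEventEntityFramework;
    if (auditEventEf == null)
        return null;
    object result = null;
    foreach (var entry in auditEventEf.EntityFrameworkEvent.Entries)
    {
        if (entry.Changes == null || entry.Changes.Count == 0)
            continue;
        var originalChanges = entry.Changes;
        foreach (var change in originalChanges)
        {
            entry.Changes = new List<EventEntryChange>() { change };
            result = await base.InsertEventAsync(auditEvent);
        }
        entry.Changes = originalChanges;
    }
    return result;
}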

How to correctly model loosely-typed properties in RavenDB

I am new to RavenDB and looking for guidance on the correct way to store loosely-typed data. I have a type with a list of key/value pairs. The type of the value property isn't known at design time.
public class DescriptiveValue
{
    public string Key { get; set; }
    public object Value { get; set; }
}
When I query a DescriptiveValue that was saved with a DateTime or Guid Value, the deserialized data type is string. Numeric values appear to retain their data types.
Is there an elegant solution to retain the data type, or should I simply store all values as strings? If I go the string route, will this limit me when I later want to sort and filter this data (likely via indexes)?
I'm hoping this is a common problem that is easily solved and I'm just thinking about the problem incorrectly. Your help is much appreciated!
UPDATE:
The output of this unit test is: Assert.AreEqual failed. Expected:<2/2/2012 10:00:01 AM (System.DateTime)>. Actual:<2012-02-02T10:00:01.9047999 (System.String)>.
[TestMethod]
public void Store_WithDateTime_IsPersistedCorrectly()
{
    AssertValueIsPersisted<DateTime>(DateTime.Now);
}

private void AssertValueIsPersisted<T>(T value)
{
    ObjectValuedAttribute expected = new ObjectValuedAttribute() { Value = value };
    using (var session = this.NewSession())
    {
        session.Store(expected);
        session.SaveChanges();
    }
    TestDataFactory.ResetRavenDbConnection();
    using (var session = this.NewSession())
    {
        ObjectValuedAttribute actual = session.Query<ObjectValuedAttribute>().Single();
        Assert.AreEqual(expected.Value, actual.Value);
    }
}
I would expect actual to be a DateTime value.
Absolutely - that's one of the strengths of schema-less document databases. See here: http://ravendb.net/docs/client-api/advanced/dynamic-fields
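In practical terms, that dynamic-fields page boils down to an index along these lines (a sketch only; the index name is an assumption, and the ObjectValuedAttribute type is the one from this question):
// Hypothetical dynamic-fields index; requires System.Linq and Raven.Client.Indexes.
public class ObjectValuedAttributes_ByKeyAndValue : AbstractIndexCreationTask<ObjectValuedAttribute>
{
    public ObjectValuedAttributes_ByKeyAndValue()
    {
        Map = attributes => from a in attributes
                            // CreateField emits an index field named after Key,
                            // preserving Value's type for sorting and filtering
                            select new { _ = CreateField(a.Key, a.Value) };
    }
}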
The problem is that RavenDB server has no notion of the type of Value. When sending your object to the server, Value is persisted as a string, and when you later query that document, the deserializer does not know about the original type, so Value is deserialized as a string.
You can solve this by adding the original type information to ObjectValuedAttribute:
public class ObjectValuedAttribute {
    private object _value;

    public string Key { get; set; }

    public object Value {
        get {
            // convert the value back to the original type
            if (ValueType != null && _value.GetType() != ValueType) {
                _value = TypeDescriptor
                    .GetConverter(ValueType).ConvertFrom(_value);
            }
            return _value;
        }
        set {
            _value = value;
            ValueType = value.GetType();
        }
    }

    public Type ValueType { get; private set; }
}
In the setter of Value we also store its type. Later, when reading the value back, we convert it to its original type.
The following test passes:
public class CodeChef : LocalClientTest {
    public class ObjectValuedAttribute {
        private object _value;

        public string Key { get; set; }

        public object Value {
            get {
                // convert value back to the original type
                if (ValueType != null && _value.GetType() != ValueType) {
                    _value = TypeDescriptor
                        .GetConverter(ValueType).ConvertFrom(_value);
                }
                return _value;
            }
            set {
                _value = value;
                ValueType = value.GetType();
            }
        }

        public Type ValueType { get; private set; }
    }

    [Fact]
    public void Store_WithDateTime_IsPersistedCorrectly() {
        AssertValueIsPersisted(DateTime.Now);
    }

    private void AssertValueIsPersisted<T>(T value) {
        using (var store = NewDocumentStore()) {
            var expected = new ObjectValuedAttribute { Value = value };
            using (var session = store.OpenSession()) {
                session.Store(expected);
                session.SaveChanges();
            }
            using (var session = store.OpenSession()) {
                var actual = session
                    .Query<ObjectValuedAttribute>()
                    .Customize(x => x.WaitForNonStaleResults())
                    .Single();
                Assert.Equal(expected.Value, actual.Value);
            }
        }
    }
}

How to decorate a class item to be an index and get the same as using ensureIndex?

I'd like to define in the class declaration which properties are indexed, something like:
public class MyClass {
    public int SomeNum { get; set; }

    [THISISANINDEX]
    public string SomeProperty { get; set; }
}
so as to have the same effect as ensureIndex("SomeProperty").
Is this possible?
I think this is a nice idea, but you have to do this yourself; there's no built-in support for it. If you have an access layer, you can do it there. You'd need an attribute class, something like this;
public enum IndexConstraints
{
    Normal     = 0x00000001, // Ascending, non-unique, non-sparse
    Descending = 0x00000010,
    Unique     = 0x00000100,
    Sparse     = 0x00001000, // allows nulls in the indexed fields
}

// Applied to a member
[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field)]
public class EnsureIndexAttribute : EnsureIndexesAttribute
{
    public EnsureIndexAttribute(IndexConstraints ic = IndexConstraints.Normal) : base(ic) { }
}

// Applied to a class
[AttributeUsage(AttributeTargets.Class)]
public class EnsureIndexesAttribute : Attribute
{
    public bool Descending { get; private set; }
    public bool Unique { get; private set; }
    public bool Sparse { get; private set; }
    public string[] Keys { get; private set; }

    public EnsureIndexesAttribute(params string[] keys) : this(IndexConstraints.Normal, keys) { }

    public EnsureIndexesAttribute(IndexConstraints ic, params string[] keys)
    {
        this.Descending = ((ic & IndexConstraints.Descending) != 0);
        this.Unique = ((ic & IndexConstraints.Unique) != 0);
        this.Sparse = ((ic & IndexConstraints.Sparse) != 0);
        this.Keys = keys;
    }
}//class EnsureIndexesAttribute
You could then apply attributes at either the class or member level as follows. I found that adding at member level was less likely to get out of sync with the schema compared to adding at the class level. You need to make sure of course that you get the actual element name as opposed to the C# member name;
[CollectionName("People")]
//[EnsureIndexes("k")]// doing it here would allow for multi-key configs
public class Person
{
    [BsonElement("k")] // name mapping in the DB schema
    [BsonIgnoreIfNull]
    [EnsureIndex(IndexConstraints.Unique | IndexConstraints.Sparse)] // name is implicit here
    public string userId { get; protected set; }

    // other properties go here
}
and then in your DB access implementation (or repository), you need something like this;
private void AssureIndexesNotInlinable()
{
    // We can only index a collection if there's at least one element, otherwise it does nothing
    if (this.collection.Count() > 0)
    {
        // Check for EnsureIndex attributes
        var theClass = typeof(T);

        // Walk the members of the class to see if there are any directly attached index directives
        foreach (var m in theClass.GetProperties(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.FlattenHierarchy))
        {
            List<string> elementNameOverride = new List<string>(1);
            EnsureIndexesAttribute indexAttr = null;

            // For each member's attributes
            foreach (Attribute attr in m.GetCustomAttributes())
            {
                if (attr.GetType() == typeof(EnsureIndexAttribute))
                    indexAttr = (EnsureIndexAttribute)attr;
                if (attr.GetType() == typeof(RepoElementAttribute))
                    elementNameOverride.Add(((RepoElementAttribute)attr).ElementName);
                if ((indexAttr != null) && (elementNameOverride.Count != 0))
                    break;
            }

            // Index
            if (indexAttr != null)
            {
                if (elementNameOverride.Count() > 0)
                    EnsureIndexesAsDeclared(indexAttr, elementNameOverride);
                else
                    EnsureIndexesAsDeclared(indexAttr);
            }
        }

        // Walk the attributes on the class itself. WARNING: We don't validate the member names here, we just create the indexes
        // so if you create a unique index and don't have a field to match you'll get an exception as you try to add the second
        // item with a null value on that key
        foreach (Attribute attr in theClass.GetCustomAttributes(true))
        {
            if (attr.GetType() == typeof(EnsureIndexesAttribute))
                EnsureIndexesAsDeclared((EnsureIndexesAttribute)attr);
        }//foreach
    }//if this.collection.count
}//AssureIndexesNotInlinable()
EnsureIndexes then looks like this;
private void EnsureIndexesAsDeclared(EnsureIndexesAttribute attr, List<string> indexFields = null)
{
    if (indexFields == null)
        indexFields = attr.Keys.ToList();

    // use driver specific methods to actually create this index on the collection
    var db = GetRepositoryManager(); // if you have a repository or some other method of your own
    db.EnsureIndexes(indexFields, attr.Descending, attr.Unique, attr.Sparse);
}//EnsureIndexes()
Note that you'll place this call after each and every update, because if you forget it somewhere your indexes may not get created. It's therefore important to optimise the call so that it returns quickly if there's no indexing to do before going through all that reflection code. Ideally, you'd do this just once, or at the very least, once per application startup. One way is to use a static flag to track whether you've already done it; you'd need additional lock protection around that, but over-simplistically, it looks something like this;
void AssureIndexes()
{
    if (_requiresIndexing)
        AssureIndexesInit();
}
So that's the method you'll want in each and every DB update you make, which, if you're lucky, will get inlined by the JIT optimizer as well.
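If you add the lock protection mentioned above, a double-checked sketch along these lines would do (member names carried over from the naive version):
// Lock-protected variant of AssureIndexes; _requiresIndexing and
// AssureIndexesInit() are the members from the naive version above.
private static volatile bool _requiresIndexing = true;
private static readonly object _indexingGate = new object();

void AssureIndexes()
{
    if (!_requiresIndexing)
        return; // fast path for every subsequent update
    lock (_indexingGate)
    {
        if (_requiresIndexing) // re-check inside the lock
        {
            AssureIndexesInit();
            _requiresIndexing = false;
        }
    }
}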
See below for a naive implementation which could do with some brains to take the indexing advice from the MongoDb documentation into consideration. Creating indexes based on queries used within the application instead of adding custom attributes to properties might be another option.
using System;
using System.Reflection;
using MongoDB.Bson.Serialization.Attributes;
using MongoDB.Driver;
using NUnit.Framework;
using SharpTestsEx;

namespace Mongeek
{
    [TestFixture]
    class TestDecorateToEnsureIndex
    {
        [Test]
        public void ShouldIndexPropertyWithEnsureIndexAttribute()
        {
            var server = MongoServer.Create("mongodb://localhost");
            var db = server.GetDatabase("IndexTest");
            var boatCollection = db.GetCollection<Boat>("Boats");
            boatCollection.DropAllIndexes();

            var indexer = new Indexer();
            indexer.EnsureThat(boatCollection).HasIndexesNeededBy<Boat>();

            boatCollection.IndexExists(new[] { "Name" }).Should().Be.True();
        }
    }

    internal class Indexer
    {
        private MongoCollection _mongoCollection;

        public Indexer EnsureThat(MongoCollection mongoCollection)
        {
            _mongoCollection = mongoCollection;
            return this;
        }

        public Indexer HasIndexesNeededBy<T>()
        {
            Type t = typeof(T);
            foreach (PropertyInfo prop in t.GetProperties())
            {
                if (Attribute.IsDefined(prop, typeof(EnsureIndexAttribute)))
                {
                    _mongoCollection.EnsureIndex(new[] { prop.Name });
                }
            }
            return this;
        }
    }

    internal class Boat
    {
        public Boat(Guid id)
        {
            Id = id;
        }

        [BsonId]
        public Guid Id { get; private set; }

        public int Length { get; set; }

        [EnsureIndex]
        public string Name { get; set; }
    }

    internal class EnsureIndexAttribute : Attribute
    {
    }
}

Entity Framework: writing custom data annotations to change case of values

class DemoUser
{
    [TitleCase]
    public string FirstName { get; set; }

    [TitleCase]
    public string LastName { get; set; }

    [UpperCase]
    public string Salutation { get; set; }

    [LowerCase]
    public string Email { get; set; }
}
Suppose I have the demo class written above. I want to create custom annotations like LowerCase, UpperCase, etc., so that each property's value gets converted automatically. Doing this will let me use these annotations in other classes too.
As Ladislav implied, this is two questions in one.
Assuming you follow the recipe for creating attributes in Jefim's link, and assuming you're calling those created attribute classes "UpperCaseAttribute", "LowerCaseAttribute", and "TitleCaseAttribute", the following SaveChanges() override should work in EF 4.3 (the current version as of the time of this answer post).
public override int SaveChanges()
{
    IEnumerable<DbEntityEntry> changedEntities = ChangeTracker.Entries().Where(e => e.State == System.Data.EntityState.Added || e.State == System.Data.EntityState.Modified);
    TextInfo textInfo = Thread.CurrentThread.CurrentCulture.TextInfo;
    changedEntities.ToList().ForEach(entry =>
    {
        var properties = from attributedProperty in entry.Entity.GetType().GetProperties()
                         where attributedProperty.PropertyType == typeof(string)
                         select new { entry, attributedProperty,
                             attributes = attributedProperty.GetCustomAttributes(true)
                                 .Where(attribute => attribute is UpperCaseAttribute || attribute is LowerCaseAttribute || attribute is TitleCaseAttribute)
                         };
        properties = properties.Where(p => p.attributes.Any());
        properties.ToList().ForEach(p =>
        {
            p.attributes.ToList().ForEach(att =>
            {
                if (att is UpperCaseAttribute)
                {
                    p.entry.CurrentValues[p.attributedProperty.Name] = textInfo.ToUpper((string)p.entry.CurrentValues[p.attributedProperty.Name]);
                }
                if (att is LowerCaseAttribute)
                {
                    p.entry.CurrentValues[p.attributedProperty.Name] = textInfo.ToLower((string)p.entry.CurrentValues[p.attributedProperty.Name]);
                }
                if (att is TitleCaseAttribute)
                {
                    p.entry.CurrentValues[p.attributedProperty.Name] = textInfo.ToTitleCase((string)p.entry.CurrentValues[p.attributedProperty.Name]);
                }
            });
        });
    });
    return base.SaveChanges();
}
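For completeness, the marker attributes referenced above can be as simple as empty Attribute subclasses (a minimal sketch; the recipe linked earlier may add options):
// Minimal marker attributes matching the names used in the override above.
[AttributeUsage(AttributeTargets.Property)]
public class UpperCaseAttribute : Attribute { }

[AttributeUsage(AttributeTargets.Property)]
public class LowerCaseAttribute : Attribute { }

[AttributeUsage(AttributeTargets.Property)]
public class TitleCaseAttribute : Attribute { }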
You can override the SaveChanges method in your EF context (if you use default code-generation just write a partial class). Something like the following:
public partial class MyEntityContext
{
    public override int SaveChanges(SaveOptions options)
    {
        IEnumerable<ObjectStateEntry> changedEntities =
            this.ObjectStateManager.GetObjectStateEntries(
                System.Data.EntityState.Added | System.Data.EntityState.Modified);

        // here you can loop over your added/changed entities and
        // process the custom attributes that you have

        return base.SaveChanges(options);
    }
}

Equals and GetHashCode

What do you think about this Person class? Is it a bad idea or best practice to override Equals and GetHashCode like that?
public class Person {
    public int PersonId { get; set; }
    public string Name { get; set; }

    public override bool Equals(object obj) {
        var person = obj as Person;
        return PersonId == person.PersonId;
    }

    public override int GetHashCode() {
        return PersonId;
    }
}
Usage:
static void Main(string[] args) {
    var list = new List<Person>();
    list.Add(new Person() { PersonId = 1, Name = "Mike" });
    list.Add(new Person() { PersonId = 2, Name = "Michael Sync" });
    list.Add(new Person() { PersonId = 1, Name = "Mike" });

    var list1 = new List<Person>();
    list1.Add(new Person() { PersonId = 1, Name = "Mike" });
    list1.Add(new Person() { PersonId = 3, Name = "Julia" });

    var except = list.Except(list1);
    foreach (var item in except) {
        Console.WriteLine(item.Name);
    }
    Console.ReadKey();
}
A few points:
It's not null safe or "different type" safe. Try this:
new Person().Equals(new Object());
or
new Person().Equals(null);
Bang.
Classes defining equality operations should usually be immutable IMO. Changing the contents of an object after using it as a dictionary key is a Bad Thing, for example.
Consider implementing IEquatable<Person>
A quick reimplementation, which still assumes you want equality based solely on ID.
public sealed class Person : IEquatable<Person> {
    private readonly int personId;
    public int PersonId { get { return personId; } }

    private readonly string name;
    public string Name { get { return name; } }

    public Person(int personId, string name) {
        // Is a null name valid? If not, throw here.
        this.personId = personId;
        this.name = name;
    }

    public override bool Equals(object obj) {
        return Equals(obj as Person);
    }

    public bool Equals(Person other) {
        return other != null && other.personId == personId;
    }

    public override int GetHashCode() {
        return personId;
    }
}
Yes, this is wrong. You should never use a mutable property as part of the calculation for GetHashCode. Doing so opens you up to numerous hard-to-track-down bugs.
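To make that concrete, a small sketch using the Person class from the question:
// The item gets "lost" because it was stored under its old hash code.
var people = new HashSet<Person>();
var mike = new Person { PersonId = 1, Name = "Mike" };
people.Add(mike);
mike.PersonId = 2; // mutating the key property changes the hash code
Console.WriteLine(people.Contains(mike)); // False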
One problem I can see is that you'll get lots of collisions for new (unsaved) records (lots of zeros) - unless you do something like have consecutive negative ids for those... But do you really need to use Person as a key? Personally I don't think I'd bother...