Unit Testing and Equals/GetHashCode in entities - entity-framework

So I have created a few POCOs, and currently one of them is part of a collection that I am testing. I am using MSTest, and to compare two collections I apparently need CollectionAssert.AreEquivalent(). Now, in my entity, besides all the properties, I have overridden .Equals() and .GetHashCode(), because these two are needed by CollectionAssert.AreEquivalent(). My simple question is: is it OK for these two methods to be there?

Special care needs to be taken when overriding Equals and GetHashCode, as an incorrect implementation can lead to subtle bugs and performance issues that are hard to debug. If you only need them for testing, a safer approach is to implement your own AreEquivalent() method that accepts an IEqualityComparer.
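For example, a minimal sketch of that idea, assuming a hypothetical Customer POCO and collections without duplicate elements:

using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class CustomerComparer : IEqualityComparer<Customer>
{
    public bool Equals(Customer x, Customer y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (x == null || y == null) return false;
        return x.Id == y.Id && x.Name == y.Name;   // whatever "equivalent" means for the test
    }

    public int GetHashCode(Customer obj)
    {
        return obj.Id.GetHashCode();
    }
}

public static class CollectionAssertEx
{
    // Test-only equivalence: same size and every expected element is present,
    // judged by the supplied comparer. The entity itself stays untouched.
    public static void AreEquivalent<T>(ICollection<T> expected, ICollection<T> actual,
                                        IEqualityComparer<T> comparer)
    {
        Assert.AreEqual(expected.Count, actual.Count);
        Assert.IsTrue(expected.All(e => actual.Contains(e, comparer)));
    }
}

This keeps the comparison logic in the test project, so the entity class does not have to override anything.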
Having said that, EF itself doesn't rely on Equals and GetHashCode for POCOs; instead it uses the equivalents on the EntityKey for the entity.
If you do need to override these methods, you could delegate to the corresponding EntityKey to get EF semantics in non-EF code that uses the entities. This approach, however, is not appropriate for all scenarios, as it only uses the key values to establish entity identity.
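If you do go down that road, a rough sketch of key-based identity on a POCO might look like this (assuming an int Id primary key; the class and property names are placeholders):

public class Customer
{
    public int Id { get; set; }        // primary key
    public string Name { get; set; }

    public override bool Equals(object obj)
    {
        if (ReferenceEquals(this, obj)) return true;
        var other = obj as Customer;
        // Identity is based on the key value only; unsaved entities (Id == 0)
        // are equal only to themselves.
        return other != null && Id != 0 && Id == other.Id;
    }

    public override int GetHashCode()
    {
        // Beware: the hash changes if the key is assigned after the entity has been
        // added to a HashSet or used as a dictionary key.
        return Id.GetHashCode();
    }
}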

Related

The JPA spec's requirement for a no-arg constructor prevents us from writing a completely correct hashCode/equals. How do you cope with that?

OK, so here [1] is a great read on how to really correctly define hashCode/equals, particularly with respect to object hierarchies. But I'd like to ask about pitfall 3 from that article, which shows bizarre behavior when hashCode/equals are defined over mutable fields and a Set is used for collections. Due to the JPA spec we cannot restrict ourselves to final fields and a parameterized constructor only. So what are the means to avoid these gotchas? What do you use?
Well, one obvious option is to avoid using Set in JPA entities, which does not seem very nice. Another could be to "unsupport" the setters once equals has been called, but that's ridiculous, and equals surely shouldn't have side effects.
So how do you cope with that? Aside from not knowing about it or ignoring it, which is probably the default in the Java world...
[1] https://www.artima.com/lejava/articles/equality.html
If an entity can be detached you need to override equals and hashCode. Every entity has to have an @Id, and the ID is immutable, so entities should implement equals and hashCode based on the primary key ID.
Pitfall 3 deals with mutable objects; it cannot apply to an entity whose ID is immutable.
Guide to Implementing equals() and hashCode() with Hibernate

integration testing, comparing JPA entities

Say you are doing some integration testing: you store some bigger entity into the db, then read it back and would like to compare it. Obviously it has some associations as well, but that's just the cherry on top of a very unpleasant cake. How do you compare those entities? I have seen a lot of incorrect ideas and feel that this has to be written manually. How do you do it?
Issues:
you cannot use equals/hashCode: those belong to the natural ID.
you cannot use a subclass with a fixed equals, as that would test a different class and can give wrong results when persisting data, since the data is handled differently in the persistence context.
lots of fields: you don't want to type all the comparisons by hand; you want reflection.
@Temporal annotations: you cannot use trivial "reflection equals" approaches, because with @Temporal(TIMESTAMP) java.util.Date <> java.sql.Date.
associations: a typical entity you would like to have properly tested will have several associations, so the tool/approach should ideally support deep comparison. Cycles in the object graph can also ruin the fun.
The best solution I have found:
don't use transmogrifying data types (like Date) in JPA entities.
all associations should be initialized in the entity, because null <> empty list.
calculate the toString externally via, say, ReflectionToStringBuilder, and compare those. The reason is to let the entity keep its own toString; tests should not depend on nobody ever changing it. Theoretically toString could be deep, but the Commons RecursiveToStringStyle includes the object identifier, which ruins it.
I thought I could use a JSON-format toString, but Commons supports that only for a shallow toString, and Jackson (without further instructions on the entity) fails on cycles across associations.
An alternative solution would be to actually declare subclasses with a generated id (say via Lombok) and use some automatic mapping tool (say the remondis mapper), with options to overcome the differences in Dates/collections.
But I'm listening. Does anyone possess a better solution?

How can I map a nullable tinyint column to a bool where null becomes false in EF Core?

I maintain a suite of applications for a SqlServer database that has no simple creation process, and the various instances in production have slight differences (with regard to nullability of columns and varchar sizes). I am moving the data layer of these applications from EF 6 to EF Core 2.1 to increase platform support and to finally have a straightforward way to create new databases with a consistent layout.
I might take this opportunity to clean up my POCOs somewhat. One pattern I wish to do away with is that the original SqlServer database often uses nullable tinyint columns instead of bit columns with a default constraint on them. These are mapped to byte? rather than bool in my C# code, which I think needlessly complicates their usage. Going forward, I'd like new databases to use bit fields instead, and in many cases it is appropriate for them to be not null and defaulted to 0. I have this working, but I figured that with all the flexibility of EF Core, I should be able to subclass my DbContext and provide different mappings in order to enable the same code to run against the original "legacy" databases where these columns are still nullable tinyints.
Attempt 1
I was hoping to use a ValueConverter<bool, byte?> in my LegacyDbContext subclass to accomplish this (passing it into PropertyBuilder<bool>.HasConversion), until I learnt from the limitations section of the EF Core docs that value converters cannot be used to convert nulls, so I get a System.InvalidOperationException stating:
An exception occurred while reading a database value for property '<tableName>.<columnName>'. The expected type was 'System.Boolean' but the actual value was null.
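For reference, the attempt looked roughly like this (AppDbContext, Order and IsArchived are made-up names; the point is that EF Core never invokes the converter for NULL provider values, hence the exception above):

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Storage.ValueConversion;

public class LegacyDbContext : AppDbContext
{
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);

        var boolToNullableByte = new ValueConverter<bool, byte?>(
            v => v ? (byte?)1 : (byte?)0,       // model -> provider
            v => v.HasValue && v.Value != 0);   // provider -> model; skipped entirely for NULL

        modelBuilder.Entity<Order>()
            .Property(o => o.IsArchived)
            .HasConversion(boolToNullableByte);
    }
}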
Attempt 2
I tried registering a custom Microsoft.EntityFrameworkCore.Metadata.Internal.IEntityMaterializerSource implementation to take care of this conversion for me, but I couldn't get that working either; I think it is invoked too late for me to do the type conversion...
Surely it is possible to replace nulls with 0 prior to the type conversion from byte to bool, or there is some other mechanism that allows my new POCOs to use bools while still mapping back to the nullable tinyints of older databases?

Transparently converting nullable values into non-nullable values in Entity Framework

I am currently in the process of attempting to integrate an Entity Framework application with a legacy database that is about ten years old or so. One of the many problems with this database (alongside having no relations or constraints whatsoever) is that almost every column is nullable, even though in almost all cases this wouldn't make sense.
Invariably, I will encounter an exception along these lines:
The 'SortOrder' property on 'MyRecord' could not be set to a 'null' value. You must set this property to a non-null value of type 'Int32'.
I have seen many questions that refer to the exception above, but these all seem to be genuine mistakes where the developer did not write classes that properly represent the data in the database. I would like to deliberately write a class that does not properly represent the data in the database. I am fully aware that this is against the rules of Entity Framework, and that is most likely why I am having so much difficulty doing it.
It is not possible to change the schema at this point as it will break existing applications. It is also not possible to fix the data, because new data will be inserted by old applications. I would like to map the database with Entity Framework as it should be, slowly move all the applications over the next couple of years or so to rely on it for data access before finally being able to move on to the database redesign phase.
One method I have used to get around this is to transparently proxy the variable:
internal int? SortOrderInternal { get; set; }

public int SortOrder
{
    // Treat a database null as 0; writes always store a concrete value.
    get { return this.SortOrderInternal ?? 0; }
    set { this.SortOrderInternal = value; }
}
I can then map the field in CodeFirst:
entity.Ignore(model => model.SortOrder);
entity.Property(model => model.SortOrderInternal).HasColumnName("SortOrder");
Using the internal keyword in this method does allow me to nicely encapsulate this nastiness so I can at the very least keep it from leaking outside my data access assembly.
But unfortunately I am now unable to use the proxy property in a query, as a NotSupportedException will be thrown:
The specified type member 'SortOrder' is not supported in LINQ to Entities. Only initializers, entity members, and entity navigation properties are supported.
Perhaps it might be possible to transparently rewrite the expression once it is received by the DbSet? I would be interested to hear if this would even work; I'm not skilled enough with expression trees to say. I have so far been unsuccessful in finding a method in DbSet that I could override to manipulate the expression, but I'm not above making a new class that implements IDbSet and passes through to DbSet, horrible though that would be.
Whilst investigating the stack trace, I found a reference to an internal Entity Framework concept called a Shaper, which appears to be the thing that takes the data coming back from the database and materializes it into objects. A quick bit of Googling on this concept doesn't yield anything, but investigating System.Data.Entity.dll with dotPeek indicates that this would certainly be something that would help me... assuming Shaper<T> wasn't internal and sealed. I'm almost certainly barking up the wrong tree here, but I'd be interested to hear if anyone has encountered this before.
That's a fairly tough nut to crack, but you might be able to do it via Microsoft.Linq.Translations.
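To illustrate the idea (adapted to the SortOrder example above; double-check the exact names against the library's documentation): you declare a compiled expression that tells the library how to translate the CLR property into something LINQ to Entities understands, and queries opt in with WithTranslations().

using Microsoft.Linq.Translations;

public partial class MyRecord
{
    internal int? SortOrderInternal { get; set; }   // still mapped to the "SortOrder" column

    private static readonly CompiledExpression<MyRecord, int> sortOrderExpression =
        DefaultTranslationOf<MyRecord>
            .Property(r => r.SortOrder)
            .Is(r => r.SortOrderInternal ?? 0);     // what the query provider will see

    public int SortOrder
    {
        get { return sortOrderExpression.Evaluate(this); }
        set { this.SortOrderInternal = value; }
    }
}

A query such as context.MyRecords.Where(r => r.SortOrder > 0).WithTranslations().ToList() then has SortOrder rewritten to the translated expression before it ever reaches LINQ to Entities, so the NotSupportedException goes away; the Ignore/HasColumnName mapping shown earlier stays as it is.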

Mocked datacontext and foreign keys/navigation properties

I have a problem that I haven't been able to find a solution to, and I wonder if anyone could give some advice.
I have a mocked data context/object set, built through interfaces and T4 templates with some Ninject magic, with the intent of having in-memory data sets for unit testing.
However, what should you do with the foreign key values/navigation properties?
Let's say I have hotels and customers: ctx.Hotels has some values, but Customer.Hotel does not. The getter looks something like this for a one-to-one relationship:
return ((IEntityWithRelationships)this).RelationshipManager.GetRelatedReference<Hotel>("HotelModel.FK_Customers_Hotels", "Hotel").Value;
and one-to-many:
return ((IEntityWithRelationships)this).RelationshipManager.GetRelatedCollection<BookingRow>("HotelModel.FK_BookingRows_Customers", "BookingRow");
My skill level just isn't enough to even understand what is going on here.
[edit:]
Great Master Julia Lerman confirms that this is a dead end: you can't properly mock EntityObjects; you need POCOs for that.
Mocking ObjectContext when you are using EntityObject-based entities is mostly impossible because, for example, RelationshipManager is a real class which cannot be replaced with your mock. Your entities are also heavily dependent on non-mockable EF code.
Note: "mostly" because you can mock it, but you need a special framework that intercepts calls to real objects and forwards them to your methods instead. That is possible only with TypeMock Isolator or MS Moles.
By the way, mocking EF code is something you don't want to do; go through this answer and the linked answers. Some of them target the newer EF API, but the problems are still the same.
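For contrast, a rough sketch of the POCO shape that makes in-memory testing possible, using the Hotel/Customer/BookingRow names from the question (properties simplified and assumed): navigation properties are just references and plain collections, so a test can wire them up by hand instead of going through RelationshipManager.

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Hotel
{
    public int HotelId { get; set; }
    public virtual ICollection<Customer> Customers { get; set; }
}

public class Customer
{
    public int CustomerId { get; set; }
    public virtual Hotel Hotel { get; set; }                         // reference navigation
    public virtual ICollection<BookingRow> BookingRows { get; set; } // collection navigation
}

public class BookingRow
{
    public int BookingRowId { get; set; }
    public virtual Customer Customer { get; set; }
}

[TestClass]
public class InMemoryRelationshipTests
{
    [TestMethod]
    public void Navigation_properties_can_be_wired_up_by_hand()
    {
        // Relationship fix-up is done manually (or by a small helper), not by the framework.
        var hotel = new Hotel { HotelId = 1, Customers = new List<Customer>() };
        var customer = new Customer { CustomerId = 1, Hotel = hotel, BookingRows = new List<BookingRow>() };
        hotel.Customers.Add(customer);

        Assert.AreSame(hotel, customer.Hotel);
        Assert.IsTrue(hotel.Customers.Contains(customer));
    }
}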