I'm getting
System.NotSupportedException: All objects in the EntitySet 'Entities.Message' must have unique primary keys. However, an instance of type 'Model.Message' and an instance of type 'Model.Comment' both have the same primary key value.
but I have no idea what this means.
Using EF4, I have a bunch of entities of type Message. Some of these messages are actually of a subtype, Comment, mapped with table-per-type inheritance. Just
DB.Message.First();
will produce the exception. I have other instances of subtyping where I don't experience problems, but I can't see any discrepancies. Sometimes, though, the problem goes away if I restart the development server, but not always.
Edit:
I've worked out (as I should have before) that the problem is a fault of the stored procedure fetching my Messages. The way this is currently set up is that all the fields pertaining to Message are fetched, while the Comment table is ignored by the sproc. The context then proceeds to muck this up, probably by fetching those Messages that are also Comments again, as you suggested. How to do this properly is the central issue at hand. I've found some indications of a solution at http://social.msdn.microsoft.com/Forums/en-US/adodotnetentityframework/thread/bb0bb421-ba8e-4b35-b7a7-950901adb602.
As you infer, it looks like the Context is fetching a Comment as a Message (not knowing that it is a Comment). Later, you ask for the actual Comment, so the context fetches the Comment. Now you have two object instances in the Context with the same ID: one is a Message and one is a Comment.
It seems that the exception is not thrown until after both objects have been loaded (i.e. when you try to access the Message the second time). If you can find a way to remove the Message from the Context when the Comment is loaded, this may solve your problem.
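As a rough sketch of that idea (assuming DB is your ObjectContext and messageId is the key in question; both names are illustrative, not from your code):
// Detach the stale Message-typed instance before loading the Comment.
var stale = DB.Message.FirstOrDefault(m => m.Id == messageId);
if (stale != null && !(stale is Comment))
{
    DB.Detach(stale); // removes the Message instance from the context's identity map
}
var comment = DB.Message.OfType<Comment>().First(c => c.Id == messageId);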
Another option might be to use the table-per-hierarchy model. This results in a denormalized database design, but at the end of the day you have to use what works.
You might be able to avoid the problem by ensuring that the objects are loaded as Comments first. This way, when you ask for the Message, the Context already knows about it.
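For example, a sketch of that ordering (again assuming DB is the ObjectContext):
// Materialize the Comment-typed rows first, so the context tracks them
// with the correct type before any plain Message query runs.
var comments = DB.Message.OfType<Comment>().ToList();
var messages = DB.Message.ToList();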
Also consider using Composition over Inheritance, such that a Message has 0..1 CommentDetails.
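A minimal sketch of that composition, with illustrative names:
public class Message
{
    public int Id { get; set; }
    public string Body { get; set; }

    // 0..1: null when this message is not a comment.
    public virtual CommentDetails CommentDetails { get; set; }
}

public class CommentDetails
{
    public int MessageId { get; set; } // shared primary key, FK back to Message
    public int ParentMessageId { get; set; }
}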
The final suggestion is to remove the dependency on the Entity Framework from your application code and create a Data Access Layer which references the EF and retrieves your objects. The DAL can turn Entity Framework objects into a different set of entity objects which are easier to use in code. This approach will produce a lot of code overhead, but may be suitable if you cannot get the Entity Framework to produce an entity model which represents your entities in the way you want to work with them.
To summarize, unless Microsoft fixes this issue, there is no solution to your problem which does not involve a rethink of your approach. Unfortunately, the Entity Framework is not ideal, especially for complex entity models; you might be better off creating your own DAL and bypassing the EF altogether.
It sounds like you are pulling two records into memory: one into a Message and one into a Comment.
Possible problems:
There are two physical messages with the same id
The same message is being pulled up as a message and a comment
The same message is being pulled up twice into the same context
That the problem sometimes goes away when you restart points to a problem with cleaning up of the context. Are you using "using" statements?
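For example, a using block that scopes the context to a single unit of work (the context type name here is illustrative):
using (var db = new Entities())
{
    var message = db.Message.First();
    // work with the message...
} // the context and its tracked entities are disposed here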
Do you have functionality for changing from a message to a comment?
I am not an EF kind of guy (busy working with NHibernate, haven't had time to get up to date with EF yet), so I may be totally wrong here, but could the problem be that the two tables (since you are using table-per-type inheritance) have primary keys that collide?
If you check the data in both tables, do primary key values collide?
I'm just trying to get my head around how one goes about updating an entity using CQS. Say the UI allows a user to update several properties of a particular entity, and on submit, in the back-end, an update command is created and dispatched.
The part I'm not quite understanding is:
does the cmd handler receiving the message from the dispatcher then retrieve the existing entity from the DB, map the received stock item properties onto it, and then save? Or
is the retrieval of the existing item done prior to dispatching the cmd msg, with the retrieved entity then attached to the cmd that is dispatched?
My understanding is that CQS allows for an easier transition to CQRS later on (if necessary)? Is that correct?
If that is the case, the problem with 2 above is that queries could be retrieved from a schema looking very different from the command/write schema. Am I missing something?
does the cmd handler receiving the message from the dispatcher then retrieve the existing entity from the DB, map the received stock item properties onto it, and then save
Yes.
If you want to understand CQRS, it will help a lot to read up on DDD -- not that they are necessarily coupled, but because a lot of the literature on CQRS assumes that you are familiar with the DDD vocabulary.
But a rough outline of the responsibility of the command handler (sketched in code after this list) is:
Load the current state of the target of the command
Invoke the command on the target
Persist the changes to the book of record
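A minimal C# sketch of that outline; every name here (UpdateStockItemCommand, IStockItemRepository, and the methods on them) is an illustrative assumption, not a prescribed API:
public class UpdateStockItemCommandHandler
{
    private readonly IStockItemRepository _repository;

    public UpdateStockItemCommandHandler(IStockItemRepository repository)
    {
        _repository = repository;
    }

    public void Handle(UpdateStockItemCommand command)
    {
        // 1. Load the current state of the target of the command.
        var item = _repository.GetById(command.StockItemId);

        // 2. Invoke the command on the target.
        item.UpdateDetails(command.Name, command.Price);

        // 3. Persist the changes to the book of record.
        _repository.Save(item);
    }
}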
My understanding is that CQS allows for an easier transition to CQRS later on (if necessary)?
That's not quite right -- understanding Meyer's distinction between commands and queries makes the CQRS pattern easier to think about, but I'm not convinced that it actually helps in the transition all that much.
If that is the case, the problem with 2 above is that queries could be retrieved from a schema looking very different from the command/write schema. Am I missing something?
Maybe - queries typically run off of a schema that is optimized for query; another way of thinking about it is that the queries are returning different representations of the same entities.
Where things can get tricky is when the command representation and the query representation are decoupled -- aka eventual consistency. In a sense, you are always querying state in the past, but dispatching commands to state in the present. So you will need to have some mechanism to deal with commands that incorrectly assume the target is still in some previous state.
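One common shape for such a mechanism is an expected-version check in the handler; this is an illustrative sketch (the command fields and repository are assumptions, in the same spirit as the handler above):
public void Handle(RenameItemCommand command)
{
    var item = _repository.GetById(command.ItemId);

    // The client built this command from state it queried in the past;
    // reject it if the target has since moved on.
    if (item.Version != command.ExpectedVersion)
        throw new InvalidOperationException(
            $"Item {command.ItemId} is at version {item.Version}, but the command expected {command.ExpectedVersion}.");

    item.Rename(command.NewName);
    _repository.Save(item);
}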
Consider the following scenario where JPA is used for persistence.
A student can be associated with different courses via a web form.
So this form displays different entities (student, course).
The Save button is pushed, the business logic modifies some fields of the entities, but the DB operation fails.
Unfortunately, the entities in memory reflect the changes made by the business logic, and this may create some inconsistency problems.
Is there a pattern useful in similar scenarios?
Possible solutions I thought of, and why I don't like them:
I don't want to revert all the changes made by the business logic in case of a DB exception, because that is an error-prone job.
I don't want to reload the entities after the DB exception in order to be sure they are aligned with the DB; in fact, that operation may fail too.
Alternatively, I could clone the entities, make the changes, and swap the clone with the original entity after a successful commit.
Anyway, I would be more comfortable following a well-established pattern.
The Memento Pattern is a design pattern intended to offer 'rollback' functionality for the state of objects in memory. You need a Caretaker class that asks the subject for a Memento, then attempts the persistence. The Caretaker would then give the Memento back to the subject, telling it to roll back to the state the Memento describes, if necessary.
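A compact sketch of the pattern (shown in C# for brevity; the shape is identical in Java, and all type names are illustrative):
using System;
using System.Collections.Generic;

public class StudentMemento
{
    public string Name { get; }
    public IReadOnlyList<int> CourseIds { get; }

    public StudentMemento(string name, IEnumerable<int> courseIds)
    {
        Name = name;
        CourseIds = new List<int>(courseIds); // defensive copy of the snapshot
    }
}

public class Student
{
    public string Name { get; set; }
    public List<int> CourseIds { get; set; } = new List<int>();

    public StudentMemento CreateMemento() => new StudentMemento(Name, CourseIds);

    public void Restore(StudentMemento memento)
    {
        Name = memento.Name;
        CourseIds = new List<int>(memento.CourseIds);
    }
}

public static class Caretaker
{
    // Snapshot, attempt the persistence, restore the in-memory state on failure.
    public static void SaveWithRollback(Student student, Action persist)
    {
        var memento = student.CreateMemento();
        try
        {
            persist(); // e.g. the transaction commit
        }
        catch
        {
            student.Restore(memento);
            throw;
        }
    }
}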
Background:
The EDMX was built in a pretty straightforward way: I just selected all tables and clicked the "OK" button several times. The thing is, not all the columns are/will be consumed by the system.
What happened
Later on, several nice-to-have columns became "kiss-goodbye" columns (dropped from the DB), and bam, several components stopped functioning as exceptions were thrown from Entity Framework saying "Invalid Column Name xxxxx".
What I want to achieve
Hopefully the legendary gurus of Stack Overflow will kindly shine a light on how to let EF ignore unused columns, even if they are no longer in the DB.
System design sometimes needs some room to grow, so in go all the candidate columns that might not be in use after a while; yet the DB-First approach takes in all the candidates. It should be possible for MS to come up with some kind of policy to ignore unused properties, and the complexity and cost of implementing this mechanism seem not very high.
Below are a few things I tried without any luck; maybe it's the way I tried them, or maybe they're just not the thing to turn this around.
Method #1 tried on my side
Implementing data loading in a more column-specific way, Table1.Select(Column1, 2, 3, 4);
Why I dropped it: it's just too much for me to rewrite, so I didn't even verify whether it solves the problem.
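For reference, the column-specific loading from Method #1 would look something like this in LINQ (table and column names are illustrative); it sidesteps the dropped columns because the generated SQL only selects what the projection names:
// Project only the columns the code actually uses; the generated SQL
// never references the dropped columns.
var rows = context.Table1
    .Select(t => new { t.Column1, t.Column2, t.Column3, t.Column4 })
    .ToList();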
Method #2 tried on my side
Why I dropped it: it just doesn't work
Method #3
DefaultValue=""
Why I dropped it: it just doesn't work; I still got the "invalid column name" exception.
Method #4
Database.SetInitializer
Why I dropped it: it doesn't work
If your database schema changes, you need to update the EDMX model in the Designer. Right click in the white space and click "Update Model from Database...". This needs to be done every time your schema changes.
After a while of research, I now have a workaround, but I still haven't got a real solution to this.
Inject a Data Contract into the EDMX calls so that EF will only retrieve the columns required by the upper layers.
Pros: this is hopefully a straightforward, one-shot-two-birds approach, using the mapper between the Data Contract and the EDMX schema to make EF calls.
Cons: performance is a concern; hopefully reflection in the latest version won't take up too much CPU time. Also, the implementation will have to be extensive to cover all EF calls.
Also, set up a unit test case:
Steps: retrieve the currently-used EDMX from the DLL resource section and the latest EDMX from the DB; if actual doesn't match expected, alert the EDMX owner on the dev team.
Expected result: the on-the-fly exported EDMX.
Actual result: the existing EDMX exported from the DLL resource section.
Hope this helps.
I'm building an application with a domain model using CQRS and domain events concepts (but no event sourcing, just plain old SQL). There was no problem with events of SomethingChanged kind. Then I got stuck in implementing SomethingCreated events.
When I create some entity which is mapped to a table with an identity primary key, I don't know the Id until the entity is persisted. The entity is persistence-ignorant, so when publishing an event from inside the entity, the Id is just not known - it's magically set only after calling context.SaveChanges(). So how/where/when can I put the Id in the event data?
I was thinking of:
Including a reference to the entity in the event. That would work inside the domain, but not necessarily in a distributed environment with multiple autonomous systems communicating by events/messages.
Overriding SaveChanges() to somehow update events enqueued for publishing. But events are meant to be immutable, so this seems very dirty.
Getting rid of identity fields and using GUIDs generated in the entity constructor. This might be the easiest, but it could hurt performance and make other things harder, like debugging or querying (where id = 'B85E62C3-DC56-40C0-852A-49F759AC68FB', no MIN, MAX, etc.). That's what I see in many sample applications.
Hybrid approach - leave the identity alone and use it mainly for foreign keys and faster joins, but use a GUID as the unique identifier by which I pull the entities from the repository in the application (see the sketch below).
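A small sketch of that hybrid shape, with illustrative names:
public class Order
{
    public int Id { get; private set; }        // database identity: FKs and fast joins
    public Guid PublicId { get; private set; } // known at construction time

    public Order()
    {
        PublicId = Guid.NewGuid(); // usable in events before SaveChanges runs
    }
}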
Personally I like GUIDs for unique identifiers, especially in multi-user, distributed environments where numeric ids cause problems. As such, I never use database generated identity columns/properties and this problem goes away.
Short of that, since you are following CQRS, you undoubtedly have a CreateSomethingCommand and a corresponding CreateSomethingCommandHandler that actually carries out the steps required to create the new instance and persist the new object using the repository (via context.SaveChanges). I would raise the SomethingCreated event here rather than in the domain object itself.
For one, this solves your problem because the command handler can wait for the database operation to complete, pull out the identity value, update the object, and then pass the identity in the event. But, more importantly, it also addresses the tricky question of exactly when the object is 'created'.
Raising a domain event in the constructor is bad practice, as constructors should be lean and simply perform initialization. Plus, in your model, the object isn't really created until it has an ID assigned. This means there are additional initialization steps required after the constructor has executed. If you have more than one step, do you enforce the order of execution (another anti-pattern), or put a check in each to recognize when they are all done (ooh, smelly)? Hopefully you can see how this can quickly spiral out of hand.
So, my recommendation is to raise the event from the command handler. (NOTE: Even if you switch to GUID identifiers, I'd follow this approach because you should never raise events from constructors.)
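A hedged sketch of what that looks like; the names (CreateSomethingCommand, ISomethingRepository, IEventPublisher) are illustrative assumptions, not a fixed framework API:
public class CreateSomethingCommandHandler
{
    private readonly ISomethingRepository _repository;
    private readonly IEventPublisher _publisher;

    public CreateSomethingCommandHandler(ISomethingRepository repository, IEventPublisher publisher)
    {
        _repository = repository;
        _publisher = publisher;
    }

    public void Handle(CreateSomethingCommand command)
    {
        var something = new Something(command.Name);

        // Persisting assigns the identity value (context.SaveChanges under the hood).
        _repository.Add(something);
        _repository.SaveChanges();

        // Only now is the Id known, so the immutable event can be built with it.
        _publisher.Publish(new SomethingCreated(something.Id, something.Name));
    }
}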
Let's say I have an entity called Product, and this entity is loaded every time a user hits the product information page. Usually I'd save the object in Zend_Cache (memcache) for an hour to avoid hitting the DB on each request, but as far as I understand that's not possible with Doctrine2 entities because of the proxy objects.
So my question is, how can I avoid loading the same entity from the database for each request?
[EDIT]
I tried using Doctrine Cache like this
$categoryService = App_Service_Container::getService('\App\Service\Category');
$cache = $categoryService->getEm()->getConfiguration()->getResultCacheImpl();
$apple = $cache->fetch('apple');
But I get the following error
Warning: require(App/Entity/Proxy/_CG_/App/Entity/Category.php) [function.require]: failed to open stream: No such file or directory in /opt/vhosts/app/price/library/Doctrine/Common/ClassLoader.php on line 163
The same goes for Zend Cache, as you can't serialize the entity because of the proxy class.
You've got several options:
Use Doctrine's built-in result caching
Try just sticking the entity in memcache via Zend_Cache. When you pull it out, you may need to merge() the Product back into the EM so proxies can be dereferenced. If you fetch-join any associations you need to display the product info, and you're only doing reads, this should work fine.
Don't cache the entity at all. Cache whatever output you generate instead.
EDIT: If you don't care about the hydration overhead, you're using MySQL, and your Products and associated tables don't change very often, you might prefer to just rely on the MySQL query cache. It's a fairly blunt instrument, but useful enough to mention.
You might want to try implementing __sleep or __wakeup methods for your entity class, as Doctrine 2 has special requirements and limitations concerning serialization/deserialization of entities (which is what happens when storing them in Zend_Cache).
There is this guidance.
General information about limitations including serialization.
I find this extremely strange, since I just messed around with this myself and didn't have any issues with the proxy object being stored. So I'm guessing your configuration is not set up 100%?
If you do find the issue with your configuration, then be very aware of what timdev said: you MUST merge the object back into the EntityManager, or else you will get weird bugs down the line.
A fourth solution available to you is to retrieve the data as an array instead of an object, but then of course you lose all the functionality connected to your model, which might not be exactly what you wanted.
It seems to me more like a configuration error. Either Proxies have not been generated or there is something wrong with the proxy directory and namespace.
Depending on your configuration, proxies can be generated either automatically or manually. Have your proxies indeed been generated under App/Entity/Proxy? Is this indeed the right directory?
FYI proxies can be manually generated by executing doctrine orm:generate-proxies <dest-dir>
Seconding what timdev says: Doctrine has built-in caching, you want to use it.
I also wonder, from your question, whether you are experiencing actual performance issues or are a victim of overly eager optimisation.