I have some code which encrypts and decrypts data using EF POCO. To implement this I intercepted the SavingChanges and ObjectMaterialized events, and it works (most of the time).
The issue I have is that if I try to read the encrypted object back straight after saving it, the ObjectMaterialized event doesn't fire, because EF isn't going back to the DB to retrieve the record.
Does anyone have any thoughts on how I could improve my approach to cover this situation?
Related
I'm implementing record versioning with OrientDB, but its callback seems to be invoked right after OObjectDatabaseTx#save(Object) is called.
Is there any way to get an additional callback once the record has actually been persisted?
It turned out that ODatabaseListener can be used for this purpose.
I want to save changes on the backend as well, so I want to subclass NSManagedObjectContext, override the save method, loop over all the changed objects and call the appropriate RESTful service. But how can I get the changed / inserted objects?
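Roughly what I have in mind (a sketch only: SyncingManagedObjectContext is a made-up name and the NSLog is a stand-in for the actual service calls; the pending-change sets are standard NSManagedObjectContext properties):

#import <CoreData/CoreData.h>

@interface SyncingManagedObjectContext : NSManagedObjectContext
@end

@implementation SyncingManagedObjectContext

- (BOOL)save:(NSError **)error {
    // Snapshot the pending changes before -save: clears them.
    NSSet *inserted = [self insertedObjects];
    NSSet *updated  = [self updatedObjects];
    NSSet *deleted  = [self deletedObjects];

    BOOL ok = [super save:error];
    if (ok) {
        // Stand-in: walk the three sets here and call the matching RESTful service for each object.
        NSLog(@"synced %lu inserted, %lu updated, %lu deleted",
              (unsigned long)[inserted count],
              (unsigned long)[updated count],
              (unsigned long)[deleted count]);
    }
    return ok;
}

@end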
UPDATE:
I found that with setIncludesPendingChanges: I can get the changed objects, but I still need to set the entity name on the NSFetchRequest. But I want to fetch all the different types of entities. How?
I found a more appropriate solution here, based on NSManagedObjectContextDidSaveNotification: How to sync CoreData and a REST web service asynchronously and at the same time properly propagate any REST errors into the UI
I'm seeing some truly strange behavior with Core Data on iOS.
I have an NSManagedObjectContext for my main thread used to read data from a SQLite persistent store and display it to the user. I also have background processes managed by an NSOperationQueue. These background processes create their own NSManagedObjectContext, fetch data from remote servers, and persist that data to the local Core Data store.
I have registered for NSManagedObjectContextDidSaveNotification and when I receive those notifications I call mergeChangesFromContextDidSaveNotification on the main thread's NSManagedObjectContext (passing the notification object as an argument).
This is all very standard and is the way all the Core Data documentation suggests you deal with multi-threading.
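For reference, the setup looks roughly like this (backgroundContext and mainContext are illustrative names):

// Registered once for each background context:
//   [[NSNotificationCenter defaultCenter] addObserver:self
//            selector:@selector(backgroundContextDidSave:)
//                name:NSManagedObjectContextDidSaveNotification
//              object:backgroundContext];

- (void)backgroundContextDidSave:(NSNotification *)notification {
    // The notification is posted on the background thread, so merge on the main thread.
    [self.mainContext performSelectorOnMainThread:@selector(mergeChangesFromContextDidSaveNotification:)
                                       withObject:notification
                                    waitUntilDone:YES];
}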
Up until recently, I had always been inserting new objects into the data store and not modifying existing ones. This worked fine: after a background thread wrote new data, the merge occurred, I sent a notification to the UI controller and redrew my display, and the display drew correctly.
Recently I made a change so that the background thread both inserts and modifies objects, but all the rest of the pattern remains the same. Now, after the merge, the data in my main thread's NSManagedObjectContext is corrupted. If I try to query for objects, I get nothing back. If I examine objects I already hold a reference to, all their relationships are nil (not faults, but nil). I have checked the SQLite database and the data is all there.
The only solution seems to be to reset the NSManagedObjectContext, which isn't acceptable given the architecture of the application.
OK, a final bit of strangeness. If my background thread only updates attributes (primitives), I don't get this strange behavior. However, if I update the relationships themselves, then I get these empty fetch request results and nil'd out relationships.
What am I missing?
Ok, I figured out my problem.
My problem is the way that mergeChangesFromContextDidSaveNotification merges changes from one thread to another. As I have read several times, it does not do a playback of the changes you made but rather merges the final state together. I didn't realize the ramifications of this. In my background thread, I am deleting an object, but before I delete it I nil out some of its relationships because they are marked as cascade delete and I don't want them deleted.
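In code, the background-thread deletion looked roughly like this (the order entity and its invoice relationship are hypothetical stand-ins):

// Break the Cascade relationship first so the delete rule has nothing to follow, then delete.
[order setValue:nil forKey:@"invoice"];   // a to-one relationship marked Cascade
[backgroundContext deleteObject:order];

NSError *error = nil;
if (![backgroundContext save:&error]) {
    NSLog(@"Background save failed: %@", error);
}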
Apparently when I merge those deleted objects into the main context, Core Data still enforces the delete rules and cascade deletes the related objects in the main context.
I changed my delete rules and now my relationships are no longer getting incorrectly nil'd out. It is unfortunate that I have a relationship which in some cases I want cascade deleted and in others I don't. I will have to work on that.
I'm diving into iOS programming and am using Core Data in my app to persist the game data; however, I'm wondering if my approach is the wrong way to use Core Data. I have three tables in my DB, with each of the first two having a one-to-many relationship with the next (i.e. UserProfile -->> Puzzle Packs -->> Puzzles).
The approach I'm taking to use and persist the data is simple: retrieve an instance of UserProfile using an NSFetchedResultsController and store the UserProfile object as an instance var in my App Delegate. Then I use that UserProfile object throughout the rest of the code to access and modify the state of the puzzle packs and puzzles (representing the user's progress in the game), and whenever I make a change to the objects, I just call the NSManagedObjectContext's save method to update the DB; the context is also stored in the App Delegate.
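Roughly, the access pattern looks like this (I'm showing a plain fetch request for brevity; UserProfile is my entity name, and context is the App Delegate's NSManagedObjectContext):

// Fetch the single UserProfile once and hold on to it.
NSFetchRequest *request = [[NSFetchRequest alloc] init];
[request setEntity:[NSEntityDescription entityForName:@"UserProfile"
                               inManagedObjectContext:context]];

NSError *error = nil;
NSArray *results = [context executeFetchRequest:request error:&error];
self.userProfile = [results lastObject];   // kept on the App Delegate
[request release];

// ... later, after mutating puzzle packs / puzzles reachable from userProfile ...
if (![context save:&error]) {
    NSLog(@"Core Data save failed: %@", error);
}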
My question is, should I be fetching data from the DB anytime I need to access or modify it or is my current approach, of fetching the top object once and calling the save method often, the correct way to use Core Data?
Thanks in advance for your wisdom! I apologize if my question is odd, I'm still a noob.
As long as you're only ever working with these objects through a single NSManagedObjectContext (i.e., you never have to merge changes between contexts), then I don't see a problem with fetching once and saving as needed.
That said, the moment you work with multiple contexts (for example, using NSOperation/NSOperationQueue or other multi-threading techniques, which require separate contexts), you'll want to make sure you merge changes and update other contexts so changes from one thread don't clobber something on another.
Hey, I'm working on the model layer for our app here.
Some of the requirements are like this:
It should work on iPhone OS 3.0+.
The source of our data is a RESTful Rails application.
We should cache the data locally using Core Data.
The client code (our UI controllers) should have as little knowledge about any network stuff as possible and should query/update the model with the Core Data API.
I've checked out the WWDC10 Session 117 on Building a Server-driven User Experience, spent some time checking out the Objective Resource, Core Resource, and RestfulCoreData frameworks.
The Objective Resource framework doesn't talk to Core Data on its own and is merely a REST client implementation. Core Resource and RestfulCoreData both assume you talk to Core Data in your code and solve all the nuts and bolts in the background on the model layer.
All looks okay so far, and initially I thought either Core Resource or RestfulCoreData would cover all of the above requirements, but... there are a couple of things neither of them seems to solve correctly:
The main thread should not be blocked while saving local updates to the server.
If the saving operation fails the error should be propagated to the UI and no changes should be saved to the local Core Data storage.
Core Resource happens to issue all of its requests to the server when you call - (BOOL)save:(NSError **)error on your Managed Object Context, and is therefore able to provide a correct NSError instance if the underlying requests to the server fail somehow. But it blocks the calling thread until the save operation finishes. FAIL.
RestfulCoreData keeps your -save: calls intact and doesn't introduce any additional waiting time for the client thread. It merely watches for the NSManagedObjectContextDidSaveNotification and then issues the corresponding requests to the server in the notification handler. But this way the -save: call always completes successfully (well, provided Core Data is okay with the saved changes) and the client code that actually called it has no way to know the save might have failed to propagate to the server because of some 404 or 421 or whatever server-side error occurred. Even worse, the local storage ends up with the updated data, but the server never knows about the changes. FAIL.
So, I'm looking for a possible solution / common practices in dealing with all these problems:
I don't want the calling thread to block on each -save: call while the network requests happen.
I want to somehow get notifications in the UI that some sync operation went wrong.
I want the actual Core Data save to fail as well if the server requests fail.
Any ideas?
You should really take a look at RestKit (http://restkit.org) for this use case. It is designed to solve the problems of modeling and syncing remote JSON resources to a local Core Data backed cache. It supports an offline mode for working entirely from the cache when there is no network available. All syncing occurs on a background thread (network access, payload parsing, and managed object context merging) and there is a rich set of delegate methods so you can tell what is going on.
There are three basic components:
The UI Action and persisting the change to CoreData
Persisting that change up to the server
Refreshing the UI with the response of the server
An NSOperation + NSOperationQueue will help keep the network requests orderly. A delegate protocol will help your UI classes understand what state the network requests are in, something like:
@protocol NetworkOperationDelegate
- (void)operation:(NSOperation *)op willSendRequest:(NSURLRequest *)request forChangedEntityWithId:(NSManagedObjectID *)entity;
- (void)operation:(NSOperation *)op didSuccessfullySendRequest:(NSURLRequest *)request forChangedEntityWithId:(NSManagedObjectID *)entity;
- (void)operation:(NSOperation *)op encounteredAnError:(NSError *)error afterSendingRequest:(NSURLRequest *)request forChangedEntityWithId:(NSManagedObjectID *)entity;
@end
The protocol format will of course depend on your specific use case but essentially what you're creating is a mechanism by which changes can be "pushed" up to your server.
Next there's the UI loop to consider. To keep your code clean, it would be nice to call save: and have the changes automatically pushed up to the server. You can use NSManagedObjectContextDidSaveNotification for this.
- (void)managedObjectContextDidSave:(NSNotification *)saveNotification {
    NSSet *inserted = [[saveNotification userInfo] objectForKey:NSInsertedObjectsKey];
    for (NSManagedObject *obj in inserted) {
        // create a new NSOperation for this entity which will invoke the appropriate REST API
        // and add it to the operation queue
    }
    // do the same thing for the deleted and updated objects (NSDeletedObjectsKey, NSUpdatedObjectsKey)
}
The computational overhead for inserting the network operations should be rather low; however, if it creates a noticeable lag on the UI you could simply grab the entity IDs out of the save notification and create the operations on a background thread.
If your REST API supports batching, you could even send the entire array across at once and then notify your UI that multiple entities were synchronized.
The only issue I foresee, and for which there is no "real" solution is that the user will not want to wait for their changes to be pushed to the server to be allowed to make more changes. The only good paradigm I have come across is that you allow the user to keep editing objects, and batch their edits together when appropriate, i.e. you do not push on every save notification.
This becomes a sync problem, and not one that is easy to solve. Here's what I'd do: in your iPhone UI use one context, and then using another context (and another thread) download the data from your web service. Once it's all there, go through the sync/importing process recommended in the documents linked below and refresh your UI after everything has imported properly. If things go bad while accessing the network, just roll back the changes in the non-UI context (see the sketch after the links). It's a bunch of work, but I think it's the best way to approach it.
Core Data: Efficiently Importing Data
Core Data: Change Management
Core Data: Multi-Threading with Core Data
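A rough sketch of that flow (the downloadAndImportInto:error: helper is made up, and error handling is elided):

// Separate context for the import, used on the background thread.
NSManagedObjectContext *importContext = [[NSManagedObjectContext alloc] init];
[importContext setPersistentStoreCoordinator:self.persistentStoreCoordinator];

NSError *error = nil;
BOOL imported = [self downloadAndImportInto:importContext error:&error]; // hypothetical helper

if (imported && [importContext save:&error]) {
    // The main (UI) context merges the changes via NSManagedObjectContextDidSaveNotification,
    // after which the UI can be refreshed.
} else {
    // Network or validation failure: discard everything staged in the import context.
    [importContext rollback];
}
[importContext release];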
You need a callback function that runs on the other thread (the one where the actual server interaction happens) and then puts the result code/error info into a semi-global piece of data which is periodically checked by the UI thread. Make sure that the write of the number that serves as the flag is atomic, or you are going to have a race condition: say your error response is 32 bytes, you need an int (which should have atomic access), and you keep that int in the off/false/not-ready state until your larger data block has been written, and only then write "true" to flip the switch, so to speak.
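A minimal sketch of that flag pattern (the names are illustrative; the larger payload is written first, then the flag is flipped with a barrier):

#import <Foundation/Foundation.h>
#import <libkern/OSAtomic.h>

static NSError *gSyncError = nil;       // the larger payload, written by the worker thread only
static volatile int32_t gSyncDone = 0;  // the flag: 0 = not ready, 1 = result available

// Called on the worker thread when the server interaction finishes.
void SyncFinished(NSError *error) {
    gSyncError = [error retain];                        // write the payload first
    OSAtomicCompareAndSwap32Barrier(0, 1, &gSyncDone);  // then flip the flag with a barrier
}

// Polled periodically on the UI thread, e.g. from an NSTimer.
BOOL SyncResultIfReady(NSError **outError) {
    if (!gSyncDone) return NO;
    if (outError) *outError = gSyncError;
    return YES;
}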
For the correlated saving on the client side, you have to either hold on to that data and not save it until you get an OK from the server, or make sure that you have some kind of rollback option, say a way to delete if the server call failed.
Beware that it's never going to be 100% safe unless you do a full two-phase commit procedure (the client save or delete can still fail after the signal from the server), but that's going to cost you at least 2 trips to the server (it might cost you 4 if your sole rollback option is delete).
Ideally, you'd do the whole blocking version of the operation on a separate thread, but you'd need 4.0 for that.