MS Sync Framework 2.1 - Delete on client without affecting server

I am using Sync Framework 2.1 to sync multiple SQL Express databases to a central database server over a WCF service. It works fine. Now I have a requirement that certain records in a table in the client database (source), which is set to "upload only", need to be deleted as part of archiving, but this should not delete the corresponding records on the server (destination). Please note that the records will already have been synced. How can I do this with the Sync framework, i.e. restrict the sync to new records only so that deletes are not propagated? Is this possible?
Thanks.

That should be possible: you can intercept the changes coming from the source in the ChangesSelected event and remove the rows you don't want to sync, preventing them from flowing through to the destination.
Have a look at this post: Manipulating Change Dataset in SyncFx

Related

How to set up 2-way syncing between Airtable base and multiple external sources

Is this even possible? I would like to update my company's external Postgres database, as well as potentially updating their information in Salesforce, whenever a new record is added in Airtable.
I'd also like to set up a sync so that updates in the Postgres database (and again, possibly Salesforce) are reflected in Airtable.

PostgreSQL - Periodically copying data from one database to another

I'm trying to set up an architecture with 2 databases, say preview and live, that have the exact same schemas. The use case is that edits can be made to the preview database and then pushed to the live database after they are vetted and approved. The production application would read from the live database.
What would be the most appropriate way to push all data from the preview database to the live database without bringing the live database down? Ideally the copy from preview to live would be an atomic transaction.
I've worked with this type of setup in MSSQL, but I'm fairly new to Postgres. So I'm open to hearing other ways to architect this (with Schemas perhaps?).
EDIT: The main reason to use separate databases is that I may need more than 1 target database (not just a single "live" database). I also may need to switch target databases on the fly without altering the source database schema.
I think what you're looking for is a "hot standby". This would be a separate instance of PostgreSQL, possibly on the same server but usually not, which is a near-real-time replica of the primary server.
In broad strokes, this is done by shipping the binary transaction logs from the primary server to the backup server, and then "replaying" them there. The exact mechanism for transmitting the logs may vary depending on your requirements.
Fortunately, the docs on this are excellent:
https://www.postgresql.org/docs/9.3/static/warm-standby.html
https://www.postgresql.org/docs/9.0/static/hot-standby.html

Core Data Sync - Tracking Deleted Objects

I'm setting up a basic sync service for an iPad application I'm developing. The goal is to have data consistent throughout several instances of the iPad app, as well as having a read-only version of the data on the web, hence rolling a custom solution.
The current flow is this:
Each entity has a 'created', 'modified' and 'UUID' field which are automatically updated by Core Data
On sync, each entity with a created or modified date after the last sync date is serialised into JSON and sent to the server
The server persists any changes to a MySQL database using the client-generated UUIDs as PKs (if there's a conflict, it just uses the most recently modified entity as the 'true' version, nothing fancy there) and sends back any updated entities to the client
The client then merges these changes back into its Core Data DB
This all seems to be working fine. My problem is how to track deleted objects using this method. I'm guessing I can add a 'deleted' flag to each entity and set it whenever a client deletes something; I can then push that change to the server with the rest of the sync data. Once the sync is complete, the client can actually delete these entities. My questions are:
Can I override Core Data's delete methods to automatically set this flag?
Will this require keeping all deleted entities indefinitely on the server? We'll have no way of knowing when every client has synced and actually deleted each entity (I'm not currently tracking client instances)
Is there a better way of doing this?
How about keeping a delta history table with the UUID and a created/updated/deleted field, maybe with a revision number for each change? That way you keep a small checklist of changes since your last successful sync.
If you delete an object, you can add an entry to the delta history table with the deleted UUID and mark it deleted. The same goes for created and updated objects; you then only need to check the delta table to see which items the server needs to delete, update, create, etc. You could even store every revision on the server to support rolling back to a previous version in the future if you feel like it.
I think a revision number is better than relying on the client's clock, which could potentially be changed manually.
You could use NSManagedObjectContext's insertedObjects, updatedObjects and deletedObjects methods to create the delta objects before every save procedure (see the sketch below) :)
My 2 cents
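A minimal Swift sketch of that delta-table idea, under some assumptions: a hypothetical ChangeLog entity with uuid, action and revision attributes, and a uuid attribute on every synced entity. The change sets are read from the context just before it saves:

    import CoreData

    // Logs every insert/update/delete into a hypothetical "ChangeLog" entity
    // (attributes: uuid, action, revision) just before the context saves.
    final class ChangeTracker {
        private var observer: NSObjectProtocol?
        private var nextRevision: Int64 = 0

        init(context: NSManagedObjectContext) {
            observer = NotificationCenter.default.addObserver(
                forName: .NSManagedObjectContextWillSave,
                object: context,
                queue: nil
            ) { [weak self] note in
                guard let self = self,
                      let ctx = note.object as? NSManagedObjectContext else { return }
                self.log(ctx.insertedObjects, action: "created", in: ctx)
                self.log(ctx.updatedObjects, action: "updated", in: ctx)
                self.log(ctx.deletedObjects, action: "deleted", in: ctx)
            }
        }

        private func log(_ objects: Set<NSManagedObject>, action: String,
                         in ctx: NSManagedObjectContext) {
            for object in objects where object.entity.name != "ChangeLog" {
                guard let uuid = object.value(forKey: "uuid") as? String else { continue }
                let entry = NSEntityDescription.insertNewObject(forEntityName: "ChangeLog", into: ctx)
                entry.setValue(uuid, forKey: "uuid")
                entry.setValue(action, forKey: "action")
                entry.setValue(nextRevision, forKey: "revision")
                nextRevision += 1
            }
        }
    }

On the next successful sync you would fetch the ChangeLog rows with a revision higher than the last one the server acknowledged, send them, and then delete them locally.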
Whether you have to keep deleted objects on the server totally depends on your needs. You will need a deleted flag locally to mark objects as deleted for the sync, and maybe also on the server, depending on your desire to roll back.
I have taken care of this problem a few ways before. Here is one possibility:
When a client deletes something, just mark it as deleted locally and delete it from the server during the sync (at which point you can purge it from Core Data). When other clients request that data, send back an HTTP 404 because you don't have the object any more. At that point the client can delete the entity locally. If a client requests a list of things and this object has been deleted, it will simply be missing from the list it gets back, so you can detect that and delete it. I do that in a client by building an array of object IDs when I get a response from the server and deleting any local objects that don't have those IDs.
We have a deleted field on the server, but only to have the ability to roll back in case something is deleted by accident.
Of course you could return deleted objects to the client so it knows what to delete, but if you don't want to keep a copy on the server, you would have to assume that all clients update within some time frame. Then you could garbage-collect after that time frame has expired.
I don't really like that solution, though. If your data is too heavy to ask for all the objects on a complete sync, you could use your current merge strategy for creating and updating, and then run a separate call to check for deleted items. That call could simply ask for all the IDs that the client should have on the device and delete the ones that don't exist. Or it could send all the IDs on the client and get back a list of IDs to delete (see the sketch below).
I think you have to provide more details about the nature of the data if you want a more opinionated suggestion.
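If you go the 'list of IDs' route, a sketch of the reconciliation step could look like this; the uuid attribute, the entity name and the already-parsed set of server IDs are assumptions, not anything prescribed above:

    import CoreData

    // Deletes any local object whose uuid no longer appears in the list of
    // IDs the server returned for this entity type.
    func deleteLocalObjectsMissing(from serverIDs: Set<String>,
                                   entityName: String,
                                   in context: NSManagedObjectContext) throws {
        let request = NSFetchRequest<NSManagedObject>(entityName: entityName)
        for object in try context.fetch(request) {
            guard let uuid = object.value(forKey: "uuid") as? String else { continue }
            if !serverIDs.contains(uuid) {
                // Missing from the server's list, so it was deleted remotely.
                context.delete(object)
            }
        }
        try context.save()
    }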
Regarding your second question: you can design this so that the server doesn't have to keep deleted records around, if you want to. Let each app know whether a given piece of data (based on its UUID) is stored on the server (e.g. add an existsOnServer property or similar). This starts out false when a new item is created in the app, but is set to true once it has been synced to the server for the first time. That way, if the app tries to sync later but the UUID is not found, you can differentiate the two cases: if existsOnServer is false, then this item is newly created and should be synced to the server, but if it is true then it can be taken to mean that it was already on the server before but has now been deleted, so you can delete it in the app too (see the sketch below).
I'd probably argue against this approach, since it seems more error prone to me (I imagine a database or connection error incorrectly being interpreted as a deletion) and keeping records around on your server would usually not be a big deal, but it is possible. The "delta-approach" suggested by dzeikei could be used at the same time, so an update to a record that does not exist on the server signifies that it was deleted, while an insert does not.
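A sketch of that existsOnServer check, assuming the sync code already knows the server reported 'not found' for a given object; the attribute name and the commented-out upload helper are made up for illustration:

    import CoreData

    // Called when the server has no record matching this object's UUID.
    func handleNotFoundOnServer(_ object: NSManagedObject,
                                in context: NSManagedObjectContext) {
        let existsOnServer = object.value(forKey: "existsOnServer") as? Bool ?? false
        if existsOnServer {
            // It has been synced before, so "not found" means it was deleted remotely.
            context.delete(object)
        } else {
            // Never synced: it is a new local record, so push it to the server
            // and set existsOnServer to true once the upload succeeds.
            // uploadToServer(object)  // hypothetical upload routine
        }
    }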
You may take a look at Cross-Platform Data Synchronization by Dan Grover if you haven't. It's a very well written paper regarding synchronization and iOS.
About your questions:
You can avoid actually deleting a record in Core Data by setting a 'deleted' flag instead: just update the record rather than deleting it. You could write your own 'deleting' method that simply updates that flag on the record (a small sketch follows below).
Always keep a last_sync and a last_updated field for each record, on the server and on each client. This way you'll always know when someone changed something anywhere and whether that change has been synced against the 'truth database' or not.
Keeping track of deleted records is a hard thing to do; I guess the best way is to keep a history of syncs for each table, but that is a difficult task. The easiest way, with this 'truth database' kind of configuration, is to flag the records, so yes, you should keep the data on the server as well as on the client.
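For the first point, a small sketch of such a 'deleting' method. The flag is called markedAsDeleted here because NSManagedObject already has an isDeleted property, and the modified attribute matches the one from the question:

    import CoreData

    extension NSManagedObject {
        // Soft delete: flips a flag instead of removing the object, so the
        // deletion can still be pushed to the server on the next sync.
        // Assumes "markedAsDeleted" (Bool) and "modified" (Date) attributes.
        func markAsDeleted() {
            setValue(true, forKey: "markedAsDeleted")
            setValue(Date(), forKey: "modified")
        }
    }

The real context.delete(_:) would only run after the server has acknowledged the deletion.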
During synchronization of data between two tables, some records get deleted when the table rows are the same; when the rows are different, they are synchronized correctly. I used the code shown in the linked image.

Core Data synchronization procedure with Web service

I'm developing an application that needs to be synchronized with a remote database. The database is connected to a web-based application in which the user is able to modify records on a web page (add/remove/modify). The user is also able to modify the same records in the mobile application. So each side (server and client) must keep the SAME latest records when the user presses the sync button in the mobile app. Communication between server and client is provided by web services (SOAP), and I am not able to change that because it is a strict requirement (I know this is the worst way it could be done). Another requirement is that the clients are not able to delete server records.
I am already familiar with calling the web service (NSURLConnection), receiving data (NSData) and parsing it. But I could not figure out what the synchronization procedure should look like. I have already read this answer, which is about how I can extend the server and client sides with some extra attributes (last_updated_date and is_sync).
Then I imagined solving the issue like this:
As a first step, the client tries to update the server records by sending the unsynchronized ones. New records are added directly into the DB, but modified records should be compared based on last_updated_date. At the end of this step, the server has the latest data.
But the problem is how to manage updating the records in the mobile app. I thought of two ways:
The simplest way: create a new MOC, download all records into it and replace the existing one.
Get only the modified records that are not yet on the client side, import them into a new MOC and combine the two. But at this point I have some concerns:
There could be two items that are duplicates (old version and updated version).
Deleted items could still be located in the main MOC.
I have to connect multiple relationships across the MOCs (a new record could have more than 4 relationships with old records).
So I hope you can help me with other ideas, or tell me which approach is best.
Syncing data is a non-trivial task.
There are several levels of synchronization. Based on your question I am guessing you just need to push changes back to a server. In that case I would suggest catching them during the -save: of the NSManagedObjectContext. Just before the -save: you can query the NSManagedObjectContext and ask it for the objects that have been created, updated and deleted. From there you can build a request to post back to your web service.
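A sketch of that save-time capture, assuming the payload is a plain dictionary that your web-service layer turns into a request; copying every attribute by name is just one possible serialization:

    import CoreData

    // Collects the context's pending changes right before -save: so they can
    // be posted to the web service.
    func pendingChangesPayload(for context: NSManagedObjectContext) -> [String: [[String: Any]]] {
        func serialize(_ objects: Set<NSManagedObject>) -> [[String: Any]] {
            objects.map { object -> [String: Any] in
                var dict: [String: Any] = ["entity": object.entity.name ?? ""]
                for name in object.entity.attributesByName.keys {
                    dict[name] = object.value(forKey: name)
                }
                return dict
            }
        }
        return [
            "created": serialize(context.insertedObjects),
            "updated": serialize(context.updatedObjects),
            "deleted": serialize(context.deletedObjects)
        ]
    }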
Dealing with merges, however, is far more complicated and I suggest you deal with them on the server.
As for your relationship question; I suggest you open a second question for that so that there is no confusion.
Update
Once the server has finished the merge it pushes the new "truth" to the client. The client should take these updated records and merge them into its own changes. This merge is fairly simple:
Look for an existing record using a uniqueID.
If the record exists then update it.
If the record does not exist then create it.
Ignoring performance for the moment, this is fairly straightforward (a sketch follows below):
Set up a loop over the new data coming in.
Set up an NSPredicate to identify the record to be updated/created.
Run your fetch request.
If the record exists update it.
If it doesn't then create it.
Once you get this working with a full round trip then you can start looking at performance, etc. Step one is to get it to work :)
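A Swift sketch of that loop, assuming the incoming data has already been parsed into dictionaries keyed by the model's attribute names and that uuid is the unique ID:

    import CoreData

    // Find-or-create merge: update an existing record matched by uuid,
    // otherwise insert a new one.
    func merge(records: [[String: Any]],
               entityName: String,
               into context: NSManagedObjectContext) throws {
        for record in records {
            guard let uuid = record["uuid"] as? String else { continue }

            let request = NSFetchRequest<NSManagedObject>(entityName: entityName)
            request.predicate = NSPredicate(format: "uuid == %@", uuid)
            request.fetchLimit = 1

            // Update the existing object if it exists, otherwise create it.
            let object = try context.fetch(request).first
                ?? NSEntityDescription.insertNewObject(forEntityName: entityName, into: context)

            for (key, value) in record {
                object.setValue(value, forKey: key)
            }
        }
        try context.save()
    }

Once this round trip works, the per-record fetch inside the loop is the first thing to optimize, for example by fetching all matching uuids in a single request.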

Syncing iOS Core Data with remote server which sends XML

My app parses XML from a remote server and stores the objects in Core Data (SQLite storage), so that the user can browse the material when OFFLINE by reading from local storage.
The user may make changes to objects while browsing offline, and these get stored locally in the Core Data SQLite store. Another user makes some changes to an object on the remote server, and they are stored there. Now, when I detect an internet connection, my app should sync my local storage with the remote server. That means the remote server is updated with the changes I made to my Core Data (SQLite) storage while I was offline, and my local Core Data (SQLite) storage is updated with whatever changes the other user made on the remote server.
For example, there is a forum and it is stored in my local storage so that I can read and reply while I am travelling. Later, when the internet is accessible, my app should automatically push all my replies stored in Core Data to the remote server and also bring other posts from the remote server into my local storage.
The remote server sends XML, which I'm parsing and storing in Core Data. My problem is how to sync it.
How does two-way communication happen when there is a change?
How do I sync only the data that has changed, rather than IMPORTING the whole remote server DB (and vice versa)?
I know one way to do it:
Add one more field to your local and server databases: a timestamp.
When the user changes data in the local database, set the timestamp to the current time.
Do the same on the server, i.e. when someone edits data on the server, set its timestamp to the current time.
When the user connects to the internet, compare the local timestamp to the server timestamp:
Case 1: both are the same - nothing to do.
Case 2: local time > server time - use SQL to get all the data with a timestamp greater than the server timestamp and upload it to the server (see the sketch below).
Case 3: local < server - get all the records with a timestamp greater than the local timestamp and add them to the local database.
I am not sure if there is a better way, but this surely works.
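For case 2, a Swift sketch of the 'everything newer than the server timestamp' fetch, assuming each entity carries a timestamp (Date) attribute as described above:

    import CoreData

    // Case 2: local > server. Fetch everything modified after the server's
    // timestamp so it can be uploaded.
    func changesToUpload(after serverTimestamp: Date,
                         entityName: String,
                         in context: NSManagedObjectContext) throws -> [NSManagedObject] {
        let request = NSFetchRequest<NSManagedObject>(entityName: entityName)
        request.predicate = NSPredicate(format: "timestamp > %@", serverTimestamp as NSDate)
        request.sortDescriptors = [NSSortDescriptor(key: "timestamp", ascending: true)]
        return try context.fetch(request)
    }

Case 3 is the mirror image: ask the server for its records with a timestamp newer than your newest local one and merge them in.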
One solution could be:
the iPhone syncs its changes to the server
the server merges the new and old data
the iPhone gets the new changes (from the merge) back from the server
So let the server be the master, which knows how to merge data, and have the clients only download the data incrementally after changes.
Transactions. You want to follow the ACID rules for transactions. Essentially, you have to make sure that data you have not yet refreshed locally has not been updated on the server before you apply your local write.
So the easiest way is to have the user get the most recent update from the server, then overwrite it, using timestamps to make sure that no other update happens during that process. Even better, use blocking (locks) to ensure that nothing else happens in the meantime.
If you Google transactions or ACID, there will be a lot of info out there. It's a big area in RDBMS environments, where many users can corrupt the data and locks must be held between writes and updates.
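A client-side sketch of that timestamp guard, assuming the server returns the record's current timestamp and the write is only sent when nothing has changed since the last refresh; all names here are made up for illustration:

    import Foundation

    struct ConflictError: Error {}

    // Optimistic check: refuse to overwrite if the server's copy changed
    // after the version we last pulled.
    func pushEdit(localBaselineTimestamp: Date,
                  currentServerTimestamp: Date,
                  upload: () throws -> Void) throws {
        guard currentServerTimestamp <= localBaselineTimestamp else {
            // Someone updated the record in the meantime: refresh, merge and retry.
            throw ConflictError()
        }
        try upload()
    }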