In my Web API project I have this situation: when the user inserts data into the database, it has an expiration limit, which means that in, say, 3 days all information about this certain user has to be gone. I'm not sure how to handle this; I need to create something that stays running in the background, checking the current date against the expiration date. I found something about it, but it's not quite what I want. Thanks!
Here is the use case:
I am using AFIncrementalStore, in a fairly standard way
When offline, the user is still able to update some records
I set up my own queue to upload edited records and process the queue when back online
When back online, I also refetch data
I want to make sure that my updated records don't get re-updated with the old data from the server when back online
Whenever I edit a record, I flag it in Core Data as 'edited', and clear the flag only when it has been successfully sent to the server
The goal is:
when I get results from the server, if the results already exist in Core Data but are flagged as 'updated' or 'deleted', I don't want them to be refreshed with values from the server
I am looking for the best design to achieve that, out of the box if possible. I would like to avoid subclassing.
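For what it's worth, here is a minimal sketch of the flag-guard idea in plain Core Data rather than AFIncrementalStore's own hooks; the Record entity and the uuid/edited attributes are assumed names, not from the question:

// Hypothetical merge guard: keep locally edited records, skip the server's copy.
- (void)mergeServerRecord:(NSDictionary *)serverRecord
                inContext:(NSManagedObjectContext *)context
{
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Record"];
    request.predicate = [NSPredicate predicateWithFormat:@"uuid == %@", serverRecord[@"uuid"]];
    NSError *error = nil;
    NSManagedObject *local = [[context executeFetchRequest:request error:&error] lastObject];
    if (local != nil && [[local valueForKey:@"edited"] boolValue]) {
        return; // flagged as edited locally; don't refresh it with server values
    }
    // ...otherwise apply serverRecord's values to local (or insert a new object).
}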
We have an app that is written in PHP. The front end uses JavaScript heavily. Generally, for normal applications that require page reloads, continuous deployment is not really an issue, because:
The app can be deployed with build tags: myapp-4-3-2013-b1, myapp-4-3-2013-b2, etc.
When the user loads a page (we are using the front controller pattern), we can inject the build tag, and the files are loaded from the app directory with the correct build tag.
We do not need to keep the older builds around for too long because as the older requests finish, they will move to the newer build tags.
The risk of the database and user data being incompatible is not very high, as we move people to the newer builds after their requests finish (more on this later).
Now, the problem with our app is that it uses AJAX heavily for smooth page loads. In addition, because there is no page refresh at all when people navigate through the application, people can keep their unsaved data in their current browser session and revisit it as long as the browser has not been refreshed.
This leads to bigger problems if we want to achieve continuous deployment:
We can keep the user's build tag in their session (set when they make the first request) and only switch to newer build tags after they log out and log in again. This is obviously bad, because if something like the database schema or the format of files written to disk changes in a newer build, there is no way to reconcile this.
We force all new requests onto a newer build tag, but there is a possibility that we change client-side JavaScript and will break a lot of things if we force everyone with a session onto the new build tags immediately.
Obviously, the above won't occur with every build we push and hopefully will not happen a lot, but we want to build a foolproof process so that every build which passes our tests can be deployed. At the same time, we want to make sure that no deployed, test-passing build inadvertently breaks clients with running sessions and causes a whole bunch of problems.
I have done some investigation, and what Google does (at least in Google Groups) is push a message out to the clients telling them to refresh the application (browser window). However, in their case, all unsaved client-side data (like an unsaved message, etc.) would be lost.
Given that applications that use AJAX and local data are very common these days, what are some more intelligent ways of handling this that will provide minimal disruption to users/clients?
Let me preface this by saying that I hadn't ever thought of continuous deployment before reading your post, but it does sound like quite a good idea! I've got a few examples where this would be nice.
My thoughts on solving your problem though would be to go for your first suggestion (which is cleaner), and get around the database schema changes like this:
Implement an API service layer in your application that handles the database or file access, which is outside of your build tag environment. For example, you'd have myapp-4-3-2013-b1, and db-services folders.
db-services would provide all interaction with the database through a series of versioned services. For example, registerNewUser2() or processOrder3().
When you needed to change the database schema, you'd provide a new version of that service and upgrade your build tag environment to look at the new version. You'd also provide a legacy service that handles the old schema to new schema upgrade.
For example, say you registered new users like this:
function registerNewUser2($username, $password, $fullname) {
    writeToDB($username, $password, $fullname);
}
And you needed to update the schema to add the user's date of birth:
function registerNewUser3($username, $password, $fullname, $dateofbirth) {
    writeToDB($username, $password, $fullname, $dateofbirth);
}

function registerNewUser2($username, $password, $fullname) {
    // Legacy shim: older build tags keep calling this and get a NULL date of birth.
    registerNewUser3($username, $password, $fullname, NULL);
}
The new build tag will be changed to call registerNewUser3(), while the previous build tag is still using registerNewUser2().
So the old build tag will continue to work; it's just that any new users registered through it will have a NULL date of birth. When an updated build tag is used, the date of birth is written to the database correctly.
You would need to update db-services immediately, as soon as you roll out the new build tag, or even before you roll it out, I guess.
Once you're sure that everyone is using the new version, you can just delete registerNewUser2() from the next version of db-services.
It will be quite complicated to make sure that you are correctly handling the conversion between old and new API calls, but it might be feasible if you're already handling continuous deployment.
I'm setting up a basic sync service for an iPad application I'm developing. The goal is to have data consistent throughout several instances of the iPad app, as well as having a read-only version of the data on the web, hence rolling a custom solution.
The current flow is this:
Each entity has 'created', 'modified' and 'UUID' fields, which are automatically updated by Core Data
On sync, each entity with a created or modified date after the last sync date is serialised into JSON and sent to the server
The server persists any changes to a MySQL database using the client-generated UUIDs as PKs (if there's a conflict, it just uses the most recently modified entity as the 'true' version, nothing fancy there) and sends back any updated entities to the client
The client then merges these changes back into its Core Data DB
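As an illustration, step 2 of a flow like that might be implemented roughly like this (Entity, created, modified, lastSyncDate, and context are assumed names, not from the question):

// Fetch everything created or modified since the last sync (sketch).
NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Entity"];
request.predicate = [NSPredicate predicateWithFormat:
                     @"created > %@ OR modified > %@", lastSyncDate, lastSyncDate];
NSError *error = nil;
NSArray *changed = [context executeFetchRequest:request error:&error];

NSMutableArray *payload = [NSMutableArray array];
for (NSManagedObject *object in changed) {
    NSArray *keys = [[[object entity] attributesByName] allKeys];
    NSMutableDictionary *dict = [[object dictionaryWithValuesForKeys:keys] mutableCopy];
    // NSDate is not valid JSON, so ship the timestamps as seconds since 1970.
    dict[@"created"] = @([[object valueForKey:@"created"] timeIntervalSince1970]);
    dict[@"modified"] = @([[object valueForKey:@"modified"] timeIntervalSince1970]);
    [payload addObject:dict];
}
NSData *json = [NSJSONSerialization dataWithJSONObject:payload options:0 error:&error];
// ...POST json to the server here.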
This all seems to be working fine. My problem is how to track deleted objects with this method. I'm guessing I can add a 'deleted' flag to each entity and set it whenever a client deletes something; I can then push that change to the server with the rest of the sync data. Once the sync is complete, the client can actually delete these entities. My questions are:
Can I override Core Data's delete methods to automatically set this flag?
Will this require keeping all deleted entities indefinitely on the server? We'll have no way of knowing when every client has synced and actually deleted each entity (I'm not currently tracking client instances)
Is there a better way of doing this?
How about keeping a delta history table with UUID and created/updated/deleted fields, maybe with a revision number for each update? That way you keep a small changelog of changes since your last successful sync.
That way, if you delete an object you could add an entry to the delta history table with the deleted UUID and mark it deleted. Same with created and updated objects: you only need to check the delta table to see which items the server needs to delete, update, create, etc. You could even store every revision on the server to support rolling back to a previous version in the future if you feel like it.
I think a revision number is better than relying on the client's clock, which could potentially be changed manually.
You could use NSManagedObjectContext's insertedObjects, updatedObjects, and deletedObjects methods to create the delta objects before every save, as sketched below :)
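Something like this, observing NSManagedObjectContextWillSaveNotification (DeltaEntry and logDeltaFor:change: are hypothetical names for the history row and the helper that writes it):

// Register once, e.g. when the Core Data stack is set up:
// [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(contextWillSave:)
//     name:NSManagedObjectContextWillSaveNotification object:context];

- (void)contextWillSave:(NSNotification *)note
{
    NSManagedObjectContext *context = [note object];
    // logDeltaFor:change: would write a DeltaEntry row with the object's
    // UUID, the change type, and the next revision number (all hypothetical).
    for (NSManagedObject *object in [context insertedObjects]) {
        [self logDeltaFor:object change:@"created"];
    }
    for (NSManagedObject *object in [context updatedObjects]) {
        [self logDeltaFor:object change:@"updated"];
    }
    for (NSManagedObject *object in [context deletedObjects]) {
        [self logDeltaFor:object change:@"deleted"];
    }
}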
My 2 cents
Whether you have to keep deleted objects on the server totally depends on your needs. You will need a deleted flag locally to mark objects as deleted for the sync, and maybe also on the server, depending on your desire to roll back.
I have taken care of this problem a few ways before. Here is one possibility:
When a client deletes something, just mark it as deleted locally and delete it from the server during the sync (at which point you can purge it from Core Data). When other clients request to access that data, send back an HTTP 404 because you don't have the object any more; at that point the client can delete the entity locally. If a client requests a list of things and this object has been deleted, it will just be missing from the list it gets back, so you can detect that and delete it. I do that in a client by building an array of object IDs when I get a response from the server and deleting any local objects that don't have those IDs.
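The list-diffing part of that might look something like this (Thing, uuid, serverIDs, and context are assumed names):

// Delete any local objects whose IDs did not come back from the server (sketch).
NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Thing"];
request.predicate = [NSPredicate predicateWithFormat:@"NOT (uuid IN %@)", serverIDs];
NSError *error = nil;
NSArray *orphans = [context executeFetchRequest:request error:&error];
for (NSManagedObject *orphan in orphans) {
    [context deleteObject:orphan]; // gone on the server, so purge it locally
}
[context save:&error];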
We have a deleted field on the server, but just to have the ability to roll back in case something is deleted by accident.
Of course you could return deleted objects to the client so it knows to delete them, but if you don't want to keep a copy on the server, you would have to make some assumption that the clients will all update within a certain time frame. Then you could garbage-collect after that time frame has expired.
I don't really like that solution, though. If your data is too heavy to ask for all the objects in a complete sync, you could use your current merge strategy for creating and updating, and then run a separate call to check for deleted items. That call could simply ask for all the IDs that the client should have on the device and delete the ones that don't exist. Or it could send all the IDs on the client and get back a list of IDs to delete.
I think you have to provide more details about the nature of the data if you want a more opinionated suggestion.
Regarding your second question: you can design this so that the server doesn't have to keep deleted records around, if you want to. Let each app know whether a given piece of data (based on its UUID) is stored on the server (e.g. add an existsOnServer property or similar). This starts out false when a new item is created in the app, but is set to true once it has been synced to the server for the first time. That way, if the app tries to sync later but the UUID is not found, you can differentiate the two cases: if existsOnServer is false, then this item is newly created and should be synced up to the server; but if it is true, then it can be taken to mean that it was already on the server before but has now been deleted, so you can delete it in the app too.
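In code, the two cases might be told apart like this (record, context, and uploadRecord: are assumed names):

// After a sync attempt, the server reported it has no record with this UUID.
if (![[record valueForKey:@"existsOnServer"] boolValue]) {
    // Never been synced: this is a new local item, push it up.
    [self uploadRecord:record]; // hypothetical upload helper
} else {
    // Was on the server before but is gone now: it was deleted remotely.
    [context deleteObject:record];
}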
I'd probably argue against this approach, since it seems more error-prone to me (I can imagine a database or connection error being incorrectly interpreted as a deletion), and keeping records around on your server would usually not be a big deal, but it is possible. The "delta approach" suggested by dzeikei could be used at the same time, so that an update to a record that does not exist on the server signifies that it was deleted, while an insert does not.
You may take a look at Cross-Platform Data Synchronization by Dan Grover if you haven't. It's a very well written paper regarding synchronization and iOS.
About your questions:
You can avoid deleting a record in Core Data by setting a 'deleted' flag: just update the record instead of deleting it. You could write your own 'delete' method that actually just updates the flag on the record.
Always keep a last_sync and a last_updated value for each record, on the server and on each client. This way you'll always know when someone changed something anywhere, and whether that change was synced against the 'truth database' or not.
Keeping track of deleted records is a hard thing to do. I guess the best way is to keep track of the history of syncs for each table, but that is a difficult task. The easiest way, using this 'truth database' kind of configuration, is to flag the records, so yes: you should keep the data on the server as well as on the client.
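A soft-delete helper along those lines, on an NSManagedObject subclass, might look like this sketch (deletedFlag and modified are assumed attribute names; an attribute literally named deleted can clash with NSManagedObject's built-in isDeleted, hence the name):

// Mark as deleted instead of calling deleteObject: right away.
- (void)markAsDeleted
{
    [self setValue:@YES forKey:@"deletedFlag"];
    [self setValue:[NSDate date] forKey:@"modified"];
    // The real [context deleteObject:self] happens only after a successful sync.
}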
During synchronization of data between two tables, some records get deleted when the table rows are the same, and when the rows are different they synchronize correctly. I used this code: click here on image
I am still learning Xcode and Objective-C. I build apps for the iPhone environment only.
However, I need to build an application with an existing pre-filled SQL database.
For pre-filling the database I would rather not use code in the distributed app; I would prefer to have a separate app for doing that.
The reason is that the app could then download just the updated database, rather than a whole code update.
So, my questions are:
Is this a possible scenario?
If yes, what kind of application should I build in Xcode for pre-filling the database?
Thanks!
There's no reason that you can't have one app that both uses the database and downloads updates. Keeping the database updated without downloading the whole thing is pretty simple.
If you record the creation and modification timestamps of rows in the database on the server and keep track of those same modification timestamps on the device, updating the database works like this:
The device determines the latest modification timestamp it has for a given table. We'll call it latestTimestamp. It sends latestTimestamp to the server.
The server compares the latestTimestamp to the creation and modification timestamps in the database. The server sends back data based on the comparison result:
If the modification timestamp is earlier than latestTimestamp, the server doesn't need to send the record; the device already has it;
If the modification timestamp is later than latestTimestamp and the creation timestamp is earlier than latestTimestamp, it sends the record back noting that it is to be updated in the device database;
If the modification timestamp is later than latestTimestamp and the creation timestamp is later than latestTimestamp, it sends the record back noting that it is to be added in the device database.
Lastly, the server database needs to keep track of deleted records, recording a deletion timestamp for each one. If the deletion timestamp is later than latestTimestamp, the server sends back that the record needs to be deleted.
Obviously it gets a bit more complicated when you have a variety of connected tables, but as long as things are sent back in the correct order, it works great.
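On the device, applying such a categorized response might look roughly like this sketch (the payload keys and the row helpers are assumptions, not part of the answer):

- (void)applySyncResponse:(NSDictionary *)response
{
    // insertRow:, updateRow: and deleteRowWithID: are hypothetical local-DB helpers.
    for (NSDictionary *row in response[@"added"]) {
        [self insertRow:row];
    }
    for (NSDictionary *row in response[@"updated"]) {
        [self updateRow:row];
    }
    for (NSString *recordID in response[@"deleted"]) {
        [self deleteRowWithID:recordID];
    }
    // Remember the newest modification timestamp for the next sync request.
    [self saveLatestTimestamp:response[@"latestTimestamp"]];
}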
Use asynchronous data requests (the ASIHTTPRequest library makes it a breeze) and update the data in the background while the user uses the app. If it's essential that the data be updated prior to any interaction with it, you can display an activity indicator and have the user wait.
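For instance, a background fetch with ASIHTTPRequest can be as small as this sketch (the URL is a placeholder):

- (void)fetchUpdatesInBackground
{
    NSURL *url = [NSURL URLWithString:@"https://example.com/api/updates"]; // placeholder URL
    ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
    [request setDelegate:self];
    [request startAsynchronous]; // returns immediately; the UI stays responsive
}

- (void)requestFinished:(ASIHTTPRequest *)request
{
    // Parse [request responseData] and merge it into the local database here.
}

- (void)requestFailed:(ASIHTTPRequest *)request
{
    NSLog(@"Update fetch failed: %@", [request error]);
}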
No need at all for a separate app.
I would discourage you from doing that. Whether it is an app whose sole purpose is pre-filling the database or a normal app, the Apple review team treats them with the same procedure, leaving the developer waiting for weeks before the app is finally available on the App Store.
Besides, as far as I know, communication between apps is still strictly limited. If the data you would like to transfer between your main app and your DB app is larger than a few lines of, let's say, NSString, it might not be technically feasible.
I have a fairly long HTML form that the user fills out. After filling it out, the user is given a preview of the data they are submitting. From there they are able to commit the data into the system or go back and edit it. I'm wondering what the best approach to handling this preview step is. Some ideas I had are:
Store the form data in a cookie for the preview
Store the form data in a session
Put the data in the DB, with a status column indicating it's a preview
What do you usually do when creating a preview like this? Are there other issues to consider?
Put the data in hidden fields (<input type="hidden">).
Why not cookie or session?
- If the user decides to discard this data, he may just navigate to another page. When he returns later and sees the data intact, he may be surprised.
Why not database?
- If the user just closes the browser, who cleans up the data in your DB? I would rather not write a cron job for this.
I'm not sure if it's best practice, but when I did this task I put the data in a session. I expected the user to preview and submit/re-edit the data during just a single session, so the session was enough for me.
If you want your preview to persist on your user's machine, you should use a cookie: that way the user doesn't have to submit/re-edit the preview during a single session, but can close the browser between these operations and then return to the preview in the next session. Using this approach, you have to consider that the user can deny cookies in his browser. That's why people usually combine sessions with cookies.
Putting the data in a database (with a status column) is not necessary unless you want to track and store the previews and edit operations somehow. You can imagine the database as a drawer in your desk: you put papers there with whatever you want to store and find later. If you're just making a preview draft, and after the result is submitted only the final version is stored in the drawer/database while the preview is crumpled and thrown away, then you won't put the preview in the database. But if for some reason you think you will later go through the drafts, then they have to be stored in a database.
I'm not sure if it's clear with my English, but I did my best :D
I'd gauge it based on how difficult the form was to fill out in the first place. If it's a lengthy process (like information for a mortgage or something) and you have user logins, you may want to provide them an opportunity to save the uncompleted form and come back to it later.
A session is only good (depending on your setup) for tasks that will take less than an hour. Manual input of data (like CD/DVD cataloging) that is easy to start and easy to finish is a perfect fit for a session. On the other hand, if the person has to stop and root around for some documents (again, in the case of a mortgage application, or an online tax form, etc.), you'll have a really irate person if the session times out and they have to retype information.
I'd avoid directly injecting content into a cookie, since the data is sent with every subsequent request and, presumably, you already have access to basic session functionality.
If you go with a DB, you will need to timestamp the access (assuming you don't just leave it around with some saved name as determined by your user, like 'My 2008 Mortgage Documents') so you can clean it up later. If the user does save it mid-form, just leave it around until they complete the form or delete it.