Updating my Windows Store App [duplicate] - deployment

This question already has an answer here:
How to Delete Files and Application Data Container Values in One Go?
I would like to update my Windows Store App, but I need to delete everything in the local state folder of the application:
C:\Users\usr1\AppData\Local\Packages\myApp\LocalState
I am not familiar with the process of updating a Windows Store app, and the official documentation does not clarify how I can clear the folder just as if the app had been reinstalled.
I was wondering if I had to do this by hand (using the version number of the App) or if there was an automated way to perform it.

Let me first give a little background. By design, app data is preserved across the installation of app updates. The reason for this is that the versions of your state (app data) are typically a separate concern from the versions of the app itself. That is, an app could go between versions 1.3 and 4.1 and still use the same app data structures.
The version of the app data is set through Windows.Storage.ApplicationData.SetVersionAsync (http://msdn.microsoft.com/en-us/library/windows/apps/windows.storage.applicationdata.setversionasync.aspx). Where this primarily matters is with roaming data, as this version mark determines what distinct copies of the roaming data are preserved in the cloud.
Now in your case you're talking about local app data, not roaming, in which case you can either use SetVersionAsync or simply maintain a version number in an app data setting yourself. For your scenario (which sounds like clearing out a cache of sorts), using your own setting is probably better, because if/when you use roaming state you won't have to change the app data version with every app update.
If you have a version number of your own, then simply have your updated app check for whatever version you don't want to carry forward. If that version exists, call ApplicationData.ClearAsync(ApplicationDataLocality.Local) (see http://msdn.microsoft.com/en-us/library/windows/apps/hh701425.aspx). You can call ClearAsync with no arguments to clear local, temp, and roaming data all together.
If for any reason you have state that can be migrated instead of rebuilt, then you can use that version number to check for what you need to migrate.
The other way to do this is to use a background task with the ServicingComplete trigger. A guide for that is here: http://msdn.microsoft.com/en-us/library/windows/apps/jj651556.aspx. You'd basically just have the background task call ClearAsync as before and/or migrate the state.

Related

On-demand resources - Force purge/clear of downloaded resource bundles

I have a Swift 4 app that is having issues where on-demand resource packages hang while downloading. The issue appears to be related to different versions of the resource bundles being used in the TestFlight environment as opposed to production. Some users indicate that the packages do not download, but after several days (presumably after the bundles have been purged by the OS), the downloads start working again.
My question is: is there a way to forcibly clear the downloaded bundles rather than waiting for the operating system to remove them at its own leisure? I know it can be done via Xcode (via Purge in the data panel), but I need a solution that is native to the app itself.
(Using the NSBundleResourceRequest.endAccessingResources() function only stops the resources from being used; it does not remove them.)
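For reference, a minimal sketch of the request/release cycle in question (the tag name is hypothetical); endAccessingResources() only tells the system the pack may be purged:

import Foundation

// Hypothetical tag; replace with the asset-pack tags defined in the project.
let request = NSBundleResourceRequest(tags: ["level4Assets"])

// Ask the system for the on-demand resources.
request.beginAccessingResources { error in
    if let error = error {
        print("ODR download failed: \(error)")
        return
    }
    // ... use the downloaded assets here ...

    // This only marks the resources as no longer in use (i.e. purgeable).
    // The OS decides if and when the asset pack is actually deleted.
    request.endAccessingResources()
}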
In short, there is no way to delete ODR programmatically. I asked a similar question on Apple Developer forums and got the answer:
There is currently no way to programmatically purge ODR resources. It is up to the discretion of the OS.
When a new network request for ODR content is initiated, the system does a sort of inventory check, looking at what is currently stored and whether there is room for the new content. This check also takes into account several other metrics, such as which assets are currently being used, which were used recently, whether the asset is being used for the UI, etc.
After this is done, the system determines how many of the chosen assets are to be purged so that there is enough room to fit the new content. If I remember correctly, the OS will try to delete whole asset packs. What this means is that if the system is purging assets, it will purge resources that are grouped together, such as all assets from Level 1, Level 2, and Level 3, provided the user is on Level 4. So the system might purge slightly more space than exactly needed.
If you would like to change your app's ODR assets, you will have to submit an update to your app.

How to Sync iPhone Core Data with web server, and then push to other devices? [closed]

I have been working on a method to sync core data stored in an iPhone application between multiple devices, such as an iPad or a Mac. There are not many (if any at all) sync frameworks for use with Core Data on iOS. However, I have been thinking about the following concept:
A change is made to the local core data store, and the change is saved. (a) If the device is online, it tries to send the changeset to the server, including the device ID of the device which sent the changeset. (b) If the changeset does not reach the server, or if the device is not online, the app will add the change set to a queue to send when it does come online.
The server, sitting in the cloud, merges the specific change sets it receives with its master database.
After a change set (or a queue of change sets) is merged on the cloud server, the server pushes all of those change sets to the other devices registered with the server using some sort of polling system. (I thought of using Apple's push services, but according to the comments this is not a workable system.)
Is there anything fancy that I need to be thinking about? I have looked at REST frameworks such as ObjectiveResource, Core Resource, and RestfulCoreData. Of course, these are all working with Ruby on Rails, which I am not tied to, but it's a place to start. The main requirements I have for my solution are:
Any changes should be sent in the background without pausing the main thread.
It should use as little bandwidth as possible.
I have thought about a number of the challenges:
Making sure that the object IDs for the different data stores on different devices are matched up on the server. That is to say, I will have a table of object IDs and device IDs, which are tied via a reference to the object stored in the database. I will have a record (DatabaseId [unique to this table], ObjectId [unique to the item in the whole database], Datafield1, Datafield2); the ObjectId field will reference another table, AllObjects: (ObjectId, DeviceId, DeviceObjectId). Then, when the device pushes up a change set, it will pass along the device ID and the object ID from the Core Data object in the local data store. My cloud server will then check the object ID and device ID against the AllObjects table and find the record to change in the initial table (see the sketch after this list).
All changes should be timestamped, so that they can be merged.
The device will have to poll the server, without using up too much battery.
The local devices will also need to update anything held in memory if/when changes are received from the server.
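To make the ID-mapping idea above concrete, here is a rough sketch (all type and field names are hypothetical; the real tables would live on the server):

// Hypothetical model of the server-side mapping described above.
struct AllObjectsRow {
    let objectId: String       // unique to the item across the whole database
    let deviceId: String       // which device created/owns this local ID
    let deviceObjectId: String // the Core Data object's ID on that device
}

struct DataRow {
    let databaseId: Int        // unique to this table
    let objectId: String       // references AllObjectsRow.objectId
    let dataField1: String
    let dataField2: String
}

// Resolving an incoming change set: (deviceId, deviceObjectId) -> objectId -> DataRow
func resolve(deviceId: String, deviceObjectId: String,
             mapping: [AllObjectsRow], rows: [DataRow]) -> DataRow? {
    guard let match = mapping.first(where: {
        $0.deviceId == deviceId && $0.deviceObjectId == deviceObjectId
    }) else { return nil }
    return rows.first { $0.objectId == match.objectId }
}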
Is there anything else I am missing here? What kinds of frameworks should I look at to make this possible?
I've done something similar to what you're trying to do. Let me tell you what I've learned and how I did it.
I assume you have a one-to-one relationship between your Core Data object and the model (or db schema) on the server. You simply want to keep the server contents in sync with the clients, but clients can also modify and add data. If I got that right, then keep reading.
I added four fields to assist with synchronization:
sync_status - Add this field to your core data model only. It's used by the app to determine if you have a pending change on the item. I use the following codes: 0 means no changes, 1 means it's queued to be synchronized to the server, and 2 means it's a temporary object and can be purged.
is_deleted - Add this to the server and core data model. A delete event shouldn't actually delete a row from the database or from your client model, because that leaves you with nothing to synchronize back. With this simple boolean flag, you can set is_deleted to 1, synchronize it, and everyone will be happy. You must also modify the code on the server and client to query non-deleted items with "is_deleted=0".
last_modified - Add this to the server and core data model. This field should automatically be updated with the current date and time by the server whenever anything changes on that record. It should never be modified by the client.
guid - Add a globally unique id (see http://en.wikipedia.org/wiki/Globally_unique_identifier) field to the server and core data model. This field becomes the primary key and becomes important when creating new records on the client. Normally your primary key is an incrementing integer on the server, but we have to keep in mind that content could be created offline and synchronized later. The GUID allows us to create a key while being offline.
On the client, add code to set sync_status to 1 on your model object whenever something changes and needs to be synchronized to the server. New model objects must generate a GUID.
Synchronization is a single request. The request contains:
The MAX last_modified time stamp of your model objects. This tells the server you only want changes after this time stamp.
A JSON array containing all items with sync_status=1.
The server gets the request and does this:
It takes the contents from the JSON array and modifies or adds the records it contains. The last_modified field is automatically updated.
The server returns a JSON array containing all objects with a last_modified time stamp greater than the time stamp sent in the request. This will include the objects it just received, which serves as an acknowledgment that the record was successfully synchronized to the server.
The app receives the response and does this:
It takes the contents from the JSON array and modifies or adds the records it contains. Each record gets its sync_status set to 0.
I used the word record and model interchangeably, but I think you get the idea.
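A minimal Swift sketch of the four fields and the single-request flow described above (names are hypothetical and the actual Core Data plumbing and networking are left out):

import Foundation

// Hypothetical client-side record carrying the sync fields described above.
struct SyncRecord: Codable {
    var guid: String            // primary key, generated on the client
    var isDeleted: Bool         // soft-delete flag ("is_deleted")
    var lastModified: Date      // set by the server only
    var syncStatus: Int         // 0 = clean, 1 = queued for sync, 2 = temporary (client-side only)
    var payload: [String: String]
}

struct SyncRequest: Codable {
    let maxLastModified: Date   // "give me everything newer than this"
    let changes: [SyncRecord]   // all records with syncStatus == 1
}

func buildSyncRequest(from records: [SyncRecord]) -> SyncRequest {
    let pending = records.filter { $0.syncStatus == 1 }
    let newest = records.map(\.lastModified).max() ?? .distantPast
    return SyncRequest(maxLastModified: newest, changes: pending)
}

// Applying the server's response: upsert by guid and mark everything clean.
func apply(response: [SyncRecord], to records: inout [SyncRecord]) {
    for var incoming in response {
        incoming.syncStatus = 0
        if let i = records.firstIndex(where: { $0.guid == incoming.guid }) {
            records[i] = incoming
        } else {
            records.append(incoming)
        }
    }
}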
I suggest carefully reading and implementing the sync strategy discussed by Dan Grover at the iPhone 2009 conference, available here as a PDF document.
This is a viable solution and is not that difficult to implement (Dan implemented it in several of his applications), and it overlaps with the solution described by Chris. For an in-depth, theoretical discussion of syncing, see the paper by Russ Cox (MIT) and William Josephson (Princeton):
File Synchronization with Vector Time Pairs
which applies equally well to Core Data with some obvious modifications. This provides a much more robust and reliable sync strategy overall, but requires more effort to implement correctly.
EDIT:
It seems that Grover's PDF file is no longer available (broken link, March 2015). UPDATE: the link is available through the Wayback Machine here.
The Objective-C framework called ZSync and developed by Marcus Zarra has been deprecated, given that iCloud finally seems to support correct core data synchronization.
If you are still looking for a way to go, look into Couchbase Mobile. It basically does all you want. (http://www.couchbase.com/nosql-databases/couchbase-mobile)
Similar to @Chris, I've implemented a class for synchronization between client and server and have solved all known problems so far (sending/receiving data to/from the server, merging conflicts based on timestamps, removing duplicate entries under unreliable network conditions, synchronizing nested data and files, etc.).
You just tell the class which entity and which columns it should sync, and where your server is.
M3Synchronization *syncEntity = [[M3Synchronization alloc] initForClass:@"Car"
                                 andContext:context
                                 andServerUrl:kWebsiteUrl
                                 andServerReceiverScriptName:kServerReceiverScript
                                 andServerFetcherScriptName:kServerFetcherScript
                                 ansSyncedTableFields:@[@"licenceNumber", @"manufacturer", @"model"]
                                 andUniqueTableFields:@[@"licenceNumber"]];
syncEntity.delegate = self; // delegate should implement onComplete and onError methods
syncEntity.additionalPostParamsDictionary = ... // add some POST params to authenticate current user
[syncEntity sync];
You can find source, working example and more instructions here: github.com/knagode/M3Synchronization.
Notify the user to update data via push notifications.
Use a background thread in the app to compare the local data with the data on the cloud server; when a change happens on the server, update the local data, and vice versa.
I think the most difficult part is determining which side's data is stale.
Hope this can help you.
I have just posted the first version of my new Core Data Cloud Syncing API, known as SynCloud.
SynCloud differs from iCloud in that it allows for a multi-user sync interface. It is also different from other syncing APIs because it allows for multi-table, relational data.
Please find out more at http://www.syncloudapi.com
Built with the iOS 6 SDK, it is up to date as of 9/27/2012.
I think a good solution to the GUID issue is a "distributed ID system". I'm not sure what the correct term is, but I think that's what the MS SQL Server docs used to call it (SQL Server uses/used this method for distributed/synced databases). It's pretty simple:
The server assigns all IDs. Each time a sync is done, the first thing checked is "How many IDs do I have left on this client?" If the client is running low, it asks the server for a new block of IDs. The client then uses IDs in that range for new records. This works great for most needs if you can assign a block large enough that it should "never" run out before the next sync, but not so large that the server runs out over time. If the client ever does run out, the handling can be pretty simple: just tell the user "sorry, you cannot add more items until you sync"... if they are adding that many items, shouldn't they sync to avoid stale data issues anyway?
I think this is superior to using random GUIDs because random GUIDs are not 100% safe and usually need to be much longer than a standard ID (128 bits vs. 32 bits). You usually have indexes by ID and often keep ID numbers in memory, so it is important to keep them small.
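A sketch of the block-allocation idea, with hypothetical names: the server hands out a contiguous range during a sync, and the client consumes it until it runs low.

// Hypothetical client-side allocator for server-assigned ID blocks.
struct IdBlock {
    var next: Int64         // next ID to hand out
    let upperBound: Int64   // exclusive end of the block
    var remaining: Int64 { upperBound - next }
}

final class IdAllocator {
    private var block: IdBlock?

    // Called with a fresh range obtained from the server during sync,
    // e.g. the server reserves [100000, 101000) for this client.
    func install(block: IdBlock) { self.block = block }

    // Returns nil when the block is exhausted; the UI can then ask the
    // user to sync before adding more items.
    func nextId() -> Int64? {
        guard var b = block, b.remaining > 0 else { return nil }
        let id = b.next
        b.next += 1
        block = b
        return id
    }

    // Lets the sync layer decide whether to request a new block early.
    var runningLow: Bool { (block?.remaining ?? 0) < 50 }
}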
Didn't really want to post as answer, but I don't know that anyone would see as a comment, and I think it's important to this topic and not included in other answers.
First you should rethink how much data, how many tables, and how many relations you will have. In my solution I've implemented syncing through Dropbox files. I observe changes in the main MOC and save this data to files (each row is saved as gzipped JSON). If an internet connection is available, I check whether there are any changes on Dropbox (Dropbox gives me delta changes), download and merge them (latest wins), and finally upload the changed files. Before syncing I put a lock file on Dropbox to prevent other clients from syncing incomplete data. When downloading changes, it is safe if only partial data is downloaded (e.g. because of a lost internet connection). When downloading is finished (fully or partially), it starts loading the files into Core Data. When there are unresolved relations (not all files are downloaded yet), it stops loading files and tries to finish downloading later. Relations are stored only as GUIDs, so I can easily check which files to load to have full data integrity.
Syncing starts after changes to Core Data are made. If there are no changes, it checks for changes on Dropbox every few minutes and on app startup. Additionally, when changes are sent to the server I send a broadcast to other devices to inform them about the changes, so they can sync faster.
Each synced entity has a GUID property (the GUID is also used as the filename for the exchange files). I also have a Sync database where I store the Dropbox revision of each file (so I can compare it when the Dropbox delta resets its state). The files also contain the entity name, state (deleted/not deleted), GUID (same as the filename), database revision (to detect data migrations or to avoid syncing with newer app versions), and of course the data itself (if the row is not deleted).
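A rough sketch of such a per-object exchange file (names are hypothetical; zlib via Apple's Compression framework, available on iOS 13+, stands in for gzip to keep the example dependency-free):

import Foundation

// One exchange file per synced object, named "<guid>.json.z".
struct ExchangeFile: Codable {
    let entityName: String
    let guid: String
    let isDeleted: Bool
    let databaseRevision: Int      // used to detect migrations / newer app versions
    let data: [String: String]?    // nil when the row is deleted
}

func writeExchangeFile(_ file: ExchangeFile, to folder: URL) throws {
    let json = try JSONEncoder().encode(file)
    // zlib-compressed; a real implementation could use gzip instead.
    let compressed = try (json as NSData).compressed(using: .zlib) as Data
    let url = folder.appendingPathComponent("\(file.guid).json.z")
    try compressed.write(to: url, options: .atomic)
}

func readExchangeFile(at url: URL) throws -> ExchangeFile {
    let compressed = try Data(contentsOf: url)
    let json = try (compressed as NSData).decompressed(using: .zlib) as Data
    return try JSONDecoder().decode(ExchangeFile.self, from: json)
}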
This solution is working for thousands of files and about 30 entities. Instead of Dropbox I could use a key/value store exposed as a REST web service, which I want to do later but have no time for now :) For now, in my opinion, my solution is more reliable than iCloud and, very importantly, I have full control over how it works (mainly because it's my own code).
Another solution is to save MOC changes as transactions - there will be far fewer files exchanged with the server, but it's harder to do the initial load in the proper order into an empty Core Data store. iCloud works this way, and other syncing solutions take a similar approach, e.g. TICoreDataSync.
--
UPDATE
After a while, I migrated to Ensembles - I recommend this solution over reinventing the wheel.

Syncing Core Data model between Mac and iPhone

I am currently building my Core Data model, which I would like to sync between the Mac and iPhone versions of my application.
I will be using Bonjour for device discovery, etc but I have a question regarding the data sync part of the problem.
So far I have added a UID and modification timestamp to each object that will be involved in syncing, so I should be able to match up objects and detect which ones have changed.
Are there any good links, resources out there regarding writing sync code for this type of situation, i.e. syncing records between two instances of a model?
Sync is a problem with quite a few edge cases which has been solved many times by people in the past, so I was expecting to find some info on the subject, but all I can find are links to Apple's SyncServices (which doesn't exist on the iPhone) and some MS sync technology.
I'm really looking for general theory so I can implement it myself, not necessarily a ready-made solution.
The SyncML specification may be of help, but it's quite hard to read and obviously biased towards SyncML.
I've had to implement this for Task Coach, so here are a few ideas:
A modification flag is enough; a timestamp doesn't really provide much more information. Typically, my objects are in one of these states:
None
New
Deleted
Modified
The following transitions happen when the object is modified:
None -> Modified
New -> New
Deleted -> (should not happen)
Modified -> Modified
and the following ones when it is deleted:
None -> Deleted
New -> Actually deleted (it may be removed from storage)
Deleted -> (should not happen)
Modified -> Deleted
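The two transition tables above can be written down directly. Here is a minimal Swift sketch of that state machine (this mirrors the description, not the actual Task Coach code, which is in Python):

enum SyncState { case none, new, deleted, modified }

// Transition applied when the object is modified locally.
func afterModification(_ state: SyncState) -> SyncState {
    switch state {
    case .none:     return .modified
    case .new:      return .new        // still unknown to the other side
    case .deleted:  return .deleted    // should not happen
    case .modified: return .modified
    }
}

// Transition applied when the object is deleted locally.
// Returns nil when the object can simply be removed from storage.
func afterDeletion(_ state: SyncState) -> SyncState? {
    switch state {
    case .none:     return .deleted
    case .new:      return nil         // never synced, just drop it
    case .deleted:  return .deleted    // should not happen
    case .modified: return .deleted
    }
}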
When synchronizing, the device first sends to the desktop all objects with a status different than None. The desktop asks the user to resolve conflicts if one of these has a status != None on its side. In any case, the object goes into state None on the device, or is deleted from storage if its state was Deleted.
Then, the desktop sends its own changes to the device. There are no conflicts possible since all objects are in state None on the device. Objects on the desktop go into state None or are deleted from storage as well, and sync is over.
There are four kinds of possible conflicts, depending on the device/desktop states:
modified/deleted: If the user chooses to trust the device, the desktop object is replaced with the device one; else, the desktop does nothing and keeps the deleted state, so that the object will be removed from the device in phase 2.
deleted/modified: If the device wins, the object is actually deleted from the desktop. Else, the object goes into state New on the desktop so that it is restored on the device in phase 2.
deleted/deleted: Duh. Just remove it from storage.
modified/modified: The user decides which values to keep, maybe on a field-by-field basis. The state stays Modified on the desktop so that these choices are propagated back to the device in phase 2.
Some conflicts may be avoided if the Modified state is kept for each field, so that for instance an object with a modified Subject on the device and modified Summary on the desktop will not trigger a conflict.
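And a sketch of the phase 1 conflict handling on the desktop, reusing the SyncState enum from the sketch above (the "device wins" choice would normally come from asking the user):

// What the desktop does with a conflicting object during phase 1.
enum Resolution {
    case replaceDesktopObject   // take the device's version
    case keepDeleted            // object will be removed from the device in phase 2
    case removeFromStorage
    case markNewOnDesktop       // object will be restored on the device in phase 2
    case keepModifiedOnDesktop  // merged values pushed back in phase 2
}

func resolve(device: SyncState, desktop: SyncState, deviceWins: Bool) -> Resolution {
    switch (device, desktop) {
    case (.modified, .deleted):  return deviceWins ? .replaceDesktopObject : .keepDeleted
    case (.deleted, .modified):  return deviceWins ? .removeFromStorage : .markNewOnDesktop
    case (.deleted, .deleted):   return .removeFromStorage
    case (.modified, .modified): return .keepModifiedOnDesktop
    default:                     return .replaceDesktopObject // no real conflict
    }
}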
You may take a look at the code for Task Coach for an example (SVN repository on SourceForge, it has both the desktop app in Python and the iPhone app). Actually, in this case I decided to use a simpler approach; I don't keep track of the state on the desktop. After phase 1 (device to desktop), I just make a full replacement of objects on the device with the ones on the desktop. Thus, there are no conflicts (the device always wins).
Obviously, this only works between two fixed devices; if you want to synchronize with several phones/desktop apps, you have to assign a unique ID to each and keep different states for different devices/apps. This may begin to get hairy.
HTH
Marcus Zarra created a framework called ZSync to simplify synching iPhone/iPad apps to their Mac counterparts. Take a look at it, it may help solve the problem.

Django Iphone sync

I am writing a Django app and an iPhone app, and I need to keep them in sync.
Users can delete, update, and create new objects in the web app and in the iPhone app.
When they get online with the iPhone, both apps must be in sync.
Is there simple way to do this?
Thanks,
Joaquin
In general: There's no simple way. But I'll outline an approach.
If you don't care about changes being overwritten: Keep a timestamp of the most recent change to each record, and a timestamp of each sync. When syncing, you get a list of all updates on the iPhone since the last sync, and all updates on the server. You write from the iPhone to the server if the iPhone timestamp for that record is newer than the server one, and vice versa.
But you probably care. Say you've edited a note called "Where to meet up on Friday." It started out empty. Now, on the phone, you've written, "My house." Ten minutes later, your friend edits the same note on the server and writes, "The diner." Who wins out? Stack Overflow can't answer that for you; it's application-specific.
OK, so modify the approach above: if both the server version of a record and the local version have been edited since the last sync, then you have to ask the user what to do. That's the basic algorithm.
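A sketch of that per-record decision as it might look on the client (names are hypothetical; the same logic could live on the server):

import Foundation

enum SyncAction { case pushToServer, pullFromServer, askUser, nothing }

// lastSync is the time of the previous successful sync for this record set.
func decide(localModified: Date, serverModified: Date, lastSync: Date) -> SyncAction {
    let localChanged = localModified > lastSync
    let serverChanged = serverModified > lastSync
    if localChanged && serverChanged { return .askUser }   // both sides edited since last sync
    if localChanged { return .pushToServer }
    if serverChanged { return .pullFromServer }
    return .nothing
}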
If you care a lot about changes not being overwritten, to the point that you want to merge changes to different places in the same documents, then your system will begin to approach the complexity of version control systems like Subversion or Git. Not at all simple.
There's no built-in way to do this. You need to keep a server data store and a local data store on the iPhone, and when online, check the differences manually and see what action you should take on the server and the iPhone side (delete, update, etc.).
Sync is usually hard. I suggest you start laying out the server and iPhone data stores, and think how they relate, and how can the server or the iPhone know the status of their counterpart record, so to keep them in sync.

iPhone offline application with synchronization

I'm looking into building an application which works just as well offline as it does online. Since the application cannot communicate with the server while in offline, there is some level of synchronization which needs to take place.
What are some good tools to read about and start thinking about when planning offline operations with synchronization for your iPhone?
What tools would I have to create on my own, versus tools that apple already provides to help with this particular issue?
I've been working on an app that handles this exact behavior for the last 2 months or so. It has a small subset of functions which are online only and a large set of functionality that is offline/online.
I'm using SQLite for local storage, as suggested here, with a modified version of the sqlitepersistentobjects library. The base version of sqlitepersistentobjects is not thread-safe, so watch out if you are using it. (Check out objectiverecord in objectivesync for a thread-safe alternative, but be prepared to dig into the code.) If you are willing to develop for the 3.0 SDK, then Core Data is another possibility for a SQLite-backed library.
The overall architecture is simple enough: I have modeled local storage using SQLite and remote interaction using Objective Resource against a Rails app and REST API. It can use either XML or JSON for data serialization.
When an object is modified locally the change is first saved to the sqlite database record for that object and then added to a queue which is serialized and stored in the local sqlite db as well. (The queue can then be processed at any time)
If there is a connection available any queued local changes are deserialized and added to an NSOperationQueue which then processes them in the background.
In order to make this all work I've subclassed NSOperation so that it can support several types of remote queue operations - create, update, delete essentially using objective resource to make the remote requests.
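A condensed sketch of such an Operation subclass (URLSession stands in for Objective Resource here; the endpoint, payload handling, and persisted queue are placeholders):

import Foundation

enum RemoteChange { case create(Data), update(id: String, Data), delete(id: String) }

// One queued local change, replayed against the REST API in the background.
final class RemoteSyncOperation: Operation {
    private let change: RemoteChange
    private let baseURL: URL

    init(change: RemoteChange, baseURL: URL) {
        self.change = change
        self.baseURL = baseURL
    }

    override func main() {
        if isCancelled { return }
        var request: URLRequest
        switch change {
        case .create(let body):
            request = URLRequest(url: baseURL)
            request.httpMethod = "POST"
            request.httpBody = body
        case .update(let id, let body):
            request = URLRequest(url: baseURL.appendingPathComponent(id))
            request.httpMethod = "PUT"
            request.httpBody = body
        case .delete(let id):
            request = URLRequest(url: baseURL.appendingPathComponent(id))
            request.httpMethod = "DELETE"
        }
        // A synchronous wait is fine here: the operation already runs on a
        // background queue managed by OperationQueue.
        let semaphore = DispatchSemaphore(value: 0)
        URLSession.shared.dataTask(with: request) { _, _, _ in
            // On success, remove the change from the persisted queue (omitted).
            semaphore.signal()
        }.resume()
        semaphore.wait()
    }
}

Running these on an OperationQueue with maxConcurrentOperationCount set to 1 keeps the changes applied to the server in order.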
The nice thing about using NSOperationQueue and NSOperation is that they handle the background threading for you, so I'd highly recommend having a look at the Apple docs for those classes and also at the Apple threading guide.
When the application loads there is a bit of remote checking done and processed in the background to pull down the latest data - although to be honest I am still changing the way this behaves a bit.
That's a quick overview of what I've had to deal with so far...hope it helps a little.
There are plenty of applications on the App Store which rely on both online and offline data.
What you should really be doing is, on the start of your app, run a background thread (which runs silently, so your user never sees any delay). This thread downloads the latest data from your server and pushes it into your local database (SQLite is the best choice).
Make sure you implement some kind of data versioning so that your app only downloads data which has actually changed since the last download - otherwise you would unnecessarily be downloading the entire dataset, which can be quite huge (depending upon your app requirements).
Also make sure to test for internet connectivity when doing this. If no internet is available, alert the user.
This way you get the best of both worlds: users away from the internet can still use your app with their local SQLite data.
In iPhone OS 3.0 Apple has introduced push services, where you can simply "push" your data instead of doing a "pull"; however, this is not available in the current iPhone OS (2.x.x).
Push is probably not a viable option here, since the amount of data you can push is minuscule and it basically comes back to "tell my app to make a server call". We use an online/offline model in Satchel. Whenever we have to communicate with the server, we bundle that communication (a URL and possibly some POST data) and store it in a database. If we're online, we pull it right back out, send it, and when we get a valid response back, we remove the record from the database. If we're offline, those rows build up, and the next time we ARE online, they get sent out. This is not a workable model in all situations, but it can be adapted to most.
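For illustration, a minimal sketch of that store-and-forward queue (names are hypothetical; in a real app the pending calls would be persisted to the database rather than kept in memory as here):

import Foundation

// A deferred server call: just the URL plus optional POST body.
struct PendingCall: Codable {
    let id: UUID
    let url: URL
    let postBody: Data?
}

final class OutboundQueue {
    private(set) var pending: [PendingCall] = []   // persisted in a real app

    func enqueue(url: URL, postBody: Data? = nil) {
        pending.append(PendingCall(id: UUID(), url: url, postBody: postBody))
    }

    // Called whenever the app notices it is online again.
    func flush() {
        for call in pending {
            var request = URLRequest(url: call.url)
            if let body = call.postBody {
                request.httpMethod = "POST"
                request.httpBody = body
            }
            URLSession.shared.dataTask(with: request) { [weak self] _, response, error in
                // Only drop the row once the server answered successfully.
                if error == nil, (response as? HTTPURLResponse)?.statusCode == 200 {
                    self?.pending.removeAll { $0.id == call.id }
                }
            }.resume()
        }
    }
}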
In 3.0, you've got access to CoreData, which is a great data management tool. Other than that, the NSURLXXX family is your friend.
I would store all the information gathered while offline in a SQLite database. Then, at the user's request, you can sync all the stored information with a server using HTTP or a custom TCP/IP protocol you come up with.
I have been using this approach on Palm OS applications for almost 10 years now, and they do work very effectively.
As far as I know, the only "tool" you will have to accomplish this is plain old OBJECTIVE-C with Cocoa Touch. Although you could use some TCP/IP C++ libraries that will make your life easier if you decide to implement your own protocol.
I wonder if you have considered using a sync framework to manage the synchronization. If that interests you, take a look at the open source project OpenMobster's Sync service. You can do the following sync operations:
two-way
one-way client
one-way device
bootup
Besides that, all modifications are automatically tracked and synced with the cloud. Your app can work offline when the network connection is down. It will track any changes and automatically synchronize them with the cloud in the background when the connection returns. It also provides iCloud-like synchronization across multiple devices.
Also, modifications in the cloud are synced using push notifications, so the data is always current even though it is stored locally.
Here is a link to the open source project: http://openmobster.googlecode.com
Here is a link to iPhone App Sync: http://code.google.com/p/openmobster/wiki/iPhoneSyncApp