ObjectMapper + Realm + Alamofire - Swift

I map the objects delivered by Alamofire with ObjectMapper and persist them in Realm.
Everything is working fine. But how can I delete objects that still exist in Realm but have been deleted in my web service?
Update: Based on the answer below, I currently ended up with this code:
if let overviewItemsArray = response.result.value {
    do {
        try self.realm.write {
            // Blow away the cached items, then re-insert the fresh ones.
            self.realm.delete(self.realm.objects(OverviewItem))
            self.realm.add(overviewItemsArray, update: true)
        }
    } catch let err as NSError {
        logger.error("Error with realm: \(err.localizedDescription)")
    }
    overviewItemsAsList.removeAll()
    overviewItemsAsList.appendContentsOf(self.realm.objects(OverviewItem)
        .sorted("sortOrder", ascending: true))
    successHandler(overviewItemsAsList)
}
Perhaps somebody has further input on how to improve this. There are only about 10 objects of this type, but for other types I get up to 1,500 items.

I finally figured it out, and it works very well. I compute the difference between the cached data and the newly fetched data and delete the orphans:
private func deleteOrphans(existingData: List<VotingHeader>, fetchedData: [VotingHeader]) {
    guard existingData.count > 0 else {
        return
    }
    // IDs that are in the cache but missing from the server response.
    let existingIDs = Set(existingData.map { $0.votingID })
    let incomingIDs = fetchedData.map { $0.votingID }
    let idsToDelete = existingIDs.subtract(incomingIDs)
    if idsToDelete.count > 0 {
        let itemsToDelete = self.realm.objects(VotingHeader).filter("votingID IN %@", idsToDelete)
        try! self.realm.write {
            self.realm.delete(itemsToDelete)
        }
    }
}

You could:
- use an updated_at field, update that field via id, and afterwards delete all objects that haven't been updated, or
- delete all objects and then insert the new ones (expensive; I wouldn't do such a thing).
I think you should go with the first option, especially if you have a lot of data. When you get your response from the web service, go through each item and update the corresponding record in the database with NSDate(); afterwards, delete the objects that are "old". Note that this is not the updated_at from the server but one in your local storage. A sketch follows below.
The other option is fine if you have a small amount of data and you are sure it will be fast and cheap.
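For illustration, a minimal sketch of that first option in current RealmSwift syntax. The updatedAt property and the pruneStaleItems helper are hypothetical (not from the question), and OverviewItem is assumed to declare a primary key so that update: .modified can upsert:

import RealmSwift

// Sketch only: updatedAt is an assumed Date property on OverviewItem.
func pruneStaleItems(realm: Realm, fetched: [OverviewItem]) throws {
    let syncDate = Date()
    try realm.write {
        // Upsert every fetched item, stamping it with the local sync date.
        for item in fetched {
            item.updatedAt = syncDate
            realm.add(item, update: .modified)
        }
        // Anything not touched by this sync was deleted on the server.
        let stale = realm.objects(OverviewItem.self).filter("updatedAt < %@", syncDate)
        realm.delete(stale)
    }
}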

It sounds like you need some mechanism for determining which objects were deleted on the server and explicitly relaying that information to your local devices. I can think of a few possible solutions:
- If you have control of the API, you could implement a system where your device tells the server the timestamp of its last API call, and the server then returns, in its response, the list of ID numbers for the entries deleted after that time.
- If you're simply downloading the same list every time, then rather than blowing away everything in the Realm file each time, perform a manual loop through each entry in the Realm file and check whether it has a corresponding entry in the downloaded list. If not, delete it. Granted, this will still be a relatively slow operation, but it would be much faster than deleting/recreating everything each time.
- Since it sounds like Realm is being used here to cache server information, you could set a strict expiry time on the Realm objects: delete an object once its time is exceeded, and extend the time whenever the object is refreshed from the server.
I hope that helps!

What is a secure data model in Firestore for a location-based app?

I am building a location-based app where people can see other people within a 10-mile radius. Some of the stored data is public, like names, and some is private, like latitude, longitude, and a geohash. How can I model this data to minimize reads while maximizing security and performance? Currently I have a Users collection with a document for each user that contains all of that user's public and private data. My current code looks like this:
db.collection("Users").whereField("geohash", >=: geohash_prefix)
.whereField("geohash", <=: "geohash_prefix" + "~").getDocuments { (querySnapshot, err) in
if err != nil{
print("\(err!.localizedDescription)")
}else{
if querySnapshot!.isEmpty{
return completion(arr_of_users)
}
for document in querySnapshot!.documents {
let d = document.data()
let isWithin = CLLocationCoordinate2D(latitude: (d["loc"] as! GeoPoint).latitude, longitude: (d["location"] as! GeoPoint).longitude).isWithin(min: self.MBR.1, max: self.MBR.0) //just used for filtering documents not in radius
if !isWithin{ //skip all docs not in range
continue
}
nearPeople.append([d["firstName"] as! String, d["lastName"] as! String])
}
}
So, you can see that to display the first and last names of everyone within a 10-mile radius, I have to send location data to the client, which isn't safe. I also cannot split it into a subcollection, because I need the names and other public info, which would require a second query (more reads). I know Google Cloud Functions is fully insulated from the client, so is that my only option? Will that sacrifice performance? Also, I'm sure this type of app is not uncommon; what is the most common approach to this?
As I said on your previous question: to be able to query certain data, the user needs to be able to read that data. And since Firestore can't perform your isWithin(min: self.MBR.1, max: self.MBR.0) condition on the server, that means they will at the very least need access to all the data matching db.collection("Users").whereField("geohash", isGreaterThanOrEqualTo: geohash_prefix).whereField("geohash", isLessThanOrEqualTo: geohash_prefix + "~").
If you want to restrict access to a specific set of geohashes in the collection, your security rules will have to relate the position around which you query to the corresponding geohashes. While I'm not saying it is impossible, looking at this implementation in geofire makes it seem far from trivial.
Your most direct approach is indeed to do this in Cloud Functions.
I'm not sure what most other apps do, but it may help to realize that you also have no way to ensure the user is sending their actual location to the database. So even if you return perfect results for the location/range, nothing stops them from repeatedly querying with different locations to still harvest all of your users' data.

Merging different contexts' data, automaticallyMergesChangesFromParent usage

I have an application that uses Core Data and has several background contexts (NSManagedObjectContext).
While writing some tests I've observed behavior that seems to contradict the official documentation: changes from one context are automatically propagated to another, while .automaticallyMergesChangesFromParent is set to false in both contexts.
Both contexts come from an NSPersistentContainer: one from .viewContext, the other from .newBackgroundContext().
As the documentation for save says, on saving, a context's changes are committed to its parent store, which here is the NSPersistentContainer's store.
But in fact, the changes also appear in the other context, despite automaticallyMergesChangesFromParent == false (the default value).
let persistentContainer = NSPersistentContainer(name: "TESTING")
let mainContext = persistentContainer.viewContext
let otherContext = persistentContainer.newBackgroundContext()

// Test entity is created on otherContext.
let entity: TestEntity = NSEntityDescription.insertNewObject(
    forEntityName: String(describing: TestEntity.self),
    into: otherContext) as! TestEntity
entity.statusCode = "Testing"

// Prepare a fetch request for the test entity.
let fetchReq: NSFetchRequest<TestEntity> = TestEntity.fetchRequest()
fetchReq.predicate = NSPredicate(format: "statusCode = %@",
                                 argumentArray: ["Testing"])

// Ensure that the entity is not present in mainContext.
let entityFromMain = try! mainContext.fetch(fetchReq)
XCTAssertEqual(entityFromMain.count, 0)

// Save the context that has the entity.
try! otherContext.save()

// Ensure that changes from the parent store aren't merged automatically.
XCTAssertFalse(mainContext.automaticallyMergesChangesFromParent)

// Get the inserted entity from mainContext.
let entityOnMainAfterSaving = try! mainContext.fetch(fetchReq)

// The entity is present in mainContext.
XCTAssertTrue(entityOnMainAfterSaving.count > 0)
Expected output: entityOnMainAfterSaving should not contain the newly created entity, but it is already there, although mainContext wasn't refreshed.
UPDATE:
I'm asking because in my app there is a situation where:
1. an entity's property is changed in otherContext
2. otherContext is saved
3. the entity is fetched through viewContext
4. the property's value is not updated to its state from step 1 (!)
At the same time, the property's value does get updated if viewContext.refreshAllObjects() is called right before the fetch in step 3.
You seem to be confusing automaticallyMergesChangesFromParent, or merging contexts in general, with fetching.
A fetch will always reach out to the persistent store, irrespective of whether the context has been merged with another. That's how fetches work in Core Data. There's a section on "Avoiding Fetch Requests" covering this in this book (https://www.objc.io/books/core-data/). I paraphrase:
“The biggest performance offenders are fetch requests. Fetch requests have to traverse the entire Core Data stack. By API contract, a fetch request — even though it originates at the managed object context — will consult the SQLite store in the file system. Because of this, fetch requests are inherently expensive.”
Excerpt from: Florian Kugler, “Core Data.”
And another from page 25, Fetch Requests:
“One important thing we want to point out now is this: every time you execute a fetch request, Core Data goes through the complete Core Data stack, all the way to the file system. By contract, a fetch request is a round trip: from the context, through the persistent store coordinator and the persistent store, down to SQLite, and then all the way back.”
Excerpt from: Florian Kugler, “Core Data.”
This is why, even though you've turned automaticallyMergesChangesFromParent off, a fetch will still read the most recent values out of the database.
The 2015 and 2016 WWDC sessions on Core Data are good (well, they all are) and I'd recommend you go through them; there are barely half a dozen sessions from the last five years or so. I've learnt a lot from watching them, as they cover best practices as well as the incremental changes added to Core Data over the years.
If you wish to keep seeing an older snapshot of your data for whatever reason (perhaps your background context is continually adding and removing entries and you're not ready to refresh your view context just yet), then I would suggest you look into Query Generation; a sketch follows below. It should give you what you're trying to achieve.
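A minimal sketch of pinning a context to a query generation, assuming a loaded NSPersistentContainer named persistentContainer backed by a SQLite store (query generations require one):

import CoreData

let viewContext = persistentContainer.viewContext
do {
    // Pin the context to the current generation of the store. Fetches now
    // return data as it existed at this moment, even if a background context
    // saves newer values afterwards.
    try viewContext.setQueryGenerationFrom(NSQueryGenerationToken.current)

    // ... background work saves changes; this context still sees the old data ...

    // When you are ready to see the latest data again, un-pin the context.
    try viewContext.setQueryGenerationFrom(nil)
} catch {
    print("Query generation error: \(error)")
}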
This is the expected output.
You are saving an entity on a background context, and that background context comes from a persistent container. The documentation for save tells us the following:
"Attempts to commit unsaved changes to registered objects to the context's parent store."
What is the parent store of the backgroundContext? The persistentContainer's store.
What is the parent store of the viewContext? The same persistentContainer's store.
When you query on the viewContext, you receive all the values that are in the database. When you call save on the backgroundContext, it commits its changes to the context's parent store (the persistentContainer's store). After the save, the values are committed to the 'real database', and not only to the private background context.
You could say the viewContext is special in that it queries the most up-to-date committed data. If you had two private background contexts with automaticallyMergesChangesFromParent set to false, they would not merge each other's commits, so objects registered in private background context B would not be refreshed with what was saved in private background context A.
If automaticallyMergesChangesFromParent is set to true, background contexts will be updated with commits from other contexts (if they derive from the same persistent container), as in the sketch below.
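To illustrate, a sketch assuming a loaded NSPersistentContainer named container and the TestEntity subclass from the question:

import CoreData

let contextA = container.newBackgroundContext()
let contextB = container.newBackgroundContext()

// Opt contextB in to receiving saves made by other contexts of the same
// container; its registered objects will then be refreshed automatically.
contextB.automaticallyMergesChangesFromParent = true

contextA.performAndWait {
    let entity = TestEntity(context: contextA)
    entity.statusCode = "Testing"
    try? contextA.save()
}
// Objects already registered in contextB now reflect contextA's save. With
// the flag left at false they would keep their stale snapshots until refreshed
// manually (a fetch, however, always reads the latest data from the store).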
It would be absolutely nuts not to merge commits from other contexts into the viewContext; I think you can figure out why: it would not be maintainable.

How to Be Notified if the Owner Removes Me from a CKShare on CloudKit

Let's say the owner of a record shares it with me. I get sent a share link, open it, and accept the share like this:
let operation = CKAcceptSharesOperation(shareMetadatas: [metadata])
operation.acceptSharesCompletionBlock = { error in
    if let error = error {
        print("accept share error: \(error)")
    } else {
        // Share accepted...
    }
}
// `CloudKit` here is the app's own helper wrapping CKContainer.
CloudKit.container.add(operation)
I am also already subscribed to the shared database, like so:
let subscriptionSharedDatabase = CKDatabaseSubscription(subscriptionID: "subscriptionSharedDatabase")
let sharedInfo = CKSubscription.NotificationInfo()
sharedInfo.shouldSendContentAvailable = true
sharedInfo.alertBody = "" // This needs to be set or pushes don't get sent
subscriptionSharedDatabase.notificationInfo = sharedInfo
let subShared = CKModifySubscriptionsOperation(subscriptionsToSave: [subscriptionSharedDatabase],
                                               subscriptionIDsToDelete: nil)
CloudKit.sharedDB.add(subShared)
But now let's say the owner of the CKShare removes me as a participant on that record and saves the updated participant list to CloudKit.
As far as I can tell, the only notification I get is another change notification from the shared-database subscription (subscriptionSharedDatabase), but no records are changed or deleted (I looked, and there are no changed records when I fetch them).
As far as I know, the only way to be notified of changes to the participants of a CKShare is to subscribe to notifications on the cloudkit.share record type, but that isn't available to me in the shared database, right?
How can I be notified when I am removed from a CKShare?
Interesting how this has no answers. I just implemented some CKShare-related code and it seems to work fairly predictably. Here is the basic approach (a sketch of steps 3 and 4 follows after the list):
1. At the start of my app, I run CKFetchRecordZonesOperation.fetchAllRecordZonesOperation() on the shared database to get all the record zones currently shared with me.
2. I set up a CKDatabaseSubscription on the shared database, as you suggest.
3. Upon receiving this notification on the shared database, I run a batch CKFetchRecordZoneChangesOperation across all the shared record zones. (You can pass it multiple record zones, together with a server change token for each zone, to do a bulk updates query.)
4. If any records were unshared but the record zones themselves are still valid, I observe that the recordWithIDWasDeletedBlock is run twice: once with the unshared CKRecord and once with the related CKShare.
5. (I haven't fully figured out this part yet.) If the share was the last one from a given user, that user's whole shared record zone gets removed from my shared database. I could query for fresh record zones every time I get a CKDatabaseNotification, but that seems wasteful. I see that I can run a CKFetchDatabaseChangesOperation that informs me of changed zones, but I have yet to figure out the best time to run it.
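A sketch of steps 3 and 4 using the standard CKFetchRecordZoneChangesOperation callbacks; the token bookkeeping is an assumption (you would persist one CKServerChangeToken per zone between runs):

import CloudKit

// Sketch: zoneIDs comes from fetchAllRecordZonesOperation(); tokens holds the
// server change token previously saved for each zone (nil on the first run).
func fetchSharedChanges(zoneIDs: [CKRecordZone.ID],
                        tokens: [CKRecordZone.ID: CKServerChangeToken]) {
    var configurations = [CKRecordZone.ID: CKFetchRecordZoneChangesOperation.ZoneConfiguration]()
    for zoneID in zoneIDs {
        let config = CKFetchRecordZoneChangesOperation.ZoneConfiguration()
        config.previousServerChangeToken = tokens[zoneID]
        configurations[zoneID] = config
    }

    let operation = CKFetchRecordZoneChangesOperation(recordZoneIDs: zoneIDs,
                                                      configurationsByRecordZoneID: configurations)
    operation.recordChangedBlock = { record in
        // Apply the new or updated record locally.
    }
    // Fires once for the unshared CKRecord and once for the related CKShare.
    operation.recordWithIDWasDeletedBlock = { recordID, recordType in
        if recordType == "cloudkit.share" {
            // We were removed from this share.
        }
    }
    operation.recordZoneChangeTokensUpdatedBlock = { zoneID, token, _ in
        // Persist `token` so the next fetch is incremental.
    }
    CKContainer.default().sharedCloudDatabase.add(operation)
}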

Swift and Cloud Firestore Transactions - getDocuments?

Transactions in Cloud Firestore support getting a document via transaction.getDocument, but even though queries have a .getDocuments method, there doesn't seem to be a .getDocuments on a transaction for getting multiple documents.
I have a Yelp-like app using a Cloud Firestore database with the following structure:
- Places to rate are called spots.
- Each spot has a document in the spots collection (identified by a unique documentID).
- Each spot can have a reviews collection containing all reviews for that spot.
- Each review is identified by its own unique documentID, and each review document contains a rating of the spot.
Below is an image of my Cloud Firestore setup with some data.
I've tried to create a transaction that gets the data for all of a spot's reviews, with the hope that I could then recompute the average review and save it back to a property of the spot document. I've tried using:
let db = Firestore.firestore()
db.runTransaction({ (transaction, errorPointer) -> Any? in
    let ref = db.collection("spots").document(self.documentID).collection("reviews")
    // This line does not compile; see the error below.
    guard let document = try? transaction.getDocuments(ref) else {
        print("*** ERROR trying to get document for ref = \(ref)")
        return nil
    }
    …
Xcode states:
Value of type 'Transaction' has no member 'getDocuments'.
There is a getDocument, which one can use to get a single document (see https://firebase.google.com/docs/firestore/manage-data/transactions).
Is it possible to get a collection of documents in a transaction? I want to do this because each place I'm rating (spot) has an averageRating, and whenever there's a change to one of the ratings, I want to call a function that:
- starts a transaction (done)
- reads all of the current reviews for that spot (can't get this to work)
- calculates the new averageRating
- updates the spot with the new averageRating value.
I know Google's FriendlyEats sample uses a technique where each change is applied incrementally to the current average rating, but I'd prefer a precise recalculation on each change to preserve numerical precision (even if it's a bit more expensive, with an additional query).
Thanks for any advice.
No. The client libraries do not allow you to run queries inside transactions; you can only request specific documents inside a transaction. You could do something really hacky, like run the query outside the transaction and then request every individual document inside the transaction, but I would not recommend that.
What might be better is to run this on the server side, say with a Cloud Function, which does allow you to run queries inside transactions. More importantly, you no longer have to trust the client to update the average review score for a restaurant, which would be a Bad Thing.
That said, I still might recommend using a Cloud Function that applies the same incremental logic FriendlyEats does, along the lines of newAverage = (oldAverage * oldCount + newRating) / (oldCount + 1). It'll make sure you're not performing excessive reads if your app gets really popular. A client-side sketch of that arithmetic follows below.
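For what it's worth, here is a client-side sketch of that incremental update inside a transaction. averageRating and numberOfReviews are hypothetical field names (adjust to your schema); a Cloud Function would perform the same arithmetic server-side:

import FirebaseFirestore

// Sketch: updates the spot's running average without querying all reviews.
func addReview(spotID: String, newRating: Double) {
    let db = Firestore.firestore()
    let spotRef = db.collection("spots").document(spotID)

    db.runTransaction({ (transaction, errorPointer) -> Any? in
        let spotDoc: DocumentSnapshot
        do {
            spotDoc = try transaction.getDocument(spotRef)
        } catch let error as NSError {
            errorPointer?.pointee = error
            return nil
        }
        let oldAverage = spotDoc.data()?["averageRating"] as? Double ?? 0
        let oldCount = spotDoc.data()?["numberOfReviews"] as? Double ?? 0

        // newAverage = (oldAverage * oldCount + newRating) / (oldCount + 1)
        let newCount = oldCount + 1
        let newAverage = (oldAverage * oldCount + newRating) / newCount

        transaction.updateData(["averageRating": newAverage,
                                "numberOfReviews": newCount],
                               forDocument: spotRef)
        return nil
    }) { _, error in
        if let error = error {
            print("Transaction failed: \(error)")
        }
    }
}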

Does Firebase cache data itself on local storage or just query results? Also, if online, is local DB hit?

This is my first time working with a remote database, so bear with me.
I know from the docs that queries using the same syntax will make use of the cache. I.e., in the following code, if the first query completes while the remote connection is up, and the connection is broken before the second query executes, the second query will still work via the cache:
let scoresRef = FIRDatabase.database().referenceWithPath("scores")
scoresRef.queryOrderedByValue().queryLimitedToLast(4).observeEventType(.ChildAdded, withBlock: { snapshot in
    print("The \(snapshot.key) dinosaur's score is \(snapshot.value)")
})
scoresRef.queryOrderedByValue().queryLimitedToLast(2).observeEventType(.ChildAdded, withBlock: { snapshot in
    print("The \(snapshot.key) dinosaur's score is \(snapshot.value)")
})
Is the fetched data itself cached, so that a query in any form for data already fetched would succeed once offline? For example, if I added a third, offline query that tried to fetch the 4th-to-last child of scores by its key, would it work via the cache?
When the remote connection is working, will a FIRDataEventType query go straight to the remote database, or will a local query be run before the remote one?
Thank you for any input you have!
In your current code, the second query will not have to retrieve additional data, since the children have already been retrieved.
But there are many subtleties in play here. And Firebase synchronizes the data when it changes, which allows for even more scenarios.
Instead of trying to imagine all the things that might be happening, it is probably more educational to enable debug logging; this will show the actual data that the client retrieves for each query. A one-line sketch follows below.
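For example, with the SDK version used above, call this once before creating any database references:

// Verbose client-side logging; the console output then shows what each
// query actually retrieves (and whether it was served locally or remotely).
FIRDatabase.setLoggingEnabled(true)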