Deleting an Item in Firebase - swift

I have the following data in Firebase:
Before Deletion (Link)
In the "-Kabn1954" branch, I want to delete the item "apple". Using Swift, I delete an item at a specific index, in a particular branch, using this:
self.ref.child("-Kabn1954").child("foods").child("1").removeValue()
However, after I do this, the Firebase data looks like this:
After Deletion (Link)
As you can see, the data in this branch now goes directly from index 0 to index 2. For this reason, I get an error. How can I make it such that when the item at index 1 is deleted, the two remaining items have an index of 0 followed by an index of 1?

Firebase doesn't actually store the data as an array, instead it stores it as an object keyed by the index as you're observing. The guide suggests that you should try to restructure your data so that the array-like behavior is not used.
If restructuring is not possible or really not preferable: I don't know the Swift API in detail, but in both the Python and JavaScript libraries, if you observe the parent foods element you get back an array, which you can splice and then push as an update. I'm guessing this is also true in Swift, as the API indicates that an NSArray can be returned too.
As the blog post mentions, you'll need to update the entire array when you want to reindex; Firebase will not do it for you. You can call setValue() on the foods reference with an NSArray. Be careful about race conditions here: you'll want to encapsulate the read and write in a single transaction to avoid losing your update.
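Since the JavaScript library is the one I know, here is a minimal sketch of that idea with the Realtime Database web SDK, reusing the path from the question; the variable names and callback shape are just illustrative. A transaction reads the current array, removes index 1, and writes back the compacted array so the remaining items end up at indexes 0 and 1.

var foodsRef = firebase.database().ref('-Kabn1954/foods');

foodsRef.transaction(function (currentFoods) {
  if (currentFoods === null) {
    return currentFoods;        // nothing stored yet, nothing to delete
  }
  currentFoods.splice(1, 1);    // drop the item at index 1 ("apple")
  return currentFoods;          // Firebase stores the compacted array
});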

Related

AngularFire docData read value once

What is the correct way to read the content of a document as a value, without a subscription?
Keep in mind that when persistence is enabled, the observer receives two values: the first is the locally persisted value and the second is the actual value stored in the database.
You can use the get() method to retrieve the contents of a single document.
For more details you can check this StackOverflow thread.
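As a rough illustration with the plain Firestore web SDK (the document path here is made up), a one-time read with get() resolves once and leaves no subscription behind:

firebase.firestore()
  .doc('users/someUserId')        // hypothetical path
  .get()
  .then(function (snapshot) {
    console.log(snapshot.data()); // undefined if the document does not exist
  });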

Firestore pagination by offset

I would like to create two queries, with a pagination option. With the first one I would like to get the first ten records, and with the second one I would like to get all the remaining records:
.startAt(0)
.limit(10)
.startAt(9)
.limit(null)
Can anyone confirm that the above code is correct for both conditions?
Firestore does not support index or offset based pagination. Your query will not work with these values.
Please read the documentation on pagination carefully. Pagination requires that you provide a document reference (or field values in that document) that defines the next page to query. This means that your pagination will typically start at the beginning of the query results, then progress through them using the last document you see in the prior page.
From CollectionReference:
offset(offset) → {Query}
Specifies the offset of the returned results.
As Doug mentioned, Firestore does not support Index/offset - BUT you can get similar effects using combinations of what it does support.
Firestore has its own internal sort order (usually the document.id), but any query can be sorted with .orderBy(), and the first document will be relative to that sorting - only an orderBy() query has a real concept of a "0" position.
Firestore also allows you to limit the number of documents returned with .limit(n).
.endAt(), .endBefore(), .startAt(), .startAfter() all need either values for the same fields as the orderBy, or a DocumentSnapshot - NOT an index
What I would do is create a Query:
const MyOrderedQuery = FirebaseInstance.collection().orderBy()
Then first execute
MyOrderedQuery.limit(n).get()
or
MyOrderedQuery.limit(n).onSnapshot()
Either way you get a QuerySnapshot, which contains an array of DocumentSnapshots. Let's save that array:
let ArrayOfDocumentSnapshots = QuerySnapshot.docs;
Warning Will Robinson! JavaScript assignment is usually by reference,
and even the spread operator makes a pretty shallow copy - make sure your code actually
copies the full deep structure or that the reference is kept around!
Then to get the "rest" of the documents as you ask above, I would do:
MyOrderedQuery.startAfter(ArrayOfDocumentSnapshots[n-1]).get()
or
MyOrderedQuery.startAfter(ArrayOfDocumentSnapshots[n-1]).onSnapshot()
which will start AFTER the last returned document snapshot of the FIRST query. Note the re-use of MyOrderedQuery.
You can get something like a "pagination" by saving the ordered Query as above, then repeatedly use the returned Snapshot and the original query
MyOrderedQuery.startAfter(ArrayOfDocumentSnapshots[n-1]).limit(n).get() // page forward
MyOrderedQuery.endBefore(ArrayOfDocumentSnapshots[0]).limit(n).get() // page back
This does make your state management more complex - you have to hold onto the ordered Query, and the last returned QuerySnapshot - but hey, now you're paginating.
BIG NOTE
This is not terribly efficient - setting up a listener is fairly "expensive" for Firestore, so you don't want to do it often. Depending on your document size(s), you may want to "listen" to larger sections of your collections and handle more of the paging locally (Redux or whatever) - the Firestore documentation indicates you want your listeners kept around for at least 30 seconds for efficiency. For some applications, even pages of 10 can be efficient; for others you may need 500 or more stored locally and paged in smaller chunks.
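Pulling the pieces above together, here is a hedged sketch of the whole flow with the web SDK; the collection name, the orderBy field, and the page size are assumptions, and limitToLast is used on the page-back query so you get the page immediately before the current one rather than the first page of the collection:

const pageSize = 10;
const orderedQuery = firebase.firestore()
  .collection('items')              // hypothetical collection
  .orderBy('createdAt');            // hypothetical ordering field

let currentDocs = [];               // DocumentSnapshots of the page on screen

function firstPage() {
  return orderedQuery.limit(pageSize).get().then(function (snap) {
    currentDocs = snap.docs;        // keep the snapshots - they are the cursors
    return currentDocs.map(function (d) { return d.data(); });
  });
}

function nextPage() {
  return orderedQuery
    .startAfter(currentDocs[currentDocs.length - 1])  // cursor: last doc shown
    .limit(pageSize)
    .get()
    .then(function (snap) {
      currentDocs = snap.docs;
      return currentDocs.map(function (d) { return d.data(); });
    });
}

function previousPage() {
  return orderedQuery
    .endBefore(currentDocs[0])      // cursor: first doc shown
    .limitToLast(pageSize)          // keeps the page size when paging backwards
    .get()
    .then(function (snap) {
      currentDocs = snap.docs;
      return currentDocs.map(function (d) { return d.data(); });
    });
}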

Get Deleted Object in AbstractMongoEventListener

I want to run some logic when an object gets deleted from MongoDB. I am using Spring Data Mongo.
I am using AbstractMongoEventListener, since the object can be deleted from the collection in a number of ways, and I am overriding the
public void onBeforeDelete(BeforeDeleteEvent<Object> event)
method. But there is no method on the event object which returns the object I am about to delete.
event.getSource() and event.getDocument() return the document. How can I get the object?
Somehow this event seems to be messed up. Unlike the other MongoMappingEvent<T> descendants, this one inherits MongoMappingEvent<Document> through AbstractDeleteEvent<T>. I cannot explain this difference.
But as I also needed to retrieve the documents before deleting them, I used the debugger to find that it is possible to retrieve the document ids using a hackish, undocumented get("key") chain:
List<ObjectId> ids = event.getDocument()   // the BSON filter document of the delete
    .get("_id", Document.class)            // the "_id" criteria, itself a BSON Document
    .getList("$in", ObjectId.class);       // ObjectId.class, or whatever type your id is
With that you can retrieve a list of the ids of your documents. Take the repository, or whatever you use, and fetch the documents by those ids.
I really do not like using those string keys that I have not found in any documentation, as who knows when they will be removed.
I would love to remove this answer as soon as someone provides a less hackish way.
Be aware that when you are using an #EventHandler, it cannot consider the type parameter.

How to update collection documents efficiently when changing a specific value in Firestore?

I have 2 collections. One of them is named "USERS" and the other one "MATCHES". Users can join matches, and the avatar of the user who joined appears in the match. The problem is that when the user changes their avatar image after joining a match, the avatar in the match doesn't change, because the match still has the old avatar.
The avatar is saved as Base64 in Firestore, but I will change it to "Storage" in the near future.
I have been trying to set the reference, but that only gives me the path.
If I have to do a database API call for each match the user has joined, I might have to make 20 API calls to update the matches. It can be a solution, but not the best one.
Maybe the solution is in the Google Functions?
I'm out of ideas.
Maybe the solution is in the Google Functions?
Cloud Functions also access Firestore through an SDK, so they can't magically do things that the SDK doesn't allow.
If you're duplicating data and you update one of the duplicates, you'll have to consider updating the others. If they all need to be updated, that indeed requires a separate call for each duplicate.
If you don't want to have to do this, don't store duplicate data.
For more on the strategies for updating duplicated data, see How to write denormalized data in Firebase
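To make the Cloud Functions idea concrete, here is a hedged sketch of the fan-out update; the trigger layout and the field names (avatar, playerIds, avatars) are assumptions for illustration, not taken from the original data model:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.syncAvatar = functions.firestore
  .document('USERS/{userId}')
  .onUpdate(function (change, context) {
    const before = change.before.data();
    const after = change.after.data();
    if (before.avatar === after.avatar) {
      return null;                                   // nothing to fan out
    }
    return admin.firestore()
      .collection('MATCHES')
      .where('playerIds', 'array-contains', context.params.userId)
      .get()
      .then(function (matches) {
        const batch = admin.firestore().batch();
        matches.forEach(function (doc) {
          // assumed layout: avatars keyed by user id inside each match document
          batch.update(doc.ref, {['avatars.' + context.params.userId]: after.avatar});
        });
        return batch.commit();                       // one write per duplicate, batched
      });
  });

A WriteBatch tops out at 500 operations, so a user who has joined more matches than that would need the updates split across several batches.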

Conditional update for MongoDB (Meteor)

TLDR: Is there a way I can conditionally update a Meteor Mongo record inside a collection, so that if I use the id as a selector, I want to update if that matches and only if the revision number is greater than what already exists, or perform an upsert if there is no id match?
I am having an issue with updates to server-side Meteor Mongo collections, whereby it seems the added() callback in the observers is being triggered on an upsert.
Here is what I am trying to do in a nutshell.
My meteor js app boots and then connects to an endpoint, fetching data and then upserting it into the collection.
collection.update({'sys.id': item.sys.id}, item, {upsert: true});
The 'sys.id' selector checks to see if the item exists, and then updates if it does or adds if it does not.
I have an observer monitoring the above collection, which then acts when an item has been added/updated to the collection.
collection.find({}).observeChanges({
  added: this.itemAdded.bind(this),
  changed: this.itemChanged.bind(this),
  removed: this.itemRemoved.bind(this)
});
The first thing that puzzles me is that when the app is closed and then booted again, the 'added()' callback is fired when the collection is observed. What I would hope to happen is that the changed() callback is fired.
Going back to my original update - is it possible in Mongo to conditionally update something, so you have the selector, then the item, but only perform the update when another condition is met?
// Incoming item
var item = {
  sys: {
    id: 1,
    revision: 5
  }
};
collection.update({'sys.id': item.sys.id, 'sys.revision': {$gt: item.sys.revision}}, item, {upsert: true});
If you look at the above code, what this is going to do is try to match the sys.id which is fine, but then the revisions will of course be different which means the update function will see it as a different document and then perform a new insert, thus creating duplicate data.
How do I fix this?
To your main question:
What you want is called findAndModify. First, look for the document meeting the specs, and then update accordingly. This is a really powerful idea because if you did it in 2 queries, the document you found could be deleted/updated before you got to update it. Luckily for you, someone made a package (I really wish this existed a year ago!) https://github.com/fongandrew/meteor-find-and-modify
If you were to do this without using findAndModify you'd have to use javascript to find the doc, see if it matches your criteria, and then update it. In your use case, this would probably work, but there will always be that "what if" in the back of your mind.
Regarding observeChanges, added is called each time the local minimongo receives a document (it's just reading what the DDP is telling it). Since a refresh deletes your local collection, those docs all have to be added again one by one. What you could do is wait until all added callbacks have fired, and then run your server method; in doing so, you get a ton of adds up front, and then a couple more changes trickle in afterwards.
As Matt K said, you want findAndModify. There are some gotchas to be aware of:
findAndModify is about 100x slower than a find followed by an update. A separate find followed by a modify is, obviously, not atomic and so won't do what you need, but be aware of the speed hit. (This is based on experience with MongoDB v2.4, so run some benchmarks to confirm under your own version.)
If your query matches multiple items, findAndModify will only act on the first one. In this case, you're querying on a unique id, but be aware of the issue for future use.
findAndModify will return the document after doing its thing, but by default it returns the pre-modification version. If you want the modified one, you need to pass 'new: true' in your query options.
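As a rough sketch of that idea (assuming Meteor's rawCollection() handle to the Node MongoDB driver, a unique index on sys.id, and option names from recent driver versions), the revision-guarded upsert from the question could look like this:

var raw = collection.rawCollection();   // the underlying Node driver collection

raw.findOneAndUpdate(
  {
    'sys.id': item.sys.id,
    'sys.revision': {$lt: item.sys.revision}   // only match when the stored revision is older
  },
  {$set: item},
  {
    upsert: true,                // insert when nothing matches at all
    returnDocument: 'after'      // the driver's spelling of 'new: true'
  }
).catch(function (err) {
  // With a unique index on sys.id, a duplicate key error (code 11000) just means an
  // equal-or-newer revision is already stored, so there is nothing left to do.
  if (err.code !== 11000) {
    throw err;
  }
});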