Firebase Cloud Firestore - Initializing a Collection [duplicate] - swift

This question already has an answer here:
How to create a Collection in Firestore with node.js code
(1 answer)
Closed 4 years ago.
I'm developing a chat app in Swift, and when the user creates a chatroom for the first time, it obviously doesn't exist yet in Firebase. Not until the first message gets created does everything get built out.
When my app first loads, in the viewDidLoad(), I'm setting a reference to the chat at this path (keep in mind it doesn't exist):
reference = db.collection("chatrooms").document("\(chatRoomId)").collection("thread").document("\(threadId)").collection("messages")
So when I add my snapshot listener just below this, and the user creates their first message on the thread, the listener doesn't fire for everyone (I assume, anyway, because it doesn't fire for me).
I do like that once you add your first message, it creates the whole document structure above, but it means the first users who might be watching the thread (however unlikely that is) won't get the message.
Is this an uncommon edge case I'm trying to guard against, or do I need to do something like this (pseudo-code)?
IF (this collection doesn't exist yet) THEN
    CREATE GROUP/THREAD/EMPTY MESSAGE COLLECTION
    reference = db.collection("chatrooms").document("\(chatRoomId)").collection("thread").document("\(threadId)").collection("messages")
ELSE
    reference = db.collection("chatrooms").document("\(chatRoomId)").collection("thread").document("\(threadId)").collection("messages")
END
I was wondering if there is some sort of initializer I can call to make the structure for me, but still set it up so my first message to the collection fires the snapshot listener.
Thanks!

What you're doing is unnecessary. There is no formal method required to create a collection. A collection simply springs into existence when you first write a document into it. You can listen on an empty collection with no problems, and listeners should be invoked even if the collection wasn't visible in the dashboard at the time the listener was added.
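For instance, a minimal Swift sketch of this (assuming the same db, chatRoomId, and threadId as in the question):

// The messages collection doesn't exist yet; the listener still attaches,
// fires once with an empty snapshot, and fires again when the first
// message document is written.
let reference = db.collection("chatrooms").document("\(chatRoomId)")
    .collection("thread").document("\(threadId)")
    .collection("messages")

let registration = reference.addSnapshotListener { snapshot, error in
    guard let snapshot = snapshot else {
        print("Listen failed: \(String(describing: error))")
        return
    }
    for change in snapshot.documentChanges where change.type == .added {
        print("New message: \(change.document.data())")
    }
}
// Keep registration around and call registration.remove() when you stop listening.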

Related

What's the best practice for the object pool pattern in Flutter/Dart?

Imagine a very simple application with two pages, PostList and PostDetail. On the former page, we show a list of posts, and on the latter, the details of a single post.
Now consider the following scenario. Our user clicks on the first PostItem and navigates to the PostDetail page. We fetch the full post data from the server. The likes_count of this post gets increased by one. Now if our user navigates back to the first page, the PostItem should be updated and show the new likes_count as well.
One possible solution to handle this is to create a pool of posts. Now when we fetch some new data from the server, instead of creating a new post object, we can update our corresponding pool instance object. For example, if we navigate to post with id=3, we can do something like this:
Post oldPost = PostPool.get(3);
oldPost.updateFromJson(servers_new_json_for_post_3);
Since the same object is used on the PostDetail page, our PostItem on the PostList page will be updated as well.
Other approaches that do not use a unique "single instance" of our Post objects across the application would not be clean to implement and require tricks to keep the UI in sync.
But the ObjectPool approach also has its own problems and leads to memory leaks, since the size of the pool grows over time. To solve this, we would need to manually count the references to each pool instance and discard it when the count reaches zero. This manual bookkeeping is error-prone, and I was wondering if there are any better ways to achieve this.
You can also solve this by using streams and StreamBuilders. First you create the stream and populate it with the initial data fetched from the API:
I like to use BehaviorSubject from rxdart but you can use normal streams too.
final BehaviorSubject<List<Post>> postsStream = BehaviorSubject<List<Post>>.seeded(<Post>[]); // seeded() gives subscribers an initial empty list
In the constructor body or in initState you would fetch the data and add it to the stream:
PostPage() {
  _fetchPosts().then((posts) => postsStream.add(posts));
}
You can now subscribe to changes on postsStream in both pages with StreamBuilder. For any update, you emit a new (updated) List<Post> to the stream, which triggers a rebuild of every StreamBuilder subscribed to the stream with the new, updated values.
You can later dispose of the StreamController.

Get Deleted Object in AbstractMongoEventListener

I want to run some logic when an object gets deleted from MongoDB. I am using Spring Data Mongo.
I am using AbstractMongoEventListener, as the object can be deleted from the collection in a number of ways, and I am overriding the
public void onBeforeDelete(BeforeDeleteEvent<Object> event)
method. But there is no method on the event object that returns the object I am about to delete.
event.getSource() and event.getDocument() both return the document. How can I get the object?
Somehow this event seems to be messed up. Unlike the other MongoMappingEvent<T> descendants, this one inherits a MongoMappingEvent<Document> through AbstractDeleteEvent<T>. I cannot explain this difference.
But as I also needed to retrieve the documents before deleting them, I used the debugger and found that it is possible to retrieve the document ids using a hackish, undocumented get("key") chain:
event.getDocument()
     .get("_id", Document.class)      // BSON Document!
     .getList("$in", ObjectId.class); // ObjectId.class, or whatever type your id is
With that you can retrieve a list of the ids of your documents. Take the repository, or whatever you use, and fetch the documents by those ids.
I really do not like using those string keys, which I have not found in any documentation, as who knows when they will be removed.
I would love to remove this answer as soon as someone provides a less hackish way.
Be aware that when you are using an #EventHandler, it cannot consider the type parameter.

Conditional update for MongoDB (Meteor)

TLDR: Is there a way I can conditionally update a record in a Meteor Mongo collection, so that if I use the id as a selector, the update happens only when the id matches and the revision number is greater than what already exists, with an upsert performed when there is no id match?
I am having an issue with updates to server-side Meteor Mongo collections, whereby it seems the added() callback in the observers is being triggered on an upsert.
Here is what I am trying to do in a nutshell.
My Meteor.js app boots and then connects to an endpoint, fetching data and then upserting it into the collection:
collection.update({'sys.id': item.sys.id}, item, {upsert: true});
The 'sys.id' selector checks to see if the item exists, and then updates if it does or adds if it does not.
I have an observer monitoring the above collection, which then acts when an item has been added/updated to the collection.
collection.find({}).observeChanges({
  added: this.itemAdded.bind(this),
  changed: this.itemChanged.bind(this),
  removed: this.itemRemoved.bind(this)
});
The first thing that puzzles me is that when the app is closed and then booted again, the 'added()' callback is fired when the collection is observed. What I would hope to happen is that the changed() callback is fired.
Going back to my original update - is it possible in Mongo to conditionally update something, so you have the selector, then the item, but only perform the update when another condition is met?
// Incoming item
var item = {
  sys: {
    id: 1,
    revision: 5
  }
};

collection.update({'sys.id': item.sys.id, 'sys.revision': {$gt: item.sys.revision}}, item, {upsert: true});
If you look at the above code, it is going to try to match the sys.id, which is fine, but the revisions will of course be different, which means the update will see it as a different document and perform a new insert, thus creating duplicate data.
How do I fix this?
To your main question:
What you want is called findAndModify. First, look for the document meeting the specs, and then update accordingly. This is a really powerful idea, because if you did it in 2 queries, the document you found could be deleted/updated before you got to update it. Luckily for you, someone made a package (I really wish this existed a year ago!): https://github.com/fongandrew/meteor-find-and-modify
If you were to do this without using findAndModify you'd have to use javascript to find the doc, see if it matches your criteria, and then update it. In your use case, this would probably work, but there will always be that "what if" in the back of your mind.
Regarding observeChanges, added is called each time the local minimongo receives a document (it's just reading what the DDP is telling it). Since a refresh will delete your local collection, you have to add those docs one by one. What you could do is wait until all added callbacks have fired, and then run your server method. In doing so, you get a ton of adds, and then a couple more changes will trickle in afterwards.
As Matt K said, you want findAndModify. There are some gotchas to be aware of:
findAndModify is about 100x slower than a find followed by an update. Find+modify is, obviously, not atomic and so won't do what you need, but be aware of the speed hit. (This is based on experience with MongoDB v2.4, so run some benchmarks to confirm under your own version.)
If your query matches multiple items, findAndModify will only act on the first one. In this case, you're querying on a unique id, but be aware of the issue for future use.
findAndModify will return the document after doing its thing, but by default it returns the pre-modification version. If you want the modified one, you need to pass the 'new: true' in your query.

How to get a list of aggregates using JOliver's CommonDomain and EventStore?

The repository in the CommonDomain only exposes "GetById()". So what should I do if my Handler needs a list of Customers, for example?
Taking your question at face value: if you needed to perform operations on multiple aggregates, you would just provide the IDs of each aggregate in your command (which the client would obtain from the query side), then get each aggregate from the repository.
However, looking at one of your comments in response to another answer I see what you are actually referring to is set based validation.
This very question has raised quite a lot of debate about how to do this, and Greg Young has written a blog post on it.
The classic question is: how do I check that the username hasn't already been used when processing my CreateUserCommand? I believe the suggested approach is to assume that the client has already done this check by asking the query side before issuing the command. When the user aggregate is created, the UserCreatedEvent will be raised and handled by the query side. Here, the insert query will fail (either because of a check or a unique constraint in the DB), and a compensating command would be issued, which would delete the newly created aggregate and perhaps email the user telling them the username is already taken.
The main point is, you assume that the client has done the check. I know this approach is difficult to grasp at first, but it's the nature of eventual consistency.
Also you might want to read this other question which is similar, and contains some wise words from Udi Dahan.
In the classic event sourcing model, queries like "get all customers" would be carried out by a separate query handler which listens to all events in the domain and builds a query model to satisfy the relevant questions.
If you need to query customers by last name, for instance, you could listen to all customer created and customer name change events and just update one table of last-name to customer-id pairs. You could hold other information relevant to the UI that is showing the data, or you could simply hold IDs and go to the repository for the relevant customers in order to work further with them.
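As a rough illustration of such a projection, here is a sketch in Swift; the event types and class are invented for the example and are not part of CommonDomain or EventStore:

// A tiny query-side projection: it consumes domain events and maintains a
// last-name -> customer-ids index that "find customers by last name" reads.
struct CustomerCreated { let id: String; let lastName: String }
struct CustomerLastNameChanged { let id: String; let newLastName: String }

final class CustomerLastNameProjection {
    private(set) var idsByLastName: [String: Set<String>] = [:]
    private var lastNameById: [String: String] = [:]

    // Called for every event published by the domain; events this
    // projection doesn't care about are ignored.
    func handle(_ event: Any) {
        switch event {
        case let e as CustomerCreated:
            index(id: e.id, lastName: e.lastName)
        case let e as CustomerLastNameChanged:
            if let old = lastNameById[e.id] {
                idsByLastName[old]?.remove(e.id)
            }
            index(id: e.id, lastName: e.newLastName)
        default:
            break
        }
    }

    private func index(id: String, lastName: String) {
        lastNameById[id] = lastName
        idsByLastName[lastName, default: []].insert(id)
    }
}

A handler that needs "all customers named Smith" then reads idsByLastName["Smith"] and loads those aggregates by id from the repository.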
You don't need a list of customers in your handler. Each aggregate MUST be processed in its own transaction. If you want to show this list to the user, just build an appropriate view.
Your command needs to contain the id of the aggregate root it should operate on.
This id will be looked up by the client sending the command, using a view in your read model. This view will be populated with data from the events that your AR emits.

How can I best deal with problems when initializing a model in an iphone app?

So assume I have a class whose init method does something like... grab some data off the net in XML format and parse it to initialize some of its properties. My concern is: how should I handle the case where the network is down or the XML data my object receives is bad?
Normally in C I would use return values to indicate an error and what kind, and then that would get propagated back until I could report it to the user. I don't really think that will work in this situation.
Use asynchronous network requests:
Create the UI and show it with either dummy replacements for the actual values (like pictures) or no data (for example, an empty table).
Then create and send the request for the data and register a handler that gets called with the data.
When you receive the data, your handler gets called with it.
Parse the data and update the UI. In case the data is invalid, you can now update the UI to inform the user.
You can use timeouts to cancel requests in case of network problems or functions not returning data within a specific time.
There was an example in last year's Stanford CS193p class (iPhone programming, but the same applies to desktop apps) showing an empty user interface and updating it when data comes back. You can probably find references to it on the net, or otherwise there'll be a new example this year.
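A minimal sketch of the same pattern in Swift with URLSession; the function and callback names are illustrative, not from that course:

import Foundation

// Show placeholder UI immediately, then fetch asynchronously and update
// (or report an error) once the response arrives.
func loadContent(from url: URL,
                 showPlaceholder: () -> Void,
                 update: @escaping (Result<Data, Error>) -> Void) {
    showPlaceholder()  // UI appears at once with dummy/empty data.

    // Send the request and register a completion handler.
    let task = URLSession.shared.dataTask(with: url) { data, _, error in
        // The handler is called later; hop to the main queue before
        // touching the UI.
        DispatchQueue.main.async {
            if let data = data {
                update(.success(data))  // Parse and render the real values.
            } else {
                update(.failure(error ?? URLError(.badServerResponse)))
            }
        }
    }
    task.resume()
}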
For network down, you have a few options:
Alert the user you cannot retrieve the needed data
Show stale (last loaded, maybe not stale?) data
For Bad Data:
Alert the user
Try again
Show old data
Try to fix the data (missing a closing tag? etc.)
Show a subset of the data (maybe you can extract something that is usable?)
As far as error codes go, you can:
Return codes, i.e. bad_data = -1, no_network = -2, etc.
Throw exceptions, catch them, and map them to user-friendly display messages, as sketched below.
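For the exception route, a hedged Swift sketch (the error cases and messages are illustrative):

// Model failure modes as an error type instead of C-style return codes,
// then map each case to a user-friendly message at the UI layer.
enum LoadError: Error {
    case networkDown
    case badData
}

func userMessage(for error: LoadError) -> String {
    switch error {
    case .networkDown:
        return "Couldn't reach the server. Please check your connection and try again."
    case .badData:
        return "The data received was invalid. Please try again later."
    }
}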