Imagine a very simple application with two pages, PostList and PostDetail. On the former page, we show a list of posts, and on the latter, the details of a single post.
Now consider the following scenario. Our user clicks on the first PostItem and navigates to the PostDetail page. We fetch the full post data from the server. The likes_count of this post gets increased by one. Now if our user navigates back to the first page, the PostItem should be updated and show the new likes_count as well.
One possible solution is to create a pool of posts. When we fetch new data from the server, instead of creating a new Post object, we update the corresponding instance already in the pool. For example, when we navigate to the post with id=3, we can do something like this:
final Post oldPost = PostPool.get(3);
oldPost.updateFromJson(servers_new_json_for_post_3);
Since the same object is used on the PostDetail page, our PostItem on the PostList page will be updated as well.
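A minimal sketch of what such a pool might look like (simplified; the class and method names match the snippet above, and only likes_count is handled here):

class Post {
  Post(this.id, this.likesCount);
  final int id;
  int likesCount;

  // Mutate the existing instance in place instead of creating a new one.
  void updateFromJson(Map<String, dynamic> json) {
    likesCount = json['likes_count'] as int;
  }
}

class PostPool {
  static final Map<int, Post> _posts = {};

  // Return the single shared instance for this id, creating it on first use.
  static Post get(int id) => _posts.putIfAbsent(id, () => Post(id, 0));
}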
Other approaches that do not use a unique "single instance" of each Post object across the application would not be clean to implement and would require tricks to keep the UI in sync.
But the object-pool approach has its own problems and leads to memory leaks, since the pool keeps growing over time. To solve this, we would have to manually count the references to each pooled instance and discard it when the count reaches zero. This manual bookkeeping is error-prone, and I was wondering whether there are better ways to achieve this.
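For illustration, the manual bookkeeping I mean would look roughly like this (a variant of the sketch above; acquire and release are hypothetical names):

class PostPool {
  static final Map<int, Post> _posts = {};
  static final Map<int, int> _refCounts = {};

  // Called by every page/widget that starts using the post.
  static Post acquire(int id) {
    _refCounts[id] = (_refCounts[id] ?? 0) + 1;
    return _posts.putIfAbsent(id, () => Post(id, 0));
  }

  // Called when that page/widget is done; the entry is dropped at zero.
  static void release(int id) {
    final count = (_refCounts[id] ?? 1) - 1;
    if (count <= 0) {
      _posts.remove(id);
      _refCounts.remove(id);
    } else {
      _refCounts[id] = count;
    }
  }
}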
You can also solve this by using streams and StreamBuilders. First you create the stream and populate it with the initial data fetched from the API:
I like to use BehaviorSubject from rxdart, but you can use normal streams too. The seeded constructor gives the subject an initial value that new listeners receive immediately.
import 'package:rxdart/rxdart.dart';
final BehaviorSubject<List<Post>> postsStream = BehaviorSubject<List<Post>>.seeded(<Post>[]);
In the constructor body or in the initState function, you fetch the data and add it to the stream:
PostPage() {
  _fetchPosts().then((posts) => postsStream.add(posts));
}
You can now subscribe to changes on postsStream in both pages with a StreamBuilder. Whenever you need to update something, you emit a new (updated) List<Post> to the stream, which triggers a rebuild of every StreamBuilder subscribed to it with the new values.
You can later dispose of the StreamController when it is no longer needed.
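Putting it together, a minimal sketch of the subscribing side might look like this (Post, PostItem and _fetchPosts are assumed to exist in your app, as in the question; here the fetch happens in initState and the subject is closed in dispose):

import 'package:flutter/material.dart';
import 'package:rxdart/rxdart.dart';

class PostListPage extends StatefulWidget {
  const PostListPage({super.key});

  @override
  State<PostListPage> createState() => _PostListPageState();
}

class _PostListPageState extends State<PostListPage> {
  // Seeded so the StreamBuilder has an empty list before the fetch completes.
  final BehaviorSubject<List<Post>> postsStream =
      BehaviorSubject<List<Post>>.seeded(<Post>[]);

  @override
  void initState() {
    super.initState();
    _fetchPosts().then(postsStream.add);
  }

  @override
  Widget build(BuildContext context) {
    return StreamBuilder<List<Post>>(
      stream: postsStream,
      builder: (context, snapshot) {
        final posts = snapshot.data ?? const <Post>[];
        return ListView(
          // PostItem is assumed to take the post it renders as a parameter.
          children: [for (final post in posts) PostItem(post: post)],
        );
      },
    );
  }

  @override
  void dispose() {
    postsStream.close();
    super.dispose();
  }
}

Emitting an updated List<Post> to postsStream at any point (for example after likes_count changes) rebuilds this list automatically.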
I'm experimenting with microservices architecture. I have UserService and ShoppingService.
In UserService I'm using MongoDB. When I create a new user in UserService, I want to sync the basic user info to ShoppingService. In UserService I'm using something like event sourcing: when I create a new User, I first create a UserCreatedEvent and then apply the event to the domain User object. So in the end I get a domain User object that holds the current state plus a list of events containing one UserCreatedEvent.
I wonder whether I should persist the events as a nested property of the User document or in a separate UserEvents collection. I was planning to use Kafka Connect to synchronize the events from UserService to ShoppingService.
If I persist the events inside the User document, I don't need the transaction I would otherwise use to save an event to a separate UserEvents collection, but I can't set up the Kafka connector to track changes in the nested property only.
If I persist the events in a separate UserEvents collection, I need to wrap the changes to User and UserEvents in a transaction. But saving events to a separate collection makes setting up the Kafka connector very easy, because I only track inserts and don't need to track updates of a nested events array inside the User document.
I think I will go with the second option for the sake of simplicity, but maybe I've missed something. Is it a good idea to implement it like this?
I would generally advise the second approach. Note that you can also eliminate the need for a transaction by observing that User is just a snapshot based on the UserEvents up to some point in the stream and thus doesn't have to be immediately updated.
With this, your read operation for User can be: select a user from User (the latest snapshot), which includes a version/sequence number saying that it's as-of some event; then select the events with later sequence numbers and apply those events to the user. If there's some querier which wants a faster response and can tolerate getting something stale, a different endpoint (or an option in the query) can bypass the event replay.
You can then have some asynchronous process which subscribes to the stream of user events and updates User based on those events.
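A rough, storage-agnostic sketch of that read path (written in Dart to match the other examples on this page; the event shapes and field names are made up, and in practice the snapshot and events would come from MongoDB):

class UserEvent {
  UserEvent(this.sequence, this.type, this.data);
  final int sequence;            // position in the user's event stream
  final String type;             // e.g. 'UserCreated', 'UserRenamed'
  final Map<String, Object?> data;
}

class User {
  User(this.id, this.name, this.version);
  final String id;
  String name;
  int version;                   // sequence number of the last applied event

  void apply(UserEvent event) {
    if (event.type == 'UserRenamed') {
      name = event.data['name'] as String;
    }
    version = event.sequence;
  }
}

// Read path: start from the latest snapshot, then replay only the events
// whose sequence number is greater than the snapshot's version.
User readUser(User snapshot, List<UserEvent> events) {
  for (final event in events.where((e) => e.sequence > snapshot.version)) {
    snapshot.apply(event);
  }
  return snapshot;
}

The asynchronous process mentioned above would simply call apply for each incoming event and persist the resulting snapshot with its new version.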
I'm new to Dart/Flutter and am building an app right now that keeps track of some data on each user in a Firestore database. I have a 'Users' collection which contains a document for each user, and one of the fields in the user document is the "UID" received through firebase_auth.
That being said, to make sure I have access to the latest copy of a user document, I hold a Stream. I want to somehow access the "UID" field from the latest snapshot in the stream and then do some other operations with it.
Is there any way to do this? Am I using/understanding streams incorrectly?
If you only ever need the UID to build other widgets, you can simply use a StreamBuilder which will rebuild its children whenever a new value is emitted from the stream (which you get a copy of). However, if you need to access the latest UID at some arbitrary point of time, check out RxDart's BehaviorSubject which provides constant-time synchronous access to the latest emitted value, if one exists. It is very helpful for handling state in general.
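For example, a rough sketch of the second approach (the 'Users' collection and 'UID' field are taken from the question; docId and error handling are omitted):

import 'package:cloud_firestore/cloud_firestore.dart';
import 'package:rxdart/rxdart.dart';

final BehaviorSubject<DocumentSnapshot<Map<String, dynamic>>> userDoc =
    BehaviorSubject();

void watchUser(String docId) {
  FirebaseFirestore.instance
      .collection('Users')
      .doc(docId)
      .snapshots()
      .listen(userDoc.add);
}

// Later, at any arbitrary point in time, read the latest UID synchronously.
// valueOrNull is available in recent rxdart versions; older versions expose
// hasValue/value instead.
String? latestUid() {
  return userDoc.valueOrNull?.data()?['UID'] as String?;
}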
I'm developing a chat app in Swift, and when a user creates a chatroom for the first time it obviously doesn't exist yet in Firebase. Only once the first message is created does the whole structure get built out.
When my app first loads, in viewDidLoad(), I'm setting a reference to the chat at this path (keep in mind it doesn't exist yet):
reference = db.collection("chatrooms").document("\(chatRoomId)").collection("thread").document("\(threadId)").collection("messages")
So when I add my snapshot listener just below this and the user creates their first message on the thread, the listener doesn't fire for everyone (I assume, anyway, because it doesn't fire for me).
I do like that once you add your first message, it creates the whole document structure above, but this means my first users who might be watching the thread (however unlikely this is) won't get the message.
Is this an uncommon edge case I'm trying to guard against, or do I need to do something like this (pseudo-code)?
IF (this collection doesn't exist yet) THEN
    CREATE GROUP/THREAD/EMPTY MESSAGE COLLECTION
    reference = db.collection("chatrooms").document("\(chatRoomId)").collection("thread").document("\(threadId)").collection("messages")
ELSE
    reference = db.collection("chatrooms").document("\(chatRoomId)").collection("thread").document("\(threadId)").collection("messages")
END
I was wondering if there is some sort of initializer I can call to make the structure for me, but still set it up so my first message to the collection fires the snapshot listener.
Thanks!
What you're doing is unnecessary. There is no formal method required to create a collection: a collection simply springs into existence when you first write a document into it. You can listen on an empty collection with no problems, and listeners should be invoked even if the collection wasn't visible in the console at the time the listener was attached.
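For illustration, here's that idea sketched with the Flutter cloud_firestore plugin used elsewhere on this page (the same applies to addSnapshotListener in the iOS SDK); chatRoomId and threadId are assumed to exist:

import 'package:cloud_firestore/cloud_firestore.dart';

void listenForMessages(String chatRoomId, String threadId) {
  // Attaching a listener to a path that has no documents yet is fine: it
  // fires immediately with an empty snapshot, and again when the first
  // message is written, which is what implicitly creates the collection.
  FirebaseFirestore.instance
      .collection('chatrooms')
      .doc(chatRoomId)
      .collection('thread')
      .doc(threadId)
      .collection('messages')
      .snapshots()
      .listen((snapshot) {
    for (final doc in snapshot.docs) {
      print(doc.data());
    }
  });
}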
When I create several objects of the same type and save them to the database, should I send a list of those objects in one request, or should I send them individually, one request per object?
For example, to build a todo list I could create multiple todos and then click save to send the whole list, or I could save each todo directly as soon as I finish editing it.
The first way saves on the number of requests: only one request is needed to create many objects. But is the first way RESTful? All the information I can find about creating resources in REST deals with creating a single object, and will there be problems if the number of requests increases?
----Edit
Thank you guys for answering me.
For a more specific use case: I am using Django REST Framework. I created a Todo model and a corresponding serializer. I am wondering how I could create a list of Todos. I tried to send a list of Todos to the serializer, expecting it to loop through them automatically the same way it does when serializing a list of instances, but that doesn't work. I know I could write a loop that calls the create method each time, but is there a better way to do it?
There is nothing in REST that tells you what kind of payload you are allowed to use. You can POST/PUT whatever you want: one entity representation or many representations, in lists, dictionaries, XML, URL-encoded key/values or JSON, whatever suits your use case best.
In your case you might even want to send a delta/diff list of changes from the client. Let's say, for instance, your client loads some existing 3 todo items. Then the user modifies one of them, deletes another one and adds a new one. You can either do that in three requests or in one single request with add/modify/delete operations encoded in it. Both ways are valid, and the best solution depends on your use case and constraints like bandwidth, processing power and network round-trip time.
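As an illustration, such a batch request might look like this on the client (the endpoint and payload shape are made up for the example; sketched in Dart with package:http to match the other examples on this page):

import 'dart:convert';
import 'package:http/http.dart' as http;

Future<void> saveTodoChanges() async {
  // One request carrying several operations instead of three separate calls.
  final operations = [
    {'op': 'update', 'id': 1, 'title': 'Buy milk', 'done': true},
    {'op': 'delete', 'id': 2},
    {'op': 'create', 'title': 'Walk the dog', 'done': false},
  ];

  final response = await http.post(
    Uri.parse('https://example.com/api/todos/batch'),
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode(operations),
  );

  if (response.statusCode != 200) {
    throw Exception('Batch save failed: ${response.statusCode}');
  }
}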
So assume I have a class whose init method does something like this: it grabs some data off the net in XML format and parses it to initialize some of its properties. My concern is how I should handle the case where the network is down or the XML data my object receives is bad.
Normally in C, I would use return values to indicate that an error occurred and what kind it was, and that would get propagated back up until I could report it to the user. I don't really think that will work in this situation.
Use asynchronous network requests:
1. Create the UI and show it with either dummy replacements for the actual values (like pictures) or no data (for example, an empty table).
2. Create and send the request for the data, and register a handler that gets called with the data.
3. When you receive the data, your handler gets called with it.
4. Parse the data and update the UI. If the data turns out to be invalid, you can now update the UI to inform the user.
You can use timeouts to cancel requests in case of network problems or calls that do not return data within a specific time.
There was an example in last year's Stanford CS193p class (iPhone programming, but the same applies to desktop apps) that showed an empty user interface and updated it when the data came back. You can probably find references to it on the net, or there will be a new example this year.
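A rough sketch of that flow (written in Dart with package:http to match the other examples on this page, but the same structure applies with completion handlers on iOS; updateUi and showError stand in for whatever your UI layer provides):

import 'dart:async';
import 'package:http/http.dart' as http;

Future<void> loadData(
  void Function(String xml) updateUi,
  void Function(String message) showError,
) async {
  try {
    // The UI is already showing its empty/placeholder state at this point.
    final response = await http
        .get(Uri.parse('https://example.com/data.xml'))
        .timeout(const Duration(seconds: 10)); // give up on slow networks

    if (response.statusCode != 200) {
      showError('Server returned ${response.statusCode}');
      return;
    }
    updateUi(response.body); // parse the XML here and refresh the UI
  } on TimeoutException {
    showError('The request timed out. Check your connection.');
  } on Exception catch (e) {
    showError('Could not load data: $e');
  }
}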
For the network being down, you have a few options:
- Alert the user that you cannot retrieve the needed data
- Show the last loaded data (which may not even be that stale)

For bad data:
- Alert the user
- Try again
- Show old data
- Try to fix the data (missing a closing tag? etc.)
- Show a subset of the data (maybe you can extract something that is usable?)

As far as error codes go, you can:
- Use return codes, e.g. bad_data = -1, no_network = -2, etc.
- Throw exceptions, catch them, and map them to user-friendly display messages
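As a small sketch of that last option (the error types and messages are made up for the example):

// Distinct error types for the two failure modes, mapped to user-facing text.
class NoNetworkException implements Exception {}

class BadDataException implements Exception {
  BadDataException(this.detail);
  final String detail;
}

String userMessageFor(Object error) {
  if (error is NoNetworkException) {
    return 'No network connection. Showing the last loaded data.';
  }
  if (error is BadDataException) {
    return 'The data received was invalid (${error.detail}).';
  }
  return 'Something went wrong.';
}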