Say I subscribe to 10 documents when the page is rendered and subscribe to more documents as and when needed. Basically, I am showing images to the user, so when the page is opened, I want to subscribe to the first 5 documents. Then, when the user is on the 3rd document, I want to subscribe to the next 5. Any help on how to proceed?
I can subscribe to the first 10 documents using the limit property of MongoDB. I know when to fire the next Meteor.subscribe call, but how do I tell it to subscribe to the next five documents?
A simple pattern for this is to use a Session variable or a ReactiveVar to track how many items you want to load, and then have a Tracker.autorun() update the subscription automatically when that value changes.
Initialize (when you're setting up the layout):
Session.set('nDocs',10);
Tracker:
Tracker.autorun(() => {
  Meteor.subscribe('myPublication', Session.get('nDocs'));
});
Event Handler (triggered when the user views the 3rd doc in your case):
someEvent(ev) {
  Session.set('nDocs', Session.get('nDocs') + 10);
}
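For completeness, here is a minimal sketch of what the matching publication could look like on the server; the Images collection name and the createdAt sort field are assumptions, and only the limit argument is essential:
// server side -- a sketch; `Images` and `createdAt` are assumed names
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { check } from 'meteor/check';

const Images = new Mongo.Collection('images'); // assumed collection

Meteor.publish('myPublication', function (nDocs) {
  check(nDocs, Number);
  // Return a cursor limited to nDocs documents; bumping nDocs on the client
  // re-runs the subscription with the larger limit.
  return Images.find({}, { limit: nDocs, sort: { createdAt: 1 } });
});
Each time the Session variable grows, the autorun above fires Meteor.subscribe again with the larger count, and the additional documents show up on the client.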
Firebase has a limit of 100 snapshot listeners per client. I have a screen called "MyChats" which displays all the chats a user has, and I wanted to know the following:
Does my function below, which gets the data for the screen, count as just a single listener?
Would grabbing the latest message for each chatroom (to list it in a preview, similar to WhatsApp, FB Messenger, and other chat applications) affect my number of listeners?
firestore
  .collection("chats")
  .where("membersArray", "array-contains", currentUser.uid)
  .where("deletedAt", "==", null)
  .orderBy("lastMessage.date")
  .startAt(new Date())
  .onSnapshot(...);
Does my function below, which gets the data for the screen, count as just a single listener?
You are calling onSnapshot() only once, so that's only 1 listener.
Would grabbing the latest message for each chatroom affect my number of listeners?
If you are referring to the above code, that's just a single query with 1 listener. If you individually add a listener for each document, then that will be N listeners.
const col = db.collection("chats");
// 1 listener irrespective of number of documents in collection
col.onSnapshot(...)
// 3 listeners, as they are assigned separately
col.doc("1").onSnapshot(...)
col.doc("2").onSnapshot(...)
col.doc("3").onSnapshot(...)
Imagine a very simple application with two pages, PostList and PostDetail. On the former page, we show a list of posts, and on the latter, the details of a single post.
Now consider the following scenario. Our user clicks on the first PostItem and navigates to the PostDetail page. We fetch the full post data from the server. The likes_count of this post gets increased by one. Now if our user navigates back to the first page, the PostItem should be updated and show the new likes_count as well.
One possible solution to handle this is to create a pool of posts. Now when we fetch some new data from the server, instead of creating a new post object, we can update our corresponding pool instance object. For example, if we navigate to post with id=3, we can do something like this:
Post oldPost = PostPool.get(3);
oldPost.updateFromJson(servers_new_json_for_post_3);
Since the same object is used on the PostDetail page, our PostItem on the PostList page will be updated as well.
Other approaches that do not use a unique "single instance" of our Post objects across the application would not be clean to implement and would require tricks to keep the UI in sync.
But the object pool approach has its own problems and leads to memory leaks, since the pool keeps growing over time. To solve this, we would need to manually count the references to each pooled object and discard it when the count reaches zero. This manual reference counting is error-prone, and I was wondering whether there are better ways to achieve this.
You can also solve this by using streams and StreamBuilders. First, you create the stream and populate it with the initial data fetched from the API:
I like to use BehaviorSubject from rxdart, but you can use normal streams too.
final BehaviorSubject<List<Post>> postsStream = BehaviorSubject<List<Post>>.seeded(<Post>[]);
In the constructor body or the initState function, you would fetch the data and add it to the stream.
PostPage() {
  _fetchPosts().then((posts) => postsStream.add(posts));
}
You can now subscribe to changes on this postsStream on both pages with a StreamBuilder. For any update you need to make, you emit a new (updated) List<Post> to the stream, triggering a rebuild on every StreamBuilder subscribed to it with the new values.
You can later dispose of the StreamController.
I'm developing a chat app in Swift, and when the user creates a chatroom for the first time, it obviously doesn't exist yet in Firebase. Only once the first message gets created does everything build out.
When my app first loads, in viewDidLoad(), I'm setting a reference to the chat at this path (keep in mind it doesn't exist yet):
reference = db.collection("chatrooms").document("\(chatRoomId)").collection("thread").document("\(threadId)").collection("messages")
So when I add my snapshot listener just below this, and the user creates the first message on the thread, the listener doesn't fire for everyone (I assume, anyway, because it doesn't fire for me).
I do like that once you add the first message it creates the whole document structure above, but this means my first users who might be watching the thread (however unlikely that is) won't get the message.
Is this an uncommon edge case I'm trying to guard against, or do I need to do something like this (pseudo-code)?
IF (this collection doesn't exist yet) THEN
  CREATE GROUP/THREAD/EMPTY MESSAGE COLLECTION
  reference = db.collection("chatrooms").document("\(chatRoomId)").collection("thread").document("\(threadId)").collection("messages")
ELSE
  reference = db.collection("chatrooms").document("\(chatRoomId)").collection("thread").document("\(threadId)").collection("messages")
END
I was wondering if there is some sort of initializer I can call to make the structure for me, but still set it up so my first message to the collection fires the snapshot listener.
Thanks!
What you're doing is unnecessary. There is no formal method required to create a collection. A collection simply springs into existence when you first write a document into it. You can listen on an empty collection with no problems, and listeners should be invoked even if the collection wasn't visible in the dashboard at the time the listener was added.
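A minimal sketch of that behavior, using the JavaScript SDK for brevity (the Swift SDK behaves the same way); the path mirrors the one in the question, and db is assumed to be an initialized Firestore instance:
// Attach the listener while the collection is still empty.
const messagesRef = db
  .collection("chatrooms").doc(chatRoomId)
  .collection("thread").doc(threadId)
  .collection("messages");

const unsubscribe = messagesRef.onSnapshot((snapshot) => {
  // Fires immediately with zero documents, then again when the first
  // message is written and the collection springs into existence.
  snapshot.docChanges().forEach((change) => {
    if (change.type === "added") {
      console.log("New message:", change.doc.data());
    }
  });
});

// Writing the first message later is enough to trigger the listener above.
messagesRef.add({ text: "hello", sentAt: new Date() });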
I had a question last month about POSTing to Workfront's RESVT field. Since then, I've completed my database to pull all of our department's Leave Calendar data and feed it into Workfront's API via POST requests; however, I've discovered a new problem.
Every time I add a new RESVT event to a user who already has a RESVT event, it deletes the previous event before saving the new one. I've looked into submitting a bulk edit, using the bulk-editing format, to get all of a user's events online at once, like this:
https://sosprojects.preview.workfront.com/attask/api/v9.0
/5b6b72b5007d93b00b00dda361398cad?method=put&updates=
[
  {
    objCode: "RESVT",
    startDate: "2018-08-20T00:00:00:000-0700",
    endDate: "2018-08-23T00:00:00:000-0700"
  },
  {
    objCode: "RESVT",
    startDate: "2018-09-20T00:00:00:000-0700",
    endDate: "2018-09-23T00:00:00:000-0700"
  }
]
&sessionID=209055d209f94662b32ac50175b34bc7
Workfront "accepts" this (it doesn't spit out an error code), but it still only saves the last RESVT event (e.g., 9/20 to 9/23).
I've tried using PUT to edit an existing RESVT event, but each RESVT event will only accept one start date and one end date, so it collapses those attempts into one extra-long event.
I know the time-off calendar can manually create multiple RESVT events per user, but I can't figure out how to replicate that with my HTTP methods. The calendar creates new RESVT events for every event logged whenever I add a new one, so I think it is doing something like the bulk POST I tried above; why can't my method do the same thing?
The API docs don't show update as a valid method for the RESVT object. Just modify the fields directly for the specific object you want to update. Do you know its ID?
PUT https://<url>.my.workfront.com/attask/api/v9.0/RESVT/<ID of the reserved time you want to edit>?userID=abc1234,startDate=<date>,endDate=<date>
There is a refresh function in Spine.js which has this option:
You can pass the option {clear: true} to wipe all the existing records.
But let's say I'm implementing pagination and want all records to be cleared on every fetch, because right now, when I fetch the next page, the new records are just appended to the current recordset and the page bloats, which is undesirable.
Unfortunately, fetch has no such option. So is it possible to achieve similar functionality with fetch?
The approach I would take is to override the default fetch by adding your own custom fetch method to your model, for example:
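A sketch in plain JavaScript, assuming a model named Photo that uses the spine.ajax module (the model name and attributes are placeholders): wrap the built-in fetch so it first wipes the recordset via refresh with {clear: true}.
var Photo = Spine.Model.sub();
Photo.configure("Photo", "title", "url");
Photo.extend(Spine.Model.Ajax);

var originalFetch = Photo.fetch;
Photo.fetch = function () {
  // Wipe the current recordset so each page replaces, rather than
  // appends to, the previous one.
  this.refresh([], { clear: true });
  return originalFetch.apply(this, arguments);
};
Every call to Photo.fetch() now clears the local records before the Ajax module reloads them, which gives fetch the same {clear: true} behavior that refresh already has.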