How to use Observables as a lazy data source - event-handling

I'm wrapping an API that emits events in Observables and currently my datasource code looks something like this, with db.getEventEmitter() returning an EventEmitter.
const Datasource = {
  getSomeData() {
    return Observable.fromEvent(db.getEventEmitter(), 'value');
  }
};
However, to actually use this, I need to both memoize the function and have it return a ReplaySubject. Otherwise, each subsequent call to getSomeData() would reinitialize the entire sequence, either recreating the event emitters or producing no data until the next update, which is undesirable. So my code ends up looking more like this for every function:
let someDataCache = null;
const Datasource = {
  getSomeData() {
    if (someDataCache) { return someDataCache; }
    const subject = new ReplaySubject(1);
    Observable.fromEvent(db.getEventEmitter(), 'value').subscribe(subject);
    someDataCache = subject;
    return subject;
  }
};
This ends up being quite a lot of boilerplate for a single function, and it becomes more of an issue when there are parameters involved.
Is there a better, more elegant design pattern to accomplish this? Basically, I'd like the following:
Only one event emitter is created.
Callers who call the datasource later get the most recent result.
The event emitters are created when they're needed.
Right now I feel like this pattern is fighting the Observable pattern, resulting in a bunch of boilerplate.

As a followup to this question, I ended up factoring the logic for leveraging Observables this way into a common helper. publishReplay, as cartant mentioned, does get me most of the way to what I needed. I've documented what I've learned in this post, with the following tl;dr code:
let first = true
Rx.Observable.create(
  observer => {
    const callback = data => {
      first = false
      observer.next(data)
    }
    const event = first ? 'value' : 'child_changed'
    db.ref(path).on(event, callback, error => observer.error(error))
    return {event, callback}
  },
  (handler, {event, callback}) => {
    db.ref(path).off(event, callback)
  },
)
  .map(snapshot => snapshot.val())
  .publishReplay(1)
  .refCount()


react-query: How to process a queue, one item at a time, and remove the original data after processing?

I'm using react-query 4 to get some data from my server via JSON:API and create some objects:
export type QueryReturnQueue = QueueObject[] | false;

const getQueryQueue = async (query: string): Promise<QueryReturnQueue> => {
  const data = await fetchAuth(query);
  if (data) {
    return data.map((queueItem) => new QueueObject(queueItem));
  }
  return false;
};

function useMyQueue(
  queueType: QueueType,
): UseQueryResult<QueryReturnQueue, Error> {
  const queryKey = ['getQueue', queueType];
  return useQuery<QueryReturnQueue, Error>(
    queryKey,
    async () => {
      const query = getUrl(queueType);
      return getQueryQueue(query);
    },
  );
}
Then I have a component that displays the objects one at a time, and the user is asked to make a choice (for example, "swipe left" or "swipe right"). This queue only goes in one direction: the user sees a queueObject, processes it, and then moves on to the next one. The user cannot go back to a previous object, and cannot skip ahead.
So far, I've been using useContext() to track the index in the queue as state. However, I've been running into several bugs with this when the queue gets refreshed, which happens a lot, so I thought it would be easier to directly manipulate the data returned by useQuery().
How can I remove items as they are processed from the locally cached query results?
My current flow:
Fetch the queue data and generate objects with useQuery().
Display the queue objects one at a time using useContext().
Mutate the displayed object with useMutation() to modify useContext() and then show the next object in the cached data from useQuery().
My desired flow:
Fetch the queue data and generate objects with useQuery().
Mutate the displayed object with useMutation(), somehow removing the mutated item from the cached data from useQuery() (like what shift() does for arrays).
Sources I consulted
Best practices for editing data after useQuery call (couldn't find an answer relevant to my case)
Optimistic updates (don't know how to apply it to my case)
My desired flow:
Fetch the queue data and generate objects with useQuery().
Mutate the displayed object with useMutation(), somehow removing the mutated item from the cached data from useQuery() (like what shift() does for arrays).
This is the correct way to think about the data flow. But mutations shouldn't update the cache with data; they should invalidate existing cache data.
You have defined your query correctly. Now you simply have to instruct your mutation function (which should be making an API call that updates the records queue) to invalidate all existing queries for the data in the onSuccess handler.
e.g.
function useMyMutation(recordId, queueType) {
  const queryClient = useQueryClient();
  return useMutation({
    mutationFn: ({ id, swipeDirection }) =>
      asyncAPICall(`/swipes/${id}`, { swipeDirection }),
    onSuccess: () => queryClient.invalidateQueries(['getQueue', queueType]),
  });
}
As suggested by @Jakub Kotrs:
shift the first item from the list + only ever display the first
I was able to implement this in my useMutation() hook:
onMutate: async (queueObjectRemoved) => {
  const queryKey = ['getQueue', queueType];
  // Cancel any outgoing refetches
  // (so they don't overwrite our optimistic update).
  await queryClient.cancelQueries({ queryKey });
  if (data?.[0]?.id === queueObjectRemoved.data.id) {
    // Optimistically update the data by removing the first item.
    data.shift();
    queryClient.setQueryData(queryKey, () => data);
  } else {
    throw new Error('Unable to set queue!');
  }
},
onError: () => {
  const queryKey = ['getQueue', queueType];
  setShowErrorToast(true);
  queryClient.invalidateQueries(queryKey);
},
This way users can process all the items in the current queue before needing to refetch.
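Stripped of the react-query plumbing, the "shift the first item + only ever display the first" idea is just head-of-queue processing. A plain-JS sketch (the `queue`, `current`, and `processCurrent` names are made up for illustration):

```javascript
// The cached query data: the UI only ever renders the head of this queue.
const queue = [{ id: 1 }, { id: 2 }, { id: 3 }];
const processed = [];

function current() {
  return queue[0]; // always display the first remaining item
}

function processCurrent(choice) {
  const item = queue.shift(); // drop the head, like the optimistic update
  processed.push({ id: item.id, choice });
}

processCurrent('left');
processCurrent('right');
// current() is now { id: 3 }, with two items recorded as processed
```

Because the component only ever reads index 0, no separate index state (and no useContext bookkeeping) is needed; refetching simply replaces the queue.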

How to limit API calls per second with angular2

I have an API limit of 10 calls per second (though thousands per day). However, when I run this function (called once per style ID, more than 10 times per second):
getStyleByID(styleID: number): void {
  this._EdmundsAPIService.getStyleByID(styleID).subscribe(
    style => { this.style.push(style); },
    error => this.errorMessage = <any>error);
}
from this function (called only once, in ngOnInit):
getStylesWithoutYear(): void {
  this._EdmundsAPIService.getStylesWithoutYear(this.makeNiceName, this.modelNiceName, this.modelCategory)
    .subscribe(
      styles => {
        this.styles = styles;
        this.styles.years.forEach(year =>
          year.styles.forEach(style =>
            this.getStyleByID(style.id)));
        console.log(this.styles);
      },
      error => this.errorMessage = <any>error);
}
it makes more than 10 calls per second. How can I throttle or slow down these calls to avoid getting a 403 error?
I have a pretty neat solution where you combine two observables with the .zip() operator:
An observable emitting the requests.
Another observable emitting a value every .1 second.
You end up with one observable emitting requests every .1 second (= 10 requests per second).
Here's the code (JSBin):
// Stream of style ids you need to request (this will be throttled).
const styleIdsObs = new Rx.Subject<number>();
// Getting a style means pushing a new styleId to the stream of style ids.
const getStyleByID = (id) => styleIdsObs.next(id);

// This second observable will act as the "throttler".
// It emits one value every .1 second, so 10 values per second.
const intervalObs = Rx.Observable.interval(100);

Rx.Observable
  // Combine the 2 observables. The obs now emits a styleId every .1s.
  .zip(styleIdsObs, intervalObs, (styleId, i) => styleId)
  // Get the style, i.e. run the request.
  .mergeMap(styleId => this._EdmundsAPIService.getStyleByID(styleId))
  // Use the style.
  .subscribe(style => {
    console.log(style);
    this.style.push(style);
  });

// Launch a bunch of requests at once; they'll be throttled automatically.
for (let i = 0; i < 20; i++) {
  getStyleByID(i);
}
Hopefully you'll be able to translate my code to your own use case. Let me know if you have any questions.
UPDATE: Thanks to Adam, there's also a JSBin showing how to throttle the requests if they don't come in consistently (see the conversation in the comments). It uses the concatMap() operator instead of the zip() operator.
You could use a timed Observable that triggers every n milliseconds. I didn't adapt your code but this one shows how it would work:
someMethod() {
  // flatten your styles into an array:
  let stylesArray = ["style1", "style2", "style3"];
  // create a scheduled Observable that triggers each second
  let source = Observable.timer(1000, 1000);
  // use a counter to track when all styles are processed
  let counter = 0;
  let subscription = source.subscribe(x => {
    if (counter < stylesArray.length) {
      // call your API here
      counter++;
    } else {
      subscription.unsubscribe();
    }
  });
}
Here's a plunk that shows it in action.
While I haven't tested this code, I would try something along these lines.
Basically, I create a variable that keeps track of when the next request is allowed to be made. If that time has not yet passed when a new request comes in, setTimeout is used to run the function at the appropriate time. If the delayUntil value is in the past, the request can run immediately, and the timer is pushed back 100 ms from the current time.
delayUntil = Date.now();

getStylesWithoutYear(): void {
  this.delayRequest(() => {
    this._EdmundsAPIService.getStylesWithoutYear(this.makeNiceName, this.modelNiceName, this.modelCategory)
      .subscribe(
        styles => {
          this.styles = styles;
          this.styles.years.forEach(year =>
            year.styles.forEach(style =>
              this.getStyleByID(style.id)));
          console.log(this.styles);
        },
        error => this.errorMessage = <any>error);
  });
}

delayRequest(delayedFunction) {
  if (this.delayUntil > Date.now()) {
    setTimeout(delayedFunction, this.delayUntil - Date.now());
    this.delayUntil += 100;
  } else {
    delayedFunction();
    this.delayUntil = Date.now() + 100;
  }
}
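The delayUntil bookkeeping can be checked in isolation, without timers. In this sketch (`makeScheduler` is a name I made up), `schedule(t)` returns the timestamp at which a request arriving at time `t` is allowed to run, using the same two branches as `delayRequest` above:

```javascript
// Standalone sketch of the delayUntil arithmetic, with no timers.
function makeScheduler(startTime, interval = 100) {
  let delayUntil = startTime;
  return function schedule(now) {
    if (delayUntil > now) {
      const runAt = delayUntil; // must wait until the reserved slot
      delayUntil += interval;   // and push the next slot back
      return runAt;
    }
    delayUntil = now + interval; // free to run now; reserve the next slot
    return now;
  };
}

const schedule = makeScheduler(1000);
// Three requests arrive at the same instant; they get spaced 100 ms apart.
const times = [schedule(1000), schedule(1000), schedule(1000)];
// times → [1000, 1100, 1200]
```

One caveat with this approach: unlike the zip-based answer, it spaces requests by a fixed gap rather than guaranteeing at most 10 in any sliding one-second window, though for this API limit the two are equivalent.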

Confused about side effects / ContinueAfter

I have a scenario in which I download parent entities from an api and save them to a database. I then want, once all of the parents have been saved, to download and save their children.
I've seen (or misunderstood) some comments about how this is a side-effect as I will not be passing the result of the parent save operation to the save children operation. I simply want to begin it when the parents are saved.
Could someone explain to me the best way of doing this?
Perhaps try something like this:
Observable
.Create<int>(o =>
{
var parentIds = new int?[] { null };
return
Observable
.While(
() => parentIds.Any(),
parentIds
.ToObservable()
.Select(parentId => Save(parentId)))
.Finally(() => { /* update `parentIds` here with next level */ })
.Subscribe(o);
})
.Subscribe(x => { });
This is effectively doing a breadth-first traversal of all of the entities, saving them as it goes, but outputting a single observable that you can subscribe to.
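The level-by-level ordering that makes this breadth-first can be sketched in plain JavaScript for illustration; `save` and `childrenOf` below are hypothetical stand-ins for the API and database calls, not the Rx.NET code above:

```javascript
// Breadth-first save: each level of entities is saved in full before any
// of their children are fetched and saved.
async function saveBreadthFirst(roots, save, childrenOf) {
  const order = [];
  let level = roots;
  while (level.length > 0) {
    // Save the whole current level first...
    for (const entity of level) {
      await save(entity);
      order.push(entity.id);
    }
    // ...then gather the next level from the children of the saved entities.
    const next = [];
    for (const entity of level) {
      next.push(...(await childrenOf(entity)));
    }
    level = next;
  }
  return order;
}

// Fake two-level hierarchy for illustration.
const children = { 1: [{ id: 3 }], 2: [{ id: 4 }], 3: [], 4: [] };
const save = async () => {};
const childrenOf = async (e) => children[e.id];
// saveBreadthFirst([{ id: 1 }, { id: 2 }], save, childrenOf)
//   resolves to [1, 2, 3, 4]: both parents saved before any child
```

The `Finally` block in the Rx version plays the role of the "gather the next level" step here.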

RXJS : Idiomatic way to create an observable stream from a paged interface

I have a paged interface. Given a starting point, a request will produce a list of results and a continuation indicator.
I've created an observable by constructing and flat-mapping an observable that reads a page. The result of this observable contains both the data for the page and a value to continue with. I pluck the data and flat-map it to the subscriber, producing a stream of values.
To handle the paging, I've created a subject for the next-page values. It's seeded with an initial value; then, each time I receive a response with a valid next page, I push to the pages subject and trigger another read, until there is nothing more to read.
Is there a more idiomatic way of doing this?
function records(start = 'LATEST', limit = 1000) {
  let pages = new rx.Subject();
  this.connect(start)
    .subscribe(page => pages.onNext(page));
  let records = pages
    .flatMap(page => {
      return this.read(page, limit)
        .doOnNext(result => {
          let next = result.next;
          if (next === undefined) {
            pages.onCompleted();
          } else {
            pages.onNext(next);
          }
        });
    })
    .pluck('data')
    .flatMap(data => data);
  return records;
}
That's a reasonable way to do it. It has a couple of potential flaws in it (that may or may not impact you depending upon your use case):
You provide no way to observe any errors that occur in this.connect(start)
Your observable is effectively hot. If the caller does not immediately subscribe to the observable (perhaps they store it and subscribe later), then they'll miss the completion of this.connect(start) and the observable will appear to never produce anything.
You provide no way to unsubscribe from the initial connect call if the caller changes its mind and unsubscribes early. Not a real big deal, but usually when one constructs an observable, one should try to chain the disposables together so it all cleans up properly if the caller unsubscribes.
Here's a modified version:
It passes errors from this.connect to the observer.
It uses Observable.create to create a cold observable that only starts its business when the caller actually subscribes, so there is no chance of missing the initial page value and stalling the stream.
It combines the this.connect subscription disposable with the overall subscription disposable
Code:
function records(start = 'LATEST', limit = 1000) {
  return Rx.Observable.create(observer => {
    let pages = new Rx.Subject();
    let connectSub = new Rx.SingleAssignmentDisposable();
    let resultsSub = new Rx.SingleAssignmentDisposable();
    let sub = new Rx.CompositeDisposable(connectSub, resultsSub);

    // Make sure we subscribe to pages before we issue this.connect(),
    // just in case this.connect() finishes synchronously (possible if it caches values or something?)
    let results = pages
      .flatMap(page => this.read(page, limit))
      .doOnNext(r => r.next !== undefined ? pages.onNext(r.next) : pages.onCompleted())
      .flatMap(r => r.data);
    resultsSub.setDisposable(results.subscribe(observer));

    // now query the first page
    connectSub.setDisposable(this.connect(start)
      .subscribe(p => pages.onNext(p), e => observer.onError(e)));
    return sub;
  });
}
Note: I've not used the ES6 syntax before, so hopefully I didn't mess anything up here.
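As a point of comparison, the same read-until-no-next loop can be written without Rx using async/await. This sketch assumes `read(page)` resolves to `{ data, next }` as described in the question; the fake page source is invented for illustration:

```javascript
// Promise-based sketch of the paging loop: keep reading while a
// continuation value is present, flattening each page's data.
async function allRecords(read, start) {
  const out = [];
  let page = start;
  while (page !== undefined) {
    const result = await read(page);
    out.push(...result.data); // like pluck('data') + flatMap
    page = result.next;       // undefined means nothing left to read
  }
  return out;
}

// Fake three-page source for illustration.
const pages = {
  a: { data: [1, 2], next: 'b' },
  b: { data: [3], next: 'c' },
  c: { data: [4, 5], next: undefined },
};
const read = (p) => Promise.resolve(pages[p]);
// allRecords(read, 'a') resolves to [1, 2, 3, 4, 5]
```

The trade-off is that the promise version collects everything before returning, while the observable version streams records to the subscriber as each page arrives.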

.bind("move_node.jstree", ...) -> data.rslt.obj undefined. How to get node data?

I have custom functionality for check_move:
crrm: {
  move: {
    "check_move": function (m) {
      var p = this._get_parent(m.o);
      if (!p)
        return false;
      if (m.cr === -1)
        return false;
      return true;
    }
  }
},
This seems to work as intended.
I then try to bind to the "move_node" event to update my database:
.bind("move_node.jstree", function (event, data) {
  if (data.rslt.obj.attr("id") == "") {
    /* I omitted this snippet from this paste - it's really long and it basically does the same thing as below, just gets the node's id in a more complicated way */
  } else {
    controller.moveNode(data.rslt.obj.attr("id"), data.inst._get_parent(this).attr("id"), data.rslt.obj.attr("rel"));
  }
})
This results in an error: data.rslt.obj is undefined. I'm truly at a loss as to what to do; I've bound to multiple events before, and this is how I've done it.
How can I get node attributes etc after the move_node event, if data.rslt.obj doesn't work?
Oh, and controller.moveNode() is one of my own functions, so don't just copy-paste it if you're trying to learn jstree.
I found the answer to my own question pretty soon after asking about it (typical).
One must use data.rslt.o.attr("id") instead of data.rslt.obj. An odd inconsistency, if you ask me.
I would delete this post, but I think this could be a pretty common problem. If someone thinks otherwise, feel free to delete.
if (!p)
    return false;
if (m.cr === -1)
    return false;
return true;
Next time, try writing it like this (the !! coerces the node object to a boolean):
return !!p && m.cr !== -1;