React-Query: Is there an alternative way to update useQuery staleTime besides setQueryDefaults? - react-query

I'm trying to do this:
function useReactiveStaleTimeQuery() {
const [staleTime, setStaleTime] = useState(1000);
const { data } = useQuery(queryKey, queryFn, { staleTime, notifyOnChangeProps: ["tracked"] })
}
But setQueryDefaults feels weird:
function useReactiveStaleTimeQuery() {
const queryClient = useQueryClient();
const [staleTime, setStaleTime] = useState(1000);
useEffect(() => { queryClient.setQueryDefaults(queryKey, { staleTime }); }, [queryClient, staleTime]);
const { data } = useQuery(queryKey, queryFn, { notifyOnChangeProps: ["data"]});
return data;
}
It's possible that only the queryKey and/or queryFn are watched by useQuery, but I'm not entirely sure (I don't have time to go through the source code). What I don't understand is how it's possible that the enabled option is watched but staleTime is ignored. The docs don't say anything about using staleTime as a thunk like initialDataUpdatedAt.

but what I don't understand is how is it possible that the enabled option is watched but staleTime is ignored.
so this usually works:
function useReactiveStaleTimeQuery() {
const queryClient = useQueryClient();
const [staleTime, setStaleTime] = useState(1000);
const { data } = useQuery(
queryKey,
queryFn,
{ staleTime }
)
return [data, setStaleTime];
}
Once you update the staleTime, the query will notice it. However, unlike enabled (which works on a per-observer level), there is only one query in the cache, so there can only be one staleTime.
This is important if you have two observers (use the custom hook twice). You can have one enabled and one disabled query, but you cannot have staleTime: 1000 and staleTime: 500 on the same query. As far as I know, the shorter one "wins".
So yeah, controlling the staleTime with state works, but I think it would be better to store it globally rather than in local state at the per-hook level.
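To sketch that "store it globally" idea (illustrative only, not react-query API — the store name and shape here are assumptions): a tiny module-level store lets every instance of the hook share one staleTime value, which each instance then passes to useQuery.

```javascript
// Hypothetical module-level store so all hook instances share one staleTime.
// Not part of react-query; the names are illustrative.
function createStaleTimeStore(initial) {
  let value = initial;
  const listeners = new Set();
  return {
    get: () => value,
    set: (next) => {
      value = next;
      listeners.forEach((fn) => fn(next)); // notify every subscribed hook
    },
    subscribe: (fn) => {
      listeners.add(fn);
      return () => listeners.delete(fn); // unsubscribe handle
    },
  };
}

// Sketch of the hook side: subscribe in an effect, mirror into state, and
// hand the shared value to useQuery so every observer agrees on staleTime.
//
// const staleTimeStore = createStaleTimeStore(1000);
// function useReactiveStaleTimeQuery() {
//   const [staleTime, setStaleTime] = useState(staleTimeStore.get());
//   useEffect(() => staleTimeStore.subscribe(setStaleTime), []);
//   return useQuery(queryKey, queryFn, { staleTime });
// }
```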

Related

What is the proper way to run fetch calls which use reactive components from a store?

I am getting two reactive values I need from a store and using them in my fetch calls. These fetch calls need to rerun whenever the store values change. I am able to make this work; however, when I reload the page my app crashes because no values are coming from the store. I can make it work if I disable SSR in the +page.js file.
I also believe it is relevant to mention that I am using a relative URL (/api) for the fetch call because I have a proxy server to bypass CORS.
What is the proper way to get this data by rerunning the fetch calls with reactive values from a store, without disabling SSR? Or is disabling SSR the best/only solution?
+page.svelte
<script>
import { dateStore, shiftStore } from '../../../lib/store';
$: shift = $shiftStore
$: date = $dateStore
/**
* @type {any[]}
*/
export let comments = []
/**
* @type {any[]}
*/
let areas = []
//console.log(date)
async function getComments() {
const response = await fetch(`/api/${date.toISOString().split('T')[0]}/${shift}/1`)
comments = await response.json()
console.log(comments)
}
async function getAreas() {
const response = await fetch(`/api/api/TurnReportArea/1/${date.toISOString().split('T')[0]}/${shift}`)
areas = await response.json()
console.log(areas)
}
// both of these rerun the function if date or shift changes
// (the && form only calls getAreas when both values are truthy)
$: date && shift && getAreas()
$: date, shift, getComments()
</script>
I tried to use the +page.js file for my fetch calls; however, I cannot use the reactive store values there. Below, the date variable is typed as a 'Writable(Date)'. When I try to add the $ in front of the value, let date = $dateStore, I get the error 'Cannot find name $dateStore'. If I put the $ in the fetch call I get the error 'Cannot find $date'. Even if I were able to make this work, I do not understand how my page would know to rerender when these fetch calls ran, so I do not think this is the solution. As I mentioned, the only fix I have found is to disable SSR in +page.js, which I do not think is the best way to solve this issue.
import { dateStore, shiftStore } from "../../../lib/store"
export const load = async ({ }) => {
let shift = shiftStore
let date = dateStore
const getComments = async() => {
const commentRes = await fetch(`/api/${date.toISOString().split('T')[0]}/${shift}/1`)
const comments = await commentRes.json()
console.log(comments)
}
const getAreas = async () => {
const areasRes = await fetch(`/api/api/TurnReportArea/1/${date.toISOString().split('T')[0]}/${shift}`)
const areas = await areasRes.json()
console.log(areas)
}
return {
comments: getComments(),
areas: getAreas()
}
}
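For context on why the `$` prefix fails in +page.js: `$store` is Svelte compiler sugar that only works inside .svelte files; everywhere else a store is read by subscribing, which is all that svelte/store's `get` helper does. A simplified toy re-implementation, purely for illustration (not the real library code):

```javascript
// Toy sketch of svelte/store's writable: holds a value and pushes it to
// subscribers, including immediately on subscribe (as Svelte stores do).
function writable(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    set(next) {
      value = next;
      subscribers.forEach((fn) => fn(value));
    },
    subscribe(fn) {
      fn(value); // push the current value right away
      subscribers.add(fn);
      return () => subscribers.delete(fn);
    },
  };
}

// `get` reads a store once by subscribing and immediately unsubscribing.
function get(store) {
  let current;
  const unsubscribe = store.subscribe((v) => (current = v));
  unsubscribe();
  return current;
}
```

In a real load function you would import `get` from 'svelte/store' and call `get(dateStore)` to read the current value, though note that load still won't rerun automatically when the store later changes.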

replaceReducer on @reduxjs/toolkit

Although I'm a newbie to Redux as well as to RTK, I'm starting my Redux project with RTK, much as you'd start with Spring Boot rather than plain Spring these days.
I want to dynamically inject reducers on demand with Redux Toolkit (RTK). I realized that I need to keep track of the current reducers to make this work. I expected my RTK store to hold a reference to them, but unfortunately it doesn't seem to have such a property.
I found this module that seems to do the job, but it seems I'd have to go back to the days before RTK existed to make it work.
import {createStore, createReducer, Slice, Reducer, AnyAction, combineReducers} from "@reduxjs/toolkit";
const Store = createStore<any, any, any, any>(createReducer({}, () => {}));
const reducers: {
[key: string]: Reducer<any, AnyAction>;
} = {};
export const injectReducer = (slice: Slice) => {
reducers[slice.name] = slice.reducer;
Store.replaceReducer(combineReducers(reducers));
};
What's more (maybe I just don't know the way), the type definitions go insane.
Is there any way to make this?
I had the same issue, and finally managed to fix it with a few lines.
First, I created the store:
const staticReducers = {
counter: counterReducerSlice,
};
export const store = configureStore({
reducer: staticReducers,
// logger and monitorReducerEnhancer are assumed to be defined elsewhere
middleware: getDefaultMiddleware => [logger, ...getDefaultMiddleware()],
enhancers: [monitorReducerEnhancer],
});
then I set up a registry for the async reducers:
store.asyncReducers = {};
store.injectReducer = (key, asyncReducer) => {
store.asyncReducers[key] = asyncReducer;
store.replaceReducer(createReducer(store.asyncReducers));
};
function createReducer(asyncReducers) {
return combineReducers({
...staticReducers,
...asyncReducers
});
}
finally, in my code, every time I want to add a new reducer to my store, I call store.injectReducer:
store.injectReducer('reducerKey', theReducerSlice);
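To see why this works, the replace mechanism itself can be sketched without Redux at all. This is a dependency-free toy stand-in for createStore/combineReducers (not RTK's actual implementation), showing how injecting a reducer and swapping the root reducer preserves existing state:

```javascript
// Minimal combineReducers: each key's reducer manages its own slice.
function combineReducers(reducers) {
  return (state = {}, action) => {
    const next = {};
    for (const key of Object.keys(reducers)) {
      next[key] = reducers[key](state[key], action);
    }
    return next;
  };
}

// Minimal store with the one method that matters here: replaceReducer.
function createStore(reducer) {
  let current = reducer;
  let state = current(undefined, { type: '@@INIT' });
  return {
    getState: () => state,
    dispatch: (action) => { state = current(state, action); },
    replaceReducer: (next) => {
      current = next;
      // Re-run so newly injected slices get their initial state.
      state = current(state, { type: '@@REPLACE' });
    },
  };
}

// Static reducers are known up front; async ones are injected later.
const staticReducers = { counter: (s = 0, a) => (a.type === 'inc' ? s + 1 : s) };
const asyncReducers = {};
const store = createStore(combineReducers(staticReducers));

function injectReducer(key, reducer) {
  asyncReducers[key] = reducer;
  store.replaceReducer(combineReducers({ ...staticReducers, ...asyncReducers }));
}
```

Existing state for the already-known slices survives the swap because the new combined reducer simply keeps passing it through.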

Sails.js 1.0 helpers chaining as Promises

I'm new to Sails.js and looking to develop a new application using it. In this application, I want to respond to a POST request as quickly as possible, then handle a number of tasks with the payload asynchronously. Ideally I'd have a helper for each step of the tasks I want to carry out on the payload, chaining them all asynchronously in the action. I've been trawling through the docs and can't seem to find a way to do this.
Is this the right way to approach this issue (if so how/can you point me to docs) or are there alternative ways to handle this issue that I have overlooked?
Thanks
With ES6, you can use helpers either with async/await or as promises.
const temp1 = await sails.helpers.stepone();
const temp2 = await sails.helpers.steptwo( temp1 );
let result = await sails.helpers.stepthree( temp2 );
// use result here
OR
sails.helpers.stepone()
  .then(temp1 => sails.helpers.steptwo(temp1))
  .then(temp2 => sails.helpers.stepthree(temp2))
  .then(result => {
    // use result here
  });
Just set up your service methods as promises and resolve early. You can import bluebird, for example, to accomplish this.
In your controller:
myPostEndpoint: (req, res) => {
  return MyProcessorService.initProcessing(req.body).then(data => res.json(data));
}
And in your service MyProcessorService:
var Promise = require('bluebird');
//... other init code
module.exports = {
initProcessing: data => {
//do some validation...
// kick off the chain without awaiting it...
MyProcessorService.step1(data)
.then(MyProcessorService.step2)
.then(MyProcessorService.step3);//and so on....
// ...then resolve immediately so the controller can respond right away
return Promise.resolve({ status: 'processing'});
},
step1: dataFromInit => {
//do stuff and resolve for step2
},
step2: dataFromStep1 => {
//do stuff and resolve for step3
},
step3: dataFromStep2 => {
//do stuff and resolve
},
//and so on
}
You could also set up a worker queue with something like Bull and Redis to send off jobs to and run in a WorkerService or separate worker app.
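The worker-queue idea can be sketched without Bull or Redis as a minimal in-memory queue (illustrative only — a real deployment would use Bull or similar so jobs survive restarts and can run in a separate worker process): the endpoint enqueues and responds immediately while jobs drain in the background.

```javascript
// Toy in-memory job queue: add() returns a receipt right away; a single
// drain loop works through pending jobs one at a time.
function createQueue(processJob) {
  const jobs = [];
  let draining = false;
  async function drain() {
    if (draining) return; // only one drain loop at a time
    draining = true;
    while (jobs.length > 0) {
      await processJob(jobs.shift());
    }
    draining = false;
  }
  return {
    add(job) {
      jobs.push(job);
      drain(); // fire and forget; the caller never waits on the job
      return { status: 'queued' };
    },
  };
}
```

With Bull, `queue.add(payload)` in the controller and `queue.process(handler)` in a worker play the same two roles, with Redis persisting the jobs in between.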

How can I render data after 10 seconds of actual database update reactively in a particular template in Meteor.js?

I'm currently working on a simple game called Bingo. I've made a spectate option in which I need to broadcast the game, not in real time but with a 10-second delay. How can I do that easily?
The idea to use observe routine seems good but there are at least a couple of ways this can be implemented. One way is to delay the subscription itself. Here's a working example:
import { Meteor } from 'meteor/meteor';
import { TheCollection } from '/imports/collections.js';
Meteor.publish('delayed', function (delay) {
let isStopped = false;
const handle = TheCollection.find({}).observeChanges({
added: (id, fields) => {
Meteor.setTimeout(() => {
if (!isStopped) {
this.added(TheCollection._name, id, fields);
}
}, delay);
},
changed: (id, fields) => {
Meteor.setTimeout(() => {
if (!isStopped) {
this.changed(TheCollection._name, id, fields);
}
}, delay);
},
removed: (id) => {
Meteor.setTimeout(() => {
if (!isStopped) {
this.removed(TheCollection._name, id);
}
}, delay);
}
});
this.onStop(() => {
isStopped = true;
handle.stop();
});
this.ready();
});
Another way would be to create a local ProxyCollection that is only used for rendering purpose. The data would be copied from TheCollection to ProxyCollection with some delay using the same "observe technique" as in the subscription case.
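That ProxyCollection idea can be sketched in plain JavaScript (Meteor specifics stripped out; the names here are illustrative): an observer mirrors every change into a local target after a delay, and the template renders only from the target. The scheduler is injectable so the delay source can be swapped (Meteor.setTimeout in production, an immediate call in tests).

```javascript
// Toy delayed mirror: each observer callback applies its change to the
// target map only after `delay` has elapsed.
function createDelayedMirror(target, delay, schedule = setTimeout) {
  return {
    added: (id, fields) => schedule(() => target.set(id, fields), delay),
    changed: (id, fields) =>
      // merge changed fields into the existing document
      schedule(() => target.set(id, { ...target.get(id), ...fields }), delay),
    removed: (id) => schedule(() => target.delete(id), delay),
  };
}
```

In Meteor, these observer callbacks would come from TheCollection.find({}).observeChanges(...) and the target would be a client-only collection such as new Mongo.Collection(null).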
In both scenarios you will need to handle some edge cases, for example:
Should the data be delayed on the initial load?
Should the update be delayed if document is removed?
Should the update be delayed for the user that initialized the change?
They can all be solved by utilizing and adjusting the technique presented above. I believe though, they're outside the scope of this question.
EDIT
To prevent delays on the initial data load you can update the above code as follows:
let initializing = true;
const handle = TheCollection.find({}).observeChanges({
added: (id, fields) => {
if (initializing) {
this.added(TheCollection._name, id, fields);
} else {
Meteor.setTimeout(() => {
if (!isStopped) {
this.added(TheCollection._name, id, fields);
}
}, delay);
}
},
// ...
});
// ...
this.ready();
initializing = false;
At first, it may not be obvious why this works, but everything here is executed within a fiber. The observeChanges routine "blocks": it first calls added for each document of the entire initial dataset, and only then does it proceed to the next part of your publish method body.
Be aware that, because of this behavior, a subscription may be stopped before the initial data set is processed, and thus before the onStop callback is even defined. In this particular case it shouldn't hurt, but sometimes it can be problematic.
You can use .observe(). It will tell you when added/changed events fire, and you can do whatever you want in those events; see the Meteor documentation for observe.
CollectionName.find().observe({
added: function (document) {
//do something here, like delaying the update
},
changed: function (document) {
//do something here, like delaying the update
},
});

How can I leverage reactive extensions to do caching, without a subject?

I want to be able to fetch data from an external Api for a specific request, but when that data is returned, also make it available in the cache, to represent the current state of the application.
This solution seems to work:
var Rx = require('rx');
var cached_todos = new Rx.ReplaySubject(1);
var api = {
refresh_and_get_todos: function() {
var fetch_todos = Rx.Observable.fromCallback($.get);
return fetch_todos('example.com/todos')
.tap(todos => cached_todos.onNext(todos));
},
current_todos: function() {
return cached_todos;
}
};
But - apparently Subjects are bad practice in Rx, since they don't really follow functional reactive programming.
What is the right way to do this in a functional reactive programming way?
It is recommended not to use Subjects because there is a tendency to abuse them to inject side-effects as you have done. They are perfectly valid to use as ways of pushing values into a stream, however their scope should be tightly constrained to avoid bleeding state into other areas of code.
Here is the first refactoring, notice that you can create the source beforehand and then your api code is just wrapping it up in a neat little bow:
var api = (function() {
var fetch_todos = Rx.Observable.fromCallback($.get),
source = new Rx.Subject(),
cached_todos = source
.flatMapLatest(function() {
return fetch_todos('example.com/todos');
})
.replay(null, 1)
.refCount();
return {
refresh: function() {
source.onNext(null);
},
current_todos: function() {
return cached_todos;
}
};
})();
The above is alright, it maintains your current interface and side-effects and state have been contained, but we can do better than that. We can create either an extension method or a static method that accepts an Observable. We can then simplify even further to something along the lines of:
//Executes the function and caches the last result every time the source emits
Rx.Observable.prototype.withCache = function(fn, count) {
  return this.flatMapLatest(function() {
    return fn();
  })
  .replay(null, count || 1)
  .refCount();
};
//Later we would use it like so:
var fetchTodos = Rx.Observable.fromCallback($.get);
var todos = Rx.Observable.fromEvent(/*Button click or whatever*/)
  .withCache(
    function() { return fetchTodos('example.com/todos'); },
    1 /*Cache size*/);
todos.subscribe(/*Update state*/);