What happens if staleTime is bigger than cacheTime in react-query?

What happens if staleTime is longer than cacheTime, for example when staleTime is 10 minutes and cacheTime is 5 minutes? I thought that even after cacheTime expires at minute 6, the data would still be fresh because of staleTime, so no call to the server would be made to get the data.
However, I found a post saying that once cacheTime expires, the data - even if it is still fresh - will be deleted. If staleTime is longer than cacheTime, does staleTime not work the way I expected?

staleTime and cacheTime are two completely different times that are not related at all. They serve two different purposes and can be set independently - either one can be smaller or larger than the other; it doesn't matter.
I'll copy the section from my blog post here:
StaleTime: The duration until a query transitions from fresh to stale. As long as the query is fresh, data will always be read from the cache only - no network request will happen! If the query is stale (which per default is: instantly), you will still get data from the cache, but a background refetch can happen under certain conditions.
The first thing to notice here is that even if staleTime elapses, there will not be a request immediately. staleTime just says: Hey, the data that we have in the cache is no longer fresh, so we might refresh it in the background. But for a background refresh to happen, it needs a "trigger": A component mount, a window focus, a network reconnect.
CacheTime: The duration until inactive queries will be removed from the cache. This defaults to 5 minutes. Queries transition to the inactive state as soon as there are no observers registered, so when all components which use that query have unmounted.
cacheTime does nothing as long as you use a query. You can open a webpage and work on that page forever - React Query will not remove that data, because that would mean the screen would transition to a loading state. That would be bad. cacheTime is about garbage collection - removing data that is no longer in use. This is why we'll rename this option to gcTime in the next major version.
From the comments:
It doesn't make sense to make the staleTime longer than the cacheTime.
Yes, it does, if you want. You can even have staleTime: Infinity, cacheTime: 0. This basically means: never refetch data as long as it is in the cache, but remove data from the cache as soon as it is no longer in use.
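For illustration, here is a minimal sketch of how the two independent options might be set on a single query, using the v4 option names (the query key, fetchTodos and the durations are placeholders taken from the question):

const { data } = useQuery({
  queryKey: ['todos'],        // placeholder key
  queryFn: fetchTodos,        // placeholder fetch function
  staleTime: 10 * 60 * 1000,  // data counts as fresh for 10 minutes - no background refetch fires
  cacheTime: 5 * 60 * 1000,   // unused (inactive) data is garbage collected after 5 minutes
})

While any component uses this query, cacheTime never kicks in; only after the last observer unmounts does the 5-minute garbage-collection timer start.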

Related

Swift: How can I change backgroundFetchIntervalMinimum with BGAppRefreshTask

I am replacing the deprecated setMinimumBackgroundFetchInterval() with the new BGAppRefreshTask, but every example uses an actual time interval like this:
request.earliestBeginDate = Date(timeIntervalSinceNow: 15 * 60) // Fetch no earlier than 15 minutes from now
but my current code uses backgroundFetchIntervalMinimum, like this:
UIApplication.shared.setMinimumBackgroundFetchInterval(UIApplication.backgroundFetchIntervalMinimum)
so how can I apply backgroundFetchIntervalMinimum with BGAppRefreshTask?
Setting the request’s earliestBeginDate is equivalent to the old minimum fetch interval. Just use earliestBeginDate like you have seen in those other questions. And when your app runs a background fetch, you just schedule the next background fetch with its own earliestBeginDate at that point. That is how BGAppRefreshTask works.
If you are asking what is equivalent to UIApplication.backgroundFetchIntervalMinimum, you can just set earliestBeginDate to nil. The documentation tells us that:
Specify nil for no start delay.
Setting the property indicates that the background task shouldn’t start any earlier than this date. However, the system doesn’t guarantee launching the task at the specified date, but only that it won’t begin sooner.
The latter caveat applies to backgroundFetchIntervalMinimum, too. Background fetch, regardless of which mechanism you use, is at the discretion of the OS. It attempts to balance the app’s desire to have current data ready for the user the next time they launch the app with other considerations (such as preserving the device battery, avoiding excessive use of the cellular data plan, etc.).
The earliestBeginDate (like the older backgroundFetchIntervalMinimum) is merely a mechanism whereby you can give the OS additional hints to avoid redundant fetches. E.g., if your backend only updates a resource every twelve hours, there is no point in performing backend refreshes much more frequently than that.

Is this the best way to set CoreData entity variables to 0 every day at 24:00?

I currently have a Core Data entity called CalorieProgress, and I would like to reset all of its variables (calorieProgress, fatProgress) to 0 every day.
I am still quite new to SwiftUI, and the only method I have thought of so far is to add a date variable called created to this entity and, when the user opens the app, check whether that date was yesterday. If so, set all values to 0, and so on.
Is there a more efficient way to do this?
Thanks
Your design is good and simple, and a reasonable choice if you're getting started.
It can have trouble, however, when people move between time zones. It is even possible for people to move to previous days this way (most dramatically when they cross the date line). There is no single answer to how to handle that. Your app has to decide what it means by "today" when strange clock events happen. (Users also sometimes change their clock, and you want to behave "reasonably" in those cases, even if it means the data is wrong.)
Having built several of these, my suggestion is to just store raw, immutable, data records, and work out things like resetting values when you're running queries. For example, to work out how many calories someone has burned "today" doesn't require that you set any value to zero. You can just perform a query for records that occur after some time and sum their calories (you can even do this with aggregate queries directly on Core Data).
Core Data can be very fast, but if these queries become too slow, you can store daily aggregation records in Core Data. But keeping the original raw data means that those are really just caches and you can throw them away and recompute any time you need to.
Assuming that a new day starts at midnight (I've worked on apps where days started "when the user wakes up in the morning", which is much more complicated...), you should also be aware of significantTimeChangeNotification, which is posted at midnight (and a few other times). You can't use it to launch your app or do processing in the background, but it's very nice for updating your UI if the user has the app open.

jQuery Auto Complete - Performance Issue

We are using the plugin https://goodies.pixabay.com/jquery/tag-editor/demo.html for our autocomplete feature. We load the source with 3,500 items. Performance becomes very poor when the user starts typing, and the autocomplete shows the filtered results only after 6 to 8 seconds.
What alternate approaches can we take for up to 4,000 items for autocomplete?
Appreciate your response!
Are you using the minLength attribute of autocomplete?
On their homepage, they have something like this:
$('#my_textarea').tagEditor({ autocomplete: { 'source': '/url/', minLength: 3 } });
This effectively means that the user has to enter at least 3 characters before autocomplete kicks in. Doing so will usually reduce the number of suggestions to a more sane count (maybe 20-30).
However, this might not necessarily be your problem. First you should figure out whether it's your server that has trouble responding quickly (you can use your browser's developer tools to see how long the request takes to complete).
If the request takes 6-8 seconds, then you will have to optimize your server's code. On the other hand, if the response is quick but tagEditor needs a long time to build the suggestion list, then the problem is that it might not be optimized for so many suggestions. In that case, the ultimate solution would be to rewrite the autocompletion module yourself or patch the existing one to better scale to your needs.
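If the list is only around 4,000 items, another option worth considering is to skip the server round trip entirely and filter a local array in a source callback, capping how many suggestions get rendered. A rough sketch with jQuery UI's autocomplete options (allItems and the limit of 20 are made up for illustration, and it assumes tagEditor forwards these options to jQuery UI's autocomplete, as in the snippet above):

var allItems = [/* the ~4,000 strings, loaded once up front */];

$('#my_textarea').tagEditor({
  autocomplete: {
    minLength: 3,
    delay: 200,  // wait until the user pauses typing
    source: function (request, response) {
      // filter client-side and only render the first 20 matches
      var matches = $.ui.autocomplete.filter(allItems, request.term);
      response(matches.slice(0, 20));
    }
  }
});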
Do you go back to the server every time the user types in something to get the matching results?
I am using Spring with Ehcache, which loads all the items from the database and stores them in the server-side cache when the server starts. Whenever the user types, the cached data is used, which returns the results within a few milliseconds. Someone else recommended this to me. Below is an example of it:
http://www.mkyong.com/ehcache/ehcache-hello-world-example/
I am using the jQuery autocomplete feature with 2,500 items without any issue.
Here is a link where it is being used: http://www.all4sportsonline.com

Can I save a Postgres transaction and continue working with the db within it later?

I know about prepared transactions in Postgres, but it seems you can only commit or roll them back later. You cannot even view the transaction's db state before you've committed it. Is there any way to save a transaction for later use?
What I actually want to achieve is a preview (and correction) of some changes in the db (the changes are imports from a CSV file, so the user needs to see a preview before applying them). I want to make changes, add more changes later, see the full state of the db, and then apply it (that is, commit the transaction).
I cannot find a very good reference in docs, but I have a very strong feeling that the answer is: No, you cannot do that.
It would mean that when you "save" the transaction, the database would basically have to maintain all of its locks in place for an indefinite amount of time. Even if it was possible, it would mean horrible failure modes and trouble on all fronts.
For the pattern that you are describing, I would use two separate transactions. Import into a staging table and show that to the user (or import into the main table but mark the rows as "unapproved"). If the user approves, move or update those rows in another transaction.
You can always end up in a situation where the user simply leaves or crashes without clicking "OK" or "Cancel". If what you're describing were possible, you would end up with a hung transaction holding all these resources. In my proposed solution you end up with wasteful rows in the "staging" table, which you can still show to the user later or remove.
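A rough sketch of that two-transaction flow with node-postgres (the products/products_staging tables and their columns are made up for illustration):

import { Pool } from 'pg';

const pool = new Pool();

// Transaction 1: load the CSV rows into a staging table so the user can preview them.
async function importForPreview(rows: { name: string; price: number }[]) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    for (const row of rows) {
      await client.query(
        'INSERT INTO products_staging (name, price) VALUES ($1, $2)',
        [row.name, row.price]
      );
    }
    await client.query('COMMIT'); // the preview data is now visible and holds no locks
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}

// Transaction 2: when the user clicks "OK", move the staged rows into the real table.
async function applyImport() {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query(
      'INSERT INTO products (name, price) SELECT name, price FROM products_staging'
    );
    await client.query('DELETE FROM products_staging');
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}

If the user never comes back, the leftover staging rows can simply be cleaned up later.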
You may want to read up on the saga pattern. This is actually a very simple example of a well-known and well-researched problem.
To make the long story short, this pattern breaks down a long-running process like yours into smaller operations that are applied and persisted in some way in separate transactions. If any of them happens to fail (or does not occur as expected), you have compensating actions that usually undo what the steps executed so far have done (e.g. by throwing away stale/irrelevant data).
Here's a decent introduction:
https://blog.couchbase.com/saga-pattern-implement-business-transactions-using-microservices-part/
http://vasters.com/clemensv/2012/09/01/Sagas.aspx
This concept was formally introduced in the '80s, but it is alive and well and still relevant today.
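As a very rough illustration of the idea (not tied to any particular saga library; the Step type and runSaga are made up for this sketch), each step carries a compensating action that is executed if a later step fails:

type Step = {
  run: () => Promise<void>;        // the forward operation, committed in its own transaction
  compensate: () => Promise<void>; // undoes the forward operation, e.g. deletes staged rows
};

async function runSaga(steps: Step[]) {
  const done: Step[] = [];
  for (const step of steps) {
    try {
      await step.run();
      done.push(step);
    } catch (err) {
      // undo the already-completed steps in reverse order
      for (const s of done.reverse()) {
        await s.compensate();
      }
      throw err;
    }
  }
}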

Regenerating CouchDB views

Contrived example:
{
productName: 'Lost Series 67 DVD',
availableFrom: '19/May/2011',
availableTo: '19/Sep/2011'
}
View storeFront/currentlyAvailableProducts basically checks if current datetime is within availableFrom - availableTo and emits the doc.
I would like to force a view to regenerate at 1am every night, i.e. process/map all docs.
At first I had a simple Python script scheduled via crontab that touched each document, causing a new revision and a view update. However, since CouchDB is append-only this wasn't very efficient - i.e. loads of unnecessary IO and disk-space usage followed by compaction; very wasteful of resources on all fronts.
My second solution was to push the view definition again via couchapp push; however, this meant the view was unavailable (or partially unavailable) for several minutes, which was also unacceptable.
Are there any other solutions?
Will's answer is great; but just to get the consensus viewpoint represented here:
Keep one view, and query it differently every day
Determine your time-slice size, for example one day.
Next, for each document, you emit once for every time slice (day) that it is available. So if a document is available from 19 May to 21 May (inclusive), your emit keys would be:
"2011-05-19"
"2011-05-20"
"2011-05-21"
Once that is emitted for every document, to find the docs available on a certain day (e.g. today), just query the view with ?key="2011-05-18".
You never have to update or re-run your views.
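For example, the map function for such a view could look roughly like this (a sketch that assumes availableFrom/availableTo are stored as ISO "YYYY-MM-DD" strings; the "19/May/2011" format from the question would need its own parsing):

// map function of a view such as storeFront/availableByDay (the view name is made up)
function (doc) {
  if (!doc.availableFrom || !doc.availableTo) return;
  var day = new Date(doc.availableFrom + 'T00:00:00Z');
  var end = new Date(doc.availableTo + 'T00:00:00Z');
  while (day <= end) {
    emit(day.toISOString().slice(0, 10), null); // one row per day the product is available
    day.setUTCDate(day.getUTCDate() + 1);
  }
}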
If you must never change your query URL for some reason, you might be able to use a _show function to 302 (temporary) redirect to today's correct query.
So your view is not being updated automatically I take it?
New and changed documents are not being added on the fly?
Oh I see, you're cheating. You're using "out of document" information (i.e. the current date) during view creation.
There's no view renaming, but if you were desperate you could use url rewriting.
Simply create a design document "each day": /db/_design/today05172011
Then use some url rewriting to change: GET /db/_design/today/_view/yourview
to: GET /db/_design/today05172011/_view/yourview
Create the view at 11pm server time (tweak it so that "now" is "tomorrow", or whatever).
Then add some more clean up code to later delete the older views.
This way your view builds each night as you like.
Obviously you'll need to front Couch with some other web server/proxy to pull this off.
It's elegant, and inelegant, at the same time.