Does invalidating order remove order from Item cache? - atg

My assumption is that when we invalidate an order by invoking orderImpl.invalidateOrder(), all it does is set the item containers to null, so that on the next invocation of (say) getCommerceItem(), the refreshOrder pipeline executes and reloads the items.
My question: even though we invalidate the order, when the refreshOrder pipeline executes, it loads the order from the item cache (if available). Does that mean the order is not actually removed from the cache by orderImpl.invalidateOrder()?

Speaking from ATG10.2 and lower versions...
There's (what I consider) a bug in ATG's caching / Order invalidation.
As #radimpe's comment suggests, once you call OrderImpl.invalidateOrder(), the next access to anything in the Order object should fall back to the uncached RepositoryItems.
However, when ATG's Repository caching removes an item from a Cache, it actually parks that item in a WeakReference, which means the very next lookup can pull the same item straight back out of the cache. So the invalidation is broken, IMHO.
Unless anyone knows better?

Related

If staleTime is bigger than cacheTime in React Query, what happens?

What happens if the staleTime is longer than the cacheTime, for example a staleTime of 10 minutes and a cacheTime of 5 minutes? I thought that even if the cacheTime has expired after 6 minutes, the staleTime is still valid, so no call to the server is made to get the data.
However, I found a post saying that once the cacheTime expires, the data (even if still fresh) is deleted. If staleTime is longer than cacheTime, does staleTime not work the way I expected?
staleTime and cacheTime are two completely different times that are not related at all. They serve two different purposes and can be two different times - each one can be smaller or larger than the other, it doesn't matter.
I'll copy the section from my blogpost here:
StaleTime: The duration until a query transitions from fresh to stale. As long as the query is fresh, data will always be read from the cache only - no network request will happen! If the query is stale (which per default is: instantly), you will still get data from the cache, but a background refetch can happen under certain conditions.
The first thing to notice here is that even if staleTime elapses, there will not be a request immediately. staleTime just says: Hey, the data that we have in the cache is no longer fresh, so we might refresh it in the background. But for a background refresh to happen, it needs a "trigger": A component mount, a window focus, a network reconnect.
CacheTime: The duration until inactive queries will be removed from the cache. This defaults to 5 minutes. Queries transition to the inactive state as soon as there are no observers registered, so when all components which use that query have unmounted.
cacheTime does nothing as long as you use a query. You can open a webpage and work on that page forever - React Query will not remove that data, because that would mean the screen would transition to a loading state. That would be bad. cacheTime is about garbage collection - removing data that is no longer in use. This is why we'll rename this option to gcTime in the next major version.
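The independence of the two timers can be modelled with two pure functions (a simplified sketch, not React Query's internals): one decides whether a trigger should refetch, the other decides whether an inactive query can be garbage-collected. Neither check consults the other timer.

```typescript
// Simplified model of the two timers (not React Query's internals).
type Query = {
  dataUpdatedAt: number;   // when data was last fetched
  observers: number;       // mounted components using this query
  lastInactiveAt?: number; // when the last observer unmounted
};

// staleTime: a trigger (mount / focus / reconnect) only refetches stale data.
function shouldRefetchOnTrigger(q: Query, staleTime: number, now: number): boolean {
  return now - q.dataUpdatedAt >= staleTime;
}

// cacheTime: only queries with no observers are ever garbage-collected.
function shouldGarbageCollect(q: Query, cacheTime: number, now: number): boolean {
  return (
    q.observers === 0 &&
    q.lastInactiveAt !== undefined &&
    now - q.lastInactiveAt >= cacheTime
  );
}
```

Note that an active query (observers > 0) is never collected no matter how large `now` gets, and a fresh query is never refetched no matter how many triggers fire - which is why the two durations can be set completely independently.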
From the comments:
It doesn't make sense to make the staleTime longer than the cacheTime.
Yes, it does, if you want. You can even have staleTime: Infinity, cacheTime: 0. This basically means: Never refetch data if you have cached data, but remove data from the cache as soon as I don't use it anymore.
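That staleTime: Infinity, cacheTime: 0 combination would look like this with @tanstack/react-query (option names as of v4; cacheTime becomes gcTime in v5):

```typescript
import { QueryClient } from "@tanstack/react-query";

// "Never refetch while cached, but drop the cache as soon as it's unused":
const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: Infinity, // cached data is always considered fresh
      cacheTime: 0,        // collect immediately once no observers remain
    },
  },
});
```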

Data syncing with pouchdb-based systems client-side: is there a workaround to the 'deleted' flag?

I'm planning on using rxdb + hasura/postgresql in the backend. I'm reading this rxdb page for example, which off the bat requires sync-able entities to have a deleted flag.
Q1 (main question)
Is there ANY point at which I can finally hard-delete these entities? What conditions would have to be met - e.g. could I simply use "older than X months" and then force my app to only ever display data less than X months old?
Is such a hard-delete, if possible, best carried out directly in the central db, since it will be the source of truth? Would there be any repercussions client-side that I'm not foreseeing/understanding?
I foresee the number of deleted entities growing rapidly in my app, and I don't want to have to store all this extra data forever.
Q2 (bonus / just curious)
What is the (algorithmic) basis for needing a 'deleted' flag? Is it just faster to check a flag than to check for the omission of an object from, say, a very large list? I apologize if it's kind of a stupid question :(
Ultimately it comes down to a decision that's informed by your particular business/product with regards to how long you want to keep deleted entities in your system. For some applications it's important to always keep a history of deleted things or even individual revisions to records stored as a kind of ledger or history. You'll have to make a judgement call as to how long you want to keep your deleted entities.
I'd recommend that you also add a deleted_at column if you haven't already and then you could easily leverage something like Hasura's new Scheduled Triggers functionality to run a recurring job that fully deletes records older than whatever your threshold is.
You could also leverage Hasura's permissions system to ensure that rows that have been deleted aren't returned to the client. There is documentation and there are examples for working with soft deletes in Hasura.
For your second question: it is definitely much faster to check a deleted flag on records than to diff the entire dataset looking for things that are now missing.
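The reason the flag is needed at all can be sketched with a toy replication model (hypothetical types, not RxDB's actual protocol): an incremental pull only transfers documents changed since a checkpoint, so a hard-deleted row would simply never appear in the delta, and the client could only discover the deletion by re-diffing its whole dataset.

```typescript
type Doc = { id: string; updatedAt: number; deleted: boolean };

// Incremental pull: the server only returns docs changed since the checkpoint.
function changesSince(server: Doc[], checkpoint: number): Doc[] {
  return server.filter((d) => d.updatedAt > checkpoint);
}

// With tombstones, the client learns about deletions from the delta alone.
function applyChanges(local: Map<string, Doc>, changes: Doc[]): void {
  for (const d of changes) {
    if (d.deleted) local.delete(d.id); // tombstone: remove locally
    else local.set(d.id, d);
  }
}
```

A scheduled job can then safely purge tombstones older than the app's display window (e.g. deleted rows with updatedAt older than X months), since no client is expected to sync from a checkpoint that far back.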

What does edge-based and level-based mean?

What does "level-based" and "edge-based" mean in general?
I read "In other words, the system's behavior is level-based rather than edge-based" from kubernetes documentation:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api-conventions.md
with Google, I only find:
http://www.keil.com/forum/9423/edge-based-vs-level-based-interrupt/
Thank you.
It also has a more general definition (at least the way we tend to use it in the documentation). A piece of logic is "level based" if it only depends on the current state. A piece of logic is "edge-based" if it depends on history/transitions in addition to the current state.
"Level based" components are more resilient because if they crash, they can come back up and just look at the current state. "Edge-based" components must store the history they rely on (or depend on some other component that stores it), so that when they come back up they can look at the current state and the history. Also, if there is some kind of temporary network partition and an edge-based component misses some of the updates, then it will compute the wrong output.
However, "level based" components are usually less efficient, because they may need to scan a lot of state in order to compute an output, rather than just reading deltas.
Many components are a mixture of the two.
Simple example: You want to build a component that reports the number of pods in READY state. A level-based implementation would fetch all the pods from etcd (or the API server) and count. An edge-based implementation would do that once at startup, and then just watch for pods entering and exiting READY state.
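That pod-counting example can be sketched as follows (hypothetical types, not the Kubernetes API):

```typescript
type Pod = { name: string; ready: boolean };

// Level-based: recount from the current state every time. Crash-safe,
// but has to scan everything.
function readyCountLevel(pods: Pod[]): number {
  return pods.filter((p) => p.ready).length;
}

// Edge-based: keep a running count and apply transitions. Cheaper per
// event, but a single missed transition corrupts the count until the
// next full rescan.
class ReadyCountEdge {
  count: number;
  constructor(initial: Pod[]) {
    this.count = readyCountLevel(initial); // one scan at startup
  }
  onTransition(becameReady: boolean): void {
    this.count += becameReady ? 1 : -1;
  }
}
```

If the edge-based watcher misses one "became ready" event (a network partition, a restart without replayed history), its count drifts; the level-based version just rescans and is correct again.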
I'd say they explained it pretty well on the site:
When a new version of an object is POSTed or PUT, the "spec" is updated and available immediately. Over time the system will work to bring the "status" into line with the "spec". The system will drive toward the most recent "spec" regardless of previous versions of that stanza. In other words, if a value is changed from 2 to 5 in one PUT and then back down to 3 in another PUT the system is not required to 'touch base' at 5 before changing the "status" to 3.
So from that statement, we know that "level-based" means a PUT request does not need to be individually satisfied if the end goal does not require it; the system is free to skip intermediate PUT requests when appropriate.
This makes me assume that "edge-based" systems require every PUT request to be satisfied, even if some requests could be skipped without altering the final result, since that would be the alternative to skipping requests.
I'm no RESTful developer (you can see by my account activity). I could not find any other source of information for these terms, so I'm going by the explanation they gave, which seems pretty straightforward.
The Kubernetes API doesn't store a history of all the changes made to an object. The controller that is responsible for that object cannot reliably observe each change; it may only observe the current state of the object.
The term "level" means the current state of the object, and the term "edge" means a transition to a new state. Controllers are "level-based" because they cannot reliably observe the transitions.
From the API conventions:
When a new version of an object is POSTed or PUT, the spec is updated and available immediately. Over time the system will work to bring the status into line with the spec. The system will drive toward the most recent spec regardless of previous versions of that stanza. For example, if a value is changed from 2 to 5 in one PUT and then back down to 3 in another PUT the system is not required to 'touch base' at 5 before changing the status to 3. In other words, the system's behavior is level-based rather than edge-based. This enables robust behavior in the presence of missed intermediate state changes.

What is the proper way to keep track of updates in progress using MongoDB?

I have a collection with a bunch of documents representing various items. Once in a while, I need to update item properties, but the update takes some time. When properties are updated, the item gets a new timestamp for when it was modified. If I run updates one at a time, then there is no problem. However, if I want to run multiple update processes simultaneously, it's possible that one process starts updating the item, but the next process still sees the item as needing an update and starts updating it as well.
One solution is to mark the item as soon as it is retrieved for update (findAndModify), but it seems wasteful to add a whole extra field to every document just to keep track of items currently being updated.
This should be a very common issue. Maybe there are some built-in functions that exist to address it? If not, is there a standard established method to deal with it?
I apologize if this has been addressed before, but I am having a hard time finding this information. I may just be using the wrong terms.
You could use db.currentOp() to check if an update is already in flight.
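The findAndModify approach mentioned in the question is the usual pattern: a single atomic "claim" operation that both selects an unclaimed item and marks it as in progress, so two workers can never grab the same document. Here is an in-memory sketch of the idea (hypothetical field names; single-threaded, so the find-then-set step stands in for the server-side atomicity):

```typescript
type Item = { id: string; needsUpdate: boolean; updating: boolean };

// Atomically claim one item that needs work and isn't already claimed.
function claimNext(items: Item[]): Item | undefined {
  const item = items.find((i) => i.needsUpdate && !i.updating);
  if (item) item.updating = true; // filter + mark happen as one step
  return item;
}

// Release the claim once the slow update has finished.
function finish(item: Item): void {
  item.needsUpdate = false;
  item.updating = false;
}
```

With the MongoDB Node driver this maps to something like collection.findOneAndUpdate({ needsUpdate: true, updating: { $ne: true } }, { $set: { updating: true, claimedAt: new Date() } }), where the server applies the filter and the update atomically. A claimedAt timestamp (a hypothetical extra field) also lets a cleanup job release claims left behind by crashed workers.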

Handling background changes with NSFetchedResultsController

I am having a few nagging issues with NSFetchedResultsController and CoreData, any of which I would be very grateful to get help on.
Issue 1 - Updates: I update my store on a background thread, which results in certain rows being deleted, inserted or updated. The changes are merged into the context on the main thread using the mergeChangesFromContextDidSaveNotification: method. Inserts and deletes are handled properly, but updates are not (e.g. the cell label is not updated with the change), although I have confirmed that the updates come through in the contextDidSaveNotification exactly like the inserts and deletes. My current workaround is to temporarily set the staleness interval of the context to 0, but this does not seem like the ideal solution.
Issue 2 - Deleting objects: My fetch batch size is 20. If an object in the first 20 rows is deleted by the background thread, everything works fine. But if the object is after the first 20 rows and the table is scrolled down, a "CoreData could not fulfill a fault" error is raised. I have tried resaving the context and re-performing the frc fetch - all to no avail. Note: In this scenario, the frc delegate method didChangeObject:... is not called for the delete - I assume this is because the object in question had not been faulted at that time (as it was outside the initial fetch range). But for some reason, the context still thinks the object is around, although it has been deleted from the store.
Issue 3 - Deleting sections : When the deletion of a row leads to the deletion of a section, I have gotten the "invalid number of rows in section???" error. I have worked around this by removing the "reloadSection" line from the NSFetchedResultsChangeMove: section and replacing it with "[tableView insertRowsAtIndexPaths...." This seems to work, but once again, I am not sure if this is the best solution.
Any help would be greatly appreciated. Thank you!
I think all your problems relate to the fetched results controller's cache.
Issue 1 is caused by the FRC using the cached objects (whose IDs have not changed). Adding or removing an object changes the IDs and forces an update of the cache, but changing the attributes of an object doesn't do so reliably.
Issue 2 is caused by the FRC checking for the object in cache. Most likely, the object has an unfaulted relationship that persist in the cache. When you delete it in the background the FRC tries to fault in the object at the other end of the relationship and cannot.
Issue 3: Same problem. The cache does not reflect the changes.
You really shouldn't use a FRC's cache when some object other than the FRC is modifying the data model. You have two options:
(Preferred) Don't use the cache. When creating the FRC set the cache property to nil.
Clear the cache anytime the background process alters the data model.
Of course, option two defeats the purpose of using the cache in the first place.
The cache is only useful if the data is largely static and/or the FRC manages the changes. In any other circumstance, you shouldn't use it, because the FRC needs to check the actual data model repeatedly to ensure that it has a current understanding of the data. It can't rely on the object copies it squirreled away, because another input may have changed the real objects.
My advice:
Detect the changes needed on the background thread
Post the changes to the main thread as a payload
Make the actual changes and save on the main thread (Managed Object Context on the main thread)
DO use the cache for the FRC; you'll get better performance
Quote from "Pro Core Data for iOS" by Michael Privat, Robert Warner:
"Core Data manages its caches intelligently so that if the results are updated by another call, the cache is removed if affected."