We're using react-admin with a jsonServerRestClient from ra-data-json-server. We've encapsulated it to handle GET_MANY a little differently, so we already have a touchpoint there to control what goes to the server.
Going forward, we would like to be able to send a delta of changes when an item is modified, as some items (in our case, groups) have 4k+ members in them.
While we could save the raw objects to local storage when they come into our REST client and use that info to create a delta upon save, the state and reducers should already have that info somewhere else, right?
So what files should I look at to see how to modify what gets sent to the REST client during modify events?
Figured it out: react-admin is actually nice enough to give us the previous version of whatever data it wants to save. When you're writing a restClient, you are given type, resource, and params. Inside params are both params.data and params.previousData, so you can do your deltas there by comparing the two.
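For anyone after the shape of that, here's a minimal sketch in TypeScript; innerClient stands in for the wrapped jsonServerRestClient and shallowDiff is a home-made helper, neither of which is part of react-admin:

```typescript
// innerClient is the wrapped jsonServerRestClient (declared, not shown).
declare const innerClient: (type: string, resource: string, params: any) => Promise<any>;

// Keep only top-level fields whose values changed (a shallow comparison;
// nested structures such as big member lists may need a deeper diff).
const shallowDiff = (next: Record<string, any>, prev: Record<string, any>) =>
  Object.fromEntries(
    Object.entries(next).filter(([key, value]) => prev[key] !== value)
  );

const restClient = (type: string, resource: string, params: any) => {
  if (type === 'UPDATE') {
    // Ship the id plus only the fields that actually changed.
    const data = { id: params.data.id, ...shallowDiff(params.data, params.previousData) };
    return innerClient(type, resource, { ...params, data });
  }
  return innerClient(type, resource, params);
};
```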
UPDATE: pate's comment made me realise what I was doing wrong, so this is resolved for me now. Thanks!
I return JSON from the server that contains a list of objects, each representing a digital post-it note (text, position on page, database id, etc.). I cache this in the service worker during fetch as normal.
Currently, as users update/add post-its, I update a local copy of the JSON in the JavaScript as well as sending the info to the server.
What I want is for the client JS to also save the new JSON to the application cache as users update/add items, and then on page load to use a stale-while-revalidate pattern so they only need a refresh if another user has changed their data. Otherwise they get the cached JSON, which will already contain their most recent changes.
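In service-worker terms, the page-load half of that would be roughly this (a minimal sketch; CACHE_VERSION and the /api/notes endpoint are just stand-ins for my real names):

```typescript
// sw.js -- stale-while-revalidate for the notes JSON.
declare const CACHE_VERSION: string;

self.addEventListener('fetch', (event: any) => {
  if (!event.request.url.endsWith('/api/notes')) return;
  event.respondWith(
    caches.open(CACHE_VERSION).then(async (cache) => {
      const cached = await cache.match(event.request);
      const refresh = fetch(event.request).then((response) => {
        cache.put(event.request, response.clone()); // keep the cache current
        return response;
      });
      // Serve the cached copy immediately; fall back to the network.
      return cached ?? refresh;
    })
  );
});
```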
As the application cache is versioned and the version number is stored in the sw.js file, I'm currently sending a message (using MessageChannel) from the client to the SW to get the version number, so the client can then put the JSON into the right cache. The only other options I can think of are to make the application cache version a global variable somewhere other than sw.js, or to send the entire JSON in the message to the SW and let it put the updated JSON into the cache.
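For concreteness, the message-based variant looks something like this (sketch only; the message shape and names are made up):

```typescript
// page.ts -- client side: hand the fresh JSON to the SW rather than asking
// it for the version number.
declare const updatedNotes: unknown; // the in-memory copy of the post-it JSON

navigator.serviceWorker.controller?.postMessage({
  type: 'CACHE_UPDATE',
  url: '/api/notes',
  body: JSON.stringify(updatedNotes),
});

// sw.js -- the SW already knows CACHE_VERSION, so it can do the put itself.
declare const CACHE_VERSION: string;

self.addEventListener('message', (event: any) => {
  if (event.data?.type !== 'CACHE_UPDATE') return;
  event.waitUntil(
    caches.open(CACHE_VERSION).then((cache) =>
      cache.put(
        event.data.url,
        new Response(event.data.body, {
          headers: { 'Content-Type': 'application/json' },
        })
      )
    )
  );
});
```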
Either way, these all seem like workarounds/anti-patterns, and I can't find a better way for the client to update the application cache.
The other reason I want to do this is so that I can eventually move to an offline mode of working, using the Background Sync API to handle adds/updates etc., so I want the cached JSON to be as up to date as possible.
Am I missing a trick somewhere?
Does anyone have a best-practice pattern for CQRS with PUT/POST, specifically where the client does a GET for the updated resource after it has sent a command/event? Would you allow/require the client to keep a local copy of the updated resource and send a last-updated timestamp in the GET response? Or ensure that the GET includes the unprocessed commands? Of course, if the same resource is retrieved by another client, it may or may not get the updated resource.
What's worked best for you?
Would you contend with the added complexity of the GET also checking the command queue?
Does anyone have a best-practice pattern for CQRS with PUT/POST, specifically where the client does a GET for the updated resource after it has sent a command/event...
How would you do it on a web site?
Normally, you would do a GET to load the resource, and that would give you version 0, possibly with some validators in the metadata to let you know what version of the representation you received. If you tried to GET the resource again, the generic components could see from the headers that your copy was up to date, and would send you back a message to that effect (304 Not Modified).
When you POST to that resource, a successful response lets all of the intermediate components know that the previously cached copy of the resource has been invalidated, so the next GET request will retrieve a fresh copy, with all of the modifications.
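In fetch terms, that round trip looks roughly like this (a sketch; /orders/42 is a made-up resource):

```typescript
let etag: string | undefined;
let cached: unknown;

async function getResource(): Promise<unknown> {
  const response = await fetch('/orders/42', {
    // Send the validator from the last response, if we have one.
    headers: etag ? { 'If-None-Match': etag } : {},
  });
  if (response.status === 304) return cached; // our copy is still current
  etag = response.headers.get('ETag') ?? undefined;
  cached = await response.json();
  return cached;
}
```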
This all works great, right up to the point where, in a CQRS setting, the read requests follow a different path than the write requests. The read side will update itself eventually, so the trick is how to avoid returning a stale representation to the client that knows it should have changed.
The analogy you are looking for is 202 Accepted; we want the write side to let the client know that the operation succeeded, and that there is a resource that can be used to get the change.
Which is to say, the write side returns a response indicating that the command was successful, and provides a link that includes data that the read model can use to determine if its copy is up to date.
The client's job is to follow the links, just like everywhere else in REST.
The link provided will of course be some safe operation, pointing at the read model. The read model compares the information in the link to the metadata of the currently available representation; if the read model's copy is up to date, it returns that; otherwise it returns a message telling the client to retry (presumably after some interval).
In short, we use polling on the read model, waiting for it to catch up.
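A sketch of the client's side of that polling, assuming the write side answered with a link such as /orders/42?minVersion=7 and the read model returns 200 only once it has caught up (the status codes and Retry-After handling here are illustrative, not prescribed):

```typescript
async function followWriteLink(location: string, attempts = 10): Promise<unknown> {
  for (let i = 0; i < attempts; i++) {
    const response = await fetch(location);
    if (response.ok) return response.json(); // read model has caught up
    // Stale read model: wait the suggested interval, then try again.
    const seconds = Number(response.headers.get('Retry-After') ?? '1');
    await new Promise((resolve) => setTimeout(resolve, seconds * 1000));
  }
  throw new Error('read model did not catch up in time');
}
```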
I have an app that makes extensive use of the Editor Framework. Right now I'm at the point where I want to add a new feature: if a user edits an entity, I'd like to record which changes were made and store them in a separate datastore entity. This requires knowing if a field was changed, the field name, and the value it was changed to.
This is what I'd like to implement:
1. App calls edit(bean);
2. User makes changes, then calls flush(), and the data gets sent back to the server.
3. In the server handler, the changes from the bean are sent to processChanges(List<String> paths), which then creates and stores a record that "field foo" was changed to "bar", and so on.
4. The entity is saved, overwriting the existing one.
I use GWTP and currently use the RPC Command Pattern. I've read a bit about RequestFactory, and as I understand it, one of its main benefits is that it only sends the changed fields (the "deltas") back to the server to minimise the payload, so I'm wondering if RequestFactory would be a better fit for my app?
Apologies - I've been reading through the GWT docs and Javadocs for the Editor Framework and RequestFactory, but I'm still pretty confused. RequestFactoryEditorDriver.getPaths() seems like it might be what I need, but any advice or pointers would be greatly appreciated.
I could probably watch for changes client-side but that seems like a bad idea.
I believe you could do that using an EditorVisitor, similar to the DirtCollector visitor used internally by the Editor framework (have a look at the PathCollector for how to collect paths in a visitor).
I would start by visiting the hierarchy to collect the initial values just after the call to edit() (this is done already by the DirtCollector internally, but there's no way to access its results, and it only collects leaf values anyway).
Then you could call flush() and see whether there are errors, and possibly validate your object to see if everything's OK. Then you'd visit the hierarchy again to collect the changes (against the initial values you previously collected) so you can send them to the server.
I am working on a distributed execution server. I have decided to use a REST API based on HTTP on the server. The clients will connect to the server and GET the next task to be accomplished. Obviously I need to "update" the task that is retrieved to ensure that it is only processed once. A GET is not supposed to have any side effects (like changing the state of the resource retrieved). I could use a POST (to update the resource), but I also need to retrieve it.
I am thinking that I could have a URL where a POST marks the task as "claimed" and a GET then marks it as retrieved, but that leaves me with a side effect on GET again. Is this just not going to work in REST? I am OK with having a "function" resource to do this, but I don't want to give up the paradigm without a little research.
Pat O
If nothing else fits, you're supposed to use a POST request. Nothing prevents you from returning the resource in the response to a POST. And with a POST it is apparent that something will happen to that resource, which wouldn't be the case with a GET request.
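A sketch of what that can look like (Express here; the route and the task shape are made up). The POST both flips the task to "claimed" and returns it, so no second GET is needed:

```typescript
import express from 'express';

interface Task { id: number; payload: string; claimed: boolean; }
const tasks: Task[] = []; // populated elsewhere by the server

const app = express();

app.post('/tasks/claims', (_req, res) => {
  const task = tasks.find((t) => !t.claimed);
  if (!task) return res.status(204).end(); // nothing to hand out right now
  task.claimed = true;                     // the state change the POST implies
  res.status(201).json(task);              // ...and the resource comes back too
});

app.listen(3000);
```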
REST is really just a concept, and you can implement it however you want; there is no one 'right way', as everyone's use cases are different. (Yes, I understand that there is a defined spec out there, but you can still do it however you want.) In this situation, if your GET needs to have a side effect, it will have a side effect. Just make sure to properly document what you did (and potentially why you did it).
However, it sounds like you're just trying to create a queue with multiple subscribers, and if the subscribers are automated (such as scripts or other machines), you may want to look at using an actual queue (http://www.rabbitmq.com/getstarted.html).
If you are using this to power a web UI or something where actual people process this, you could also use a queue, with your GET request simply pulling the next item from the queue.
Note that with most messaging systems you will not be able to guarantee the order in which messages are pulled from the queue, so if ordering matters, this approach may not work for you.
I am trying to write a Core Data application for the iPhone that uses an external data source. I'm not really using Core Data to persist my objects but rather for the object life-cycle management. I have a pretty good idea on how to use Core Data for local data, but have run into a few issues with remote data. I'll just use Flickr's API as an example.
The first thing is that if I need say, a list of the recent photos, I need to grab them from an external data source. After I've retrieved the list, it seems like I should iterate and create managed objects for each photo. At this point, I can continue in my code and use the standard Core Data API to set up a fetch request and retrieve a subset of photos about, say, dogs.
But what if I then want to continue and retrieve a list of the user's photos? Since there's a possibility that these two data sets might intersect, do I have to perform a fetch request on the existing data, update what's already there, and then insert the new objects?
--
In the older pattern, I would simply have separate data structures for each of these data sets and access them appropriately: a recentPhotos set and a usersPhotos set. But since the general pattern of Core Data seems to be to use one managed object context, it seems (I could be wrong) that I have to merge my data with the main pool of data. That seems like a lot of overhead just to grab a list of photos. Should I create a separate managed object context for the different set? Should Core Data even be used here?
I think what I find appealing about Core Data is that before (for a web service) I would make a request for certain data and either filter it in the request or filter it in code, producing a list I would use. With Core Data, I can just get a list of objects, add them to my pool (updating old objects as necessary), and then query against it. One problem I can see with this approach, however, is that if objects are deleted externally, I have no way of knowing, since I'm keeping my old data.
Am I way off base here? Are there any patterns people follow for dealing with remote data and Core Data? :) I've found a few posts of people saying they've done it, and that it works for them, but little in the way of examples. Thanks.
You might try a combination of two things. This strategy will give you an interface where you get the results of a NSFetchRequest twice: Once synchronously, and once again when data has been loaded from the network.
1. Create your own subclass of NSFetchRequest that takes an additional block property to execute when the fetch is finished. This is for your asynchronous request to the network. Let's call it FLRFetchRequest.
2. Create a class to which you pass this request. Let's call it FLRPhotoManager. FLRPhotoManager has a method executeFetchRequest: which takes an instance of the FLRFetchRequest and:
   - queues your network request based on the fetch request, passing along the retained fetch request to be processed again when the network request is finished;
   - executes the fetch request against your Core Data cache and immediately returns the results.
Now when the network request finishes, update your Core Data cache with the network data, run the fetch request against the cache again, and this time pull the block from the FLRFetchRequest and pass the results of this second fetch into the block, completing the second phase.
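Transposed into TypeScript as a rough analogue of the pattern (every name here just mirrors the hypothetical FLRFetchRequest/FLRPhotoManager, none of it is a real API):

```typescript
interface FetchRequest<T> {
  query: (cache: T[]) => T[];              // stands in for the NSFetchRequest
  onNetworkResults?: (fresh: T[]) => void; // the extra block property
}

class PhotoManager<T> {
  constructor(
    private cache: T[],
    private loadFromNetwork: () => Promise<T[]>,
  ) {}

  executeFetchRequest(request: FetchRequest<T>): T[] {
    // Phase two: refresh the cache, re-run the query, fire the block.
    this.loadFromNetwork().then((fresh) => {
      this.cache = fresh;
      request.onNetworkResults?.(request.query(this.cache));
    });
    // Phase one: answer synchronously from the local cache.
    return request.query(this.cache);
  }
}
```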
This is the best pattern I have come up with, but like you, I'm interested in other's opinions.
It seems to me that your first instincts are right: you should use fetch requests to update your existing store. The approach I used for an importer was the following: get a list of all the files that are eligible for importing and store it somewhere. I'm assuming here that getting that list is fast and lightweight (just a name and a URL or unique id), but that actually importing something will take a bit more time and effort, and the user may quit the program or want to do something else before all the importing is done.
Then, on a separate background thread (this is not as hard as it sounds thanks to NSRunLoop and NSTimer; google "Core Data: Efficiently Importing Data"), get the first item of that list, get the object from Flickr or wherever, and search for it in the Core Data database (carefully read Apple's Predicate Programming Guide on setting up efficient, cached NSFetchRequests). If the remote object already lives in Core Data, update the information as necessary; if not, insert it. When that is done, remove the item from the to-be-imported list and move on to the next one.
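A language-neutral sketch of that loop (written in TypeScript here; the store API, RemoteRef, and fetchRemoteObject are hypothetical stand-ins for the Core Data and Flickr pieces):

```typescript
interface RemoteRef { id: string; url: string; }
interface LocalStore<T> {
  findByRemoteId(id: string): T | undefined; // the cached fetch-request step
  update(existing: T, fresh: T): void;
  insert(fresh: T): void;
}

async function importNext<T>(
  pending: RemoteRef[],
  store: LocalStore<T>,
  fetchRemoteObject: (ref: RemoteRef) => Promise<T>,
): Promise<void> {
  const ref = pending[0];
  if (!ref) return;                         // nothing left to import
  const remote = await fetchRemoteObject(ref);
  const local = store.findByRemoteId(ref.id);
  if (local) store.update(local, remote);   // already imported: refresh it
  else store.insert(remote);                // first sighting: insert it
  pending.shift();                          // only now drop it from the list
}
```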
As for the problem of objects that have been deleted in the remote store, there are two solutions: periodic syncing or lazy, on-demand syncing. Does importing a photo from Flickr mean importing the original thing and all its metadata (I don't know what the policy is regarding ownership etc) or do you just want to import a thumbnail and some info?
If you store everything locally, you could just run a check every few days or weeks to see if everything in your local store is present remotely as well: if not, the user may decide to keep the photo anyway or delete it.
If you only store thumbnails or previews, then you will need to connect to Flickr each time the user wants to see the full picture. If it has been deleted, you can then inform the user and delete it locally as well, or mark it as not being accessible any more.
For a situation like this you could use Cocoa's archiving facilities to save the photo objects (and an index) to disk between sessions, and just overwrite it all every time the app calls home to Flickr.
But since you're already using Core Data, and like the features it provides, why not modify your data model to include a "source" or "callType" attribute? At the moment you're implicitly creating a bunch of objects with source "Flickr API", but you can just as easily treat the different API calls as unique sources and then store that explicitly.
To handle deletion, the simplest way would be to clear the data store each time it's refreshed. Otherwise you'd need to iterate over everything and only delete the photo objects with filenames that weren't included in the new results.
I'm planning to do something similar to this myself so I hope this helps.
PS: If you're not storing the photo objects between sessions at all, you could just use two different contexts and query them separately. As long as they're never saved, and the central store doesn't have anything in it already, it would work just like you describe.