Is there a way to invalidate all cache states and trigger automated re-fetching for active subscribers at once? - redux-toolkit

Let's say we were to implement a useRefresh() hook just like the one from react-admin. It should invalidate all query cache states and trigger automated re-fetching for active subscribers, similar to invalidateQueries() from react-query. Is there a way to do that with RTK Query? How could I approach this? I know I could do it for every service api through dispatch(someApi.util.invalidateTags([...someApiTags])), but then I'd need to run that for every api, importing all their tags. I was wondering if there's a way to do that globally.
References:
https://github.com/marmelab/react-admin/blob/v4.1.1/packages/ra-core/src/dataProvider/useRefresh.ts
https://redux-toolkit.js.org/rtk-query/api/created-api/api-slice-utils#invalidatetags
https://react-query.tanstack.com/guides/query-invalidation

As per the documentation, your application should almost certainly have only one api, or maybe two - but your phrasing of "every service api" tells me that you somehow went in the direction of "a lot of apis". That is almost certainly not how RTK Query is meant to be used.
So my suggestion would be to consolidate your apis into one - you can keep the endpoints in individual files using the documented code splitting approaches.
With only one api, you only have to dispatch resetApiState once.
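For illustration, a minimal sketch of such a useRefresh() hook, assuming a single consolidated api slice (the import path is hypothetical):

import { useDispatch } from "react-redux";
import { api } from "./services/api"; // hypothetical path to your one api slice

export function useRefresh() {
  const dispatch = useDispatch();
  return () => {
    // Clears every cache entry in one dispatch; active subscribers then
    // re-run their queries. Note the RTK Query docs caution that hooks keep
    // some local component state, so test this against your usage patterns.
    dispatch(api.util.resetApiState());
  };
}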

Related

Can I create many Jobs at the same time in kubernetes via the client API?

I want to create hundreds of Jobs in kubernetes via its API. Is there any way to do this? I have to create them one by one now. Thanks.
I mean you have to make one API call per object you want to create, but you can certainly write a script for that. Kubernetes does not offer a "bulk create" API endpoint, if that's what you are asking - or really much of anything for bulk operations. It's a boring old REST API :)
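For what it's worth, a minimal sketch of such a script using the official JavaScript client (@kubernetes/client-node). The Job spec below is a placeholder, and newer client versions take a single object parameter rather than positional arguments:

import * as k8s from "@kubernetes/client-node";

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const batch = kc.makeApiClient(k8s.BatchV1Api);

// One createNamespacedJob call per Job - there is no bulk endpoint.
async function createJobs(count: number): Promise<void> {
  for (let i = 0; i < count; i++) {
    await batch.createNamespacedJob("default", {
      apiVersion: "batch/v1",
      kind: "Job",
      metadata: { name: `bulk-job-${i}` },
      spec: {
        template: {
          spec: {
            containers: [{ name: "worker", image: "busybox", command: ["true"] }],
            restartPolicy: "Never",
          },
        },
      },
    });
  }
}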

WebApi supporting Range requests without querying the db multiple times

Currently I have a dotnetcore WebApi that is serving up videos. The videos are stored in a SQL Server table as a varbinary(MAX). This was working; however, I read that to support iOS Safari we needed to accept the Range header, so I have added support for this (I think).
However now I am noticing two things (could be unrelated):
1) Whenever a call is made to this API, the CPU spikes to 100%. I can only assume that is Entity Framework querying the db for a 25MB file. It seems crazy, but the API is doing nothing else - can this be improved? The server just grinds.
2) Multiple requests are made to the API with different range bytes requested, but my api in turn queries the db on each request, which sends the CPU into overdrive for a long period.
Is there a better way of handling range requests when querying for a large object?
If you ask me, EF is not really well suited for this; it's too clunky and resource-consuming. You could write your own T-SQL using something like SUBSTRING to read only the requested byte range (see the sketch at the end of this answer). That said, from a practical point of view - depending on how many and how big these files are, and how many users you have - I would not go with such a solution.
I don't think a SQL database is how you should be storing this data at all.
You could start by doing some research on how Netflix does it: https://www.techhive.com/article/2158040/how-netflix-streams-movies-to-your-tv.html
You probably want something like that: a CDN and some sort of caching. Your way of doing it now might work while you build it, with one or two users, but if this is an API used by lots of people, you will quickly find that it won't scale.
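If you do stick with SQL for now, here is a hedged sketch of the SUBSTRING idea from the answer above - shown in TypeScript with the mssql package purely for illustration (table and column names are assumptions); the same parameterized query can be issued from EF/ADO.NET:

import sql from "mssql";

// start/end come from the parsed Range header, e.g. "bytes=0-1048575".
async function readVideoChunk(videoId: number, start: number, end: number): Promise<Buffer> {
  const pool = await sql.connect(process.env.DB_CONN!); // connection string is an assumption
  const result = await pool
    .request()
    .input("id", sql.Int, videoId)
    .input("start", sql.Int, start + 1) // SUBSTRING is 1-based
    .input("len", sql.Int, end - start + 1)
    // Slice the varbinary in the database so only the requested bytes travel over the wire.
    .query("SELECT SUBSTRING(Content, @start, @len) AS Chunk FROM Videos WHERE Id = @id");
  return result.recordset[0].Chunk as Buffer;
}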

Google Fusion Table REST Api vs Advanced Services Fusion Table Services in app scripts

I am very confused about the correct or recommended mechanism for accessing the Google Fusion Tables APIs in Apps Script. There seem to be two methods, each with examples, but no discussion of which is preferred or why. Is one of these interfaces newer and preferred while the other is dying? Is one obsolete or more restricted in what it can do?
Method 1 is the REST API described here
https://developers.google.com/fusiontables/docs/v2/sql-reference#Select
Method 2 is a set of library functions sort of described here under the Apps Script/Google Advanced Services:
https://developers.google.com/apps-script/advanced/fusion-tables
For example, using the REST API to run a SQL query, we end up with something like this:
function runSQL(sql) {
  // getUrlFetchOptions() is assumed to supply the OAuth token and API key headers.
  var getDataURL = 'https://www.googleapis.com/fusiontables/v1/query?sql=' + encodeURIComponent(sql);
  var dataResponse = UrlFetchApp.fetch(getDataURL, getUrlFetchOptions()).getContentText();
  return dataResponse;
}
And using the advanced API we use something like this:
var result = FusionTables.Query.sql(sql, { hdrs: false });
The REST API seems much harder to use, requiring complex OAuth and developer keys to be configured in advance and coded into the application, while the Advanced Services API handles all of this behind the scenes and makes for simple API calls like the one I show here.
I have seen numerous examples using each of the above, with no hint as to why one author chose one mechanism over the other.
Your help is greatly appreciated.
The Fusion Tables service within Apps Script is a work in progress, so the full functionality of the API might not be fully supported at the moment. As you mentioned, though, the big advantage of the service over the REST API is that you do not have to handle the OAuth flow, as you only need to enable it on your script (as stated here).
The Apps Script "advanced service" implementation still lacks some advanced functionality (like alt=media format queries or multipart / resumable uploads) - and if it actually has those features, it lacks even basic documentation of them, to the point that the Apps Script editor autocomplete is unaware of them. The tradeoff for these functionality gaps is that you don't need to handle keys, request building, etc.
So, if you're doing simple SQL select / importRows work, the Advanced Service should cover almost all your needs. If you need to delete from your Fusion Tables, you might want to consider setting up the REST API - because DELETE removes one record per query, the better way to bulk-delete is often to "download what you want to keep, then re-upload it via replaceRows" (sketched after this answer).
(This worked for me for a while, but eventually what I was keeping outgrew the Apps Script service's limitations and I began receiving Empty Response errors from the call to replaceRows. My remedy was to perform my record maintenance tasks via the REST API, where I can specify resumable uploads, timeouts, etc., while more "normal" interactions are done through the Advanced Service.)
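For reference, a sketch of that keep-and-replace pattern, written as Apps Script-flavored TypeScript; the filter is illustrative and the exact replaceRows signature is an assumption (as noted above, the editor autocomplete does not document these methods):

declare const FusionTables: any; // Apps Script Advanced Service global
declare const Utilities: any;    // Apps Script utility global

function compactTable(tableId: string): void {
  // 1. Download only the rows you want to KEEP.
  const keep = FusionTables.Query.sql(
    "SELECT * FROM " + tableId + " WHERE Status <> 'stale'", // illustrative filter
    { hdrs: false }
  );
  // 2. Serialize them and replace the table contents in a single call.
  const csv = keep.rows.map((r: any[]) => r.join(",")).join("\n");
  FusionTables.Table.replaceRows(tableId, Utilities.newBlob(csv, "application/octet-stream"));
}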

Programmatic export/dump/mass data retrieval (BaaS)

Does anyone have experience with programmatic exports of data in conjunction with BaaS providers like e.g. parse.com or StackMob?
I am aware that both providers (as far as I can tell from the marketing talk) offer a REST API which will allow for queries against the database, not only to be used by mobile clients but also by e.g. custom web apps.
I am also aware that both providers offer a manual export of data (parse.com via their web interface, StackMob via support).
But let's say I would like to dump all data nightly, so that I can import it into a reporting system, for instance. Or maybe simply to have an up-to-date backup.
In this case, I would need a programmatic way to export/replicate the data stored in the backend. Manual exports are not an option for obvious reasons.
The REST APIs offered, however, seem to be designed for specific queries, not for mass reads (performance?). Let alone the pricing - I assume none of the providers would be happy about a nightly X-gigabyte data export via their REST API, so there will probably be a price tag.
I just couldn't find any specific information on this topic so far, so I was wondering if anyone else has already gone through this. Also, any suggestions on StackMob/parse alternatives are welcome, especially if related to the data export topic.
Cheers, Alex
Did you see the section of the Parse REST API on Batch operations? Batch operations reduce the number of API calls needed to grab data so that you are not using a call for every row you retrieve. Keep in mind that there is still a limit (the default is 100, but you can set it to a maximum of 1000). That means you are still limited to pulling down 1000 rows per API call.
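Given that cap, a nightly dump just has to page through the class. A minimal sketch against the Parse REST API (class name and credentials are placeholders; very large classes may need a smarter cursor than skip):

async function dumpClass(className: string): Promise<unknown[]> {
  const all: unknown[] = [];
  const limit = 1000; // Parse's per-query maximum
  for (let skip = 0; ; skip += limit) {
    const res = await fetch(
      `https://api.parse.com/1/classes/${className}?limit=${limit}&skip=${skip}`,
      {
        headers: {
          "X-Parse-Application-Id": process.env.PARSE_APP_ID!,
          "X-Parse-REST-API-Key": process.env.PARSE_REST_KEY!,
        },
      }
    );
    const { results } = (await res.json()) as { results: unknown[] };
    all.push(...results);
    if (results.length < limit) break; // last page reached
  }
  return all;
}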
I can't comment on StackMob because I haven't used it. At my present job, we are using Parse and we wrote a C# app which compares the data in a Parse class with a SQL table and pulls down any changes.

Best practice for updating a structured resource via REST?

I have a client-side interface that allows the user to perform multiple edits against a tree-like outline. I consider the aggregate of the records making up that outline, in totality, a single resource (/outlines/39) even though its parts could be accessed as separate resources via different URLs.
The problem is the user can edit existing nodes in the outline as well as add new nodes to the outline. Normally, when you edit something you PUT its changes and when you add something new you POST it; however, in some cases you'll want to wrap all the changes--including both adds and edits--in a single transaction. What are some practical ways people have handled this?
Even though the outline already exists and a PUT seems appropriate, the embedded adds violate the idempotence of the PUT. I'm not sure that POST seems appropriate either. For design purposes, I have decided not to save each discrete update the user makes, though I guess that would offer one solution. Still, there must be others who have dealt with my issue or have ideas about it.
Is there any way you could make the add idempotent? E.g. if nodes had a natural key, then when the client tried to add a node a second time you could do nothing.
How about making a new resource, /outlines/39/transactions, and POSTing your transaction to that resource? E.g.
POST "addNode=node1, addNode=node2, editNode=node3,newName=foobar" to /outlines/39/transactions