I want to send a huge amount of data through a Spring REST API. Some people have suggested breaking it into chunks and then sending them via REST. Can anyone suggest how to make this possible?
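A minimal sketch of the usual approach, assuming a Spring Boot service with Spring Data: expose the data as pages (chunks) and let the client fetch them one at a time. RecordEntity, RecordRepository, and the /records path are placeholder names for illustration, not part of any existing API.

```java
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Placeholder repository for whatever entity holds the large dataset.
// RecordEntity is assumed to be an existing entity in your project.
interface RecordRepository extends PagingAndSortingRepository<RecordEntity, Long> {
}

@RestController
public class RecordController {

    private final RecordRepository repository;

    public RecordController(RecordRepository repository) {
        this.repository = repository;
    }

    // Clients fetch the data chunk by chunk:
    //   GET /records?page=0&size=500, then page=1, page=2, ...
    // and stop once the returned page reports "last": true.
    @GetMapping("/records")
    public Page<RecordEntity> getRecords(@RequestParam(defaultValue = "0") int page,
                                         @RequestParam(defaultValue = "500") int size) {
        return repository.findAll(PageRequest.of(page, size));
    }
}
```

Paging keeps each response small and lets the client resume after a failure without re-downloading everything it has already received.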
My fullstack React eCommerce application interacts with Stripe using my Express backend.
I need the client to be able to perform CRUD operations on products and orders, and as such they are currently stored in my MongoDB database.
However, I have discovered that interacting with Stripe's API is significantly easier if products (and thus orders) are stored on their database too.
As such, I am considering using both databases as sources of truth. However, this means that every CUD operation on one would need to be reflected in the other, making things more complex.
What is the best approach to this predicament? Thank you!
It really depends on your use-case and how you'd like to structure your integration. You're correct that it would make it easier to integrate with Stripe's API if you have the products and other information stored on Stripe. Stripe does provide a way for you to listen for any changes made to an object and update your own database accordingly using Webhooks [1].
You can build a webhook endpoint and listen to a variety of events in order to receive updates in real time. This would allow you to maintain your own database without worrying about writing a script that polls the API to retrieve the latest state of your data/objects.
[1] https://stripe.com/docs/webhooks
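As a sketch of what such an endpoint can look like (shown here in Java with the stripe-java SDK and a Spring controller rather than the question's Express backend; the signing secret and the MongoDB update are placeholders):

```java
import com.stripe.exception.SignatureVerificationException;
import com.stripe.model.Event;
import com.stripe.net.Webhook;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class StripeWebhookController {

    // Signing secret from the Stripe dashboard; placeholder value.
    private static final String ENDPOINT_SECRET = "whsec_...";

    @PostMapping("/stripe/webhook")
    public ResponseEntity<String> handle(@RequestBody String payload,
                                         @RequestHeader("Stripe-Signature") String signature) {
        Event event;
        try {
            // Verify the signature so only Stripe can trigger updates.
            event = Webhook.constructEvent(payload, signature, ENDPOINT_SECRET);
        } catch (SignatureVerificationException e) {
            return ResponseEntity.badRequest().body("Invalid signature");
        }

        // React to the events you care about and mirror the change locally.
        switch (event.getType()) {
            case "product.created":
            case "product.updated":
            case "product.deleted":
                // A hypothetical updateLocalProduct(event) would write the change to MongoDB (not shown).
                break;
            default:
                // Ignore event types we don't track.
                break;
        }
        return ResponseEntity.ok("Received");
    }
}
```

The same shape applies in an Express backend: read the raw body and the Stripe-Signature header, construct the event, then update your own database.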
First of all, sorry for my English.
This is more of a theoretical question than a practical one. Our client wants a REST API that works like a normal REST API, but in addition the API should be able to send large amounts of data to the other side on some schedule (probably to their API, to sync table data between the two companies) without them asking for the data. The timeframe should be editable. What do you think is the best way to solve this?
We thought about saving the timeframes in a table, but then we would need to check the table regularly for changes, which is not ideal.
Do you have any other ideas or tips even for the API?
Thank you
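One way to avoid polling a table for schedule changes is to reschedule the push job in code whenever the client edits the timeframe. A rough sketch, assuming Spring with a TaskScheduler bean configured; everything except the Spring types is a hypothetical name:

```java
import java.time.Duration;
import java.util.concurrent.ScheduledFuture;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.stereotype.Service;

// Hypothetical service: pushes a data export to the partner's API on a schedule
// that can be changed at runtime (e.g. from a REST endpoint), instead of
// polling a database table for schedule changes.
@Service
public class ScheduledPushService {

    private final TaskScheduler scheduler;   // assumes a TaskScheduler bean, e.g. ThreadPoolTaskScheduler
    private ScheduledFuture<?> currentTask;

    public ScheduledPushService(TaskScheduler scheduler) {
        this.scheduler = scheduler;
    }

    // Called at startup and again whenever the client edits the timeframe.
    public synchronized void reschedule(Duration interval) {
        if (currentTask != null) {
            currentTask.cancel(false);   // stop the old schedule
        }
        currentTask = scheduler.scheduleAtFixedRate(this::pushData, interval);
    }

    private void pushData() {
        // Here you would page through the tables and POST the data
        // to the other company's API (details omitted).
    }
}
```

The current timeframe can still be persisted in a table for restarts; the point is that the running schedule is updated at the moment it is edited, so nothing has to poll that table.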
How can I retrieve a large amount of data from the GitHub REST API? Currently it provides only a small amount of JSON data from the GitHub timeline, in many cases limited to only 300 events. I need a bigger volume to work with in my Master's research, and I need to know how to get it via the REST API.
GitHub's API (and, IMHO, most good APIs) uses pagination to reduce load on both the server and its clients. You could write a simple script that goes through all the "pages" of results one at a time, then combines the results locally afterwards.
More info here:
http://developer.github.com/guides/traversing-with-pagination/
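For example, a small Java sketch that follows the Link: rel="next" header GitHub attaches to paginated responses. The starting URL is illustrative, unauthenticated requests are rate-limited, and the events feed is still capped overall, as noted in the question.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Walks every page of a paginated GitHub REST API endpoint by following the
// Link: <...>; rel="next" header, and collects the raw JSON pages locally.
public class GitHubPager {

    private static final Pattern NEXT_LINK = Pattern.compile("<([^>]+)>;\\s*rel=\"next\"");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        List<String> pages = new ArrayList<>();

        // Starting URL is an example; per_page=100 is the maximum page size.
        String url = "https://api.github.com/events?per_page=100";

        while (url != null) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("Accept", "application/vnd.github+json")
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            pages.add(response.body());

            // The Link header tells us where the next page is, if there is one.
            url = response.headers().firstValue("Link")
                    .map(GitHubPager::nextUrl)
                    .orElse(null);
        }

        System.out.println("Fetched " + pages.size() + " pages");
        // Combine/parse the collected JSON pages here as needed.
    }

    private static String nextUrl(String linkHeader) {
        Matcher m = NEXT_LINK.matcher(linkHeader);
        return m.find() ? m.group(1) : null;
    }
}
```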
Does anyone have experience with programmatic exports of data in conjunction with BaaS providers like e.g. parse.com or StackMob?
I am aware that both providers (as far as I can tell from the marketing talk) offer a REST API which will allow for queries against the database, not only to be used by mobile clients but also by e.g. custom web apps.
I am also aware that both providers offer a manual export of data (parse.com via their web interface, StackMob via support).
But let's say I would like to dump all data nightly, so that I can import it into a reporting system, for instance. Or maybe simply to have an up-to-date backup.
In this case, I would need a programmatic way to export/replicate the data stored in the backend. Manual exports are not an option for obvious reasons.
The REST APIs offered, however, seem to be designed for specific queries, not for mass reads (performance?). Let alone the pricing - I assume none of the providers would be happy about a nightly X-gigabyte data export via their REST API, so there will probably be a price tag.
I just couldn't find any specific information on this topic so far, so I was wondering if anyone else has already gone through this. Also, any suggestions on StackMob/parse alternatives are welcome, especially if related to the data export topic.
Cheers, Alex
Did you see the section of the Parse REST API on Batch operations? Batch operations reduce the number of API calls needed to grab data so that you are not using a call for every row you retrieve. Keep in mind that there is still a limit (the default is 100, but you can set it to a maximum of 1000). That means you are still limited to pulling down 1000 rows per API call.
I can't comment on StackMob because I haven't used it. At my present job, we are using Parse and we wrote a C# app which compares the data in a Parse class with a SQL table and pulls down any changes.
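For what it's worth, paging through a Parse class over the REST API with limit and skip looks roughly like the following. The class name, keys, and the crude stop condition are placeholders/assumptions, not something lifted from the Parse docs.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;

// Pulls down a Parse class through the REST API in pages of 1000 rows
// (the maximum limit), using skip to advance through the class.
public class ParseExport {

    private static final String APP_ID = "YOUR_APP_ID";        // placeholder
    private static final String REST_KEY = "YOUR_REST_API_KEY"; // placeholder

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        List<String> pages = new ArrayList<>();
        int limit = 1000;
        int skip = 0;

        while (true) {
            String url = "https://api.parse.com/1/classes/Order?limit=" + limit + "&skip=" + skip;
            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("X-Parse-Application-Id", APP_ID)
                    .header("X-Parse-REST-API-Key", REST_KEY)
                    .build();
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();

            // Crude stop condition: an empty "results" array means we've read everything.
            if (body.contains("\"results\":[]")) {
                break;
            }
            pages.add(body);
            skip += limit;
        }

        System.out.println("Fetched " + pages.size() + " pages");
        // Feed the collected JSON into your reporting/backup pipeline here.
    }
}
```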
Is this a good idea? When is it a good idea, and when is it bad?
Just heard about this in one of the WWDC videos, and I don't quite understand why one would want to do it this way. It seems complicated, and I cannot see the benefit.
The way I see it, the point would be to totally abstract the data access layer. You'd then be able to access the web service using the Core Data fetch request API. You'd also be able to implement caching in the persistent store without affecting the application logic.
Also changing the web service request/response format could potentially only affect the persistent store layer.
I can see it being a benefit for large requests. Since networking is quite expensive in terms of battery life, an application should use as little bandwidth as possible, so sending a single larger request and then using Core Data to access only subsets of the data at a time is a good design in my opinion.
Finally, I think the Core Data API blends well with major ORM-based web frameworks like Rails or Django, for example.
It is complicated and it is meant to show what you can do with Core Data. I personally like to keep server communication separate from the local cache and then update the server based on changes to the local cache. This means that I use code that listens for save events from Core Data and then updates the server.