I just heard that there is a limitation on Google Cloud Storage, so that you can only access it with one request per second. I searched the internet but didn't find an authoritative answer to this.
Is this right, or can I access it more than once per second? I want to know for a web application I'm writing at the moment that can upload and download images to the Storage. If there is such a limitation, it would cause some delay when more requests per second are sent by different users.
You may be referring to the limitation that you can update or overwrite the same object at most once per second. There's no limit on the number of updates across different objects, or on the number of reads you can do on any object.
https://cloud.google.com/storage/docs/concepts-techniques#object-updates
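If your web app could write the same object path (e.g. re-upload the same image) more than once a second, it's worth retrying with backoff. A minimal sketch, assuming the Python google-cloud-storage client; the bucket and object names are placeholders:

    import time
    import random
    from google.cloud import storage  # pip install google-cloud-storage
    from google.api_core import exceptions

    def upload_with_backoff(bucket_name, blob_name, data, max_attempts=5):
        """Upload/overwrite an object, backing off if the same-object
        once-per-second update limit triggers a 429/503."""
        blob = storage.Client().bucket(bucket_name).blob(blob_name)
        for attempt in range(max_attempts):
            try:
                blob.upload_from_string(data)
                return
            except (exceptions.TooManyRequests, exceptions.ServiceUnavailable):
                # exponential backoff with jitter before retrying
                time.sleep(2 ** attempt + random.random())
        raise RuntimeError(f"upload of {blob_name} failed after {max_attempts} attempts")

Writes to different objects don't need this; only repeated updates of one object do.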
I need some guidance on how to properly build out a system that will be able to scale. I will give you some information about what I am trying to do and then ask my specific question.
I have a site where I want visitors to send some data to be processed. They input the data into a textarea or upload it in a file. Simple. The data is somewhat preprocessed on the client side before a POST request is made to a REST endpoint.
What I am stuck on is: what is a good way to take this posted data, store it, and associate an id with it that references the user, given that I cannot process the data fast enough to return it to the user in a reasonable amount of time?
This question is a bit vague and open to opinion, I admit; I just need a push in the right direction to keep moving. What I have been considering is throwing the data into a message queue, having some workers process it elsewhere, and, when the data is processed, alerting the user where to find it with a link to an S3 bucket or just a URL to a file. The other idea was to loop client side and run a request for each item against another endpoint that already processes individual records. The problem with this idea is as follows:
Processing the data may take anywhere from 30 minutes to 2 hours, depending on the amount they want processed. It's not ideal for users to just sit there and wait for that to finish, so I have mostly ruled this option out.
Any guidance would be very much appreciated as I don't have any coworkers to bounce things off of, nor do I know many people with the domain knowledge that I could freely ask. If this isn't the right place to ask this, could you point me in the right direction as to where it should be asked?
Chris
If I've got you right, your pipeline is:
Accept item from user
Possibly preprocess/validate it (?)
Put into some queue
Process data
Return result.
You may use one or several queues at stage (3). An item from a user gets added to one of the queues. If the item is big enough, it can be stored in S3 or similar storage, with only a reference put into the queue: a link, the upload date, and a user id (or email, or similar). Processors pull items from the queue and give feedback to the users.
If you have no strict ordering requirements, things get much simpler: you don't need any synchronization between components. Treat all the components (upload acceptors, queues, storage, and processors) as independent pools of processes. Monitor each pool separately, and if one becomes a bottleneck, add machines to that pool.
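To make that concrete, here is a minimal sketch of the producer/worker split, assuming an SQS queue via boto3; the queue URL, user id, and S3 key are placeholders:

    import json
    import boto3  # pip install boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/process-jobs"  # placeholder

    def enqueue_job(user_id, s3_key):
        """Producer: the upload acceptor stores the raw data in S3 and
        queues only a reference to it, not the data itself."""
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"user_id": user_id, "s3_key": s3_key}),
        )

    def worker_loop(process, notify):
        """Consumer: pull jobs, do the 30 min - 2 h of work, then notify
        the user (email, in-app message, etc.) with a link to the result.
        For jobs this long, raise the queue's visibility timeout accordingly."""
        while True:
            resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                       MaxNumberOfMessages=1,
                                       WaitTimeSeconds=20)  # long polling
            for msg in resp.get("Messages", []):
                job = json.loads(msg["Body"])
                result_url = process(job)
                notify(job["user_id"], result_url)
                sqs.delete_message(QueueUrl=QUEUE_URL,
                                   ReceiptHandle=msg["ReceiptHandle"])

Each pool then scales independently: if the queue depth keeps growing, add more workers.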
Akamai recently released their REST API for handling purge from edge servers.
I'm writing a function/method to call that API and invalidate the cache for a given object in our storage.
The docs say that it's possible to pass multiple objects in one request (see the Purge Request section). They don't say, however, how many objects I can pass.
I'm talking about potentially thousands of objects that need to be purged in one call. Does anyone know exactly how many objects I can pass per call?
10,000 is the maximum queue length.
If you have more objects than that, you should make an API call to check the queue status, and then queue more objects when there is room.
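One way to respect that cap is to purge in batches and poll the queue length before submitting more. A rough sketch; the CCU v2 endpoint, auth style, and batch size are all assumptions, so check Akamai's current docs:

    import time
    import requests  # pip install requests

    # Assumed CCU v2 endpoint and basic auth; placeholders throughout.
    CCU_URL = "https://api.ccu.akamai.com/ccu/v2/queues/default"
    AUTH = ("api_user", "api_password")
    BATCH = 100        # per-request batch size: an assumption, not a documented limit
    QUEUE_MAX = 10000  # max queue length, per the answer above

    def queue_room():
        resp = requests.get(CCU_URL, auth=AUTH)
        resp.raise_for_status()
        return QUEUE_MAX - resp.json().get("queueLength", 0)

    def purge_all(urls):
        """Purge in batches, pausing whenever the queue is near its cap."""
        for i in range(0, len(urls), BATCH):
            batch = urls[i:i + BATCH]
            while queue_room() < len(batch):
                time.sleep(60)  # poll until the queue drains
            resp = requests.post(CCU_URL, json={"objects": batch}, auth=AUTH)
            resp.raise_for_status()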
I am the manager of an iOS application that uses the Google Places API. Right now I am limited to 100,000 requests per day, and during our testing one or two users could use up to 2,000 requests per day (without autocomplete). This means only about 50 to 200 people would be able to use the app per day before I run out of quota. I know I will need to fill out the uplift request form when the app launches to get more quota, but based on these test results I expect to need a very large quota. Can anyone help me with this issue?
Note: I do not want to launch the app until I know I will be able to get a larger quota.
First up, put your review request in sooner rather than later so I have time to review it and make sure it complies with our Terms of Service.
Secondly, how are your users burning 2k requests per day? Would caching results help you lower your request count?
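As an illustration of the caching idea, here is a tiny server-side cache in front of the Places Nearby Search endpoint. The key rounding and TTL are assumptions, and what you may cache at all is governed by the Places API terms, so check those first:

    import time
    import requests  # pip install requests

    PLACES_URL = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
    API_KEY = "YOUR_KEY"  # placeholder
    TTL = 600             # seconds; an assumption -- check what the terms allow

    _cache = {}  # (lat, lng, radius, keyword) -> (timestamp, parsed response)

    def nearby(lat, lng, radius, keyword=""):
        # Round coordinates so nearby users hit the same cache entry.
        key = (round(lat, 3), round(lng, 3), radius, keyword)
        hit = _cache.get(key)
        if hit and time.time() - hit[0] < TTL:
            return hit[1]  # served from cache: no quota spent
        resp = requests.get(PLACES_URL, params={
            "location": f"{lat},{lng}", "radius": radius,
            "keyword": keyword, "key": API_KEY,
        })
        resp.raise_for_status()
        _cache[key] = (time.time(), resp.json())
        return _cache[key][1]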
I'm facing the same problem!
Is it possible to use the Places library of the Google Maps JavaScript API, which applies its quota to each end user instead of to an API key, so that the quota grows as the user base grows? See here
Theoretically I think it's possible, since it only needs a WebView or a JavaScript runtime to use the library, but I haven't seen anyone take this approach.
On an iPhone app (or mobile in general) that constantly needs to send requests to a web service, is it better to make one single request that fetches a large amount of data, or multiple (possibly simultaneous) requests that each fetch a smaller amount of data?
Example
I want to load a list of elements in a node, and I have the node's ID. The two ways I can fetch the elements are the following:
send a single request with the node ID and get all the information about the first n elements in the node in a single response;
send a first request with the node ID to get the IDs of the first n elements in the node, then send another request per element, getting one response per element.
I'm torn between the two:
the heavyweight single response may cause more lag and timeouts, because mobile connections are often slow and unstable;
the phone may have trouble handling too many responses at the same time.
What's your opinion ?
Since there is overhead for every request, one large request is generally faster than several small ones of the same total size. This applies to high-speed networks too, but on mobile networks the ratio between transfer speed and latency is even bigger.
I don't think the phone will have any problem handling the responses, so multiple requests are workable when the individual payloads are large. However, depending on the size of your requests and responses, it may actually be faster to do everything in a single request, in order to avoid the delay that each extra request adds. The single-request approach will also transfer slightly less data overall than the multiple-request one.
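A back-of-the-envelope model shows why latency dominates on mobile. This sketch uses purely illustrative numbers (300 ms round trip, ~1 Mbit/s):

    def total_time(n_requests, bytes_per_request, rtt_s=0.3, bandwidth_bps=1_000_000):
        """Crude model: each request pays one round trip of latency,
        plus its share of the raw transfer time."""
        transfer_s = n_requests * bytes_per_request * 8 / bandwidth_bps
        return n_requests * rtt_s + transfer_s

    # Fetching 50 items of 2 KB each:
    print(total_time(50, 2_000))    # ~15.8 s: the 50 round trips dominate
    # The same payload in one batched response:
    print(total_time(1, 100_000))   # ~1.1 s: one round trip, same data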
Every call has its own overhead (i.e. network load), and the number of simultaneous connections might also be limited.
You might or might not be able to update your user interface during download, depending on how often your callbacks are called - you may be able to process partial data as it arrives.
If your data compresses well (typically text data), using a single call might reduce your total network footprint even more.
If the chunks of data are large, I'd go with several individual requests. That will also make things easier in case of network errors. The bottom line for me is to find the right balance: keep the packets reasonably sized and don't flood the server.
It depends on the situation. If you don't want your users to wait at every step through the app, you can use a single request to load all the data at once.
If you don't mind making the user wait, you can use multiple requests on demand. For example, if you just want to show titles in a table view and details only when the user taps a title, you can first fetch the titles alone and then, on tap, fetch the details for that title by its ID. That is a good way to request data on demand only; a sketch follows.
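A sketch of that on-demand pattern against a hypothetical REST API (the endpoints and the fields parameter are made up for illustration):

    import requests  # pip install requests

    BASE = "https://api.example.com"  # hypothetical endpoints, illustration only

    def load_titles():
        """Fetch the lightweight list up front: ids and titles only."""
        resp = requests.get(f"{BASE}/items", params={"fields": "id,title"})
        resp.raise_for_status()
        return resp.json()

    def load_detail(item_id):
        """Fetch the full record only when the user taps a row."""
        resp = requests.get(f"{BASE}/items/{item_id}")
        resp.raise_for_status()
        return resp.json()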
Sometimes the situation merits single requests for just one category. Say you have a Twitter-style app where the tweets are separated into categories. Someone who has the app but only cares about sports may only ever look at the sports section, which could be a single AJAX call. Another user may only be interested in two categories out of 15. That way users don't have to load unnecessary data. The important thing you need to determine is this:
Does all of the data need to be loaded at once for the app to work correctly, and are your users generally going to want all that data in the first place?
I've been looking on the RT site but cannot find any details; I'm just piecing it together from what I've read on forums:
It appears the Rotten Tomatoes API is limited to 10k calls per day (one call every 8.64 seconds) per IP address. E.g., with the one API key on two separate computers (different IPs), the two will not affect each other's limits.
Is this true? Does anyone know? It's for an iPhone app; I'm just asking to get the background.
Thanks
I have taken this question to the RT forum; close-voters can get busy closing this thread if you wish:
http://developer.rottentomatoes.com/forum/read/123466
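In the meantime, if the 10k-per-day figure from the question holds (roughly one call every 8.64 seconds), a simple client-side throttle keeps an app under it. A minimal sketch; the limit itself is unconfirmed:

    import time

    class Throttle:
        """Space calls at least `min_interval` seconds apart, so a long-running
        process stays under ~10,000 calls/day (the limit is an assumption)."""
        def __init__(self, min_interval=8.64):
            self.min_interval = min_interval
            self._last = 0.0

        def wait(self):
            delay = self._last + self.min_interval - time.monotonic()
            if delay > 0:
                time.sleep(delay)
            self._last = time.monotonic()

    throttle = Throttle()
    # throttle.wait()  # call this before each RT API request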