Google Custom Search API issue: up to 100 queries - iphone

I want to use Google Custom Search API https://developers.google.com/custom-search/v1/overview to search images by keywords.
The overview page says:
Usage is free for all users, up to 100 queries per day. Any usage beyond the free usage quota will fail if you are not signed up for billing. Once you have enabled billing, you will continue to receive 100 free queries per day. However, you will be billed for all additional requests at the rate of $5 per 1000 queries, for up to 10,000 queries per day.
What do they mean? Is that the limit for all users of my application, or can each of my users make up to 100 queries? If you don't know the answer, maybe you know the URL of a Google help page for these questions.

Since you are supposed to get an API key for this service, Google most likely counts queries against your API key, so you get 100 free queries per day in total per API key (which is shared by all of your users), not 100 queries per user.

Google needs to clarify this.
I enabled billing and I'm still not getting squat.
First of all, even with billing enabled on a paid subscription, it is not possible to do a Google Custom Search (API) and process more than 100 results from that search. If you have access to more than 100 results (with 1,000+ or a million results returned), prove it and send me the code.
And those 100 results? They consume 10 searches. You do the initial search (that's 1 search consumed) and get back 10 results in the JSON response. Want the next 10? You have to do/consume another search against the quota using &start=11 to get the next ten, and so on, up to 100 (consuming 10 searches in total), and then you get a 404 when &start=101.
This is yet another BIG GOOGLE FAIL!
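The pagination scheme described above can be sketched like this (a minimal sketch in Python using only the standard library; `key` and `cx` are placeholders for your own API key and search engine ID):

```python
import json
import urllib.parse
import urllib.request

API_URL = "https://www.googleapis.com/customsearch/v1"

def page_starts(max_results=100, page_size=10):
    """Start offsets for each page: 1, 11, 21, ..., 91.
    start=101 fails, since the API caps a search at 100 results."""
    return list(range(1, max_results + 1, page_size))

def fetch_page(key, cx, query, start):
    """One query against the daily quota per page of 10 results."""
    params = urllib.parse.urlencode(
        {"key": key, "cx": cx, "q": query, "start": start}
    )
    with urllib.request.urlopen(f"{API_URL}?{params}") as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Walking all 100 reachable results consumes 10 quota queries.
    print(page_starts())  # [1, 11, 21, 31, 41, 51, 61, 71, 81, 91]
```

So exhausting the 100 reachable results of a single search burns a tenth of the free daily quota by itself.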

Related

Staggered API call in Power BI

I'm fairly new to Power BI, so forgive my ignorance if this is a stupid question. I've set up an API query to pull records from a 3rd-party app, but the app's rate limiting allows a max of 500 records per refresh, with some timing restrictions.
How can I set up my query to stagger the refresh, starting where it left off each time? For example, if there are 2,000 records to pull, I'd want to stagger that to 500 new records pulled every minute until complete. I considered using incremental refresh, but there are no variables that group the data into small enough chunks.
Help? Can I provide anything else?
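The resume-from-offset loop the question describes can be sketched generically (in plain Python rather than Power BI's M language; `fetch_page` here is a simulated stand-in for the rate-limited 3rd-party API, and the page size and wait are the question's assumptions):

```python
import time

PAGE_SIZE = 500      # the app's per-refresh cap
WAIT_SECONDS = 60    # timing restriction between pulls

def fetch_page(offset, limit, _source=list(range(2000))):
    """Stand-in for the rate-limited 3rd-party API call."""
    return _source[offset:offset + limit]

def pull_all(wait=WAIT_SECONDS, sleep=time.sleep):
    """Pull PAGE_SIZE records at a time, resuming where the last
    pull left off, until a short (or empty) page signals the end."""
    records, offset = [], 0
    while True:
        page = fetch_page(offset, PAGE_SIZE)
        records.extend(page)
        offset += len(page)
        if len(page) < PAGE_SIZE:
            return records
        sleep(wait)  # respect the rate limit before the next pull

if __name__ == "__main__":
    data = pull_all(sleep=lambda _: None)  # skip waiting in the demo
    print(len(data))  # 2000
```

The key idea is the same whatever the tool: track the offset of the last successful pull and wait out the rate limit between pages.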

Trace cause of Firestore reads

I am seeing an excessive number of Firestore reads over the past few weeks. My system generally processed about 60k reads per day. About 3 weeks ago it jumped to roughly 10 million a day, and in the past 2 days it has hit over 40 million reads in a single day! My user base has not grown and my code has not changed, so there is no reason for this spike. I suspect an endpoint is being hit by someone outside the scope of my application who may be trying to penetrate it or retrieve records. I have reached out to Firestore support repeatedly for help with this, as it is becoming a huge loss every day it happens, but they are unable to assist me.
Is there a way to trace the origin of read requests or, more importantly, to see counts of which collections or documents are being read? This must be traceable somehow, as Firestore bills you per read, but I cannot seem to find it.
There is currently no source IP address tracking with Cloud Firestore. All reads fall under the same bucket, which is that "they happened".
If you're building a mobile app, now would be a good time to use security rules to limit which authenticated users can read and write which parts of your database, so that it's not just being accessed without limit from anywhere on the internet.

Operation counts of Algolia free plan

100,000, then paused until renewal
Algolia's pricing page contains a cell like this in its free plan.
Does it mean that after 100,000 operations customers should switch to a paid plan, or does it reset monthly just like the others?
If your app reaches the operation limit on a free plan, it won't be able to perform any extra calls until the next renewal.
Thanks for your question, we should give more details on our pricing page.
They no longer have a limit.
We don’t count indexing operations towards your usage for users on our current (starting July 2020) Free, Standard, and Premium plans. However, to prevent extreme numbers of operations impacting your cluster, we implemented a protective limit of 10,000 indexing operations per unit.

Save Followers count in a field or query each time if needed [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I want to create an app like Twitter. Now I have a question about this project's database architecture. I want to show each user's Followers/Following count in their profile, like Twitter, but I don't know whether I have to query the Followers/Followings table/collection every time, or whether these values can be two small separate fields in the user record. Querying every time would definitely take a lot of time and add database overhead. On the other hand, if I save them in two fields per user, then whenever there is a change I have to do 2 actions: modify the Followers or Followings table and these two fields in the user record. My database will be huge, with a very large amount of data.
Which approach is good and standard?
Well, if you want to know what is right, there is only one answer.
Each of the separate fields in the user record contains derived data (data that can be easily derived via a query). Therefore it constitutes a duplication. Therefore it fails Normalisation.
The consequence of failed Normalisation is, you have an Update Anomaly. You no longer have One Fact in One Place, you have one fact in two places. And you have to update them every time one fact changes, every time the Followers/Followed per User changes. Within a Transaction.
That isn't a "trade-off" against performance concerns, that is a crime. When the fact in two places gets "out of synch", your crimes will be exposed. You will have to re-visit the app and database and perform some hard labour to make amends. And you may have to do that several times. Until you remove the causative problem.
Performance
As for the load on the database, if your application is serious, and you expect to be in business next year, get a real SQL platform.
Population or load for this requirement is simply not an issue on a commercial platform. You always get what you pay for, so pay something of value, and get something of value.
Note that if you have millions of Users, that does not mean you have millions of Followers per User. Note also that your files will be indexed, so you will not chase down 16 million Users to count 25 Followers; your index will let you identify 25 Followers in at most 25 index rows, in very few pages. This kind of concern simply does not exist on a commercial platform; it is the concern of people with no platform.
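The indexed-count argument can be demonstrated with any SQL engine. A minimal sketch using Python's built-in sqlite3 (table and column names are illustrative, not from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE follows (follower_id INTEGER, followed_id INTEGER)")
# The index lets the engine count a user's followers from a handful of
# index pages instead of scanning the whole table.
con.execute("CREATE INDEX ix_followed ON follows (followed_id)")

# 25 followers for user 1, plus noise rows for another user.
con.executemany(
    "INSERT INTO follows VALUES (?, ?)",
    [(i, 1) for i in range(100, 125)] + [(i, 2) for i in range(200, 210)],
)

# One Fact in One Place: the count is derived on demand, never stored.
(count,) = con.execute(
    "SELECT COUNT(*) FROM follows WHERE followed_id = ?", (1,)
).fetchone()
print(count)  # 25
```

Because the count is always derived from the one table that records the facts, there is nothing to fall out of sync.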
Well, it depends: who is it for?
If it's for your users, so they can see how many followers they have, I would do this Twitter API call only when the user logs in to your service.
If for some reason it must be done for all users, I think the best way would be to do this followers-count API call, for example, once an hour, every second hour, or just daily. This could be achieved by a script that runs in cron.
Do you really need the followers or just the followers count? Or both?
If both, you can request a Twitter user's followers and limit it to 100 (if your cron runs every minute to every fifteen minutes). Then loop those follower IDs against your database and keep inserting them until there is a match. Twitter returns the newest follower IDs first by default, so this is possible at the moment.
Just remember that you can make only 15 requests per user token against the Twitter API when requesting followers. This limit can vary between different endpoints.
Good to mention that I assumed you are getting only follower IDs; those you can get 5,000 at a time. If you want to request full follower objects, the limit is only 200 per request.
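The insert-until-match loop described above might look like this (a sketch; `get_newest_follower_ids` is a hypothetical stand-in for Twitter's followers/ids call, which returns the newest IDs first):

```python
known = {101, 102, 103}  # follower IDs already stored in our database

def get_newest_follower_ids():
    """Stand-in for Twitter's followers/ids endpoint, which returns
    the newest followers first (up to 5,000 IDs per request)."""
    return [106, 105, 104, 103, 102, 101]

def sync_new_followers(store):
    """Insert the newest IDs until we hit one we already have."""
    added = []
    for follower_id in get_newest_follower_ids():
        if follower_id in store:
            break  # everything after this point is already stored
        store.add(follower_id)
        added.append(follower_id)
    return added

new_ids = sync_new_followers(known)
print(new_ids)  # [106, 105, 104]
```

Stopping at the first already-known ID is what keeps each cron run cheap: you only ever process the followers gained since the previous run.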
Hope this helps :D

Google Cloud SQL Pricing

I am an avid user of Amazon AWS, but I am not sure how RDS compares to Google's Cloud SQL. On the Cloud SQL site it is mentioned that a Per Use Billing Plan exists.
How is that calculated? It says you are 'charged for periods of continuous use, rounded up to the nearest hour'.
How does that work? If there are no visitors to my site, there are no charges, right? What if I have 100 continuous users for 30 days? Will I still be billed $0.025 per hour (excluding the network usage charges)?
How do I upload my present SQL database to Google Cloud service? Is it the same way as Amazon using Oracle Workbench?
Thank you
Using per-use billing, if your database isn't accessed for 15 minutes, it is taken offline and you are only charged for data storage ($0.24 per GB per month). It's brought back online the next time it's accessed, which typically takes around a second for a D1 instance. The number of users doesn't affect the charge: you are charged for the database instance, not per user.
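A quick sanity check of the questioner's scenario, taking the $0.025/hour D1 per-use rate from the question and assuming a 30-day month of continuous access:

```python
HOURLY_RATE = 0.025   # D1 instance, per-use billing, USD per hour
HOURS = 24 * 30       # continuously accessed for a 30-day month

# Per-instance charge; the 100 users don't multiply the cost.
monthly_cost = HOURLY_RATE * HOURS
print(f"${monthly_cost:.2f}")  # $18.00
```

Whether 1 user or 100 keep the instance busy, the instance-hours billed are the same.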
More details here
https://developers.google.com/cloud-sql/faq#how_usage_calculated
More information on importing data here:
https://developers.google.com/cloud-sql/docs/import-export
For Google Cloud SQL, I think we need to differentiate between the MySQL 1st generation and the 2nd generation. The FAQ link in Joe Faith's answer, https://developers.google.com/cloud-sql/faq#how_usage_calculated, is about the 1st generation with an activation policy of ON_DEMAND, meaning that you are charged per minute of usage.
However, with MySQL 2nd generation (as answered by Se Song), you are charged for every minute (24 hours per day) regardless of whether you have active connections or not. The reason is that it uses an instance with activation policy = ALWAYS. You can read more pricing details here: https://cloud.google.com/sql/pricing/#2nd-gen-pricing
You can manually stop and restart your database instance, and hence it could be possible to write a script that activates it under particular circumstances, but this is not provided within GCP's features.
Watch the default settings carefully or you risk $350/month fees. Here's my experience: https://medium.com/@the-bumbling-developer/can-you-use-google-cloud-platform-gcp-cheaply-and-safely-86284e04b332