Cloud Firestore bandwidth exhausted error - google-cloud-firestore

We are using Cloud Firestore as our database and get the following error when the rate of parallel reads from the database increases.
details: "Bandwidth exhausted"
message: "8 RESOURCE_EXHAUSTED: Bandwidth exhausted"
stack: "Error: 8 RESOURCE_EXHAUSTED: Bandwidth exhausted
at callErrorFromStatus (/usr/service/node_modules/@grpc/grpc-js/build/src/call.js:30:26)
at Http2CallStream.call.on (/usr/service/node_modules/@grpc/grpc-js/build/src/call.js:79:34)
at Http2CallStream.emit (events.js:198:15)
at process.nextTick (/usr/service/node_modules/@grpc/grpc-js/build/src/call-stream.js:100:22)
at processTicksAndRejections (internal/process/task_queues.js:79:9)"
We couldn't find what the rate limits are. Could you please let me know what the read rate limits are and in which cases Firestore returns the Bandwidth exhausted error?
Note: Billing is enabled in our project. The problem is that we can't find which limit we are reaching.

The RESOURCE_EXHAUSTED error indicates that the project exceeded either its quota or the region/multi-region capacity, so your app is probably doing more reads than expected given what you described. You can find more details in this documentation.
You can check the free quotas and the standard limits at this link, and the pricing for usage beyond those numbers at this link. Note that if you want to allow your app to go beyond the free quotas, you must enable billing for your Cloud Platform project; here is a how-to.
You can also check how much of these quotas your app is actually using in the App Engine quotas section.
Hope this helps.

If you are reading all the data from Firestore, this issue can happen to you. I had the same problem when reading all the data from Firestore; after a while I figured out that if we stop the process and start a new one, we can get past the error and continue the job.
So I used a child process, and it helped me:
I wrote a parent script and a child script.
The parent script runs the child script as a child process.
The child goes through a collection until it gets the [8 RESOURCE_EXHAUSTED] error, then sends a message to the parent to inform it of the error.
The parent then kills the child, creates a new one, and tells it where to start reading again, as sketched below.
This solution works, but it is a little advanced, and beginners or intermediate developers may have trouble implementing it.
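Here is a minimal sketch of that idea in Node.js. The file names, the collection name ('my-collection'), the page size, and the message shapes are assumptions made for illustration; error code 8 is the gRPC RESOURCE_EXHAUSTED code from the stack trace above.

// parent.js - forks the child and restarts it whenever it reports RESOURCE_EXHAUSTED
const { fork } = require('child_process');

function runChild(startAfterId) {
  const child = fork('./child.js', startAfterId ? [startAfterId] : []);
  child.on('message', (msg) => {
    if (msg.type === 'resource-exhausted') {
      child.kill();
      runChild(msg.lastDocId); // start a fresh process where the old one stopped
    } else if (msg.type === 'done') {
      child.kill();
      console.log('Finished reading the collection.');
    }
  });
}

runChild();

// child.js - pages through the collection and reports back when error 8 occurs
const admin = require('firebase-admin');
admin.initializeApp(); // assumes GOOGLE_APPLICATION_CREDENTIALS is set

const db = admin.firestore();
const startAfterId = process.argv[2];

async function readAll() {
  let lastDocId = startAfterId;
  try {
    while (true) {
      let query = db.collection('my-collection')
        .orderBy(admin.firestore.FieldPath.documentId())
        .limit(500);
      if (lastDocId) query = query.startAfter(lastDocId);

      const snap = await query.get();
      if (snap.empty) break;

      snap.docs.forEach((doc) => {
        // process doc.data() here
      });
      lastDocId = snap.docs[snap.docs.length - 1].id;
    }
    process.send({ type: 'done' });
  } catch (err) {
    if (err.code === 8) { // 8 = RESOURCE_EXHAUSTED
      process.send({ type: 'resource-exhausted', lastDocId });
    } else {
      throw err;
    }
  }
}

readAll();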
Update:
I have written complete instructions for this issue in a GitHub Gist; you can check it here:
https://gist.github.com/navidshad/973e9c594a63838d1ebb8f2c2495cf87

Related

custom program error: 0x3f metaplex candy machine createSetCollectionDuringMintInstruction

I have a metaplex candy machine and collection that I set up several weeks back. Minting worked initially but is now failing.
The error reported is
custom program error: 0x3f
This appears to come from the nested instruction to the metadata program, which should be
set_and_verify_collection
readonly code: number = 0x3f;
readonly name: string = 'DataTypeMismatch';
It can be thrown from metadata deserialization.
https://github.com/metaplex-foundation/metaplex-program-library/blob/master/token-metadata/program/src/state/mod.rs
This is called for both the token metadata and the collection metadata.
I believe those are the only two places it would be thrown from in this method. AccountInfo is resolved for several accounts but it's only deserialized into a typed entity, with size and type considerations for those two entities.
Checking the metadata on the collection, it's present and the length looks normal for Metaplex metadata accounts at 679 bytes.
Now the metadata for the token being minted is not present because the tx failed. However, if I attempt a transaction without the 'SetCollectionDuringMint' instruction added, the tx succeeds.
Interesting. The metadata account for the token has zero bytes allocated.
I don't recall this changing. In fact, if I go through my source history to older revisions, I've not been explicitly requesting to create the metadata account. I've simply been pre-allocating the account and calling mint nft on the candy machine.
Did the candy machine change to no longer automatically create the metadata account for the minted NFT?
Almost as soon as I finished typing up the question, it occurred to me what the likely cause was.
It came to my attention a few weeks back that this older v2 version of the candy machine does not actually halt transaction execution on constraint violations, but rather charges the client a fee for executing the transaction incorrectly.
It's likely the 'bot tax' protocol is allowing the real error, which may be occurring earlier, to be suppressed.
v3 of the candy machine has made this something you can disable, but we are a bit coupled to v2 at the moment.
Anyhow, what I think has happened here is that the bot-taxing version of the candy machine allowed the NFT to mint but didn't actually finish setting it up. Then the next instruction, set collection during mint, was unable to complete.
The real failure is earlier in the transaction, somewhere during the mint, where we no longer meet the mint criteria, and the old version of the candy machine is just charging us and failing silently.
Unfortunately, the root cause is still not clear. One other change that would have occurred between now and then is that the collection is now 'live', having passed the go-live date. I'll have to dig through the validation constraints and see if there are any bot-tax-related short circuits tied to this go-live transition.
EDIT/UPDATE: It looks like there were some changes specific to devnet's token metadata program, and my machine was affected. I'll need some new devnet machines.

Cosmos DB request errors after migration

I migrated data from MongoDB 3.4 to Azure Cosmos DB with the help of the Azure Database Migration Service. All collections were copied. Then I deployed the app and ran a report inside the application. I was receiving errors in k8s like:
[report-srv-8a49370c7976028acfc037b7b9b69a37b34b8afezmg5r] 2020-09-17T14:12:27.653Z ERROR: [handleControllerHeart] Error handling heart: {"err":{"driver":true,"name":"MongoError","index":0,"code":16500}}
Error=16500, RetryAfterMs=5481, Details='Response status code does not
indicate success: TooManyRequests (429); Substatus: 3200; ActivityId:
********; Reason: ({\r\n "Errors": [\r\n "Request rate is large. More Request Units may be needed, so no changes were made.
Please retry this request later. Learn more:
http://aka.ms/cosmosdb-error-429
Then I increased the RUs, but saw the same behavior.
Does anybody have experience with migrating from MongoDB 3.4 to Azure Cosmos DB?
You need to increase the throughput, aka RUs (Request Units). You can do that from here, and see how much you are already using from here. Maybe double it; then, from the dashboard as before, check how much you used when you ran your report and adjust to what you need.
In the end we created indexes in each collection, which made it possible to decrease the shared RUs.
Increasing the RUs also helped, but the queries were very slow.
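For reference, here is a minimal sketch of creating such an index with the Node.js MongoDB driver. The connection string, database name, collection name, and field names are hypothetical; index the fields your report actually filters and sorts on.

const { MongoClient } = require('mongodb');

async function createReportIndexes(uri) {
  const client = await MongoClient.connect(uri);
  try {
    const db = client.db('mydb'); // hypothetical database name
    // Index the fields the report filters/sorts on, so Cosmos DB can answer the
    // queries without scanning whole collections (full scans consume many RUs).
    await db.collection('hearts').createIndex({ userId: 1, createdAt: -1 });
  } finally {
    await client.close();
  }
}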

Owin - Slow CompatibleWithModel call

I have this line of code within a request of an ApiController of an Azure Mobile App Web API:
var user = t.TrackDependency(() => context.Users.SingleOrDefault(x => x.Email == loginRequest.Id || x.Name == loginRequest.Id), "GetUser");
Here is the result from Application Insights:
We can see that while the line of code took 2613 ms, the actual query call to the database took 190 ms. While this is an edge case, it happens often enough to get users complaining about slow performance.
The thing is, I have no idea where the difference could come from. Note this is not due to a cold start; the app was warm when this exact call happened.
The second line is the actual call to the database endpoint. Before that it is not database related.
PS: the graph is from Application Insights. It captures the call to the database, and I add my own data through the TrackDependency method.
UPDATE
Today I got more data thanks to Application Insights sampling (great tool!).
Here are the results (this is not the exact request call instance but this is the same problem):
It clearly shows that context.Database.CompatibleWithModel(false) is the culprit. It is called by the call to InitializeDatabase of my custom implementation of IDatabaseInitializer. My custom initializer is set at Startup.
I found another unanswered question on SOF with the same issue
Why does it take so long?
InitializeDatabase is not always called, and I don't know when or why it is called.
I found another culprit:
Now we see that EntityConnection.Open is waiting on something. Are there locks on the connection? At this point the call to the database endpoint has still not been made, so we're still in Entity Framework here.
UPDATE 2
There are two issues in that post:
Why is CompatibleWithModel slow? There are many articles about startup time improvements; this is not addressed in this SOF question.
Why is EntityConnection.Open blocking? This is not related to Entity Framework but is a general issue with getting a connection, which takes up to 3 seconds if it has not been done within a 5-minute window. I raised that problem in a separate post.
Hence there are no more open questions in this post and it could be deleted, but it may still be useful as an analysis of tracking down lost time in Web API calls.

Setting up MongoDB environment requirements for Parse Server

I have my instance running and am able to connect remotely; however, I'm stuck on where to set this parameter to false, since it states that the default is true:
failIndexKeyTooLong
Setting 'failIndexKeyTooLong' is a three-step process:
1. Go to the command console in the Tools menu item for the admin database of your database instance. This command will only work on the admin database.
2. Once there, pick any command from the list and it will give you a short JSON text for that command.
3. Erase the command they provide (I chose 'ping') and enter the following JSON:
{
"setParameter" : 1,
"failIndexKeyTooLong" : false
}
Note if you are using a free plan at MongoLab: This will NOT work if you have a free plan; it only works with paid plans. If you have the free plan, you will not even see the admin database. HOWEVER, I contacted MongoLab and here is what they suggest:
Hello,
First of all, welcome to MongoLab. We'd be happy to help.
The failIndexKeyTooLong=false option is only necessary when your data
include indexed values that exceed the maximum key value length of
1024 bytes. This only occurs when Parse auto-indexes certain
collections, which can actually lead to incorrect query results. Parse
has updated their migration guide to include a bit more information
about this, here:
https://parse.com/docs/server/guide#database-why-do-i-need-to-set-failindexkeytoolong-false-
Chances are high that your migration will succeed without this
parameter being set. Can you please give that a try? If for any reason
it does fail, please let us know and we can help you on potential next
steps.
Our Dedicated and Shared Cluster plans
(https://mongolab.com/plans/pricing/) do provide the ability to toggle
this option, but because our free Sandbox plans are running on shared
server processes, with other Sandbox users, this parameter is not
configurable.
When launching your MongoDB server, you can set this parameter to false:
mongod --setParameter failIndexKeyTooLong=false
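If restarting the server is not an option, the same command can also be issued at runtime against the admin database. Here is a minimal sketch with the Node.js MongoDB driver, assuming your connection string and user have the necessary admin privileges (hosted sandbox plans, as noted above, may still reject it):

const { MongoClient } = require('mongodb');

async function disableFailIndexKeyTooLong(uri) {
  const client = await MongoClient.connect(uri);
  try {
    // Same command as the JSON above, issued against the admin database.
    await client.db('admin').command({ setParameter: 1, failIndexKeyTooLong: false });
  } finally {
    await client.close();
  }
}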
I have written an article that helps you set up Parse Server and all its dependencies on your own server:
https://medium.com/@jcminarro/run-parse-server-on-your-own-server-using-digitalocean-b2a7d66e1205

Neo4j: Cypher over REST get summary of operations

Is there any way, when using the REST API, to get a summary of the operations that have completed without returning the nodes?
When using the web admin console after doing an operation I get a summary like
1 node inserted
2 relationships inserted
1 node deleted.
In the examples here I notice there is no example of summary information sent back to the client. I would have to return the nodes inserted to know the insert had occurred.
When making a request over the network, it is often a good idea to minimize the response size. A quick summary would help with this. Is it possible to get one from the REST endpoint?
I'm pretty sure this is not possible. It would be a nice addition, though. Have you filed a feature request?