Can we modify block gas limit in Kaleido?

I am trying to deploy a smart contract on a Kaleido Ethereum consortium, but I am getting an error:
Failed to deploy the smart contract. Error: Error: Returned error: exceeds block gas limit

Kaleido does not yet provide the ability to customise the block gas limit. The --targetgaslimit flag on the nodes is unset, so you will see the default of 4712388.
Have you estimated the gas needed to successfully install your contract?
The 'exceeds block gas limit' error will be returned if you request more than 4712388 when submitting your transaction, regardless of whether the installation would have taken less than that value.
So it's worth checking whether it still fails when you request exactly 4712388.
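As a sanity check, something like the following sketch (web3.js, with a hypothetical compiled artifact MyContract.json) estimates the deployment gas and caps the requested gas at the 4712388 default:

```typescript
import Web3 from "web3";

// Hypothetical compiled artifact; substitute your own ABI and bytecode.
const artifact = require("./MyContract.json");

const BLOCK_GAS_LIMIT = 4712388; // the default reported by Kaleido nodes

async function deploy(nodeUrl: string, from: string) {
  const web3 = new Web3(nodeUrl);
  const deployTx = new web3.eth.Contract(artifact.abi).deploy({ data: artifact.bytecode });

  // Ask the node how much gas the deployment actually needs.
  const estimate = await deployTx.estimateGas({ from });
  console.log(`estimated deployment gas: ${estimate}`);

  if (estimate > BLOCK_GAS_LIMIT) {
    throw new Error("Deployment needs more gas than the block gas limit allows");
  }

  // Never request more than 4712388: the node rejects based on the requested gas,
  // not the gas actually used.
  return deployTx.send({ from, gas: Math.min(estimate + 50000, BLOCK_GAS_LIMIT) });
}
```

If the estimate itself comes back above 4712388, the contract genuinely will not fit in a block and would need to be slimmed down or split.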

Related

custom program error: 0x3f metaplex candy machine createSetCollectionDuringMintInstruction

I have a metaplex candy machine and collection that I set up several weeks back. Minting worked initially but is now failing.
The error reported is
custom program error: 0x3f
which appears to come from the nested instruction to the metadata program. That instruction should be
set_and_verify_collection
readonly code: number = 0x3f;
readonly name: string = 'DataTypeMismatch';
It can be thrown from metadata deserialization.
https://github.com/metaplex-foundation/metaplex-program-library/blob/master/token-metadata/program/src/state/mod.rs
Deserialization is called for both the token metadata and the collection metadata.
I believe those are the only two places it could be thrown from in this method. AccountInfo is resolved for several accounts, but it is only deserialized into a typed entity, with size and type considerations, for those two entities.
Checking the metadata on the collection: it's present, and the length looks normal for Metaplex metadata accounts at 679 bytes.
Now, the metadata for the token being minted is not present because the tx failed. However, if I attempt a transaction without the 'SetCollectionDuringMint' instruction added, the tx succeeds.
Interesting. The metadata account for the token has zero bytes allocated.
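For reference, a sketch along these lines (using @solana/web3.js against devnet, with the published Token Metadata program id; the mint is passed in) is enough to check how many bytes are allocated to a metadata PDA:

```typescript
import { Connection, PublicKey, clusterApiUrl } from "@solana/web3.js";

// Published Metaplex Token Metadata program id.
const TOKEN_METADATA_PROGRAM_ID = new PublicKey(
  "metaqbxxUerdq28cj1RbAWkYQm3ybzjb6a8bt518x1s"
);

// Derive the metadata PDA for a mint and report how many bytes are allocated to it.
async function checkMetadataAccount(mint: PublicKey) {
  const connection = new Connection(clusterApiUrl("devnet"), "confirmed");

  const [metadataPda] = PublicKey.findProgramAddressSync(
    [Buffer.from("metadata"), TOKEN_METADATA_PROGRAM_ID.toBuffer(), mint.toBuffer()],
    TOKEN_METADATA_PROGRAM_ID
  );

  const info = await connection.getAccountInfo(metadataPda);
  if (!info) {
    console.log("metadata account does not exist");
    return;
  }
  // A fully initialised Metaplex metadata account is a few hundred bytes (679 here
  // for the collection); zero bytes means the metadata program never wrote to it.
  console.log(`metadata account has ${info.data.length} bytes allocated`);
}
```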
I don't recall this changing. In fact, if I go through my source history to older revisions, I've not been explicitly requesting to create the metadata account. I've simply been pre-allocating the account and calling mint nft on the candy machine.
Did the candy machine change to no longer automatically create the metadata account for the minted NFT?
Almost as soon as I finished typing up the question, it occurred to me what the likely cause was.
It came to my attention a few weeks back that this older v2 version of the candy machine does not actually halt transaction execution on constraint violations, but rather charges the client a fee for executing the transaction incorrectly.
It's likely the 'bot tax' protocol is suppressing the real error, which may be occurring earlier.
v3 of the candy machine has made this something you can disable, but we are a bit coupled to v2 at the moment.
Anyhow, what I think has happened here is that the bot-taxing version of the candy machine allowed the NFT to mint but didn't actually finish setting it up. Then the next instruction, set collection during mint, was unable to complete.
The real failure is earlier in the transaction, somewhere during the mint, where we no longer meet the mint criteria, and the old version of the candy machine is just charging us and failing silently.
Unfortunately, the root cause is still not clear. One other change that would have occurred between now and then is that the collection is now 'live', having passed the go-live date. I'll have to dig through the validation constraints and see if there are any bot-tax-related short circuits tied to this go-live transition.
UPDATE: Looks like there were some changes specific to devnet's token metadata program, and my machine was affected. I'll need some new devnet machines.

How to do negative testing in Magento 2 for PayPal?

I'm trying to handle a 'payment denied' error from PayPal after the doCapture method.
https://developer.paypal.com/docs/api/sandbox/nt-classic/#test-api-error-handling-routines
You can force two types of API errors: those related to the transaction amount, and those not related to the amount.
To trigger an error condition on an amount-related field, specify an error code value as a number with two digits to the right of the decimal point. For example, specify a value of 107.55 to trigger the 10755 error.
To trigger errors on fields that are not amount-related, specify the error code in full. For example, use a value of 10539 to trigger a "payment declined" error.
How do I set the amount to trigger the ‘payment declined’ error in Magento 2?
Any advice.
Thanks.
Documentation: Negative testing for PayPal's classic APIs
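To make the amount-to-error-code mapping concrete, here is a small illustrative sketch (not Magento code; in practice the resulting value is what you would use as the sandbox order total before doCapture is called):

```typescript
// Illustrative helpers only: they show how the sandbox maps a forced
// amount to a classic-API error code, per the documentation quoted above.

// Amount-related errors: 10755 is triggered by sending an amount of 107.55.
function amountForAmountRelatedError(errorCode: number): string {
  return (errorCode / 100).toFixed(2); // 10755 -> "107.55"
}

// Errors not related to the amount: send the error code itself as the value,
// e.g. 10539 to force the "payment declined" response.
function amountForNonAmountError(errorCode: number): string {
  return String(errorCode); // 10539 -> "10539"
}

console.log(amountForAmountRelatedError(10755)); // "107.55"
console.log(amountForNonAmountError(10539));     // "10539"
```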

Cloud Firestore bandwidth exhausted error

We are using Cloud Firestore as our database and are getting the following error when the rate of parallel reads from the database increases.
details: "Bandwidth exhausted"
message: "8 RESOURCE_EXHAUSTED: Bandwidth exhausted"
stack: "Error: 8 RESOURCE_EXHAUSTED: Bandwidth exhausted
at callErrorFromStatus (/usr/service/node_modules/@grpc/grpc-js/build/src/call.js:30:26)
at Http2CallStream.call.on (/usr/service/node_modules/@grpc/grpc-js/build/src/call.js:79:34)
at Http2CallStream.emit (events.js:198:15)
at process.nextTick (/usr/service/node_modules/@grpc/grpc-js/build/src/call-stream.js:100:22)
at processTicksAndRejections (internal/process/task_queues.js:79:9)"
We couldn't find what the rate limits are. Could you please let me know what the read rate limits are and in which cases Firestore returns the Bandwidth exhausted error?
Note: Billing is enabled in our project. The problem is we can't find what limit we are reaching.
The RESOURCE_EXHAUSTED error indicates that the project exceeded either its quota or the region/multi-region capacity, so your app is probably doing more reads than expected given what you described. You can check more details in the documentation.
You can check the free quotas and the standard limits, and the pricing for what exceeds those numbers, in the Firestore quotas and pricing documentation. It's important to note that, if you choose to allow your app to go further than the free quotas, you must enable billing for your Cloud Platform project.
You can also check how much of the quotas your app is actually using in the App Engine quotas section of the Cloud Console.
Hope this helps.
If you are reading all the data from Firestore, this issue can happen to you. I had the same problem reading all the data from Firestore; after a while, I figured out that if we stop the process and start a new one, we can get past the error and continue the job.
So I used a child process, and it helped me:
I wrote a parent script and a child script.
The parent script runs the child script as a child process.
The child goes through a collection until it gets the [8 RESOURCE_EXHAUSTED] error, then sends a message to the parent to inform it of the error.
The parent then kills the child, creates a new one, and tells it where to start reading again.
This solution works reliably, but it's a little advanced, and beginners to intermediates may not be able to implement it.
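A minimal sketch of that parent/child pattern (file names, collection name, batch size, and message shape are my assumptions, not taken from the gist below):

```typescript
// parent.ts: restarts the reader each time it reports RESOURCE_EXHAUSTED.
import { fork } from "child_process";

function startChild(startAfterId?: string) {
  const child = fork("./child.js", startAfterId ? [startAfterId] : []);
  child.on("message", (msg: any) => {
    // The child hit the error: kill it and resume after the last document it read.
    child.kill();
    startChild(msg.lastDocId);
  });
}

startChild();
```

```typescript
// child.ts: pages through a collection and reports progress to the parent on failure.
import * as admin from "firebase-admin";

admin.initializeApp();
const db = admin.firestore();

async function readAll(startAfterId?: string) {
  let lastDocId = startAfterId;
  try {
    while (true) {
      let query = db
        .collection("items") // assumed collection name
        .orderBy(admin.firestore.FieldPath.documentId())
        .limit(500);
      if (lastDocId) query = query.startAfter(lastDocId);

      const snap = await query.get();
      if (snap.empty) process.exit(0); // the whole collection has been read

      snap.forEach((doc) => {
        // ...process doc.data() here...
        lastDocId = doc.id;
      });
    }
  } catch (err: any) {
    // gRPC code 8 is RESOURCE_EXHAUSTED; tell the parent where to resume.
    if (err.code === 8 && process.send) process.send({ lastDocId });
    else throw err;
  }
}

readAll(process.argv[2]);
```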
Update:
I have written complete instructions for this issue in a GitHub Gist; you can check it here:
https://gist.github.com/navidshad/973e9c594a63838d1ebb8f2c2495cf87

Cloud SQL API Explorer, settingsVersion

I'm getting familiarized with the Cloud SQL API (v1beta1). I'm trying to update authorizedNetworks (sql.instances.update) and I'm using the API Explorer. I think my request body is alright except for 'settingsVersion'. According to the docs it should be:
The version of instance settings. This is a required field for update
method to make sure concurrent updates are handled properly. During
update, use the most recent settingsVersion value for this instance
and do not try to update this value.
Source: https://developers.google.com/cloud-sql/docs/admin-api/v1beta3/instances/update
I have not found anything useful related to settingsVersion. When I try different strings, instead of receiving 200 and the response, I get 400 and:
"message": "Invalid value for: Expected a signed long, got '' (class
java.lang.String)"
If I insert a random number, I get 412 (Precondition Failed) and:
"message": "Condition does not match."
Where do I obtain settingsVersion, and what is a signed long string?
You should do a GET operation on your instance and fetch the current settings; those settings will contain the current version number, and you should use that value.
This is done to avoid unintentional settings overwrites.
For example, suppose two people get the current instance status, which has version 1, and they both try to change something different (for example, one wants to change the tier and the other wants to change the pricingPlan) via an Update operation. The second one to send the request would undo the change of the first one if the operation were permitted. However, since the version number is increased every time an update operation is performed, once the first person updates the instance, the second person's request will fail because the version number no longer matches.
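A sketch of that read-modify-write cycle with the Node googleapis client (this assumes the current v1beta4 surface rather than the v1beta1/v1beta3 explorer in the question; project, instance, and network values are placeholders):

```typescript
import { google } from "googleapis";

async function addAuthorizedNetwork(project: string, instance: string, cidr: string) {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/sqlservice.admin"],
  });
  const sql = google.sqladmin({ version: "v1beta4", auth });

  // 1. GET the instance to obtain the current settings, including settingsVersion
  //    (a 64-bit integer, i.e. the "signed long" in the error message).
  const { data: inst } = await sql.instances.get({ project, instance });
  const settings = inst.settings!;

  // 2. Change what you need, but send back the settingsVersion you just read.
  settings.ipConfiguration = settings.ipConfiguration || {};
  settings.ipConfiguration.authorizedNetworks = [
    ...(settings.ipConfiguration.authorizedNetworks || []),
    { value: cidr },
  ];

  // 3. PATCH with that unchanged settingsVersion; if someone else updated the
  //    instance in the meantime, the API returns 412 Precondition Failed and
  //    you simply re-read and retry.
  await sql.instances.patch({ project, instance, requestBody: { settings } });
}
```

Calling it is just addAuthorizedNetwork("my-project", "my-instance", "203.0.113.0/24") with your own values.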

Windows Workflow Correlation with Workflow Services

I have a locally hosted Windows Workflow (4.5) site running on App Fabric. The workflow is very simple at present, consisting of two workflow services: the first saves correlation based on a GUID (DB-generated), and the second just receives the same object with the GUID in it and is set to retrieve the GUID for correlation.
My problem is that the second part of my workflow is apparently not being called. The calling site returns with this error:
The operation did not complete within the allotted timeout of 00:01:00. The time allotted to this operation may have been a portion of a longer timeout.
Now, what is puzzling is that if I intentionally put in a correlation id (GUID) that is not correct, the workflow returns saying there is no matching process. When the correlation identifier is correct, it times out.
The correlation keys are the same at both points, the workflow is in an idle state in IIS and App Fabric, and App Fabric has the above error logged.
Any ideas?