Issues Deploying to Polygon Mainnet

I am having issues deploying to Polygon mainnet. I can deploy easily to the Polygon testnet but have yet to successfully deploy to mainnet. I have tried Truffle, Hardhat, and Remix, with various errors. Most errors are either gas related or involve getting null back when querying blocks in Truffle. When I increase the gas, I then get timeouts or null returns when querying blocks during deployment. I can't even deploy a 20-line tutorial contract.
Is anyone else having issues, and is this a known problem? I can find nothing on the web about it. I can provide additional information, but I think this is a Polygon network issue and I'm trying to confirm that.

Manually edit the gas fee to a higher value in your MetaMask wallet during deployment and you will be able to deploy the smart contract on the Polygon network.
MetaMask gives you options, so I chose the higher-priority gas fee and it worked. No more issues.
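If you are deploying from Hardhat rather than through MetaMask/Remix, the equivalent fix is to pin a higher gas price in the network config. A minimal sketch, assuming a public RPC endpoint and a PRIVATE_KEY environment variable (both are placeholders, not values from the question):

    // hardhat.config.ts (sketch): the RPC URL and gas price are assumptions
    import { HardhatUserConfig } from "hardhat/config";

    const config: HardhatUserConfig = {
      solidity: "0.8.17",
      networks: {
        polygon: {
          url: "https://polygon-rpc.com", // any Polygon mainnet RPC endpoint
          chainId: 137,
          gasPrice: 100_000_000_000, // 100 gwei; raise it if txs still stall
          accounts: process.env.PRIVATE_KEY ? [process.env.PRIVATE_KEY] : [],
        },
      },
    };

    export default config;

Pinning gasPrice skips the node's gas price estimation, which is effectively the same fix the manual MetaMask edit applies.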


How To Get Email Notification when Firebase Cloud Function Exceeds Certain Invocation Threshold

I am developing a Flutter application and using Firebase Cloud Functions as a backend service. Due to a bug, my Flutter application was making infinite requests to an HTTP cloud function. The invocations would only stop when I disposed the screen. I was able to detect this bug thanks to the print logs.
I have resolved the issue, but it led me to search for a service or a way to be notified when my Cloud Function is called too many times in a certain period. For example, if my Cloud Function is called 100 times in under 20 seconds, I should be notified via email so that I can fix the issue. The application is currently in development, so I want to keep the threshold as low as possible to detect problems quickly and avoid expensive bills.
I have done my own research but cannot seem to find a proper answer. I have found Google Cloud Monitoring, but I don't understand it properly. The documentation is quite complex, and I cannot find any tutorials on YouTube or answers on SO.
I tried to create an alert policy in Google Cloud Monitoring using the 'metrics', but it did not work because I do not have a good understanding of the platform.
Can anyone help me solve this issue? I would like a step-by-step solution to the problem.
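For reference, the alert policy described above would target the built-in Cloud Functions invocation metric. Below is a hypothetical sketch of such a policy; the display names, threshold, and notification channel ID are made up, and since Cloud Monitoring aggregates over windows of at least 60 seconds, "100 calls in 20 seconds" has to be approximated as a per-minute threshold:

    {
      "displayName": "Cloud Function invocation spike",
      "combiner": "OR",
      "conditions": [
        {
          "displayName": "execution_count > 100 per minute",
          "conditionThreshold": {
            "filter": "metric.type=\"cloudfunctions.googleapis.com/function/execution_count\" AND resource.type=\"cloud_function\"",
            "aggregations": [
              { "alignmentPeriod": "60s", "perSeriesAligner": "ALIGN_SUM" }
            ],
            "comparison": "COMPARISON_GT",
            "thresholdValue": 100,
            "duration": "0s"
          }
        }
      ],
      "notificationChannels": [
        "projects/MY_PROJECT/notificationChannels/EMAIL_CHANNEL_ID"
      ]
    }

Saved as alert-policy.json, it could be created with gcloud alpha monitoring policies create --policy-from-file=alert-policy.json, assuming an email notification channel has already been set up under Monitoring > Alerting.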

Error using the connection to database when RDS scales out

We have a .NET API hosted in ECS that queries data from an Aurora Serverless v1 cluster using Entity Framework. Under normal load this service performs very well, but when a large spike in traffic requires the RDS cluster to scale out to more ACUs, we see a lot of connection errors in our API:
An error occurred using the connection to database '"ourdatabasename"' on server '"tcp://ourcluster.region.rds.amazonaws.com:5432"'.
The high-level overview of the infrastructure looks like this:
CloudFront >> Load Balancer >> ECS Fargate >> RDS Aurora PostgreSQL Serverless v1
Stack information:
.Net 6 API compiled for Linux
Entity Framework Core 6.x
Npgsql.EntityFrameworkCore.PostgreSQL 6.x
PostgreSQL 10.18
We did open AWS support cases about this issue over the past year, but they basically always resulted in the answer that this is an implementation issue, not an infrastructure issue.
We can easily reproduce the issue by running a k6 stress test on our API (bypassing the CloudFront caching layer, of course) to generate a spike high enough to trigger scaling of the RDS cluster.
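For context, the spike generator is nothing special; a minimal k6 sketch along these lines (the URL, rates, and durations are placeholders, not our real values):

    // spike.js (k6 sketch); the URL and rates are placeholders
    import http from "k6/http";

    export const options = {
      scenarios: {
        spike: {
          executor: "ramping-arrival-rate", // open model: keeps pushing requests regardless of response times
          startRate: 50,
          timeUnit: "1s",
          preAllocatedVUs: 500,
          stages: [
            { target: 400, duration: "2m" }, // ramp to ~400 rps, enough to trigger ACU scaling
            { target: 400, duration: "5m" }, // hold while the cluster scales out
          ],
        },
      },
    };

    export default function () {
      // hit the load balancer hostname directly so the CloudFront cache is bypassed
      http.get("https://alb.internal.example.com/api/endpoint");
    }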
For the past year we have worked around this issue by configuring RDS at a capacity at which it basically never needs to scale out. This is of course wasting money, and not the purpose of serverless at all, so we would like to find the underlying root cause and solve it.
Some things we have tried already:
We have experimented with Serverless v2, which should scale in a completely different fashion, as it's the same VM consuming more resources from the host machine. But our preliminary conclusion is that it was even worse. We do not yet understand why, but it appears to trigger the same effect, only faster and harder, since v2 scales a lot faster and further. With v1 we get into trouble around 400 requests per second; with v2 it was at 150 rps.
EnableRetryOnFailure seemed to help a tiny bit, but not a lot. We have left it at the default configuration as implemented by Npgsql for now.
We have experimented with the Maximum Pool Size connection string parameter (see the sketch after this list). At 300 it appears to be a bit better, but it does not solve the issue.
Changing the scaling behaviour of ECS/the ALB, or even pre-scaling them to handle peak load, did not change anything.
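For completeness, the pool size and timeout knobs mentioned above are plain Npgsql connection string parameters; ours looks roughly like this (host, database, and values are placeholders):

    Host=ourcluster.region.rds.amazonaws.com;Port=5432;Database=ourdatabasename;Username=api_user;Password=<secret>;Maximum Pool Size=300;Timeout=30;Connection Idle Lifetime=60

Connection Idle Lifetime controls how quickly idle pooled connections are pruned, which may matter when the server side drops connections during a scale event.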
We have not tried:
RDS Proxy, which is supposed to solve all your connection pooling issues. But we're not sure it's even a pooling issue, and we're not keen on trusting yet another black-box service to solve the issues our first black-box service (Aurora Serverless) has. It's also not exactly cheap. If all of SO now convinces us that this is the holy grail, then surely we'll try it out.
The Data API for RDS: you can't have connection management issues if you're not making connections, right? But it's a huge investment to rewrite all the EF code as Data API requests, and I'm not sure what it says about the service that it's still not available for Serverless v2. So, not for now, I think.
The first purpose of this question here on SO is to find someone who can help us understand what is even going on: what the error means and where it comes from. We understand that you cannot expect ECS + RDS to just magically handle any load you throw at it. But if we do not fully understand how it breaks, we cannot come up with potential failover mechanisms or ways to make the system fail more gracefully.
If someone knows the magic setting but not the why, that's also great of course :) We can then maybe figure out the why ourselves and share it back with the community ;)
Feel free to ask more questions where needed.

Import a custom Linux image for the Power IaaS part of the IBM Cloud?

I am trying to import a cloud-enabled Debian Linux image for the Power architecture to run on the IBM public cloud, which supports this architecture.
I think I am following the instructions, but the behavior I am seeing is that, at image-import time, after filling in all the relevant information, when I hit the "Import" button, the GUI just exits silently, with no apparent effect and no reported error.
I am reasonably experienced doing simple IaaS work on AWS, but am new to the IBM Cloud and have not deployed a custom image on any cloud provider. I'm aware of cloud-init and have a reasonable general knowledge of the problem it solves (mapping cloud-provider metadata to config entries in the resulting VM at start time), but not a great deal about how it actually works.
What I have done is:
Got an IBM cloud account, and upgraded out of the free tier, for access to Power.
Activated the Power Systems Virtual Server service.
Activated the Cloud Object Storage service.
Created a bucket in the COS.
Created an HMAC-enabled service credential for this bucket.
Uploaded my image, in .tar.gz format, to the bucket (via the CLI; it's too big to upload through the GUI).
The image is from here -- that page is a bit vague about which cloud providers it can be expected to work with, but AFAIK the IBM Cloud is the only public cloud supporting Power?
Then, from the Power Systems Virtual Server service page, I clicked the "Boot Images" item on the left to show the (empty) list, then "Import Image" at the top of the list, and filled in the form. I have answers for all of the entries: I can make up a new name, and I know the region of my COS, the image file name (the "key", in object-storage parlance), the bucket name, and the access and secret keys, which are available from the credential description in the COS panel.
Then the "Import" button lights up, I click it, the import dialog disappears, no error is reported, and no image is imported.
There are various things that might be wrong that I'm not sure how to investigate.
It's possible the credential is not connected to the bucket in the right way; I didn't really understand the documentation about that, but in the GUI it looks like it's in the right scope and has the right data in it.
It's also possible that only certain types of images are allowed, and my image is failing some kind of validation check, but in that case I would expect an error message?
I have found the image-importing instructions for the non-Power IaaS side of the IBM Cloud, but they seem to be out of scope here. I have also found some docs on how to prepare a custom image, but they also seem to be non-Power IaaS.
What's the right way to do this?
Edit to add: I also tried doing this via the CLI ("ibmcloud pi image-import"), where it gets a timeout, apparently on the endpoint that's supposed to receive the image. Also, the command-line tool has an --os-type flag that apparently only takes [aix | sles | redhat | ibmi]; my first attempt used raw, which is an error.
This is perhaps additional evidence that what I want to do is actually impossible?
PowerVS supports only .ova images. Those are not the same as the ones supported by VMware, for instance.
You can get images from here: https://public.dhe.ibm.com/software/server/powervs/images/
Or you can use the images available in the regional pool of images:
ibmcloud pi image-list-catalog
Once you have your first VM up and running, you can use https://github.com/ppc64le-cloud/pvsadm to create a new .ova. Today the tool only supports RHEL, CentOS, and CoreOS.
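As a rough sketch of that pvsadm route (the subcommand and flags below are from memory of the tool's docs; the image name and source file are placeholders, so check pvsadm --help before relying on this):

    # convert a qcow2 into the .ova format PowerVS expects (RHEL/CentOS/CoreOS only)
    pvsadm image qcow2ova --image-name centos-base --image-url ./CentOS-8.qcow2 --image-dist centos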
If you want to easily play with PowerVS you can also use https://github.com/rpsene/powervs-actions.

Creating Actions for Personal use only

My house has a home automation system from the 1960s that I have managed to tap into. I've been able to set up an interface which allows me to write adapters for various technologies such as Node-RED, Alexa, and now Google Assistant.
Given that this will only ever work with my house, I see no reason to make my Smart Home Actions public. On Alexa's side, I can let these services stay in a development state indefinitely, which has worked great for the last 6 months. On Google's side, however, the FAQ (https://developers.google.com/actions/smarthome/faq) says:
Q: How often do I need to run gactions test?
A: gactions test needs to be refreshed every 3 days. After 3 days the test agent will disappear from the mobile Home Control settings. If you run into this, just run gactions test again.
Therefore, I was wondering what the best way is to make a PERSONAL Google Actions service. Of course, the obvious method would be to script and schedule the gactions call to keep testing alive, but I would hope there is a better way to support this!
Additional details: I'm using Amazon's OAuth service for sign-in. This way, I can validate the Amazon ClientID, UserID, etc. through the AccessToken Google passes in for authorization. Therefore, I could theoretically run this publicly without any issues, but I would need to figure out how Google could review it for testing purposes! I don't need some Google employee turning my lights on and off while the Google Maps car drives by to verify the change... ;)
I would just use a script to call gactions periodically.
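A crontab entry along these lines keeps the test agent alive by refreshing it every two days, comfortably inside the 3-day expiry window (the paths, action package, and project ID are placeholders):

    # re-run gactions test every 2nd day at 03:00
    0 3 */2 * * cd /home/me/smarthome-action && ./gactions test --action_package action.json --project my-project-id >> gactions.log 2>&1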
Publishing it would unnecessarily pollute the Actions directory. Also, they'll make you jump through hoops for "brand verification" and the other restrictions they have on naming invocation terms.
If you did publish it, you would give them a temporary account for verification purposes and disable that account once published. They would be randomly controlling the lights during the verification period, though, which can last up to a week!

Why am I being charged on the very first day of Bluemix trial?

I signed up for Bluemix, so I am on a trial account.
I have started learning Kitura with this tutorial: https://developer.ibm.com/swift/2016/07/06/tutorial-deploying-a-swift-to-do-list-app-to-the-cloud/#comment-2218
I uploaded the files several times to get the server to run (the tutorial actually gave me the wrong link for the files).
Now it works, but I see my Runtime Cost is $289.
I have not added any support plan.
Although I have not put in my credit card info yet, is that what is going to be charged after the trial, or charged every month?
Why am I being charged anyway? Nearly $300 is too high for testing a server.
Would you explain the Runtime Cost that I am currently being charged, please?
Bluemix provides a cost calculator that lets you trace what you would pay for services, containers, and VMs. In your case, since you have a trial account, the figure you see is only an estimate of what you would pay.