Recently I tried to deploy a smart contract on Mainnet, but I can't get it to deploy successfully.
Why, and how can I deploy a program on mainnet?
The current tools perform a fee check on every single message sent to deploy the program, which can make the deployment take much longer and cause many of the signed transactions to time out.
As a temporary workaround, if you're sure the funding account has enough SOL, you can use:
    solana program deploy --skip-fee-check <rest of your parameters here>
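For example, a full invocation might look like this (the program path below is a placeholder; substitute your own):

    # confirm the funding account actually has enough SOL first
    solana balance

    # deploy, skipping the per-message fee check
    solana program deploy --skip-fee-check target/deploy/my_program.so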
Related
I am the only developer (full-stack) in my company, and right now I have too much other work to automate the deployments. In the future we may hire a DevOps engineer for this.
Problem: We have 3 servers behind a load balancer. I don't want to block the 2nd and 3rd servers until the 1st server is updated (and then repeat the same for the 2nd and 3rd), because one server might initially receive huge traffic and fail at some point before the other servers go live.
                                Server 1
Users ----> Load Balancer ----> Server 2 ----> Database
                                Server 3
Personal opinion: Is there a way to pull the code by running scripts on the load balancer itself? I could replace the stock DigitalOcean load balancer with an Nginx server acting as a reverse proxy.
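By a reverse proxy I mean something along these lines in Nginx (the hostnames are placeholders):

    # inside the http {} context of nginx.conf
    upstream app_servers {
        server server1.internal;
        server server2.internal;
        server server3.internal;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;
        }
    }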
NOTE: I know there are plenty of other questions on Stack Overflow about this, but none of them answers my queries.
Solutions I know
Git hooks - I know a bit about Git hooks, but I don't want to use them, because if I commit to the master branch by mistake, it must not get synced to production and wreak havoc on the live server and live users.
Open multiple tabs, one per server, and do it manually (the current scenario). Believe me, it's a pain in the ass :)
Any suggestions or pointers to solutions would be really helpful. Thanks in advance.
One solution is to write an Ansible playbook for this. With Ansible you can specify that the play runs on one host at a time, and as the last step you can include a verification check that your application responds with status code 200, or that queries some endpoint indicating the application's status. If the check fails, Ansible stops the execution. In your case, for example: Server 1 deploys fine, but the deploy fails on Server 2; the playbook stops, and you still have Servers 1 and 3 running.
I have done it myself. Works fine in environments without continuous deployments.
Here is one example - a minimal sketch, where the repo URL, service name, and health endpoint are placeholders you would adapt:
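    # rolling-deploy.yml - update one server at a time, verify before moving on
    - hosts: webservers
      serial: 1                                   # one host per batch; a failed batch stops the play
      tasks:
        - name: Pull the latest code
          ansible.builtin.git:
            repo: "git@example.com:me/app.git"    # placeholder repo
            dest: /var/www/app
            version: master

        - name: Restart the application
          ansible.builtin.service:
            name: myapp                           # placeholder service name
            state: restarted

        - name: Verify the app answers with HTTP 200
          ansible.builtin.uri:
            url: "http://{{ inventory_hostname }}/health"   # placeholder endpoint
            status_code: 200
          register: health
          retries: 5
          delay: 10
          until: health.status == 200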
This question is purposefully general because I'm trying to understand things more from an architectural perspective, because that will impact which group I need to contact. My team is using Azure DevOps (cloud) with on-prem build agents. The agents connect to ADO via a proxy.
We use several tools in-house provided by vendors with ADO plugins in the Marketplace that require us to set up service connections. Because the services are installed on-prem, the endpoints we enter are not available via the Web (e.g. https://vendor-product.my-company.com).
If I log into the build machine and open up IE, I am able to connect to the service endpoint URL. However, whenever I try to run a task from ADO, it fails with some kind of connection-related issue ("The underlying connection was closed: An unexpected error occurred on a send", "Task ended with an exception: Error: read ECONNRESET", etc.).
The way I thought it worked, all the work takes place on the build machine itself, so the calls would be going from my-build-server.my-company.com to https://vendor-product.my-company.com. Those error messages though make me wonder if the connection is actually coming from https://dev.azure.com.
So the questions I have are:
For situations like this, is the connection to a service endpoint going to be seen as coming from my on-prem build agent, or from ADO (or does it vary based on how the vendor writes their plugin)?
If the answer to #1 is "it varies", is there any way for me to tell just from the plugin itself without having to contact the vendor? (In my experience some of the vendor reps don't understand how the cloud works.)
and/or
Because my build agent was configured to use a proxy when I set it up, is it going to use that proxy for all connections, even internal ones? I think I can set up a proxy bypass list for the agents but I presently only have read access to the build box. I can request temporary elevated access but I'd need some level of confidence that's what the issue is.
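(For reference, the bypass list I have in mind is the agent's .proxybypass file - as I understand the docs, it goes in the agent's root directory and contains one URL-matching regular expression per line, e.g.:

    vendor-product\.my-company\.com

I just haven't been able to try it with read-only access to the box.)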
Hope I explained the situation clearly, thanks in advance for any insight.
I am facing the problem below; I'd appreciate it if anyone can help.
I have a Jenkins job that triggers a Java jar containing code that reads an email address from an Excel file, and that same email address needs to be passed to Jenkins' username field for sending email.
Thanks
Did you take a look at parameterized jobs (if you want to trigger it manually)?
If you want to read from Excel and pass the value to another job, please take a look at this.
I'm trying to understand your problem:
1. Jenkins triggers emails to the DevOps team with the deployment task results
2. The application triggers emails that it was deployed successfully (or any other scenario you want to achieve; please indicate, and I will try to enhance this post)
If the above is the case, you should utilize Jenkins to handle the full process - build, test, deploy, and verify - then consolidate the results and send them out via email.
It's a clean split: Jenkins handles deployment, and the app handles business logic.
There are different ways to verify that your application deployed successfully, depending on how you define a successful deployment.
Jenkins can check those signals, e.g. send a ping or curl to the application and verify the response.
Now only Jenkins needs to know the list of email addresses for the deployment results; you can use parameterized jobs as @Avneesh Srivastava mentioned.
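As an illustration, here is a minimal declarative-pipeline sketch - the build/deploy commands, health URL, and parameter name are assumptions, and emailext requires the Email Extension plugin:

    // Jenkinsfile - build, deploy, verify, then mail the result
    pipeline {
        agent any
        parameters {
            // recipients are passed in rather than hard-coded in the job
            string(name: 'NOTIFY_EMAILS', defaultValue: '', description: 'Comma-separated recipients')
        }
        stages {
            stage('Build')  { steps { sh 'mvn -B package' } }   // placeholder build step
            stage('Deploy') { steps { sh './deploy.sh' } }      // placeholder deploy step
            stage('Verify') {
                steps {
                    // curl -f exits non-zero on an HTTP error, failing the stage
                    sh 'curl -sf http://my-app.example.com/health'
                }
            }
        }
        post {
            always {
                emailext to: params.NOTIFY_EMAILS,
                         subject: "Deploy ${currentBuild.currentResult}: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                         body: "Details: ${env.BUILD_URL}"
            }
        }
    }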
Short version: Two builds, A and B, for the same commit, both running on our build server using the VSTS agent service
Build A:
Agent running as Network Service
Saves a .coverage file of 267kb, showing non-zero % code coverage
Runs successfully, no errors, same test logs as build B
Build B:
Agent running as Local System
Saves a .coverage file of 1kb, showing 0% code coverage
Runs successfully, no errors (except that a quality gate fails due to the 0% code coverage, but that's intentional), same test logs as build A
Extra info:
The VSTS agent service normally ran on our build server as "Network Service", and all was well, until we had to modify the agent service to run as "Local System" so it could access a cert in the "LocalMachine" store, which we need for Azure AD service auth. After that, it still claimed to do everything successfully, except that the code coverage file is tiny and claims 0% code coverage, which is weird because the unit tests are certainly being run. The logs from the two test tasks are exactly identical (except for things like timestamps and build numbers), with no helpful warnings or errors in there.
I'm sure it's probably not ideal to run the agent as Local System, but that account has more permissions than Network Service does, so I don't know how this could be a permissions issue. I've probably just made a mistake in setting something up, but it seems like the only way out of this is to either
give Network Service extra permissions (bad)
regenerate / move the Azure AD service principal cert into the "CurrentUser" cert store for Network Service (feels bad but I'm not sure why)
set up a new service account and resign ourselves to having permissions issues forevermore (ugh)
Can we somehow diagnose what exactly is going on with this test task without resorting to procmon? Or is there a better way to manage this stuff?
Well this is rather annoying: I fixed it, but I don't know how. While demonstrating it to a colleague, all I did was repeat my previous steps of rebooting the server and switching the agent service back and forth between the two accounts a couple of times, at which point the problem stopped being reproducible. It seems this is one of those mysteriously vanishing problems that hides whenever you try too hard to investigate it. Hopefully it doesn't come back...
We use Windows Azure Cloud Services to host our application. One of the great features of Windows Azure is the production/staging model. You can have the clients of your application routed to your production server while you test your new code running on a staging server. For example, you can configure Azure to point a production server to http://www.coolapp.com while designating a staging server for the same app as something like this: http://7f8e9d5ba73a4f7ea9ebd65a02ee195d.cloudapp.net.
Physically both of these servers are publicly facing. If you were to know the cryptic URL of a staging server you would be able to browse to the app just as easily as you would browse to www.coolapp.com. However, the presence of a GUID in the URL makes it virtually impossible for someone to guess it, thus making the staging server "private". This gives a nice mechanism to the developers of an application to deploy and test the new bits on a staging server before releasing them to public. Once they make sure that things look good, with a flip of a switch they swap the two servers, making staging server a production server and vice versa.
This model creates a small problem for us in relation to Facebook integration. To be able to integrate Facebook plugins you have to register your app with them. FB then issues an AppId and an AppSecret key. These keys are tied to the URL of your application. So in order for my app to work with FB plugins, I need to obtain one set of keys tied to 7f8e9d5ba73a4f7ea9ebd65a02ee195d.cloudapp.net and another set tied to www.coolapp.com.
When I read about Windows Azure, they really urge developers to treat staging vs. production servers as the same. The only difference between them should be the URL. In other words, Azure does not recommend basing your app logic on which server the code happens to be running on as Azure has no inherent knowledge of this. Staging vs. production is just a handy "abstraction" if you will. I guess you see the problem here. In our example above, I have to use one set of keys issued by FB versus another depending on which URL (production vs. staging) my app is running at. I assume I am not the first one running into this problem. What are the correct ways of handling this? One obvious way is to sniff the URL property of the Request object and branch my logic that way. However, intuition tells me this is a hack. Any other ideas?
Regards,
Archil
The mechanisms I know of are:
using "production" within a totally separate service account to "testing" - this leaves "staging" within the production service to be used as an area for "deployment candidates" and provides a separate clean testing domain with a non-changing URL for earlier "dev and test" work.
using different .cscfg files for staging and production - and being careful to update the .cscfg before you do any live switching (see the sketch at the end of this answer).
sniffing the incoming URL - as you suggest
Personally, I use the first of these techniques - it's easy and it helps prevent nasty accidents.
As an aside, one of the techniques we've used for "removing" the GUID from staging is to CNAME the GUID-based hostname with a really short TTL on the DNS - this allows us to quickly and automatically update the CNAME record for the staging server when we deploy.
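To illustrate the second technique, here is a minimal sketch of a staging-specific .cscfg - the role and setting names are assumptions, and the app would read them via RoleEnvironment.GetConfigurationSettingValue:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- ServiceConfiguration.Staging.cscfg: Facebook keys for the staging URL -->
    <ServiceConfiguration serviceName="CoolApp"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <Role name="WebRole">
        <Instances count="1" />
        <ConfigurationSettings>
          <Setting name="FacebookAppId" value="staging-app-id" />
          <Setting name="FacebookAppSecret" value="staging-app-secret" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>

You would keep a matching ServiceConfiguration.Production.cscfg holding the www.coolapp.com keys, so the code itself never has to branch on which slot it is running in.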