Having Capistrano skip over down hosts - deployment

My setup
I am deploying a Ruby on Rails application to 70+ hosts. These hosts are located behind consumer-grade ADSL connections which may or may not be up. The probability of a host being up is around 99%, but definitely not 100%.
The deploy process works perfectly fine and I have no problem specific to it.
The problem
When Capistrano encounters a down host, it stops the entire process. This is a problem because if host n°30 is down, then the 40 other hosts after it do not get the deployment.
What I would definitely like is an error for the hosts that are down, but I would also like Capistrano to continue deploying to all the hosts that are up.
Is there any setting or configuration that would enable me to do this?

I ended up running a Capistrano instance for each IP and then parsing the logs to see which ones failed and which ones succeeded.
A little Python script adjusted to my needs does this fine.
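For anyone curious, the script boils down to something like this (a simplified sketch, not my exact code): it reads a hosts.txt with one address per line, runs one cap deploy per host, saves each host's output to its own log, and sorts hosts by exit code. The HOSTS= override is a Capistrano 2 convention, so adjust that part to however your Capfile selects hosts.

import os
import subprocess

# One address per line; hosts.txt is whatever inventory you keep of the 70+ machines.
with open("hosts.txt") as f:
    hosts = [line.strip() for line in f if line.strip()]

succeeded, failed = [], []

for host in hosts:
    print(f"Deploying to {host}...")
    # Capistrano 2 lets you restrict a run with HOSTS=<address>; adapt this if
    # your Capfile selects hosts differently.
    env = dict(os.environ, HOSTS=host)
    result = subprocess.run(["cap", "deploy"], env=env,
                            capture_output=True, text=True)
    # Keep the full output per host so it can be inspected afterwards.
    with open(f"deploy_{host}.log", "w") as log:
        log.write(result.stdout)
        log.write(result.stderr)
    (succeeded if result.returncode == 0 else failed).append(host)

print("Succeeded:", ", ".join(succeeded))
print("Failed:", ", ".join(failed))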

Related

NestJS schedulers are not working in production

I have a backend service in NestJS that is deployed on Vercel.
I need several schedulers, so I have used the @nestjs/schedule lib, which is super easy to use.
Locally, everything works perfectly.
For some reason, the only thing that is not working in my production environment is those schedulers. Everything else is working: endpoints, database access...
Does anyone have an idea why? Is it something with my deployment? Maybe Vercel has some issue with that? Maybe this schedule library requires something that Vercel doesn't have?
I am clueless...
Cold boot is the process of starting a computer from shutdown or a powerless state and bringing it back to its normal working condition.
This means that code deployed in a serverless manner only runs when the endpoint is called. The platform you are using spins up a virtual machine to execute your code and keeps the machine running for a certain period of time in case you get another API hit; it's cheaper and easier for them to keep the machine running for, say, 60 seconds or 5 minutes than to redeploy it on every call after shutting the machine down when function execution ends.
So in your case, what is most likely happening is that the machine you are setting the cron on is killed after a period of time. Cron jobs are system-specific tasks tied to the machine they run on, so if the machine is shut down, the cron dies with it. The only case where the cron would run is if it was triggered at a point in time before the machine was shut down.
Certain cloud providers give you the option to keep the machines alive. I remember Google Cloud used to take the approach that if a serverless function is called frequently, it shifts from cold boot to hot start, which doesn't kill the machine entirely, and if you have traffic the machines stay alive.
From quick research, Vercel isn't the best at handling crons due to the nature of its infrastructure, and that is exactly what you are relying on here. In general, crons aren't meant for serverless functions. You can run the crons using queues, for example, or another third-party service; check out this link by Vercel.
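To make that concrete, here's a rough sketch of the idea (not Vercel-specific, and the URL and header below are made-up placeholders): keep the job's logic behind an ordinary endpoint in your NestJS app, and let something that is always running, a tiny VM, a managed cron, or a queue worker, trigger it on a schedule.

import time
import requests  # third-party: pip install requests

# Hypothetical placeholders; point these at your own endpoint and secret.
ENDPOINT = "https://your-app.vercel.app/api/run-scheduled-job"
SECRET = "some-shared-secret"

while True:
    try:
        # The serverless function only does work when it is invoked,
        # so the schedule lives here, outside the platform.
        resp = requests.post(ENDPOINT, headers={"x-cron-secret": SECRET}, timeout=30)
        print("Triggered job:", resp.status_code)
    except requests.RequestException as exc:
        print("Trigger failed:", exc)
    time.sleep(5 * 60)  # every five minutes; a real cron entry would do the same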

Two versions of fluentd fighting over port in my cluster

Somehow, I have two versions of fluentd running in my cluster.
They end up fighting over the same port: they just keep cranking away, trying to start up on that port, and it saturates all the CPU in the cluster.
unexpected error error_class=Errno::EADDRINUSE error="Address already in use - bind(2) for 0.0.0.0:24231
/opt/google-fluentd/embedded/lib/ruby/2.6.0/socket.rb:201:in 'bind'
I've tried deleting the daemon sets and deployments, but they just keep coming back. I've also tried SSHing into the machines and killing the process on that port. Nothing seems to work.
Obviously, I only want one version of fluentd to run (and I'm not even sure which one).
I seem to have fixed it. I went to the GCP dashboard's cluster edit page, and the Kubernetes Engine Monitoring dropdown was blank. It seems not even the dropdown could decide what to display here.
It seems the automated agent, or whatever, seriously messed up here, and had 2 versions of the logging and monitoring system running, fighting over a port, and crushing the CPU on every machine in the cluster. On top of that, I couldn't delete the daemon sets, pods, or deployments. It seems Google treats these as special somehow, maybe with some kind of automated agent, I don't know.
From the dropdown, I just selected System and workload logging and monitoring, saved, and it applied the changes.
Everything is looking good so far, but this whole event has me worried; I didn't do anything, this just... happened.
This is a dev cluster, but if it was a production cluster...

Why would running a container on GCE get stuck on "Metadata request unsuccessful: Forbidden (403)"?

I'm trying to run a container in a custom VM on Google Compute Engine. This is to perform a heavy ETL process, so I need a large machine, but only for a couple of hours a month. I have two versions of my container with small startup changes. Both versions were built and pushed to the same Google Container Registry by the same computer using the same Google login. The older one works fine, but the newer one fails by getting stuck in an endless stream of the following error:
E0927 09:10:13 7f5be3fff700 api_server.cc:184 Metadata request unsuccessful: Server responded with 'Forbidden' (403): Transport endpoint is not connected
Can anyone tell me exactly what's going on here? Can anyone please explain why one of my images doesn't have this problem (well, it gives a few of these messages but gets past them) while the other does (thousands of these messages, and it was still going after 24 hours when I killed it)?
If I SSH into a GCE instance, both versions of the container pull and run just fine. I suspect the INTEGRITY_RULE checking from the logs, but I know nothing about how that works.
MORE INFO: this is down to "restart policy: never". Even a simple centos:7 container that says "hello world", deployed from the console, triggers this if the restart policy is "never". At least in the short term I can fix this in the entrypoint script, as the instance will be destroyed when the monitor realises that the process has finished.
I suggest you try creating a third container that's focused on the metadata service functionality to isolate the issue. It may be that there's a timing difference between the two containers that's not being overcome.
Make sure you can 'curl' the metadata service from the VM and that the request to the metadata service is using the VM's service account.
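If it's easier than curl from inside the container, a quick Python check along these lines (requests assumed to be available) hits the standard GCE metadata endpoints; a 403 on the token request would point at the same problem the container is reporting.

import requests

# Standard GCE metadata server; the Metadata-Flavor header is required.
BASE = "http://metadata.google.internal/computeMetadata/v1"
HEADERS = {"Metadata-Flavor": "Google"}

# Which service account is this instance running as?
email = requests.get(f"{BASE}/instance/service-accounts/default/email",
                     headers=HEADERS, timeout=5)
print("service account:", email.status_code, email.text)

# Can we actually obtain a token for it?
token = requests.get(f"{BASE}/instance/service-accounts/default/token",
                     headers=HEADERS, timeout=5)
print("token request:", token.status_code)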

Can "spring.cloud.consul.host" config value have multiple Consul agents?

I'm a bit confused with this configuration. My Spring Boot app with @EnableDiscoveryClient has spring.cloud.consul.host set to localhost. I'm running a Consul agent on the host where my Boot app is running, but I have a few questions (I can't seem to find the answers in the documentation).
Can this config accept multiple values?
If so, I'd prefer to set the value to a list of Consul server addresses (but then what's the point of running Consul agents at all? So this doesn't seem practical, which means I'm not understanding something here).
If not, are we expected to run a Consul agent on every node where a Boot app with @EnableDiscoveryClient is running? (This feels wrong as well; for one, this would seem like a single point of failure, even though one agent should be able to tell everything about the cluster. What if I can't contact this one agent?)
What's the best practice for this configuration?
Actually, this is exactly what Consul itself solves for you. An agent runs on every server to handle clustering, failures, data sharing, autodiscovery, etc., so that you don't need to know the other hosts in your Spring Boot configuration. The Spring Boot app always connects to the agent running on the same machine.
See https://www.consul.io/docs/agent/basics.html
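So the usual setup is just to point each Boot instance at its local agent; something like the following in application.yml is typically all that's needed (8500 is Consul's default HTTP API port):

spring:
  cloud:
    consul:
      host: localhost   # the agent running on this same machine
      port: 8500        # Consul's default HTTP API port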

How to handle the Capistrano deploy process if it fails on one machine in a multi-server deployment?

I am trying to use Capistrano with 4 application servers. I am not sure about the behaviour of Capistrano when the deploy fails on one machine. If the network connection drops for one machine, then the processes on all the machines go into an interrupted sleep and nothing is reported. Please let me know how to track such issues.
Thanks
Capistrano has its own transaction management: if the deploy fails on any of the machines, deploy:rollback is called and all the machines always remain in the same valid state.