Capistrano duplicate tasks for each role

I must be missing something with Capistrano, because I've just started writing capfiles and I'm looking at tons of duplicated code. Consider this:
role :dev, "dev1", "dev2"
role :prod, "prod1", "prod2"

desc "Deploy the app in dev"
task :deploy_dev, :roles => :dev do
  run "sudo install-stuff"
end

desc "Deploy the app in prod"
task :deploy_prod, :roles => :prod do
  run "sudo install-stuff"
end
IMO it's totally reasonable to want to run the exact same task in dev or prod, but from what I can tell, Capistrano would have me write 2 tasks just to specify the different nodes...
Seems like if you could refer to roles on the CLI like
cap deploy dev
cap deploy prod
there could be a single definition of the 'deploy' task in the capfile, as opposed to a duplicated one for each set of servers.
Is there a way to write a task once and specify the role dynamically?

Have a look at the multistage extension. While it's fairly easy to set up the tasks you need yourself, the multistage extension will do it all for you.
If you'd rather do it yourself, see the "calling tasks" section of the handbook. The trick is that you can invoke multiple tasks, in order, from the command line.
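As a rough sketch of the do-it-yourself approach (reusing the host names from the question; untested), you can define one task per environment whose only job is to set up the roles, then a single deploy task that targets those roles:

# Stage tasks: each just populates the :app role with that environment's servers.
task :dev do
  role :app, "dev1", "dev2"
end

task :prod do
  role :app, "prod1", "prod2"
end

# A single deploy task that runs against whichever servers were assigned above.
desc "Deploy the app"
task :deploy, :roles => :app do
  run "sudo install-stuff"
end

Then cap dev deploy and cap prod deploy run the same deploy task against the dev or prod servers, which is essentially what the multistage extension automates for you.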

Related

Azure DevOps Agent - Custom Setup/Teardown Operations

We have a cloud full of self-hosted Azure Agents running on custom AMIs. In some cases, I have some cleanup operations which I'd really like to do either before or after a job runs on the machine, but I don't want the developer who is waiting on the job to have to wait for that work at the beginning or the end of the job (which holds up other stages).
What I'd really like is to have the Azure Agent itself say "after this job finishes, I will run a set of custom scripts that will prepare for the next job, and I won't accept more work until that set of scripts is done".
In a pinch, maybe just a "cooldown" setting would work -- wait 30 seconds before accepting another job. (Then at least a job could trigger some background work before finishing.)
Has anyone had experience with this, or does anyone know of a workable solution?
I suggest three solutions:
1. Create another pipeline to run the cleanup tasks on the agents. You can add a demand for a specific agent (see https://learn.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml) with Agent.Name -equals [Your Agent Name], and you can set the frequency to minutes or hours, as you like, using a cron pattern. While this pipeline is running it occupies the agent, so the agent being cleaned will not be available for other jobs. Do note that you can trigger this pipeline from another pipeline, but if both use the same agents they can simply deadlock.
2. Create a template containing script tasks with all the cleanup logic and use it at the end of every job (which you have discounted).
3. Rather than using static VMs for agent hosting, use an Azure scale set for self-hosted agents: every time agents are scaled down they are gone, and when they are scaled up they start fresh. This also saves a lot of money, since agents are not sitting idle when no one is working. We use this option and moved away from static agents. We have also used Packer to rebuild the VM image/VHD overnight to update it with patches, required software, and cached Docker images.
ref: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops
For those discovering this question, there's a much better way: run your self-hosted agent with the --once flag, documented here.
You'll need to wrap it in your own bash script, but something like this works:
#!/bin/bash
# Loop forever: do any pre-job setup, run the agent until it has completed
# exactly one job, then clean up before accepting the next job.
while :
do
    echo "Performing pre-job setup..."
    echo "Waiting for job..."
    ./run.sh --once
    echo "Cleaning up..."
    sleep 2
done
Another option would be to use a ScaleSet VM setup which preps a new agent for each run and discards the VM when the run is done. It can prepare new VMs in the background while the job is running.
And I suspect you could implement your own IMaintenanceProvider:
https://github.com/microsoft/azure-pipelines-agent/blob/master/src/Agent.Worker/Maintenance/MaintenanceJobExtension.cs#L53

Get the environment from cap staging deploy or cap production deploy

I have a task that runs on deployment of either staging or production. Ideally I would like to pass in some arguments to the task depending on whether I am deploying to production or staging.
These tasks are within lib/capistrano/tasks/.
Within the .rake file, how can I access the environment so I can determine what I need to set as the flag?
I have no issues setting the flag; I'm just not sure how I can access the environment.
If anyone can help it would be very much appreciated.
Depending on how you are invoking the Rake task, you should be able to set an environment variable based on the value of fetch(:stage). For example, something like:
run "APP_ENV=#{fetch(:stage)} bundle exec rake my:task"
The above code is untested, but should be basically what you are looking for.
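Since the tasks live in lib/capistrano/tasks/, this looks like Capistrano 3, where you can also let SSHKit set the environment variable for you. A minimal, untested sketch (the namespace, task, role, and variable names are illustrative, not from the question):

# lib/capistrano/tasks/my_task.rake
namespace :my do
  desc "Run a rake task with a flag derived from the deploy stage"
  task :flagged_task do
    on roles(:app) do
      within release_path do
        # fetch(:stage) returns :staging or :production depending on
        # whether you ran cap staging deploy or cap production deploy.
        with app_env: fetch(:stage) do
          execute :rake, "my:task"
        end
      end
    end
  end
end

The with block prefixes the command with APP_ENV=staging (or APP_ENV=production), so the effect is the same as the run line above.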

How to run a capistrano task on another stage?

My root Capistrano file has a task that dumps the database: cap production dump or cap staging dump will dump the database.
Now, I want to define a task in staging that will run this task on production.
I could do:
desc 'Updates the database of acceptance with the latest production database'
task :update_db do
  run_locally do
    execute :cap, 'production', 'dump'
    # move dump-file from production, via local, to acceptance
  end
  on roles(:db) do
    execute :rake, 'db:data:load'
  end
end
But running a cap task from a cap task via a shell feels ugly and fragile.
I found Calling a multistage capistrano task from within a capistrano task, but that does not work, probably because it's a solution for an old version of Capistrano.
Is there a way to run a certain Capistrano task on a certain "stage" from within Capistrano?
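For what it's worth, if you do stay with the shell-out approach, one refinement worth considering (assuming Capistrano runs under Bundler; this sketch is untested) is to launch the nested cap process with the original environment, so the outer deploy's Bundler settings don't leak into it:

desc 'Updates the database of acceptance with the latest production database'
task :update_db do
  run_locally do
    # Bundler.with_original_env restores the environment as it was before
    # Bundler was loaded, so the nested cap invocation resolves its own Gemfile.
    Bundler.with_original_env do
      execute :cap, 'production', 'dump'
    end
  end
  on roles(:db) do
    execute :rake, 'db:data:load'
  end
end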

Capistrano 2 deploy:rollback task clarification

Which file is referenced by Capistrano 2 to perform a rollback?
I am using Chef's deploy_revision, and this creates a cache of the revisions under
/var/chef/cache/revision-deploys/path/to/myapp
I was testing Capistrano 2 in the same project, trying to figure out which of the two to choose for deployment. I did this to perform a rollback:
cap <stagename> deploy:rollback
but it ended up rolling back to a revision which is not in line with the one in the cached copy of the revision list.
I may be wrong to expect Capistrano to follow what was there for Chef, but I am trying to straighten out this rollback in Cap 2.
Also, the application service fails to start properly after the deploy. Inside the deploy.rb file, I placed a hook like this:
after "deploy", "deploy:restart_app"
The task looks like this:
task :restart_app, :roles => :web do
  run "sudo /etc/init.d/abc restart", :shell => :bash
end
But when the deploy is complete, if I check the status of my app (abc), it says "process dead and pid exists"; the pid file does indeed exist at /var/run/abc.pid.
Manually executing sudo /etc/init.d/abc restart as the deploy user works fine.
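One common cause of a dead process with a leftover pid file after a Capistrano deploy is the daemon being killed when Capistrano's SSH session closes. A hedged, untested sketch of a workaround for Capistrano 2 (the init script path is the one from the question; whether you need the pty and nohup parts depends on your environment):

# deploy.rb (Capistrano 2)
default_run_options[:pty] = true  # allocate a pty so sudo can prompt if needed

task :restart_app, :roles => :web do
  # nohup detaches the service from the SSH session so it does not receive
  # SIGHUP when Capistrano disconnects at the end of the deploy.
  run "sudo sh -c 'nohup /etc/init.d/abc restart >/dev/null 2>&1 &'", :shell => :bash
end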

Executing Presto Task for QA and Production but not in Dev

I have a task that needs to run in QA and prod, but not dev. The task is to stop a clustered application. The problem is that the dev servers aren’t clustered and the task to stop the cluster fails on these servers. Is there a way to handle this?
We used to have that issue as well. When the task ran to stop the cluster, it would fail in dev:
The system cannot find the path specified
C:\Windows\Sysnative\Cluster.exe /cluster:server resource "Company Name Product" /offline
To get this to work, we moved the cluster commands into variables instead of putting them directly in the task. That way the dev version of the stop-cluster command is just a no-op (cmd /exit), while the QA version runs the real cluster stop command.
[Screenshots of the task definition and of the Dev and QA server variable groups are omitted here.]