Get the environment from cap staging deploy or cap production deploy - Capistrano

I have a task that runs on deployment of either staging or production. Ideally I would like to pass in some arguments to the task depending on whether I am deploying to production or staging.
These tasks are within lib/capistrano/tasks/.
Within the .rake file, how can I access the environment so I can determine what I need to set as the flag?
I have no issue setting the flag; I'm just not sure how to access the environment.
If anyone can help it would be very much appreciated.

Depending on how you are invoking the Rake task, you should be able to set an environment variable based on the value of fetch(:stage). For example, something like:
run "APP_ENV=#{fetch(:stage)} bundle exec rake my:task"
The above code is untested, but should be basically what you are looking for.
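For a Capistrano 3 style task living in lib/capistrano/tasks/, an untested sketch along these lines might fit (the my:flagged_task name, the app:task rake task, and the flag values are purely illustrative):
# lib/capistrano/tasks/my_task.rake
namespace :my do
  desc "Run a task with a stage-dependent flag"
  task :flagged_task do
    on roles(:app) do
      # fetch(:stage) returns the stage cap was invoked with, e.g. :staging or :production
      flag = fetch(:stage).to_s == "production" ? "--strict" : "--verbose"
      within release_path do
        execute :bundle, :exec, :rake, "app:task[#{flag}]"
      end
    end
  end
end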

Azure DevOps Agent - Custom Setup/Teardown Operations

We have a cloud full of self-hosted Azure agents running on custom AMIs. In some cases, I have cleanup operations which I'd really like to run either before or after a job runs on the machine, but I don't want the developer who is waiting on the job to also wait at the beginning or the end of it (which holds up other stages).
What I'd really like is to have the Azure Agent itself say "after this job finishes, I will run a set of custom scripts that will prepare for the next job, and I won't accept more work until that set of scripts is done".
In a pinch, maybe just a "cooldown" setting would work -- wait 30 seconds before accepting another job. (Then at least a job could trigger some background work before finishing.)
Has anyone had experience with this, or knows of a workable solution?
I suggest three solutions:
1. Create another pipeline to run the clean-up tasks on agents. You can also add a demand for a specific agent (see https://learn.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml) with Agent.Name -equals [Your Agent Name], and you can set the frequency to minutes, hours, or whatever you like using a cron pattern (see the sketch after this list). While this pipeline is running and occupying the agent, the agent being cleaned will not be available for other jobs. Do note that you can trigger this pipeline from another pipeline, but if both use the same agents they can simply deadlock.
2. Create a template containing script tasks with all the clean-up logic and use it at the end of every job (which you have discounted).
3. Rather than hosting agents on static VMs, use an Azure scale set for self-hosted agents: every time agents are scaled down they are gone, and when they scale up they start fresh. This also saves a lot of money, since agents are not sitting idle when no one is working. We use this option and moved away from static agents. We have also used Packer to rebuild the VM image/VHD overnight to update it with patches, required software, and cached Docker images.
ref: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops
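For the first option, a rough YAML sketch of such a scheduled cleanup pipeline (the cron schedule, pool name, agent name, and cleanup.sh script are placeholders) could look like this:
schedules:
- cron: "0 * * * *"
  displayName: Hourly agent cleanup
  branches:
    include:
    - main
  always: true

trigger: none

pool:
  name: SelfHostedPool
  demands:
  - Agent.Name -equals MyBuildAgent01

steps:
- script: ./cleanup.sh
  displayName: Run cleanup scripts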
For those discovering this question, there's a much better way: run your self-hosted agent with the --once flag, documented here.
You'll need to wrap it in your own bash script, but something like this works:
while :
do
  echo "Performing pre-job setup..."
  echo "Waiting for job..."
  ./run.sh --once
  echo "Cleaning up..."
  sleep 2
done
Another option would be to use a ScaleSet VM setup which preps a new agent for each run and discards the VM when the run is done. It can prepare new VMs in the background while the job is running.
And I suspect you could implement your own IMaintenanceProvider:
https://github.com/microsoft/azure-pipelines-agent/blob/master/src/Agent.Worker/Maintenance/MaintenanceJobExtension.cs#L53

Azure Terraform pipeline

I hope somebody can help me to solve this issue and understand how to implement the best approach.
I have a production environment running tons of Azure services (SQL Server, databases, web apps, etc.).
All of that infrastructure has been created with Terraform. As powerful as it is, I am terrified of using it in a pipeline for one reason.
Some of my friends often make changes to the infra manually, and since those changes are not in my Terraform state, automating this process might destroy resources ungracefully, which is something I don't want to face.
So I was wondering if anyone can shed some light on the following question:
Is it possible to have Terraform automatically check the infra state on every push to GitHub, and quit if the output of the plan reports any change?
To make my example clear:
Let's say I have a Terraform state containing 2 web apps, and somebody manually creates a 3rd web app in that resource group, develops some code, and pushes it to GitHub. My pipeline triggers, and as a first step Terraform runs a terraform plan and/or terraform apply. If this command reports any change, I want it to quit the pipeline (fail) so I will know there is something new there; and if terraform plan and/or terraform apply reports no changes, the infra is up to date and the pipeline continues with the code deployment.
Thank you in advance for any help and clarification.
Yes, you can just run
terraform plan -detailed-exitcode
With -detailed-exitcode, the command exits with 0 when there are no changes, 1 on error, and 2 when the plan contains changes, so a non-zero exit code tells you the pipeline should stop. See here for details.
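For example, a shell step along these lines (an untested sketch) makes the three exit-code cases explicit:
#!/usr/bin/env bash
set -u

# -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes present
terraform plan -input=false -detailed-exitcode
code=$?

case "$code" in
  0) echo "No drift detected, continuing with deployment." ;;
  2) echo "Plan contains changes - stopping the pipeline." >&2; exit 1 ;;
  *) echo "terraform plan failed." >&2; exit "$code" ;;
esac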
Let me point out that I would highly advise you to lock down your prod environment so that nobody can make manual changes! Your CI/CD pipeline should be the only way to make changes there.
Adding to the above answer, you can also make use of the terraform import command to bring those remote, manually created changes into your state file. The terraform import command is used to import existing resources into Terraform. Then run terraform plan to check whether state and infrastructure are in sync.
Refer: https://www.terraform.io/docs/cli/commands/import.html
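As an illustration only (the resource address and the Azure resource ID below are placeholders, not taken from the question), importing a manually created web app might look like this:
# 1. Add a matching resource "azurerm_app_service" "manually_created" block to your config.
# 2. Import the existing resource into state:
terraform import azurerm_app_service.manually_created \
  "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Web/sites/<app-name>"
# 3. Run terraform plan to confirm state and infrastructure now agree.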

Docker compose build order

I have a problem with Docker Compose and build order. Below is my Dockerfile for starting my .NET application.
As you can see as part of my build process I run some tests using "RUN dotnet test backend_test/backend_test.csproj"
These tests require a mongodb database to be present.
I try to solve this dependency with docker-compose and its "depends_on" feature, see below.
However this doesn't seem to work as when I run "docker-compose up" I get the following:
The tests eventually timeout since there is no mongodb present.
Does depends_on actually affect build order at all, or does it only affect start order (i.e. it builds everything, then proceeds to start containers in the correct order)?
Is there another way of doing this? (I want the tests to run as part of building my final app.)
Thanks in advance; let me know if you need extra information.
As you guessed, depends_on is for runtime order only, not build time - it just affects docker-compose up and docker-compose stop.
I highly recommend you make all the builds independent of each other. Perhaps you need to consider separate builder and runtime images here, and/or use a Docker-based CI (GitLab, Travis, Circle, etc.) so these dependencies are available for testing.
Note also that depends_on often disappoints people: it only waits for Docker to finish starting the container, not for the application inside it to be ready. So your DB / service / whatever may still be starting up when the container that depends on it begins using it, causing timeouts etc. This is why the HEALTHCHECK instruction now exists (with a corresponding healthcheck feature in Docker Compose).
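On that start-order point, here is a hedged Compose sketch (service and image names are illustrative, and it assumes a Compose version that supports the long depends_on form) where the app container only starts once MongoDB reports healthy:
services:
  mongo:
    image: mongo:6
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      timeout: 5s
      retries: 10
  backend:
    build: .
    depends_on:
      mongo:
        condition: service_healthy
Note this only helps at run time; it still won't make a database available to RUN steps during the image build.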

Azure DevOps passing Dynamically assigned variables between build hosts

I'm using Azure DevOps on a vs2017-win2016 build agent to provision some infrastructure using Terraform.
What I want to know is: is it possible to pass the Terraform output of a host's dynamically assigned IP address to a 2nd job running on a different build agent?
I'm able to set these as build variables in the first job:
BASTION_PRIV_IP=x.x.x.x
BASTION_PUB_IP=1.1.1.1
But I am unable to get these variables to be consumed by the second build agent, which runs ubuntu-16.04.
I am able to pass any statically defined parameters, like an Azure resource group name that I define before the job starts; it's just the dynamically assigned ones that don't come through.
This is pretty easily done when you are using the YAML based builds.
It's important to know that variables are only available within the scope of current job by default.
However you can set a variable as an output variable for your job.
This output variable can then be mapped to a variable within second job (do note that you need to set the first job as a dependency for the second job).
Please see the following link for an example of how to get this to work
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-multi-job-output-variable
It may also be doable in the visual designer type of build, but I couldn't get that to work in the quick test I did; maybe you can get something to work inspired by the linked example.
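As a rough YAML sketch (the job, step, and variable names are illustrative, and the echoed IP stands in for your real Terraform output):
jobs:
- job: Provision
  pool:
    vmImage: vs2017-win2016
  steps:
  - bash: echo "##vso[task.setvariable variable=bastionPubIp;isOutput=true]1.1.1.1"
    name: setOutputs

- job: Configure
  dependsOn: Provision
  pool:
    vmImage: ubuntu-16.04
  variables:
    bastionPubIp: $[ dependencies.Provision.outputs['setOutputs.bastionPubIp'] ]
  steps:
  - bash: echo "Bastion public IP is $(bastionPubIp)"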

Capistrano duplicate tasks for each role

I must be missing something with Capistrano, because I've just started writing capfiles and I'm looking at tons of duplicated code. Consider this
role :dev, "dev1", "dev2"
role :prod, "prod1", "prod2"

desc "Deploy the app in dev"
task :deploy_dev, :roles => :dev do
  run "sudo install-stuff"
end

desc "Deploy the app in prod"
task :deploy_prod, :roles => :prod do
  run "sudo install-stuff"
end
IMO it's totally reasonable to want to run the exact same task in dev or prod, but from what I can tell, Capistrano would have me write 2 tasks just to specify the different nodes...
Seems like if you could refer to roles on the CLI like
cap deploy dev
cap deploy prod
there could be a single definition of the 'deploy' task in the capfile, as opposed to a duplicated one for each set of servers.
Is there a way to write a task once and specify the role dynamically?
Have a look at the multistage extension. While it's fairly easy to set up the tasks you need yourself, the multistage extension will do it all for you.
If you'd rather do it yourself, see the calling tasks section of the handbook. The trick is that you can invoke different tasks in order from the command line.
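As an untested sketch using the multistage extension (Capistrano 2 with capistrano-ext; the deploy_app task mirrors the question and the file layout is the conventional one):
# config/deploy.rb
set :stages, %w(dev prod)
set :default_stage, "dev"
require 'capistrano/ext/multistage'

desc "Deploy the app"
task :deploy_app, :roles => :app do
  run "sudo install-stuff"
end

# config/deploy/dev.rb
role :app, "dev1", "dev2"

# config/deploy/prod.rb
role :app, "prod1", "prod2"
You would then run cap dev deploy_app or cap prod deploy_app, and the stage file loaded by the multistage extension decides which servers the single task targets.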