Default target for capistrano

Is it possible to set a default target for Capistrano, so that instead of
cap production deploy
or
cap production my_custom:task
I can just run
cap deploy
cap my_custom:task

An easy way is to create an alias for the cap production command:
alias capp="cap production"
After that you can run:
capp deploy
capp my_custom:task
To make the alias available in every session, add it to your ~/.bashrc.
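For completeness, a minimal sketch of the ~/.bashrc entry and how it expands (the alias name is just the one from above; adjust to taste):
# in ~/.bashrc
alias capp="cap production"
# in a new shell, or after running: source ~/.bashrc
capp deploy          # expands to: cap production deploy
capp my_custom:task  # expands to: cap production my_custom:task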

Related

GCloud authentication race conditions

I'm trying to avoid race conditions with gcloud / gsutil authentication between different CI/CD jobs running on the same system, a GitLab Runner on a Mac Mini.
I have tried setting the auth manually with
RUN gcloud auth activate-service-account --key-file="gitlab-runner.json"
RUN gcloud config set project $GCP_PROJECT_ID
in the Dockerfile (in which I perform a download from a Google Cloud Storage bucket).
I use a bash script to run the docker command, and in the same script I authenticate with
gcloud config configurations activate $TARGET
where I previously ran the above two commands to save them into that configuration.
The configurations work fine if I run the CI/CD jobs one after another. But I want to trigger the jobs for all clients at the same time, which causes race conditions with gcloud authentication, and one of the jobs ends up downloading from the wrong project's bucket.
How can I avoid the race condition? I'm already authenticating before each gsutil command, but it still occurs. Do I need something like Cloud Build to separate the runtime environments?
You can use Cloud Build to get separate execution environments, but that is probably overkill for your use case: a Cloud Build worker uses an entire VM, which may simply be too heavy. Linux containers / Docker can provide the necessary isolation as well.
You should make sure that each container you run has a unique configuration file placed in the path gcloud expects. The issue may come from improper volume mounting (all the containers sharing the same location from the host OS). Possible fixes: mount a directory containing a unique configuration file (one per bucket) when running the image, or run gcloud config configurations activate in a Dockerfile step, creating image variants for the different buckets if that is feasible.
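For illustration, one way to get that isolation is to mount a separate host directory over gcloud's default configuration path for each job (paths and image name below are hypothetical, and this assumes the container runs as root):
# one isolated gcloud configuration directory per CI job
docker run --rm \
  -v /ci/configs/client-a/gcloud:/root/.config/gcloud \
  my-downloader-image \
  gsutil cp gs://client-a-bucket/artifact.tar.gz /workspace/
docker run --rm \
  -v /ci/configs/client-b/gcloud:/root/.config/gcloud \
  my-downloader-image \
  gsutil cp gs://client-b-bucket/artifact.tar.gz /workspace/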
Alternatively, and I think this solution might be easier, you can switch from the Cloud SDK distribution to the standalone gsutil distribution. That way you can provide the path to a boto configuration file through an environment variable, and such variables can be set when running a Docker image.
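A rough sketch of that approach (the file names and image are assumptions; BOTO_CONFIG is the environment variable standalone gsutil reads):
# each job points gsutil at its own boto file holding its own credentials
docker run --rm \
  -e BOTO_CONFIG=/config/client-a.boto \
  -v /ci/boto:/config:ro \
  my-gsutil-image \
  gsutil cp gs://client-a-bucket/artifact.tar.gz /workspace/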

Deployment (CI/CD) pipeline for a serverless application

I have created a simple Node/Express/MongoDB app with 3 API endpoints that perform basic CRUD operations.
If I were to deploy this to Heroku as a service and use Bitbucket Pipelines for CI/CD, that would do the job for me. On top of that, I could use Heroku pipelines to get multiple environment stages like dev and production.
After doing all of the above I would be done with my pipeline and happy with it.
Now, coming back to serverless: I have deployed my API endpoints to AWS as Lambda functions, and that is the only environment (let's say DEV) at the moment.
Now how can I achieve a pipeline similar to the one mentioned earlier in a serverless architecture?
None of the solutions out there (maybe I missed some) suggest promoting the actual code that has been tried and tested in the dev environment to production; they all deploy a new set of code instead. Is this a limitation?
Option 1
Presuming that you are developing a Node Serverless application, deploying a new set of code with the same git commit ID and package-lock.json/yarn.lock should result in the same environment. This can be achieved by executing multiple deploy commands to different stages e.g.
sls deploy -s dev
sls deploy -s prod
There are various factors that may cause the deployed environments to be different, but the risk of that should be very low. This is the simplest CI/CD solution you can implement.
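As a minimal sketch of such a pipeline step (the script contents are an assumption, not part of the answer, and presume the Serverless CLI is a project dependency):
#!/usr/bin/env bash
set -euo pipefail
# install exactly what the lockfile pins, so both stages get the same dependencies
npm ci
# deploy the same checked-out commit to each stage
npx sls deploy -s dev
npx sls deploy -s prod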
Option 2
If you'd like to avoid the risk from Option 1 at all costs, you can split the package and deployment phases in your pipeline. Create the package before you deploy, from the codebase that you have checked out:
sls package -s dev --package build/dev
sls package -s prod --package build/prod
Archive as necessary, then to deploy:
sls deploy -s dev --package build/dev
sls deploy -s prod --package build/prod
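For example, the packaged artifacts can be handed between the package and deploy phases like this (tar is just one option; a CI artifact store works equally well):
# at the end of the package phase
tar -czf packages.tar.gz build/
# at the start of the deploy phase
tar -xzf packages.tar.gz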
Option 3
This is an improved version of Option 2. I have not tried this solution, but it should theoretically be possible. The problem with Option 2 is that you have to execute the package command multiple times, which might not be desirable (YMMV). To avoid packaging more than once, first create the package:
sls package -s dev --package build
Then to deploy:
# Execute a script to modify build/cloudformation-template-update-stack.json to match dev environment
sls deploy -s dev --package build
# Execute a script to modify build/cloudformation-template-update-stack.json to match prod environment
sls deploy -s prod --package build
If you have the following resource in build/cloudformation-template-update-stack.json for example:
"MyBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "myapp-dev-bucket"
}
},
The script you execute before sls deploy should modify that resource to:
"MyBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "myapp-prod-bucket"
}
},
This option of course implies that you can't have any hardcoded resource names in your app; every resource name must be injected from serverless.yml into your Lambdas.
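For illustration, the kind of script the comments above refer to might look like this for the prod deploy (a naive sed rewrite of the example bucket name; a real script would likely target specific resources, e.g. with jq):
#!/usr/bin/env bash
set -euo pipefail
TEMPLATE=build/cloudformation-template-update-stack.json
# rewrite the stage-specific parts of the template, e.g. the bucket from the example above
sed -i 's/myapp-dev-bucket/myapp-prod-bucket/g' "$TEMPLATE"
sls deploy -s prod --package build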

Get the environment from cap staging deploy or cap production deploy

I have a task that runs on deployment of either staging or production. Ideally I would like to pass in some arguments to the task depending on whether I am deploying to production or staging.
These tasks are within lib/capistrano/tasks/.
Within the .rake file, how can I access the environment so I can determine what to set as the flag?
I have no issue setting the flag; I'm just not sure how to access the environment.
If anyone can help it would be very much appreciated.
Depending on how you are invoking the Rake task, you should be able to set an environment variable based on the value of fetch(:stage). For example, something like:
run "APP_ENV=#{fetch(:stage)} bundle exec rake my:task"
The above code is untested, but should be basically what you are looking for.
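To make the flow concrete (assuming :stage is set to staging or production by your stage files, as a multistage setup does):
cap staging deploy     # the task above runs: APP_ENV=staging bundle exec rake my:task
cap production deploy  # the task above runs: APP_ENV=production bundle exec rake my:task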

Capistrano duplicate tasks for each role

I must be missing something with Capistrano, because I've just started writing capfiles and I'm looking at tons of duplicated code. Consider this:
role :dev, "dev1", "dev2"
role :prod, "prod1", "prod2"

desc "Deploy the app in dev"
task :deploy_dev, :roles => :dev do
  run "sudo install-stuff"
end

desc "Deploy the app in prod"
task :deploy_prod, :roles => :prod do
  run "sudo install-stuff"
end
IMO it's totally reasonable to want to run the exact same task in dev or prod, but from what I can tell, Capistrano would have me write 2 tasks just to specify the different nodes...
It seems like if you could refer to roles on the CLI, like
cap deploy dev
cap deploy prod
there could be a single definition of the 'deploy' task in the capfile, as opposed to a duplicated one for each set of servers.
Is there a way to write a task once and specify the role dynamically?
Have a look at the multistage extension. While it's fairly easy to set up the tasks you need yourself, the multistage extension will do it all for you.
If you'd rather do it yourself, see the calling tasks section of the handbook. The trick is that you can invoke different tasks in order from the command line.
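With stages configured (assuming they are named dev and prod to match the roles above), the single deploy task is then invoked per stage from the command line:
cap dev deploy    # runs :deploy against the :dev servers
cap prod deploy   # runs :deploy against the :prod servers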

Executing Presto Task for QA and Production but not in Dev

I have a task that needs to run in QA and prod, but not dev. The task is to stop a clustered application. The problem is that the dev servers aren’t clustered and the task to stop the cluster fails on these servers. Is there a way to handle this?
We used to have that issue as well. In dev, the task that stops the cluster would fail with:
The system cannot find the path specified
when it tried to run:
C:\Windows\Sysnative\Cluster.exe /cluster:server resource "Company Name Product" /offline
To get this to work, we moved the cluster commands into variables instead of putting them directly in the task. That way the dev version of "stop the cluster" is just a no-op (cmd /exit), while the QA version runs the real cluster stop command.
(Screenshots in the original answer: the Task configuration, the Dev Server Variable Group, and the QA Server Variable Group that define the per-environment command variable.)