GitHub Actions: avoid repeated approval for the same environment rule within the same workflow

Reusing the same environment rule within the same workflow
Running our workflow in GitHub, we split our tasks into two jobs: building the Docker image and attaching tags, and deploying to AWS using CodeDeploy. The reason for splitting the tasks is to avoid creating new tags whenever our deployment fails.
However, using environment protection rules creates a roadblock, as every job needs to be approved (even though we already ran against the same environment previously).
The deployment job is a conditional job, meaning it depends on the success of the Build job.
Is there any way to get around this?
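For context, a minimal sketch of the kind of workflow being described; the trigger, job names, environment name, and deploy step are assumptions rather than the actual configuration:

on: push   # illustrative trigger
jobs:
  build:
    runs-on: ubuntu-latest
    environment: production    # first approval prompt happens here
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myapp:${{ github.sha }} .   # build the image and attach tags

  deploy:
    needs: build               # deploy is conditional on build succeeding
    runs-on: ubuntu-latest
    environment: production    # same environment, but approval is requested again
    steps:
      - run: echo "deploy to AWS with CodeDeploy here"   # placeholder for the CodeDeploy step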

Related

Manually skip a job in GitHub

We are in the process of changing our CI/CD process on GitHub. Previously we were using one branch per environment (Test and Production).
Now we would like the same build to be deployed to all environments, so we are using one main branch that builds and deploys to the Test environment and, if that succeeds and is approved, to Production.
So far so good.
Most of the time we deploy to the Test environment only. In those cases we reject or cancel the workflow after deploying to Test, but our actions and pull requests then all end up in a failed state.
The question is: how can we manually skip the production deployment without the GitHub Actions run being marked as failed?
Thanks in advance

Manually trigger a GitHub Actions workflow after another workflow successfully runs

I'm trying to create CI that does the following:
Run terraform plan -out=plan.out to generate a Terraform plan.
After reviewing the Terraform plan output in GitHub Actions, I can manually run another job or workflow that calls terraform apply plan.out with the previously generated plan. In other words, I want to trigger this step manually, have it depend on the previous automation's success, and have it consume an artifact from that previous run.
I've looked online for some examples of this but all the examples of this I can find just run terraform apply without actually allowing someone to verify the plan output.
Is this something that's possible to do in Github Actions?
This can be done using protected environments' required reviewers: https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment#required-reviewers
What you would do is set up an environment, e.g. production, and add yourself as a reviewer.
In your workflow, you would then reference the environment like so:
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - run: terraform plan

  apply:
    needs: plan                # only runs after plan has finished
    runs-on: ubuntu-latest
    environment: production    # pauses here until a required reviewer approves
    steps:
      - run: terraform apply
This means that as soon as the workflow reaches the job apply, it is going to stop and you'll need to manually click a button to approve.
My solution ended up being the following:
When the PR is approved and merged, a Terraform plan is created and pushed to an S3 bucket with the commit hash in the path. Then, when the apply workflow is triggered via workflow_dispatch, it looks for the plan matching the commit hash of the code it's running and applies it.
Using pull requests as suggested wasn't the right solution for me because of the following:
How do you know that the plan that was run for the pull request was run with the latest changes on the base branch? The plan could be invalid in this case. The way I solved this was by having the plan workflow run on pushes to a specific branch that corresponds to the environment being Terraformed. This way the plan is always generated for the state Terraform says the specific environment should be in.
How do you know that an apply is applying the exact plan that was generated for the pull request? All the examples I saw actually ended up re-running the plan in the apply workflow, which breaks the intended use of Terraform plans. The way I solved this was by having the apply workflow look for a specific commit hash in cloud storage.
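
A rough sketch of that setup, assuming a hypothetical bucket name my-terraform-plans, hypothetical workflow file names, and that AWS credentials are already available to the runners:

# plan workflow (separate file) - runs on pushes to the branch that corresponds to the environment
on:
  push:
    branches: [main]
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: terraform init
      - run: terraform plan -out=plan.out
      - run: aws s3 cp plan.out s3://my-terraform-plans/${{ github.sha }}/plan.out   # store plan keyed by commit hash

# apply workflow (separate file) - triggered manually, applies the stored plan for the current commit
on:
  workflow_dispatch:
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: terraform init
      - run: aws s3 cp s3://my-terraform-plans/${{ github.sha }}/plan.out plan.out   # fetch the plan for this exact commit
      - run: terraform apply plan.out                                                # applying a saved plan does not prompt for confirmation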

Azure Pipeline Parallelism option

I am new to Azure Pipelines; I have just started learning and am in the process of creating my very first YAML pipeline.
My project is private, and I am using a multi-stage templated pipeline on self-hosted agents, because I need to concurrently deploy a Java web application to 7 VMs using the mvn tomcat7 plugin's run: command, so that Selenium automation tests can run in parallel across all the VMs. The template pipeline, which is called 7 times to deploy to all the VMs, needs to stay running because of the embedded Tomcat instance on each VM. That in turn requires parallelism to be enabled, which I would have to pay extra for.
My question is: is there another way to achieve this without paying extra for parallelism or making my project public?
I think what you want is parallel jobs. Only separate jobs can execute the publish tasks in parallel.
According to this document, you can use parallel jobs for free when you change your project to public, and each job can run for up to 360 minutes (6 hours).
What you need to do is go to Project Settings --> Overview --> and change Visibility to public.
After that, add the publish task under each new agent job in the pipeline, so that the publish tasks can execute in parallel.
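For reference, a rough sketch of how the per-VM template might be wired up as separate agent jobs, so that they can run in parallel once parallel jobs are available; the template file name, pool name, and VM names are assumptions:

# deploy-vm-template.yml (hypothetical template file)
parameters:
  - name: vmName
    type: string

jobs:
  - job: Deploy_${{ parameters.vmName }}
    pool: SelfHostedPool             # assumed self-hosted agent pool
    timeoutInMinutes: 360
    steps:
      - script: mvn tomcat7:run      # starts the embedded Tomcat and keeps the job running
        displayName: Deploy and run on ${{ parameters.vmName }}

# azure-pipelines.yml (hypothetical main pipeline) - one job per VM, repeated for all 7 VMs
jobs:
  - template: deploy-vm-template.yml
    parameters:
      vmName: vm1
  - template: deploy-vm-template.yml
    parameters:
      vmName: vm2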

Azure DevOps Release Pipelines: Letting release flow through multiple environments with manual triggers

I'm trying to configure Azure DevOps Release pipelines for our projects, and I have a pretty clear picture of what I want to achieve, but I'm only getting almost all the way there.
Here's what I'd like:
The build pipeline for each respective project outputs, as artifacts, all the things needed to deploy that version into any environment.
The release pipeline automatically deploys to the first environment ("dev" in our case) on each successful build, including PR builds.
For each successive environment, the release must have been deployed successfully to all previous environments. In other words, in order to deploy to the second environment ("st") it must have been deployed to the first one ("dev"), and in order to deploy to the third ("at") it must have been successfully deployed to all previous (both "dev" and "st"), etc.
All environments can have specific requirements on which branches deployable artifacts must have been built from; e.g. only artifacts built from master can be deployed to "at" and "prod".
Each successive deploy to any environment after the first one is triggered manually, by someone on a list of approvers. The list of approvers differs between environments.
The only way I've found to sort of get all of the above working at the same time is to automatically trigger the next environment after a successful deployment, and add a pre-deployment gate with a manual approval step. This works, except the manual approval doesn't trigger the deployment per se, but rather lets an already triggered deployment start executing. This means that any release that's not approved for lifting into the next environment is left hanging until manually dismissed.
I can avoid that by having a manual trigger instead of automatic, but then I can't enforce the flow from one environment to the next (it's e.g. possible to deploy to "prod" without waiting for successful deployments to the previous stages).
Is there any way to configure Azure DevOps Release Pipelines to do all of the things I've outlined above at once?
I think you are correct; you can only achieve that by setting up automatic releases after a successful release, with approval gates. I don't see any other options with the current Azure DevOps capabilities.
A manual trigger with approval gates doesn't check that the previous environments were successfully deployed to, unfortunately.
I hope this provides some clarity after the fact. Have you looked at YAML pipelines? In these you can specify the conditions on each stage.
The stages can then have approvals on them as well.
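For example, a hedged sketch of such a multi-stage YAML pipeline; the stage and environment names are assumed, an "at" stage would follow the same pattern, and the approvals themselves are configured as checks on each environment rather than in the YAML:

stages:
  - stage: dev
    jobs:
      - deployment: deploy_dev
        environment: dev                  # no approval check configured on dev
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to dev"

  - stage: st
    dependsOn: dev                        # can only run after dev succeeded
    jobs:
      - deployment: deploy_st
        environment: st                   # approval check on this environment pauses the stage
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to st"

  - stage: prod
    dependsOn: st
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
    jobs:
      - deployment: deploy_prod
        environment: prod
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to prod"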

How to test Concourse pipelines

My team has multiple Concourse pipelines and as we refactor tasks, we've realized the need to test our actual pipelines.
We already test our tasks by using environment variables enabling task scripts to be run locally, but the pipeline yaml is another matter.
What is the best way to accomplish testing of the pipeline itself?
You can use the Concourse Pipeline Resource to monitor the git repository where you keep your pipeline config. Whenever the pipeline resource detects a change, it will automatically run a fly set-pipeline to update the config in your running Concourse installation. From there, it's easy to script tests against the updated pipeline that is now running in your Concourse installation.
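For illustration, a rough sketch of that setup; the target URL, team, credentials, and repository names are placeholders, and the resource's README should be checked for the exact source and params fields:

resource_types:
  - name: concourse-pipeline
    type: docker-image
    source:
      repository: concourse/concourse-pipeline-resource

resources:
  - name: ci-repo                          # repo holding the pipeline config
    type: git
    source:
      uri: https://example.com/ci-repo.git
  - name: pipelines
    type: concourse-pipeline
    source:
      target: https://ci.example.com
      teams:
        - name: main
          username: admin
          password: ((concourse_password))

jobs:
  - name: set-pipelines
    plan:
      - get: ci-repo
        trigger: true                      # fires whenever the pipeline config changes
      - put: pipelines                     # runs fly set-pipeline against the updated config
        params:
          pipelines:
            - name: my-pipeline
              team: main
              config_file: ci-repo/pipeline.yml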
fly validate-pipeline is pretty useful; running it against pipelines before merging has caught a few bugs in "obviously correct" changes for me.
If you want to test the whole pipeline before merging you need to make sure that the data it's using is static and working (no sense in failing the pipeline if it's the repo that's broken), and that there are no side effects (like notifications) shared between the 'real pipeline' and the 'test pipeline'. I suspect that as long as you're careful with the restrictions, you could make it work, but it would have to be designed in the context of your existing pipelines and infrastructure.