We have Vercel preview deployments set up for every pull request on GitHub.
On every pull request the code gets deployed to Vercel test / acc / prod environments, each coupled to a different backend.
For every pull request we want to run some (Cypress) tests against it, but only against the Vercel test environment.
We have this working by using the deployment_status event and specifying that the job should only run when the environment is the test one:
jobs:
  cypress:
    if: github.event.deployment_status.environment == 'Preview – Test'
This results in a skipped GitHub run for acc / prod and a pass/fail run for the test environment.
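For reference, the surrounding workflow is roughly the following (a simplified sketch: the cypress-io/github-action usage and the baseUrl override are assumptions about our test setup, and the environment name has to match the Vercel project settings):

name: cypress-e2e

on:
  deployment_status:

jobs:
  cypress:
    # Only run against the Vercel test environment
    if: github.event.deployment_status.environment == 'Preview – Test'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Run the Cypress suite against the freshly deployed preview URL
      - uses: cypress-io/github-action@v6
        with:
          config: baseUrl=${{ github.event.deployment_status.target_url }}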
However, GitHub only lists the last run in the PR checks; this can be either the skipped run or the pass/fail run, depending on which preview environment gets deployed last.
Is there a way to enforce that GitHub only lists the relevant run?
I tried making that check dynamic and making the test one mandatory, but the check still gets overridden if test was not the last deployed Vercel environment.
We are in the process of changing our CI/CD process on GitHub. Previously we were using one branch per environment (Test and Production).
Now, we would like to have the same build deployed to all environments. So we are using one main branch which builds and allows deploying to the Test environment and, if that succeeds and is approved, to Production.
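Roughly, our workflow looks like this (job names and deploy commands are simplified placeholders, and the approval is assumed to be handled with a protected Production environment):

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "build"                  # placeholder for the real build

  deploy-test:
    needs: build
    runs-on: ubuntu-latest
    environment: Test
    steps:
      - run: echo "deploy to test"         # placeholder for the real deployment

  deploy-production:
    needs: deploy-test
    runs-on: ubuntu-latest
    environment: Production                # protected; waits for manual approval
    steps:
      - run: echo "deploy to production"   # placeholder for the real deployment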
So far so good.
Most of the time, we are deploying to the Test environment only. In those cases, we reject or cancel the workflow after deploying to Test, but our actions and pull requests all end up in a failed state.
The question is: how can we manually skip the production deployment without the GitHub Actions run being marked as failed?
Thanks in advance
I ran into an interesting situation recently and couldn't find anything about it, so maybe someone who knows can share some info.
We have a GitHub repository in a GitHub organization. It has a set of GitHub Actions jobs (e.g. release.yaml) which are supposed to run on GitHub-hosted runners:
runs-on: ubuntu-latest
The jobs were picked up fine, until I added a job that I wanted to run on a self-hosted runner:
runs-on: self-hosted
So I registered the self-hosted runner, and the latter job got picked up just fine.
But when I came back to run one of the first-stage jobs (i.e. release.yaml), it wasn't getting picked up by a GitHub-hosted runner as the code specifies, but was instead queued, waiting for a self-hosted runner to become available.
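To be clear about the intent, the workflows boil down to something like this (simplified; the second file name is made up):

# release.yaml – expected to run on a GitHub-hosted runner
on: workflow_dispatch          # real trigger omitted
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - run: echo "release"    # placeholder for the real steps

# deploy.yaml – the newly added job that needs the self-hosted runner
on: workflow_dispatch          # real trigger omitted
jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - run: echo "deploy"     # placeholder for the real steps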
Has anyone seen this before? Is it standard behaviour, or should I file an issue with GitHub?
PS: Deregistering the self-hosted runner from the repository settings resolved the issue, but still, does this mean we can't have a set of jobs where some use self-hosted runners and some use GitHub-hosted runners?
I'm trying to create CI that does the following:
Run terraform plan -out=plan.out to generate a Terraform plan.
After reviewing the Terraform plan output in GitHub Actions, I can manually run another job or workflow that calls terraform apply plan.out with the previously generated plan. I want this second automation to be runnable manually only after the first one has succeeded, and to use an artifact produced by the first one.
I've looked online for examples, but all the ones I can find just run terraform apply without actually letting someone verify the plan output first.
Is this something that's possible to do in GitHub Actions?
This can be done using protected environments' required reviewers: https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment#required-reviewers
What you would do is set up an environment, e.g. production, and add yourself as a required reviewer.
In your workflow, you would then reference the environment like so:
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - run: terraform plan

  apply:
    needs: plan                  # run only after the plan job has finished
    runs-on: ubuntu-latest
    environment: production      # the protected environment with required reviewers
    steps:
      - run: terraform apply
This means that as soon as the workflow reaches the apply job, it will pause and you'll need to manually click a button to approve it before it continues.
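One detail the snippet above leaves out is how plan.out gets from the plan job to the apply job, since each job runs on a fresh runner. A minimal sketch using workflow artifacts (the setup-terraform step and the artifact name are assumptions about the setup):

on: workflow_dispatch

jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform plan -out=plan.out
      # Hand the plan file over to the apply job
      - uses: actions/upload-artifact@v4
        with:
          name: tfplan
          path: plan.out

  apply:
    needs: plan
    runs-on: ubuntu-latest
    environment: production      # pauses here until a reviewer approves
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - uses: actions/download-artifact@v4
        with:
          name: tfplan
      - run: terraform init
      - run: terraform apply plan.out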
My solution ended up being the following:
When the PR is approved and merged, a Terraform plan is created and pushed to an S3 bucket with the commit hash in the path. Then, when the apply workflow is triggered via workflow_dispatch, it looks for a plan matching the commit hash of the code it's running against and applies it.
Using pull requests as suggested wasn't the right solution for me because of the following:
How do you know that the plan that was run for the pull request was run with the latest changes on the base branch? The plan could be invalid in that case. The way I solved this was by having the plan workflow run on push to a specific branch that corresponds to the environment being Terraformed. This way the plan is always generated for the state that Terraform says the specific environment should be in.
How do you know that an apply is applying the exact plan that was generated for the pull request? All the examples I saw actually ended up re-running the plan in the apply workflow, which breaks the intended use of Terraform plans. The way I solved this was by having the apply workflow look up the plan for a specific commit hash in cloud storage.
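A sketch of that approach, split over the two workflows (the bucket name, branch name, and action versions are placeholders, and AWS credential configuration is omitted):

# Plan workflow – runs on push to the branch that corresponds to the environment
on:
  push:
    branches: [production]       # placeholder branch name

jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform plan -out=plan.out
      # Store the plan keyed by the commit it was generated for
      - run: aws s3 cp plan.out s3://my-terraform-plans/${{ github.sha }}/plan.out

# Apply workflow – triggered manually once the plan output has been reviewed
on: workflow_dispatch

jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      # Fetch the plan generated for exactly this commit; if none exists the step fails
      - run: aws s3 cp s3://my-terraform-plans/${{ github.sha }}/plan.out plan.out
      - run: terraform apply plan.out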
I'm working on a project where we have a front-end application. This application has a second entry point I've added for our login application, so I've started setting up a new pipeline for building it. Upon a successful build and push of the login-app artifact I want the login server to also trigger a build. The back-end .NET app for the login server serves the built Angular app from its public folder, hence the reason to trigger that pipeline.
In each of the repos we have three branches that we deploy from: qa, uat, and prod. So when a qa build runs for the frontend, I want the qa branch of the login server to build as well. Same with uat -> uat and prod -> prod. Based on the information here: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/pipeline-triggers?view=azure-devops#branch-considerations it doesn't sound like I can use the pipeline triggers for this. Is there another approach we could take?
If you don't use pipeline completion triggers in YAML, you could consider using a build completion trigger (Classic).
On the other hand, you could install a free external extension such as Trigger Build Task or Trigger Azure DevOps Pipeline, which adds a task you can use to trigger a new build when the current build is done.
Of course, you could also directly use the REST API (Builds - Queue) to queue a build.
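For the REST API option, a rough sketch of a final step in the frontend pipeline that queues the login-server pipeline on the matching branch (the organization, project, and definition ID are placeholders, and the build service account needs permission to queue builds):

steps:
  - task: PowerShell@2
    displayName: Queue login-server build
    env:
      SYSTEM_ACCESSTOKEN: $(System.AccessToken)    # the pipeline's own OAuth token
    inputs:
      targetType: inline
      script: |
        # Queue the login-server definition on the same branch (qa -> qa, uat -> uat, prod -> prod)
        $body = @{
          definition   = @{ id = 42 }              # placeholder build definition ID
          sourceBranch = "$(Build.SourceBranch)"
        } | ConvertTo-Json
        Invoke-RestMethod `
          -Uri "https://dev.azure.com/my-org/my-project/_apis/build/builds?api-version=7.0" `
          -Method Post `
          -Headers @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" } `
          -ContentType "application/json" `
          -Body $body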
We have a web application in an Azure DevOps repo and there's a branch policy on the master branch that kicks off a build when a pull request is created. This validates that it compiles and performs code quality checks and the like.
We also have some integration tests (using Mocha and Selenium) that live in another repo. I would like to run the integration tests when a PR against master is created.
As far as I know I cannot have the same build pull from two different repos (without using extensions, and it seems cleaner to me to have two separate builds anyway). So I thought I would have another build just to run the integration tests. The build that pulls from the webapp repo would have a final step that deploys to an integration tests environment, and then the second build would get the latest version of the integration tests and run them against that environment. I created a Build Completion trigger on the integration tests build so that it is triggered by the completion of the webapp build.
The problem is that when I queue the webapp build manually, it will launch the integration tests build when done. But when the webapp build is queued by an incoming PR, the integration tests build does not get triggered.
Is this a bug in Azure DevOps or am I going about this wrong?
On my side as well, builds started from a PR don't trigger other builds (with the Build Completion trigger); I don't know if it's a bug or by design.
Anyway, there is a workaround: make the final step in the first build trigger the second build, using the Trigger Build task.
You just need to change the branch in the task configuration, because the build would otherwise be queued for the PR merge branch, which doesn't exist in the tests repository.
You can also do it without installing extensions, using a PowerShell task and the REST API.