My team has multiple Concourse pipelines and as we refactor tasks, we've realized the need to test our actual pipelines.
We already test our tasks by using environment variables so that task scripts can be run locally, but the pipeline YAML is another matter.
What is the best way to accomplish testing of the pipeline itself?
You can use the Concourse Pipeline Resource to monitor the git repository where you keep your pipeline config. Whenever the pipeline resource detects a change, it will automatically run a fly set-pipeline to update the config in your running Concourse installation. From there, it's easy to script tests against the updated pipeline that is now running in your Concourse installation.
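For illustration, a self-updating job using the community concourse-pipeline-resource might look roughly like this (the repository URI, target URL, team, and credential names are placeholders, and the exact source fields depend on the resource version):

```yaml
# Sketch only: URIs, team, and credentials are placeholders.
resource_types:
- name: concourse-pipeline
  type: docker-image
  source:
    repository: concourse/concourse-pipeline-resource

resources:
- name: pipeline-config            # git repo holding the pipeline YAML
  type: git
  source:
    uri: https://github.com/example-org/ci-config.git
    branch: master

- name: pipelines                  # talks to the Concourse API to set pipelines
  type: concourse-pipeline
  source:
    target: https://concourse.example.com
    teams:
    - name: main
      username: ((concourse_username))
      password: ((concourse_password))

jobs:
- name: update-pipelines
  plan:
  - get: pipeline-config
    trigger: true                  # fires whenever the config repo changes
  - put: pipelines
    params:
      pipelines:
      - name: my-app
        team: main
        config_file: pipeline-config/ci/pipeline.yml
```

On newer Concourse versions the built-in set_pipeline step can achieve the same thing without the extra resource type.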
fly validate-pipeline is pretty useful; running it against pipelines before merging has caught a few bugs in "obviously correct" changes for me.
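For example (file paths here are just illustrative), you can run it locally or in a pre-merge CI job; the --strict flag also turns warnings into errors:

```
fly validate-pipeline -c ci/pipeline.yml
fly validate-pipeline -c ci/pipeline.yml -l ci/vars.yml --strict
```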
If you want to test the whole pipeline before merging, you need to make sure that the data it uses is static and working (no sense in failing the pipeline if it's the repo that's broken), and that there are no side effects (like notifications) shared between the 'real' pipeline and the 'test' pipeline. I suspect that as long as you're careful with those restrictions you could make it work, but it would have to be designed in the context of your existing pipelines and infrastructure.
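As a concrete (hypothetical) example of keeping the input static, the test copy of the pipeline can pin its get steps to a known version:

```yaml
# Sketch: in the test pipeline, pin the get step to a known-good commit
# (resource name and SHA are placeholders) so the input data stays static.
- get: app-repo
  version: {ref: "0123abc"}
```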
Related
I'm facing the following issue: I have one git repo with a Node.js application. The application is divided into several components, namely: server, client, microserviceA, microserviceB. There is also a directory named shared with some shared code used by all the other components.
For each component I have a pipeline that only runs tests, triggered on pull requests to master. Each pipeline runs only when the PR contains changes relevant to it, e.g. server-ci will run only when there were changes in the server component, etc.
Now, on merge to master, I would like to build the components and deploy them to a staging server. Currently what I have is as follows: for each component (besides shared) I have another build pipeline (<component>-build) which, on merge to master, builds the corresponding component (depending on the changes made, as above). I have one Release pipeline which takes the artifacts from all of these build pipelines and deploys them to the staging server. The good thing about this is that a merge to master that only includes changes in client will build only client and not the rest of the components.
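Assuming these are Azure DevOps YAML pipelines, a per-component build pipeline of this kind is typically scoped with path filters, roughly like this (branch and paths are illustrative):

```yaml
# client-build (illustrative): runs on merges to master that touch client or shared code
trigger:
  branches:
    include:
    - master
  paths:
    include:
    - client/*
    - shared/*
```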
The problem is that on a merge to master that contains changes to several components, more than one build pipeline will run, and each of them will trigger the Release pipeline. This is not desirable.
A possible solution I thought of was to have only one build pipeline that runs on merge to master, but then I'd have to build ALL the components on every merge, which is inefficient.
What is the best way to deal with such a situation?
In the release stage settings you can configure the number of parallel deployments to be 1.
What I'm thinking about is to have a step in the pipeline to generate a full-blown pipeline to run afterwards.
Apparently this particular thing is not there yet (feature requests here, here). But maybe somebody has some fresh thoughts on workarounds?
Not really. It's a pain that's just a fact of life when working with YAML pipelines. It's especially annoying when trying to work through runtime vs compile time variable resolution issues.
Commit, run, commit, run, commit, run, over and over.
For the dynamic work you could set up a repo with one dummy yaml file and a pipeline registration that targets this yaml file.
From a static pipeline that is responsible for kicking off a dynamic pipeline you then execute two steps:
Create a fresh branch of that "dynamic" YAML file and commit the required dynamic workload.
Not sure about branch limits, though; you could also decide to reuse a branch.
Kick off this "dynamic" pipeline through the az devops CLI, using the static pipeline's access token.
Also see the following documentation:
https://learn.microsoft.com/en-us/azure/devops/cli/azure-devops-cli-in-yaml?view=azure-devops
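A rough sketch of those two steps inside the static pipeline might look like this (the pipeline name, file name, and git identity are placeholders; it assumes the build identity is allowed to push branches and queue builds):

```yaml
# Rough sketch of the two steps above.
steps:
- checkout: self
  persistCredentials: true               # keep the token so the job can push the branch

- script: |
    BRANCH="dynamic/$(Build.BuildId)"
    git config user.email "build@example.com"   # placeholder identity
    git config user.name "Build Agent"
    git checkout -b "$BRANCH"
    # ... overwrite the dummy dynamic.yml with the generated workload here ...
    git add dynamic.yml
    git commit -m "Generate dynamic workload for build $(Build.BuildId)"
    git push origin "$BRANCH"

    # Queue the registered "dynamic" pipeline against the freshly pushed branch
    az pipelines run --name dynamic-pipeline \
      --branch "$BRANCH" \
      --org "$(System.CollectionUri)" \
      --project "$(System.TeamProject)"
  env:
    AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)  # the static pipeline's access token
  displayName: Generate and kick off the dynamic pipeline
```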
We have been using a YAML file to do our CI in Azure DevOps for a few months with the idea that we would get our release configured using YAML in the future.
Well, that time is now and I'm confused about how you would introduce a CD process. With MyProject-CI.yml being a Build Pipeline and our Releases being Classic Pipelines, I assumed that when the time came to get the CD process down as YAML we would create a MyProject-CD.yml, triggered by the artifact dropped by the MyProject-CI.yml build.
However, I think that was just a misunderstanding on my part, and what we are supposed to do is convert the original MyProject-CI.yml into a multi-stage pipeline that has the following stages:
Build and Run Unit Tests
Deploy to Development and run WebTests
Deploy to Production and run WebTests
Is the switch to multi-stage CI/CD in one file correct, rather than Release and Build in separate files?
The short answer is yes, you got the idea. A single multi-stage pipeline YAML is the way to do both build and deploy, and that is the base intention. Here is an exercise that parallels your case and might help.
As your pipelines get more complex, you will likely get into scenarios with multiple files, as you can template parts of your pipeline for reuse in multiple places, or to enforce conventions from a central location.
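A minimal sketch of what that single multi-stage file could look like for the stages you listed (stage names, environments, and step contents are illustrative):

```yaml
# Minimal multi-stage sketch; stage names, environments, and steps are illustrative.
trigger:
- master

stages:
- stage: Build
  jobs:
  - job: BuildAndUnitTest
    steps:
    - script: echo "build and run unit tests"
    # publish build output here for the deployment stages to consume

- stage: Development
  dependsOn: Build
  jobs:
  - deployment: DeployDev
    environment: Development       # approvals and checks can be attached to the environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to Development and run WebTests"

- stage: Production
  dependsOn: Development
  jobs:
  - deployment: DeployProd
    environment: Production
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to Production and run WebTests"
```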
We have a custom Azure DevOps extension in order to inject SonarQube pipeline tasks into every definition using the Pipeline Decorator feature. These tasks are a mixture of both pre and post tasks.
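For context, a decorator of this kind boils down to a steps template that the extension manifest targets at the pre-job and post-job injection points (ms.azure-pipelines-agent-job.pre-job-tasks / post-job-tasks). The sketch below is generic and not the actual extension's template:

```yaml
# Illustrative pre-job decorator template; the task version, inputs, and
# connection name are assumptions, not the actual extension's contents.
steps:
- task: SonarQubePrepare@5
  inputs:
    SonarQube: 'sonarqube-connection'    # hypothetical service connection name
    scannerMode: 'Other'
```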
In YAML defined pipelines, the tasks run perfectly, however in Classic pipeline definitions, only the post tasks run, although the classic and YAML pipelines are defined identically (steps, agents, demands, variables etc.).
As this is a relatively new feature of Azure DevOps, there is a lack of documentation, especially regarding classic pipelines.
Is there something that we could possibly be missing for this to happen?
Is there something that we could possibly be missing for this to happen?
This seems to be an issue on our side, and it only affects the SonarCloud/SonarQube prepare task when it is applied in a decorator.
As you know, a YAML template is used for the steps to be inserted at the specified location, and on our backend that template file is processed by the YAML template engine.
By design, after you enable pipeline decorators at the organization level, the Initialize job calls a backend class to get the JobContext, which adds decorator providers to the JobContext. The JobContext then uses these providers to fetch contributions and add pre/post tasks to the job while preparing it to run.
However, the Sonar prepare task cannot be detected by the engine and injected into the JobContext. The reason I point to this specific task is that, so far, this abnormality only exists in the prepare task of SonarCloud and SonarQube.
Our team will investigate and work on a fix together with the Sonar team.
For now, there are two workarounds you could consider.
Workaround 1:
As mentioned previously, this prepare task cannot be detected and injected into the JobContext automatically. So in the first workaround we add the task info to the JobContext ourselves, by adding the prepare task to the agent job explicitly.
The disadvantage is that this loads two prepare tasks: one is executed in the pre-job, and the explicitly added one runs second.
Workaround 2:
Use YAML to define your pipeline until we fix this abnormality, so that it does not fail because of the missing prepare task.
We will update the status here to let you know once we have any progress.
I'm trying to configure Azure DevOps Release pipelines for our projects, and I have a pretty clear picture of what I want to achieve, but I'm only getting almost all the way there.
Here's what I'd like:
The build pipeline for each respective project outputs, as artifacts, all the things needed to deploy that version into any environment.
The release pipeline automatically deploys to the first environment ("dev" in our case) on each successful build, including PR builds.
For each successive environment, the release must have been deployed successfully to all previous environments. In other words, in order to deploy to the second environment ("st") it must have been deployed to the first one ("dev"), and in order to deploy to the third ("at") it must have been successfully deployed to all previous (both "dev" and "st"), etc.
Each environment can have specific requirements on which branches deployable artifacts must have been built from; e.g. only artifacts built from master can be deployed to "at" and "prod".
Each successive deploy to any environment after the first one is triggered manually, by someone on a list of approvers. The list of approvers differs between environments.
The only way I've found to sort of get all of the above working at the same time is to automatically trigger the next environment after a successful deployment, and add a pre-deployment gate with a manual approval step. This works, except that the manual approval doesn't trigger the deployment per se, but rather lets an already triggered deployment start executing. This means that any release that's not approved for lifting into the next environment is left hanging until manually dismissed.
I can avoid that by having a manual trigger instead of automatic, but then I can't enforce the flow from one environment to the next (it's e.g. possible to deploy to "prod" without waiting for successful deployments to the previous stages).
Is there any way to configure Azure DevOps Release Pipelines to do all of the things I've outlined above at once?
I think you are correct; you can only achieve that by setting up automatic releases after a successful release, with approval gates. I don't see any other options with current Azure DevOps capabilities.
A manual trigger with approval gates doesn't check that previous environments were successfully deployed to, unfortunately.
I hope this provides some clarity after the fact. Have you looked at YAML Pipelines? With these you can specify the conditions on each stage.
The stages can then have approvals on them as well.
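For example, a rough sketch of how the dev → st → at ordering and branch restriction could be expressed (stage/environment names, the branch filter, and the steps are illustrative; approvers are configured on the environments themselves):

```yaml
# Illustrative sketch: the dependsOn chain enforces dev -> st -> at ordering, the
# condition restricts "at" to master builds, and approvers live on the environments.
stages:
- stage: Dev
  jobs:
  - deployment: DeployDev
    environment: dev               # no approval check: deploys on every successful build
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to dev"

- stage: ST
  dependsOn: Dev                   # can only run once dev has succeeded
  jobs:
  - deployment: DeployST
    environment: st                # environment carries an approval check with its approver list
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to st"

- stage: AT
  dependsOn: ST
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
  jobs:
  - deployment: DeployAT
    environment: at
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to at"
```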