We have been using a YAML file for our CI in Azure DevOps for a few months, with the idea that we would eventually configure our release with YAML as well.
Well, that time is now, and I'm confused about how to introduce a CD process. With MyProject-CI.yml being a Build pipeline and our Releases being Classic pipelines, I assumed that when the time came to express the CD process as YAML we would create a MyProject-CD.yml, triggered whenever the MyProject-CI.yml pipeline dropped an artifact.
However, I think that was just a misunderstanding on my part, and what we are supposed to do is convert the original MyProject-CI.yml into a multi-stage pipeline with the following stages:
Build and Run Unit Tests
Deploy to Development and run WebTests
Deploy to Production and run WebTests
Is the switch to multi-stage CI/CD in one file correct, rather than keeping Build and Release in separate files?
The short answer is yes, you've got the right idea. A single multi-stage pipeline YAML file is the way to do both build and deploy, and that is the intended design. Here is an exercise that parallels your case and might help.
As your pipelines get more complex, you will likely get into scenarios with multiple files, as you can template parts of your pipeline for reuse in multiple places, or to enforce conventions from a central location.
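For illustration, here is a minimal sketch of what such a multi-stage pipeline might look like. The trigger, pool, environment names, and the build/test/deploy steps are all placeholders for your own:

```yaml
# Minimal multi-stage sketch; stage contents, environment names,
# and build/test commands below are placeholders, not a prescription.
trigger:
- main

stages:
- stage: Build
  jobs:
  - job: BuildAndUnitTest
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: dotnet build --configuration Release
      displayName: Build
    - script: dotnet test --configuration Release
      displayName: Run unit tests
    - publish: $(Build.ArtifactStagingDirectory)   # placeholder output path
      artifact: drop

- stage: Development
  dependsOn: Build
  jobs:
  - deployment: DeployToDevelopment
    pool:
      vmImage: ubuntu-latest
    environment: Development   # approvals/checks are configured on the environment
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current
            artifact: drop
          - script: echo "Deploy to Development and run WebTests here"

- stage: Production
  dependsOn: Development
  jobs:
  - deployment: DeployToProduction
    pool:
      vmImage: ubuntu-latest
    environment: Production
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current
            artifact: drop
          - script: echo "Deploy to Production and run WebTests here"
```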
Related
We have been converting our Release pipelines in Azure DevOps into YAML files that run as Pipelines, so we can store our deployment process as code. The deployment process is working well: a developer commits code, the Pipeline builds and publishes the artifact in one stage, and then auto-deploys it to the QA environment in another. Subsequent stages each deploy to an Environment (e.g. QA, Staging, Production), each of which requires some approval(s). The deployments themselves aren't the issue.
What I'm struggling with is that, unlike the old Releases, there isn't a dashboard that tells me which version of the project is in each environment. The Pipeline summary represents each stage of a run as a dot (running, succeeded, failed, canceled, etc.), but it wasn't built to show what each environment currently has (probably because a stage doesn't have to be a deployment).
Is there somewhere else I can look for this information, or do I have to build my own dashboard by calling the AzDO APIs? Looking at the Environment gives a list I can root through for history, but that's not the experience our developers are looking for.
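For reference, the kind of thing I mean by calling the APIs is a sketch like the following, which pulls an Environment's deployment records. The org, project, and environment id are placeholders, and the field names are my assumptions from the environmentdeploymentrecords endpoint, so they should be verified against your api-version:

```yaml
# Hypothetical sketch, not an official dashboard: list the most recent
# deployment records for one Environment via the REST API.
# {org} and {project} are placeholders and 12 is a made-up environment id.
steps:
- script: |
    curl -s -u ":$(System.AccessToken)" \
      "https://dev.azure.com/{org}/{project}/_apis/distributedtask/environments/12/environmentdeploymentrecords?api-version=7.1-preview.1" \
    | jq '.value[] | {definition: .definition.name, stage: .stageName, result, finishTime}'
  displayName: Dump environment deployment records
```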
What I'm thinking about is having a step in the pipeline generate a full-blown pipeline to run afterwards.
Apparently this particular capability doesn't exist yet (feature requests here, here). But maybe somebody has some fresh thoughts on workarounds?
Not really. It's a pain that's just a fact of life when working with YAML pipelines. It's especially annoying when trying to work through runtime vs. compile-time variable resolution issues.
Commit, run, commit, run, commit, run, over and over.
For the dynamic work, you could set up a repo with one dummy YAML file and a pipeline registration that targets that file.
From the static pipeline that is responsible for kicking off the dynamic pipeline, you then execute two steps (see the sketch after the documentation link below):
Create a fresh branch of that "dynamic" YAML file and commit the required dynamic workload. (I'm not sure about branch limits, though; you could also decide to reuse a branch.)
Kick off this "dynamic" pipeline through the az devops CLI using the static pipeline's access token.
Also see the following documentation:
https://learn.microsoft.com/en-us/azure/devops/cli/azure-devops-cli-in-yaml?view=azure-devops
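To make that concrete, here is a rough sketch of what the static pipeline's step could look like. The repo name dynamic-repo, the pipeline registration dynamic-pipeline, and the branch scheme are all assumptions; az pipelines run authenticates with the job's access token via AZURE_DEVOPS_EXT_PAT as described in the documentation above:

```yaml
# Rough sketch only: the repo "dynamic-repo", the pipeline registration
# "dynamic-pipeline", and the branch naming are assumptions.
steps:
- checkout: self
- script: |
    set -e
    BRANCH="dynamic/$(Build.BuildId)"
    AUTH="Authorization: Bearer $(System.AccessToken)"
    # Clone the repo that holds the dummy yaml file.
    git -c http.extraHeader="$AUTH" clone \
      "https://dev.azure.com/{org}/{project}/_git/dynamic-repo"
    cd dynamic-repo
    git checkout -b "$BRANCH"
    # Generate the dynamic workload; here we simply overwrite the dummy file.
    cp "$(Build.SourcesDirectory)/generated/azure-pipelines.yml" azure-pipelines.yml
    git -c user.name=pipeline -c user.email=pipeline@example.com \
      commit -am "Generated workload for build $(Build.BuildId)"
    git -c http.extraHeader="$AUTH" push origin "$BRANCH"
    # Kick off the registered "dynamic" pipeline on the new branch.
    az pipelines run --name dynamic-pipeline --branch "$BRANCH" \
      --org "https://dev.azure.com/{org}" --project "{project}"
  displayName: Generate and kick off the dynamic pipeline
  env:
    AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)
```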
I have a question on general best practices for Azure DevOps. When building a project, we have two build configurations, Debug and Release. At some point in a deployment pipeline across multiple environments, these need to change, which from what I understand means two builds.
Is it better to have one yml, with the build config set by a condition on the source branch (i.e. if the source branch contains "release", the build config is Release; otherwise it is Debug), or should I have multiple builds and a pipeline triggered by two different artifacts?
It is much better to have one yml. By having only the one build pipeline and using a simple condition, you limit the possibility of bugs arising due to differences in the separate yml definitions. Additionally, it is more maintainable as changes are only made in one place.
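As a sketch, one way to express that condition with compile-time template expressions; the branch-name check here is an assumption, so adjust it to your branching conventions:

```yaml
# Sketch: pick the build configuration from the source branch at compile time.
variables:
  ${{ if contains(variables['Build.SourceBranch'], 'release') }}:
    buildConfiguration: Release
  ${{ if not(contains(variables['Build.SourceBranch'], 'release')) }}:
    buildConfiguration: Debug

steps:
- script: dotnet build --configuration $(buildConfiguration)
  displayName: Build ($(buildConfiguration))
```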
I've recently been working on switching from on-premises TFS to Azure DevOps and trying to learn more about the different pipelines, and I think I may have had my Build pipeline do too much.
Currently I have my Build pipeline do the following:
Get Source code from Repo
Run database scripts/deploy dacpacs
Copy files over to virtual machines that have web application set up already
Run unit/integration tests
Publish the test results
I repeat these steps, with minor variations, multiple times: once for the develop branch, and once each for the current and previous release branches.
But if I want to take advantage of the Releases and Deployments areas, what would that really get me?
It looks like it would make it easier to confirm that, yes, this code did make it out to this dev/beta environment.
I'm working with ColdFusion code that includes some .NET webservices within the repo, would I have to make an artifact that zips up the repo and then deploys it, or is there a better way to take advantage of the release pipeline?
It's not necessary to make an artifact that zips up the repo and then deploys it. There are several types of tools you might use in your application lifecycle to produce or store artifacts. For example, you might use version control systems such as Git or TFVC to store your artifacts, and you can configure Azure Pipelines to deploy artifacts from multiple sources. Check the following link for more details:
https://learn.microsoft.com/en-us/azure/devops/pipelines/release/artifacts?view=azure-devops#sources
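That said, if you do end up wanting to publish the repo contents (or just the build output) as a pipeline artifact for a release to consume, a minimal sketch would be the following; the path and artifact name are placeholders:

```yaml
# Minimal sketch: publish content as a pipeline artifact.
# targetPath and artifactName are placeholders for your own layout.
steps:
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.SourcesDirectory)'   # e.g. ColdFusion app plus built webservices
    artifactName: 'webapp'
```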
My team has multiple Concourse pipelines and as we refactor tasks, we've realized the need to test our actual pipelines.
We already test our tasks by using environment variables so that task scripts can be run locally, but the pipeline YAML is another matter.
What is the best way to accomplish testing of the pipeline itself?
You can use the Concourse Pipeline Resource to monitor the git repository where you keep your pipeline config. Whenever the pipeline resource detects a change, it will automatically run a fly set-pipeline to update the config in your running Concourse installation. From there, it's easy to script tests against the updated pipeline that is now running in your Concourse installation.
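As a rough sketch of that setup; the repo URI, target URL, and credentials are placeholders, and the exact source/params keys should be checked against the concourse-pipeline-resource README:

```yaml
# Sketch of the pipeline-resource approach: whenever the pipelines repo
# changes, the put step re-runs fly set-pipeline against the target.
resource_types:
- name: concourse-pipeline
  type: docker-image
  source:
    repository: concourse/concourse-pipeline-resource

resources:
- name: pipelines-repo
  type: git
  source:
    uri: https://github.com/example-org/pipelines.git   # placeholder
    branch: master
- name: running-pipelines
  type: concourse-pipeline
  source:
    target: https://concourse.example.com               # placeholder
    teams:
    - name: main
      username: ((concourse-user))
      password: ((concourse-password))

jobs:
- name: update-pipelines
  plan:
  - get: pipelines-repo
    trigger: true
  - put: running-pipelines
    params:
      pipelines:
      - name: my-pipeline                               # placeholder
        team: main
        config_file: pipelines-repo/pipeline.yml
```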
fly validate-pipeline is pretty useful; running it against pipelines before merging has caught a few bugs in "obviously correct" changes for me.
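For example, a small Concourse task can validate every pipeline file before merge. This sketch assumes an image that has the fly CLI on its PATH (the image name and the pipelines/ directory are placeholders); validate-pipeline only parses the config, so it doesn't need a running Concourse:

```yaml
# Hypothetical task: validate all pipeline configs before merge.
platform: linux
image_resource:
  type: registry-image
  source:
    repository: example/fly-cli   # placeholder image containing fly
inputs:
- name: pipelines-repo
run:
  path: sh
  args:
  - -ec
  - |
    for f in pipelines-repo/pipelines/*.yml; do
      echo "validating $f"
      fly validate-pipeline -c "$f"
    done
```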
If you want to test the whole pipeline before merging you need to make sure that the data it's using is static and working (no sense in failing the pipeline if it's the repo that's broken), and that there are no side effects (like notifications) shared between the 'real pipeline' and the 'test pipeline'. I suspect that as long as you're careful with the restrictions, you could make it work, but it would have to be designed in the context of your existing pipelines and infrastructure.