Is it possible to release a full set of potentially dependent artifacts (microservices) at once? - azure-devops

Traditionally we have to deliver our applications to the test and pre-production platforms one by one (usually by hand, using installers). Applications like the front-end JavaScript SPA UI are tied to back-end services, and their deliveries sometimes go together.
Each service and each application has its own Git repository. (We are using on-premises TFS 2018 for now.)
Then, when it is time to go into production, we deliver all of the validated front-end applications and back-end services at once.
We would like to automate our process, but we don't know whether Azure DevOps is suitable.
From what I understand of Azure DevOps, we can make an independent artifact for each microservice and each front-end application, and we can also deliver them independently.
It seems to me that, by default, Azure DevOps lets you manage the delivery cycle of a particular microservice, but not of an assembly making up a complete system, right?
But is it possible to deliver a set of projects, each at a particular version? For that, must all of our projects be in the same solution or the same Git repository?

Yes, you can use multiple artifacts from different sources (build artifacts, repositories, package feeds, GitHub, Docker Hub, Azure Container Registry, and more) within a single pipeline or release definition. That's true for both the classic release definitions and the modern multi-stage pipeline implementation.
For example, you can define a pipeline or release definition that consumes a front-end web app from a build artifact sourced from RepoA, a back-end service artifact consumed from a container registry and originally built from RepoB, and, say, a script library in the form of a Git artifact from RepoC. From there you can deploy those artifacts together, in parallel stages, in sequence, partially, with approvals, conditionally, etc., all from the same pipeline.
The full configuration-as-code YAML multi-stage pipelines are still in preview, so some workflow orchestrations are a little tougher to implement. But there is enough feature parity with the classic release definitions that I would default to multi-stage pipelines for any net-new needs.
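As a minimal sketch of that shape, assuming YAML multi-stage pipelines, with all repository, pipeline, and environment names as placeholders:

    resources:
      repositories:
        - repository: scripts        # Git artifact: the script library from RepoC
          type: git
          name: MyProject/RepoC
      pipelines:
        - pipeline: frontend         # build artifact produced by RepoA's CI pipeline
          source: RepoA-CI

    stages:
      - stage: Deploy
        jobs:
          - deployment: DeployAll
            environment: test
            strategy:
              runOnce:
                deploy:
                  steps:
                    - checkout: scripts       # fetch the Git artifact
                    - download: frontend      # download the build artifact
                    - script: echo "deploy the front end and back end together"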

How to automate Azure data factory pipeline deployments

I want to automate Azure data factory pipeline deployments.
I have Self-Hosted Integration Runtimes with a different name in each environment (i.e. SHIR-{environment}).
I have different data sources and destinations for each environment (i.e. different SQL Server names or hostnames).
How can I perform automatic weekly deployments to promote changes from the GitHub dev branch to stage, and from stage to production? I don't want to have to modify these database server names in linked services during the GitHub PR merge.
To set up automated deployment, start with an automation tool such as Azure DevOps. Azure DevOps provides various interfaces and tools to automate the entire process.
A development data factory is created and configured with Azure Repos Git. All developers should have permission to author Data Factory resources like pipelines and datasets.
A developer creates a feature branch to make a change. They debug their pipeline runs with their most recent changes. For more information on how to debug a pipeline run, see Iterative development and debugging with Azure Data Factory.
After a developer is satisfied with their changes, they create a pull request from their feature branch to the main or collaboration branch to get their changes reviewed by peers.
After a pull request is approved and changes are merged into the main branch, the changes get published to the development factory.
When the team is ready to deploy the changes to a test or UAT (User Acceptance Testing) factory, the team goes to their Azure Pipelines release and deploys the desired version of the development factory to UAT. This deployment takes place as part of an Azure Pipelines task and uses Resource Manager template parameters to apply the appropriate configuration.
After the changes have been verified in the test factory, deploy to the production factory by using the next task of the pipelines release.
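A sketch of what that deployment task could look like, assuming the generated factory ARM templates are available to the release; the service connection, resource names, and override parameter names are illustrative and will differ per factory:

    steps:
      - task: AzureResourceManagerTemplateDeployment@3
        inputs:
          deploymentScope: 'Resource Group'
          azureResourceManagerConnection: 'my-arm-connection'
          subscriptionId: '$(subscriptionId)'
          resourceGroupName: 'rg-adf-uat'
          location: 'West Europe'
          csmFile: '$(Pipeline.Workspace)/adf/ARMTemplateForFactory.json'
          csmParametersFile: '$(Pipeline.Workspace)/adf/ARMTemplateParametersForFactory.json'
          # environment-specific values are overridden here rather than edited
          # in linked services during the PR merge (parameter names are
          # illustrative; check your generated template for the real ones)
          overrideParameters: >-
            -factoryName "adf-uat"
            -SHIR_properties_typeProperties_linkedInfo "SHIR-uat"
            -LS_AzureSqlDatabase_connectionString "Server=uat-sql;Database=app"
          deploymentMode: 'Incremental'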
For more information, see the Azure Data Factory documentation on continuous integration and delivery.

Multiple Projects and One Solution - DevOps Best Practice

For example, I have one solution containing multiple projects. At a basic level, the projects are:
Solution
REST API Web Application
Admin UI Web Application
Shared Libraries
If I want to have an Azure DevOps pipeline for the REST API and the Admin UI, what would be the best approach?
1. If anything changed in the solution, trigger a build for everything and deploy BOTH web apps
2. CI/CD triggers for each deployable project; i.e., if only the Admin UI changes, then only its build runs
I like #2. I am thinking that if only the REST API changes, then only the REST API triggers CI/CD for itself. If the Admin UI changes, only its CI/CD triggers. If the Shared Libraries change, both CI/CDs trigger.
I believe this can be done using path filters as triggers. However, what if in the future I add a new SharedLibrary2? I would need to edit the pipeline trigger for that new project. So I am not sure anymore whether this is good practice (I might forget to add the trigger?).
To answer your question on best practice: break the deployment up in a way that minimizes potential impact while still ensuring functionality, within reason, and maintains a consistent, quality stream of integrated code.
I'd recommend combining the approaches if you want to maintain control over both scenarios. This can be accomplished by breaking each deployment into its own stage and then tying that stage to an environment that requires approval. For Dev I'd recommend doing a full CD to catch anything that might fail, then use approvals on your additional environments. Your pipeline could look like:
Build Stage:
Build and publish REST API Web Application
Build and publish Admin UI Web Application
Build and publish Shared Libraries
Deploy Rest API Stage [Dev]
Job to deploy API
Deploy Admin UI Web Stage [Dev]
Job to deploy UI
Deploy Shared Libraries Stage [Dev]
Job to deploy Shared Libraries
Deploy Rest API Stage [UAT]...
Deploy Admin UI Web Stage [UAT]...
Deploy Shared Libraries Stage [UAT]...
Repeat this pattern for each environment, with environment approvals configured. This allows a complete deployment in Dev every time without approvals, and, in the additional environments, the ability to approve just the deployments that are needed. Additionally, if deploying to multiple geos/instances, you can call that out in the jobs for each stage so the different components can scale. The jobs and tasks can also be templatized to maximize reusability and reduce copy-and-paste.
Also, if you do this, I would highly recommend using YAML templates, as they allow you to define the jobs/steps needed for a stage once and reuse them (see the sketch below).
Here is how to set up environment approvals.
Here is how to configure stage templates.
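A minimal sketch of such a stage template and its reuse; the file names, parameters, and environment names are illustrative:

    # deploy-stage.yml: a reusable stage template
    parameters:
      - name: app
        type: string
      - name: env
        type: string

    stages:
      - stage: Deploy_${{ parameters.app }}_${{ parameters.env }}
        jobs:
          - deployment: Deploy
            # approvals are configured on the environment itself, not in YAML
            environment: ${{ parameters.env }}
            strategy:
              runOnce:
                deploy:
                  steps:
                    - script: echo "deploying ${{ parameters.app }} to ${{ parameters.env }}"

    # azure-pipelines.yml: one stage per app/environment pair
    stages:
      - template: deploy-stage.yml
        parameters:
          app: RestApi
          env: Dev
      - template: deploy-stage.yml
        parameters:
          app: AdminUi
          env: Dev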
There is no one way to address this. What you should consider is: am I always going to deploy all the apps together? If yes, you can go with one pipeline. If not, you should go with separate pipelines.
The second solution is a bit more complicated; however, you can make it easier using these features:
path filters; to avoid having to change the trigger when you add the next shared lib, do this:
    trigger:
      branches:
        include:
        - master
        - releases/*
      paths:
        include:
        - sharedA
        exclude:
        - '*'
use pipeline resource triggers; they allow you to run the Rest API and Admin UI pipelines when the shared lib pipeline runs.
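A minimal sketch of such a trigger, placed in the Rest API (and likewise the Admin UI) pipeline; the pipeline names are placeholders:

    # runs this pipeline whenever the shared-lib pipeline completes on master
    resources:
      pipelines:
        - pipeline: sharedLib
          source: SharedLib-CI       # the shared library's pipeline name
          trigger:
            branches:
              include:
                - master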
But if your projects are small, consider the first approach, as it probably runs fast enough and saves you the time of maintaining a complex pipeline structure.

confusion on Azure DevOps pipelines

I've recently been working on switching from on-premises TFS to Azure DevOps and trying to learn more about the different pipelines, and I think I may have had my Build pipeline do too much.
Currently I have my Build pipeline:
Get source code from the repo
Run database scripts / deploy DACPACs
Copy files over to virtual machines that already have the web application set up
Run unit/integration tests
Publish the test results
I repeat these steps, nearly identically, multiple times: once for the develop branch, and once each for the current and previous release branches.
But if I want to take advantage of the Releases and Deployments areas, what would that really get me?
It looks like it would make it easier to say that yes, this code did make it out to this dev/beta environment.
I'm working with ColdFusion code that includes some .NET web services within the repo. Would I have to make an artifact that zips up the repo and then deploy it, or is there a better way to take advantage of the release pipeline?
It's not necessary to make an artifact that zips up the repo for deployment. There are several types of tools you might use in your application lifecycle to produce or store artifacts; for example, you might use version control systems such as Git or TFVC to store your artifacts. You can configure Azure Pipelines to deploy artifacts from multiple sources. Check the following link for more details:
https://learn.microsoft.com/en-us/azure/devops/pipelines/release/artifacts?view=azure-devops#sources
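For instance, rather than zipping the whole repo, the build could publish just the deployable files as a pipeline artifact for a release pipeline to consume; a sketch, with the paths, file patterns, and artifact name as placeholders:

    steps:
      # stage only what the web servers need, then publish it as a pipeline artifact
      - task: CopyFiles@2
        inputs:
          SourceFolder: '$(Build.SourcesDirectory)'
          Contents: |
            **/*.cfm
            **/*.cfc
            webservices/**
          TargetFolder: '$(Build.ArtifactStagingDirectory)'
      - publish: '$(Build.ArtifactStagingDirectory)'   # PublishPipelineArtifact shorthand
        artifact: webapp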

Pipeline artifacts in .NET client libraries for Azure DevOps Services (and TFS)

Originally posted on GitHub.
We are using the .NET client libraries for Azure DevOps Services (and TFS) in some custom tools. BuildHttpClient.GetArtifactContentZipAsync does not work for the new pipeline artifacts. Which HttpClient do I use to download this type of artifact?
I am afraid there are no such .NET client libraries for Pipeline artifacts.
As we know, Pipeline artifacts:

Pipeline artifacts provide a way to share files between stages in a pipeline or between different pipelines.

When we share files between stages in a pipeline, it is just like a "copy" inside the pipeline, much like a Windows copy instruction. So this operation has no client libraries implementing it.
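To illustrate that "copy inside the pipeline", artifacts move between stages with built-in steps rather than a client library; a minimal sketch:

    stages:
      - stage: Build
        jobs:
          - job: build
            steps:
              - publish: '$(Build.ArtifactStagingDirectory)'   # PublishPipelineArtifact shorthand
                artifact: drop
      - stage: Deploy
        dependsOn: Build
        jobs:
          - job: deploy
            steps:
              - download: current     # DownloadPipelineArtifact shorthand
                artifact: drop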
You can infer related information from the document's "Keep in mind" note:

If you plan to consume the artifact from a job running on a different operating system or file system, you must ensure all file paths in the artifact are valid for the target environment. For example, a file name containing a \ or * character will typically fail to download on Windows.
On the other hand, I have checked the source code of azure-pipelines-tasks, and there is no code implementing it there either.
Hope this helps.

How to complete CI/CD pipelines with Azure DevOps for Azure API Management

I need help understanding better how to create complete CI/CD with Azure DevOps for APIM. I have already explored the tools and read the docs:
https://github.com/Azure/azure-api-management-devops-resource-kit
But I still have questions. My scenario:
An APIM dev instance with APIs, operations, and other settings created, and an APIM prod instance created but still empty.
I ran the extractor tool and got the templates (not all of them); I still need the master (linked) templates, and this is where my doubt sits. I also already have two repos (dev and prod).
How can I create the master template, and how will my changes from the dev environment be automatically applied to prod?
I didn't use the policyXMLBaseUrl parameter and am not sure what path to insert there, although it seems #miaojiang inserted a folder from Azure Storage.
After some searching and several attempts I deployed APIs and operations from one environment to another, but we still don't have a fully automated scenario where I make a change in one instance and it automatically becomes available in the other. Is it necessary to edit policies and definitions directly in the repositories, or to run the extractor tool again?
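For reference, the deployment leg of such a setup can be sketched as a pipeline task that applies one of the extracted templates to the prod instance. This is a sketch only: it assumes the extracted templates are committed to a repo and expose the resource kit's usual ApimServiceName parameter, and every other name below is a placeholder.

    steps:
      - task: AzureResourceManagerTemplateDeployment@3
        inputs:
          deploymentScope: 'Resource Group'
          azureResourceManagerConnection: 'my-arm-connection'
          subscriptionId: '$(subscriptionId)'
          resourceGroupName: 'rg-apim-prod'
          location: 'West Europe'
          csmFile: 'templates/apis.template.json'      # produced by the extractor
          overrideParameters: '-ApimServiceName "apim-prod"'
          deploymentMode: 'Incremental'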