Deployment scenario of a git-integrated Azure Data Factory deployed via ARM template

What happens if you have multiple features being tested in the test environment of an ADF V2 data factory and only one or a few of them are ready for production deployment? How do we handle this type of deployment scenario in the Microsoft-recommended CI/CD model of a git/VSTS-integrated ADF V2 deployed through an ARM template?
Consider dev, test, and prod environments of ADF V2. The dev environment is git integrated. The developers have debugged their changes and merged them into the collaboration branch after a pull request. The changes are published and deployed to the test environment first. Many features are being tested there, but only a few are ready for prod. How do we move the ones that are ready, since the ARM template takes the entire factory?

This is somewhat of a strange question. You can apply the same logic to anything: how do you release a single feature of an application when the application is only deployed as a single entity? The answer would be: use git flow or something akin to it. Use feature branches and promotions, so that only the features merged into the release line end up in what gets published and deployed (see the sketch below).
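To make that concrete, here is a minimal Azure Pipelines sketch, assuming the standard ADF setup where publishing the collaboration branch writes ARMTemplateForFactory.json and ARMTemplateParametersForFactory.json to the adf_publish branch. The selection of what reaches production happens in git (only merge what is ready), and the release simply deploys whatever has been published; the service connection, resource group, factory and folder names below are placeholders.

```yaml
# Hypothetical deployment pipeline: deploys only what has been merged,
# published and therefore exported to the adf_publish branch.
trigger:
  branches:
    include:
      - adf_publish        # branch that ADF "Publish" writes the ARM template to

pool:
  vmImage: 'ubuntu-latest'

steps:
  - checkout: self

  - task: AzureResourceManagerTemplateDeployment@3
    inputs:
      deploymentScope: 'Resource Group'
      azureResourceManagerConnection: 'prod-service-connection'   # placeholder
      subscriptionId: '$(prodSubscriptionId)'                     # placeholder variable
      action: 'Create Or Update Resource Group'
      resourceGroupName: 'rg-adf-prod'                            # placeholder
      location: 'West Europe'
      templateLocation: 'Linked artifact'
      # ADF publishes the templates into a folder named after the dev factory.
      csmFile: '$(Build.SourcesDirectory)/my-dev-factory/ARMTemplateForFactory.json'
      csmParametersFile: '$(Build.SourcesDirectory)/my-dev-factory/ARMTemplateParametersForFactory.json'
      overrideParameters: '-factoryName "my-prod-factory"'        # placeholder
      deploymentMode: 'Incremental'
```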

Related

Implementing SemVer versioning on Azure Function App Deployment with GitLab-CI pipeline

I'm looking for a GitLab CI pipeline sample for deploying an Azure Function App written in .NET 6.0 and deployed with Terraform (IaC), which lives in a separate repository.
I am able to build and deploy, but I haven't found a good reference for implementing versioning on the app project repo.
The versioning has the following requirements:
Versioning starts with every deployment from the master branch.
The version is manually incremented on the QA deployment, which happens from the master branch.
The version also needs to be created as a tag on the master branch.
It needs to support rollback as well.
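No answer is shown here, but a minimal GitLab CI sketch of the kind of job layout those requirements imply might look like the following; the job names, the APP_VERSION and PUSH_TOKEN variables, and the deploy command are assumptions, not something taken from the question.

```yaml
# Hypothetical .gitlab-ci.yml: build on master, manual QA deployment,
# then tag the deployed commit so it can be redeployed for rollback.
stages:
  - build
  - deploy-qa
  - tag

build:
  stage: build
  script:
    - dotnet publish -c Release -o publish        # placeholder build step
  artifacts:
    paths:
      - publish/
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'

deploy_qa:
  stage: deploy-qa
  script:
    # Placeholder: the real step would run terraform apply / az commands.
    - echo "Deploying version $APP_VERSION to QA"
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
      when: manual                                # manual promotion to QA

tag_release:
  stage: tag
  script:
    # Tag the commit that was deployed; rollback is re-deploying an older tag.
    # Pushing requires a token with write access (e.g. a project access token)
    # exposed as the PUSH_TOKEN CI/CD variable.
    - git tag "v$APP_VERSION"
    - git push "https://oauth2:${PUSH_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" "v$APP_VERSION"
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
```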

How to automate Azure data factory pipeline deployments

I want to automate Azure data factory pipeline deployments.
I have Self Hosted Integration runtimes with a different name in each environment (i.e. SHIR-{environment}).
I have different data sources and destinations for each environment (i.e. different SQL server names or hostnames).
How can I perform automatic weekly deployments to promote changes from the GitHub dev branch to stage and from stage to production? I don't want to modify these database server names in linked services during the GitHub PR merge.
To set up automated deployment, start with an automation tool, such as Azure DevOps. Azure DevOps provides various interfaces and tools in order to automate the entire process.
A development data factory is created and configured with Azure Repos Git. All developers should have permission to author Data Factory resources like pipelines and datasets.
A developer creates a feature branch to make a change. They debug their pipeline runs with their most recent changes. For more information on how to debug a pipeline run, see Iterative development and debugging with Azure Data Factory.
After a developer is satisfied with their changes, they create a pull request from their feature branch to the main or collaboration branch to get their changes reviewed by peers.
After a pull request is approved and changes are merged in the main branch, the changes get published to the development factory.
When the team is ready to deploy the changes to a test or UAT (User Acceptance Testing) factory, the team goes to their Azure Pipelines release and deploys the desired version of the development factory to UAT. This deployment takes place as part of an Azure Pipelines task and uses Resource Manager template parameters to apply the appropriate configuration.
After the changes have been verified in the test factory, deploy to the production factory by using the next task of the pipelines release.
For more information, follow this link.
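For illustration, the Resource Manager template parameter step mentioned above could look like the hedged Azure Pipelines task below. The service connection, resource group and parameter names are placeholders; the actual parameter names depend on how the linked services are parameterized in the factory's ARM template.

```yaml
# Hypothetical stage/prod deployment step: the same factory template is
# deployed to each environment with environment-specific parameter values,
# so linked service server names never have to be edited during a PR merge.
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'stage-service-connection'   # placeholder
    subscriptionId: '$(stageSubscriptionId)'                     # placeholder
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'rg-adf-stage'                            # placeholder
    location: 'West Europe'
    templateLocation: 'Linked artifact'
    csmFile: '$(Pipeline.Workspace)/adf/ARMTemplateForFactory.json'
    csmParametersFile: '$(Pipeline.Workspace)/adf/ARMTemplateParametersForFactory.json'
    # Parameter names below are placeholders. If the self-hosted IR name
    # (SHIR-{environment}) also differs per environment, it can be exposed as
    # a parameter via a custom parameterization template, or the IR can be
    # shared/linked across factories instead.
    overrideParameters: >-
      -factoryName "adf-stage"
      -AzureSqlLinkedService_connectionString "Server=tcp:sql-stage.database.windows.net;Initial Catalog=appdb;"
    deploymentMode: 'Incremental'
```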

SonarQube + Azure DevOps + Pipeline as Code - Is it possible?

The company I work for recently purchased SonarQube Enterprise to improve code quality throughout all repositories. I found out that there is a feature that enables SonarQube to comment automatically on PRs targeting a specific branch, and I successfully managed to try that out.
Thing is:
That configuration is not scalable: I would need to manually configure every repo to follow that rule
That configuration needs a build pipeline to be defined "old school" on Azure DevOps to work, and we are moving into Pipeline as Code, starting of course with CI (where this takes place)
Has anyone managed to get the PR commenting working in that scenario? Or, at least, solved problem #1?
Cheers
You can use the REST APIs to apply whatever configuration you need across your repositories. Refer to the REST API documentation.
Shouldn't matter, although I haven't tested it. The SonarQube tasks aren't aware of whether the build source is YAML or visual designer/classic/JSON builds. The underlying tasks and job running architecture is the same. As long as the build is hooked up to a branch policy, it should still work.
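To illustrate that answer, here is a hedged sketch of a YAML pipeline using the SonarQube extension tasks; the service connection name, project key and build step are placeholders, and PR decoration still relies on attaching this pipeline to the target branch as a build validation policy.

```yaml
# Hypothetical YAML CI pipeline with the SonarQube extension tasks. The tasks
# behave the same as in a classic/designer build definition.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: SonarQubePrepare@5
    inputs:
      SonarQube: 'MySonarQubeServiceConnection'   # service connection (placeholder)
      scannerMode: 'MSBuild'
      projectKey: 'my-project-key'                # placeholder
      projectName: 'My Project'

  - script: dotnet build MySolution.sln           # placeholder build step
    displayName: 'Build'

  - task: SonarQubeAnalyze@5

  - task: SonarQubePublish@5
    inputs:
      pollingTimeoutSec: '300'
```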

Is it possible to release a full set of potentially dependent artifacts (microservices) at once?

Traditionally we have to deliver our applications to the test and pre-production platforms one by one (usually by hand, using setup programs). Applications like the front-end JavaScript SPA UI are linked to back-end services, and their delivery sometimes goes together.
Each service and each application has its own git repository. (We are using on-premises TFS 2018 for now.)
Then, when it is necessary to go into production, we deliver all of the validated front-end applications and services at once.
We would like to automate our process, but we don't know if Azure DevOps is suitable.
From what I understand, with Azure DevOps we can make an independent artifact for each microservice and each front-end application. We can also deliver them independently.
It seems to me that Azure DevOps by default allows you to manage the delivery cycle for a particular microservice, but not for an assembly making up a complete system, right?
But is it possible to deliver a set of projects, each with a particular version? For that, must all of our projects be in the same solution or the same git repository?
Yes, you can use multiple artifacts from different sources (build artifacts, repositories, package feeds, GitHub, Docker Hub, Azure Container Registry, and more) within a single pipeline or release definition. That's true for both the classic release definitions and the modern multi-stage pipeline implementation.
For example, you can define a pipeline or release definition that consumes a front-end web app from a build artifact sourced from RepoA, a back-end service artifact consumed from a container registry originally built from RepoB, and, say, a script library in the form of a Git artifact from RepoC. From there you could deploy each of those artifacts together, or in parallel stages, in sequence, partially, with approvals, conditionally, etc., all from the same pipeline.
The full configuration-as-code YAML multi-stage pipelines are still in preview, so there are some workflow orchestrations that are a little tougher to implement. But there is enough feature parity with the classic release definitions that I would default to multi-stage pipelines for any net-new needs.
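As a hedged sketch of what that could look like in a multi-stage YAML pipeline (the pipeline aliases, repository names and environment are placeholders):

```yaml
# Hypothetical pipeline consuming artifacts from several repositories/builds
# and deploying them together from a single definition.
resources:
  pipelines:
    - pipeline: frontend          # build artifact produced by RepoA's CI pipeline
      source: 'RepoA-CI'
    - pipeline: backend           # build artifact produced by RepoB's CI pipeline
      source: 'RepoB-CI'
  repositories:
    - repository: scripts         # script library pulled straight from RepoC
      type: git
      name: MyProject/RepoC

stages:
  - stage: DeployAll
    jobs:
      - deployment: deploy_everything
        environment: 'production'         # placeholder environment with approvals
        pool:
          vmImage: 'ubuntu-latest'
        strategy:
          runOnce:
            deploy:
              steps:
                - download: frontend      # front-end build artifact
                - download: backend       # back-end build artifact
                - checkout: scripts       # script library repository
                - script: echo "Deploy front end, back end and run scripts here"
```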

How to complete CI/CD pipelines with Azure DevOps for Azure API Management

I need help to better understand how to create a complete CI/CD process with Azure DevOps for APIM. I have already explored the tools and read the docs:
https://github.com/Azure/azure-api-management-devops-resource-kit
But I still have questions. My scenario:
An APIM dev instance with APIs, operations, and other settings created, as well as an APIM prod instance that has been created but is empty.
I ran the extractor tool and got the templates (not all of them); I still need the master (linked) template, and this is where my doubt lies. I also already have two repos (dev and prod).
How can I create the master template, and how will my changes from the dev environment be automatically applied to prod?
I didn't use the policyXMLBaseUrl parameter and I'm not sure what path to insert there, although it seems #miaojiang inserted a folder from Azure storage.
After some searching and attempts I deployed APIs and operations from one environment to another, but we still don't have a fully automated scenario where I make a change in one instance and it is automatically available in the other. Is it necessary to edit policies and definitions directly in the repositories, or to run the extractor tool again?
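For context, one hedged way to wire this up with the resource kit is: run the extractor against the dev instance, commit the generated master and linked templates to the repo, upload the linked templates and policy XML files to a storage container, and have a pipeline deploy the master template to the prod instance. A minimal Azure CLI step for that last part might look like the sketch below; every name, URL and parameter value is a placeholder, and the exact parameter names depend on the version of the resource kit used for extraction.

```yaml
# Hypothetical deployment step for the extracted APIM master template.
- task: AzureCLI@2
  inputs:
    azureSubscription: 'apim-prod-service-connection'   # placeholder
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # The linked templates and policy XML files are assumed to have been
      # copied to a storage container that the master template can reach
      # (append a SAS token to the URLs if the container is private).
      az deployment group create \
        --resource-group rg-apim-prod \
        --template-uri "https://mystorage.blob.core.windows.net/templates/master.template.json" \
        --parameters apimServiceName=apim-prod \
                     linkedTemplatesBaseUrl="https://mystorage.blob.core.windows.net/templates" \
                     policyXMLBaseUrl="https://mystorage.blob.core.windows.net/policies"
```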