Azure Data Factory designer in Visual Studio 2019 project

I've run into a release pipeline issue with Azure Data Factory. For most types of Azure entities (such as a web app or a SQL project), I can develop the project in Visual Studio and use ADO to create a build and releases that deploy the project to Azure. Using variables and libraries, I can define the appropriate release variables on a per-environment basis and ensure that the build I deploy is the same at each stage of the release pipeline (i.e., dev -> tst -> stg -> prd). You know, the normal continuous integration/continuous deployment process?
However, with Azure Data Factory it seems that I have to create the data factory directly in the Azure Portal for one environment and then create it again for the next environment in the pipeline (I know I can export and re-import it manually).
What I would like is to be able to create an Azure Data Factory project in Visual Studio (2019), maintain it in Visual Studio with a designer similar to the one in the Azure portal, check it into Git, and deploy it with a build and release in ADO.
Since creating an Azure Data Factory project doesn't seem possible (or am I missing something?), what is the recommended approach to working with Azure Data Factory in a continuous integration/continuous deployment ADO environment?

ADF v2 does not have a plugin for Visual Studio; most of the effort has gone into the UX side.
We recommend using the ADF UI as your development tool, where you can define your workflows easily and validate your changes.
For CI/CD, you can integrate your dev factory with Git and then set up CI/CD as described here:
https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment.
Every publish to the dev factory can then trigger your release pipeline, which takes the build and deploys it to the remaining stages.
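As a rough sketch of that flow: a YAML pipeline along the following lines could pick up the ARM templates that ADF generates on the adf_publish branch and deploy them to the next environment (assuming the pipeline YAML is available on that branch). The service connection, resource group, factory names, and folder path below are placeholder assumptions, not values from the question:

    # Sketch: deploy the factory ARM template whenever ADF publishes.
    trigger:
      branches:
        include:
          - adf_publish            # ADF writes generated ARM templates to this branch

    steps:
      # Deploy the generated template to the target environment's factory.
      - task: AzureResourceManagerTemplateDeployment@3
        inputs:
          deploymentScope: 'Resource Group'
          azureResourceManagerConnection: 'my-service-connection'   # placeholder
          subscriptionId: '$(subscriptionId)'
          resourceGroupName: 'rg-adf-tst'                           # placeholder
          location: 'West Europe'
          templateLocation: 'Linked artifact'
          # adf_publish contains a folder named after the dev factory.
          csmFile: 'MyDevFactory/ARMTemplateForFactory.json'
          csmParametersFile: 'MyDevFactory/ARMTemplateParametersForFactory.json'
          overrideParameters: '-factoryName "adf-tst"'              # per-environment name
          deploymentMode: 'Incremental'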

Related

How to use ClickOnce custom prerequisites with Azure DevOps Pipelines

We have a ClickOnce application we are trying to deploy with Azure DevOps pipelines.
We have figured out the deployment process, including creating the manifests and signing them.
The problem is that we now also want to use the prerequisites option of ClickOnce with some custom packages.
After reading the docs, we created a custom bootstrapper package, and it was displayed in Visual Studio. When we publish the app from Visual Studio, the custom package is added, but if we publish via the DevOps pipeline it is ignored. This makes sense, because the build server doesn't know about the custom package.
For example, one prerequisite is the Microsoft OLE DB Driver for SQL Server.
One idea would be to not use the option "Download prerequisites from the component vendor's web site", and instead host the exe/msi files ourselves and link to them.
Another option could be the support URL for individual prerequisites, but I don't know how to apply this to other applications.
Does somebody have an idea how custom prerequisites for ClickOnce can be added on an Azure DevOps build server/pipeline?
You could try using the Azure CLI to upload the package (with its setup.bin file) to an Azure DevOps artifact feed as a Universal Package.
Create a feed first; then you can use az artifacts universal publish to upload the package.
Then you can use the "Universal packages" task in your Azure DevOps pipelines to download the package for use.
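For illustration, the same publish can also be done with the built-in Universal Packages task instead of a raw az artifacts universal publish call (the task handles feed authentication for you). The feed and package names here are made up:

    steps:
      # Publish the custom bootstrapper package to a Universal Packages feed.
      - task: UniversalPackages@0
        inputs:
          command: 'publish'
          publishDirectory: '$(Build.ArtifactStagingDirectory)/bootstrapper'
          feedsToUsePublish: 'internal'
          vstsFeedPublish: 'my-feed'                    # placeholder feed name
          vstsFeedPackagePublish: 'oledb-bootstrapper'  # placeholder package name
          versionOption: 'patch'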
As you said, you can publish with the custom package successfully from Visual Studio. If you use the VSBuild@1 task in your pipeline, as the docs recommend for deploying ClickOnce, you can point MSBuild at the downloaded custom package by adding its path, something like this:
/p:GenerateBootstrapperSdkPath=$(System.DefaultWorkingDirectory)\bootstrapper
I hope this helps.
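Putting it together, the download and build steps might look roughly like this, reusing the placeholder names from above:

    steps:
      # Pull the bootstrapper package down onto the build agent.
      - task: UniversalPackages@0
        inputs:
          command: 'download'
          downloadDirectory: '$(System.DefaultWorkingDirectory)/bootstrapper'
          feedsToUse: 'internal'
          vstsFeed: 'my-feed'                    # placeholder feed name
          vstsFeedPackage: 'oledb-bootstrapper'  # placeholder package name
          vstsPackageVersion: '*'                # latest published version

      # Build and publish the ClickOnce project, pointing MSBuild at the
      # folder that now holds the custom bootstrapper packages.
      - task: VSBuild@1
        inputs:
          solution: '**/*.sln'
          msbuildArgs: /target:publish /p:GenerateBootstrapperSdkPath=$(System.DefaultWorkingDirectory)\bootstrapper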

How to automate Azure data factory pipeline deployments

I want to automate Azure data factory pipeline deployments.
I have self-hosted integration runtimes with a different name in each environment (i.e., SHIR-{environment}).
I have different data sources and destinations in each environment (i.e., different SQL Server names or hostnames).
How can I perform automatic weekly deployments that promote changes from the GitHub dev branch to stage, and from stage to production? I don't want to have to modify these database server names in linked services during the GitHub PR merge.
To set up automated deployment, start with an automation tool, such as Azure DevOps. Azure DevOps provides various interfaces and tools in order to automate the entire process.
A development data factory is created and configured with Azure Repos Git. All developers should have permission to author Data Factory resources like pipelines and datasets.
A developer creates a feature branch to make a change. They debug their pipeline runs with their most recent changes. For more information on how to debug a pipeline run, see Iterative development and debugging with Azure Data Factory.
After a developer is satisfied with their changes, they create a pull request from their feature branch to the main or collaboration branch to get their changes reviewed by peers.
After a pull request is approved and the changes are merged into the main branch, the changes are published to the development factory.
When the team is ready to deploy the changes to a test or UAT (User Acceptance Testing) factory, the team goes to their Azure Pipelines release and deploys the desired version of the development factory to UAT. This deployment takes place as part of an Azure Pipelines task and uses Resource Manager template parameters to apply the appropriate configuration.
After the changes have been verified in the test factory, deploy to the production factory by using the next task of the pipelines release.
For more information, see the Azure Data Factory continuous integration and delivery documentation: https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment.
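To keep the environment-specific values (SHIR names, SQL Server names) out of the source, the release stage can override the parameters in the ARM template that ADF generates. A minimal sketch, assuming one variable group per environment; the parameter names are illustrative and must match whatever your generated ARMTemplateParametersForFactory.json actually exposes:

    stages:
      - stage: UAT
        variables:
          - group: adf-uat          # variable group with this environment's values
        jobs:
          - deployment: DeployFactory
            environment: 'uat'
            strategy:
              runOnce:
                deploy:
                  steps:
                    - task: AzureResourceManagerTemplateDeployment@3
                      inputs:
                        deploymentScope: 'Resource Group'
                        azureResourceManagerConnection: 'my-service-connection'  # placeholder
                        subscriptionId: '$(subscriptionId)'
                        resourceGroupName: '$(resourceGroupName)'
                        location: '$(location)'
                        templateLocation: 'Linked artifact'
                        csmFile: '$(Pipeline.Workspace)/adf/ARMTemplateForFactory.json'
                        csmParametersFile: '$(Pipeline.Workspace)/adf/ARMTemplateParametersForFactory.json'
                        # Illustrative parameter names; take the real ones from
                        # the generated parameters file.
                        overrideParameters: >-
                          -factoryName "$(factoryName)"
                          -AzureSqlLinkedService_connectionString "$(sqlConnectionString)"
                        deploymentMode: 'Incremental'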

Self-hosted IR unavailable after ARM deployment

We are trying to use a self-hosted integration runtime to extract data from an on-premises file share. To implement CI/CD, I created ARM templates from the data factory where the IR is working successfully, and enabled sharing to the data factory into which I am going to deploy my pipelines using the ARM templates. I can successfully deploy the pipelines, the self-hosted IR, and the linked services, but the IR is not available in the new data factory's connections.
Is this normal? To use CI/CD with Data Factory, shouldn't we be ready to run pipelines without manual changes as soon as the ARM deployment completes? If so, can anyone explain why the IR isn't available in the new data factory, which makes the pipeline fail when I try to run it?
Self-hosted integration runtimes are tied to the ADF instance they are created in.
To use CI/CD with a self-hosted IR, you need to do the following steps:
1. Create a new data factory, separate from the ones used in the CI/CD process, and create the self-hosted integration runtime there. (This factory doesn't need to contain any of your pipelines or datasets.)
2. Go to the newly created integration runtime and click the edit (pencil) icon, then go to the Sharing tab of the window that opens.
3. Click "Grant permission to another Data Factory" and grant permission to every ADF involved in the CI/CD process.
4. Copy the resource ID displayed. Go to the DEV data factory and create a new self-hosted runtime of type Linked.
5. Enter the resource ID when asked and click Create.
6. Then proceed to set up the CI/CD process from the DEV data factory.
The ARM template will then create the linked self-hosted IR in all the other data factories, and provided you granted the permissions, everything will work.
A self-hosted integration runtime is 'owned' by exactly one data factory instance, and the 'owner' and 'sharer' factories define the IR differently. When you deployed one factory's template over the other, the type changed and you ended up with either two 'owners' or two 'sharers'. Since there can only be one 'owner', and a 'sharer' must point to an 'owner', things break.

Azure DevOps CI/CD pipeline for a PowerBuilder 2017 R3 project: is it even possible?

Summary
We recently migrated PB12.6 apps to PB2017 and changed source control to Azure DevOps Git.
Now I'd like to integrate an Azure DevOps CI/CD build pipeline into the app dev life cycle.
Jenkins
I know it's feasible to configure a Jenkins CI server so that it builds PB2017 projects:
Continuous Integration with PowerBuilder 2017, Bonobo Git and Jenkins
My problem here is that I can't get it to work in a local Docker container and make it accessible to the outside world (the Internet) so that Azure DevOps can trigger its build action. Supposedly it's a Docker for Windows thing, which Docker handles differently from Linux-based Docker.
Azure DevOps Pipeline
As per this link, Azure Pipelines is the CI/CD solution for any language, any platform, any cloud. It says any language, which makes me believe it's feasible to build PB2017 projects using Azure DevOps Pipelines.
The fact is that I'm totally new to implementing CI/CD myself. I've experienced it on many projects where I wasn't responsible for implementing it; now I am. I've been at it for a few days now, and I really want to get it working.
Any help appreciated.
The Appeon official user forum: https://community.appeon.com/index.php/qna/q-a

Deploy SSDT package via VSTS and cannot drop objects?

I am successfully deploying an SSDT package in my VSTS release. When publishing directly from Visual Studio, there is a flag, "Drop objects not in source", under advanced settings, but I cannot figure out how to introduce this in my CI/CD pipeline. There are a bunch of refactoring tools, but none of them address this, so I am stuck with objects on the SQL Server that have been deleted from the source.
Any suggestions?
Pass the command-line argument /p:DropObjectsNotInSource=true when publishing the database, or use a publish profile that contains the setting.
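For example, in a YAML pipeline using the Azure SQL deployment task, the flag can go into the task's additional SqlPackage.exe arguments (the classic release editor has an equivalent "Additional SqlPackage.exe Arguments" box). The server, database, and file names below are placeholders:

    steps:
      # Deploy the dacpac, dropping objects that no longer exist in source.
      - task: SqlAzureDacpacDeployment@1
        inputs:
          azureSubscription: 'my-service-connection'   # placeholder
          ServerName: 'myserver.database.windows.net'  # placeholder
          DatabaseName: 'MyDatabase'                   # placeholder
          SqlUsername: '$(sqlUser)'
          SqlPassword: '$(sqlPassword)'
          DacpacFile: '$(Pipeline.Workspace)/drop/MyDatabase.dacpac'
          # Mirrors the Visual Studio publish option; consider also
          # /p:BlockOnPossibleDataLoss=true in production.
          AdditionalArguments: '/p:DropObjectsNotInSource=true'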