How to deploy/filter the respective server base endpoint in Swagger - azure-devops

I have YAML/JSON files with the base server endpoint defined, as seen in the screenshot below.
How do we filter only the respective base URL for a specific environment?
For instance:
Server: dev files should be deployed to the DEV environment, Stage files to the Stage environment, and so on.
Note: I'm using an Azure pipeline for deployment.

In your current situation, the DevOps pipeline does not have a built-in option to do this. We recommend creating a new Generic service connection for each environment and using it in the corresponding deploy steps.
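A minimal sketch of how that could look in the pipeline, assuming one Generic service connection per environment and one Swagger file per environment; the stage, file, and connection names below (swagger.dev.json, dev-generic-connection, and so on) are placeholders, not part of the original setup:

```yaml
# Sketch: one stage per environment, each deploying only the Swagger/OpenAPI
# file whose server base URL matches that environment.
stages:
- stage: DeployDev
  variables:
    swaggerFile: 'swagger.dev.json'             # file whose servers URL points at DEV
    serviceConnection: 'dev-generic-connection' # hypothetical Generic service connection
  jobs:
  - deployment: Deploy
    environment: DEV
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "Deploying $(swaggerFile) using $(serviceConnection)"

- stage: DeployStage
  dependsOn: DeployDev
  variables:
    swaggerFile: 'swagger.stage.json'
    serviceConnection: 'stage-generic-connection'
  jobs:
  - deployment: Deploy
    environment: Stage
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "Deploying $(swaggerFile) using $(serviceConnection)"
```

The actual deploy step would then consume the stage's Generic service connection in whatever task pushes the Swagger definition to the target server.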

Related

Configuring FHIR OSS to use a specific database name

I am deploying the Microsoft Open Source FHIR server to Azure using the supplied ARM templates (which I have converted to Bicep templates).
I want to deploy a test instance and a prod instance (in different resource groups), but I would like them to use the same Cosmos DB account (which is in a third resource group).
Whilst you can point a deployment at an existing Cosmos DB account, presumably the database names would clash.
In principle this seems possible if you could configure the name of the database to be used by a deployment.
Any suggestions or ideas?
Many thanks,
Andreas.
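For illustration, the idea in the question could look roughly like this, assuming the converted Bicep template were extended with a parameter for the Cosmos DB database name; the parameter names serviceName and databaseName are hypothetical and not necessarily exposed by the stock FHIR templates, and all resource names are placeholders:

```powershell
# Each environment deploys the same Bicep template into its own resource group,
# pointing at the shared Cosmos DB account but using a different database name.
# Requires Az PowerShell with the Bicep CLI available.
New-AzResourceGroupDeployment `
  -ResourceGroupName 'rg-fhir-test' `
  -TemplateFile '.\fhir-server.bicep' `
  -serviceName 'myfhir-test' `
  -databaseName 'FhirTest'

New-AzResourceGroupDeployment `
  -ResourceGroupName 'rg-fhir-prod' `
  -TemplateFile '.\fhir-server.bicep' `
  -serviceName 'myfhir-prod' `
  -databaseName 'FhirProd'
```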

Azure DevOps Pipeline as Code and Deployment Group

I'm facing the challenge of using the same resource (a VM in my company) for my whole dev environment. That means that multiple apps will be deployed there.
I have:
https://dev.azure.com/mycompany/project1 with a Pipeline as Code for CI/CD & environment called D-Stage
https://dev.azure.com/mycompany/project2 with a Pipeline as Code for CI/CD & environment called D-Stage
Since I can't use a Deployment group, each time I register the VM as a resource of each project's pipeline, if I don't change the registration name, one registration replaces the other, and the last registration is the only one that keeps a connection to the VM.
On the other hand, if I create a new registration, I get a new Azure agent per project.
What should be the right way to handle the scenarios since Deployment Group is not supported in YAML files?
If we want to add multiple resources to a VM environment, and the resources refer to the same VM, we need to modify the agent name in the registration script; otherwise the resource with the same agent name will replace the previously registered resource.
By modifying the agent name (--agent $env:COMPUTERNAME) in the registration script, we can register multiple agents in a VM environment:
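A sketch of the relevant tail end of the generated registration script; the organization URL, project names, environment name, and token are placeholders, and the only intended change from the generated script is the --agent value:

```powershell
# Run each registration from its own agent folder on the same VM.
# Only the --agent value is changed so that each project registers a uniquely
# named agent instead of overwriting the previous registration.

# Registration generated by project1:
.\config.cmd --environment --environmentname "D-Stage" `
  --agent "$env:COMPUTERNAME-project1" `
  --runasservice --work '_work' `
  --url 'https://dev.azure.com/mycompany/' --projectname 'project1' `
  --auth PAT --token '<PAT>'

# Registration generated by project2, run from a second folder:
.\config.cmd --environment --environmentname "D-Stage" `
  --agent "$env:COMPUTERNAME-project2" `
  --runasservice --work '_work' `
  --url 'https://dev.azure.com/mycompany/' --projectname 'project2' `
  --auth PAT --token '<PAT>'
```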

Create service connection and use the same in next stage of azure devops multi stage yaml pipeline

I have two YAML templates defined: one for creating a Docker registry service connection and a second for deploying some stuff via a container job. The second template uses the Docker registry connection that is created by the first template. When I run both templates separately, both stages are successful, but when I run them in one azure-pipelines.yaml, it fails:
There was a resource authorization issue: "The pipeline is not valid. A service connection with name shared-stratus-acr-endpoint could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."
Is there any option, like dependsOn or a condition, that we can provide in this situation?
It's likely that you only authorized the service connection for the individual template/pipelines when you created them. The workflow is not super friendly.
Try explicitly authorizing the failing pipeline for that service connection. See the docs here.
You could also just authorize the service connection for all pipelines depending on your security needs.

Desired state configuration

I have two web servers, one service server, and a database server, and all of these servers are domain joined. I have set up my private build agent in VSTS, from where I can build my artifacts based on build configuration. All my DEV, QA and STAGING environments are set up on those servers.
My problem is that I am looking for a way, using PowerShell Desired State Configuration, to copy the environment-specific artifacts (DEV, QA and STAGING) to a specific location on those two web servers and ensure the website is configured correctly with all the required permissions (these artifacts are used to host an IIS website), to delete and recreate a particular Windows service on the service server, and to perform the migration activities for a particular database on the database server, since I have a separate database for each environment.
Any kind of help or suggestion would be appreciated. thank you.
My suggestions are:
Don't use DSC for deployment (i.e. deploy applications or databases)
Use DSC for configuration (e.g. install IIS); see the sketch after this list
Install the VSTS Agent on each server in Deployment Groups mode, running as a service with local administrator privileges
Use the IIS Deploy Tasks designed for Deployment Groups
Use the PowerShell Task to manage the Windows services (tip: help *-Service)
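For the "use DSC for configuration, not deployment" point, a minimal sketch of a configuration that only ensures IIS is present; the configuration name, node name, and feature list are illustrative:

```powershell
# Baseline configuration: ensure IIS and ASP.NET are installed on a web server.
# Deployment of the artifacts themselves is left to the Deployment Group tasks.
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
        WindowsFeature AspNet45 {
            Name   = 'Web-Asp-Net45'
            Ensure = 'Present'
        }
    }
}

# Compile the MOF and apply it locally (e.g. from a pipeline PowerShell step).
WebServerBaseline -OutputPath '.\WebServerBaseline'
Start-DscConfiguration -Path '.\WebServerBaseline' -Wait -Verbose
```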

Azure Data Factory v2 parameters for connection string

I am new to Azure Data Factory v2 and have a few general questions about transforming connection strings / linked services when deploying to multiple environments.
Coming from SSIS background:
we used to define connection strings as project parameters. This allowed transforming the connection string when deploying the artifacts onto different environments.
How can I accomplish the same using Azure Data Factory v2 ?
Is there an easy way to do this?
I was trying to set up linked services with connection strings as parameters, which could then be passed along with the triggers. Is this feasible?
This feature is now available; see the URL below. Are you the one who requested the feature? :)
https://azure.microsoft.com/en-us/blog/parameterize-connections-to-your-data-stores-in-azure-data-factory/
Relating to SSIS (where we would use configuration files, .dtsconfig, for deployment to different environments), for ADFv2 (and ADFv1 too) we could look into using ARM templates: for each environment (dev, test and prod) a separate deployment parameter file (.json) can be created, and the deployments scripted with PowerShell. It is possible to use ARM template parameters to parameterize connections to linked services and other environment-specific values. There are also ADFv2-specific PowerShell cmdlets for creating/deploying ADFv2 pipelines.
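A short sketch of that approach, assuming the ADF ARM template has been exported with one parameter file per environment; the file and resource group names are placeholders:

```powershell
# Deploy the same ADF ARM template to each environment, letting the
# environment-specific parameter file supply the connection values.
New-AzResourceGroupDeployment `
  -ResourceGroupName 'rg-adf-test' `
  -TemplateFile '.\arm_template.json' `
  -TemplateParameterFile '.\arm_template_parameters.test.json'

New-AzResourceGroupDeployment `
  -ResourceGroupName 'rg-adf-prod' `
  -TemplateFile '.\arm_template.json' `
  -TemplateParameterFile '.\arm_template_parameters.prod.json'
```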
With the ADFv2 UI, VSTS Git integration is possible, and so are deployment and integration through it. VSTS Git integration allows you to choose a feature/development branch or create a new one in the VSTS Git repository. Once the changes are merged into the master branch, they can be published to the data factory using the ADFv2 UI.
I ended up solving this by setting up an Azure Key Vault per environment, each holding a connection string secret (more details here: https://learn.microsoft.com/en-us/azure/data-factory/store-credentials-in-key-vault)
- dev
  - dev-azure-datafactory
  - dev-key-vault
    - key: db-conn-string
      value: dev-db.windows.net
- qa
  - qa-azure-datafactory
  - qa-key-vault
    - key: db-conn-string
      value: qa-db.windows.net
- production
  - prod-azure-datafactory
  - prod-key-vault
    - key: db-conn-string
      value: prod-db.windows.net
In Azure Data Factory
Define an Azure Key Vault linked service
Use the Azure Key Vault linked service when defining the connection string(s) for other linked services (sketched below)
This approach removes the need to change any parameters in the actual linked service
The connection string backed by the Azure Key Vault linked service can be changed as part of your Azure pipeline deployment (more details here: https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment)
Each Azure Data Factory can be given access to its Azure Key Vault using MSI (we automated this with Terraform in our case)
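A rough sketch of such a linked service definition and its deployment with the ADFv2 PowerShell cmdlets, following the Key Vault pattern from the linked docs; the linked service name, resource group, and AzureKeyVaultLinkedService reference are placeholders, while db-conn-string and dev-azure-datafactory match the listing above:

```powershell
# Linked service whose connection string is resolved at runtime from the
# environment's Key Vault secret 'db-conn-string' via the AKV linked service.
$definition = @'
{
  "name": "SqlDbLinkedService",
  "properties": {
    "type": "AzureSqlDatabase",
    "typeProperties": {
      "connectionString": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "AzureKeyVaultLinkedService",
          "type": "LinkedServiceReference"
        },
        "secretName": "db-conn-string"
      }
    }
  }
}
'@
Set-Content -Path '.\SqlDbLinkedService.json' -Value $definition

# Deploy the linked service to the dev factory; the same definition works for
# qa/prod because only the Key Vault behind it differs per environment.
Set-AzDataFactoryV2LinkedService `
  -ResourceGroupName 'rg-dev' `
  -DataFactoryName 'dev-azure-datafactory' `
  -Name 'SqlDbLinkedService' `
  -DefinitionFile '.\SqlDbLinkedService.json' `
  -Force
```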