Configuring FHIR OSS to use a specific database name - deployment

I am deploying the Microsoft Open Source FHIR server to Azure using the supplied ARM templates (which I have converted to Bicep templates).
I want to deploy a test instance and a prod instance (in different resource groups), but I would like them to use the same Cosmos DB account (which is in a third resource group).
Whilst you can point a deployment at an existing Cosmos DB account, presumably the database names would clash.
In principle this seems possible if you could configure the name of the database to be used by a deployment.
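For example, purely a sketch of what I have in mind (the `databaseName` parameter, and the Cosmos DB database-id app setting it would feed, are my assumptions, not something the stock templates expose):

```powershell
# Sketch only: deploy test and prod from the same Bicep template, pointing both
# at the shared Cosmos DB account but overriding the database name per
# environment. 'databaseName' and the app setting it would feed (e.g. the FHIR
# server's CosmosDb:DatabaseId) are assumed, not part of the stock templates.
New-AzResourceGroupDeployment `
    -ResourceGroupName 'rg-fhir-test' `
    -TemplateFile './main.bicep' `
    -TemplateParameterObject @{
        cosmosDbAccountName = 'shared-cosmos'   # existing account in the third RG
        databaseName        = 'fhir-test'       # 'fhir-prod' for the prod instance
    }
```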
Any suggestions or ideas?
Many thanks,
Andreas.

Related

Can we use different runtimes in an Azure Data Factory v2 (ADFv2) Copy Activity?

I have a copy activity where the source is an on-premises Oracle database connected through a self-hosted IR, and the destination is Azure Synapse connected via the Azure runtime. These runtimes are defined in the connections (linked services).
But at execution time, the pipeline uses the self-hosted runtime throughout, overriding the runtime configured for Azure Synapse, and because of that the connection fails.
Is this the default behavior? Can't I run a pipeline with two different runtimes?
Thanks @wBob, but I am sorry, that is not true; I found the answer in the Microsoft documentation:
Copying between a cloud data source and a data source in private network: if either source or sink linked service points to a self-hosted IR, the copy activity is executed on that self-hosted Integration Runtime.
Ref: https://learn.microsoft.com/en-us/azure/data-factory/concepts-integration-runtime#determining-which-ir-to-use
Integration runtimes are defined at the linked service level, so you should have one linked service definition for your Oracle database and a separate linked service definition for your Azure Synapse Analytics (formerly known as Azure SQL Data Warehouse). That way you can specify a different integration runtime for each, e.g. something like the sketch below.
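A minimal sketch using the Az.DataFactory PowerShell module (factory, runtime, and file names are illustrative, not from the question). The Oracle linked service pins the self-hosted IR via `connectVia`; a Synapse linked service defined without `connectVia` resolves to the Azure IR instead:

```powershell
# Linked service definition pinned to the self-hosted IR via 'connectVia'.
$oracleLs = @'
{
  "name": "OracleOnPrem",
  "properties": {
    "type": "Oracle",
    "connectVia": {
      "referenceName": "MySelfHostedIR",
      "type": "IntegrationRuntimeReference"
    },
    "typeProperties": { "connectionString": "<on-prem Oracle connection string>" }
  }
}
'@
Set-Content -Path './OracleOnPrem.json' -Value $oracleLs

# Register the linked service with the data factory.
Set-AzDataFactoryV2LinkedService -ResourceGroupName 'rg-adf' -DataFactoryName 'my-adf' `
    -Name 'OracleOnPrem' -DefinitionFile './OracleOnPrem.json'
```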
NB: Azure Synapse Analytics uses the AutoResolve runtime and does not need a self-hosted integration runtime (SHIR), as it is a native PaaS service. Are you getting a specific error? If so, please post details.

Using Managed Identity on Azure SQL Managed Instance for Dacpac deployment in Azure DevOps

I am trying to configure Azure Key Vault and set up managed identities for use in a CI/CD pipeline in Azure DevOps.
I have looked around in the MSDN documentation, but I have only found specific links for use with Azure SQL, and we are using Azure SQL Managed Instance.
If I have not misunderstood, you want to use a managed identity with Azure SQL Managed Instance? If so, unfortunately, managed identities do not work with Azure SQL Managed Instance. Please see this doc: Services that support managed identities for Azure resources. It lists in detail all of the Azure services that support managed identities.
You can see that for SQL Database, only the integration with Azure SQL is supported, not Azure SQL Managed Instance. That is why you only see the doc link for usage with Azure SQL.
As of now, Azure SQL Managed Instance supports only two authentication methods (see the sketch after this list):
SQL Authentication: this authentication method uses a username and password.
Azure Active Directory Authentication: this authentication method uses identities managed by Azure Active Directory and is supported for managed and integrated domains. Use Active Directory authentication (integrated security) whenever possible.
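A rough sketch of what the two methods look like from PowerShell (server and database names are illustrative, and the Active Directory Integrated keyword assumes a recent System.Data.SqlClient on a domain-joined, AAD-integrated client):

```powershell
# 1. SQL authentication: username and password travel in the connection string.
$sqlAuth = 'Server=tcp:my-mi.1234abcd.database.windows.net,1433;Database=mydb;User ID=sqladmin;Password=<password>;'

# 2. Azure AD integrated authentication: no credentials in the string; the
#    caller's domain/AAD identity is used instead.
$aadAuth = 'Server=tcp:my-mi.1234abcd.database.windows.net,1433;Database=mydb;Authentication=Active Directory Integrated;'

$conn = [System.Data.SqlClient.SqlConnection]::new($aadAuth)
$conn.Open()
# ... run queries ...
$conn.Close()
```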
You can refer to this thread: Managed Identity with Azure SQL Managed Instance?. In that thread, our engineer provided some workarounds if you are trying to configure the app with a managed identity.

Desired State Configuration

I have two web servers, one service server, and a database server, and all of these servers are domain-joined. I have set up my private VSTS build agent, from which I build my artifacts per build configuration. All my DEV, QA, and STAGING environments are set up on those servers.
My problem is that I am looking for a way, using PowerShell Desired State Configuration, to take the environment-specific artifacts (DEV, QA, or STAGING) and: copy them to a specific location on the two web servers, ensuring the IIS website hosted from them is configured correctly with all the required permissions; delete and recreate a particular Windows service on the service server; and perform the migration activities for the corresponding database on the database server, since I have a separate database for each environment.
Any kind of help or suggestion would be appreciated. Thank you.
My suggestions are:
Don't use DSC for deployment (i.e. deploy applications or databases)
Use DSC for configuration (e.g. install IIS; see the sketch after this list)
Install the VSTS Agent on each server in Deployment Groups mode, running as a service with local administrator privileges
Use the IIS Deploy Tasks designed for Deployment Groups
Use the PowerShell Task to manage the Windows Services (tip: help *-Service)
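As an illustration of the configuration-vs-deployment split, a minimal DSC sketch that only ensures IIS is present on the web servers (node names are illustrative); the artifacts themselves would then be pushed by the IIS deploy tasks:

```powershell
Configuration WebServerBaseline
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    # Apply the same baseline to both web servers (names are illustrative).
    Node @('WEB01', 'WEB02')
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        WindowsFeature AspNet45
        {
            Ensure = 'Present'
            Name   = 'Web-Asp-Net45'
        }
    }
}

# Compile the MOFs and push the configuration to the nodes.
WebServerBaseline -OutputPath '.\WebServerBaseline'
Start-DscConfiguration -Path '.\WebServerBaseline' -Wait -Verbose
```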

What to use in a production environment instead of UserSecrets

I have a console app in .NET Core. I use appsettings.Development.json and appsettings.Staging.json for the dev and staging environments, but for the production environment I use UserSecrets. I have two problems. First, when the app is running in the production environment it does not create the UserSecrets folder under %APPDATA%/Microsoft, so I have to create it manually, and then it starts to work.
Another part of my question is this:
Today I found out that Microsoft wrote here:
The Secret Manager tool is used only in development. You can safeguard Azure test and production secrets with the Microsoft Azure Key Vault configuration provider. See Azure Key Vault configuration provider for more information.
I don't have Azure. What can I use in production if I am not supposed to use UserSecrets?
While environment variables are one of the most widely used options in web development, and The Twelve-Factor App document states "Store config in the environment", there are some reasons why this may not be the best approach:
The environment is implicitly available to the process, and access to it is hard to track. As a result, you may end up in a situation where your error reports contain your secrets.
The whole environment is passed down to child processes (if not explicitly filtered), so your secret keys are implicitly made available to any third-party tools that may be used.
All of this is part of the reason why products like Vault have become popular.
So, yes, you may use environment variables, but be aware of these caveats.
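If you do go the environment-variable route, here is a minimal sketch of feeding a secret to a .NET Core console app without UserSecrets (the variable and app names are illustrative):

```powershell
# A double underscore in the variable name maps to the ':' separator in .NET
# configuration keys, so this surfaces to the app as ConnectionStrings:Default.
$env:ConnectionStrings__Default = 'Server=myserver;Database=mydb;User ID=app;Password=<secret>;'
$env:DOTNET_ENVIRONMENT = 'Production'

# Launch the app from this session so it inherits the variables above.
dotnet ./MyConsoleApp.dll
```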
For storing secure data in your app: if you're using Azure, then Azure Key Vault is your answer; see Microsoft Azure Key Vault.
If you're using K8s, you can store secrets via a CSI driver.
Otherwise, use OS environment variables.

Programmatically download RDP file of Azure Resource Manager VM

I am able to create a VM from a custom image using the Azure Resource Manager SDK for .NET. Now, I want to download the RDP file for the virtual machine programmatically. I have searched and was able to find the REST API for Azure 'Classic' deployments, which contains an API call to download the RDP file, but I can't find the same in the REST API for 'ARM' deployments. I also can't find any such method in the .NET SDK for Azure.
Does any way exist to achieve this? Please guide.
I don't know of a way to get the RDP file, but you can get all the information you need from the deployment itself. On the deployment, you can set outputs for the values you need, like the public IP's DNS name. See this:
https://github.com/bmoore-msft/AzureRM-Samples/blob/master/VMCSEInstallFilePS/azuredeploy.json#L213-215
If your environment is more complex (load balancers, network security groups) you need to account for port numbers, etc.
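As an illustration of that approach, a short sketch with the Az PowerShell module (resource names are illustrative): read the VM's public IP or DNS name and write a basic .rdp file yourself, since an .rdp file is just a text file of settings.

```powershell
# Look up the public IP resource created by the deployment.
$pip = Get-AzPublicIpAddress -ResourceGroupName 'rg-vms' -Name 'myvm-pip'

# Prefer the DNS name if one was configured, otherwise fall back to the IP.
$address = if ($pip.DnsSettings.Fqdn) { $pip.DnsSettings.Fqdn } else { $pip.IpAddress }

# Emit a minimal .rdp file (key:type:value lines); adjust the port if you are
# behind a load balancer with a NAT rule.
@(
    "full address:s:${address}:3389"
    'prompt for credentials:i:1'
) | Set-Content -Path './myvm.rdp'
```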