DevOps pipeline release to another tenant - azure-devops

We have two tenants, one for nonprod and one for prod resources, each with its own directory. The DevOps project connects to the nonprod directory. We run a container workload in App Service and use build and release pipelines for CI/CD. Question: how do we release to the production tenant?

If you go to Project Settings -> Service Connections, add a new service connection, choose Azure Resource Manager as the type, and then "Azure Resource Manager using service principal (manual)", you should be able to specify everything you need to connect to the tenant and subscription to which you wish to deploy.
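Once the manual service connection exists, a YAML pipeline can reference it by name in a deployment task. A minimal sketch for the container-in-App-Service workload the question describes, assuming a service connection named prod-tenant-connection, an App Service named my-prod-app, and an image already pushed to a registry (all names are placeholders):

```yaml
# Deploy stage targeting the production tenant via the manual service connection.
# 'prod-tenant-connection', 'my-prod-app', and the image name are hypothetical.
stages:
- stage: DeployProd
  jobs:
  - deployment: DeployContainer
    environment: production
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebAppContainer@1
            inputs:
              azureSubscription: 'prod-tenant-connection'   # the manual ARM service connection
              appName: 'my-prod-app'
              containers: 'myregistry.azurecr.io/myapp:$(Build.BuildId)'
```

Note the image must be in a registry the production tenant can pull from (or the App Service must hold registry credentials), since the container itself is not copied by the deployment task.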

Related

Does setting up SSL for Azure DevOps on-premises impact already-configured deployment group agents using HTTP?

We have Azure DevOps set up using HTTP and are planning to move to HTTPS. As the deployment group agent targets were set up using the HTTP site, does this impact deployments that use deployment group agents?
The deployment group agents were configured on the servers using PowerShell referring to the HTTP site; does that mean we need to reconfigure the agents?
Yes, you will need to reconfigure all agents.
Basically, the project collection URL for your instance will change.
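Reconfiguring typically means removing the agent's existing registration and re-registering it against the new HTTPS URL. A sketch of what that might look like, run from each agent's installation folder on the target servers (server, project, group, and PAT values are placeholders):

```shell
# From the deployment group agent folder on each target server.
# Remove the existing registration (still pointing at the http URL):
.\config.cmd remove --auth PAT --token <your-pat>

# Re-register against the new https collection URL:
.\config.cmd --deploymentgroup --url https://your-server/DefaultCollection `
  --projectname "YourProject" --deploymentgroupname "YourGroup" `
  --auth PAT --token <your-pat> --runasservice
```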

How to automatically deploy Azure VMs and AKS to an environment?

I wrote Terraform code to create an infrastructure on Azure.
I used the provider "microsoft/azuredevops".
I need to add VMs and Azure Kubernetes Service to a specific environment that I created.
My question is: how do I deploy the registration script (which applies the tags) to them?
It isn't possible to deploy it from an Azure DevOps pipeline, because the pipeline doesn't know about the VMs (and AKS).
I don't see any other way to do it with the azuredevops provider.
The solution, I think, would be to extract the original script from an environment under Pipelines/Environments. I would need to change variables such as the personal access token, among others.
But I don't know whether Microsoft changes that script regularly.
What's the best solution?
Thank you.
If you want to manage Azure resources with Terraform, you need to use the AzureRM provider, not the Azure DevOps provider. The Azure DevOps provider is for managing your Azure DevOps instance itself.
The AzureRM provider contains resources for managing Linux and Windows VMs, and for other resource types such as AKS.
Once you've written your Terraform code, you can use a pipeline to run Terraform against Azure. Microsoft provides a Terraform extension that can be used to call Terraform in your pipeline.
For the pipeline to be able to authenticate against Azure, you'll need to set up a service connection. This allows the pipeline to use a service principal in Azure, which can be given the appropriate level of permissions to create, update, and destroy Azure resources.
HashiCorp has a good tutorial on getting started with Terraform and Azure, and Microsoft also has good documentation.
Microsoft also has a tutorial on using Terraform from a pipeline; it uses the classic GUI-based pipelines rather than YAML, but the tasks and principles are the same for both.
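Wired together, the YAML equivalent of those steps might look roughly like the sketch below, assuming the Microsoft DevLabs Terraform extension is installed, a service connection named azure-terraform, and an existing storage account for remote state (all names and versions are assumptions):

```yaml
# Sketch: run Terraform against Azure from a pipeline.
steps:
- task: TerraformInstaller@1
  inputs:
    terraformVersion: 'latest'
- task: TerraformTaskV4@4
  displayName: terraform init
  inputs:
    provider: 'azurerm'
    command: 'init'
    backendServiceArm: 'azure-terraform'            # service connection for the state backend
    backendAzureRmResourceGroupName: 'tfstate-rg'
    backendAzureRmStorageAccountName: 'tfstatestorage'
    backendAzureRmContainerName: 'tfstate'
    backendAzureRmKey: 'infra.tfstate'
- task: TerraformTaskV4@4
  displayName: terraform apply
  inputs:
    provider: 'azurerm'
    command: 'apply'
    environmentServiceNameAzureRM: 'azure-terraform'  # service principal used to create resources
```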

Azure DevOps YAML Environment Auto-Deploy Trigger for New Servers

I want to use an Azure DevOps YAML pipeline to deploy to an AWS stack with EC2 instances and a Load Balancer. I've read here that I can use the AWS userdata script to join new EC2 instances to the Azure DevOps Environment.
My question is, how can I get my Azure DevOps Environment or YAML build to deploy to new servers that join that group? For example, if I use auto-scaling and a new server spins up.
I know that Deployment Groups, which are used in the Classic pipelines, had a feature that allowed you to enable a post-deployment trigger that could redeploy the last successful build when a new server joined, like this.
Is this possible to do with YAML Environments? If so, how?
If it matters, I hope to be able to share the AWS stack and have several separate applications that will get deployed to the same stack with their own YAML builds.
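For context, the userdata-based registration the question refers to generally looks something like the sketch below: the new EC2 instance downloads the agent and registers itself as a resource in the Environment. Organization, project, environment name, and PAT are placeholders, and the agent download URL/version is only illustrative; note that registration alone does not trigger a redeployment:

```shell
#!/bin/bash
# Hypothetical EC2 userdata sketch: register this instance as a VM resource
# in an Azure DevOps Environment. All names, the PAT, and the agent version
# are placeholders.
mkdir -p /azagent && cd /azagent
curl -fsSL https://vstsagentpackage.azureedge.net/agent/3.232.0/vsts-agent-linux-x64-3.232.0.tar.gz | tar -xz
./config.sh --unattended --environment --environmentname "aws-stack" \
  --url https://dev.azure.com/myorg --projectname "MyProject" \
  --auth PAT --token "<pat>" --acceptTeeEula --work _work
sudo ./svc.sh install && sudo ./svc.sh start   # run the agent as a service
```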

Using permissions/other features to limit deployment in Azure DevOps

I need to configure permissions and make use of native features to limit deployment within Azure DevOps, so that those with limited access can only release to dev/test environments and those with privileged access can deploy to all environments, including staging/prod, for example.
I'd like to achieve this without splitting release pipelines up - is it best just to use pre-deployment approvals or is there a better way to remove the ability for those with limited access to deploy into prod, at all?
Can this be done by limiting access to service connections, for example? So a limited user would have 'User' access to the dev/test service connections but not staging/prod, as a safety net?
Just looking for some tips / best-practice advice.
Thanks.
You could use deployment groups to handle this.
A deployment group is a logical set of deployment target machines that have agents installed on each one. Deployment groups represent the physical environments; for example, "Dev", "Test", "UAT", and "Production". In effect, a deployment group is just another grouping of agents, much like an agent pool.
When authoring an Azure Pipelines or TFS Release pipeline, you can specify the deployment targets for a job using a deployment group. This makes it easy to define parallel execution of deployment tasks.
Deployment groups:
Specify the security context and runtime targets for the agents. As you create a deployment group, you add users and give them appropriate permissions to administer, manage, view, and use the group.
Let you view live logs for each server as a deployment takes place, and download logs for all servers to track your deployments down to individual machines.
Enable you to use machine tags to limit deployment to specific sets of target servers.
In addition, I suggest you take a look at this blog: Configuring your release pipelines for safe deployments, which covers multiple points:
Gradual rollout to multiple environments
Uniform deployment workflow for all environments
Manual approval for rollouts
Segregation of roles
Health check during roll out
Branch filters for deployments
Secure the pipelines
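In YAML pipelines, the service-connection safety net the question describes can be expressed by giving each stage its own service connection and environment; approvals and user permissions are then configured on the prod connection/environment in the UI. A rough sketch, with all connection, environment, and app names hypothetical:

```yaml
stages:
- stage: DeployDev
  jobs:
  - deployment: Dev
    environment: dev            # open to the whole team
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            inputs:
              azureSubscription: 'dev-connection'   # limited users hold the 'User' role here
              appName: 'my-app-dev'
- stage: DeployProd
  dependsOn: DeployDev
  jobs:
  - deployment: Prod
    environment: prod           # approvals + restricted access configured on the environment
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            inputs:
              azureSubscription: 'prod-connection'  # limited users have no access to this connection
              appName: 'my-app-prod'
```

With this shape, a user without access to prod-connection cannot deploy to prod even if they can queue the pipeline, since the stage fails authorization.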

Team Services deploy to on-premise Service Fabric without exposed endpoint

We have a Service Fabric cluster on-premise and would like to deploy code to it from Visual Studio Team Services. We use this cluster for testing and it does not have an endpoint exposed to the outside world. It is only accessible internally from inside our network.
From inside Team Services the normal way to deploy a Service Fabric application is with the "Service Fabric Application Deployment" task. This task requires a "Cluster Connection" parameter, or link to the Service Fabric Service endpoint that Team Services can access. On this cluster I can't provide an endpoint to the outside world, so this method won't work.
Is there a good, accepted way of accomplishing this? I'm considering having an agent on one of the Service Fabric nodes that can run a PowerShell script as part of the release process. If I could retrieve the artifacts from Team Services with this script, I believe the rest of the release would be relatively straightforward.
Is this a good line of thought, or is there a more straightforward way to deploy to Service Fabric from Team Services without exposing an endpoint?
We have the same setup and use VSTS. We set up an on-premises agent pool where the agent is inside our network. The agent is hooked up to VSTS, so builds and releases can be triggered from VSTS. The agent has access to the artifacts on VSTS and can download them for deployment. The only difference might be that we set up a Service Fabric endpoint instead of using PowerShell.
It's a very simple setup and works well for us. Good luck!
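The approach above can be expressed in a pipeline by pinning the job to the internal agent pool, so the Service Fabric deployment task runs from inside the network where the cluster endpoint is reachable. A sketch, with the pool, artifact, package path, and connection names all assumptions:

```yaml
jobs:
- job: DeploySF
  pool:
    name: 'OnPremPool'          # self-hosted agents inside the network
  steps:
  - download: current
    artifact: drop              # pull the build artifact onto the internal agent
  - task: ServiceFabricDeploy@1
    inputs:
      applicationPackagePath: '$(Pipeline.Workspace)/drop/MyAppPkg'
      serviceConnectionName: 'internal-sf-cluster'   # SF endpoint only reachable internally
```

The cluster connection still points at the Service Fabric endpoint, but since the agent resolves it from inside the network, nothing needs to be exposed externally.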