Azure DevOps Pipeline as Code and Deployment Group

I'm facing the challenge of using the same resource (a VM in my company) for all my dev environments, which means multiple apps will be deployed there.
I have:
https://dev.azure.com/mycompany/project1 with a Pipeline as Code for CI/CD & environment called D-Stage
https://dev.azure.com/mycompany/project2 with a Pipeline as Code for CI/CD & environment called D-Stage
Since I can't use a deployment group, each time I register the VM as a resource of each project's environment, if I don't change the registration name, one registration replaces the other and only the last one keeps its connection to the VM.
On the other hand, if I create a new registration, I get a new Azure Pipelines agent per project.
What is the right way to handle this scenario, given that deployment groups are not supported in YAML pipelines?

If we want to add multiple resources to a VM environment, and the resources all refer to the same VM, we need to modify the agent name in the registration script; otherwise a resource with the same agent name will replace the previously registered one.
By modifying the agent name (the --agent $env:COMPUTERNAME argument) in the registration script, we can register multiple agents on one VM:
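For example (a sketch based on the registration script Azure DevOps generates for a VM environment resource; the PAT is a placeholder):

    # Final line of the generated registration script, run from the extracted agent folder.
    # Give each project's registration a unique agent name so one does not replace the other:
    .\config.cmd --environment --environmentname "D-Stage" `
        --agent "$env:COMPUTERNAME-project1" --runasservice `
        --work "_work" --url "https://dev.azure.com/mycompany/" `
        --projectname "project1" --auth PAT --token "<personal-access-token>"

    # Repeat on the same VM with --agent "$env:COMPUTERNAME-project2" and
    # --projectname "project2" to register a second agent for project2.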

Related

Which user do Azure DevOps pipelines run as?

We have Azure DevOps pipelines to build and deploy various projects.
Recently, we wanted to use the "Azure file copy" pipeline task to copy some files to a blob storage.
This needs write access to the storage account over in Azure.
Our administrator says that the pipeline runs as whoever manually runs the pipeline. If this is true, we would have to give all devs and users read/write access to the blob storage, which would be crazy.
I assume he is wrong, and that pipelines run as a specific designated user no matter how they were kicked off. The question is, how do you find out what this user is for a given pipeline?
The "edit pipeline" page has a security tab near the top, and this lists a bunch of "Azure DevOps groups", which are presumably groups that have the ability to run the pipelines.
But where is the pipeline user defined?
Which user do Azure DevOps pipelines run as?
It depends on the context in which you are discussing the question.
If you mean within the DevOps service itself, then the user the pipeline runs as is not the one who triggers the pipeline (that is the default for the native DevOps service, unless you install some unusual extension or the pipeline has a special design), but this identity:
<Project Name> Build Service Account
"Run as someone" is just a property of a pipeline run. On the DevOps side, the pipeline runs as the build service account; you can see this clearly when a pipeline downloads or uploads an artifact: if that account has no permission, you can't do anything.
If you mean the operations the pipeline performs on the Azure side, then for the native DevOps service the "user" performing those operations is again not the person who triggers the pipeline. A DevOps pipeline consists of multiple tasks, and tasks generally interact with other services through a service connection (you can find these in Project settings).
There are many types of service connections. When interacting with services in Azure, this type is generally used:
Azure Resource Manager -> Service principal
When you create a service connection of this type on the DevOps side, Azure also creates an AAD app related to this service connection, and this AAD app corresponds to a service principal in Azure. In Azure, permissions are assigned to service principals or users, and your DevOps pipeline's operations against Azure are authorized through this service principal; they have nothing to do with anything else in DevOps. On the Azure side, this service principal can be considered the Azure-side "avatar" of the DevOps pipeline's tasks.
If you are interacting with Azure through pure code or script, then the identity follows the logic of that script/code.
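For illustration, this is roughly what a task does under the hood when it runs through an Azure Resource Manager service connection: it signs in as the service principal, not as the user who queued the run. A minimal Azure PowerShell sketch (all IDs and the secret are placeholders):

    # Sign in as the service connection's service principal.
    $secret = ConvertTo-SecureString '<client-secret>' -AsPlainText -Force
    $cred   = New-Object System.Management.Automation.PSCredential('<application-id>', $secret)
    Connect-AzAccount -ServicePrincipal -Credential $cred -Tenant '<tenant-id>'

    # From here on, every operation is authorized against the service principal's
    # Azure RBAC role assignments, not against the user who triggered the pipeline.
    Get-AzContext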
Our administrator says that the pipeline runs as whoever manually runs the pipeline. If this is true, we would have to give all devs and users read/write access to the blob storage, which would be crazy.
For the native DevOps service, of course not (unless the pipeline has a special design).
I believe the pipeline runs as the "agent". Who the "agent user" is depends, first, on whether you've chosen a Microsoft-hosted or a self-hosted agent to run your pipeline.
When running pipelines in Azure DevOps that work directly with Azure resources, you need an Azure Resource Manager service connection. The credentials used to create the service connection are the credentials the pipeline uses when it runs.
You can have your administrator provide an Azure AD account that has the permissions you need and then use that account to create the service connection for the pipeline. Once you have created the service connection, you can use its ResourceID in place of your azureSubscription.
Here is the link to the Microsoft documentation on creating a service connection.
Here is the link to the Microsoft documentation on the Azure file copy task, where you can verify that the service connection can be used in place of the azureSubscription value.
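If the copy still fails with an authorization error, the fix is to grant the service connection's service principal the required role on the storage account rather than granting individual users access. A hedged Azure PowerShell sketch (placeholder IDs):

    # Give the service connection's service principal write access to blobs
    # on the target storage account.
    New-AzRoleAssignment `
        -ApplicationId '<service-principal-application-id>' `
        -RoleDefinitionName 'Storage Blob Data Contributor' `
        -Scope '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>'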

How to deploy/filter the respective server base endpoint in Swagger

I have YAML/JSON files, and we have the base server endpoint defined as seen in the screenshot below.
How do we filter only the respective base URL for a specific environment?
For instance:
Server: dev files should be deployed to the DEV environment, stage files to the STAGE environment, and so on.
Note: I'm using an Azure pipeline for deployment.
In your current situation, the DevOps pipeline has no built-in function/option to do this. We recommend creating a new Generic service connection and using it in your different deploy steps.
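As a workaround, a small script step in the pipeline could trim the servers list per environment before deployment. A minimal sketch (the file name, and the assumption that each server entry's description names its environment, are hypothetical):

    param(
        [string]$Environment = 'DEV',           # e.g. DEV, STAGE
        [string]$SwaggerPath = '.\swagger.json'
    )

    $spec = Get-Content $SwaggerPath -Raw | ConvertFrom-Json

    # Keep only the server entry whose description matches the target environment.
    $spec.servers = @($spec.servers | Where-Object { $_.description -eq $Environment })

    $spec | ConvertTo-Json -Depth 20 | Set-Content $SwaggerPath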

VSTS secret variables are actually secret or not?

A VSTS build definition has the option to create a secret variable. How secret is that variable? Is it safe to store user credentials that are specific to a set of users? Can other users (who are not authorized to do so) decrypt that variable?
I came across this article.
Assuming users have build-modification access, is it possible to decrypt the variable?
Stored variables are as secure as the agent that runs the build and the integrity of your build definition.
Like you said, if a user can modify the build definition and has access to the secret, they can pass it to a PowerShell or a cURL task, etc. And if a user can take control of a build task's script, they can iterate over all available secrets (build tasks are considered trusted by the build system).
Consider that everyone who has write access to the agent's work directories can access all secrets available to the build definitions that execute on that agent. They can change the scripts used by build tasks and thus gain the same level of trust; any build that runs after such a change, until a new version of the task is pushed to the agent, is compromised. In theory, every build definition can "infect" the agent's _tasks folder as well. The best protection is to use the hosted pool or to regularly reset your agents' VMs.
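To make that first risk concrete, a minimal sketch (hypothetical variable and file names): a build step passes the secret as an argument, e.g. -Password $(MySecret), and a tampered task script can then persist it in clear text:

    param([string]$Password)

    # A modified task script can write the decrypted secret anywhere it likes:
    Set-Content -Path "$env:TEMP\leaked.txt" -Value $Password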
YAML build definitions combined with pull requests give you more control over the change/approval process for build definitions.
Using a variable library, you can reduce the number of people who can add secret variables to a build definition.
You must secure the agent pools and the variable libraries/build definitions so that only a limited set of trusted users can access these resources. Optionally, use single-use passwords that expire after a short time, or grant these permissions only temporarily.
Remember that all changes to build definitions, variable libraries and scripts in the Git repository are tracked.
The alternative ways of getting access to the secrets do not apply to Azure DevOps, as no one has access to the application tier in Azure and access is strictly monitored by Microsoft.

Desired state configuration

I have two web servers, one service server and a database server, and all of these servers are domain-joined. I have set up a private VSTS build agent from which I build my artifacts per build configuration, and my DEV, QA and STAGING environments are all set up on those servers.
My problem is that I am looking for a way, using PowerShell Desired State Configuration, to: copy the environment-specific artifacts (DEV, QA and STAGING) to a specific location on the two web servers and ensure the website is configured correctly with all required permissions (the artifacts are used to host an IIS website); delete and recreate a particular Windows service on the service server; and perform the migration activities for the corresponding database on the database server, since I have a separate database for each environment.
Any kind of help or suggestion would be appreciated. Thank you.
My suggestions are:
Don't use DSC for deployment (i.e. to deploy applications or databases)
Use DSC for configuration (e.g. to install IIS); see the sketch after this list
Install the VSTS agent on each server in deployment-groups mode, running as a service with local administrator privileges
Use the IIS deployment tasks designed for deployment groups
Use the PowerShell task to manage the Windows services (tip: help *-Service)
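For the configuration part, a minimal DSC sketch (node name and feature list are placeholders, not your exact setup):

    Configuration WebServerConfig {
        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node 'localhost' {
            # Ensure IIS and ASP.NET are installed on the web servers.
            WindowsFeature IIS {
                Ensure = 'Present'
                Name   = 'Web-Server'
            }
            WindowsFeature AspNet45 {
                Ensure = 'Present'
                Name   = 'Web-Asp-Net45'
            }
        }
    }

    # Compile the configuration to a MOF file and apply it.
    WebServerConfig -OutputPath .\WebServerConfig
    Start-DscConfiguration -Path .\WebServerConfig -Wait -Verbose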

VSTS Deployment to a deployment group from a UNC share

I am using visualstudio.com Team Services to build and deploy an ASP.NET website to two Azure VMs.
I have a build which on completion triggers a release to my two servers in a deployment group. When you configure a Deployment Group for Visual Studio Team Services you create an agent that by default runs as NT AUTHORITY\SYSTEM.
If I publish my build artifacts to Azure (the server option), then everything works fine and deployment succeeds on both my VMs. However, when using a file drop I get the following error:
The artifact directory does not exist: \\MACHINE1\drop\RRStore\20170517.20. It can happen if the password of the account NT AUTHORITY\SYSTEM is changed recently and is not updated for the agent.
This is basically saying that MACHINE2 cannot access \\MACHINE1\drop due to permissions. In Windows I can open this folder just fine, but since the agent is running as NT AUTHORITY\SYSTEM it cannot access it.
I want to use a file drop because my website is about 250 MB (although in the meantime I am using the 'publish to server' option and deploying via Team Services).
I am unclear how to grant permissions on the file drop, though, as the agent is running as SYSTEM. I am running in a WORKGROUP, and giving permissions to 'Everyone' does not seem to work.
What is the correct way to configure access to a VSTS drop folder so that the deployment agent can access it?
A few possible options:
Set up a domain (I tried doing this, but then I need a new network interface and it sounds clunky)
Continue using Team Services to deploy the artifacts (or reduce the website size!)
Save to a storage account, but again I'm not sure how to configure that
Run as a different user account
I have had similar problems when deploying with VSTS. Instead, I chose to:
Run the VSTS agent on the deployment group VM as a local user with limited access.
Impersonate that account on the deployment group VM to test its access to the drop folder.
Save/cache a different credential for accessing the drop folder if applicable (so the sensitive information stays on the VM); see the sketch after this list.
The cached credential can be a different local user account created on the drop server just for this purpose.
Grant the local user explicit access to the required parts of the file system, to limit the access permissions of this VSTS agent service account.
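A minimal sketch of the test-and-cache steps (machine, share and account names are hypothetical):

    # 1. Verify that a dedicated drop-share account can reach the share:
    $cred = Get-Credential 'MACHINE1\dropuser'   # local account created on the drop server
    New-PSDrive -Name Drop -PSProvider FileSystem -Root '\\MACHINE1\drop' -Credential $cred
    Test-Path 'Drop:\'
    Remove-PSDrive -Name Drop

    # 2. Cache that credential in the profile of the user the agent runs as,
    #    so releases can reach the share non-interactively:
    cmdkey /add:MACHINE1 /user:MACHINE1\dropuser /pass:<password>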
This should work in most cases. In fact, the same approach is used in my VSTS, Jenkins and TFS instances. It should save you from having to set up a domain to solve this problem.
This may not be the best practice, but at least it should get you started in the right direction.