How to create a pipeline in Azure DevOps to deploy scripts - PowerShell

Hello everyone, I'm trying to automate the process of deploying our scripts, but I'm new to Azure DevOps and I don't know where to start.
I want to create a pipeline so that every time new code is pushed to the master branch, it is automatically deployed to the destination server.
Here is an example:
We have an instance of Azure DevOps running on one of our servers (server1); this is where our script repos are. Once the code is merged into the master branch, the pipeline should deploy the scripts to e:\scripts on server2.
The repository only contains PowerShell scripts, and we just need to move the files from the repo to the destination server.
These servers run Windows, and the Azure DevOps version is Dev17.M153.5.

There are many ways to copy files onto a target machine.
Start with a pipeline that is triggered when the file paths in the repository change:
trigger:
  branches:
    include:
      - develop
  paths:
    include:
      - the/nameof/thefolder/withyour/scripts
Next, you'll need to specify which build agent you want to use. The build agent is responsible for running the pipeline; it can be on the same machine as your Azure DevOps Server, but it doesn't have to be. Check "Settings" (bottom left) -> "Pipelines > Agent Pools" to get the name of your pool.
pool:
  name: 'Our Build Agents'
Next, the build agent will need to be able to talk to the target machine. There are several factors to consider (a quick check is sketched just below):
Network: make sure there are no firewall rules blocking traffic between your build agent and the target machine.
Permissions: make sure the account your build agent runs under has permission to write files to the remote machine. Ideally, that account should be an administrator of the target machine.
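A quick way to verify both before wiring up the copy step is a short inline PowerShell task on the agent; this is only a sketch, and the SERVER2 name and the E$ admin share are placeholders for your environment:
steps:
- task: PowerShell@2
  displayName: 'Verify access to the target share'
  inputs:
    targetType: inline
    script: |
      # Can the agent reach the target machine over SMB? (placeholder name)
      Test-NetConnection -ComputerName 'SERVER2' -Port 445
      # Can the agent account see and write to the share?
      if (-not (Test-Path '\\SERVER2\E$\Scripts')) { throw 'Share not reachable' }
      New-Item -ItemType File -Path '\\SERVER2\E$\Scripts\.write-test' -Force | Out-Null
      Remove-Item -Path '\\SERVER2\E$\Scripts\.write-test'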
There are several options for copying the files, but the easiest is the built-in CopyFiles task.
steps:
- task: CopyFiles@2
  inputs:
    sourceFolder: $(Build.SourcesDirectory)/the/nameof/thefolder/withyour/scripts
    targetFolder: \\MACHINENAME\E$\Scripts
Other options:
running a PowerShell/bash script to copy the items (see the sketch below)
rsync, robocopy, etc.
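If you go the script route instead, the same copy can be done with a short inline PowerShell step; this is only a sketch, and the UNC path and robocopy switches are assumptions to adapt to your environment:
steps:
- task: PowerShell@2
  displayName: 'Copy scripts with robocopy'
  inputs:
    targetType: inline
    script: |
      # Mirror the scripts folder from the checked-out repo to the target share.
      # /MIR mirrors the directory tree; /R:2 /W:5 limit retries on a flaky network.
      robocopy "$(Build.SourcesDirectory)\the\nameof\thefolder\withyour\scripts" "\\MACHINENAME\E$\Scripts" /MIR /R:2 /W:5
      # robocopy exit codes 0-7 mean success or partial copies; 8 and above are failures.
      if ($LASTEXITCODE -ge 8) { exit $LASTEXITCODE } else { exit 0 }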

Related

How to consume or call terraform modules from one project in one organisation to another project from another organisation using azure devops

I would like to know how to consume or call Terraform modules from a project in one organisation in a pipeline that belongs to a project in another organisation, using Azure DevOps. I tried to explore ways, but only found the solution below, and my IT team is not letting me use this method because it is breaking the subsequent pipelines. Any suggestions, please?
Also, the requirement is that I just need to reference the Terraform modules that live in the other organization, but in my POC the pipeline downloads/checks out the code from that organization/project and only then am I able to reference those modules. I would like to only reference those modules instead of checking out the code from the other organization and then utilising/referencing it.
Below is the reply from the pipeline team:
Can you exclude this part as it is not ideal and you need to take a different approach?
echo "Git config update start"
MY_PAT=$(yourPAT)
B64_PAT=$(printf "%s"":$MY_PAT" | base64)
git config --global http.extraheader "Authorization: Basic ${B64_PAT}"
echo "Git config update end"
terraform init
terraform plan
You are introducing your credentials into the global .gitconfig, and that is breaking all subsequent pipelines on the agent.
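As an aside: if a pipeline really must set a global extraheader on a self-hosted agent, a common mitigation (not part of the pipeline team's reply) is to remove it again in a step that always runs, so later pipelines on that agent are unaffected. A minimal sketch:
steps:
- script: |
    # Remove any credential header left in the agent user's global .gitconfig,
    # even if earlier steps failed; ignore the error if the key is not set.
    git config --global --unset-all http.extraheader || true
  displayName: 'Clean up global git config'
  condition: always()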
POC: The code below clones the entire modules code from the other organization and we reference those modules, but I just need to reference the modules directly instead of downloading the code and then calling/referencing them.
resources:
  repositories:
    - repository: Modules
      type: git
      name: 'Compute Platforms/CES-Terraform-Automation-Service'
      endpoint: Repo-bp-digital # Azure DevOps service connection
      ref: Modules
    - repository: self
      type: git
      name: 'Cloud Onboarding/terraform-testing-by-vivek'
AFAIK, there is only one option for connecting to a project in another Azure DevOps organization: create a service connection in the organization from which you want to run the pipeline, create a PAT token in the target organization, and reference that PAT in the service connection.
I created two organizations, Organization alpha1 and Organization beta2, and created a project in each with one YAML script and a task.
I created a PAT token in Organization beta2.
Then I created a service connection in the alpha organization, from which I am running the pipeline, referencing the PAT token from the beta org, like below:
trigger:
- master

variables:
  pythonVersion: '3.8'
  vmImageName: 'ubuntu-latest'

resources:
  repositories:
    - repository: remoteRepo
      type: git
      name: remote-access/shared-common-install
      endpoint: remoteaccesstemp # Service connection name
      ref: refs/heads/main

stages:
- stage: remote_git_test
  jobs:
  - job: git_test
    steps:
    # Run the template from the same repository
    - template: templates/hello-alpha.yaml
    # Check out the remote repository
    - checkout: remoteRepo
      persistCredentials: true
    # Call the template that is located in another repository in another organization
    - template: templates/hello-beta.yaml@remoteRepo
Alternatively, you can create a Terraform task in Azure DevOps and call your Terraform module from another organization with the script below:
terraform init -backend-config="repository=organization-beta2/project-beta2/_git/beta-2" -backend-config="token=Pat-token"
and
provider "azuredevops" {
  org_service_url       = var.org_service_url
  personal_access_token = var.personal_access_token
}
You can add this code to your terraform init script in the repo of the organization from which you're running the pipeline, and reference the template in System.Artifacts.
Even the Azure DevOps REST API does not support connecting across different Azure DevOps organizations.
References:
GitHub - Azure-Samples/azure-pipelines-remote-tasks
Trying to setup an Azure DevOps organization using Terraform :: my tech ramblings (blog post by Carlos)
Azure DevOps Git: Fork into another Repo using Azure DevOps REST API - Stack Overflow (answer by Andi Li-MSFT)

How can I run Pipeline in old Directory if I have a new Directory?

I have two directories. In directory A, I have data and a pipeline in Azure DevOps. I migrated all of the data from directory A to directory B, and now the pipeline in directory A is broken because of the migration. So can I run the pipeline in directory A while I am in directory B?
In directory A, I built the data, and now I want to migrate all my data from directory A to B, but the data in directory A is tied to the pipeline. Is there a way to run a pipeline in directory A even though the data is in directory B?
Is there a way to run a pipeline on directory A even though the data is in directory B?
You could check out multiple repositories in your pipeline using a service connection. In the following example, an Azure Repos Git repository in another organization is declared as a repository resource, which requires a service connection specified as the endpoint for that resource. The example has two checkout steps: one checks out the repository declared as a repository resource, and the other checks out the current self repository that contains the pipeline YAML.
resources:
  repositories:
    - repository: MyAzureReposGitRepository # In a different organization
      endpoint: MyAzureReposGitServiceConnection
      type: git
      name: OtherProject/MyAzureReposGitRepo
      ref: main

trigger: none

pool:
  vmImage: 'ubuntu-latest'

steps:
- checkout: self
- checkout: MyAzureReposGitRepository
Then, create an Azure Repos/Team Foundation Server service connection, configured with the organization URL of the other organization and a personal access token from it.

Azure CI (YAML) pipe is failing on task SqlAzureDacpacDeployment@1 with SqlPackage.exe exited with code 1

I have a database project that is being deployed to an Azure SQL Database instance. This CI pipe was working in another environment outside the organization; we lifted and shifted it into this organization. The job that is failing is a deployment job, and the task it uses is SqlAzureDacpacDeployment@1.
Error message:
##[error]*** An unexpected failure occurred: One or more errors occurred..
##[error]The Azure SQL DACPAC task failed. SqlPackage.exe exited with code 1.Check out how to troubleshoot failures at
https://aka.ms/sqlazuredeployreadme#troubleshooting-
Code:
- task: SqlAzureDacpacDeployment@1
  displayName: 'info...'
  inputs:
    azureSubscription: $(ServiceConnection)
    serverName: $(sqlServer)
    databaseName: $(DbName)
    SqlUsername: $(AdminAccount)
    SqlPassword: $(AdminAccountPassword)
    dacpacFile: '$(BuildName)\db_name\bin\Output\db_name.dacpac'
    publishProfile: '$(BuildName)$(publishProfile)'
The deployment task uses a combination of a DACPAC and a publish profile. This is necessary due to extensive use of SQLCMD variables. The agent is a self-hosted Windows agent and it has been updated; each time a user-defined capability was added, the agent service was restarted.
I have validated the account and password by connecting to the target instance with both accounts.
I have tried authenticating with Azure Active Directory principals which are admins on the Azure SQL Database.
I tried using SQL Server authentication.
I have added a user-defined capability to the Windows self-hosted agent for SqlPackage with compatibility level 150, which matched the database compatibility level.
I tried reducing the database compatibility level from 150 to 130 to match the system-defined capability on the agent.
I verified that the directory structure matches the YAML and that the DACPAC and the publish profile exist.
I verified the values stored in pipe variables outside of the YAML.
I verified that the machine that runs the agent has a firewall rule enabled on the Azure SQL Database instance.
I am now looking for an alternative task.
You can use Service Principal instead of SQL Authentication to deploy the Azure SQL Database.
Refer: https://datasharkx.wordpress.com/2021/03/11/automated-deployment-of-azure-sql-database-azure-sql-data-warehouse-through-azure-devops-via-service-principal-part-1/
https://datasharkx.wordpress.com/2021/03/12/automated-deployment-of-azure-sql-database-azure-sql-data-warehouse-through-azure-devops-via-service-principal-part-2/
Also, remove the publishProfile option and instead provide the project variables in this format:
AdditionalArguments: /v:MyVariable=Y /v:Environment=TST
and this should work.
Your final YAML file should look like this:
- task: SqlAzureDacpacDeployment@1
  displayName: Deploy dacpac
  inputs:
    azureSubscription: $(ServiceConnection)
    ServerName: <server_name>
    DatabaseName: <database_name>
    DacpacFile: $(Pipeline.Workspace)\drop\MyDacpac.dacpac
    AdditionalArguments: /v:ResetStuff=Y /v:Environment=TST
    DeploymentAction: Publish
    AuthenticationType: servicePrincipal

ADF Git Configure Disconnection After Publish with AzureResourceGroupDeployment & ARMTemplates

I'm following the new CI/CD guide for ADF: https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment-improvements
I am then publishing the ARM templates generated by the npm export pipeline to my ADF Dev using the Azure Resource Group ARM template deployment described here: https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment#script
Looks like this:
- task: AzureResourceGroupDeployment@1
  displayName: 'Azure Deployment:Create Or Update Resource Group action on adf-dev-rg'
  inputs:
    ConnectedServiceName: 'guycarpenter-privatenonprod-Contributor'
    resourceGroupName: 'gc-adf-nasa-prinonprod-dev-rg'
    location: 'East US 2'
    csmFile: '$(Agent.BuildDirectory)/ARMTemplate/ARMTemplateForFactory.json'
    csmParametersFile: '$(Agent.BuildDirectory)/ARMTemplate/ARMTemplateParametersForFactory.json'
After I publish the new ARM template to my ADF Dev, the ADF Git configuration gets disconnected.
How should I publish the new ARM template to my ADF Dev without disconnecting the repo?
Edit:
I also found that setting includeFactoryTemplate=false avoids the disconnection, but I need it set to true to parameterize ADF for other environments.
Edit #2:
This solved the problem: https://stackoverflow.com/a/56863897/13570809
How should I publish the new ARM template to my ADF Dev without disconnecting the repo?
There is a known user voice request about this:
Retain GIT configuration when deploying Data Factory ARM template
You could vote for this request and follow the feedback.
And Jason replied:
This has been implemented by the repoConfiguration properties in the Azure Resource Manager template for the Data Factory resource. See here for reference - https://learn.microsoft.com/en-us/azure/templates/microsoft.datafactory/2018-06-01/factories

Execute YAML templates from Azure DevOps classic pipeline

I will put my question in the following points; I hope this makes it clear:
The application source code is in the application_code repo.
The pipeline code (YAML files) is in the pipeline_code repo, because I'd like to version it and I don't want to keep it in the application_code repo, to avoid giving the Dev team control over it.
Problem statement:
The pipeline YAML won't be triggered unless it lives in the source code repository, based on events such as PRs, commits, etc.
Can we trigger or execute a YAML file that is in the pipeline_code repo whenever an event fires in the application_code repo?
I've tried achieving the above using a classic pipeline and a YAML template, but they don't work together: I can execute a YAML template from a YAML pipeline only, not from a classic pipeline, as below:
# azure-pipeline.yaml
jobs:
- job: NewJob
- template: job-template-bd1.yaml
Any ideas or a better solution than the above?
The feature Multi-repository support for YAML pipelines will be available soon for Azure DevOps Service. It will support triggering pipelines based on changes made in one of multiple repositories (a sketch of such a trigger is shown below). Please check the Azure DevOps Feature Timeline or here; the feature is expected to roll out in 2020 Q1 for Azure DevOps Service.
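Once that feature is available, a repository resource trigger declared in the pipeline_code repo might look like the following sketch (the project and repository names are assumptions based on the question):
resources:
  repositories:
    - repository: app                       # alias used in checkout steps
      type: git
      name: MyProject/application_code      # assumed project/repository name
      trigger:
        branches:
          include:
            - master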
Currently you can use the workaround below, based on Build Completion (the pipeline is triggered on the completion of another build).
1. Set up the triggering pipeline
Create an empty classic pipeline for the application_code repo as the triggering pipeline; it will always succeed and do nothing.
Check Enable continuous integration under the Triggers tab and set up the Branch filters.
2. Set up the triggered pipeline
In the pipeline_code repo, use checkout to check out multiple repositories in your pipeline. You can specifically check out the source code of the application_code repo to build. Please refer to the example below:
steps:
- checkout: git://MyProject/application_code_repo@refs/heads/master # Azure Repos Git repository in the same organization
- task: TaskName
  ...
Then, on the YAML pipeline edit page, click the three dots in the top-right corner and click Triggers. Then click + Add beside Build completion and select the triggering pipeline created in step 1 as the triggering build.
After finishing the above two steps, when changes are made to the application_code repo, the triggering pipeline will run and complete successfully, and the triggered pipeline will then be triggered to run the real build job.
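On newer Azure DevOps versions, the same build-completion trigger can also be declared in YAML as a pipeline resource; a sketch, assuming the classic triggering pipeline from step 1 is named application-code-trigger:
resources:
  pipelines:
    - pipeline: appTrigger                   # alias for this resource
      source: 'application-code-trigger'     # assumed name of the triggering pipeline
      trigger:
        branches:
          include:
            - master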
Update:
Show Azure DevOps Build Pipeline Status in Bitbucket.
You can add a Python script task at the end of the YAML pipeline to update the Bitbucket build status. You need to set condition: always() so that this task runs even if other tasks fail.
You can get the build status from the Agent.JobStatus variable (exposed to scripts as the AGENT_JOBSTATUS environment variable), as in the example below.
For more information, please refer to the document Integrate your build system with Bitbucket Cloud, and also this thread.
- task: PythonScript@0
  condition: always()
  inputs:
    scriptSource: inline
    script: |
      import os
      import requests

      # Use environment variables that your CI server provides for the key, name,
      # and url parameters, as well as the commit hash. (The values below are the
      # ones Jenkins provides; on Azure DevOps the job status is exposed as
      # AGENT_JOBSTATUS.)
      data = {
          'key': os.getenv('BUILD_ID'),
          'state': os.getenv('AGENT_JOBSTATUS'),
          'name': os.getenv('JOB_NAME'),
          'url': os.getenv('BUILD_URL'),
          'description': 'The build passed.'
      }

      # Construct the URL with the API endpoint where the commit status should be
      # posted (provide the appropriate owner and slug for your repo).
      api_url = ('https://api.bitbucket.org/2.0/repositories/'
                 '%(owner)s/%(repo_slug)s/commit/%(revision)s/statuses/build'
                 % {'owner': 'emmap1',
                    'repo_slug': 'MyRepo',
                    'revision': os.getenv('GIT_COMMIT')})

      # Post the status to Bitbucket. (Include valid credentials here for basic auth.
      # You could also use a team name and API key.)
      requests.post(api_url, auth=('auth_user', 'auth_password'), json=data)