Azure DevOps - Cannot retrieve artifact

I am publishing an artifact in a job (which completes successfully). In a separate job, I am trying to retrieve the artifact. However, nothing shows up when I run ls $(Build.ArtifactStagingDirectory). Why is this?
- task: PublishPipelineArtifact@1
  displayName: 'Publish Artifact'
  inputs:
    targetPath: 'anatomy/dist'
    artifact: 'anatomy-artifact'
    publishLocation: 'pipeline'
- job: deployBucket
  displayName: 'Deploy Bucket'
  dependsOn: buildAnatomy
  condition: eq(dependencies.buildAnatomy.result, 'Succeeded')
  pool:
    vmImage: "ubuntu-20.04"
  steps:
  - task: Bash@3
    displayName: "Upload contents to bucket"
    inputs:
      targetType: "inline"
      script: |
        ls -al '$(Build.ArtifactStagingDirectory)'

Each job runs in its own workspace and the agent cleans up after itself by default. If you want the contents of the artifact to be available in the second job, you need to fetch it again.
Use the DownloadPipelineArtifact@1 task at the start of your second job, or move the deploy command into the same job.
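A minimal sketch of the second job with the download step added (shown with the v2 syntax of the download task; the job, pool and artifact names are taken from the question above):
- job: deployBucket
  displayName: 'Deploy Bucket'
  dependsOn: buildAnatomy
  condition: eq(dependencies.buildAnatomy.result, 'Succeeded')
  pool:
    vmImage: "ubuntu-20.04"
  steps:
  - task: DownloadPipelineArtifact@2
    displayName: 'Download Artifact'
    inputs:
      artifact: 'anatomy-artifact'                # name used by PublishPipelineArtifact above
      path: '$(Build.ArtifactStagingDirectory)'   # download where the later ls step expects it
  - task: Bash@3
    displayName: "Upload contents to bucket"
    inputs:
      targetType: "inline"
      script: |
        ls -al '$(Build.ArtifactStagingDirectory)'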

Related

Copy a selection of files to a server using Azure Devops

I am trying to copy a selection of files to a destination folder on a target machine.
In my first version, I can already copy all files to the destination. For that, I use the following tasks to build an artifact.
steps:
- task: CopyFiles@2
  displayName: 'copy files'
  inputs:
    SourceFolder: $(workingDirectory)
    Contents: '**/files/*'
    flattenFolders: true
    targetFolder: $(Build.ArtifactStagingDirectory)
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: $(Build.ArtifactStagingDirectory)
    artifactName: files
Later I try to use that artifact for a deployment
- stage: Deploy
  displayName: 'Deploy files to destination'
  jobs:
  - deployment: VMDeploy
    displayName: 'download artifacts'
    pool:
      vmImage: 'ubuntu-latest'
    environment:
      name: local_env
      resourceType: VirtualMachine
    strategy:
      runOnce:
        deploy:
          steps:
          - task: DownloadPipelineArtifact@2
            displayName: 'download files'
            inputs:
              artifact: dags
              downloadPath: /opt/myfolder/files
This works perfectly fine for all files.
But what I need is the following:
The 'local_env' environment contains multiple servers. The first three letters of each server name would be the perfect wildcard for the files I need.
In other words, if the environment contains names such as 'Capricorn', 'Aries', and 'Pisces', I would like to copy 'cap*.*', 'ari*.*', or 'pis*.*' to the corresponding server.
The way I fixed it for now was
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      HN=$(hostname | head -c 3)
      cd /opt/myfolder/files/
      rm -r $(ls -I "$HN*.*")
It does its job, but I am open to marking a better solution as the resolution.
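For illustration only, a hedged alternative sketch: compute the hostname prefix in a first step and let the download task fetch only the matching files, instead of downloading everything and deleting the rest. The artifact name and file pattern here are assumptions based on the question, and case differences between the hostname and the file names may need extra handling:
- task: Bash@3
  displayName: 'Compute hostname prefix'
  inputs:
    targetType: 'inline'
    script: |
      # first three letters of the hostname, e.g. "cap" for "Capricorn"
      HN=$(hostname | head -c 3)
      echo "##vso[task.setvariable variable=hostPrefix]$HN"
- task: DownloadPipelineArtifact@2
  displayName: 'Download only files for this host'
  inputs:
    artifact: files                      # assumed artifact name from the publish step
    patterns: '**/$(hostPrefix)*.*'      # only files starting with the host prefix
    downloadPath: /opt/myfolder/files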

How can I execute and schedule a Databricks notebook from an Azure DevOps pipeline using YAML

I wanted to do CI/CD of my Azure Databricks notebook using a YAML file.
I have followed the flow below:
Pushed my code from the Databricks notebook to Azure Repos.
Created a build using the YAML script below.
stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    steps:
    - task: CopyFiles@2
      displayName: 'Copy Files to: $(build.artifactstagingdirectory)'
      inputs:
        SourceFolder: '$(System.DefaultWorkingDirectory)'
        TargetFolder: '$(build.artifactstagingdirectory)'
    - task: PublishBuildArtifacts@1
      displayName: 'Publish Artifact: notebooks'
      inputs:
        ArtifactName: dev_release
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: 'publish build'
        publishLocation: 'Container'
By doing the above I was able to create an artifact.
Now I have added another task to deploy that artifact to my Databricks workspace, using the YAML script below.
- stage: Deploy
  displayName: Deploy stage
  jobs:
  - job: Deploy
    displayName: Deploy
    pool:
      vmImage: 'vs2017-win2016'
    steps:
    - task: DownloadBuildArtifacts@0
      inputs:
        buildType: 'current'
        downloadType: 'single'
        artifactName: 'dev_release'
        downloadPath: '$(System.ArtifactsDirectory)'
    - task: databricksDeployScripts@0
      inputs:
        authMethod: 'bearer'
        bearerToken: 'dapj0ee865674cd9tfb583dbad61b78ce9b1-4'
        region: 'Central US'
        localPath: '$(System.DefaultWorkingDirectory)'
        databricksPath: '/Shared'
Now I want to run the deployed notebook from here as well, so I have added a "Configure Databricks CLI" task and an "Execute Databricks" task to execute the notebook.
I got the error below:
##[error]Error: Unable to locate executable file: 'databricks'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also verify the file has a valid extension for an executable file.
##[error]The given notebook does not exist.
How can I execute the notebook from Azure DevOps? My notebooks are written in Scala.
Is there any other way to do this on production servers?
As you have deployed the Databricks notebook using Azure DevOps and are asking for another way to run it, I would like to suggest the Azure Data Factory service.
In Azure Data Factory, you can create a pipeline that executes a Databricks notebook against a Databricks jobs cluster. You can also pass Azure Data Factory parameters to the Databricks notebook during execution.
Follow the official tutorial, Run a Databricks Notebook with the Databricks Notebook Activity in Azure Data Factory, to deploy and run a Databricks notebook.
Additionally, you can schedule the pipeline trigger for any particular time or event to make the process completely automatic. Refer to https://learn.microsoft.com/en-us/azure/data-factory/concepts-pipeline-execution-triggers
Try this:
- job: job_name
  displayName: test job
  pool:
    name: agent_name(selfhostedagent)
  workspace:
    clean: all
  steps:
  - checkout: none
  - task: DownloadBuildArtifacts@0
    displayName: 'Download Build Artifacts'
    inputs:
      artifactName: app
      downloadPath: $(System.DefaultWorkingDirectory)
  - task: riserrad.azdo-databricks.azdo-databricks-configuredatabricks.configuredatabricks@0
    displayName: 'Configure Databricks CLI'
    inputs:
      url: '$(Databricks_URL)'
      token: '$(Databricks_PAT)'
  - task: riserrad.azdo-databricks.azdo-databricks-deploynotebooks.deploynotebooks@0
    displayName: 'Deploy Notebooks to Workspace'
    inputs:
      notebooksFolderPath: '$(System.DefaultWorkingDirectory)/app/path/to/notebook'
      workspaceFolder: /Shared
  - task: riserrad.azdo-databricks.azdo-databricks-executenotebook.executenotebook@0
    displayName: 'Execute /Shared/path/to/notebook'
    inputs:
      notebookPath: '/Shared/path/to/notebook'
      existingClusterId: '$(cluster_id)'
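For the "schedule" part of the question, the Azure DevOps pipeline itself can also be triggered on a cron schedule from YAML. A minimal sketch (the branch name and time are assumptions):
schedules:
- cron: "0 2 * * *"          # every day at 02:00 UTC
  displayName: Nightly notebook run
  branches:
    include:
    - master
  always: true               # run even when there are no code changes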

Azure pipeline - unzip artefact, copy one directory into Azure blob store YAML file

I am getting stuck with Azure pipelines.
I have an existing Node SPA project that needs to be built for each environment (TEST and PRODUCTION). This I can do, but I need a manual step when pushing to PROD. I am using Azure DevOps pipeline environments with Approvals and Checks to mandate this.
The issue is that when using a 'deploy job' to take an artefact from a previous step, I am unable to find the right directory. This is the YAML file I have so far:
variables:
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

trigger:
- master

# Don't run against PRs
pr: none

stages:
- stage: Development
  displayName: Development stage
  jobs:
  - job: install
    displayName: Install and test
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: NodeTool@0
      inputs:
        versionSpec: '12.x'
      displayName: 'Install Node.js'
    - script: |
        npm install
      displayName: Install node modules
    - script: |
        npm run build
      displayName: 'Build it'
    # Build creates a ./dist folder. The contents will need to be copied to blob store
    - task: ArchiveFiles@2
      inputs:
        rootFolderOrFile: '$(Build.BinariesDirectory)'
        includeRootFolder: true
        archiveType: 'zip'
        archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
        replaceExistingArchive: true
        verbose: true
  - deployment: ToDev
    environment: development
    dependsOn: install
    strategy:
      runOnce:
        deploy:
          steps:
          - task: DownloadPipelineArtifact@2
            inputs:
              buildType: 'current'
              targetPath: '$(Pipeline.Workspace)'
          - task: ExtractFiles@1
            inputs:
              archiveFilePatterns: '**/*.zip'
              cleanDestinationFolder: true
              destinationFolder: './cpDist/'
          # Somehow within a deploy job retrieve the .zip artefact, unzip, copy the ./dist folder into the blob store
          - task: AzureCLI@2
            inputs:
              azureSubscription: MYTEST-Development
              scriptLocation: "inlineScript"
              scriptType: "bash"
              inlineScript: |
                az storage blob upload-batch -d \$web --account-name davey -s dist --connection-string 'DefaultEndpointsProtocol=https;AccountName=davey;AccountKey=xxxxxxx.yyyyyyyyy.zzzzzzzzzz;EndpointSuffix=core.windows.net'
            displayName: "Copy build files to Development blob storage davey"
          - script: |
              pwd
              ls
              cd cpDist/
              pwd
              ls -al
            displayName: 'list'
          - bash: echo "Done"
If you are confused about the folder paths, you could add a few debug steps that print the known system variables to understand what is going on, using a PowerShell script as below:
- task: PowerShell@2
  displayName: 'Debug parameters'
  inputs:
    targetType: Inline
    script: |
      Write-Host "$(Build.ArtifactStagingDirectory)"
      Write-Host "$(System.DefaultWorkingDirectory)"
      Write-Host "$(System.ArtifactsDirectory)"
      Write-Host "$(Pipeline.Workspace)"
      Write-Host "$(System.ArtifactsDirectory)"
You should simply publish the build-generated artifacts to the drop folder.
Kindly check the official doc -- Artifact selection -- which explains that you can define the path to which the artifacts are downloaded with the following task:
steps:
- download: none
- task: DownloadPipelineArtifact@2
  displayName: 'Download Build Artifacts'
  inputs:
    patterns: '**/*.zip'
    path: '$(Build.ArtifactStagingDirectory)'
Please be aware that the download happens automatically to $(Pipeline.Workspace), so if you don't want your deployment to download the files twice, you need to specify "download: none" in your steps.
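Putting that together for the deployment job in the question, a minimal sketch (the storage account, service connection, and the location of the dist folder inside the archive are assumptions taken from the question, not verified values):
strategy:
  runOnce:
    deploy:
      steps:
      - download: none                         # skip the automatic artifact download
      - task: DownloadPipelineArtifact@2
        inputs:
          patterns: '**/*.zip'
          path: '$(Build.ArtifactStagingDirectory)'
      - task: ExtractFiles@1
        inputs:
          archiveFilePatterns: '$(Build.ArtifactStagingDirectory)/**/*.zip'
          cleanDestinationFolder: true
          destinationFolder: '$(Build.ArtifactStagingDirectory)/unzipped'
      - task: AzureCLI@2
        displayName: 'Copy dist folder to blob storage'
        inputs:
          azureSubscription: MYTEST-Development   # assumed service connection from the question
          scriptType: 'bash'
          scriptLocation: 'inlineScript'
          inlineScript: |
            # the path below depends on how the zip was built; adjust it to wherever ./dist ends up after extraction
            az storage blob upload-batch -d '$web' --auth-mode login \
              --account-name davey -s '$(Build.ArtifactStagingDirectory)/unzipped/dist'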

File from previous step cannot be found in Azure DevOps-Pipeline

In a pipeline I have two different steps. The first one generates some files, the second should take these files as an input.
The YAML for that pipeline is the following:
name: myscript

stages:
- stage: Tes/t
  displayName: owasp-test
  jobs:
  - job: owasp_test
    displayName: run beasic checks for site
    pool:
      name: default
      demands: Agent.OS -equals Windows_NT
    steps:
    - task: DotNetCoreCLI@2
      inputs:
        command: 'build'
        projects: '**/*.sln'
    - task: dependency-check-build-task@5
      inputs:
        projectName: 'DependencyCheck'
        scanPath: '**/*.dll'
        format: 'JUNIT'
    - task: PublishTestResults@2
      inputs:
        testResultsFormat: 'JUnit'
        testResultsFiles: '**/*-junit.xml'
The dependency-check-build-task returns an XML file:
File upload succeed.
Upload 'P:\Azure-Pipelines-Agent\_work\2\TestResults\dependency-check\dependency-check-junit.xml' to file container: '#/11589616/dependency-check'
Associated artifact 53031 with build 21497
The following step (PublishTestResults) SHOULD take that file but returns
##[warning]No test result files matching **/*-junit.xml were found.
instead. I can see that file in the artifact after the pipeline is run.
This is because your report is written to Common.TestResultsDirectory, which is c:\agent_work\1\TestResults (for Microsoft-hosted agents), while the publish test results task looks in System.DefaultWorkingDirectory, which is c:\agent_work\1\s.
Please try:
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/*-junit.xml'
    searchFolder: '$(Common.TestResultsDirectory)'
I had the same trouble; I fixed it by changing the Agent Specification.
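(For reference, in YAML the agent specification corresponds to the pool's image; a minimal sketch, with the image name chosen purely as an example:)
pool:
  vmImage: 'windows-2019'   # pick an agent specification/image that has the tools your tasks need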

How to convert a classic build job to a YAML build in Azure DevOps

We have a working classic build job in Azure DevOps with a self-hosted agent pool. But when we tried to convert this build job to the YAML method, no agents get assigned during execution and it hangs. Could you please correct me here if I am doing something wrong?
Error:
"All eligible agents are disabled or offline"
Below is the converted YAML file from the classic build agent job:
pool:
  name: MYpool
  demands: maven

# Your build pipeline references an undefined variable named 'Parameters.mavenPOMFile'. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab. See https://go.microsoft.com/fwlink/?linkid=865972
steps:
- task: Maven@3
  displayName: 'Maven pom.xml'
  inputs:
    mavenPomFile: '$(Parameters.mavenPOMFile)'
- task: CopyFiles@2
  displayName: 'Copy Files to: $(build.artifactstagingdirectory)'
  inputs:
    SourceFolder: '$(system.defaultworkingdirectory)'
    Contents: '**/*.war'
    TargetFolder: '$(build.artifactstagingdirectory)'
  condition: succeededOrFailed()
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: Root'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
    ArtifactName: Root
  condition: succeededOrFailed()
- task: CopyFiles@2
  displayName: 'Copy wars to build directory'
  inputs:
    SourceFolder: '$(build.artifactstagingdirectory)/target'
    TargetFolder: '/home/myadmin/builds/$(build.buildnumber)'
- task: CopyFiles@2
  displayName: 'copying docker file to Build Directory'
  inputs:
    SourceFolder: Admin
    TargetFolder: '/home/myadmin/builds/$(build.buildnumber)'
- bash: |
    # Write your commands here
    mv /home/myadmin/builds/$(build.buildnumber)/mypack0.0.1.war /home/myadmin/builds/$(build.buildnumber)/ROOT.war
  displayName: 'Name war file Root.war'
- task: Docker@2
  displayName: 'Build the docker image'
  inputs:
    repository: 'mycontainerregistry.azurecr.io/myservice'
    command: build
    Dockerfile: '/home/myadmin/builds/$(build.buildnumber)/Dockerfile'
    tags: '$(Build.BuildNumber)-DEV'
- bash: |
    # Write your commands here
    docker login mycontainerregistry.azurecr.io
    docker push mycontainerregistry.azurecr.io/myservice:$(Build.BuildNumber)-DEV
  displayName: 'Push Docker Image'
- task: CopyFiles@2
  displayName: 'Copy Deployment file'
  inputs:
    SourceFolder: /home/myadmin/kubernetes
    TargetFolder: '/home/myadmin/builds/$(build.buildnumber)'
- task: qetza.replacetokens.replacetokens-task.replacetokens@3
  displayName: 'Replace image in deployment file'
  inputs:
    rootDirectory: '/home/myadmin/builds/$(build.buildnumber)'
    targetFiles: '**/*.yml'
In my previous answer, I said that after waiting for nearly 20-30 minutes, the agent interface prompts the message below.
In fact, this is a process that upgrades the agent to the latest version automatically.
Yes, when you are using YAML with a private agent, the agent version must be the latest one, whether you add the demands or not.
For our system, the agent version is an implicit demand: your agent must be at the latest version when you use it from YAML.
If this is not satisfied, the job is blocked and the agent upgrade process is forced to run automatically by the system after some time.
So, to run YAML jobs on the private agent successfully, please upgrade the agent to the latest version manually.
Since what my colleague and I discussed in this ticket is private to Microsoft, sorry that you cannot see that summary. So here are screenshots of it that you can refer to: https://imgur.com/a/4OnzHp3
We are still working on why the system prompts such a confusing message as "All eligible agents are disabled or offline", and I am trying to contribute to making this message clearer, for example: "no agents meet demands: agent version xxx".