Azure DevOps error when trying to execute a task using an array - azure-devops

I have an Azure DevOps deploy template, shown below. I am trying to execute a task (Kubernetes@1) multiple times by looping over an array that is defined in parameters.
parameters:
- name: env
- name: serviceConnection
- name: 'serviceNames'
  type: object
  default:
  - audit
  - export
  - admin

jobs:
- deployment: Deployment
  displayName: Deploy to ${{ parameters.env }}
  environment: ${{ parameters.env }}
  pool: on-prem-pool
  variables:
  - template: azure-deploy-vars.yaml
    parameters:
      env: ${{ parameters.env }}
  timeoutInMinutes: 10
  strategy:
    runOnce:
      deploy:
        steps:
        - script: |
            echo "Prepare to deploy config for ${{ parameters.serviceNames }}. clean workspace"
            ls -la
            cd ..
            ls -la
            rm -rf config
            rm -rf devops
            rm -rf TestResults
            rm -rf helm
            rm -f config.sh
            rm -f *.properties
          displayName: 'Clean Workspace'
        - checkout: config
          path: config
      - ${{ each service in parameters.serviceNames }}:
        - task: Kubernetes@1
          displayName: Deploy Config
          inputs:
            connectionType: Kubernetes Service Connection
            kubernetesServiceEndpoint: '${{ parameters.serviceConnection }}'
            namespace: '$(PROJECT_NAMESPACE)'
            configMapName: '${{ service }}'
            forceUpdateConfigMap: true
            useConfigMapFile: true
            configMapFile: '$(Agent.BuildDirectory)/config/${{ service }}/${{ parameters.env }}/application-${{ parameters.env }}.properties'
But I get this error when I try to run the pipeline.
Can anyone point me to the error in my template?
Error:
/ci/azure-deploy.tpl.yaml: (Line: 41, Col: 11, Idx: 1048) - (Line: 41, Col: 12, Idx: 1049): While parsing a block mapping, did not find expected key.

You need to indent the line
- ${{ each service in parameters.serviceNames }}:
so that it matches the - script: and - checkout: lines above it, and then increase the indent of the following lines as well.
Corrected template:
parameters:
- name: env
- name: serviceConnection
- name: 'serviceNames'
  type: object
  default:
  - audit
  - export
  - admin

jobs:
- deployment: Deployment
  displayName: Deploy to ${{ parameters.env }}
  environment: ${{ parameters.env }}
  pool: on-prem-pool
  variables:
  - template: azure-deploy-vars.yaml
    parameters:
      env: ${{ parameters.env }}
  timeoutInMinutes: 10
  strategy:
    runOnce:
      deploy:
        steps:
        - script: |
            echo "Prepare to deploy config for ${{ parameters.serviceNames }}. clean workspace"
            ls -la
            cd ..
            ls -la
            rm -rf config
            rm -rf devops
            rm -rf TestResults
            rm -rf helm
            rm -f config.sh
            rm -f *.properties
          displayName: 'Clean Workspace'
        - checkout: config
          path: config
        - ${{ each service in parameters.serviceNames }}:
          - task: Kubernetes@1
            displayName: Deploy Config
            inputs:
              connectionType: Kubernetes Service Connection
              kubernetesServiceEndpoint: '${{ parameters.serviceConnection }}'
              namespace: '$(PROJECT_NAMESPACE)'
              configMapName: '${{ service }}'
              forceUpdateConfigMap: true
              useConfigMapFile: true
              configMapFile: '$(Agent.BuildDirectory)/config/${{ service }}/${{ parameters.env }}/application-${{ parameters.env }}.properties'
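For reference, at compile time the ${{ each }} block expands into one Kubernetes@1 step per entry in serviceNames, so the rendered job ends up with three copies of the task. A sketch of the expansion for the first entry (audit) looks roughly like this; export and admin follow the same pattern (the env and serviceConnection parameters are also substituted at compile time, they are left as expressions here only for readability):

- task: Kubernetes@1
  displayName: Deploy Config
  inputs:
    connectionType: Kubernetes Service Connection
    kubernetesServiceEndpoint: '${{ parameters.serviceConnection }}'
    namespace: '$(PROJECT_NAMESPACE)'
    configMapName: 'audit'
    forceUpdateConfigMap: true
    useConfigMapFile: true
    configMapFile: '$(Agent.BuildDirectory)/config/audit/${{ parameters.env }}/application-${{ parameters.env }}.properties'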

Related

GH Actions: Github#L1 - Every step must define a `uses` or `run` key [duplicate]

I'm trying to set up a deploy.yml file to automatically release my code through tags/releases. I had a working setup, but after reconfiguring it to use caching it stopped working altogether.
The error is
Error: .github#L1
every step must define a `uses` or `run` key
but from the following file, I can't see a single step that doesn't have a uses or run key. Am I missing something? I would really appreciate any help on this.
Here's the workflow file that I'm using:
name: deploy

on:
  push:
    # Sequence of patterns matched against refs/tags
    tags:
      - "v*" # Push events to matching v*, i.e. v1.0, v20.15.10

env:
  bin: git-view

jobs:
  windows:
    runs-on: windows-latest
    strategy:
      matrix:
        target:
          - x86_64-pc-windows-gnu
          - x86_64-pc-windows-msvc
    steps:
      - uses: actions/checkout@v3
      - name: Cache Cargo
        uses: actions/cache@v3
        with:
          path: |
            ~/.cargo/registry
            ./target
          # Example key: windows-stable-x86_64-pc-windows-gnu-3k4j234lksjfd9
          key: windows-stable-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            windows-stable-${{ matrix.target }}-
            windows-stable-
            windows-
      - name: Set Rust Channel
        run: rustup default stable
        shell: bash
      - name: Set Rust Target
        run: rustup target add ${{ matrix.target }}
        shell: bash
      - name: Build Release Binary
        run: cargo build --target ${{ matrix.target }} --release
        shell: bash
      - name: Compress Windows Binary
      - run: |
          cd ./target/${{ matrix.target }}/release/
          7z a "${{ env.bin }}-${{ matrix.target }}.zip" "${{ env.bin }}.exe"
          mv "${{ env.bin }}-${{ matrix.target }}.zip" $GITHUB_WORKSPACE
        shell: bash
      - name: Archive Windows Artifact
        uses: actions/upload-artifact@v3
        with:
          name: Windows
          path: |
            $GITHUB_WORKSPACE/${{ env.bin }}-${{ matrix.target }}.zip
  macos:
    runs-on: macos-latest
    strategy:
      matrix:
        target:
          - x86_64-apple-darwin
    steps:
      - uses: actions/checkout@v3
      - name: Cache Cargo
        uses: actions/cache@v3
        with:
          path: |
            ~/.cargo/registry
            ./target
          # Example key: macos-stable-x86_64-apple-darwin-3k4j234lksjfd9
          key: macos-stable-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            macos-stable-${{ matrix.target }}-
            macos-stable-
            macos-
      - name: Set Rust Channel
        run: rustup default stable
      - name: Set Rust Target
        run: rustup target add ${{ matrix.target }}
      - name: Build Release Binary
        run: cargo build --target ${{ matrix.target }} --release
      - name: Compress macOS Binary
        run: tar -czvf ${{ env.bin }}-${{ matrix.target }}.tar.gz --directory=target/${{ matrix.target }}/release ${{ env.bin }}
      - name: Archive macOS Artifact
        uses: actions/upload-artifact@v3
        with:
          name: macOS
          path: |
            ./${{ env.bin }}-${{ matrix.target }}.tar.gz
  linux:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        target:
          - x86_64-unknown-linux-gnu
          - x86_64-unknown-linux-musl
    steps:
      - uses: actions/checkout@v3
      - name: Cache Cargo
        uses: actions/cache@v3
        with:
          path: |
            ~/.cargo/registry
            ./target
          # Example key: linux-stable-x86_64-unknown-linux-gnu-3k4j234lksjfd9
          key: linux-stable-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            linux-stable-${{ matrix.target }}-
            linux-stable-
            linux-
      - name: Set Rust Channel
        run: rustup default stable
      - name: Set Rust Target
        run: rustup target add ${{ matrix.target }}
      - name: Build Release Binary
        run: cargo build --target ${{ matrix.target }} --release
      - name: Compress Linux Binary
        run: tar -czvf ${{ env.bin }}-${{ matrix.target }}.tar.gz --directory=target/${{ matrix.target }}/release ${{ env.bin }}
      - name: Archive Linux Artifact
        uses: actions/upload-artifact@v3
        with:
          name: Linux
          path: |
            ./${{ env.bin }}-${{ matrix.target }}.tar.gz
  deploy:
    needs: [ windows, macos, linux ]
    runs-on: ubuntu-latest
    steps:
      - name: Download Artifacts
        uses: actions/download-artifact@v3
        with:
          path: ./artifacts
      - name: Display Structure
        run: ls -R
      - name: Release
        uses: softprops/action-gh-release@v1
        with:
          files: |
            ./artifacts/*.tar.gz
            ./artifacts/*.zip
  homebrew:
    needs: deploy
    runs-on: ubuntu-latest
    steps:
      - name: Extract Version
        run: |
          printf "::set-output name=%s::%s\n" tag-name "${GITHUB_REF#refs/tags/}"
      - uses: mislav/bump-homebrew-formula-action@v1
        if: "!contains(github.ref, '-')" # Skip Pre-Releases
        with:
          create-pullrequest: true
          formula-name: ${{ env.bin }}
          formula-path: Formula/${{ env.bin }}.rb
          homebrew-tap: sgoudham/homebrew-tap
          download-url: https://github.com/sgoudham/${{ env.bin }}/releases/download/${{ steps.extract-version.outputs.tag-name }}/${{ env.bin }}-x86_64-apple-darwin.tar.gz
          commit-message: |
            {{formulaName}} -> {{version}}
            Created by https://github.com/mislav/bump-homebrew-formula-action
        env:
          COMMITTER_TOKEN: ${{ secrets.HOMEBREW }}
Ah, I think I found the problem!
I somehow managed to skip over

- name: Compress Windows Binary
- run: |
    cd ./target/${{ matrix.target }}/release/
    7z a "${{ env.bin }}-${{ matrix.target }}.zip" "${{ env.bin }}.exe"
    mv "${{ env.bin }}-${{ matrix.target }}.zip" $GITHUB_WORKSPACE

It works after taking away the - on the run, so that it looks like this:

- name: Compress Windows Binary
  run: |
    cd ./target/${{ matrix.target }}/release/
    7z a "${{ env.bin }}-${{ matrix.target }}.zip" "${{ env.bin }}.exe"
    mv "${{ env.bin }}-${{ matrix.target }}.zip" $GITHUB_WORKSPACE

Issues while passing variable groups parameter to template from azure pipeline.yml

I have declared a variable group Agile-Connections, and the group does not have any restrictions for any pipeline.
I am using another template called vars.yml to store some other variables:
variables:
- group: Agile-Connections
- name: extensions_dir
  value: /apps/agile/product/agile936/integration/sdk/extensions
- name: properties_dir
  value: /apps/agile/product/Properties
- name: build_name
  value: RestrictPreliminaryBOMPX.jar
- name: resource_name
  value: RestrictPreliminaryBOMPX.properties
My Azure pipeline, shown below, calls a deploy.yml template, and I am passing two parameters (connection, environment) from azure-pipeline.yml to deploy.yml.
Below is my azure-pipeline.yml:
trigger:
- None

pool:
  name: AgentBuildAgile

stages:
- template: templates/build.yml
- stage: DEV_Deployment
  variables:
  - template: templates/vars.yml
  jobs:
  - job:
    steps:
    - script:
        echo $(Dev-mnode1)
  - template: templates/deploy.yml
    parameters:
      connection: $(Dev-mnode1)
      environment: 'DEV'
Below is my deploy.yml:
parameters:
- name: connection
- name: environment

jobs:
- deployment:
  variables:
  - template: vars.yml
  environment: ${{ parameters.environment }}
  displayName: Deploy to ${{ parameters.environment }}
  strategy:
    runOnce:
      deploy:
        steps:
        - script:
            echo Initiating Deployment ${{ parameters.connection }}
- template: copy-artifact.yml
  parameters:
    connection: ${{ parameters.connection }}
# - template: copy-resources.yml
#   parameters:
#     connection: ${{ parameters.connection }}
From my deploy.yml I am passing a parameter connection further to another template called copy-artifact.yml, which is below:
parameters:
- name: connection

jobs:
- job:
  variables:
  - template: vars.yml
  displayName: 'Copy jar'
  steps:
  # - script:
  #     echo ${{ parameters.connection }}
  - task: SSH@0
    displayName: 'Task - Backup Existing jar file'
    inputs:
      sshEndpoint: ${{ parameters.connection }}
      runOptions: inline
      inline:
        if [[ -f ${{ variables.extensions_dir }}/${{ variables.build_name }} ]]; then mv ${{ variables.extensions_dir }}/${{ variables.build_name }} ${{ variables.extensions_dir }}/${{ variables.build_name }}_"`date +"%d%m%Y%H%M%S"`"; echo "Successfully Backed up the existing jar"; fi
Now when I run my pipeline, I get this error message:
The pipeline is not valid. Job Job3: Step SSH input sshEndpoint references service connection $(Dev-mnode1) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz.
When I print the value of $(Dev-mnode1) using the commented-out lines in the copy-artifact.yml file, it prints fine (Dev11 Connection), but when I try to use it as the service connection for my SSH task, it gives me the above error.
Also, there is a service connection Dev11 Connection in my project, and all pipelines are allowed to use that service connection.
The pipeline is not valid. Job Job3: Step SSH input sshEndpoint references service connection $(Dev-mnode1) which could not be found. The service connection does not exist or has not been authorized for use.
From the error message, the parameter does not get the variable value when the template is expanded.
I tested the same sample and could reproduce the issue.
When you define the variable template at stage level, the variables in the template are expanded at runtime, while the parameter connection is expanded at compile time. So the correct value cannot be passed to the parameter.
To solve this issue, you can define the variable template at root level.
Refer to this sample:
trigger:
- None

pool:
  name: AgentBuildAgile

variables:
- template: templates/vars.yml

stages:
- template: templates/build.yml
- stage: DEV_Deployment
  jobs:
  - job:
    steps:
    - script:
        echo $(Dev-mnode1)
  - template: templates/deploy.yml
    parameters:
      connection: $(Dev-mnode1)
      environment: 'DEV'
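If moving the variable template is not an option, another workaround is to pass the connection name as a compile-time literal, since template parameters are resolved before runtime variables exist. A sketch, assuming the service connection is literally named Dev11 Connection:

  - template: templates/deploy.yml
    parameters:
      connection: 'Dev11 Connection'
      environment: 'DEV'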

Share variables of GitHub Actions job to multiple subsequent jobs while retaining specific order

We have a GitHub Actions workflow consisting of 3 jobs:
provision-eks-with-pulumi: Provisions the AWS EKS cluster (using Pulumi here)
install-and-run-argocd-on-eks: Installs & configures ArgoCD using the kubeconfig from job 1.
install-and-run-tekton-on-eks: Installs & runs Tekton using the kubeconfig from job 1., but depending on job 2.
We are already aware of this answer and the docs and use jobs.<job_id>.outputs to define the variable in job 1. and jobs.<job_id>.needs to use the variable in the subsequent jobs. BUT it only works for our job 2. and fails for job 3.. Here's our workflow.yml:
name: provision
on: [push]

env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'eu-central-1'

jobs:
  provision-eks-with-pulumi:
    runs-on: ubuntu-latest
    env:
      PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
    outputs:
      kubeconfig: ${{ steps.pulumi-up.outputs.kubeconfig }}
    steps:
      ...
      - name: Provision AWS EKS cluster with Pulumi
        id: pulumi-up
        run: |
          pulumi stack select dev
          pulumi up --yes
          echo "Create ~/.kube dir only, if not already existent (see https://stackoverflow.com/a/793867/4964553)"
          mkdir -p ~/.kube
          echo "Create kubeconfig and supply it for depending Action jobs"
          pulumi stack output kubeconfig > ~/.kube/config
          echo "::set-output name=kubeconfig::$(pulumi stack output kubeconfig)"
      - name: Try to connect to our EKS cluster using kubectl
        run: kubectl get nodes

  install-and-run-argocd-on-eks:
    runs-on: ubuntu-latest
    needs: provision-eks-with-pulumi
    environment:
      name: argocd-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install ArgoCD
        run: ...

  install-and-run-tekton-on-eks:
    runs-on: ubuntu-latest
    needs: install-and-run-argocd-on-eks
    environment:
      name: tekton-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install Tekton Pipelines, Dashboard, Triggers
        run: ...
The first dependent job gets the kubeconfig correctly using needs.provision-eks-with-pulumi.outputs.kubeconfig, but the second one does not (see this GitHub Actions log). We also don't want our 3. job to depend only on job 1., because then jobs 2. and 3. would run in parallel.
How can our job 3. run after job 2., but still use the kubeconfig variable from job 1.?
That's easy: a GitHub Actions job can depend on multiple jobs using the needs keyword. All you have to do in job 3. is use array notation like needs: [job1, job2].
So for your workflow it will look like this:
name: provision
on: [push]

env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'eu-central-1'

jobs:
  provision-eks-with-pulumi:
    runs-on: ubuntu-latest
    env:
      PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
    outputs:
      kubeconfig: ${{ steps.pulumi-up.outputs.kubeconfig }}
    steps:
      ...
      - name: Provision AWS EKS cluster with Pulumi
        id: pulumi-up
        run: |
          pulumi stack select dev
          pulumi up --yes
          echo "Create ~/.kube dir only, if not already existent (see https://stackoverflow.com/a/793867/4964553)"
          mkdir -p ~/.kube
          echo "Create kubeconfig and supply it for depending Action jobs"
          pulumi stack output kubeconfig > ~/.kube/config
          echo "::set-output name=kubeconfig::$(pulumi stack output kubeconfig)"
      - name: Try to connect to our EKS cluster using kubectl
        run: kubectl get nodes

  install-and-run-argocd-on-eks:
    runs-on: ubuntu-latest
    needs: provision-eks-with-pulumi
    environment:
      name: argocd-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install ArgoCD
        run: ...

  install-and-run-tekton-on-eks:
    runs-on: ubuntu-latest
    needs: [provision-eks-with-pulumi, install-and-run-argocd-on-eks]
    environment:
      name: tekton-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install Tekton Pipelines, Dashboard, Triggers
        run: ...
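Stripped down to the essentials, the outputs/needs pattern used here looks like this (a minimal sketch with placeholder job, step, and output names):

name: outputs-needs-sketch
on: [push]
jobs:
  job1:
    runs-on: ubuntu-latest
    outputs:
      myvalue: ${{ steps.produce.outputs.myvalue }}
    steps:
      - id: produce
        run: echo "::set-output name=myvalue::hello"
  job2:
    runs-on: ubuntu-latest
    needs: job1
    steps:
      - run: echo "job2 sees ${{ needs.job1.outputs.myvalue }}"
  job3:
    runs-on: ubuntu-latest
    # runs after job2, but can still read job1's outputs because job1 is listed in needs
    needs: [job1, job2]
    steps:
      - run: echo "job3 sees ${{ needs.job1.outputs.myvalue }}"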

Not seeing why published artifacts are working in one case and not another for Azure DevOps Pipelines

Overview
I'm using artifacts to persist some files and builds across stages. In one case it is working, and in another, it is not despite being able to see the artifact is published.
What Works
For example, this uploads my k8s manifests to an artifact so they can be accessed in the deployment stage to AKS, and it works perfectly:
# publishStage.yaml
stages:
- stage: Publish
  displayName: Publish artifacts
  dependsOn:
  - SeleniumTests
  - Changed
  condition: succeeded()
  variables:
    anyServicesChanged: $[ stageDependencies.Changed.Changes.outputs['detectChanges.anyServicesChanged'] ]
    anyConfigsChanged: $[ stageDependencies.Changed.Changes.outputs['detectChanges.anyConfigsChanged'] ]
  jobs:
  - job: Publish
    condition: or(eq(variables.anyServicesChanged, true), eq(variables.anyConfigsChanged, true), eq(variables['Build.Reason'], 'Manual'))
    displayName: Publishing artifacts...
    steps:
    - upload: k8s
      artifact: k8s

# deployStage.yaml
parameters:
- name: tag
  default: ''
- name: tagVersion
  default: ''

stages:
- stage: Deploy
  displayName: Deployment stage...
  dependsOn:
  - Publish
  - Changed
  condition: succeeded()
  variables:
    anyServicesChanged: $[ stageDependencies.Changed.Changes.outputs['detectChanges.anyServicesChanged'] ]
    anyConfigsChanged: $[ stageDependencies.Changed.Changes.outputs['detectChanges.anyConfigsChanged'] ]
    servicesChanged: $[ stageDependencies.Changed.Changes.outputs['detectChanges.servicesChanged'] ]
  jobs:
  - deployment: Deploy
    condition: or(eq(variables.anyServicesChanged, true), eq(variables.anyConfigsChanged, true), eq(variables['Build.Reason'], 'Manual'))
    displayName: Deploying services...
    environment: 'App Production AKS'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: createSecret
              secretName: $(imagePullSecret)
              kubernetesServiceConnection: 'App Production AKS'
              dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
          - template: deployStep.yaml
            parameters:
              tag: ${{ parameters.tag }}
              tagVersion: ${{ parameters.tagVersion }}
              serviceName: api-v1
              pathName: api

# deployStep.yaml
parameters:
- name: tag
  default: ''
- name: tagVersion
  default: ''
- name: serviceName
  default: ''
- name: pathName
  default: ''

steps:
- task: KubernetesManifest@0
  condition: contains(variables['servicesChanged'], '${{ parameters.serviceName }}')
  displayName: Deploy to ${{ parameters.pathName }} Kubernetes cluster...
  inputs:
    action: deploy
    kubernetesServiceConnection: 'App Production AKS'
    manifests: |
      $(Pipeline.Workspace)/k8s/aks/${{ parameters.pathName }}.yaml
    imagePullSecrets: |
      $(imagePullSecret)
    containers: |
      $(containerRegistry)/$(imageRepository)-${{ parameters.pathName }}:${{ parameters.tag }}-${{ parameters.tagVersion }}
What Doesn't Work
However, now what I'm trying to do is build a service in the application, run unit tests, and take the exact same build that was tested and use it for building the Docker image.
I'm trying to do this with the following:
# unitTestsStage.yaml
stages:
- stage: UnitTests
  displayName: Run unit tests for services...
  dependsOn: Changed
  condition: succeeded()
  jobs:
  - template: ../secretsJob.yaml
  - template: pythonJob.yaml
    parameters:
      serviceName: api-v1
      pathName: api

# pythonJob.yaml
parameters:
- name: serviceName
  type: string
  default: ''
- name: pathName
  type: string
  default: ''

jobs:
- job: UnitTests
  displayName: Running unit tests for ${{ parameters.serviceName }}...
  variables:
    servicesChanged: $[ stageDependencies.Changed.Changes.outputs['detectChanges.servicesChanged'] ]
  condition: or(contains(variables['servicesChanged'], '${{ parameters.serviceName }}'), eq(variables['Build.Reason'], 'Manual'))
  dependsOn: Secrets
  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.8'
  - script: |
      cd ${{ parameters.pathName }} &&
      python -m pip install --upgrade pip &&
      pip install -r requirements.txt
    displayName: Install requirements for ${{ parameters.pathName }}...
  - script: cd ${{ parameters.pathName }} && coverage run --omit='manage.py,config/*,.venv*,*/*__init__.py,*/tests.py,*/admin.py' manage.py test && coverage report
    displayName: Run unit tests and coverage for ${{ parameters.pathName }}...
    env:
      DJANGO_SECRET_KEY: $(PROD-DJANGOSECRETKEY)
      DJANGO_DEBUG: $(PROD-DJANGODEBUG)
      DOMAIN: $(PROD-DOMAIN)
      PGDATABASE: $(PROD-PGDATABASE)
      PGDATABASEV2: $(PROD-PGDATABASEV2)
      PGUSER: $(PROD-PGUSER)
      PGPASSWORD: $(PROD-PGPASSWORD)
      PGHOST: $(PROD-PGHOST)
      PGPORT: $(PROD-PGPORT)
  - upload: ${{ parameters.pathName }}
    artifact: ${{ parameters.pathName }}
    condition: succeeded()

# buildStage.yaml
parameters:
- name: tag
  default: ''
- name: tagVersion
  default: ''

stages:
- stage: BuildAndPush
  displayName: Build and Push Docker images of services...
  dependsOn:
  - UnitTests
  - Changed
  condition: succeeded()
  variables:
    anyServicesChanged: $[ stageDependencies.Changed.Changes.outputs['detectChanges.anyServicesChanged'] ]
    servicesChanged: $[ stageDependencies.Changed.Changes.outputs['detectChanges.servicesChanged'] ]
  jobs:
  - job: BuildAndPush
    condition: or(eq(variables.anyServicesChanged, true), eq(variables['Build.Reason'], 'Manual'))
    displayName: Building and Push Docker images of services...
    steps:
    - template: buildStep.yaml
      parameters:
        tag: ${{ parameters.tag }}
        tagVersion: ${{ parameters.tagVersion }}
        serviceName: api-v1
        pathName: api

# buildStep.yaml
parameters:
- name: tag
  default: ''
- name: tagVersion
  default: ''
- name: serviceName
  default: ''
- name: pathName
  default: ''

steps:
- task: Docker@2
  condition: contains(variables['servicesChanged'], '${{ parameters.serviceName }}')
  displayName: Build and Push ${{ parameters.pathName }} Docker image
  inputs:
    command: buildAndPush
    repository: $(imageRepository)-${{ parameters.pathName }}
    dockerfile: $(Pipeline.Workspace)/${{ parameters.pathName }}/Dockerfile
    buildContext: $(Pipeline.Workspace)/${{ parameters.pathName }}
    containerRegistry: $(dockerRegistryServiceConnection)
    tags: |
      ${{ parameters.tag }}-${{ parameters.tagVersion }}
I can see that the api artifact was published. But when it comes to actually pulling it in order to build the Docker image, I just get:
Starting: Build and Push api Docker image
==============================================================================
Task : Docker
Description : Build or push Docker images, login or logout, start or stop containers, or run a Docker command
Version : 2.187.0
Author : Microsoft Corporation
Help : https://aka.ms/azpipes-docker-tsg
==============================================================================
##[error]Unhandled: No Dockerfile matching /home/vsts/work/1/api/Dockerfile was found.
Finishing: Build and Push api Docker image
If I add a script task to ls -la $(Pipeline.Workspace), I get:
drwxr-xr-x 6 vsts docker 4096 Jul 9 16:19 .
drwxr-xr-x 7 vsts root 4096 Jul 9 16:19 ..
drwxr-xr-x 2 vsts docker 4096 Jul 9 16:19 TestResults
drwxr-xr-x 2 vsts docker 4096 Jul 9 16:19 a
drwxr-xr-x 2 vsts docker 4096 Jul 9 16:19 b
drwxr-xr-x 12 vsts docker 4096 Jul 9 16:19 s
Question
So what am I doing wrong here that referencing the artifact is working in one case and not the other?
The deployment job in your first case automatically downloads all published artifacts.
In your second case you use a regular job, which doesn't download them. You have to do it explicitly by adding a download step before running your Docker step.
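A minimal sketch of that extra step, assuming the artifact name matches the pathName parameter used at upload time (with download: current, the files land under $(Pipeline.Workspace)/<artifact-name>, which is what the Docker task paths above expect):

# buildStep.yaml (sketch)
steps:
- download: current
  artifact: ${{ parameters.pathName }}
- task: Docker@2
  condition: contains(variables['servicesChanged'], '${{ parameters.serviceName }}')
  displayName: Build and Push ${{ parameters.pathName }} Docker image
  inputs:
    command: buildAndPush
    repository: $(imageRepository)-${{ parameters.pathName }}
    dockerfile: $(Pipeline.Workspace)/${{ parameters.pathName }}/Dockerfile
    buildContext: $(Pipeline.Workspace)/${{ parameters.pathName }}
    containerRegistry: $(dockerRegistryServiceConnection)
    tags: |
      ${{ parameters.tag }}-${{ parameters.tagVersion }}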

Github Actions deploy artifacts

The below workflow fails when deploying the artifacts to S3. In the last line of the deployment job, it complains that ./build doesn't exist, probably because it can't find the artifacts.
The user-provided path ./build does not exist.
##[error]Process completed with exit code 255.
How do I make it recognise the artifacts created in the build job?
name: Build and deploy

on:
  push:
    branches:
      - master

jobs:
  build:
    name: Build
    runs-on: ubuntu-18.04
    strategy:
      matrix:
        node-version: [10.x]
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: |
          npm i -D
          npm test --if-present
          npm run build:prod
        env:
          CI: true
      - name: Upload Artifact
        uses: actions/upload-artifact@master
        with:
          name: build
          path: build
  deploy:
    name: Deploy
    needs: build
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@master
      - name: Deploy to S3
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          aws s3 cp \
            --recursive \
            --acl public-read \
            --region ap-southeast-2 \
            ./build s3://example
You need to download the artifact in the deploy job. See the actions/download-artifact action.
  deploy:
    name: Deploy
    needs: build
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@master
      - name: Download Artifact
        uses: actions/download-artifact@master
        with:
          name: build
          path: build
      - name: Deploy to S3
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          aws s3 cp \
            --recursive \
            --acl public-read \
            --region ap-southeast-2 \
            ./build s3://example