How to pass array variables to parallel matrix in GitLab CI pipeline? - pytest

Trying to configure parallel testing. Is there some way to set up a variable as an array and use it in matrix? For example:
stages:
  - test
variables:
  SUITES: $SUITES
test:
  stage: test
  image: $CI_REGISTRY_IMAGE
  parallel:
    matrix:
      - SUITE: [$SUITES]
  script:
    - pytest -m ${SUITE} --json-report --json-report-file=${SUITE}_report.json
  artifacts:
    when: always
    paths:
      - "${SUITE}_report.json"
    expire_in: "1 day"
The goal is to run the suites as parallel jobs, with artifacts produced per job. Or maybe I'm looking in the wrong place?

See the GitLab docs on the parallel:matrix keyword. The idea is to define multiple sets of variables, each of which runs as its own parallel job. Each list element under matrix is one job specification; list elements are hashes of the variables to set for that job.
In your case:
test:
  stage: test
  image: $CI_REGISTRY_IMAGE
  parallel:
    matrix:
      - SUITE: endpoints
      - SUITE: smoke
      - SUITE: auth
  script:
    - pytest -m ${SUITE} --json-report --json-report-file=${SUITE}_report.json
  artifacts:
    when: always
    paths:
      - "${SUITE}_report.json"
    expire_in: "1 day"
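If I read the docs correctly, a single matrix entry with an array value should be equivalent and more compact, since GitLab expands each value in the array into its own job:
parallel:
  matrix:
    - SUITE: [endpoints, smoke, auth]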

I have a similar question about the matrix feature. I have a pipeline template that can build multiple images of a "base" Docker image, where each image differs only in the version of the tool. For example, I want to build custom "base" .NET images for .NET 3.1, 5.0, and 6.0.
Previously I was declaring a variable:
VERSIONS_TO_BUILD: "3.1 5.0 6.0"
and then looping through that list (e.g. for each ver in VERSIONS_TO_BUILD, run docker build).
I am also scanning the resulting containers. So, multiple jobs would have the same matrix list.
I just discovered this matrix functionality. I realize I can set up my job like this:
build:
  parallel:
    matrix:
      - VERSION: 3.1
      - VERSION: 5.0
      - VERSION: 6.0
# repeat for scan job
As mentioned, I am using a template so the same pipeline can be used for .NET, Node, Java, Maven, etc. What I am hoping to do is to include the template, then define the versions I'm using for that repo, then re-use it.
include:
  - base_image_pipeline.yml

variables:
  VERSIONS:
    - "3.1"
    - "5.0"
    - "6.0"

build:
  parallel:
    matrix:
      - $VERSIONS

scan:
  parallel:
    matrix:
      - $VERSIONS
I have a feeling the !reference keyword might be the best option, but would like other inputs.
Thanks!
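One hedged sketch of that kind of reuse is a hidden job that both jobs extend; the .versions name, the VERSION values, and the scripts below are illustrative, and the hidden job could just as well live in base_image_pipeline.yml:
.versions:
  parallel:
    matrix:
      - VERSION: ["3.1", "5.0", "6.0"]

build:
  extends: .versions
  script:
    # hypothetical build command; VERSION comes from the matrix
    - docker build --build-arg VERSION=${VERSION} -t base:${VERSION} .

scan:
  extends: .versions
  script:
    - echo "scan base:${VERSION}"
A !reference [.versions, parallel] tag may achieve the same on recent GitLab versions, but I haven't verified that myself.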

Related

Need to run test suites on different platforms

As part of our pipeline I need to run tests on Linux, Windows, and macOS. Is there any built-in option or best practice for such a task? Any examples?
I am hoping there is a feasible built-in option in the pipeline to achieve this.
The Matrix step type in pipelines can be used here (https://www.jfrog.com/confluence/display/JFROG/Matrix) by passing the node pool options as part of stepletMultipliers.
Sample step:
steps:
  - name: step_1
    type: Matrix
    stepMode: Bash
    configuration:
      multiNode: true
      stepletMultipliers:
        environmentVariables:
          - foo: foo
          - bar: bar
        nodePools:
          - winNodePool
          - u20NodePool

Azure DevOps stage dependsOn stageDependencies

How do I create a multi-stage pipeline where the stage/job names are derived from a parameter, the per-tool stages run in parallel, and one final stage waits for all previous stages?
Here's what I've tried so far:
A multi-stage pipeline runs several stages in parallel depending on a tool parameter, and dependsOn is passed as a parameter. Running it in parallel for each tool, waiting on the previous stage for that tool, works smoothly.
Main template: all-wait-for-all
- ${{ each tool in parameters.Tools }}:
  - template: ../stages/all-wait-for-all.yml
    parameters:
      Tool: ${{ tool }}
stages/all-wait-for-all.yml
parameters:
  - name: Tool
    type: string

stages:
  - stage: ALL_WAIT_${{ parameters.Tool }}
    dependsOn:
      - PREPARE_STAGE
      - OTHER_TEMPLATE_EXECUTED_FOR_ALL_TOOLS_${{ parameters.Tool }}
Now there should be one stage that runs only once, not per tool, but only after the individual tool stages are done. It can't be hardcoded, as the set of tools varies. So I hoped defining the individual wait stages in a prepare job would work out:
Main template: prepare-stage
- script: |
    toolJson=$(echo '${{ convertToJson(parameters.Tools) }}')
    tools=$(echo "$toolJson" | jq '.[]' | xargs)
    stage="ALL_WAIT"
    for tool in $tools; do
      stageName="${stage}_${tool}"
      stageWaitArray+=($stageName)
    done
    echo "##vso[task.setvariable variable=WAIT_ON_STAGES]${stageWaitArray}"
    echo "##vso[task.setvariable variable=WAIT_ON_STAGES;isOutput=true]${stageWaitArray}"
  displayName: "Define wait stages"
  name: WaitStage
stages/one-waits-for-all.yml
stages:
  - stage: ONE_WAITS
    dependsOn:
      - $[ stageDependencies.PREPARE_STAGE.PREPARE_JOB.outputs['WaitStage.WAIT_ON_STAGES'] ]
which fails with the error below:
Stage ONE_WAITS depends on unknown stage $[ stageDependencies.PREPARE_STAGE.PREPARE_JOB.outputs['WaitStage.WAIT_ON_STAGES'] ].
As I understand it, dependsOn cannot use dynamic $[ ] or macro $( ) expressions, which are evaluated at runtime. You can use template expressions ${{ }}, which are evaluated at queue time.
Guess I was overthinking the solution, as eventually it was pretty obvious.
The first template can be called in a loop from the main template, so it's executed once per tool. The second template is called once and waits on the previous stages for all tools; the job/stage prefix is known, and only the tool-name suffix was unknown. So just add them in a loop directly in dependsOn:
Here you go:
stages:
  - stage: ONE_WAITS
    dependsOn:
      - PREPARE_STAGE
      - ${{ each tool in parameters.Tools }}:
        - OTHER_TEMPLATE_EXECUTED_FOR_ALL_TOOLS_${{ tool }}
        - ALL_WAIT_${{ tool }}
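For completeness, a hedged sketch of how the main template might wire both templates together; the file names follow the ones above, and passing the full Tools list to one-waits-for-all.yml is an assumption:
stages:
  # one stage per tool, expanded at queue time and run in parallel
  - ${{ each tool in parameters.Tools }}:
    - template: ../stages/all-wait-for-all.yml
      parameters:
        Tool: ${{ tool }}
  # single final stage that waits on every per-tool stage
  - template: ../stages/one-waits-for-all.yml
    parameters:
      Tools: ${{ parameters.Tools }}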

Can't make use of a workflow environment variable in a GitHub Action from the marketplace (through build matrix)

I'm trying to make use of a workflow environment variable in a marketplace action, using a build matrix, but it's not working for some reason.
I basically want to define the database versions just once to avoid repeating them in multiple places in my workflow.
Here's my workflow (minimal reproducible example):
name: dummy
on:
  pull_request:
env:
  MONGODB_3_6: 3.6.13
  MONGODB_4_0: 4.0.13
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        MONGODB: [$MONGODB_4_0, $MONGODB_3_6]
    steps:
      - uses: actions/checkout@v2
      - name: Start MongoDB
        uses: supercharge/mongodb-github-action@1.3.0
        with:
          mongodb-version: ${{ matrix.MONGODB }}
And it's failing with the error below, as if MONGODB_4_0 wasn't defined.
Interesting fact: without the strategy matrix, I'm able to make it work using the env context (docs):
- name: Start MongoDB
  uses: supercharge/mongodb-github-action@1.3.0
  with:
    mongodb-version: ${{ env.MONGODB_4_0 }}
UPDATE: according to tests and comments, I think the matrix can't take environment variables or other dynamic values, so the best way will be:
matrix:
  MONGODB: [3.6.13, 4.0.13]
As @max said, you can use a variable for your workflow, so I guess your matrix syntax might be wrong; maybe you can try it like this:
MONGODB: [${{ env.MONGODB_4_0 }}, ${{ env.MONGODB_3_6 }}]
You have only one job (test), so you can also define your env variables at the job level. The variables will then be accessible throughout the job:
jobs:
  test:
    runs-on: ubuntu-latest
    env:
      MONGODB_3_6: 3.6.13
      MONGODB_4_0: 4.0.13
For further information, see the GitHub docs.
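If the goal is still a single source of truth for the versions, one hedged sketch is to emit them from a setup job and build the matrix with fromJSON; the versions job, its output name, and the $GITHUB_OUTPUT syntax below are my assumptions, not part of the original workflow:
jobs:
  versions:
    runs-on: ubuntu-latest
    outputs:
      mongodb: ${{ steps.set.outputs.mongodb }}
    steps:
      # publish the version list as a JSON array output
      - id: set
        run: echo 'mongodb=["3.6.13","4.0.13"]' >> "$GITHUB_OUTPUT"
  test:
    needs: versions
    runs-on: ubuntu-latest
    strategy:
      matrix:
        MONGODB: ${{ fromJSON(needs.versions.outputs.mongodb) }}
    steps:
      - uses: actions/checkout@v2
      - name: Start MongoDB
        uses: supercharge/mongodb-github-action@1.3.0
        with:
          mongodb-version: ${{ matrix.MONGODB }}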

Fill runtime azure pipeline parameters from external source

We are looking to create a pipeline to update our multi-tenant Azure environment. We need to perform some actions during the update for each tenant. To accomplish this, we would like to create a job per tenant, so we can process tenants in parallel. I want to use a runtime parameter to pass the tenants to update to my pipeline, as follows:
parameters:
  - name: tenants
    type: object
The value of the tenants parameter might look something like this:
- Name: "customer1"
  Someotherproperty: "some value"
- Name: "customer2"
  Someotherproperty: "some other value"
To generate the jobs, we do something like this:
stages:
  - stage:
    jobs:
      - job: Update_Tenant
        strategy:
          matrix:
            ${{ each tenant in parameters.Tenants }}:
              ${{ tenant.tenantName }}:
                name: ${{ tenant.tenantName }}
                someproperty: ${{ tenant.otherProperty }}
          maxParallel: 2
        steps:
          - checkout: none
          - script: echo $(name).$(someproperty)
Now what we need is some way to fill this tenants parameter. I tried a few solutions:
Ideally I would like to put a build stage before the Update_Tenant stage that calls a REST API to get the tenants, and expand the tenants parameter when the Update_Tenant stage starts, but this is not supported AFAIK, since parameter expansion is done when the pipeline starts.
A less ideal but still workable option would have been to create a variable group YAML file containing the tenants, include this variable group in my pipeline, and use the ${{ variables.Tenants }} syntax to reference them. However, for some reason, variables can only be strings.
The only solution I can currently think of is to create a pipeline that calls a REST API to get the tenants to update, and then uses the Azure DevOps API to queue the actual update process with the correct parameter value. But this feels like a bit of a clunky workaround.
Now my question is, are there any (better?) alternatives to accomplish what I want to do?
Maybe this can help. I was able to use an external source (a .txt file) to fill an array variable in Azure Pipelines.
Working example:
# Create a variable
- bash: |
    arrVar=()
    for images in `cat my_images.txt`; do
      arrVar+=$images
      arrVar+=","
    done
    echo "##vso[task.setvariable variable=list_images]$arrVar"

# Use the variable
# "$(list_images)" is replaced by the contents of the `list_images` variable by Azure Pipelines
# before handing the body of the script to the shell.
- bash: |
    echo my pipeline variable is $(list_images)
Sources (there is also an example for matrix):
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-job-scoped-variable-from-a-script
Other sources
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script
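The matrix example referenced in those docs roughly works as in the hedged sketch below: a first job emits a JSON object as an output variable, and a second job consumes it as its matrix at runtime. The job names, the mtrx step, and the tenant values are illustrative only; in practice the first job could call your REST API instead of echoing a literal:
jobs:
  - job: generate_tenants
    steps:
      # the JSON shape is { "<legName>": { "<variable>": "<value>" }, ... }
      - bash: echo "##vso[task.setvariable variable=tenants;isOutput=true]{'customer1':{'name':'customer1'}, 'customer2':{'name':'customer2'}}"
        name: mtrx
  - job: Update_Tenant
    dependsOn: generate_tenants
    strategy:
      matrix: $[ dependencies.generate_tenants.outputs['mtrx.tenants'] ]
    steps:
      - checkout: none
      - script: echo $(name)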
To accomplish this, we would like to create a job per tenant, so we
can process tenants in parallel.
Apart from a rolling deployment strategy, you can also check Strategies and Matrix.
You can try something like this unless you have to use Runtime parameters:
jobs:
  - job: Update
    strategy:
      matrix:
        tenant1:
          Someotherproperty1: '1.1'
          Someotherproperty2: '1.2'
        tenant2:
          Someotherproperty1: '2.1'
          Someotherproperty2: '2.2'
        tenant3:
          Someotherproperty1: '3.1'
          Someotherproperty2: '3.2'
      maxParallel: 3
    steps:
      - checkout: none
      - script: echo $(Someotherproperty1).$(Someotherproperty2)
        displayName: 'Echo something'

Passing parameters between concourse jobs / tasks

What's the best way to pass parameters between Concourse tasks and jobs? For example, if my first task generates a unique ID, what would be the best way to pass that ID to the next job or task?
If you are just passing between tasks within the same job, you can use outputs (https://concourse-ci.org/running-tasks.html#outputs), and if you are passing between jobs, you can use resources (like putting it in git or S3). For example, if you are passing between tasks, you can have a task file:
---
platform: linux
image_resource: # ...
outputs:
  - name: unique-id
run:
  path: project-src/ci/fill-in-output.sh
The script fill-in-output.sh will put the file that contains the unique ID into the unique-id/ path. With that, you can have another task that takes the unique-id output as an input (https://concourse-ci.org/running-tasks.html#inputs) and uses that unique-id file.
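A hedged sketch of that consuming task, assuming fill-in-output.sh wrote a file named id into the unique-id/ output (the file name is an assumption):
---
platform: linux
image_resource: # ...
inputs:
  - name: unique-id   # output of the previous task, mounted as a directory
run:
  path: sh
  args:
    - -exc
    - |
      # read the value produced by the previous task
      cat unique-id/id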
In addition to task outputs, resources will automatically place files for you in their working directory.
For example, I have a pipeline job as follows:
jobs:
  - name: build
    plan:
      - get: git-some-repo
      - put: push-some-image
        params:
          build: git-some-repo/the-image
      - task: Use-the-image-details
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: alpine
          inputs:
            - name: push-some-image
          run:
            path: sh
            args:
              - -exc
              - |
                ls -lrt push-some-image
                cat push-some-image/repository
                cat push-some-image/digest
You'll see the details of the image push from push-some-image:
+ cat push-some-image/repository
xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/path/image
+ cat push-some-image/digest
sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Passing data within a job's tasks can easily be done with input/output artifacts (files), as Clara Fu noted.
For passing simple (e.g. string) data between jobs, where using a git repository would be overkill, the 'keyval' resource [1] seems to be a good solution.
The readme describes that the data is stored and managed as a standard properties file.
[1] https://github.com/SWCE/keyval-resource