Need to run test suites on different platforms - jfrog-pipelines

As part of our pipeline I need to run tests on Linux, Windows, and macOS. Is there any built-in option or best practice for such a task? Any examples?
I am hoping there is a feasible built-in option in Pipelines to achieve this.

The Matrix step type in Pipelines can be used here (https://www.jfrog.com/confluence/display/JFROG/Matrix) by passing the node pool options as part of stepletMultipliers.
Sample step:
steps:
  - name: step_1
    type: Matrix
    stepMode: Bash
    configuration:
      multiNode: true
      stepletMultipliers:
        environmentVariables:
          - foo: foo
          - bar: bar
        nodePools:
          - winNodePool
          - u20NodePool
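Each steplet (one per combination of environment variable and node pool) runs the step's commands on a node from its assigned pool, so the test command only has to be written once. As a rough sketch, the step above would also carry an execution block such as the following (run_tests.sh and the echo are placeholders for your own test entry point):

    execution:
      onExecute:
        # runs once per steplet, on a node from that steplet's pool
        - echo "Running test suite on $(uname -s)"
        - ./run_tests.sh   # placeholder; adapt the command per OS if needed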

Related

How to pass array variables to parallel matrix in GitLab CI pipeline?

Trying to configure parallel testing. Is there some way to set up a variable as an array and use it in a matrix? For example:
stages:
  - test

variables:
  SUITES: $SUITES

test:
  stage: test
  image: $CI_REGISTRY_IMAGE
  parallel:
    matrix:
      - SUITE: [$SUITES]
  script:
    - pytest -m ${SUITE} --json-report --json-report-file=${SUITE}_report.json
  artifacts:
    when: always
    paths:
      - "${SUITE}_report.json"
    expire_in: "1 day"
The goal is to run jobs in parallel, one per suite, with artifacts for each job. Maybe I'm looking in the wrong place?
See the GitLab docs on the parallel:matrix keyword. The whole idea is to set up multiple variable definitions which will each be run in parallel. Each list element under matrix will be one job specification; list elements should be dictionaries specifying the variables to set in each job.
In your case:
test:
  stage: test
  image: $CI_REGISTRY_IMAGE
  parallel:
    matrix:
      - SUITE: endpoints
      - SUITE: smoke
      - SUITE: auth
  script:
    - pytest -m ${SUITE} --json-report --json-report-file=${SUITE}_report.json
  artifacts:
    when: always
    paths:
      - "${SUITE}_report.json"
    expire_in: "1 day"
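Note that parallel:matrix also accepts a list of values for a single variable, which expands into one job per value; the result is the same as the separate dictionaries above, just more compact (the suite names are still hard-coded):

test:
  stage: test
  image: $CI_REGISTRY_IMAGE
  parallel:
    matrix:
      - SUITE: [endpoints, smoke, auth]
  script:
    - pytest -m ${SUITE} --json-report --json-report-file=${SUITE}_report.json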
I have a similar question about the matrix feature. I have a pipeline template that can build multiple flavours of a "base" Docker image, where each image differs only in the version of the tool. For example, I want to build custom "base" .NET images for .NET 3.1, 5.0, and 6.0.
Previously I was declaring a variable:
VERSIONS_TO_BUILD: "3.1 5.0 6.0"
and then looping through that list (e.g. for each ver in VERSIONS_TO_BUILD, run docker build).
I am also scanning the resulting containers. So, multiple jobs would have the same matrix list.
I just discovered this matrix functionality. I realize I can set up my job like this:
build:
  parallel:
    matrix:
      - VERSION: 3.1
      - VERSION: 5.0
      - VERSION: 6.0

# repeat for scan job
As mentioned, I am using a template so the same pipeline can be used for .NET, Node, Java, Maven, etc. What I am hoping to do is include the template, define the versions used by that repo, and then re-use them:
include:
  - base_image_pipeline.yml

variables:
  VERSIONS:
    - "3.1"
    - "5.0"
    - "6.0"

build:
  parallel:
    matrix:
      - $VERSIONS

scan:
  parallel:
    matrix:
      - $VERSIONS
I have a feeling the !reference keyword might be the best option, but would like other inputs.
Thanks!
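One way to get that reuse without repeating the matrix is to keep it in a hidden job in the template and pull it into each real job with extends, which merges the parallel section. A sketch only (the hidden-job name and echo commands are placeholders, and I haven't tested this against your template):

# base_image_pipeline.yml
.version_matrix:
  parallel:
    matrix:
      - VERSION: ["3.1", "5.0", "6.0"]

# .gitlab-ci.yml in the consuming repo
include:
  - local: base_image_pipeline.yml

build:
  extends: .version_matrix
  script:
    - echo "building base image for $VERSION"   # placeholder build command

scan:
  extends: .version_matrix
  script:
    - echo "scanning image for $VERSION"        # placeholder scan command

!reference should work along similar lines (splicing in just the matrix), but I haven't verified it for the parallel keyword.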

With yaml pipelines, is there a way to select an environment parameter from a dynamic list of all environments?

We've been migrating some of our manual deployment processes from Octopus to Azure DevOps YAML pipelines. One of the quality-of-life features we're sorely missing is being able to select the environment from a drop-down list / auto-complete field as we could in Octopus.
Is there a way to achieve this? Currently, the only way I can think of doing it is to have a repo with a .yaml template file updated with a list of new environments as part of our provisioning process... which seems less than ideal.
If you are going to trigger the pipeline manually, then you can make use of runtime parameters in the Azure DevOps pipeline.
For example:
To make the environment name selectable from a list of choices, you can use the following snippet.
parameters:
  - name: EnvName
    displayName: EnvName
    type: string
    default: A
    values:
      - A
      - B
      - C
      - D
      - E
      - F

trigger: none # trigger is explicitly set to none

jobs:
  - job: build
    displayName: build
    steps:
      - script: echo building $(Build.BuildNumber) with ${{ parameters.EnvName }}
Documentation about runtime parameters is here.
The downside is that trigger: none means the pipeline can only be triggered manually. Not sure how this works with other trigger options.

Fill runtime azure pipeline parameters from external source

We are looking to create a pipeline to update our multi-tenant Azure environment. We need to perform some actions per tenant during the update. To accomplish this, we would like to create a job per tenant so we can process tenants in parallel, and I want to use a runtime parameter to pass the tenants to update to my pipeline as follows:
parameters:
  - name: tenants
    type: object
The value of the tenants parameter might look something like this:
- Name: "customer1"
Someotherproperty: "some value"
- Name: "customer2"
Someotherproperty: "some other value"
To generate the jobs, we do something like this:
stages:
  - stage:
    jobs:
      - job: Update_Tenant
        strategy:
          matrix:
            ${{ each tenant in parameters.Tenants }}:
              ${{ tenant.tenantName }}:
                name: ${{ tenant.tenantName }}
                someproperty: ${{ tenant.otherProperty }}
          maxParallel: 2
        steps:
          - checkout: none
          - script: echo $(name).$(someproperty)
Now what we need is some way to fill this tenants parameter. I tried a few solutions:
Ideally I would like to put a build stage before the Update_Tenants stage to call a REST api to get the tenants, and expand the tenants parameter when the Update_Tenants stage starts, but this is not supported AFAIK, since parameter expansion is done when the pipeline starts.
A less ideal but still workable option would have been to create a variable group yaml file containing the tenants, and include this variable group in my pipeline, and use the ${{ variables.Tenants }} syntax to reference them. However, for some reason, variables can only be strings.
The only solution I can currently think of is to create a pipeline that calls a REST API to get the tenants to update, and then uses the Azure DevOps API to queue the actual update process with the correct parameter value. But this feels like a bit of a clunky workaround.
Now my question is, are there any (better?) alternatives to accomplish what I want to do?
Maybe this can help. I was able to use an external source (a .txt file) to fill an array variable in Azure Pipelines.
Working example
# Create a variable
- bash: |
    arrVar=()
    for images in $(cat my_images.txt); do
      arrVar+=$images
      arrVar+=","
    done
    echo "##vso[task.setvariable variable=list_images]$arrVar"

# Use the variable
# "$(list_images)" is replaced by the contents of the `list_images` variable by Azure Pipelines
# before handing the body of the script to the shell.
- bash: |
    echo my pipeline variable is $(list_images)
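If the goal is to drive a matrix rather than a flat list, Azure Pipelines can also read the entire strategy.matrix from a JSON object set as an output variable in an earlier job. A rough sketch (job, step, and tenant names are made up; in practice the JSON would be built from your REST API response instead of the hard-coded echo):

jobs:
  - job: generate_matrix
    steps:
      - bash: |
          # build this JSON from the REST API response in practice
          echo "##vso[task.setvariable variable=tenants;isOutput=true]{'customer1':{'name':'customer1'},'customer2':{'name':'customer2'}}"
        name: set_matrix

  - job: update_tenant
    dependsOn: generate_matrix
    strategy:
      matrix: $[ dependencies.generate_matrix.outputs['set_matrix.tenants'] ]
      maxParallel: 2
    steps:
      - checkout: none
      - script: echo updating $(name)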
Sources (there is also an example for a matrix):
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-job-scoped-variable-from-a-script
Other sources
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script
"To accomplish this, we would like to create a job per tenant, so we can process tenants in parallel."
Apart from the rolling deployment strategy, you can also check Strategies and Matrix.
You can try something like this unless you have to use Runtime parameters:
jobs:
  - job: Update
    strategy:
      matrix:
        tenant1:
          Someotherproperty1: '1.1'
          Someotherproperty2: '1.2'
        tenant2:
          Someotherproperty1: '2.1'
          Someotherproperty2: '2.2'
        tenant3:
          Someotherproperty1: '3.1'
          Someotherproperty2: '3.2'
      maxParallel: 3
    steps:
      - checkout: none
      - script: echo $(Someotherproperty1).$(Someotherproperty2)
        displayName: 'Echo something'

Concourse call job from another job with parameters

I have a job with many tasks like this:
- name: main-job
  serial: true
  plan:
    - aggregate:
        - get: <git-resource>
          passed: [previous-job]
          trigger: true
        - get: <git-resource-3>
    - task: <task-1>
      file: <git-resource>/<path>/<task-1-no-db>.yml
    - task: <task-2>
      tags: ['<specific-tag>']
      file: <git-resource>/<path>/<task-1>.yml
      params:
        DATABASE_HOST: <file>
        DATABASE: <my-db-1>
    - task: <task-2>
      tags: ['<specific-tag>']
      file: <git-resource>/<path>/<task-1>.yml
      params:
        DATABASE_HOST: <file>
        DATABASE: <my-db-1>
The problem for me is that I have to call what is literally the same job, but with the DATABASE param set to my-db-2 instead of my-db-1.
The only way I can do this is to create a new job and pass the params, literally copying the entire set of lines. My job is too fat (it has too many tasks in it), so although copying it is the obvious solution, I am wondering if there is a way to re-use it: either multiple pipelines plus one main pipeline that calls them with the DATABASE param passed in, or two small jobs that call this main job with different params, something like this:
- name: <call-main-job-with-db-1>
  serial: true
  plan:
    - aggregate:
        - get: <git-resource>
          passed: [previous-job]
          trigger: true
    - task: <call-main-job-task>
      params:
        DATABASE_HOST: <file>
        DATABASE: <my-db-1>

- name: <call-main-job-with-db-2>
  serial: true
  plan:
    - aggregate:
        - get: <git-resource>
          passed: [previous-job]
          trigger: true
    - task: <call-main-job-task>
      params:
        DATABASE: <my-db-2>
I am not sure if this is even possible since I didn't find any example of this.
Remember you are using YAML, so you can use YAML features like anchors.
You will find some additional information about anchors in this link; look for "EXTRA YAML FEATURES":
"YAML also has a handy feature called 'anchors', which let you easily duplicate content across your document. Both of these keys will have the same value:"
anchored_content: &anchor_name This string will appear as the value of two keys.
other_anchor: *anchor_name
# Anchors can be used to duplicate/inherit properties
base: &base
  name: Everyone has same name

foo: &foo
  <<: *base
  age: 10

bar: &bar
  <<: *base
  age: 20
Try this for your Concourse Pipeline:
common:
  db_common: &db_common
    serial: true
    plan:
      - aggregate:
          - get: <git-resource>
            passed: [previous-job]
            trigger: true
      - task: <call-main-job-task>
        params:

jobs:
  - name: <call-main-job-with-db-1>
    <<: *db_common
    DATABASE_HOST: <file>
    DATABASE: <my-db-1>
  - name: <call-main-job-with-db-2>
    <<: *db_common
    DATABASE: <my-db-2>
NOTE: Remember that you can have as many Anchors as you want, you can define two or more anchors for the same Job/Task/Resource, etc.
You just need to copy and paste the task as you do in the question description. Concourse expects an expressive YAML; there is no branching or logic allowed. If you don't want to copy and paste so much YAML, you can do some YAML-generation magic to simplify what you look at and work with, but Concourse will want the full YAML with each job defined separately.
Concourse has a fan-in/fan-out paradigm, where you want to keep the jobs simple and short. Use a scripting language, e.g. Python or Ruby, to make your pipeline creation more flexible.
Personally I use one pipeline.yml.erb file in which I render different job templates. I try to keep my job.yml.erb files as generic as possible so I can reuse them for different pipelines.
To take it to the next level, you could specify a meta config.yml and use this config inside your templates to generate your pipeline depending on what you specified in the config.
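As a purely illustrative example of that meta-config idea (hypothetical file and key names), the config just lists the parts that vary, and the template loops over it to emit one fully expanded job per entry:

# config.yml - the only file you edit per pipeline (hypothetical)
databases:
  - name: my-db-1
    host: db1.internal
  - name: my-db-2
    host: db2.internal

# The job template (e.g. job.yml.erb) iterates over 'databases' and renders one
# complete Concourse job per entry, filling in DATABASE / DATABASE_HOST; the
# generated pipeline.yml is what you then set with fly.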

Passing parameters between concourse jobs / tasks

What's the best way to pass parameters between Concourse tasks and jobs? For example, if my first task generates a unique ID, what would be the best way to pass that ID to the next job or task?
If you are just passing between tasks within the same job, you can use artifacts (https://concourse-ci.org/running-tasks.html#outputs), and if you are passing between jobs, you can use resources (like putting it in git or S3). For example, if you are passing between tasks, you can have a task file:
---
platform: linux
image_resource: # ...
outputs:
  - name: unique-id
run:
  path: project-src/ci/fill-in-output.sh
And the script fill-in-output.sh will put the file that contains the unique ID into path unique-id/. With that, you can have another task that takes the unique-id output as an input (https://concourse-ci.org/running-tasks.html#inputs) and use that unique id file.
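For completeness, the consuming task declares the same name under inputs; a sketch, assuming fill-in-output.sh wrote the ID to a file named id inside that directory:

---
platform: linux
image_resource: # ...
inputs:
  - name: unique-id
run:
  path: sh
  args:
    - -exc
    - cat unique-id/id   # 'id' is an assumed file name; read whatever fill-in-output.sh actually wrote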
In addition to task outputs, resources will place files automagically for you in their working directory.
For example, I have a pipeline job as follows:
jobs:
  - name: build
    plan:
      - get: git-some-repo
      - put: push-some-image
        params:
          build: git-some-repo/the-image
      - task: Use-the-image-details
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: alpine
          inputs:
            - name: push-some-image
          run:
            path: sh
            args:
              - -exc
              - |
                ls -lrt push-some-image
                cat push-some-image/repository
                cat push-some-image/digest
We'll see the details of the image push from push-some-image:
+ cat push-some-image/repository
xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/path/image
+ cat push-some-image/digest
sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Passing data within a job's tasks can easily be done with input/output artifacts (files), as Clara Fu noted.
For the case between jobs, when simple data (e.g. a string) has to be passed and using git is overkill, the 'keyval' resource [1] seems to be a good solution.
The README describes how the data is stored and managed as a standard properties file.
[1] https://github.com/SWCE/keyval-resource
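Based on that resource's README (worth double-checking there for the exact parameter names), the usage is roughly: a task in the first job writes key=value lines to a properties file in an output directory, a put publishes it, and a get in the downstream job makes the same file available as an input:

resource_types:
  - name: keyval
    type: docker-image
    source:
      repository: swce/keyval-resource

resources:
  - name: build-info
    type: keyval

jobs:
  - name: first-job
    plan:
      - task: generate-id
        file: <git-resource>/<path>/generate-id.yml   # task writes e.g. "unique_id=1234" to keyvalout/keyval.properties
      - put: build-info
        params:
          file: keyvalout/keyval.properties
  - name: second-job
    plan:
      - get: build-info
        passed: [first-job]
        trigger: true
      # downstream tasks read build-info/keyval.properties as a normal input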