How can I pass log filter key-value data into a referenced job as an argument?
I was able to pass the job's options as arguments to the child (referenced) job.
But when I pass my log filter key-value data variable as ${data.my_log_filer_var} in the parent job's arguments, it is not evaluated in the child job. Passing other option variables works fine, for example ${option.my_another_var}.
Please let me know if there is another way to pass ${data.my_log_filer_var} to my child job.
Following this, I made a little example. Basically, you need an option on the child job to "receive" the argument from the parent job. This example is a couple of job definitions that you can import into your instance.
Child job (contains an option; this option is filled in by the parent job passing the data variable):
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 0aeaa0f4-d090-4083-b0a5-2878c5f558d1
  loglevel: INFO
  name: ChildJob
  nodeFilterEditable: false
  options:
  - name: opt1
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: 'echo "the argument is: ${option.opt1}"'
    keepgoing: false
    strategy: node-first
  uuid: 0aeaa0f4-d090-4083-b0a5-2878c5f558d1
And the parent job (creates a data value and passes it to the child using the arguments -opt1 ${data.USER} on the job reference step):
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 5c3d4248-9761-45d4-b164-6e38d5146007
  loglevel: INFO
  name: ParentJob
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: env
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            regex: ^(USER)\s*=\s*(.+)$
          type: key-value-data
    - exec: 'echo "the data value is ${data.USER}, sending to child job using arguments on job reference step..."'
    - jobref:
        args: -opt1 ${data.USER}
        group: ''
        name: ChildJob
        nodeStep: 'true'
        uuid: 0aeaa0f4-d090-4083-b0a5-2878c5f558d1
    keepgoing: false
    strategy: node-first
  uuid: 5c3d4248-9761-45d4-b164-6e38d5146007
The Global Variable step can capture a data variable produced by the key-value data log filter.
For example, if you capture the key-value data into the export group, you can create a global variable ${export.keyname*}, which can be used in other steps or in notifications. Note that if you only want the value from a single node, you should use ${export.keyname#nodename} instead.
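For instance, a minimal sketch of how that can look in the workflow of a job definition, assuming the Global Variable step is available (the export-var type name here is from memory, so verify it against a job exported from your own instance):

    - configuration:
        export: USER          # variable name inside the group
        group: export         # target group, read back as ${export.USER*}
        value: ${data.USER*}  # the key-value data captured across nodes
      nodeStep: false
      type: export-var
    - exec: 'echo "value from all nodes: ${export.USER*}"'
    - exec: 'echo "value from node1 only: ${export.USER#node1}"'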
I have the code below, which works fine:
variables:
- group: docker-settings
I need to add a variable to use in a condition, so I inserted the variable as below, but then I get an error. If I remove - group: docker-settings it works; if I instead remove the isMaster line it works; but it doesn't like both being there. What am I doing wrong?
variables:
- group: docker-settings
  isMaster: $[eq(variables['Build.SourceBranch'], 'refs/heads
I used these docs
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml
I used the name/value notation, fixed the value based on the Microsoft example, and set master instead of main. I guess this is what you want.
variables:
- group: docker-settings
- name: 'isMaster'
  value: $[eq(variables['Build.SourceBranch'], 'refs/heads/master')]
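With that variable in place, isMaster can gate later steps through a condition; a minimal sketch (the echo step is made up for illustration):

steps:
- script: echo "running only on master"
  condition: and(succeeded(), eq(variables.isMaster, 'true'))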
Microsoft example:
variables:
  staticVar: 'my value' # static variable
  compileVar: ${{ variables.staticVar }} # compile time expression
  isMain: $[eq(variables['Build.SourceBranch'], 'refs/heads/main')] # runtime expression

steps:
- script: |
    echo ${{variables.staticVar}} # outputs my value
    echo $(compileVar) # outputs my value
    echo $(isMain) # outputs True
Don't give up with yaml and Azure DevOps ;-)
I am trying to set up an environment variable, fed from the Azure DevOps Library, that is set to a null value by default. That part is working, but I want the development teams to be able to overwrite the value ad hoc. I don't want to add this manually to all environment variables, and I would like to build a condition in my playbooks that says when: var != "" rather than when: var != "$(rollback)".
Here is my config:
name: $(Date:yyyyMMdd)$(Rev:.r)
trigger: none
pr: none

resources:
  pipelines:
  - pipeline: my-ui
    source: my-ui-ci-dev
    trigger: true

variables:
- group: Dev

jobs:
- job: my_cd
  pool:
    vmImage: "ubuntu-20.04"
  container:
    image: "myacr.azurecr.io/devops/services-build:$(services_build_tag_version)"
    endpoint: "Docker Registry"
  steps:
  - task: Bash@3
    displayName: "My Playbook"
    env:
      git_username: "$(git_username)"
      git_password: "$(git_password)"
      config_repo: "$(config_repo)"
      service_principal_id: "$(service_principal_id)"
      service_principal_secret: "$(service_principal_secret)"
      subscription_id: "$(subscription_id)"
      tenant_id: "$(tenant_id)"
      rollback: "$(rollback)"
      source_dir: "$(Build.SourcesDirectory)"
      env_dir: "$(Agent.BuildDirectory)/env"
      HELM_EXPERIMENTAL_OCI: 1
    inputs:
      targetType: "inline"
      script: |
        ansible-playbook cicd/ansible/cd.yaml -i "localhost, " -v
When choosing to run the pipeline, I would like the developers to just go to Run > Add Variable > manually add the variable and value > run pipeline.
Then, in the playbook, the value would be "" if not defined, or whatever they typed if it is. Any suggestions on how I can do this with AZDO?
You can do this more easily with a run-time parameter:
parameters:
- name: rollback
  type: string
  default: ' '

variables:
- group: dev
- ${{ if ne(parameters.rollback, ' ') }}:
  - name: rollback
    value: ${{ parameters.rollback }}
The way this works in practice is:
- The pipeline queue dialog automatically includes a 'rollback' text field.
- If the developer types a value into the rollback parameter field, that value is used to override the rollback variable.
- Otherwise, the value from the variable group is used.
Note that you need to give the parameter a default value of a single space; otherwise the pipeline won't let you leave it empty.
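The Bash task from the question can then consume the final value as before; a minimal sketch (normalizing the single-space default inside the script is my own addition, so the playbook really sees an empty string):

steps:
- task: Bash@3
  env:
    rollback: "$(rollback)"  # the parameter override, or the variable-group value
  inputs:
    targetType: "inline"
    script: |
      # turn the ' ' default back into an empty string for the playbook condition
      if [ "$rollback" = " " ]; then rollback=""; fi
      ansible-playbook cicd/ansible/cd.yaml -i "localhost, " -v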
Is it possible to set a value on a parameter, or to define the default value with a variable?
e.g.
parameters:
- name: paraA
  type: boolean
  default: true
  value: $(variableA)
  # variableA is set in a YAML build in Azure DevOps, and for some builds it should be false and not true

parameters:
- name: paraA
  type: boolean
  default: ${{ if eq(variable.variableA, false) }}
As far as I know, this is not a supported scenario. Variables like variableA are expanded at runtime, while parameters like paraA are expanded at template parsing time.
When processing a pipeline, templates are expanded and template expressions are evaluated before the runtime variables are expanded, as the documentation states.
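The reverse direction does work, because parameters are already resolved by the time template expressions are evaluated; a minimal sketch:

parameters:
- name: paraA
  type: boolean
  default: true

variables:
  # compile-time expression: a parameter can feed a variable, but not vice versa
  variableA: ${{ parameters.paraA }}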
I would like to trigger a job template with an object as a parameter.
Unfortunately, even going through the examples, I couldn't find a way to do that.
I would appreciate it if someone could guide me on how to achieve this.
What I want to achieve is to replace the ["DEPLOY", "CONFIG"] part with a dynamic variable:
- template: job-template.yaml
  parameters:
    jobs: ["DEPLOY", "CONFIG"]
This is not possible. YAML is very limited here, and you may read more about this here:
Yaml variables have always been string: string mappings.
So, for instance, you can define parameters as a complex type:
Template file:
parameters:
- name: 'instances'
  type: object
  default: {}
- name: 'server'
  type: string
  default: ''

steps:
- ${{ each instance in parameters.instances }}:
  - script: echo ${{ parameters.server }}:${{ instance }}
Main file:
steps:
- template: template.yaml
  parameters:
    instances:
    - test1
    - test2
    server: someServer
But you are not able to do it dynamically/programmatically, as every output you create will end up as a simple string.
What you can do is pass the list as a string and then split that string using PowerShell, as sketched below. It all depends on what you want to run afterwards, though, because you won't be able to simply iterate over a YAML structure that way; all you can do is loop over the pieces in PowerShell and act there, which may not be enough for you.
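A minimal sketch of that string workaround (the jobsList parameter name is made up for illustration): the caller passes a comma-separated string and the template splits it in a script step:

# template.yaml
parameters:
- name: 'jobsList'
  type: string
  default: ''

steps:
- pwsh: |
    $jobs = "${{ parameters.jobsList }}".Split(',')
    foreach ($j in $jobs) {
      Write-Host "handling: $j"
    }

# main file
# - template: template.yaml
#   parameters:
#     jobsList: 'DEPLOY,CONFIG'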
It's possible with some logic; see below:
- template: job-template.yaml
  parameters:
    param: ["DEPLOY", "CONFIG"]
And in the job-template.yaml file you can define the following, so every job name will be different:
parameters:
  param: []

jobs:
- ${{ each jobName in parameters.param }}:
  - job: ${{ jobName }}
    steps:
    - task: Downl......
I have a job with many tasks like this:
- name: main-job
  serial: true
  plan:
  - aggregate:
    - get: <git-resource>
      passed: [previous-job]
      trigger: true
    - get: <git-resource-3>
  - task: <task-1>
    file: <git-resource>/<path>/<task-1-no-db>.yml
  - task: <task-2>
    tags: ['<specific-tag>']
    file: <git-resource>/<path>/<task-1>.yml
    params:
      DATABASE_HOST: <file>
      DATABASE: <my-db-1>
  - task: <task-2>
    tags: ['<specific-tag>']
    file: <git-resource>/<path>/<task-1>.yml
    params:
      DATABASE_HOST: <file>
      DATABASE: <my-db-1>
The problem for me is that I have to call what is literally the same job, but with the DATABASE param set to my-db-2 instead of my-db-1.
The only way I can do this is by adding a new job and passing the params, literally copying the entire set of lines. My job is too fat (it has too many tasks), so although copying it is the obvious solution, I am wondering if there's a way to reuse it: either multiple pipelines with one main pipeline that calls them with the DATABASE param passed in, or two small jobs that call this main job with different params, something like this:
- name: <call-main-job-with-db-1>
  serial: true
  plan:
  - aggregate:
    - get: <git-resource>
      passed: [previous-job]
      trigger: true
  - task: <call-main-job-task>
    params:
      DATABASE_HOST: <file>
      DATABASE: <my-db-1>

- name: <call-main-job-with-db-2>
  serial: true
  plan:
  - aggregate:
    - get: <git-resource>
      passed: [previous-job]
      trigger: true
  - task: <call-main-job-task>
    params:
      DATABASE: <my-db-2>
I am not sure if this is even possible since I didn't find any example of this.
Remember that you are using YAML, so you can use YAML features like anchors.
You will find some additional information about anchors at this link; look for "EXTRA YAML FEATURES":
YAML also has a handy feature called 'anchors', which let you easily duplicate content across your document. Both of these keys will have the same value:

anchored_content: &anchor_name This string will appear as the value of two keys.
other_anchor: *anchor_name

# Anchors can be used to duplicate/inherit properties
base: &base
  name: Everyone has same name

foo: &foo
  <<: *base
  age: 10

bar: &bar
  <<: *base
  age: 20
Try this for your Concourse Pipeline:
common:
  get_common: &get_common
    get: <git-resource>
    passed: [previous-job]
    trigger: true
  task_common: &task_common
    task: <call-main-job-task>
    file: <git-resource>/<path>/<task-1>.yml

jobs:
- name: <call-main-job-with-db-1>
  serial: true
  plan:
  - aggregate:
    - <<: *get_common
  - <<: *task_common
    params:
      DATABASE_HOST: <file>
      DATABASE: <my-db-1>
- name: <call-main-job-with-db-2>
  serial: true
  plan:
  - aggregate:
    - <<: *get_common
  - <<: *task_common
    params:
      DATABASE: <my-db-2>
NOTE: Remember that you can have as many anchors as you want; you can define two or more anchors for the same job, task, resource, etc.
You just need to copy and paste the task as you did in the question description. Concourse expects explicit YAML; there is no branching or logic allowed. If you don't want to copy and paste so much YAML, you can do some YAML-generation magic to simplify what you look at and work with, but Concourse will want the full YAML with each job defined separately.
Concourse has a fan-in/fan-out paradigm, where you want to keep the jobs simple and short. Use a scripting language, e.g. Python or Ruby, to make your pipeline creation more flexible.
Personally, I use one pipeline.yml.erb file in which I render different job templates. I try to keep my job.yml.erb files as generic as possible so I can reuse them for different pipelines.
To take it to the next level, you could specify a meta config.yml and use that config inside your templates to generate the pipeline, depending on what you specified in the config.
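For illustration, a hypothetical pipeline.yml.erb along those lines (the database list and job names are made up; the file is rendered with ERB first, and the resulting plain YAML is what gets handed to fly set-pipeline):

<%# render one parameterized job per database %>
jobs:
<% ["my-db-1", "my-db-2"].each do |db| %>
- name: main-job-<%= db %>
  serial: true
  plan:
  - aggregate:
    - get: <git-resource>
      passed: [previous-job]
      trigger: true
  - task: <call-main-job-task>
    file: <git-resource>/<path>/<task-1>.yml
    params:
      DATABASE_HOST: <file>
      DATABASE: <%= db %>
<% end %>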