I have two jobs, Job1 and Job2, and global context variables with default values in Talend. Both jobs use the same context variables, but when I run Job2 from Job1, the global context variables are NULL instead of holding their default values.
This only happens when I run Job2 from Job1; if I run Job2 separately, it runs correctly.
Can anyone point out what is wrong in the flow?
Thanks in advance.
In order for context values to be transmitted from the parent job to the child job, you must activate the "Transmit whole context" option on the tRunJob component that runs the child job.
I have set up a task in my Azure pipeline which creates a Redis cache instance through the Azure CLI.
There is another task which runs afterwards that sets the values in my application config file from a pipeline variable named "CacheConnectionKey".
Currently I have to set the variable value manually in the pipeline. I want to automate this by adding a new task in between the two described above. The new task should get the PrimaryKey from the Redis cache instance and set its value in the pipeline variable (i.e. CacheConnectionKey).
There is a command I have tried in PowerShell which gives me the access keys:
Get-AzRedisCacheKey -ResourceGroupName "MyResourceGroup" -Name "MyCacheKey"
PrimaryKey : pJ+jruGKPHDKsEC8kmoybobH3TZx2njBR3ipEsquZFo=
SecondaryKey : sJ+jruGKPHDKsEC8kmoybobH3TZx2njBR3ipEsquZFo=
Now I want the PrimaryKey returned by this command to be set in the pipeline variable CacheConnectionKey so that the next task can use the value properly.
The "process" referred to in the question could be anything, I suppose: a run, a job, a stage, or a pipeline. Regardless, in YAML pipelines you can set variables at the root, stage, and job level. You can also use a variable group to make variables available across multiple pipelines. Some tasks define output variables, which you can consume in downstream steps, jobs, and stages.
In YAML, you can access variables across jobs and stages by using dependencies. By default, each stage in a pipeline depends on the one just before it in the YAML file, so if you need to refer to a stage that isn't immediately prior to the current one, you can override this automatic default by adding a dependsOn section to the stage, as in the sketch below.
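For instance, a minimal sketch of overriding the default stage ordering (stage and job names are illustrative):

stages:
- stage: A
  jobs:
  - job: A1
    steps:
    - script: echo A
- stage: B
  jobs:
  - job: B1
    steps:
    - script: echo B
- stage: C
  dependsOn: A   # C waits only for A; by default it would depend on B
  jobs:
  - job: C1
    steps:
    - script: echo C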
For example, let's assume you have a task called MyTask, which sets an output variable called MyVar.
To use outputs in the same job:
steps:
- task: MyTask@1 # this step generates the output variable
  name: ProduceVar # because we're going to depend on it, we need to name the step
- script: echo $(ProduceVar.MyVar) # this step uses the output variable
To use outputs in a different job:
jobs:
- job: A
  steps:
  # assume that MyTask generates an output variable called "MyVar"
  - task: MyTask@1
    name: ProduceVar # because we're going to depend on it, we need to name the step
- job: B
  dependsOn: A
  variables:
    # map the output variable from A into this job
    varFromA: $[ dependencies.A.outputs['ProduceVar.MyVar'] ]
  steps:
  - script: echo $(varFromA) # this step uses the mapped-in variable
For more details on the syntax and examples, check the following articles:
Use output variables from tasks
Dependencies
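Applied to your Redis scenario, the in-between task could fetch the key and expose it with the task.setvariable logging command. A minimal sketch, assuming a hypothetical service connection named MyServiceConnection and marking the variable as secret since it's an access key:

steps:
- task: AzurePowerShell@5
  displayName: 'Fetch Redis primary key'
  inputs:
    azureSubscription: 'MyServiceConnection' # hypothetical service connection
    ScriptType: 'InlineScript'
    azurePowerShellVersion: 'LatestVersion'
    Inline: |
      $keys = Get-AzRedisCacheKey -ResourceGroupName "MyResourceGroup" -Name "MyCacheKey"
      # Expose the primary key to subsequent tasks as a secret pipeline variable
      Write-Host "##vso[task.setvariable variable=CacheConnectionKey;issecret=true]$($keys.PrimaryKey)"

Subsequent tasks in the same job can then read $(CacheConnectionKey) as usual.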
I have a ParentJob with the below 2 JobSteps:
JobStep1
Job1 - Step11 -> Step12
JobStep2
Job2 - Step21 -> Step22
I could have had a single Job with all the steps, but there are scenarios where I have to run Step11 & Step12, and Step21 & Step22, independently. This is why I have included them in Job1 and Job2, and then call Job1 and Job2 from ParentJob.
All of these jobs (ParentJob, Job1 and Job2) have a RunIdIncrementer associated with them. When ParentJob runs, all the jobs run with the same run.id parameter (the RunIdIncrementer of the child jobs, i.e. Job1 and Job2, doesn't kick in in this scenario, which is expected).
The problem I am facing is that, for example, on the first run all jobs run with run.id 1. Then, if Job2 is run independently, it runs with run.id 2. Now, when ParentJob is run again, it runs with run.id 2: Job1 runs fine, but Job2 throws JobInstanceAlreadyCompleteException. I understand that the RunIdIncrementer of Job1 and Job2 will not work here. But is there a way to achieve what I want to do?
The solution in my mind is a custom JobParametersIncrementer that generates a unique run.id every time it runs, probably using the current time in millis. Still, I'd prefer RunIdIncrementer, as the run.id it generates looks cleaner.
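For illustration, such an incrementer might look like the sketch below (TimestampIncrementer is just an illustrative name):

import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.JobParametersIncrementer;

// Sketch: stamp each run with the current time so run.id is unique,
// whether the job is launched standalone or from the parent job.
public class TimestampIncrementer implements JobParametersIncrementer {

    @Override
    public JobParameters getNext(JobParameters parameters) {
        JobParametersBuilder builder = (parameters == null)
                ? new JobParametersBuilder()
                : new JobParametersBuilder(parameters);
        return builder.addLong("run.id", System.currentTimeMillis()).toJobParameters();
    }
}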
Would also appreciate any suggestions on design change, if that's required.
I have a pipeline in Azure DevOps somewhat like this:
parameters:
- name: Scenario
  displayName: Scenario suite
  type: string
  default: 'Default'

variables:
  Scenario: ${{ parameters.Scenario }}

...

steps:
- script: echo Scenario is $(Scenario)
And I'm executing the pipeline via the VSTS CLI like this:
vsts build queue ... --variables Scenario=Test
When I run my pipeline, it seems that the parameter's default value overwrites my command-line-specified variable value, and I get the step output Scenario is Default. I tried something like Scenario: $[coalesce(variables['Scenario'], ${{ parameters.Scenario }})], but I think I got the syntax wrong because it caused a parsing error.
What would be the best way to only use the parameter value if the Scenario variable has not already been set?
What would be the best way to only use the parameter value if the Scenario variable has not already been set?
Sorry, but as far as I know your scenario is not supported by design. The note here states:
When you set a variable in the YAML file, don't define it in the web editor as settable at queue time. You can't currently change variables that are set in the YAML file at queue time. If you need a variable to be settable at queue time, don't set it in the YAML file.
The --variables switch in the command can only be used to overwrite variables that are marked as Settable at queue time. Since YAML pipelines don't support settable variables by design, your --variables Scenario=Test isn't actually passed when queuing the YAML pipeline.
Here are several tests I ran to demonstrate this:
1. A YAML pipeline, which doesn't support settable variables at queue time:

pool:
  vmImage: 'windows-latest'

variables:
  Scenario: Test

steps:
- script: echo Scenario is $(Scenario)
I ran the command vsts build queue ... --variables Scenario=Test123; the pipeline run started, but the output log was always Scenario is Test instead of the expected Scenario is Test123. This shows that the pipeline parameter is not overwriting the variable value; rather, --variables Scenario=xxx is not passed at all, because YAML pipelines don't support settable variables.
2. A Classic UI build pipeline with a pipeline variable Scenario:
Queuing it via the command az pipelines build queue ... --variables Scenario=Test12345 (which has the same function as vsts build queue ... --variables Scenario=Test) only gives this error:
Could not queue the build because there were validation errors or warnings.
3. Then enable the Settable at queue time option for this variable:
Run the same command again, and now the build queues successfully. It also succeeds in overwriting the original pipeline variable with the new value set on the command line.
You can run tests like these to figure out the cause of the behavior you met.
In addition:
The VSTS CLI has long been deprecated and replaced by the Azure CLI with the Azure DevOps extension, so it's now recommended to use az pipelines build queue instead.
Lance had a great suggestion, but here is how I ended up solving it:
parameters:
- name: Scenario
  displayName: Scenario suite
  type: string
  default: 'Default'

variables:
  ScenarioFinal: $[coalesce(variables['Scenario'], '${{ parameters.Scenario }}')]

...

steps:
- script: echo Scenario is $(ScenarioFinal)
In this case we use the coalesce expression to assign the value to a new variable, ScenarioFinal. This way we can still use --variables Scenario=Test via the CLI, or use the parameter via the pipeline UI. coalesce takes the first non-null value, effectively "reordering" the precedence Lance linked to here: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#expansion-of-variables
(Note that there need to be single quotes around the parameter reference, '${{}}', because ${{}} is simply expanded to the parameter's value, and the coalesce expression can't interpret the raw value unless the single quotes mark it as a string.)
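For example, queuing with the Azure CLI (the pipeline name is hypothetical, and a default organization/project is assumed to be configured):

az pipelines build queue --definition-name MyPipeline --variables Scenario=Test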
Note that the ability to set parameters via the CLI is a current feature suggestion here: https://github.com/Azure/azure-devops-cli-extension/issues/972
Suppose I have a build:
Job1:
  Task1: Build
Job2:
  dependsOn: Job1
  Task2: Test
And the test task uses some kind of database, or another unique resource.
I would like to know if it is possible, when multiple builds are running in parallel, to lock Job2 so that it runs exclusively, without other builds trying to access the same resource.
I am using CMake and CTest, so I know I can do something similar between separate unit tests with RESOURCE_LOCK (see the sketch below), but I am certain that I will not be able to lock that resource between multiple ctest processes.
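For reference, a minimal sketch of the ctest mechanism mentioned above (the test names are hypothetical):

# Tests that lock the same resource name are never run concurrently
# within a single ctest invocation
set_tests_properties(test_db_read test_db_write PROPERTIES RESOURCE_LOCK "database")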
Agree with @MarTin's workaround: set one variable with a PowerShell task in Job1, then get this variable and use it in the job condition for Job2.
You don't need to use the API to add a global variable. There is another, easier way you can try: output variables. With this feature, you can configure an output variable in one job and access it in the next job, which depends on the first.
A sample of setting the output variable in Job1:
##vso[task.setvariable variable=firstVar;isOutput=true]Job2 Need skip
Then get the output variable from Job1 in Job2:
Job2Var: $[dependencies.Job1.outputs['outputVars.firstVar']]
Then, in Job2, you can use it in the job condition:
condition: eq(dependencies.Job1.outputs['outputVars.firstVar'], 'Job2 Need skip')
The complete sample script looks like this:
jobs:
- job: Job1
  steps:
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: 'echo "##vso[task.setvariable variable=firstVar;isOutput=true]Need skip Job2"'
    name: outputVars
- job: Job2
  dependsOn: Job1
  variables:
    # map the output variable from Job1 into this job
    Job2Var: $[ dependencies.Job1.outputs['outputVars.firstVar'] ]
  steps:
  ...
  ...
The logic I want to express is to dynamically assign a value to the output variable based on Job1 and the current pipeline execution state. One specific value represents that Job2 should be locked, meaning its execution is skipped. In Job2's condition expression, when the obtained value dependencies.Job1.outputs['outputVars.firstVar'] matches the predefined expected value, the current Job2 is skipped.
In Azure DevOps pipelines there's an option to conditionally run a task based on a pipeline variable. This is handled under the Run this task > Custom conditions field and it uses the syntax:
eq(variables['VarName'], 'Desired Value')
An agent job has a similar field for conditional execution under Run this job > Custom condition using variable expressions.
However, when I use the same syntax as for a conditional task, the result always evaluates to 'false'.
So how can I conditionally run an agent job?
Something like this worked for me:
- job: Job1
  steps:
  - powershell: |
      if (some condition)
      {
        Write-Host ("##vso[task.setvariable variable=RunJob2;isOutput=true]True")
      }
    name: ScriptStep
- job: Job2
  dependsOn: Job1
  condition: and(succeeded(), eq(dependencies.Job1.outputs['ScriptStep.RunJob2'], 'True'))
I discovered the answer. Unfortunately, it is not possible to conditionally run an agent job with a variable that is modified during build execution.
From the Azure DevOps Pipeline documentation under Pipeline Variables:
To define or modify a variable from a script, use the task.setvariable logging command. Note that the updated variable value is scoped to the job being executed, and does not flow across jobs or stages.
Try this one: https://stefanstranger.github.io/2019/06/26/PassingVariablesfromStagetoStage/
This approach lets you pass variables from one stage/job to another stage/job in the same release pipeline. I tried it and it works fine.
Also, to run this you need to grant some permissions on the release pipeline: to allow the release definition to be updated during the release, you need to grant the Manage releases permission to the Project Collection Build Service.