Job of Jobs - Run Jobs as sub-jobs, as well as independently - spring-batch

I have a ParentJob with the following two JobSteps:
JobStep1
  Job1: Step11 -> Step12
JobStep2
  Job2: Step21 -> Step22
I would have used a single Job with all the steps, but there are scenarios where I have to run Step11 & Step12, or Step21 & Step22, independently. That is why I have wrapped them in Job1 and Job2, and call those jobs from ParentJob.
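For illustration, here is a minimal sketch of how such a parent job can be wired, assuming Spring Batch 4's builder factories and that Job1 and Job2 are already defined as beans (bean and step names are assumptions):

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.launch.support.RunIdIncrementer;
import org.springframework.context.annotation.Bean;

@Bean
public Job parentJob(JobBuilderFactory jobs, StepBuilderFactory steps,
                     JobLauncher jobLauncher, Job job1, Job job2) {
    // Wrap each child job in a JobStep so it stays launchable on its own
    Step jobStep1 = steps.get("jobStep1").job(job1).launcher(jobLauncher).build();
    Step jobStep2 = steps.get("jobStep2").job(job2).launcher(jobLauncher).build();
    return jobs.get("parentJob")
            .incrementer(new RunIdIncrementer())
            .start(jobStep1)
            .next(jobStep2)
            .build();
}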
All of these jobs (ParentJob, Job1 and Job2) have a RunIdIncrementer associated with them. When ParentJob runs, all the jobs run with the same run.id parameter (the RunIdIncrementer of the child jobs, i.e. Job1 and Job2, does not take effect in this scenario, which is expected).
The problem I am facing is that, for example, on the first run all jobs run with run.id 1. If Job2 is then run independently, it runs with run.id 2. Now, when ParentJob is run again, it runs with run.id 2; Job1 runs fine, but Job2 throws JobInstanceAlreadyCompleteException. I understand that the RunIdIncrementer of Job1 and Job2 will not take effect, but is there a way to achieve what I want?
The solution I have in mind is a custom JobParametersIncrementer that generates a unique run.id on every launch, probably the current time in millis. Still, I would prefer RunIdIncrementer, as the run.id it generates looks cleaner.
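A minimal sketch of such an incrementer (the class name is an assumption; it stamps run.id with the current epoch millis):

import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.JobParametersIncrementer;

public class CurrentTimeIncrementer implements JobParametersIncrementer {
    @Override
    public JobParameters getNext(JobParameters parameters) {
        // Epoch millis are unique per launch, so re-running ParentJob or a
        // child job on its own never collides with an earlier JobInstance
        return new JobParametersBuilder(parameters == null ? new JobParameters() : parameters)
                .addLong("run.id", System.currentTimeMillis())
                .toJobParameters();
    }
}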
I would also appreciate any suggestions on a design change, if one is required.

Related

Fail GitHub Actions workflow if one job fails (while keeping other jobs that are called after it)

I have a workflow that consists of 3 jobs - "Start self-hosted EC2 runner", "Run Integration Tests" and "Stop self-hosted EC2 runner". A GitHub Action that I used as a source for my implementation can be found at this link. In practice, when the tests fail and their job shows "red", the workflow still shows "green", and I need to fix that (= make it show "red"). However, it is mandatory that the "Stop self-hosted EC2 runner" job executes even if the "Run Integration Tests" job fails, so I cannot fail the workflow from within the "Run Integration Tests" job.
I suspect I can add yet another job, dependent on all three, which then checks the status of all the jobs and fails the workflow. How can I do that, or otherwise make the workflow "red" while always executing the "Start..." and "Stop..." jobs regardless of test success or failure?
The cause of the workflow not failing is that the 2nd job has to have the continue-on-error: true attribute.
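For context, a sketch of how that attribute sits on the tests job (job and step names here are assumptions):

job2:
  name: Run Integration Tests
  needs: job1
  runs-on: ubuntu-latest
  continue-on-error: true  # keeps the workflow alive so the runner-stop job can run
  steps:
    - run: ./run-integration-tests.sh  # hypothetical test command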
I ended up adding another job at the end like so:
# Last job:
fail-on-job2:
  # Without this job the workflow remains "green" if job2 does fail.
  name: Job2 Status -> Workflow Status
  needs:
    - job1
    - job2
    - job3
  runs-on: ubuntu-latest
  if: always()
  steps:
    - uses: technote-space/workflow-conclusion-action@v2
    - name: Check job status and fail if it is red
      if: env.WORKFLOW_CONCLUSION == 'failure'
      run: exit 1

Post cancellation task in Azure Pipelines

We have a scenario where, once we manually cancel a pipeline in Azure DevOps, we need to terminate a particular process (exe). We wanted to know how a task can be triggered in YAML post-cancellation to achieve this.
Please try the following approaches:
If there are multiple jobs in your pipeline, make sure the job that runs the task to terminate the particular process (exe) is processed after all other jobs have completed processing (Succeeded, Failed or Canceled). You can use the 'dependsOn' key to set the execution order of the jobs. For more details, see "Dependencies".
Then, as @ShaykiAbramczyk mentioned, on the job that terminates the particular process (exe), you can use the 'condition' key to specify the condition under which the job runs. For more details, see "Conditions".
jobs:
. . .
- job: string
  dependsOn: string
  condition: eq(variables['Agent.JobStatus'], 'Canceled')
If there is only one job in your pipeline, make sure the task that terminates the particular process (exe) is the last step in the job; put it at the bottom of the job's steps list.
Then you also need to specify the condition on that task.
If you use the 'condition' key on the step, the step will always be listed and displayed in the job run, regardless of whether it is skipped:
steps:
- step: string
  condition: eq(variables['Agent.JobStatus'], 'Canceled')
If you use an 'if' statement on the step, the step will be listed and displayed in the job run only when the 'if' condition is met and the step runs. If the condition is not met, the step is skipped and hidden in the job run:
steps:
- ${{ if eq(variables['Agent.JobStatus'], 'Canceled') }}:
  - step: string
You can add the step that terminates the process and configure it to run only if the pipeline is canceled - with a custom condition:
condition: eq(variables['Agent.JobStatus'], 'Canceled')
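Putting it together, a minimal single-job sketch (script names are hypothetical):

jobs:
- job: Main
  steps:
  - script: ./run-build.sh    # hypothetical: the regular pipeline work
  - script: ./kill-my-exe.sh  # hypothetical: terminates the particular process (exe)
    condition: eq(variables['Agent.JobStatus'], 'Canceled')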

Create dependencies between jobs in GitHub Actions

I'm new to GitHub Actions, playing with various options to work out good approaches to CI/CD pipelines.
Initially I had all my CI steps under one job, doing the following:
checkout code from repo
lint
scan source for vulnerabilities
build
test
create image
scan image for vulnerabilities
push to AWS ECR
Some of those steps don't need to be done in sequence, though; e.g. we could run linting and source-code vulnerability scanning in parallel with the build, saving time (if we assume that those steps are going to pass).
i.e. essentially I'd like my pipeline to do something like this:
job1 = {
  - checkout code from repo  # required per job, since each job runs on a different runner
  - lint
}
job2 = {
  - checkout code from repo
  - scan source for vulnerabilities
}
job3 = {
  - checkout code from repo
  - build
  - test
  - create image
  - scan image for vulnerabilities
  - await job1 & job2
  - push to AWS ECR
}
I have a couple of questions:
Is it possible to set up some await jobN rule within a job; i.e. to view the status of one job from another?
(only relevant if the answer to 1 is Yes): Is there any way to have the failure of one job immediately impact other jobs in the same workflow? i.e. if my linting job detects issues, then I can immediately call this a fail, so I would want the failure in job1 to immediately stop jobs 2 and 3 from consuming additional time, since they're no longer adding value.
Ideally, some of your jobs should be encapsulated in their own workflows, for example:
Workflow for testing the source by whatever means.
Workflow for (building and-) deploying.
and then, have these workflows depend on each other, or be triggered using different triggers.
Unfortunately, at least for the time being, workflow dependency is not an existing feature (reference).
Edit: Dependencies between workflows are now also possible, as discussed in this StackOverflow question.
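For example, a deploy workflow can be chained to run after a test workflow completes, using the workflow_run trigger (the workflow names here are assumptions):

name: Deploy
on:
  workflow_run:
    workflows: ["Test"]  # must match the name of the upstream workflow
    types: [completed]
jobs:
  deploy:
    # only deploy if the upstream workflow succeeded
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploying"

Note that workflow_run only fires for workflows on the default branch.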
Although I feel that including all of your mentioned jobs in a single workflow would create a long and hard-to-maintain file, I believe you can still achieve your goal by using some of the conditionals provided by the GitHub Actions syntax.
Possible options:
jobs.<job_id>.if
jobs.<job_id>.needs
Using the latter, a sample syntax may look like this:
jobs:
  job1:
  job2:
    needs: job1
  job3:
    needs: [job1, job2]
And here is a workflow ready to be used for testing of the above approach. In this example, job 2 will run only after job 1 completes, and job 3 will not run, since it depends on a job that failed.
name: Experiment
on: [push]
jobs:
  job1:
    name: Job 1
    runs-on: ubuntu-latest
    steps:
      - name: Sleep and Run
        run: |
          echo "Sleeping for 10"
          sleep 10
  job2:
    name: Job 2
    needs: job1
    runs-on: ubuntu-latest
    steps:
      - name: Dependant is Running
        run: |
          echo "Completed job 2, but triggering failure"
          exit 1
  job3:
    name: Job 3
    needs: job2
    runs-on: ubuntu-latest
    steps:
      - name: Will never run
        run: |
          echo "If you can read this, the experiment failed"
Relevant docs:
Workflow syntax for GitHub Actions
Context and expression syntax for GitHub Actions

Context Default Value is Null in Child Job

I have 2 Jobs in Talend - Job1 and Job2 - and global context variables with default values. Both jobs use the same context variables, but when I run Job2 from Job1, the global context variables have NULL values instead of their defaults.
This only happens when I run Job2 from Job1; if I run Job2 separately, it runs correctly.
Can anyone point out what is wrong in the flow?
Thanks in advance
In order for context values to be transmitted from parent to child job, you must activate the option "Transmit whole context" on the tRunJob component which runs the child job.

Lock resources between multiple builds that run parallel in Azure DevOps

Suppose I have a build:
Job1:
  Task1: Build
Job2:
  dependsOn: Job1
  Task2: Test
And the test task uses some kind of database, or another unique resource.
I would like to know if it is possible, when multiple builds are running in parallel, to lock Job2 so that it runs exclusively, without other builds trying to access the same resource.
I am using cmake and ctest, so I know I can do something similar between separate unit tests with RESOURCE_LOCK, but I am certain that I will not be able to lock that resource between multiple ctest processes.
Agree with @MarTin's workaround: set a variable with a PowerShell task in Job1, then get this variable and use it in the job condition for Job2.
You don't need to use the API to add a global variable. There is another, easier way you can try: output variables. This feature lets you configure an output variable and access it in the next job, which depends on the first job.
A sample of setting the output variable in Job1:
##vso[task.setvariable variable=firstVar;isOutput=true]Need skip Job2
Then get the output variable from Job1 in Job2:
Job2Var: $[dependencies.Job1.outputs['outputVars.firstVar']]
Then, in Job2, you can use it in the job condition, so that Job2 is skipped when the marker value is set:
condition: ne(dependencies.Job1.outputs['outputVars.firstVar'], 'Need skip Job2')
The complete sample script looks like this:
jobs:
- job: Job1
  steps:
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: 'echo "##vso[task.setvariable variable=firstVar;isOutput=true]Need skip Job2"'
    name: outputVars
- job: Job2
  dependsOn: Job1
  condition: ne(dependencies.Job1.outputs['outputVars.firstVar'], 'Need skip Job2')
  variables:
    Job2Var: $[ dependencies.Job1.outputs['outputVars.firstVar'] ]
  steps:
  ...
  ...
The logic I want to express is to dynamically assign a value to the output variable based on Job1 and the current pipeline execution state. One specific value represents that Job2 should be locked, i.e. its execution skipped. In Job2's condition expression, when the obtained value $[ dependencies.Job1.outputs['outputVars.firstVar'] ] matches the predefined expected value, the current Job2 is skipped.