My job no longer starts when I use spring.batch.job.names - spring-batch

I have a number of jobs configured to run on different occasions. Some jobs are started from a REST endpoint, and the main one should start immediately. Here is the configuration:
spring:
  config:
    activate:
      on-profile: dev
  batch:
    job:
      enabled: true
      names: startMigrationJob
    jdbc:
      initialize-schema: always
  datasource:
    schema: schema-postgresql.sql
    auto-commit: true
The job does not run anymore, and there is no error, just the bootRun prompt. The job used to run, but now, with this configuration, it no longer does. Can anyone explain why this is the case?

The batch job does run when triggered by a scheduled cron expression. I created a cron trigger that fires every 10 minutes to make sure it ran, and it did. It seems the behaviour is different when you do not specify a job to run.
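For anyone hitting the same thing: spring.batch.job.names restricts which jobs Spring Boot launches automatically at startup, and each entry must match a Job bean's name exactly, otherwise that job is simply skipped. A minimal sketch, assuming the bean launched at startup really is named startMigrationJob:
spring:
  batch:
    job:
      enabled: true
      # Comma-separated list of job names to execute on startup.
      # Each entry must match a Job bean name exactly; a non-matching
      # entry is skipped silently, which looks like "nothing happens".
      names: startMigrationJob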

Related

JFrog Pipelines - Is it possible to disable concurrent runs?

I have two runs, Run 1 and Run 2. I want Run 2 to wait until Run 1 is finished. How can I achieve this in JFrog Pipelines?
You can set the flag chronological: true in your pipeline definition, which makes new runs of the pipeline wait until the existing active runs are complete.
Please refer to this documentation for more help: https://www.jfrog.com/confluence/display/JFROG/Defining+a+Pipeline
When chronological is set to true, no run of the pipeline will start while another run of the same pipeline is in progress. The default is false, which allows runs to execute in parallel if nodes are available.
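A minimal sketch of where the flag goes, assuming it sits in the pipeline-level configuration block as described in the linked docs (the pipeline and step names here are placeholders):
pipelines:
  - name: demo_pipeline            # placeholder pipeline name
    configuration:
      chronological: true          # queue new runs until the active run finishes
    steps:
      - name: do_work              # placeholder step
        type: Bash
        execution:
          onExecute:
            - echo "only one run of this pipeline executes at a time"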

How to kill a process on Azure DevOps (ideally with a timeout)

I have a complex Azure DevOps build script in YAML. Is there some way to kill the process if a given step takes too much time (or to execute a task that kills certain processes)?
This would be useful in our case, where we have large test suites in several DLLs. I often see some tests fail, after which DevOps hangs. I would like to kill the test runner and other processes that may be hanging, with (and also without) a timeout.
Is this possible on Azure DevOps?
You can specify timeoutInMinutes and cancelTimeoutInMinutes for the job:
jobs:
- job: Test
  timeoutInMinutes: 10 # how long to run the job before automatically cancelling
  cancelTimeoutInMinutes: 2 # how much time to give 'run always even if cancelled tasks' before stopping them
More information: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml#timeouts
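If you want the cutoff per step rather than per job, individual tasks accept timeoutInMinutes as well. A minimal sketch; the VSTest task and its input here are just an illustration:
jobs:
- job: Test
  timeoutInMinutes: 60            # overall budget for the whole job
  steps:
  - task: VSTest@2                # any task supports timeoutInMinutes
    timeoutInMinutes: 10          # this step alone is cancelled after 10 minutes
    inputs:
      testAssemblyVer2: '**/*Tests.dll'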

How to launch scheduled Spark jobs on Rundeck even if previous jobs are still executing?

Why is Rundeck not launching scheduled Spark jobs while the previous job is still executing?
Rundeck skips the jobs that are scheduled to launch while the previous job is executing, and only launches a new job on the next scheduled run after that execution completes.
But I want to launch a scheduled job even if the previous job is still executing.
Check your workflow strategy; here is an explanation of how it works:
https://www.rundeck.com/blog/howto-controlling-how-and-where-rundeck-jobs-execute
You can design a workflow based on the "Parallel" strategy to launch the jobs simultaneously on your node.
Example: a Parent Job using the parallel strategy, with Job one and Job two as referenced sub-jobs, as sketched below.
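A minimal sketch of that Parent Job in Rundeck's YAML job-definition format (the job names are placeholders, and the referenced jobs are assumed to already exist):
- name: Parent Job
  description: Launches Job one and Job two at the same time
  sequence:
    strategy: parallel        # run all workflow steps simultaneously
    keepgoing: true
    commands:
      - jobref:
          name: Job one       # placeholder job reference
      - jobref:
          name: Job two       # placeholder job reference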

Scheduled job fails when run via trigger, but works when run manually

I have a scheduled job to run the following command:
copy 'C:\Users\tdjeilati\Desktop\RDP.rdg' '\\fil03\Dept_1Z\Public\my'
This copies a file to a remote server.
I have a job trigger -AtLogon.
When I log in to my PC, it runs the job.
When I retrieve that job with Receive-Job, I see that the job got an "access is denied" error.
But when I run the job by hand, it works correctly. What gives?
I don't understand why it fails when running from the job trigger but works when I run it manually in PowerShell. I can only assume that the environment/permissions are different.
EDIT: One thing I noticed is that the job run from the job trigger doesn't have any child jobs, but the job I start from the command line does. Why should there be a difference?
The scheduled task may not be running under your user account. This could explain why it works when you manually start the job.
Verify that the task is running as a user with rights to the file and the remote share.
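A minimal sketch of one way to check and fix this, assuming the job was registered with PowerShell's scheduled-job cmdlets (the job name CopyRdg is a placeholder):
# Inspect the registered job definition, including any stored credential
Get-ScheduledJob -Name CopyRdg | Format-List *

# Re-register the job with explicit credentials that have rights
# to both the local file and the remote share
Unregister-ScheduledJob -Name CopyRdg
Register-ScheduledJob -Name CopyRdg `
    -ScriptBlock { Copy-Item 'C:\Users\tdjeilati\Desktop\RDP.rdg' '\\fil03\Dept_1Z\Public\my' } `
    -Trigger (New-JobTrigger -AtLogOn) `
    -Credential (Get-Credential)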

Does Rundeck support job dependencies?

I've been searching for days for how to lay out a Rundeck workflow with job dependencies. What I need is three jobs: job-1 and job-2 are scheduled to run in parallel, while job-3 is triggered only after the completion of both job-1 and job-2, assuming that job-1 and job-2 have different execution times.
I tried using Job State Conditionals for that, but it seems that when the condition is not met the workflow can only halt or fail. My idea is to pause execution until all the parent jobs complete and then resume the workflow.
You can achieve this by creating a master job that includes two steps, as sketched below:
Step 1: a sub-job that includes both job-1 and job-2 (they run in parallel if node-oriented execution is selected).
Step 2: job-3.
But not all three in the same flow.
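A minimal sketch of that master job in Rundeck's YAML job-definition format (job names are placeholders; the sub-job is assumed to wrap job-1 and job-2 with a parallel strategy, as in the earlier example):
- name: Master Job
  sequence:
    strategy: sequential      # step 2 starts only after step 1 completes
    commands:
      - jobref:
          name: Sub Job       # placeholder: runs job-1 and job-2 in parallel
      - jobref:
          name: job-3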
Right now you can use the Job State Conditional feature for that: https://docs.rundeck.com/2.9.4/plugins-user-guide/bundled-plugins.html#job-state-plugin
Rundeck cannot do this for you automatically. You can schedule job-3 to run after the maximum expected end time of job-1 and job-2, and enable "retry" for job-3 in case the dependencies fail.