How to handle when Quartz.NET fails to resolve dependencies for a job - quartz-scheduler

We're using Quartz.NET with Quartz.Extensions.DependencyInjection to resolve our Jobs with dependencies. The problem is, if a dependency cannot be resolved, the job silently doesn't trigger. There is no exception or log entry that I can find indicating that a dependency could not be resolved. Is there a way to configure Quartz to throw an exception or log something when this kind of failure occurs? Or perhaps validate jobs during startup?
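For reference, the kind of registration described above looks roughly like this. It is only a minimal sketch: MyJob, IMyDependency and the schedule are illustrative names, and the hosted-service call assumes the Quartz.Extensions.Hosting package is also referenced.
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Quartz;

public interface IMyDependency { }

// A job whose dependency must be resolved from the container each time it triggers.
public class MyJob : IJob
{
    private readonly IMyDependency _dependency;
    public MyJob(IMyDependency dependency) => _dependency = dependency;
    public Task Execute(IJobExecutionContext context) => Task.CompletedTask;
}

public static class QuartzSetup
{
    public static void Configure(IServiceCollection services)
    {
        // Forgetting this registration is the failure mode described above:
        // services.AddScoped<IMyDependency, MyDependency>();

        services.AddQuartz(q =>
        {
            var jobKey = new JobKey("my-job");
            q.AddJob<MyJob>(opts => opts.WithIdentity(jobKey));
            q.AddTrigger(t => t
                .ForJob(jobKey)
                .WithIdentity("my-job-trigger")
                .WithSimpleSchedule(s => s.WithIntervalInMinutes(5).RepeatForever()));
        });

        // Requires the Quartz.Extensions.Hosting package.
        services.AddQuartzHostedService(opts => opts.WaitForJobsToComplete = true);
    }
}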

Related

Specflow - Raise a build error when ambiguous steps are found

We are using SpecFlow 3 with NUnit, and we run our automation tests in CI/CD pipelines.
When someone checks in code that results in an ambiguous step definition, we are not able to catch it at design time. Is there any way to catch these during the build and fail the build in the pipeline?
Sadly, this is not possible. The evaluation of the bindings is completely done at runtime and not at compile time.
You can only configure how missing or pending steps are reported: as an error or as an inconclusive warning.
The name of the property is missingOrPendingStepsOutcome - https://docs.specflow.org/projects/specflow/en/latest/Configuration/Configuration.html#runtime
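For example, a specflow.json along these lines should report missing or pending steps as errors (a sketch; verify the exact section and value names against the configuration page linked above):
{
  "runtime": {
    "missingOrPendingStepsOutcome": "Error"
  }
}
With "Error", such steps fail the test run, and therefore the pipeline, instead of coming out as an inconclusive warning.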

How to run a Scio pipeline on Dataflow from SBT (local)

I am trying to run my first Scio pipeline on Dataflow.
The code in question can be found here. However, I do not think that is too important.
My first experiment was to read some local CSV files and write another local CSV file, using the DirectRunner. That worked as expected.
Now, I am trying to read the files from GCS, write the output to BigQuery and run the pipeline using the DataflowRunner. I already made all the necessary changes (or so I believe), but I am unable to make it run.
I have already run gcloud auth application-default login, and when I do
sbt run --runner=DataflowRunner --project=project-id --input-path=gs://path/to/data --output-table=dataset.table
I can see the job is submitted in Dataflow. However, after one hour the job fails with the following error message.
Workflow failed. Causes: The Dataflow job appears to be stuck because no worker activity has been seen in the last 1h.
(Note: the job did nothing in all that time, and since this is an experiment the data is simply too small to take more than a couple of minutes.)
Checking Stackdriver, I can find the following error:
java.lang.ClassNotFoundException: scala.collection.Seq
It seems to be related to some Jackson thing:
java.util.ServiceConfigurationError: com.fasterxml.jackson.databind.Module: Provider com.fasterxml.jackson.module.scala.DefaultScalaModule could not be instantiated
And that is what is killing each executor right at the start. I really do not understand why it cannot find the Scala standard library.
I also tried to first create a template and run it later with:
sbt run --runner=DataflowRunner --project=project-id --input-path=gs://path/to/data --output-table=dataset.table --stagingLocation=gs://path/to/staging --templateLocation=gs://path/to/templates/template-1
But, after running the template, I get the same error.
Also, I noticed that in the staging folder there are a lot of jars, but scala-library.jar is not among them.
Am I missing something obvious?
It's a known issue with sbt 1.3.0, which introduced some breaking changes with respect to class loaders. Try 1.2.8?
Also, the Jackson issue is probably related to Java 11 or above. Stay with Java 8 for now.
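If you want to try the downgrade, it is just a matter of pinning the version in project/build.properties:
sbt.version=1.2.8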
Fix by setting the sbt classLoaderLayeringStrategy:
run / classLoaderLayeringStrategy := ClassLoaderLayeringStrategy.Flat
sbt uses a new classloader for the application that is run with run. This causes other classes already loaded by the JVM (Predef for instance) to be reused, reducing startup time. See in-process classloaders for details.
This doesn't play well with the Beam DataflowRunner because it explicitly does not stage classes from parent classloaders, see PipelineResources.java#L51:
Attempts to detect all the resources the class loader has access to. This does not recurse to class loader parents stopping it from pulling in resources from the system class loader.
So the fix is to force all classes used by your application to be loaded in the same classloader so that DataflowRunner stages everything.
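Concretely, that is the one-liner above placed in build.sbt, for example:
// build.sbt
// Flat disables sbt's layered classloaders for the run task, so everything,
// including scala-library, is loaded by a single classloader and gets staged.
run / classLoaderLayeringStrategy := ClassLoaderLayeringStrategy.Flat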
Hope that helps

Deploying WebJobs through a CI/CD pipeline

I have a continuous WebJob and am deploying it through a CI/CD pipeline. After a successful release, the WebJob shows as "Restart pending", and I am getting an error in the logs:
D:\local\Temp\jobs\continuous\MiddleCompassServer\rrwnz5aj.4el>dotnet
MiddleCompassServer.exe
A fatal error was encountered. The library 'hostpolicy.dll' required
to execute the application was not found in
'D:\local\Temp\jobs\continuous\MiddleCompassServer\rrwnz5aj.4el\'.
Failed to run as a self-contained app. If this should be a
framework-dependent app, add the
D:\local\Temp\jobs\continuous\MiddleCompassServer\rrwnz5aj.4el\MiddleCompassServer.runtimeconfig.json
file specifying the appropriate framework.
There is a chance that the .NET Core version specified in the .json file does not match the version installed. You can try the solutions given in this thread mentioned by TrevorBrooks.
If the above thread did not work out for you, try running the .exe directly from the console (./MiddleCompassServer.exe) without using dotnet, as pointed out in this thread.
You can also try adding the runtimes setting in project.json and modifying the RuntimeIdentifiers in the .csproj file, as this thread pointed out.
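For reference, a minimal framework-dependent runtimeconfig.json looks something like this (the framework moniker and version are placeholders; they must match what the project targets and what is installed on the App Service):
{
  "runtimeOptions": {
    "tfm": "netcoreapp3.1",
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "3.1.0"
    }
  }
}
dotnet build/publish normally generates this file next to the .exe, so if it is missing from the deployed folder it most likely was not copied into the artifact that the pipeline deploys.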

In UrbanCode Deploy, how do I cause an application process to fail if not all component versions were specified?

Currently, when I run an application process that installs various components, if I don't specify a version for any of them, the deploy component process doesn't run, and it says "No Version Selected". However, the step doesn't fail, and the process continues. Is there a way to configure the process to fail if not all components have a version? Or is there a way for me to interrogate the manifest for the process in a step at the top to figure it out myself and fail accordingly? I currently can find no way to do either of these things. The version of UCD I am using is 6.1.1.3.
If your component process is configured with the process type "Operational (With Version)", then the job will fail if you don't select a version.

Talend: Task Execution Fails when re-built sub-job is deployed

We have a perfectly working Talend Workflow which has 4 sub-jobs. One of the jobs needed a change, so we modified it and re-built the job within Talend Open Studio. Copied the jar to our production machine. However, when the Task executed, it failed with a "No Class Def Found" error message.
So, is this not how it's supposed to be done? Do we have to re-build and re-deploy the main task and all the sub-jobs even for a minor change in a sub-job? Any ideas?
TIA,
Bee
You need to rebuild and deploy the main job.
I don't know exactly why; perhaps you increased the version of your subjob?