Execute batch job (JSR 352) on startup - glassfish-4

I have a batch job and need to run it when the application starts up. The application makes the call to start the job, but execution never reaches the job's method.
BatchRuntime.getJobOperator().start(JOB_NAME, new Properties());
It throws no errors. So it seems the runtime is looking for the resource that indicates which class implements this job, but that resource has not been loaded yet. Any ideas?

The start() method is asynchronous, so the caller isn't always going to see exceptions on failure.
Is the XML corresponding to JOB_NAME found? Any errors in the logs?
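For reference, a minimal sketch of the layout GlassFish 4 expects (the job name, batchlet name, and bean are illustrative): the job XML has to be on the module's classpath under META-INF/batch-jobs/<JOB_NAME>.xml (WEB-INF/classes/META-INF/batch-jobs/ in a WAR), and kicking the job off from a @Singleton @Startup bean makes sure the application is fully deployed before start() is called.

// META-INF/batch-jobs/myJob.xml -- the file name (minus .xml) is the JOB_NAME
// <job id="myJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
//     <step id="step1">
//         <batchlet ref="myBatchlet"/>  <!-- CDI bean name or fully qualified class -->
//     </step>
// </job>

import java.util.Properties;
import javax.annotation.PostConstruct;
import javax.batch.runtime.BatchRuntime;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class BatchStarter {

    @PostConstruct
    public void startJob() {
        // start() returns immediately; track progress via the returned execution id.
        long executionId = BatchRuntime.getJobOperator().start("myJob", new Properties());
    }
}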

system calls and the internal flow

Trying to understand the exact sequence of events in the system-call flow on Linux:
1. A user application makes a system call.
2. This results in execution of code that triggers a software exception (which code is this? It is understandable if I called a glibc API, but what if my program calls the system call directly? Is there some piece of code that every system call goes through which triggers this exception? See the sketch after this list.)
3. Once the exception is triggered, the user process's context needs to be preserved, and I presume the user application is put in a pending state (assuming a blocking system call was invoked).
4. The exception handler then runs and picks a suitable handler based on the system call invoked (how is the system call number passed from the exception-triggering code to the exception handler?).
5. The handler runs (say, for a read call) and has to return the read data to the user process that triggered its invocation. I assume this means the exception-triggering code has also passed a pointer to the buffer into which the read data needs to be copied.
6. Once the exception handler is done, control has to return to the application, which is still pending on the system call.
Does this flow look alright? Is there some sample code for each of these steps beyond the user application?
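For concreteness, a minimal sketch (x86-64 Linux assumed; the helper name raw_write is made up) of what calling a system call directly looks like. The system-call number goes in rax and the arguments in rdi/rsi/rdx; the syscall instruction is what transfers control to the kernel's entry code, which reads the number out of rax to pick the handler:

#include <unistd.h>        /* declares the libc syscall() wrapper */
#include <sys/syscall.h>   /* SYS_write */

/* Invoke write(2) without glibc's wrapper (x86-64 only). */
static long raw_write(int fd, const void *buf, unsigned long count)
{
    long ret;
    __asm__ volatile (
        "syscall"                          /* switch into the kernel            */
        : "=a" (ret)                       /* return value comes back in rax    */
        : "a" (SYS_write),                 /* system call number goes in rax    */
          "D" (fd), "S" (buf), "d" (count) /* arguments in rdi, rsi, rdx        */
        : "rcx", "r11", "memory");         /* clobbered per the syscall ABI     */
    return ret;
}

int main(void)
{
    const char msg[] = "hello from a raw syscall\n";
    raw_write(1, msg, sizeof msg - 1);

    /* The portable equivalent: glibc's generic syscall() wrapper. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}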

running Karma in a loop and programmatic access

This is more a question about the architecture of a program that runs Karma in a CI pipeline.
I have a set of web components. They use Karma to run tests (following the open-wc.org recommendations). Then I have a custom CI pipeline that allows scheduling a test of a selected group of components.
When the test is scheduled, it executes the tests for each component one by one. However, in my logs I am getting messages like
MaxListenersExceededWarning: Possible EventEmitter memory leak
detected. 12 exit listeners added to [process]. Use
emitter.setMaxListeners() to increase limit
or sometimes
listen EADDRINUSE: address already in use 127.0.0.1:9877
which breaks the test (exits the process).
I can't really pinpoint the problem, so I am guessing that I am not running the tests in the correct way.
On the server I am using the Server class to initialize Karma, then calling start() on it. When the callback function passed to the Server constructor is invoked, I assume the server has stopped and I can start over with another component. But clearly that is not the case, judging by the errors I am getting.
So the question is: what is the right way to run Karma tests in a loop, one by one, using the Node API instead of the CLI?
Update
To be specific about how I am running the tests (a sketch follows the list), I am:
1. Creating the configuration by calling config.parseConfig, where the argument is the component's Karma config file
2. Calling new Server(opts, (code) => {}), where opts is the configuration generated in step 1
3. Adding listeners for browser_complete and browser_error to generate a report and store it in the data store
4. Cleaning up (removing the reference to the server) when the constructor callback is called
5. Getting the next component from the queue and going back to step 1
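Roughly, in code (a sketch of the steps above; the singleRun flag and the result handling are illustrative, and this uses the pre-Karma-6 synchronous form of parseConfig):

const { config, Server } = require('karma');

function runComponent(karmaConfigPath) {
  return new Promise((resolve) => {
    // 1. build the config from the component's karma.conf.js
    const opts = config.parseConfig(karmaConfigPath, { singleRun: true });

    // 2. create the server; its callback fires when this run finishes
    const server = new Server(opts, (exitCode) => resolve(exitCode));

    // 3. collect results for the report / data store
    server.on('browser_complete', (browser) => { /* store browser.lastResult */ });
    server.on('browser_error', (browser, error) => { /* store error */ });

    server.start();
  });
}

async function runAll(queue) {
  // 4. + 5. wait for one component to finish, then take the next one
  for (const configPath of queue) {
    await runComponent(configPath);
  }
}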
To answer my own question:
I have moved the whole logic of executing a single test into a child process, and after the test finishes, but before the next test is run, I make sure the child process is gone. No more error messages are showing up.
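A sketch of that approach (file names are illustrative): the parent forks one child per component and waits for it to exit, so every run gets a fresh process and its exit listeners and ports are released with it.

// parent.js
const { fork } = require('child_process');

function runInChild(karmaConfigPath) {
  return new Promise((resolve, reject) => {
    const child = fork('./run-one.js', [karmaConfigPath]);
    child.on('exit', (code) => (code === 0 ? resolve() : reject(new Error(`karma exited with ${code}`))));
  });
}

async function runAll(queue) {
  for (const configPath of queue) {
    await runInChild(configPath);   // the next run starts only after the child is gone
  }
}

// run-one.js -- runs a single component and exits with Karma's exit code
const { config, Server } = require('karma');
const opts = config.parseConfig(process.argv[2], { singleRun: true });
new Server(opts, (exitCode) => process.exit(exitCode)).start();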

unable to stop test with runner.StopRun method

I am trying to run NUnit tests programmatically, which I have done successfully, but I also want to stop the test run whenever I need to, and runner.StopRun is not stopping it. Can someone please help with this?
The following code snippet runs the tests:
IDictionary<String, Object> options = new Dictionary<String, Object>();
options.Add(FrameworkPackageSettings.DefaultTestNamePattern, testName);
// Build the test assembly and hand it to the framework's runner.
DefaultTestAssemblyBuilder builder = new NUnit.Framework.Api.DefaultTestAssemblyBuilder();
ITest test = builder.Build(asmName, options);
runner = new NUnit.Framework.Api.NUnitTestAssemblyRunner(builder);
runner.Load(asmName, options);
runner.Run(null, TestFilter.Empty);   // synchronous: blocks until the run finishes
And with the following code, from a different thread, I am trying to stop the test:
if (runner.IsTestRunning)
runner.StopRun(true);
Thanks in advance.
Not sure what you mean by a different thread. Different from the test thread? Different from the thread that called Run?
In any case, the Run method you are calling is synchronous, so the test will keep that thread busy till it returns.
Since you are using NUnit internal methods (i.e. not designed for public consumption even though they are visible), you should probably be using a debug build of NUnit so that you can step into the call to StopRun.
Essentially, there are two possibilities:
1. The setup you have created for running tests is wrong.
2. One of the tests is refusing to be terminated.
I'd rule out the second item first, by using a very simple test.
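A minimal sketch using the same framework API (the wrapper class and the timeout are illustrative): RunAsync returns immediately instead of blocking, so the thread that started the run stays free and StopRun can be issued from any other thread.

using System.Collections.Generic;
using NUnit.Framework.Api;
using NUnit.Framework.Internal;

class ProgrammaticRunner
{
    private NUnitTestAssemblyRunner runner;

    public void Start(string asmName, IDictionary<string, object> options)
    {
        runner = new NUnitTestAssemblyRunner(new DefaultTestAssemblyBuilder());
        runner.Load(asmName, options);
        runner.RunAsync(TestListener.NULL, TestFilter.Empty);   // does not block the caller
    }

    public void Stop()
    {
        if (runner != null && runner.IsTestRunning)
        {
            runner.StopRun(true);            // true = forced stop
            runner.WaitForCompletion(10000); // give the run up to 10 s to unwind
        }
    }
}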

SparkJobServer - is validate() always called before runJob()

According to the SparkJobServer documentation:
validate allows for an initial validation of the context and any
provided configuration. If the context and configuration are OK to run the job, returning spark.jobserver.SparkJobValid will let the job execute, otherwise
returning spark.jobserver.SparkJobInvalid(reason) prevents the job from running and provides means to convey the reason of failure. In this case, the call immediately returns an HTTP/1.1 400 Bad Request status code.
validate helps you prevent running jobs that will eventually fail due to missing or wrong configuration, and saves both time and resources.
Can I therefore assume that validate() would always be called before runJob()?
If I load and verify the job configuration in validate(), can my runJob() assume it was loaded correctly and is available where validate() left it?
Yes, your assumption is correct. See https://github.com/spark-jobserver/spark-jobserver/blob/master/job-server/src/spark.jobserver/JobManagerActor.scala#L268
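So a pattern like the following works (a minimal sketch using the classic SparkJob API; the config key input.path and the job itself are made up): validate() parses and stashes what it needs, and runJob() can rely on it because a rejected job never reaches runJob().

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobInvalid, SparkJobValid, SparkJobValidation}
import scala.util.Try

object LineCountJob extends SparkJob {
  // Stash of the value verified in validate() so runJob() can reuse it.
  @volatile private var inputPath: Option[String] = None

  override def validate(sc: SparkContext, config: Config): SparkJobValidation =
    Try(config.getString("input.path")).toOption match {
      case Some(path) =>
        inputPath = Some(path)
        SparkJobValid
      case None =>
        SparkJobInvalid("missing required config key input.path")
    }

  override def runJob(sc: SparkContext, config: Config): Any = {
    // Safe: validate() has already run and would have rejected the job otherwise.
    sc.textFile(inputPath.get).count()
  }
}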

Suspending the workflow instance in the Fault Handler

I want to implement a solution in my workflows that will do the following:
At the workflow level I want to implement a Fault Handler that will suspend the workflow on any exception.
Then, at some point, the instance will receive a Resume() command.
What I want is that when the Resume() command is received, the instance executes again the activity that failed earlier (and caused the exception) and then continues with whatever it still has to do.
My problem:
When suspended and then resumed inside the Fault Handler, the instance just completes. The resume of course doesn't make the instance return to its normal execution, since in the Fault Handler there is nothing after the Suspend activity, so obviously the execution of the workflow ends there.
I DO want to implement the Fault Handler at the workflow level and not wrap each activity in the workflow in a While+Sequence activity (as described here: Error Handling In Workflows), since with my pretty heavy workflows this would look like hell.
It should be a kind of generic handling.
Do you have any ideas?
Thanks.
If you're working on State Machine Workflows, my technique for dealing with errors that require human intervention to fix is creating an additional 'stateactivity' node that indicates an 'error' state, something like STATE_FAULTED. Then every state has a faulthandler that catches any exception, logs the exception and changes state to STATE_FAULTED, passing information like current activity, type of exception raised and any other context info you might need.
In the STATE_FAULTED initialization you can listen for an external command (your Resume() command, or whatever suits your needs), and when everything is OK you can just switch to the previous state and resume execution.
I am afraid that isn't going to work. Error handling in a workflow is similar to a Try/Catch block, and the only way to retry is to wrap everything in a loop and just execute the loop again if something went wrong.
Depending on the sort of error you are trying to deal with you might be able to achieve your goal by creating custom activities that wrap their own execution logic in a Try/Catch and contain the required retry logic.
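A minimal sketch of that second idea (WF 3.x assumed; RetryCount, DoWork and the class name are made up): the activity keeps the try/catch and the retry loop inside its own Execute, so a transient failure is retried in place instead of bubbling up to the workflow's Fault Handler.

using System;
using System.Workflow.ComponentModel;

public class RetryingActivity : Activity
{
    public RetryingActivity()
    {
        RetryCount = 3;
    }

    public int RetryCount { get; set; }

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                DoWork();   // the real work of the derived activity
                return ActivityExecutionStatus.Closed;
            }
            catch (Exception)
            {
                if (attempt >= RetryCount)
                    throw;  // give up: let the workflow's Fault Handler take over
                // otherwise swallow the exception and retry (optionally log or delay here)
            }
        }
    }

    protected virtual void DoWork()
    {
        // overridden in derived activities with the logic that may fail
    }
}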