Why is triggerJob disabled in Quartz's JMX?

I have successfully configured our app to export Quartz's MBeans into JMX and can view everything in JConsole. I can run the majority of the scheduler operations.
The one I really want to run is 'triggerJob', but that is showing up in JConsole as greyed-out/disabled so I can't run it.
I've scanned the commits that added the JMX code to Quartz but can't see any differences between triggerJob and the other operations that are enabled.
Anyone have a clue what's going on?
EDIT - explanation found
A different Stack Overflow question describes what's going on: Why are some methods on the JConsole disabled
triggerJob (and two other operations) take non-primitive parameters; JConsole cannot supply such complex arguments, so those operations are greyed out.
I am not clear whether the MBean provider could supply a custom editor for JConsole (or similar), but at least I have my answer.

Thank you for your explanation. I have successfully triggered a job remotely through JMX using the following Groovy code:
def callParams = new Object[3]
callParams[0] = 'com.test.project.TestJob'
callParams[1] = 'DEFAULT_JOB_GROUP'
callParams[2] = new HashMap()
def callSignature = new String[3]
callSignature[0] = 'java.lang.String'
callSignature[1] = 'java.lang.String'
callSignature[2] = 'java.util.Map'
// server is an instance of MBeanServerConnection; the scheduler's ObjectName
// depends on your scheduler name and instance id
def schedulerName = new javax.management.ObjectName('quartz:type=QuartzScheduler,name=MyScheduler,instance=NON_CLUSTERED')
server.invoke(schedulerName, 'triggerJob', callParams, callSignature)
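For reference, the server connection above can be obtained through the standard JMX remote API. A minimal Java sketch, assuming remote JMX is enabled; the host, port, and scheduler ObjectName are placeholders for your own setup:

import java.util.HashMap;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TriggerJobClient {
    public static void main(String[] args) throws Exception {
        // Standard RMI-based JMX URL; host and port are assumptions for illustration
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection server = connector.getMBeanServerConnection();
            // Hypothetical scheduler ObjectName; adjust name/instance to your scheduler
            ObjectName scheduler = new ObjectName("quartz:type=QuartzScheduler,name=MyScheduler,instance=NON_CLUSTERED");
            Object[] params = { "com.test.project.TestJob", "DEFAULT_JOB_GROUP", new HashMap<String, String>() };
            String[] signature = { "java.lang.String", "java.lang.String", "java.util.Map" };
            server.invoke(scheduler, "triggerJob", params, signature);
        }
    }
}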

Related

Authenticate with ECE ElasticSearch Sink from Apache Flink (Scala code)

Compiler error when using example provided in Flink documentation. The Flink documentation provides sample Scala code to set the REST client factory parameters when talking to Elasticsearch, https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/elasticsearch.html.
When trying out this code I get a compiler error in IntelliJ which says "Cannot resolve symbol restClientBuilder".
I found the following SO question which is EXACTLY my problem, except that it is in Java and I am doing this in Scala:
Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4)
I tried copy-pasting the solution code from the above SO question into IntelliJ; the auto-converted code also has compiler errors.
// provide a RestClientFactory for custom configuration on the internally created REST client
// I only show setMaxRetryTimeoutMillis for illustration purposes; the actual code will use a custom HTTP callback
esSinkBuilder.setRestClientFactory(
  restClientBuilder -> {
    restClientBuilder.setMaxRetryTimeoutMillis(10)
  }
)
Then I tried the Java-to-Scala code auto-generated by IntelliJ:
import org.apache.http.auth.AuthScope
import org.apache.http.auth.UsernamePasswordCredentials
import org.apache.http.client.CredentialsProvider
import org.apache.http.impl.client.BasicCredentialsProvider
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder
import org.elasticsearch.client.RestClientBuilder
// provide a RestClientFactory for custom configuration on the internally created REST client
esSinkBuilder.setRestClientFactory((restClientBuilder) => {
  def foo(restClientBuilder) = restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
    override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
      // elasticsearch username and password
      val credentialsProvider = new BasicCredentialsProvider
      credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
      httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
    }
  })
  foo(restClientBuilder)
})
The original code snippet produces the error "cannot resolve RestClientFactory", and the Java-to-Scala conversion shows several other errors.
So basically I need to find a Scala version of the solution described in Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4).
Update 1: I was able to make some progress with some help from IntelliJ. The following code compiles and runs but there is another problem.
esSinkBuilder.setRestClientFactory(
  new RestClientFactory {
    override def configureRestClientBuilder(restClientBuilder: RestClientBuilder): Unit = {
      restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
        override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
          // elasticsearch username and password
          val credentialsProvider = new BasicCredentialsProvider
          credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
          httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
          httpClientBuilder.setSSLContext(trustfulSslContext)
        }
      })
    }
  }
)
The problem is that I am not sure if I should be doing a new of the RestClientFactory object. What happens is that the application connects to the Elasticsearch cluster but then discovers that the SSL cert is not valid, so I had to add the trustfulSslContext (as described here https://gist.github.com/iRevive/4a3c7cb96374da5da80d4538f3da17cb). That got me past the SSL issue, but now the ES REST client does a ping test, the ping fails, it throws an exception, and the app shuts down. I suspect that the ping fails because of the SSL error, and that maybe it is not using the trustfulSslContext I set up as part of new RestClientFactory. This makes me suspect that I should not have done the new, and that there should be a simple way to update the existing RestClientFactory object; basically this is all happening because of my lack of Scala knowledge.
Happy to report that this is resolved. The code I posted in Update 1 is correct. The ping to ECE was not working for two reasons:
The certificate needs to include the complete chain: the root CA, the intermediate CA, and the cert for the ECE. This got rid of the whole trustfulSslContext workaround.
The ECE was sitting behind an ha-proxy, and the proxy mapped the hostname in the HTTP request to the actual deployment cluster name in ECE. This mapping logic did not take into account that the Java REST High Level Client uses the org.apache.http.HttpHost class, which renders the hostname as hostname:port_number even when the port number is 443. Since the mapping lookup failed because of the :443 suffix, ECE returned a 404 error instead of 200 OK (the only way to find this was to look at unencrypted packets at the ha-proxy). Once the mapping logic in ha-proxy was fixed, the mapping was found and the pings are now successful.
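The hostname:port behaviour is easy to verify directly against the httpcore HttpHost class; a tiny illustrative Java sketch (the hostname is made up):

import org.apache.http.HttpHost;

public class HostStringDemo {
    public static void main(String[] args) {
        // Hypothetical ECE endpoint, just to show the formatting behaviour
        HttpHost host = new HttpHost("ece.example.com", 443, "https");
        // Prints "ece.example.com:443": the explicit port is kept even though
        // 443 is the default for https, which is what broke the proxy's hostname mapping
        System.out.println(host.toHostString());
    }
}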

Running junit tests in maven ignores programmatic log4j2 setup

I have a particular JUnit test which processes a big data file and when the logging level is left on TRACE, it kills Eclipse - something to do with the console handling, which is not relevant to this question.
I often switch between running all my tests using m2e, in which case I don't need any debugging log output, and running individual tests, where I often want to see the TRACE output.
To avoid having to edit my log4j2.xml config every time, I coded this particular test to raise the logging level to INFO, like this (from programmatically-change-log-level-in-log4j2):
@Before
public void beforeTest() {
    LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
    Configuration config = ctx.getConfiguration();
    LoggerConfig loggerConfig = config.getLoggerConfig(LogManager.ROOT_LOGGER_NAME);
    initialLogLevel = loggerConfig.getLevel();
    loggerConfig.setLevel(Level.INFO);
}
But it has no effect.
If the "ROOT_LOGGER" that I am manipulating here represents the same logger as the <root> in my log4j2.xml, then this is not going to work, is it? I need to override all the other loggers, or shut it down completely, but how?
Could it be influenced by my use of slf4j as the log4j2 wrapper in all of my other classes?
I have tried getting hold of the Appenders and calling appender.stop(), but that doesn't work.
You can put a log4j2 config file in the src/test/resources/ directory; during unit tests that file will be used instead of the one in src/main/resources. Naming it log4j2-test.xml makes the precedence explicit, since Log4j 2 looks for a -test configuration file first.
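As an aside, if you want the programmatic approach from the question to work, Log4j 2 normally needs to be told to propagate a LoggerConfig change. A sketch of the @Before method with that extra step (initialLogLevel is the field from the question):

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.LoggerConfig;

@Before
public void beforeTest() {
    LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
    Configuration config = ctx.getConfiguration();
    LoggerConfig loggerConfig = config.getLoggerConfig(LogManager.ROOT_LOGGER_NAME);
    initialLogLevel = loggerConfig.getLevel();
    loggerConfig.setLevel(Level.INFO);
    // Without this call the level change is not applied to existing loggers
    ctx.updateLoggers();
}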

Felix Dependency Manager not creating Gogo command

I am trying to create a Gogo shell command using the Felix Dependency Manager (version 3.2.0) without annotations.
As far as I understand, the Gogo runtime uses the whiteboard pattern and scans for services with properties using the keys CommandProcessor.COMMAND_SCOPE and CommandProcessor.COMMAND_FUNCTION.
In my case, the bundle is started and the service is registered with the correct properties, but my command is not listed under "help", nor does it work when I try to call it.
The following code registers the service within the BundleActivator (DependencyActivatorBase):
Properties props = new Properties();
props.put(CommandProcessor.COMMAND_SCOPE, "test");
props.put(CommandProcessor.COMMAND_FUNCTION, new String[] {"command"});
manager.add(createComponent()
    .setInterface(Object.class.getName(), props)
    .setImplementation(MyConsole.class)
    .add(createServiceDependency()
        .setService(MyService.class)));
The following bundles are listed by the lb command when running my code:
org.apache.felix.gogo.command
org.apache.felix.gogo.runtime
org.apache.felix.gogo.shell
org.apache.felix.dependencymanager
org.apache.felix.dependencymanager.shell
mybundle.service
mybundle.api
mybundle.console
Development is done with BndTools.
Am I missing something here?
First of all, your assumption about how to register gogo commands is correct: a whiteboard pattern is used and the scope and function properties determine the commands.
You did not post the code for MyConsole. Does it actually contain a method called command? If not, that could be the problem.
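For illustration, a minimal sketch of what MyConsole would need to contain, given the properties registered above (the method body is hypothetical):

public class MyConsole {
    // The method name must match the value registered under
    // CommandProcessor.COMMAND_FUNCTION ("command" in the question's code)
    public void command() {
        System.out.println("test:command invoked");
    }
}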
Another potential problem could be that you did not actually add a Bundle-Activator line in your manifest.
If that's not it, use the dm notavail command to see if there are any unregistered components (because of missing dependencies).

Squeryl 0.9.5 (with Lift 2.4) not releasing database connections/pools

Following the recommended transaction setup for Squeryl, in my Boot.scala:
import java.sql.DriverManager
import net.liftweb.squerylrecord.SquerylRecord
import org.squeryl.Session
import org.squeryl.adapters.H2Adapter

SquerylRecord.initWithSquerylSession(Session.create(
  DriverManager.getConnection("jdbc:h2:lift_proto.db;DB_CLOSE_DELAY=-1", "sa", ""),
  new H2Adapter
))
The first startup works fine. I can connect via H2's web-interface and if I use my app, it updates the database appropriately. However if I restart jetty without restarting the JVM, I get:
java.sql.SQLException: No suitable driver found for jdbc:h2:lift_proto.db;DB_CLOSE_DELAY=-1
The same thing happens if I replace "DB_CLOSE_DELAY=-1" with "AUTO_SERVER=TRUE" or remove it entirely.
Following the recommendations on the Squeryl list, I tried C3P0:
import com.mchange.v2.c3p0.ComboPooledDataSource

val cpds = new ComboPooledDataSource
cpds.setDriverClass("org.h2.Driver")
cpds.setJdbcUrl("jdbc:h2:lift_proto")
cpds.setUser("sa")
cpds.setPassword("")
org.squeryl.SessionFactory.concreteFactory =
  Some(() => Session.create(cpds.getConnection, new H2Adapter()))
This produces similar behavior:
WARNING: A C3P0Registry mbean is already registered. This probably means that an application using c3p0 was undeployed, but not all PooledDataSources were closed prior to undeployment. This may lead to resource leaks over time. Please take care to close all PooledDataSources.
To be sure it wasn't anything I was doing which was causing this, I started and stopped the server without calling a transaction { } block. No exceptions were thrown. I then added to my Boot.scala:
transaction { /* Do nothing */ }
And the exception was once again thrown (I'm assuming because connections are lazy). So I moved the db initialization code to its own file away from Lift:
SessionFactory.concreteFactory = Some(() =>
  Session.create(
    java.sql.DriverManager.getConnection("jdbc:h2:mem:test", "sa", ""),
    new H2Adapter
  ))
transaction {}
Results were unchanged. What am I doing wrong? I cannot find any mention of needing to explicitly close connections or sessions in the Squeryl documentation, and this is my first time using JDBC.
I found mention of the same issue here on the Lift google group, but no resolution.
Thanks for any help.
When you say you are restarting Jetty, I think what you're actually doing is reloading your webapp within Jetty. Neither the H2 database nor C3P0 will automatically shut down when your app reloads, which explains the errors you receive when Lift tries to initialize them a second time. You don't see the error without a transaction block because both H2 and C3P0 are initialized lazily, when the first DB connection is retrieved.
I tend to use BoneCP as a connection pool myself. You can configure the minimum number of pooled connections to be > 1, which will stop H2 from shutting down without the need for DB_CLOSE_DELAY=-1. Then you can use:
LiftRules.unloadHooks append { () =>
  yourPool.close() // should destroy the pool and its associated threads
}
That will close all of the connections when Lift is shut down, which should properly shut down the H2 database as well.
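As a sketch of that BoneCP setup in Java (the JDBC settings mirror the question; the pool sizing values are illustrative):

import com.jolbox.bonecp.BoneCPDataSource;

public class PoolSetup {
    public static BoneCPDataSource createPool() {
        BoneCPDataSource pool = new BoneCPDataSource();
        pool.setDriverClass("org.h2.Driver");
        pool.setJdbcUrl("jdbc:h2:lift_proto");
        pool.setUsername("sa");
        pool.setPassword("");
        // Keeping a couple of connections open at all times stops H2 from
        // shutting down between requests, without DB_CLOSE_DELAY=-1
        pool.setMinConnectionsPerPartition(2);
        pool.setMaxConnectionsPerPartition(10);
        return pool; // close() this pool from the Lift unload hook shown above
    }
}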

Programmatically flush JBoss 4.2.2 connection pool

I'm running JBoss 4.2.2. I'm trying to determine the correct code to both:
Look up the org.jboss.resource.connectionmanager.JBossManagedConnectionPool
Perform a flush() operation on said pool.
I've found a couple other questions out there with no answers. I'm hoping this doesn't become yet another one of them.
The closest question I've found so far: https://community.jboss.org/message/637784
Here are the basics, using a quick Groovy example.
First, you want jboss-4.2.2/client/jbossall-client.jar in your classpath.
Next, you need the JMX ObjectName of the data source. It may be helpful to find this in the JMX Console at http://localhost:8080/jmx-console/ (or wherever your instance is deployed). The string value of the ObjectName is the domain + ":" + the properties.
For example:
The ObjectName is: jboss.jca:name=DefaultDS,service=ManagedConnectionPool.
Next, look up the RMIAdaptor in JNDI. This is the MBeanServer interface that will allow you to invoke the flush operation on the target MBean. Then call the invocation. That's it.
import javax.management.*;
import javax.naming.*;

// Connect to JBoss's JNDI server
p = new Properties();
p.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
p.put(Context.PROVIDER_URL, "localhost:1099");
ctx = new InitialContext(p);
// The RMIAdaptor exposes the MBeanServer to remote clients
rmiAdaptor = ctx.lookup("jmx/rmi/RMIAdaptor");
// Invoke the no-argument flush operation on the pool's MBean
rmiAdaptor.invoke(new ObjectName("jboss.jca:name=DefaultDS,service=ManagedConnectionPool"), "flush", [] as Object[], [] as String[]);
Make sense?
===== Update =====
If you are executing this from inside the JBoss JVM, you don't need the JNDI setup:
import javax.management.*;
import org.jboss.mx.util.MBeanServerLocator;

// Locate the in-VM JBoss MBeanServer directly
MBeanServer server = MBeanServerLocator.locateJBoss();
server.invoke(new ObjectName("jboss.jca:name=DefaultDS,service=ManagedConnectionPool"), "flush", [] as Object[], [] as String[]);