New to Stack Exchange and Giraph, so please overlook mistakes and ask any clarifying questions.
OS: Ubuntu 13.10
Hadoop/Yarn: hadoop-2.2.0/ (2-node cluster)
Giraph: 1.0.0 (EDIT: trunk)
I'm getting a NullPointerException (NPE) when I attempt to run the following example:
$ hadoop jar $GIRAPH_HOME/giraph-examples/target/giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.2.0-jar-with-dependencies.jar \
    org.apache.giraph.GiraphRunner \
    org.apache.giraph.examples.SimpleShortestPathsComputation \
    -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
    -vip /user/hduser/rrdata/tiny_graph.txt \
    -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
    -op /user/hduser/rrdata/output/tiny_graph.out -w 1
Stack Trace:
Exception in thread "main" java.lang.NullPointerException
    at org.apache.giraph.yarn.GiraphYarnClient.checkJobLocalZooKeeperSupported(GiraphYarnClient.java:460)
    at org.apache.giraph.yarn.GiraphYarnClient.run(GiraphYarnClient.java:116)
    at org.apache.giraph.GiraphRunner.run(GiraphRunner.java:96)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.giraph.GiraphRunner.main(GiraphRunner.java:126)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
It seems ZooKeeper related. I installed ZooKeeper, but never having used it before, I suspect my configs are wrong. I've tried -Dgiraph.zkList=hostname:port and related options, but I get an 'Unrecognized option' exception.
I'm looking for the correct ZooKeeper settings for this scenario. I'll post a reply if I figure it out.
This is an example of how you can specify -D flags (note the placement: right after GiraphRunner, before the computation class name):
hadoop jar giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.2-jar-with-dependencies.jar \
    org.apache.giraph.GiraphRunner \
    -D giraph.zkList="zkNode.net:2081" \
    org.apache.giraph.examples.SimpleShortestPathsComputation \
    -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
    -vip /user/rav/giraph/input/tiny_graph.txt \
    -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
    -op /user/rav/giraph/output/shortestpaths -w 1
Btw, local ZooKeeper is not supported in Giraph on YARN yet (GiraphYarnClient):
/**
 * Check if the job's configuration is for a local run. These can all be
 * removed as we expand the functionality of the "pure YARN" Giraph profile.
 */
private void checkJobLocalZooKeeperSupported() {
  final boolean isZkExternal = giraphConf.isZookeeperExternal();
  final String checkZkList = giraphConf.getZookeeperList();
  if (!isZkExternal || checkZkList.isEmpty()) {
    throw new IllegalArgumentException("Giraph on YARN does not currently" +
      "support Giraph-managed ZK instances: use a standalone ZooKeeper.");
  }
}
Unfortunately checkZkList is NULL, so you will never see this exception :)
The reason for the NPE is probably the lack of a giraphConf to check for ZK settings; I think this is due to earlier problems in the run. It looks like the examples jar was not supplied via the -yj argument; the jar you run with "hadoop jar" is typically giraph-core itself.
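For illustration, a hedged sketch of how the -yj argument might be added to the OP's command. The giraph-core jar name and path are assumptions (check your build's target directory), and zkNode.net:2181 is a placeholder ZooKeeper address:

hadoop jar $GIRAPH_HOME/giraph-core/target/giraph-1.1.0-SNAPSHOT-for-hadoop-2.2.0-jar-with-dependencies.jar \
    org.apache.giraph.GiraphRunner \
    -D giraph.zkList="zkNode.net:2181" \
    org.apache.giraph.examples.SimpleShortestPathsComputation \
    -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
    -vip /user/hduser/rrdata/tiny_graph.txt \
    -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
    -op /user/hduser/rrdata/output/tiny_graph.out -w 1 \
    -yj giraph-examples-1.1.0-SNAPSHOT-for-hadoop-2.2.0-jar-with-dependencies.jar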
Good luck, please post on the Giraph user list if you have more trouble.
Objective
I am trying to connect to my Oracle Database (12c) from Kafka Connect (ideally in distributed mode) using the Debezium connector (1.2.4.Final). The Kafka version I am using is 2.13-2.6.0.
Command used
As per the instructions mentioned here, I am running this command:
C:\Users\username\Downloads\kafka>bin\windows\connect-distributed.bat config\connect-distributed.properties
Error
The error I am getting is:
ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed)
java.lang.NoClassDefFoundError: io/debezium/util/IoUtil
at io.debezium.connector.oracle.Module.<clinit>(Module.java:19)
at io.debezium.connector.oracle.OracleConnector.version(OracleConnector.java:23)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.versionFor(DelegatingClassLoader.java:390)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.versionFor(DelegatingClassLoader.java:395)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.getPluginDesc(DelegatingClassLoader.java:365)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:337)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:268)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:260)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:229)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:206)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:61)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:91)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
Caused by: java.lang.ClassNotFoundException: io.debezium.util.IoUtil
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 13 more
Settings
In my connect-distributed.properties, I have this:
plugin.path=C:/Users/username/Downloads/kafka/libs/debezium
And inside the debezium folder (following Gunnar's recommendation from the comment in this question), I have these jars:
I also added the plugin path to %PATH% as follows:
echo %PATH% | findstr debezium
XXX;C:\Users\username\Downloads\kafka\libs\debezium;
Help
Any help would be greatly appreciated, as I hope to replace my database polling with this Debezium connector, which seems a better approach. Thanks!
The solution from Gunnar here works! (His explanation is there too if you want to check it out.)
plugin.path=C:\\Users\\username\\Downloads\\kafka\\libs
and these also work:
plugin.path=C:/Users/username/Downloads/kafka/libs
plugin.path=C:\Users\username\Downloads\kafka\libs
plugin.path=/Users/username/Downloads/kafka/libs
The mistake was that plugin.path should point up to libs, not libs/debezium. See the layout sketch below.
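The reason, as I understand it: Kafka Connect treats each immediate subdirectory of a plugin.path entry as one plugin, so plugin.path must name the parent directory. A sketch of the layout (jar names are illustrative):

C:/Users/username/Downloads/kafka/libs/      <- plugin.path points here
    debezium/                                <- one plugin per subdirectory
        debezium-connector-oracle-1.2.4.Final.jar
        debezium-core-1.2.4.Final.jar
        ...                                  <- the remaining connector jars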
I've installed WSO2 Integration Studio version 6.5.0 on my Windows workstation and created a project using the Kafka Consumer and Producer built-in template.
Then I configured the project with my own Kafka server settings (topic name "myTopic").
I then right-clicked the composite application and chose Export Project Artifacts and Run.
The Console window displayed at the very top the following messages:
[2019-06-25 09:23:45,499] [micro-integrator] INFO - LibraryArtifactDeployer Synapse Library named '{org.wso2.carbon.connector}kafkaTransport' has been deployed from file : C:\IntegrationStudio\runtime\microesb\tmp\carbonapps\-1234\1561465425230TestCompositeApplication_1.0.0.car\kafkaTransport-connector_2.0.6\kafkaTransport-connector-2.0.6.zip
[2019-06-25 09:23:45,517] [micro-integrator] INFO - SynapseImportFactory Successfully created Synapse Import: kafkaTransport
[2019-06-25 09:23:45,533] [micro-integrator] ERROR - ClassMediatorFactory Error in instantiating class : org.wso2.carbon.connector.KafkaProduceConnector
java.lang.NoClassDefFoundError: org/apache/kafka/common/header/Headers
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
at java.lang.Class.getConstructor0(Class.java:3075)
[snipped rest for clarity]
I've tried uninstalling Integration Studio and running it with elevated rights, to no avail.
I expected the project to be deployed normally.
EDIT: after copying:
kafka_2.11-2.2.1.jar
metrics-core-2.2.0.jar
zkclient-0.11.jar
kafka-clients-2.2.1.jar
scala-library-2.11.12.jar
zookeeper-3.4.13.jar
to the EI_HOME/lib directory, the exception changed to:
org.apache.axis2.deployment.DeploymentException: kafka/consumer/ConsumerTimeoutException
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:219)
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifactType(SynapseAppDeployer.java:1099)
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(SynapseAppDeployer.java:114)
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(ApplicationManager.java:272)
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(CappAxis2Deployer.java:72)
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807)
at org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:144)
[snipped for clarity]
Caused by: org.apache.axis2.deployment.DeploymentException: kafka/consumer/ConsumerTimeoutException
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:207)
... 87 more
Caused by: java.lang.NoClassDefFoundError: kafka/consumer/ConsumerTimeoutException
at org.wso2.carbon.inbound.endpoint.protocol.kafka.KAFKAPollingConsumer.startsMessageListener(KAFKAPollingConsumer.java:90)
at org.wso2.carbon.inbound.endpoint.protocol.kafka.KAFKAProcessor.init(KAFKAProcessor.java:96)
at org.apache.synapse.inbound.InboundEndpoint.init(InboundEndpoint.java:79)
at org.apache.synapse.deployers.InboundEndpointDeployer.deploySynapseArtifact(InboundEndpointDeployer.java:57)
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:197)
... 87 more
Caused by: java.lang.ClassNotFoundException: kafka.consumer.ConsumerTimeoutException cannot be found by synapse-core_2.1.7.wso2v111
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:475)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 92 more
Have you copied the required jars from KAFKA_HOME/libs to EI_HOME/lib? If yes, please share your code so we can pin down the issue.
According to this documentation, https://docs.wso2.com/display/EI650/Kafka+Inbound+Protocol, the recommended Kafka version is kafka_2.9.2-0.8.1.1. You can download it from http://kafka.apache.org/downloads.html. Please copy its jars to EI_HOME/lib; the copy step is sketched below. There is a GitHub issue for this as well: https://github.com/wso2/product-ei/issues/2239
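A hedged sketch of that copy step, assuming a Unix-like shell (KAFKA_HOME and EI_HOME are placeholders for your install locations; on Windows use copy with the actual paths):

# Copy the client jars shipped with the recommended Kafka release into the EI runtime
cp $KAFKA_HOME/libs/*.jar $EI_HOME/lib/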
I may be a bit too late, but we have been using a custom inbound endpoint for Kafka. We also faced exactly the same issue, and that was the only way to fix it.
You can use https://github.com/wso2-extensions/esb-inbound-kafka/blob/master/docs/config.md to configure it; a rough sketch follows.
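For illustration, a rough inbound-endpoint sketch in the shape that extension expects. The class and parameter names here are recalled from the linked config.md and should be verified against it rather than taken as-is; the topic name is the one from the question:

<inboundEndpoint name="KafkaInboundEP"
                 class="org.wso2.carbon.inbound.kafka.KafkaMessageConsumer"
                 sequence="kafkaProcessSeq" onError="kafkaErrorSeq" suspend="false">
  <parameters>
    <parameter name="inbound.behavior">polling</parameter>
    <parameter name="interval">10</parameter>
    <parameter name="bootstrap.servers">localhost:9092</parameter>
    <parameter name="topic.name">myTopic</parameter>
    <parameter name="group.id">myGroup</parameter>
    <parameter name="contentType">text/plain</parameter>
    <parameter name="key.deserializer">org.apache.kafka.common.serialization.StringDeserializer</parameter>
    <parameter name="value.deserializer">org.apache.kafka.common.serialization.StringDeserializer</parameter>
  </parameters>
</inboundEndpoint>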
Spark 1.6.2 (YARN master)
Package name: com.example.spark.Main
Basic SparkSQL code
val conf = new SparkConf()
conf.setAppName("SparkSQL w/ Hive")
val sc = new SparkContext(conf)
val hiveContext = new HiveContext(sc)
import hiveContext.implicits._
// val rdd = <some RDD making>
val df = rdd.toDF()
df.write.saveAsTable("example")
And the stack trace...
No X11 DISPLAY variable was set, but this program performed an operation which requires it.
at java.awt.GraphicsEnvironment.checkHeadless(GraphicsEnvironment.java:204)
at java.awt.Window.<init>(Window.java:536)
at java.awt.Frame.<init>(Frame.java:420)
at java.awt.Frame.<init>(Frame.java:385)
at com.trend.iwss.jscan.runtime.BaseDialog.getActiveFrame(BaseDialog.java:75)
at com.trend.iwss.jscan.runtime.AllowDialog.make(AllowDialog.java:32)
at com.trend.iwss.jscan.runtime.PolicyRuntime.showAllowDialog(PolicyRuntime.java:325)
at com.trend.iwss.jscan.runtime.PolicyRuntime.stopActionInner(PolicyRuntime.java:240)
at com.trend.iwss.jscan.runtime.PolicyRuntime.stopAction(PolicyRuntime.java:172)
at com.trend.iwss.jscan.runtime.PolicyRuntime.stopAction(PolicyRuntime.java:165)
at com.trend.iwss.jscan.runtime.NetworkPolicyRuntime.checkURL(NetworkPolicyRuntime.java:284)
at com.trend.iwss.jscan.runtime.NetworkPolicyRuntime._preFilter(NetworkPolicyRuntime.java:164)
at com.trend.iwss.jscan.runtime.PolicyRuntime.preFilter(PolicyRuntime.java:132)
at com.trend.iwss.jscan.runtime.NetworkPolicyRuntime.preFilter(NetworkPolicyRuntime.java:108)
at org.apache.commons.logging.LogFactory$5.run(LogFactory.java:1346)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.commons.logging.LogFactory.getProperties(LogFactory.java:1376)
at org.apache.commons.logging.LogFactory.getConfigurationFile(LogFactory.java:1412)
at org.apache.commons.logging.LogFactory.getFactory(LogFactory.java:455)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.apache.hadoop.hive.shims.HadoopShimsSecure.<clinit>(HadoopShimsSecure.java:60)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.hadoop.hive.shims.ShimLoader.createShim(ShimLoader.java:146)
at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:141)
at org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:100)
at org.apache.spark.sql.hive.client.ClientWrapper.overrideHadoopShims(ClientWrapper.scala:116)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:69)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:249)
at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:345)
at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:255)
at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:459)
at org.apache.spark.sql.hive.HiveContext.defaultOverrides(HiveContext.scala:233)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:236)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:101)
at com.example.spark.Main1$.main(Main.scala:52)
at com.example.spark.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
ivysettings.xml file not found in HIVE_HOME or HIVE_CONF_DIR,/etc/hive/2.5.3.0-37/0/ivysettings.xml will be used
This same code was working a week ago on a fresh HDP cluster, and it works fine in the sandbox. The only thing I remember doing was changing the JAVA_HOME variable around, but I am fairly sure I undid those changes.
I'm at a loss - not sure how to start tracking down the issue.
The cluster is headless, so of course it has no X11 display, but what part of constructing a new HiveContext even needs to pop up a JFrame?
Based on the logs, I'd say it's a Java configuration issue I messed up: something within org.apache.hadoop.hive.shims.HadoopShimsSecure.<clinit>(HadoopShimsSecure.java:60) got triggered, and therefore a Java security dialog is appearing, but I don't know.
I can't do X11 forwarding, and I tried export SPARK_OPTS="-Djava.awt.headless=true" before spark-submit, but that didn't help.
I tried these, but again, I can't forward and don't have a display:
Getting a HeadlessException: No X11 DISPLAY variable was set
"No X11 DISPLAY variable" - what does it mean?
The error seems to be reproducible on two of the Spark clients.
Only on one machine did I try changing JAVA_HOME.
An Ambari Hive service check didn't fix it.
I can connect fine to the Hive database via the Hive/Beeline CLI.
As far as the Spark code is concerned, this seemed to mitigate the error:
val conf = new SparkConf()
conf.set("spark.executor.extraJavaOptions", "-Djava.awt.headless=true")
Original answer
Found this post. Spring 3.0.5 - java.awt.HeadlessException - com.trend.iwss.jscan
Basically, Trend Micro was inserting a com.trend.iwss.jscan package into the JAR files that were downloaded via Maven through a company firewall, and I had no control over that.
(link not working) http://esupport.trendmicro.com/Pages/IWSx-3x-Some-files-and-folders-are-added-to-the-Jar-files-after-passin.aspx
Wayback Machine to the rescue...
If anyone else has input, I would also like to hear it.
Problem:
When downloading some .JAR files via IWSA, a directory filled with .class files, unrelated to what is being downloaded, is added to the jar file (com\trend\iwss\jscan\runtime\).
Solution:
This happens because, if a JAR file is originally unsigned, IWSA will insert some code into the applet to monitor and restrict potentially harmful actions.
For IWSS/IWSA, every "get" request looks the same, so it cannot tell whether you are downloading an archive or an applet that will be executed by your browser.
This code is added for security reasons, to monitor the behavior of the "possible" applet and make sure it does not do any harm to the machine and its environment.
To prevent this issue, please follow these steps:
Log on to the IWSS web console.
Go to HTTP > Applets and ActiveX > Policies > Java Applet Security Rules.
Under Java Applet Security, change the value of "No signature" to either "Pass" or "Block", depending on what you want to do with the unsigned .JAR files.
Click Save.
After I deleted a Google Cloud Storage directory through the Google Cloud Console (the directory was generated by an early Spark (ver. 1.3.1) job), re-running the job always fails: the directory still appears to exist as far as the job is concerned, yet I cannot find it with gsutil.
Is this a bug, or is there anything I missed? Thanks!
The error I got:
java.lang.RuntimeException: path gs://<my_bucket>/job_dir1/output_1.parquet already exists.
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.parquet.DefaultSource.createRelation(newParquet.scala:112)
at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:240)
at org.apache.spark.sql.DataFrame.save(DataFrame.scala:1196)
at org.apache.spark.sql.DataFrame.saveAsParquetFile(DataFrame.scala:995)
at com.xxx.Job1$.execute(Job1.scala:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
It appears you might be running into a known bug with the NFS list-consistency cache: https://github.com/GoogleCloudPlatform/bigdata-interop/issues/5
It was fixed in the latest release. If you upgrade by deploying a new cluster with bdutil-1.3.1 (announced here: https://groups.google.com/forum/#!topic/gcp-hadoop-announce/vstNuV0LpDc), the problem should be fixed. If you need to upgrade in place, you can download the latest gcs-connector-1.4.1 jarfile onto your master and worker nodes under /home/hadoop/hadoop-install/lib/gcs-connector-*.jar and then restart the Spark daemons:
sudo sudo -u hadoop /home/hadoop/spark-install/sbin/stop-all.sh
sudo sudo -u hadoop /home/hadoop/spark-install/sbin/start-all.sh
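To confirm the in-place swap took effect, a quick hedged check of the deployed connector jar on each node (same path as above):

ls -l /home/hadoop/hadoop-install/lib/gcs-connector-*.jar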
When I tried to build a JMS client to get a connection to messaging on JBoss 5 (default configuration), I encountered this error (at the line: Connection conn = qcf.createQueueConnection();). This is a Maven project with the following libraries on the classpath.
jboss:jnp-client:jar:4.0.2:compile
jboss:jboss-aop:jar:JBOSSAS-5.1:compile
jboss:jboss-messaging-client:jar:1.4.7.GA:compile
jboss:jbossall-client:jar:JBOSSAS-5.1:compile
jboss:jboss-common-core:jar:JBOSSAS-5.1:compile
jboss:jboss-mdr:jar:JBOSSAS-5.1:compile
jboss:jboss-logging-spi:jar:JBOSSAS-5.1:compile
org.jboss.remoting:jboss-remoting:jar:2.5.3.SP1:compile
For such simple code this did not make sense. Any help is appreciated.
My code is as follows:
import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.QueueConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

// JNDI environment for the JBoss JNP naming provider
Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
env.put(Context.PROVIDER_URL, "jnp://localhost:1099");
env.put(Context.OBJECT_FACTORIES, "ConnectionFactory");
env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");

// Look up the connection factory and create a queue connection
InitialContext iniCtx = new InitialContext(env);
Object tmp = iniCtx.lookup("java:/XAConnectionFactory");
QueueConnectionFactory qcf = (QueueConnectionFactory) tmp;
Connection conn = qcf.createQueueConnection(); // <-- fails here
The error I was getting is:
Exception in thread "main" java.lang.RuntimeException: Failed to download and/or install client side AOP stack
at org.jboss.jms.client.JBossConnectionFactory.createConnectionInternal(JBossConnectionFactory.java:199)
at org.jboss.jms.client.JBossConnectionFactory.createQueueConnection(JBossConnectionFactory.java:101)
at org.jboss.jms.client.JBossConnectionFactory.createQueueConnection(JBossConnectionFactory.java:95)
at com.test.JMSExample.main(JMSExample.java:120)
Caused by: org.jboss.jms.exception.MessagingNetworkFailureException: Failed to connect client
at org.jboss.jms.client.delegate.ClientConnectionFactoryDelegate.createClient(ClientConnectionFactoryDelegate.java:347)
at org.jboss.jms.client.delegate.ClientConnectionFactoryDelegate.org$jboss$jms$client$delegate$ClientConnectionFactoryDelegate$getClientAOPStack$aop(ClientConnectionFactoryDelegate.java:246)
at org.jboss.jms.client.delegate.ClientConnectionFactoryDelegate.getClientAOPStack(ClientConnectionFactoryDelegate.java)
at org.jboss.jms.client.ClientAOPStackLoader.load(ClientAOPStackLoader.java:75)
at org.jboss.jms.client.JBossConnectionFactory.createConnectionInternal(JBossConnectionFactory.java:192)
... 3 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.jboss.remoting.InvokerRegistry.loadClientInvoker(InvokerRegistry.java:460)
at org.jboss.remoting.InvokerRegistry.createClientInvoker(InvokerRegistry.java:359)
at org.jboss.remoting.Client$6.run(Client.java:724)
at java.security.AccessController.doPrivileged(Native Method)
at org.jboss.remoting.Client.connect(Client.java:720)
at org.jboss.remoting.Client.connect(Client.java:668)
at org.jboss.jms.client.delegate.ClientConnectionFactoryDelegate.createClient(ClientConnectionFactoryDelegate.java:343)
... 7 more
Caused by: java.lang.NoSuchMethodError: org.jboss.util.propertyeditor.PropertyEditors.mapJavaBeanProperties(Ljava/lang/Object;Ljava/util/Properties;Z)V
at org.jboss.remoting.transport.socket.MicroSocketClientInvoker.mapJavaBeanProperties(MicroSocketClientInvoker.java:1359)
at org.jboss.remoting.transport.socket.MicroSocketClientInvoker.setup(MicroSocketClientInvoker.java:533)
at org.jboss.remoting.transport.socket.MicroSocketClientInvoker.<init>(MicroSocketClientInvoker.java:292)
at org.jboss.remoting.transport.socket.SocketClientInvoker.<init>(SocketClientInvoker.java:78)
at org.jboss.remoting.transport.bisocket.BisocketClientInvoker.<init>(BisocketClientInvoker.java:166)
at org.jboss.remoting.transport.bisocket.TransportClientFactory.createClientInvoker(TransportClientFactory.java:44)
... 18 more
It looks like there is a mismatch between JAR files or a connectivity problem. Try the following steps:
1) Set the -verbose:class option on the JVM running JBoss AS and examine the output to find where MicroSocketClientInvoker.class comes from; it looks like JBoss couldn't find a method in it (see the sketch after this list).
2) Check whether port 4457 is open, because the JBoss Messaging connector uses the default serverBindPort of 4457.
Hope it helps.
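For illustration, hedged shell commands for both checks (the classpath, host, and run.conf location are placeholders for your setup, not from the original question):

# Server side: append to JAVA_OPTS (e.g., in JBoss 5's bin/run.conf)
JAVA_OPTS="$JAVA_OPTS -verbose:class"

# Client side: trace which jar MicroSocketClientInvoker is loaded from
java -verbose:class -cp <your-client-classpath> com.test.JMSExample 2>&1 | grep MicroSocketClientInvoker

# Connectivity check for the messaging bind port
telnet <jboss-host> 4457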
I had the same issue. It can be caused by several underlying root causes; in order to evaluate your specific root cause, you need to follow the exception stack trace all the way down the "Caused by" chain to the root exception.
In my case the problem was caused by a truststore certificate file that had been corrupted by incorrectly filtering it during the processResources task of my Gradle project. Binary files get corrupted when they are filtered during processResources. For me the fix was to exclude my certificate.truststore file from resource filtering.
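For illustration, a minimal Gradle sketch of that kind of exclusion. The file pattern mirrors my setup, and the expand() call is an assumed example of filtering, not the original build script:

processResources {
    // Apply token filtering only to text resources; binary files such as
    // certificate.truststore are copied byte-for-byte.
    filesNotMatching('**/*.truststore') {
        expand(version: project.version)
    }
}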