Issue when using JGroups in Fuse ESB 7.1.0 (JBoss)

I am using Infinispan, which uses JGroups. When I run my code on Windows it works fine, but when running on Linux I get the following exception:
failed sending discovery request
at java.net.PlainDatagramSocketImpl.send(Native Method)[:1.7.0_17]
at java.net.DatagramSocket.send(DatagramSocket.java:676)[:1.7.0_17]
at org.jgroups.protocols.MPING.sendMcastDiscoveryRequest(MPING.java:300)[143:xxxx:2.0.0.SNAPSHOT]
at org.jgroups.protocols.Discovery.sendDiscoveryRequest(Discovery.java:259)[143:xxxx:2.0.0.SNAPSHOT]
at org.jgroups.protocols.Discovery.findMembers(Discovery.java:216)[143:xxxx:2.0.0.SNAPSHOT]
at org.jgroups.protocols.Discovery.findAllViews(Discovery.java:203)[143:xxxx:2.0.0.SNAPSHOT]
at org.jgroups.protocols.Discovery.down(Discovery.java:527)[143:xxxx:2.0.0.SNAPSHOT]
at org.jgroups.protocols.MERGE2$FindSubgroupsTask.findAllViews(MERGE2.java:326)[143:xxxx:2.0.0.SNAPSHOT]
at org.jgroups.protocols.MERGE2$FindSubgroupsTask._findAndNotify(MERGE2.java:261)[143:xxxx:2.0.0.SNAPSHOT]
at org.jgroups.protocols.MERGE2$FindSubgroupsTask.findAndNotify(MERGE2.java:249)[143:xxxx:2.0.0.SNAPSHOT]
at org.jgroups.protocols.MERGE2$FindSubgroupsTask$1.run(MERGE2.java:226)[143:xxxx:2.0.0.SNAPSHOT]
at org.jgroups.util.TimeScheduler2$RecurringTask.run(TimeScheduler2.java:603)[143:xxxx:2.0.0.SNAPSHOT]
at org.jgroups.util.TimeScheduler2$MyTask.run(TimeScheduler2.java:535)[143:xxxx:2.0.0.SNAPSHOT]
at org.jgroups.util.TimeScheduler2$Entry.execute(TimeScheduler2.java:440)[143:xxxx:2.0.0.SNAPSHOT]
at org.jgroups.util.TimeScheduler2$1.run(TimeScheduler2.java:297)[143:xxxxx:2.0.0.SNAPSHOT]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_17]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_17]
How can I fix this?
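One way to narrow this down (a diagnostic sketch, not a confirmed fix for this setup): the trace fails inside DatagramSocket.send while MPING sends its multicast discovery packet, so first check whether the Linux host can send multicast datagrams at all, independently of JGroups and Infinispan. A common cause on Linux is the JVM preferring the IPv6 stack; the JGroups documentation recommends starting the JVM with -Djava.net.preferIPv4Stack=true as the first thing to try. In the test below, the multicast group 228.8.8.8 and port 7500 are placeholders, not values taken from the question.

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

// Minimal multicast send test, independent of JGroups/Infinispan.
// 228.8.8.8:7500 are placeholder values, not taken from the question.
public class McastSendTest {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("228.8.8.8");
        MulticastSocket sock = new MulticastSocket();
        try {
            byte[] payload = "ping".getBytes("UTF-8");
            // If this send fails with the same error as MPING, the problem is in
            // the OS/network (routing, firewall, IPv6 selection), not in JGroups.
            sock.send(new DatagramPacket(payload, payload.length, group, 7500));
            System.out.println("multicast datagram sent");
        } finally {
            sock.close();
        }
    }
}

Run it once normally and once with java -Djava.net.preferIPv4Stack=true McastSendTest; if only the second variant succeeds, add that property to the Fuse/Karaf JVM options.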

Related

Not able to post MQ message from Websphere Application Server image using docker image

I have a legacy application that uses WebSphere Application Server 9.0.0. I want to containerize it using Docker, but when I put a message on MQ it gives me the error below:
Unwrapping Non-DiagnosticException:
Exception Type : com.ibm.msg.client.jms.DetailedJMSException
Exception Message: JMSWMQ0018: Failed to connect to queue manager 'MQGWD2' with connection mode 'Client' and host name 'mqgwd2.sdde.deere.com(2171)'.
Begin Stack Trace:
com.ibm.msg.client.jms.DetailedJMSException: JMSWMQ0018: Failed to connect to queue manager 'MQGWD2' with connection mode 'Client' and host name 'mqgwd2.sdde.deere.com(2171)'.
Check the queue manager is started and if running in client mode, check there is a listener running. Please see the linked exception for more information.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:595)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:215)
at com.ibm.msg.client.wmq.internal.WMQConnection.<init>(WMQConnection.java:424)
at com.ibm.msg.client.wmq.internal.WMQXAConnection.<init>(WMQXAConnection.java:67)
at com.ibm.msg.client.wmq.factories.WMQXAConnectionFactory.createV7ProviderConnection(WMQXAConnectionFactory.java:187)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createProviderConnection(WMQConnectionFactory.java:7810)
at com.ibm.msg.client.wmq.factories.WMQXAConnectionFactory.createProviderXAConnection(WMQXAConnectionFactory.java:98)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl.createXAConnectionInternal(JmsConnectionFactoryImpl.java:390)
at com.ibm.mq.jms.MQXAQueueConnectionFactory.createXAQueueConnection(MQXAQueueConnectionFactory.java:154)
My docker-compose.yml looks like:
version: "3.9"
services:
consolidated:
build: .
ports:
- "9043:9043"
- "9443:9443"
- "9083:9083"
To run this application without a container, I didn't install MQ separately; I just use the one that is already there in the WebSphere server. The same doesn't work with the containerized image. I have compared all the connection factories through the admin console and they look OK, and the same configuration works in the non-containerized version of this application.
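JMSWMQ0018 with this linked-exception text is usually a plain reachability problem: the queue manager listener at mqgwd2.sdde.deere.com(2171) cannot be reached from where the client runs. Since the same configuration works outside the container, a quick check is whether that host and port are reachable from inside the container at all. A minimal sketch (host and port taken from the error message; run it inside the container, e.g. via docker exec, assuming a JDK is available there):

import java.net.InetSocketAddress;
import java.net.Socket;

// Quick TCP reachability check for the MQ listener from inside the container.
// Host and port come from the JMSWMQ0018 message above.
public class MqReachability {
    public static void main(String[] args) throws Exception {
        Socket s = new Socket();
        try {
            s.connect(new InetSocketAddress("mqgwd2.sdde.deere.com", 2171), 5000);
            System.out.println("MQ listener reachable");
        } finally {
            s.close();
        }
    }
}

If this times out or the hostname does not resolve, the problem is on the Docker networking/DNS side rather than in the connection factories.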

Error message while starting JBoss server: Port 9303 already in use

I am getting the error message below while starting my JBoss 1.5 server. Kindly help me out with suggestions to fix this issue.
The JBoss server stays up for a matter of minutes and then immediately stops working with the error message below.
2015-11-21 07:06:12,922 INFO [org.jboss.web.WebService] (main) Using RMI server codebase: http://jboss URL:9303/
2015-11-21 07:06:12,982 ERROR [org.jboss.kernel.plugins.dependency.AbstractKernelController] (main) Error installing to Start: name=jboss:service=WebService state=Create mode=Manual requiredState=Installed
java.lang.Exception: Port 9303 already in use.
at org.jboss.web.WebServer.start(WebServer.java:233)
at org.jboss.web.WebService.startService(WebService.java:322)
at org.jboss.system.ServiceMBeanSupport.jbossInternalStart(ServiceMBeanSupport.java:376)
at org.jboss.system.ServiceMBeanSupport.jbossInternalLifecycle(ServiceMBeanSupport.java:322)
Something on your system is already running on port 9303. You can either set a port offset or change the port of the service that is going to run on port 9303.
To change the port set in JBoss 5, use something like:
run -c server_instance_name -Djboss.service.binding.set=ports-01
As a test, try killing the service that is using port 9303 and then start JBoss again.
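To see which process is actually holding the port before starting JBoss, on Linux run lsof -i :9303 or netstat -tlnp | grep 9303 to get the owning PID (on Windows: netstat -ano | findstr 9303). If it turns out to be a leftover JBoss instance from a previous run, killing it is usually the whole fix.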

How to check that a cluster is working between two different JBoss servers

I configured a cluster between two different JBoss servers using the multicast method.
Both servers connect when I start them.
After one day, I get the following messages; errors start to show for the clustering in server.log:
05:28:17,447 ERROR [org.hornetq.core.server] (Thread-11905 (HornetQ-client-global-threads-377807954)) HQ224037:
cluster connection Failed to handle message: java.lang.IllegalStateException:
Cannot find binding for d7c1004f-b1a1-4160-8888-c38175ac45d599cf0dfe-5f30-11e4-bd7e-556a35fb9ec6 on
ClusterConnectionImpl#538608327[nodeUUID=930dee51-5f30-11e4-9695-ef52e2a27831, connector=TransportConfiguration(name=netty,
factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=5445&host=172-29-250-191, address=jms,
server=HornetQServerImpl::serverUUID=930dee51-5f30-11e4-9695-ef52e2a27831]
at org.hornetq.core.server.cluster.impl.ClusterConnectionImpl$MessageFlowRecordImpl.doConsumerCreat
05:28:17,411 ERROR [org.hornetq.core.server] (Thread-11439
(HornetQ-remoting-threads-HornetQServerImpl::serverUUID=99cf0dfe-5f30-11e4-bd7e-556a35fb9ec6-136247994-702467456))
HQ224016: Caught exception: HornetQException[errorType=QUEUE_EXISTS message=HQ119019:
Queue already exists 7a8b46d5-a038-4efd-900e-4c041c2c121f]
at org.hornetq.core.server.impl.HornetQServerImpl.createQueue(HornetQServerImpl.java:1811)
[hornetq-server-2.3.1.Final-redhat-1.jar:2.3.1.Final-redhat-1]
How can I verify the cluster between the two servers? Are there any procedures or workarounds available?
Red Hat provides a McastReceiverTest Java client test utility; further information on its use can be found at https://access.redhat.com/solutions/123073
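For reference, that sender/receiver pair ships in the JGroups jar itself (org.jgroups.tests.McastReceiverTest and org.jgroups.tests.McastSenderTest). A typical invocation, with a placeholder multicast address and port, and with the jar name depending on your distribution, looks like:
java -cp jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 228.8.8.8 -port 7500
java -cp jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 228.8.8.8 -port 7500
Run the receiver on one server and the sender on the other; if lines typed into the sender do not appear on the receiver, multicast is blocked between the two machines and the cluster cannot form reliably.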

How to get ThreadPool, WebContainer, Session, and ConnectionPool MBeans in WebSphere v8.5.5 Liberty profile

I need to monitor WAS Liberty profiles. I made some configuration changes in server.xml:
<feature>restConnector-1.0</feature>
<feature>jsp-2.2</feature>
<feature>appSecurity-1.0</feature>
<feature>ssl-1.0</feature>
<feature>monitor-1.0</feature>
but when I connect via the REST port, the only WebSphere MBeans I get are the following:
WebSphere
WebSphere:feature=restConnector,type=FileService,name=FileService
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=WLProject
WebSphere:feature=channelfw,type=endpoint,name=defaultHttpEndpoint-ssl
WebSphere:feature=restConnector,type=FileTransfer,name=FileTransfer
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=kohls
WebSphere:service=com.ibm.ws.kernel.filemonitor.FileNotificationMBean
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=worklightadmin
WebSphere:feature=channelfw,type=endpoint,name=defaultHttpEndpoint
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=worklightconsole
WebSphere:name=com.ibm.ws.jmx.mbeans.generatePluginConfig
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=_analytics
WebSphere:name=com.ibm.ws.config.serverSchemaGenerator
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=_MobileBrowserSimulator
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=nsecom
I am not able to get the ThreadPool or WebContainer MBeans. Is there any configuration I have to do?
Maybe update to the latest Liberty version and test with jconsole. I'm running v8.5.5.3 and it works fine. I'm using the following command to start jconsole with the REST connector (all on one line, formatted here for readability):
jconsole
-J-Djava.class.path=C:\IBM\WebSphere\LibertyIM\java\java_1.7_32\lib\jconsole.jar;C:\IBM\WebSphere\LibertyIM\java\java_1.7_32\lib\tools.jar;C:\IBM\WebSphere\wlp\clients\restConnector.jar
-J-Djavax.net.ssl.trustStore=C:/IBM/WebSphere/wlp/usr/servers/monitoringServer/resources/security/key.jks
-J-Djavax.net.ssl.trustStorePassword=password
-J-Djavax.net.ssl.trustStoreType=jks
-J-Duser.language=en
I can see ThreadPoolStats and ServletStats. For SessionStats or ConnectionPoolStats, your application actually needs to use the corresponding feature (e.g. a session, or a connection to a DB) for the MBean to be registered and visible in jconsole.
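If you prefer to check programmatically rather than with jconsole, here is a minimal sketch using the same REST connector; the host, port, and credentials are placeholders that should match your server, restConnector.jar must be on the classpath, and the same trust store system properties as in the jconsole example apply:

import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Lists all WebSphere:* MBeans over Liberty's REST connector. Run with
// -Djavax.net.ssl.trustStore=... etc. as in the jconsole example above.
public class ListLibertyMBeans {
    public static void main(String[] args) throws Exception {
        Map<String, Object> env = new HashMap<String, Object>();
        // Tell JMX to use Liberty's REST connector client (from restConnector.jar).
        env.put("jmx.remote.protocol.provider.pkgs", "com.ibm.ws.jmx.connector.client");
        env.put(JMXConnector.CREDENTIALS, new String[] {"admin", "adminpwd"});
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rest://localhost:9443/IBMJMXConnectorREST");
        JMXConnector jmxc = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            // With monitor-1.0 enabled, ThreadPoolStats and ServletStats should
            // appear here once the server has handled some work.
            for (ObjectName name : mbsc.queryNames(new ObjectName("WebSphere:*"), null)) {
                System.out.println(name);
            }
        } finally {
            jmxc.close();
        }
    }
}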

Cassandra: Channel has been closed

We have a small test cluster with 3 nodes on Amazon. Everything seems to work with cqlsh. But when I try to debug my app from my laptop (outside of Amazon, of course), I get 'Channel has been closed' errors, and it starts retrying forever. I know it is likely caused by the config in cassandra.yaml, as the Eclipse console shows some private IPs. I have tried many different ways but still get the same problem, so I'd appreciate any input. How do I get rid of the private IPs 10.251.x.x on the client?
Here is some context.
Versions:
[cqlsh 4.0.1 | Cassandra 2.0.4 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
cassandra-driver-core-2.0.0-rc1.jar
In cassandra.yaml:
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "54.203.x.x,54.203.x.y"
listen_address: 10.251.a.b
broadcast_address: 54.203.x.x
native_transport_port: 9042
endpoint_snitch: Ec2MultiRegionSnitch
In Eclipse console:
DEBUG [main] (ControlConnection.java:145) - [Control connection] Successfully connected to /54.203.x.x
DEBUG [Cassandra Java Driver worker-0] (Session.java:379) - Adding /54.203.x.x to list of queried hosts
DEBUG [Cassandra Java Driver worker-1] (Session.java:379) - Adding /10.251.a.c to list of queried hosts
DEBUG [Cassandra Java Driver worker-1] (Connection.java:103) - [/10.251.a.c-1] Error connecting to /10.251.a.c (connection timed out: /10.251.a.c:9042)
DEBUG [Cassandra Java Driver worker-1] (Session.java:390) - Error creating pool to /10.251.a.c ([/10.251.a.c] Cannot connect)
DEBUG [Cassandra Java Driver worker-1] (Cluster.java:1064) - /10.251.a.c is down, scheduling connection retries
DEBUG [New I/O worker #4] (Connection.java:194) - Defuncting connection to /10.251.a.c
com.datastax.driver.core.TransportException: [/10.251.a.b] Channel has been closed
at com.datastax.driver.core.Connection$Dispatcher.channelClosed(Connection.java:548)
...
It seems that your Java driver is using auto-discovery, calling "describe cluster" to get a list of all nodes in your cluster. In AWS, using Ec2Snitch, that yields private IPs, which obviously won't work from outside of AWS. There is a discussion on this topic here:
https://datastax-oss.atlassian.net/browse/JAVA-145
The last comment got my attention. It says you can do something with the driver's LoadBalancingPolicy to limit the nodes; hopefully this includes specifying specific IPs so that it does not auto-discover.
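To illustrate that approach: the driver has a WhiteListPolicy that wraps another policy and refuses to connect to any host outside a fixed list, so discovered private IPs are never used. A sketch under two assumptions: you are on a 2.0.x driver release that includes WhiteListPolicy (it may not be present in 2.0.0-rc1, so an upgrade may be needed), and the masked public addresses from the question stand in for the real ones:

import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.List;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.RoundRobinPolicy;
import com.datastax.driver.core.policies.WhiteListPolicy;

// Restrict the driver to the public addresses only, so it never opens pools
// to the private 10.251.x.x IPs returned by auto-discovery.
public class PublicIpsOnly {
    public static void main(String[] args) {
        List<InetSocketAddress> publicNodes = Arrays.asList(
                new InetSocketAddress("54.203.x.x", 9042),   // masked, as in the question
                new InetSocketAddress("54.203.x.y", 9042));
        Cluster cluster = Cluster.builder()
                .addContactPoint("54.203.x.x")
                .withLoadBalancingPolicy(new WhiteListPolicy(new RoundRobinPolicy(), publicNodes))
                .build();
        Session session = cluster.connect();
        System.out.println("Connected to " + cluster.getMetadata().getClusterName());
        cluster.close();
    }
}

The trade-off is that the client only ever talks to the listed coordinators, so this is a workaround for developing from outside the VPC rather than a production topology.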