How to check that clustering is working between two different JBoss servers - jboss

I configured a cluster between two different JBoss servers using the multicast method. Both servers connect when I start them. After about a day, errors like the following start to show up for the clustering in server.log:
05:28:17,447 ERROR [org.hornetq.core.server] (Thread-11905 (HornetQ-client-global-threads-377807954)) HQ224037:
cluster connection Failed to handle message: java.lang.IllegalStateException:
Cannot find binding for d7c1004f-b1a1-4160-8888-c38175ac45d599cf0dfe-5f30-11e4-bd7e-556a35fb9ec6 on
ClusterConnectionImpl#538608327[nodeUUID=930dee51-5f30-11e4-9695-ef52e2a27831, connector=TransportConfiguration(name=netty,
factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=5445&host=172-29-250-191, address=jms,
server=HornetQServerImpl::serverUUID=930dee51-5f30-11e4-9695-ef52e2a27831]
at org.hornetq.core.server.cluster.impl.ClusterConnectionImpl$MessageFlowRecordImpl.doConsumerCreat
05:28:17,411 ERROR [org.hornetq.core.server] (Thread-11439
(HornetQ-remoting-threads-HornetQServerImpl::serverUUID=99cf0dfe-5f30-11e4-bd7e-556a35fb9ec6-136247994-702467456))
HQ224016: Caught exception: HornetQException[errorType=QUEUE_EXISTS message=HQ119019:
Queue already exists 7a8b46d5-a038-4efd-900e-4c041c2c121f]
at org.hornetq.core.server.impl.HornetQServerImpl.createQueue(HornetQServerImpl.java:1811)
[hornetq-server-2.3.1.Final-redhat-1.jar:2.3.1.Final-redhat-1]
How can I verify that the cluster between the two servers is working? Are there any procedures or workarounds available?

Red Hat provides a McastReceiverTest Java client test utility; further information on its use can be found at https://access.redhat.com/solutions/123073
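For a quick manual check of multicast connectivity, you can also run the McastSenderTest and McastReceiverTest classes that ship with JGroups (a sketch; the jar path, multicast address, and port below are assumptions to be replaced with the values from your cluster configuration). Start the receiver on one server and the sender on the other, then type a message on the sender; it should appear on the receiver if multicast traffic flows between the two hosts:
java -cp jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 224.10.10.10 -port 5555
java -cp jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 224.10.10.10 -port 5555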

Related

kafka.com:9092/0: Connect to ipv4# failed: Connection refused

I use Kafka with one node (version 1.0.0).
Why is it that when I set listeners=PLAINTEXT://138.201.YYYY.YYY:9092 everything is OK and I can produce and consume fine both locally and from the client environment, but when I set listeners=PLAINTEXT://kafka1.dev:9092 the client cannot produce or consume any messages, and this error appears:
%3|1531686738.672|FAIL|rdkafka#producer-1| [thrd:kafka1.dev:9092/0]: kafka1.dev:9092/0: Failed to resolve 'kafka1.dev:9092': Name or service not known
%3|1531686738.672|ERROR|rdkafka#producer-1| [thrd:kafka1.dev:9092/0]: kafka1.dev:9092/0: Failed to resolve 'kafka1.dev:9092': Name or service not known
I have to add 138.201.YYY.YYY kafka1.dev to the /etc/hosts file (on the client) to fix the problem, after which producing works fine.
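A common way to address this, sketched below under the assumption of Kafka's standard broker settings: bind the listener on all interfaces and advertise the DNS name separately, so that only the advertised name has to be resolvable by clients (via DNS or /etc/hosts):
# server.properties (sketch; 0.0.0.0 binds all interfaces)
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka1.dev:9092
The broker hands advertised.listeners back to clients in metadata responses, and all subsequent connections go to that address, which is why kafka1.dev must resolve on the client machine.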

How to get ThreadPool, WebContainer, Session, and ConnectionPool MBeans in WebSphere v8.5.5 Liberty profile

I need to monitor WAS Liberty profiles. I made some configuration changes in server.xml:
<feature>restConnector-1.0</feature>
<feature>jsp-2.2</feature>
<feature>appSecurity-1.0</feature>
<feature>ssl-1.0</feature>
<feature>monitor-1.0</feature>
but when I connect over the REST port I only get the following WebSphere MBeans:
WebSphere
WebSphere:feature=restConnector,type=FileService,name=FileService
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=WLProject
WebSphere:feature=channelfw,type=endpoint,name=defaultHttpEndpoint-ssl
WebSphere:feature=restConnector,type=FileTransfer,name=FileTransfer
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=kohls
WebSphere:service=com.ibm.ws.kernel.filemonitor.FileNotificationMBean
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=worklightadmin
WebSphere:feature=channelfw,type=endpoint,name=defaultHttpEndpoint
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=worklightconsole
WebSphere:name=com.ibm.ws.jmx.mbeans.generatePluginConfig
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=_analytics
WebSphere:name=com.ibm.ws.config.serverSchemaGenerator
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=_MobileBrowserSimulator
WebSphere:service=com.ibm.websphere.application.ApplicationMBean,name=nsecom
I am not able to get the ThreadPool or WebContainer MBeans. Is there any configuration I have to do?
Maybe update to the latest Liberty version and test with jconsole. I'm running v8.5.5.3 and it works fine. I'm using the following command to start jconsole with the REST connector (all on one line; wrapped here for readability):
jconsole
-J-Djava.class.path=C:\IBM\WebSphere\LibertyIM\java\java_1.7_32\lib\jconsole.jar;C:\IBM\WebSphere\LibertyIM\java\java_1.7_32\lib\tools.jar;C:\IBM\WebSphere\wlp\clients\restConnector.jar
-J-Djavax.net.ssl.trustStore=C:/IBM/WebSphere/wlp/usr/servers/monitoringServer/resources/security/key.jks
-J-Djavax.net.ssl.trustStorePassword=password
-J-Djavax.net.ssl.trustStoreType=jks
-J-Duser.language=en
I can see ThreadPoolStats and ServletStats. For SessionStats or ConnectionPoolStats, your application actually has to use the corresponding feature (e.g. HTTP sessions or a database connection) before the MBean is registered and becomes visible in jconsole.
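If you prefer to check programmatically rather than through jconsole, below is a minimal standalone client sketch using the Liberty REST connector (host, port, credentials, and truststore values are assumptions; wlp/clients/restConnector.jar must be on the classpath):
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class LibertyMBeanCheck {
    public static void main(String[] args) throws Exception {
        // Trust the Liberty server's SSL certificate (path and password are assumptions)
        System.setProperty("javax.net.ssl.trustStore", "key.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "password");

        Map<String, Object> env = new HashMap<String, Object>();
        // Tell JMX to load the Liberty REST connector client from restConnector.jar
        env.put("jmx.remote.protocol.provider.pkgs", "com.ibm.ws.jmx.connector.client");
        env.put(JMXConnector.CREDENTIALS, new String[] { "admin", "adminpwd" });

        // Hostname and HTTPS port are assumptions; match them to your server
        JMXServiceURL url = new JMXServiceURL("service:jmx:rest://localhost:9443/IBMJMXConnectorREST");
        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // monitor-1.0 registers stats MBeans such as
            // WebSphere:type=ThreadPoolStats,name=Default Executor
            Set<ObjectName> names = mbsc.queryNames(new ObjectName("WebSphere:type=*Stats,*"), null);
            for (ObjectName name : names) {
                System.out.println(name);
            }
        } finally {
            connector.close();
        }
    }
}
If no *Stats MBeans show up, double-check that monitor-1.0 is still in the feature list and that the monitored component has actually been exercised.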

Cassandra Channel has been closed

We have a small test cluster with 3 nodes on Amazon. Everything seems to work with cqlsh. But when I try to debug my app from my laptop (outside of Amazon, of course), I get 'Channel has been closed' errors, and the driver starts retrying forever. I suspect it's caused by the config in cassandra.yaml, as some private IPs show up in my Eclipse console. I've tried many different ways but still get the same problem. I'd appreciate any input on this. How do I get rid of the private IPs 10.251.x.x on the client?
Here is some context.
Versions:
[cqlsh 4.0.1 | Cassandra 2.0.4 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
cassandra-driver-core-2.0.0-rc1.jar
In cassandra.yaml:
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "54.203.x.x,54.203.x.y"
listen_address: 10.251.a.b
broadcast_address: 54.203.x.x
native_transport_port: 9042
endpoint_snitch: Ec2MultiRegionSnitch
In Eclipse console:
DEBUG [main] (ControlConnection.java:145) - [Control connection] Successfully connected to /54.203.x.x
DEBUG [Cassandra Java Driver worker-0] (Session.java:379) - Adding /54.203.x.x to list of queried hosts
DEBUG [Cassandra Java Driver worker-1] (Session.java:379) - Adding /10.251.a.c to list of queried hosts
DEBUG [Cassandra Java Driver worker-1] (Connection.java:103) - [/10.251.a.c-1] Error connecting to /10.251.a.c (connection timed out: /10.251.a.c:9042)
DEBUG [Cassandra Java Driver worker-1] (Session.java:390) - Error creating pool to /10.251.a.c ([/10.251.a.c] Cannot connect)
DEBUG [Cassandra Java Driver worker-1] (Cluster.java:1064) - /10.251.a.c is down, scheduling connection retries
DEBUG [New I/O worker #4] (Connection.java:194) - Defuncting connection to /10.251.a.c
com.datastax.driver.core.TransportException: [/10.251.a.b] Channel has been closed
at com.datastax.driver.core.Connection$Dispatcher.channelClosed(Connection.java:548)
...
It seems that your Java driver is using auto-discovery, calling something like "describe cluster" to get a list of all nodes in your cluster. In AWS, using Ec2Snitch, that yields private IPs, which obviously won't work from outside of AWS. There is a discussion on this topic here:
https://datastax-oss.atlassian.net/browse/JAVA-145
The last comment got my attention. It says you can do something with the driver's LoadBalancingPolicy to limit the nodes. Hopefully this includes specifying the exact IPs so it does not auto-discover.
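For example, here is a minimal sketch using the driver's WhiteListPolicy (added in later 2.0.x driver releases, so it may require upgrading from 2.0.0-rc1; the addresses below are the anonymized public IPs from the question):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.RoundRobinPolicy;
import com.datastax.driver.core.policies.WhiteListPolicy;
import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.List;

public class PublicIpOnlyClient {
    public static void main(String[] args) {
        // Only these contact points will ever be connected to; substitute
        // your real public IPs for the anonymized ones from the question.
        List<InetSocketAddress> whitelist = Arrays.asList(
                new InetSocketAddress("54.203.x.x", 9042),
                new InetSocketAddress("54.203.x.y", 9042));

        Cluster cluster = Cluster.builder()
                .addContactPoint("54.203.x.x")
                .withLoadBalancingPolicy(new WhiteListPolicy(new RoundRobinPolicy(), whitelist))
                .build();
        Session session = cluster.connect();
        System.out.println("Connected to " + cluster.getMetadata().getClusterName());
        session.close();
        cluster.close();
    }
}
With the whitelist in place the driver may still discover the 10.251.x.x peers through gossip, but it will not try to open connection pools to them.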

Migrating JMS Queue from Hypersonic to MSSQL

I am currently trying to replace Hypersonic with MS-SQL 2008 R2 in JBoss AS 5.1.0GA.
I have followed the instructions in the JBoss Server Configuration Guide; however, the server fails to load with this error:
2013-09-26 17:06:04,479 WARN [org.jboss.resource.adapter.jms.inflow.JmsActivation] (WorkManager(2)-3) Failure in jms activation org.jboss.resource.adapter.jms.inflow.JmsActivationSpec#8bb1eb(ra=org.jboss.resource.adapter.jms.JmsResourceAdapter#c54851 destination=queue/iam/im/jms/queue/wpUtilQueue destinationType=javax.jms.Queue tx=true durable=false reconnect=10 provider=DefaultJMSProvider user=null maxMessages=1 minSession=1 maxSession=15 keepAlive=30000 useDLQ=true DLQHandler=org.jboss.resource.adapter.jms.inflow.dlq.GenericDLQHandler DLQJndiName=queue/DLQ DLQUser=null DLQMaxResent=10)
javax.naming.NameNotFoundException: DLQ not bound
(I left out the stack trace for brevity; it isn't important.)
I have checked, and DLQ is defined in destinations-service.xml.
I'm not sure where to proceed from here; every response I can find on Google seems to suggest that defining the queue in destinations-service.xml has solved the issue for almost everyone.
Any help would be appreciated.
It turns out that the instructions in the Configuration Guide aren't 100% complete. The issue was that a ChannelFactory was referenced in mssql-persistence-service.xml; however, this environment is not clustered, so no ChannelFactory objects were defined.
Removing the reference to the ChannelFactory was sufficient to resolve the issue.
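For anyone hitting the same thing, the reference in question looks roughly like this (a sketch from memory of the stock JBoss Messaging persistence descriptors; attribute names may differ slightly in your version):
<!-- in mssql-persistence-service.xml: remove or comment out for a non-clustered setup -->
<attribute name="Clustered">false</attribute>
<!--
<depends optional-attribute-name="ChannelFactoryName">jboss.jgroups:service=ChannelFactory</depends>
-->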

JBoss shows error with datasource on startup

On starting JBoss I am getting the following error:
--- MBEANS THAT ARE THE ROOT CAUSE OF THE PROBLEM ---
ObjectName: jboss.jca:service=DataSourceBinding,name=DefaultDS
State: NOTYETINSTALLED
Depends On Me:
jboss.ejb:service=EJBTimerService,persistencePolicy=database
jboss:service=KeyGeneratorFactory,type=HiLo
jboss.mq:service=StateManager
jboss.mq:service=PersistenceManager
And for all database connections in the servlet I get the following exception:
org.postgresql.util.PSQLException: FATAL: password authentication failed for user "poll"
It was working fine, and all of a sudden I started getting these errors. My password is correct. I even tried changing the password and trying again, but it showed the same exception. What is happening here?
The DefaultDS data source is what the name suggests: the default datasource. It ships with JBoss and is configured to use the Hypersonic (i.e. in-memory) database. JBoss uses the DefaultDS datasource to read/write internal queues, timed events, etc.
Check the file ../conf/standardjbosscmp-jdbc.xml to see what you've got configured for the DefaultDS datasource. It sounds like you've edited that file unintentionally. Unless you need to persist internal queues etc. across reboots, just leave it as shipped, using Hypersonic.
See the JBoss doc for more.
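As for the PSQLException itself: the credentials the servlet's connections use come from whichever *-ds.xml descriptor defines the PostgreSQL datasource. A minimal sketch of such a descriptor for comparison (the JNDI name, URL, and database name here are assumptions):
<datasources>
  <local-tx-datasource>
    <jndi-name>PollDS</jndi-name>
    <connection-url>jdbc:postgresql://localhost:5432/polldb</connection-url>
    <driver-class>org.postgresql.Driver</driver-class>
    <user-name>poll</user-name>
    <password>secret</password>
  </local-tx-datasource>
</datasources>
Confirm that the user-name/password pair in your actual descriptor is the same one that works when you connect with psql directly.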