Is there a list of port numbers that a JBoss instance uses?

I have to configure an instance of JBoss 5.1.0 to use a different port number (i.e. 8480 for HTTP). To do this I made the following changes to bindings-jboss-beans.xml:
<parameter>
  <set>
    <inject bean="PortsDefaultBindings"/>
    <inject bean="Ports01Bindings"/>
    <inject bean="Ports02Bindings"/>
    <inject bean="Ports03Bindings"/>
    <inject bean="Ports04Bindings"/>
  </set>
</parameter>
<bean name="Ports04Bindings" class="org.jboss.services.binding.impl.ServiceBindingSet">
<constructor>
<!-- The name of the set -->
<parameter>ports-04</parameter>
<!-- Default host name -->
<parameter>${jboss.bind.address}</parameter>
<!-- The port offset -->
<parameter>400</parameter>
<!-- Set of bindings to which the "offset by X" approach can't be applied -->
<parameter><null/></parameter>
</constructor>
</bean>
The change works, in that I can access my application using the URL http://localhost:8480/XYZApp (the default HTTP port 8080 plus the offset of 400).
Now, to be able to do the deployment, I have to give the infrastructure people all the port numbers that the application will use.
I know that we will be using 8480, but how would I know all the other port numbers that JBoss will use for this instance, given an offset of 400?

JBoss listens on many ports, one for each of its services, but you shouldn't need to open all of those ports if your applications don't use the corresponding services. For example, if no external application will use the Naming Service, you shouldn't need to open port 1099 (1499 in your case).
Anyway, if you need a list of all the ports JBoss listens on, check the bean named "StandardBindings" in the file conf/bindingservice.beans/META-INF/bindings-jboss-beans.xml. Those are the standard ports, so if you have defined an offset (400 in your case) you'll have to add it to each port to get the ports actually used by your JBoss instance.
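For orientation, each entry in that StandardBindings set is (roughly) a ServiceBindingMetadata bean whose port property holds the base port the offset is added to. A sketch of one entry, assuming the usual 5.1.0 layout (attribute details may differ slightly in your distribution):
<!-- Illustrative entry from the StandardBindings set: the Naming service's
     base port is 1099, so under ports-04 (offset 400) it binds to 1499. -->
<bean class="org.jboss.services.binding.ServiceBindingMetadata">
  <property name="serviceName">jboss:service=Naming</property>
  <property name="bindingName">Port</property>
  <property name="port">1099</property>
</bean>
Collecting the port property from every entry and adding 400 to each gives the complete list for your infrastructure people.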

Related

Unable to dynamically scale Ignite Pods in Kubernetes

We have been experimenting with the number of Ignite server pods to see the impact on performance.
One thing we have noticed is that if the number of Ignite server pods is increased after client nodes have established communication, the new pod just crash-loops with the error below.
If, however, the grid is destroyed (bringing down all client and server nodes) and then the desired number of server nodes is launched, there are no issues.
Even then, that procedure is not fully dependable for anything other than launching a single Ignite server.
From reading [this Stack Overflow post][1] and [this documentation][2], it looks like the issue may be that we are not launching the "Kubernetes service":
Ignite's KubernetesIPFinder requires users to configure and deploy a special Kubernetes service that maintains a list of the IP addresses of all the alive Ignite pods (nodes).
However, this is the only documentation I have found, and it notes that it is no longer current.
Is this information still relevant for Ignite 2.11.1?
If not, is there more recent documentation?
If this service is indeed needed, are there more concrete examples of and information on setting it up?
Error on new Server pod:
[21:37:55,793][SEVERE][main][IgniteKernal] Failed to start manager: GridManagerAdapter [enabled=true, name=o.a.i.i.managers.discovery.GridDiscoveryManager]
class org.apache.ignite.IgniteCheckedException: Failed to start SPI: TcpDiscoverySpi [addrRslvr=null, addressFilter=null, sockTimeout=5000, ackTimeout=5000, marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1#78422efb], reconCnt=10, reconDelay=2000, maxAckTimeout=600000, soLinger=0, forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null, skipAddrsRandomization=false]
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:281)
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:980)
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1985)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1331)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2141)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1787)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1172)
at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1066)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:952)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:851)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:721)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
at org.apache.ignite.Ignition.start(Ignition.java:353)
at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:367)
Caused by: class org.apache.ignite.spi.IgniteSpiException: Node with the same ID was found in node IDs history or existing node in topology has the same ID (fix configuration and restart local node) [localNode=TcpDiscoveryNode [id=000e84bb-f587-43a2-a662-c7c6147d2dde, consistentId=8751ef49-db25-4cf9-a38c-26e23a96a3e4, addrs=ArrayList [0:0:0:0:0:0:0:1%lo, 127.0.0.1, fd00:85:4001:5:f831:8cc:cd3:f863%eth0], sockAddrs=HashSet [nkw-mnomni-ignite-1-1-1.nkw-mnomni-ignite-1-1.680e5bbc-21b1-5d61-8dfa-6b27be10ede7.svc.cluster.local/fd00:85:4001:5:f831:8cc:cd3:f863:47500, /0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500, order=0, intOrder=0, lastExchangeTime=1676497065109, loc=true, ver=2.11.1#20211220-sha1:eae1147d, isClient=false], existingNode=000e84bb-f587-43a2-a662-c7c6147d2dde]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.duplicateIdError(TcpDiscoverySpi.java:2083)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1201)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:473)
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2207)
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:278)
... 13 more
Server DiscoverySpi Config:
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="myNameSpace"/>
<property name="serviceName" value="myServiceName"/>
</bean>
</property>
</bean>
</property>
Client DiscoverySpi Configs:
<bean id="discoverySpi" class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder" ref="ipFinder" />
</bean>
<bean id="ipFinder" class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="shared" value="false" />
<property name="addresses">
<list>
<value>myServiceName.myNameSpace:47500</value>
</list>
</property>
</bean>
Edit:
I have experimented more with this issue. As long as I do not deploy any clients (using the static TcpDiscoveryVmIpFinder above), I am able to scale the server pods up and down without any issue. However, as soon as a single client joins, I am no longer able to scale the server pods up.
I can see that the server pods have ports 47500 and 47100 open, so I am not sure what the issue is. Does the TcpDiscoveryKubernetesIpFinder still need the port to be specified in the client config?
I have tried changing my client config to use the TcpDiscoveryKubernetesIpFinder below, but I am getting a discovery timeout failure (see below).
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="680e5bbc-21b1-5d61-8dfa-6b27be10ede7"/>
<property name="serviceName" value="nkw-mnomni-ignite-1-1"/>
</bean>
</property>
</bean>
</property>
24-Feb-2023 14:15:02.450 WARNING [grid-timeout-worker-#22%igniteClientInstance%] org.apache.ignite.logger.java.JavaLogger.warning Thread dump at 2023/02/24 14:15:02 UTC
Thread [name="main", id=1, state=WAITING, blockCnt=78, waitCnt=3]
Lock [object=java.util.concurrent.CountDownLatch$Sync#45296dbd, ownerName=null, ownerId=-1]
at java.base#17.0.1/jdk.internal.misc.Unsafe.park(Native Method)
at java.base#17.0.1/java.util.concurrent.locks.LockSupport.park(LockSupport.java:211)
at java.base#17.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:715)
at java.base#17.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1047)
at java.base#17.0.1/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:230)
at o.a.i.spi.discovery.tcp.ClientImpl.spiStart(ClientImpl.java:324)
at o.a.i.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2207)
at o.a.i.i.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:278)
at o.a.i.i.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:980)
at o.a.i.i.IgniteKernal.startManager(IgniteKernal.java:1985)
at o.a.i.i.IgniteKernal.start(IgniteKernal.java:1331)
at o.a.i.i.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2141)
at o.a.i.i.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1787)
- locked o.a.i.i.IgnitionEx$IgniteNamedInstance#57ac9100
at o.a.i.i.IgnitionEx.start0(IgnitionEx.java:1172)
at o.a.i.i.IgnitionEx.startConfigurations(IgnitionEx.java:1066)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:952)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:851)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:721)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:690)
at o.a.i.Ignition.start(Ignition.java:353)
Edit 2:
I also spoke with an admin about opening client-side ports, in case that was the issue. He indicated that this should not be needed, as clients should be able to open ephemeral ports to communicate with the server nodes.
[1]: Ignite not discoverable in kubernetes cluster with TcpDiscoveryKubernetesIpFinder
[2]: https://apacheignite.readme.io/docs/kubernetes-ip-finder
It's hard to say precisely what the root cause is, but in general it's something related to the network or domain names resolution.
A public address is assigned to a node on startup and is exposed to other nodes for communication. Other nodes store that address and node ID in their history. Here is what is happening: a new node tries to enter the cluster, connects to a random node, and the join request is forwarded to the coordinator. The coordinator issues a TcpDiscoveryNodeAddedMessage that must circle the topology ring and be ACKed by all other nodes. If that process doesn't finish within the join timeout, the new node tries to re-enter the topology by starting the same joining process with a new ID. But other nodes see that the address is already registered to another node ID, causing the original duplicate node ID error.
Some recommendations:
1. If the issue is reproducible on a regular basis, collect more information by enabling DEBUG logging for the org.apache.ignite.spi.discovery package (discovery-related event tracing).
2. Take thread dumps from the affected nodes (e.g. with kill -3), check for discovery-related issues, and search for "lookupAllHostAddr".
3. Check that it's not a DNS issue and that all public addresses for your nodes, such as nkw-mnomni-ignite-1-1-1.nkw-mnomni-ignite-1-1.680e5bbc-21b1-5d61-8dfa-6b27be10ede7.svc.cluster.local, resolve instantly. I was asking about the provider because OpenShift appears to impose a hard limit on DNS resolution time.
4. Check GC pauses and safepoints.
To mask the underlying issue you can experiment with the Ignite configuration: increase the network timeout and join timeout, or reduce the failure detection timeout (see the sketch below). But I recommend finding the real root cause instead of treating the symptoms.
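As a minimal sketch of where those knobs live, assuming the same Spring-style XML as the configs above (the millisecond values are arbitrary placeholders, not recommendations):
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <!-- Maximum timeout for network operations, in milliseconds. -->
  <property name="networkTimeout" value="10000"/>
  <!-- How long before an unresponsive node is considered failed. -->
  <property name="failureDetectionTimeout" value="10000"/>
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <!-- How long a starting node keeps retrying to join the topology. -->
      <property name="joinTimeout" value="60000"/>
    </bean>
  </property>
</bean>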

JMS 2.0 durable subscriptions topic best practice in Kubernetes

We are creating a Mule application that will run in a container on Kubernetes, as part of a replica set, connecting over JMS 2.0 to Red Hat AMQ 7 (based on ActiveMQ Artemis).
The pom.xml has been configured to get the jms client:
<dependency>
  <groupId>org.apache.activemq</groupId>
  <artifactId>artemis-jms-client-all</artifactId>
  <version>2.10.1</version>
</dependency>
And the JMS config is configured as:
<jms:config name="JMS_Config" doc:name="JMS Config" doc:id="8621b07d-b203-463e-bbbe-76eb03741a61" >
<jms:generic-connection specification="JMS_2_0" username="${mq.user}" password="${mq.password}" clientId="${mq.client.id}">
<reconnection >
<reconnect-forever frequency="${mq.reconnection.frequency}" />
</reconnection>
<jms:connection-factory >
<jms:jndi-connection-factory connectionFactoryJndiName="ConnectionFactory" >
<jms:name-resolver-builder jndiInitialContextFactory="org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory" jndiProviderUrl="${mq.brokerurl}"/>
</jms:jndi-connection-factory>
</jms:connection-factory>
</jms:generic-connection>
<jms:consumer-config>
<jms:consumer-type >
<jms:topic-consumer shared="true" durable="true"/>
</jms:consumer-type>
</jms:consumer-config>
<jms:producer-config persistentDelivery="true"/>
</jms:config>
Then in the JMS listener component:
<jms:listener doc:name="EMS JMS Listener" doc:id="318b4f08-daf6-41f4-944b-3ec1420d5c12" config-ref="JMS_Config" destination="${mq.incoming.queue}" ackMode="AUTO">
  <jms:consumer-type>
    <jms:topic-consumer shared="true" subscriptionName="${mq.sub.name}" durable="true"/>
  </jms:consumer-type>
  <jms:response sendCorrelationId="ALWAYS" />
</jms:listener>
The variables are set as:
mq.client.id=client-id-135a9514-d4d5-4f52-b01c-f6ca34a76b40
mq.sub.name=my-sub
mq.incoming.queue=my-queue
Is this the best way to configure the client? We ask because we have seen errors in the logs, when deployed to K8s, regarding connections to the AMQ server:
javax.jms.InvalidClientIDException: client-id-135a9514-d4d5-4f52-b01c-f6ca34a76b40 was already set into another connection
In JMS 2.0 you don't have to set the client identifier when creating a shared durable subscription. However, if you do set the client identifier, it must be unique per connection. For whatever reason (e.g. due to Mule or perhaps K8s), multiple connections are being created, and since each connection uses the same client identifier you're receiving the javax.jms.InvalidClientIDException.
Remove clientId="${mq.client.id}" from your configuration and the javax.jms.InvalidClientIDException should go away.
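Concretely, the generic connection from the question would become something like the following sketch, unchanged except that the clientId attribute is dropped (the connection factory block is elided here):
<jms:generic-connection specification="JMS_2_0" username="${mq.user}" password="${mq.password}">
  <reconnection>
    <reconnect-forever frequency="${mq.reconnection.frequency}" />
  </reconnection>
  <!-- jms:connection-factory unchanged from the original config -->
</jms:generic-connection>
Shared durable subscriptions are identified by their subscriptionName (${mq.sub.name} here), so all replicas can attach to the same subscription without a client identifier.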

JBoss 5.0.1 GA RMI EJB 3.0

I have a cloud instance where I have installed the JBoss 5.0.1 GA server. The server instance has a public IP and a NATted IP address. I run JBoss using -b with the NATted IP address, and the web URL works fine. Now I am writing an external Java client to access an EJB3 bean deployed on the JBoss server, but I am getting an exception, and the solutions I found through Google have not helped my case. Below is the code my external client uses to access the EJB3 bean.
import java.util.Hashtable;
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

Properties properties = new Properties();
properties.load(stream);
// Set up the JNDI environment; the provider URL should have the
// form jnp://<host>:1099 (1099 is the default JNP naming port)
Hashtable<String, String> ht = new Hashtable<>();
ht.put(Context.INITIAL_CONTEXT_FACTORY,
        "org.jnp.interfaces.NamingContextFactory");
ht.put(Context.PROVIDER_URL, "jnp://<public-ip>:1099");
ht.put(Context.URL_PKG_PREFIXES,
        "org.jboss.naming:org.jnp.interfaces");
// Find and create a reference to the bean using JNDI
Context context = new InitialContext(ht);
When executing it on localhost it works fine, but when connecting remotely it throws the exception below. "javax.naming.CommunicationException [Root exception is java.rmi.ConnectException: Connection refused to host: ". Can anyone help me with this?
This is my connector file (ejb3-connectors-jboss-beans.xml):
<!-- EJB3 Connectors -->
<!-- JBoss Remoting Connector
     Note: Bean Name "org.jboss.ejb3.RemotingConnector" is used
     as a lookup value; alter only after checking java references
     to this key. -->
<property name="invokerLocator">
  <value-factory bean="ServiceBindingManager" method="getStringBinding">
    <parameter>jboss.remoting:type=Connector,name=DefaultEjb3Connector,handler=ejb3</parameter>
    <parameter><null /></parameter>
    <parameter>socket://${jboss.bind.address}:${port}</parameter>
    <parameter><null /></parameter>
    <parameter>3873</parameter>
  </value-factory>
</property>
<property name="serverConfiguration">
  <inject bean="ServerConfiguration" />
</property>
<!-- handler subsystem "AOP" maps to org.jboss.aspects.remoting.AOPRemotingInvocationHandler -->
Do a telnet from the remote machine to the IP and port you are trying to connect to on the JBoss server (e.g. telnet <public-ip> 3873). If that doesn't work, you have to solve the networking issues first. (Let me know, so I can guide you on how to do it.)
Also check your EJB3 binding settings and networking. The out-of-the-box config looks like this:
<mbean code="org.jboss.remoting.transport.Connector"
xmbean-dd="org/jboss/remoting/transport/Connector.xml"
name="jboss.remoting:type=Connector,name=DefaultEjb3Connector,handler=ejb3">
<depends>jboss.aop:service=AspectDeployer</depends>
<attribute name="InvokerLocator">socket://0.0.0.0:3873</attribute>
<attribute name="Configuration">
<handlers>
<handler subsystem="AOP">org.jboss.aspects.remoting.AOPRemotingInvocationHandler</handler>
</handlers>
</attribute>
</mbean>
Thanks!
#leo.
In my case, the following two things worked for me:
1. Running the JBoss server using run.bat -b **public ip (not the NAT IP)** -Djboss.bind.address=0.0.0.0
2. Adding an entry to my **local** machine's hosts file pointing the remote IP to the hostname, i.e. remoteip remotehostname.
I hope this helps others as well.

JBoss domain dynamic port offset configuration

I cannot get port offsets configured via a properties file to work in a managed domain setup when starting multiple server instances in a server group.
I have the following configuration in host.xml:
<servers>
  <server name="instance-one" group="main-server-group" auto-start="true">
    <socket-bindings port-offset="${jboss.instance1.offset}"/>
  </server>
  <server name="instance-two" group="main-server-group" auto-start="true">
    <!-- server-two avoids port conflicts by incrementing the ports in
         the default socket-group declared in the server-group -->
    <socket-bindings port-offset="${jboss.instance2.offset}"/>
  </server>
</servers>
The properties are configured via properties file (custom-domain.properties):
jboss.domain.base.dir=custom-domain
jboss.instance1.offset=10300
jboss.instance2.offset=20300
And I try to start up the domain using:
./domain.sh -P=custom-domain.properties
The problem is that jboss.instance1.offset and jboss.instance2.offset are not being applied to the corresponding properties in host.xml. If I hardcode values in host.xml, instance 1 and instance 2 do start up on the hardcoded port offsets.
Does custom property configuration not work in a domain setup?
Thanks for any help.

Turning off JBoss hot deploy service?

What is the correct way to turn off the JBoss hot deploy service?
This is a production environment.
Edit: JBoss version 5.1.0 GA
I think deleting the "deploy/hdscanner-jboss-beans.xml" file is the correct way to do this.
From JBoss in Action, ch. 3.1.5:
The deployer is configured via the deployers.xml and profile.xml descriptor files,
both found in the server/xxx/conf directory. This file defines several POJOs that
manage various deployment responsibilities. Table 3.3 identifies each of these POJOs
and highlights some of the more interesting configuration properties provided by
each one. [...]
And the relevant bits from the table:
Bean: HDScanner
Property: scanEnabled - Set this to true (default) to enable the hot deployer and to false to disable it. When set to false, applications are deployed only when the server is started or when the deploy method on the MainDeployer MBean is called.
Property: scanPeriod - The number of milliseconds the hot deployer waits between performing scans. The default is 5000 milliseconds (5 seconds). This value is ignored if scanEnabled is set to false.
Property: scanThreadName - You can use this to change the name of the thread from its default of HDScanner. The thread name enables you to identify the hot deployer thread if you should take a thread dump.
Alternatively, you can disable the scanner and expose it via JMX:
<bean name="HDScanner" class="org.jboss.system.server.profileservice.hotdeploy.HDScanner">
<annotation>#org.jboss.aop.microcontainer.aspects.jmx.JMX(name="jboss.deployment:service=HDScanner", exposedInterface=org.jboss.system.server.profileservice.hotdeploy.Scanner, registerDirectly=false)</annotation>
<start method="start" ignored="true" />
<property name="deployer"><inject bean="ProfileServiceDeployer"/></property>
<property name="profileService"><inject bean="ProfileService"/></property>
<property name="scanPeriod">5000</property>
<property name="scanThreadName">HDScanner</property>
<property name="scanEnabled">false</property>
</bean>
The scanEnabled property doesn't exist on JBoss 5.x; it is only available on JBoss 4.x's deployment scanner.
On JBoss 5.x, just delete hdscanner-jboss-beans.xml from the deploy directory and use the MainDeployer MBean to deploy your applications.
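With the hot deployer removed, a deployment can still be triggered through the MainDeployer MBean, for example with the twiddle command-line client shipped in JBoss's bin directory (the file URL below is a placeholder):
./twiddle.sh invoke "jboss.system:service=MainDeployer" deploy file:/path/to/myapp.war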