Hazelcast Kubernetes plugin is not defined - kubernetes

I'm trying to create an OrientDB (version 3.0.10) cluster using Kubernetes. OrientDB uses Hazelcast (version 3.10.4) in its distributed mode, which is why I had to set up the Hazelcast Kubernetes plugin. I used this repository as an example.
I have created all the necessary configuration files and defined the hazelcast-kubernetes dependency (version 1.3.1) in the build.sbt file for my project, and this dependency appeared on the classpath.
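For reference, the dependency can be declared in build.sbt roughly like this (a sketch; the com.hazelcast:hazelcast-kubernetes coordinates and the 1.3.1 version are taken from the question):
libraryDependencies += "com.hazelcast" % "hazelcast-kubernetes" % "1.3.1"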
However, the logs on each pod show this error message:
com.orientechnologies.orient.server.distributed.ODistributedStartupException: Error on starting distributed plugin
Caused by: com.hazelcast.config.properties.ValidationException: There is no discovery strategy factory to create 'DiscoveryStrategyConfig{properties={service-dns=orientdbservice2.default.svc.cluster.local, service-dns-timeout=10}, className='com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy', discoveryStrategyFactory=null}' Is it a typo in a strategy classname? Perhaps you forgot to include implementation on a classpath?
So it looks like the Hazelcast Kubernetes dependency is set up in the wrong way. How can this error be fixed?
Here is my hazelcast.xml config file:
<properties>
    <property name="hazelcast.discovery.enabled">true</property>
</properties>
<network>
    <join>
        <multicast enabled="false"/>
        <tcp-ip enabled="false"/>
        <discovery-strategies>
            <discovery-strategy enabled="true"
                                class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
                <properties>
                    <property name="service-dns">orientdbservice2.default.svc.cluster.local</property>
                    <property name="service-dns-timeout">10</property>
                </properties>
            </discovery-strategy>
        </discovery-strategies>
    </join>
</network>
For the cluster creation, I use a StatefulSet with the OrientDB image and mount all the config files as ConfigMaps. I am pretty sure the problem is not in my config files, since everything works fine with multicast instead of the DNS strategy. Also, there are no network problems in the Kubernetes cluster itself.

First of all, the OrientDB version should be updated to the latest (3.0.10), which embeds the newest Hazelcast version. Also, I mounted the hazelcast-kubernetes.jar dependency file directly into the /orientdb/lib folder, and it started to work properly. The HazelcastKubernetes plugin is discovered and the nodes join the cluster:
INFO [172.17.0.3]:5701 [orientdb-test-cluster-1] [3.10.4] Kubernetes Discovery activated resolver: DnsEndpointResolver [DiscoveryService]
INFO [172.17.0.3]:5701 [orientdb-test-cluster-1] [3.10.4] Activating Discovery SPI Joiner [Node]
INFO [172.17.0.3]:5701 [orientdb-test-cluster-1] [3.10.4] Starting 2 partition threads and 3 generic threads (1 dedicated for priority tasks) [OperationExecutorImpl]
Members {size:3, ver:3} [
Member [172.17.0.3]:5701 - hash
Member [172.17.0.4]:5701 - hash
Member [172.17.0.8]:5701 - hash
]
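For illustration, here is a minimal sketch of how the jar can be mounted into /orientdb/lib in the StatefulSet pod spec, assuming the jar has been packed into a ConfigMap (e.g. via binaryData) named hazelcast-kubernetes-jar; all names below are illustrative, not from the original setup:
containers:
  - name: orientdb
    image: orientdb:3.0.10
    volumeMounts:
      # mount the plugin jar as a single file next to OrientDB's bundled libraries
      - name: hazelcast-kubernetes-jar
        mountPath: /orientdb/lib/hazelcast-kubernetes-1.3.1.jar
        subPath: hazelcast-kubernetes-1.3.1.jar
volumes:
  - name: hazelcast-kubernetes-jar
    configMap:
      name: hazelcast-kubernetes-jar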

Related

JMS 2.0 durable subscriptions topic best practice in Kubernetes

We are creating a Mule application that will run in a container on Kubernetes, as part of a replica set, connecting to JMS 2.0 Red Hat AMQ 7 (based on ActiveMQ Artemis).
The pom.xml has been configured to pull in the JMS client:
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>artemis-jms-client-all</artifactId>
    <version>2.10.1</version>
</dependency>
And the JMS config is configured as:
<jms:config name="JMS_Config" doc:name="JMS Config" doc:id="8621b07d-b203-463e-bbbe-76eb03741a61">
    <jms:generic-connection specification="JMS_2_0" username="${mq.user}" password="${mq.password}" clientId="${mq.client.id}">
        <reconnection>
            <reconnect-forever frequency="${mq.reconnection.frequency}"/>
        </reconnection>
        <jms:connection-factory>
            <jms:jndi-connection-factory connectionFactoryJndiName="ConnectionFactory">
                <jms:name-resolver-builder jndiInitialContextFactory="org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory" jndiProviderUrl="${mq.brokerurl}"/>
            </jms:jndi-connection-factory>
        </jms:connection-factory>
    </jms:generic-connection>
    <jms:consumer-config>
        <jms:consumer-type>
            <jms:topic-consumer shared="true" durable="true"/>
        </jms:consumer-type>
    </jms:consumer-config>
    <jms:producer-config persistentDelivery="true"/>
</jms:config>
Then in the JMS listener component:
<jms:listener doc:name="EMS JMS Listener" doc:id="318b4f08-daf6-41f4-944b-3ec1420d5c12" config-ref="JMS_Config" destination="${mq.incoming.queue}" ackMode="AUTO">
    <jms:consumer-type>
        <jms:topic-consumer shared="true" subscriptionName="${mq.sub.name}" durable="true"/>
    </jms:consumer-type>
    <jms:response sendCorrelationId="ALWAYS"/>
</jms:listener>
The variables are set as:
mq.client.id=client-id-135a9514-d4d5-4f52-b01c-f6ca34a76b40
mq.sub.name=my-sub
mq.incoming.queue=my-queue
Is this the best way to configure the client? We ask because we have seen errors in the logs regarding connections to the AMQ server when deployed to K8s:
javax.jms.InvalidClientIDException: client-id-135a9514-d4d5-4f52-b01c-f6ca34a76b40 was already set into another connection
In JMS 2.0 you don't have to set the client identifier when creating a shared durable subscription. However, if you do set the client identifier then it must be unique per connection. For whatever reason (e.g. due to Mule or perhaps K8s) multiple connections are being created and since each connection is using the same client identifier you're receiving the javax.jms.InvalidClientIDException.
Remove clientId="${mq.client.id}" from your configuration and the javax.jms.InvalidClientIDException should go away.
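Based on the configuration shown above, the generic connection would then look like this (identical apart from dropping the clientId attribute):
<jms:generic-connection specification="JMS_2_0" username="${mq.user}" password="${mq.password}">
    <reconnection>
        <reconnect-forever frequency="${mq.reconnection.frequency}"/>
    </reconnection>
    <!-- connection factory and the rest of the config unchanged -->
</jms:generic-connection>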

How to append kubernetes container id/name to log file in log4j.xml

but the problem is that my application is running before my server (WebLogic) has started.
Your question seems to be in reference to Log4j 1. Log4j 1 has been end-of-life since August of 2015.
Upgrading to Log4j 2.13.0 would let you use the Kubernetes Lookup, which allows you to configure your log file name as:
<File name="MyFile" fileName="${path}/app_${k8s:containerName}.${date:MM-dd-yyyy}.log">
    <PatternLayout>
        <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
    </PatternLayout>
</File>
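For completeness, a minimal log4j2.xml skeleton around that appender could look like the following; the value of the path property is an assumption for illustration:
<Configuration status="WARN">
    <Properties>
        <!-- illustrative value; set this to wherever your logs should go -->
        <Property name="path">/var/log/myapp</Property>
    </Properties>
    <Appenders>
        <!-- the k8s: lookup requires Log4j's Kubernetes support (2.13.0+), as noted above -->
        <File name="MyFile" fileName="${path}/app_${k8s:containerName}.${date:MM-dd-yyyy}.log">
            <PatternLayout>
                <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
            </PatternLayout>
        </File>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="MyFile"/>
        </Root>
    </Loggers>
</Configuration>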

Failover Artemis URI in WIldfly 14

Is it possible to configure a failover URI for a native Artemis server in WildFly 14? I'd like to create a pooled connection factory with a URL like (tcp://localhost:61616,tcp://localhost:61617). As far as I know, WildFly creates the connection factory from a connector host and port. I use WildFly 14.0.1.Final and Artemis 2.6.3.
Update
<remote-connector name="remote-artemis-master" socket-binding="remote-artemis-master"/>
<remote-connector name="remote-artemis-slave" socket-binding="remote-artemis-slave"/>
<pooled-connection-factory
        ha="true"
        name="activemq-ra"
        connectors="remote-artemis-master remote-artemis-slave"
        entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory"
        transaction="xa"
        user="user"
        password="password"/>
Update
When the master node stops, the slave becomes live, but the Java EE app is unable to send/consume messages for 30 seconds. After this period everything works fine.
The syntax (tcp://localhost:61616,tcp://localhost:61617) is just a way to configure multiple initial connectors via a URL. You can accomplish the same thing in WildFly by defining multiple remote-connector elements and referencing them in the connectors attribute of the pooled-connection-factory.
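For reference, a sketch of the outbound socket bindings those remote-connector elements point at, assuming the two brokers from the question listen on localhost:61616 and localhost:61617:
<!-- inside the socket-binding-group; hosts and ports taken from the URL in the question -->
<outbound-socket-binding name="remote-artemis-master">
    <remote-destination host="localhost" port="61616"/>
</outbound-socket-binding>
<outbound-socket-binding name="remote-artemis-slave">
    <remote-destination host="localhost" port="61617"/>
</outbound-socket-binding>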

How to change JBoss eap 6.1 deployment folder

I'm trying to change the deployment folder for JBoss, without success.
Based on information I found on Google, I tried changing the standalone.xml configuration file. I added the following lines after the <extensions> node:
<system-properties>
    <property name="deploydir" value="/home/Artur"/>
</system-properties>
And I've changed <subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1"> as follows:
<subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1">
    <deployment-scanner path="deployments" relative-to="deploydir" scan-interval="10000"/>
</subsystem>
I have the path /home/Artur/deployments on my system.
But when I try to run the JBoss server I always get this error:
09:05:21,283 ERROR [org.jboss.as.controller.management-operation] (ServerService Thread Pool -- 2) JBAS014612: Operation ("add") failed - address: ([
("subsystem" => "deployment-scanner"),
("scanner" => "default")
]): java.lang.IllegalArgumentException: JBAS014847: Could not find a path called 'deployments'
I have tried configuring it with different paths on my system, and I have also checked the spelling in every case, but nothing helps. Does anyone have an idea how to properly configure the path for the deployment folder in JBoss? (version as in the title)
OK, I solved this issue. To change the deployment directory you need to specify the path to this directory in a <paths> block:
<paths>
    <path name="deploydir" path="/home/Artur"/>
</paths>
instead of
<system-properties>
    <property name="deploydir" value="/home/Artur"/>
</system-properties>
which I mentioned earlier. So in conclusion, we need to specify a <path> node in the standalone.xml configuration file and change <subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1"> to point to the newly created path (in this case "deploydir").
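Putting the two pieces together, the relevant parts of standalone.xml then look like this (paths copied from the question; adjust the directory to your own system):
<paths>
    <path name="deploydir" path="/home/Artur"/>
</paths>
<!-- ... other configuration ... -->
<subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1">
    <deployment-scanner path="deployments" relative-to="deploydir" scan-interval="10000"/>
</subsystem>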

Turning off JBoss hot deploy service?

What is the correct way to turn off the JBoss hot deploy service?
This is a production environment.
Edit: JBoss version 5.1.0 GA
I think deleting the "deploy/hdscanner-jboss-beans.xml" file is the correct way to do this.
From JBoss in Action, ch. 3.1.5:
The deployer is configured via the deployers.xml and profile.xml descriptor files,
both found in the server/xxx/conf directory. This file defines several POJOs that
manage various deployment responsibilities. Table 3.3 identifies each of these POJOs
and highlights some of the more interesting configuration properties provided by
each one. [...]
And the relevant bits from the table:
Bean: HDScanner
Property: scanEnabled - Set this to true (default) to enable the hot deployer and to false to disable it. When set to false, applications are deployed only when the server is started or when the deploy method on the MainDeployer MBean is called.
Property: scanPeriod - The number of milliseconds the hot deployer waits between performing scans. The default is 5000 milliseconds (5 seconds). This value is ignored if scanEnabled is set to false.
Property: scanThreadName - You can use this to change the name of the thread from its default of HDScanner. The thread name enables you to identify the hot deployer thread if you should take a thread dump.
You can disable it and expose it via JMX:
<bean name="HDScanner" class="org.jboss.system.server.profileservice.hotdeploy.HDScanner">
    <annotation>#org.jboss.aop.microcontainer.aspects.jmx.JMX(name="jboss.deployment:service=HDScanner", exposedInterface=org.jboss.system.server.profileservice.hotdeploy.Scanner, registerDirectly=false)</annotation>
    <start method="start" ignored="true"/>
    <property name="deployer"><inject bean="ProfileServiceDeployer"/></property>
    <property name="profileService"><inject bean="ProfileService"/></property>
    <property name="scanPeriod">5000</property>
    <property name="scanThreadName">HDScanner</property>
    <property name="scanEnabled">false</property>
</bean>
The scanEnabled property doesn't exist on JBoss 5.x; it only exists on the JBoss 4.x deployment scanner.
On JBoss 5.x just delete the hdscanner-jboss-beans.xml from the deploy directory and use the MainDeployer MBean to deploy your applications.